Selected scientific literature on the topic "Stockage virtuel"


Journal articles on the topic "Stockage virtuel"

1

Labbé-Pinlon, Blandine, Cindy Lombart, and Didier Louis. "Quelle technique promotionnelle privilégier pour défendre le pouvoir d’achat des consommateurs". Décisions Marketing N° 56, no. 4 (December 1, 2009): 23–35. http://dx.doi.org/10.3917/dm.056.0023.

Abstract:
Which promotional techniques should be favoured to defend consumers' purchasing power: immediate price reductions or virtual bundles? An experiment conducted in a laboratory store first showed that virtual bundles are perceived as less attractive than immediate price reductions. Moreover, they increase consumers' spending by prompting them to buy the promoted products in larger quantities to obtain the discount, but also, indirectly, other non-promoted products. Immediate price reductions are therefore more appropriate for defending consumers' purchasing power. Virtual bundles would nonetheless be relevant for products with high consumption rates and/or strong stockpiling habits and/or low unit face values. Future research should extend these first findings on opportunistic purchase motivations for virtual bundles.
2

Stanziani, Alessandro. "La puissance des céréales". Multitudes 92, no. 3 (September 21, 2023): 51–57. http://dx.doi.org/10.3917/mult.092.0051.

Abstract:
Every empire in history has been able to draw the same conclusion: power is not only a matter of weaponry but also of control over grain, which constituted, until the rise of meat consumption, the fuel of human labour. This article provides a welcome hinge between history and current events (the "shortage" and destabilization of world grain markets brought about by the war in Ukraine). It retraces the long history of grain markets around the world, the succession of policies aimed at controlling them (storage, price controls, supply blockades) or giving free rein to speculation, and the effects of these cycles on famines and wars. What characterizes the contemporary "new agrarian capitalism" of the years since 1980 is totally deregulated speculation on grain: virtual markets, futures, land grabbing, control of living organisms and seed monopolies, and the entry of new actors such as banks, pension funds and hedge funds. The author concludes with some political options for combating this hegemony.
3

Ponomareva, O. V., A. V. Ponomarev, and N. V. Smirnova. "Digitalization of Spectral Measurements in the Fourier Basis – Development Trends and Problems". Devices and Methods of Measurements 10, no. 3 (September 9, 2019): 271–80. http://dx.doi.org/10.21122/2220-9506-2019-10-3-271-280.

Abstract:
At the present stage of development of digital information technologies, intensive digitalization (computerization) of both direct and indirect measurement methods is taking place. The direct consequences of the computerization of measurements were, first, the emergence of a new class of measuring instruments, processor measuring instruments (PRIS); second, an increased level of formalization of measuring procedures; and third, the creation of a new, revolutionary technology, the Virtual Instrument (VI). The purpose of the article is to analyze the development of digital technologies for measuring spectra, identify the problems that arise, and formulate priority scientific and applied problems for their resolution. Theoretical and applied research has established that digital spectrum measurement technologies, alongside significant advantages, have certain disadvantages, which arise both from the nature of digital methods and from the analytical and stochastic properties of the bases of the transformations applied in measuring the spectra. An analysis of digital methods for measuring spectra showed that methods based on the Discrete Fourier Transform (DFT) retain their leading role and are effective in almost all subject areas. However, the digitalization of spectrum measurement based on the DFT also brings problems, associated above all with a number of negative effects that are absent from analog Fourier-based spectrum measurement: the periodization of the measured signal and its spectrum, the picket fence effect, and aliasing. As the analysis showed, existing methods of dealing with these negative effects solve the problems of introducing digital technologies only partially.
To combat the negative effects of digitalization of spectral measurements, a generalization of the DFT in the form of a parametric DFT (Parametric Discrete Fourier Transform, DFT-P) is proposed. The main scientific and applied problems of computerization of signal spectrum measurements are formulated: developing the theory of digital methods for measuring signal spectra, creating new and improving existing digital measurement methods, and developing the algorithmic, software and metrological support that PRIS and VI need to implement the DFT-P.
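The picket fence effect the abstract refers to is easy to reproduce numerically. A minimal NumPy sketch (illustrative only, not code from the article): a tone whose frequency falls exactly on a DFT bin shows its full spectral peak, while a tone midway between two bins has its energy split across neighbouring bins and appears attenuated.

```python
import numpy as np

N = 64
n = np.arange(N)

# On-bin tone: frequency is an exact multiple of the bin width (fs/N).
on_bin = np.sin(2 * np.pi * 8 * n / N)
# Off-bin tone: frequency falls midway between bins 8 and 9.
off_bin = np.sin(2 * np.pi * 8.5 * n / N)

peak_on = np.abs(np.fft.fft(on_bin)).max()    # full peak, N/2
peak_off = np.abs(np.fft.fft(off_bin)).max()  # visibly reduced peak

# Picket fence effect: the off-bin tone's energy leaks into
# neighbouring bins, so its apparent spectral peak is lower.
print(peak_on, peak_off)
```

Windowing or a parametric (shifted) DFT, as the article proposes, mitigates exactly this bin-placement sensitivity.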
4

Habsatou, Boukary, Mahamane Moctar Rabe, Haoua Bori, Yahaya Bako Zinniratou, and Soumaila Abdoulaye Almoustapha. "Pratiques paysannes de production de bulbes d’oignon (Allium cepa L.) sur le site maraicher de Kollo en zone périurbaine de Niamey". European Scientific Journal, ESJ 19, no. 36 (December 31, 2023): 51. http://dx.doi.org/10.19044/esj.2023.v19n36p51.

Abstract:
Onion is one of the main market-garden crops in all regions of Niger, grown for its bulbs, its leaves and its virtues. Onion cultivation is practiced mainly in the cold dry season under irrigation. Among the reasons limiting year-round production is the lack of varieties adapted to the different seasons. To understand farmers' practices linked to onion production in the urban commune of Kollo, a survey was conducted among 100 producers. The data were processed and analyzed with Excel and SPSS version 22. The results show that women are mainly responsible (62%) for this crop. More than half (54%) farm land acquired through inheritance or borrowing. The varieties cultivated are Galmi purple (97%), Safari (4%) and Galmi white (2%). In the urban commune of Kollo, producers do not run their own nurseries but buy seedlings from nursery growers. The average area farmed per household is 344 m², with an average yield of 8078.57 kg/ha, i.e. about 8 tonnes/ha. This yield was made possible by gravity irrigation, the combined use of NPK and urea chemical fertilizers together with manure, and natural pesticides to protect the crops against certain phytosanitary attacks. However, there are no improved storehouses for conserving onions: producers store their onions on sand in their compounds, which often leads to direct sales after harvest or to losses in storage. The main constraints identified by the survey are insufficient irrigation water, insufficient training in production techniques, particularly phytosanitary treatments, and storage.
To resolve these major problems affecting this economic activity, it would be necessary to introduce varieties adapted to the different seasons and to train producers in good onion production practices.
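The yield figures quoted in the abstract can be cross-checked with a few lines of arithmetic. A small illustrative Python sketch (not from the article; the variable names are my own):

```python
# Survey figures from the abstract.
AREA_M2 = 344             # average plot per household
YIELD_KG_PER_HA = 8078.57  # average yield

tonnes_per_ha = YIELD_KG_PER_HA / 1000                  # the "8 tonnes/ha" quoted
kg_per_household = YIELD_KG_PER_HA * AREA_M2 / 10_000   # 1 ha = 10,000 m²
print(tonnes_per_ha, kg_per_household)
```

At that yield, the average 344 m² plot produces roughly 278 kg of onions per household per season, which helps explain why on-farm storage losses weigh so heavily on producers.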
5

Dang, Qiyun, Di Wu, and Benoit Boulet. "EV Fleet as Virtual Battery Resource for Community Microgrid Energy Storage Planning / Le parc de véhicules électriques comme ressource de batterie virtuelle pour la planification du stockage d'énergie des micro-réseaux communautaires". IEEE Canadian Journal of Electrical and Computer Engineering, 2021, 1–12. http://dx.doi.org/10.1109/icjece.2021.3093520.

6

Kengne Tchendji, Vianney, and Yannick Florian Yankam. "Dynamic resource allocations in virtual networks through a knapsack problem's dynamic programming solution". Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées, Volume 31 - 2019 - CARI 2018 (January 9, 2020). http://dx.doi.org/10.46298/arima.5321.

Abstract:
The high-value Internet services that have been significantly enhanced by the integration of network virtualization and Software-Defined Networking (SDN) technology are increasingly attracting the attention of end users and major network companies (Google, Amazon, Yahoo, Cisco, ...). To cope with this high demand, providers of network resources (bandwidth, storage space, throughput, etc.) must implement the right models to capture users' needs while maximizing the profits reaped, or the number of satisfied requests, in the virtual networks. This need is all the more pressing because users' requests can be interdependent, imposing on the Infrastructure Provider (InP) constraints of mutual satisfaction of requests, which further complicates the problem. From this perspective, we show that the problem of allocating resources to users according to their requests is a knapsack problem and can therefore be solved efficiently by using the best dynamic programming solutions for the knapsack problem. Our contribution treats dynamic resource allocation as multiple instances of the knapsack problem over requests of variable value.
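The dynamic-programming solution the abstract appeals to is the classic 0/1 knapsack recurrence. A minimal sketch (illustrative only; the paper's actual multi-knapsack formulation over linked requests is richer):

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack via dynamic programming: dp[c] is the best
    total value achievable with capacity c using the items seen so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# E.g. requests valued (60, 100, 120) needing (10, 20, 30) units of a
# 50-unit resource: the optimum serves the last two requests.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

In the paper's setting, "items" would be user requests, "weights" their resource demands (bandwidth, storage, throughput), and "capacity" the provider's available resource; the DP runs in O(n * capacity) time.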
7

Uhl, Magali. "Images". Anthropen, 2020. http://dx.doi.org/10.17184/eac.anthropen.126.

Abstract:
Material image or mental image, emanation of human gesture or production of the mind, artefact or memory: the image covers a multiplicity of forms and meanings, from dreams to children's drawings, from cast shadows to celebrated paintings, from memory traces to digital images. Confronted with this tension between materiality and virtuality, anthropological knowledge about images, like the many fields of study associated with it (chiefly sociology, semiology and media studies), has proposed distinct ways of approaching images, leaving the imaginative dimension, however, to the sciences of the mind (psychoanalysis and cognitive science). Two paths have thus historically been traced for integrating the contributions of imaged representation, and they still divide the field of the anthropology of images today. On one side, the image as a support for discourse makes it possible to question the cultural, political and ideological potential of the image, which researchers uncover in corpora of representations (advertisements, press images, postcards, selfies, snapshots and other cultural illustrations); on the other, the image as a research instrument, in which the researchers' own visual production (photographic or filmic captures, tables, sketches, drawings and plans) is a way of accessing their field of study, sometimes with the ambition of visualizing their research results. To put it with Douglas Harper (1988), the image can just as well be an object of study on which the gaze is directed as an instrument of research that guides that gaze.
While anthropology seized upon the expressive and cognitive potential of the image from the beginning of the 20th century, with Margaret Mead's and Gregory Bateson's photographic work on the social uses of the body in Balinese culture (1942) and Robert Flaherty's documentary on the Inuit population of the Arctic (1922), it was the iconologist and anthropologist Aby Warburg who, in the same period, insisted most on the complementarity of these two forms of image (material and mental) and of these two research postures (on images and with images). His project of an Atlas (2012), composed of thousands of photographs and named after the Greek goddess of memory, Mnemosyne, aimed to retrace, through the collection and assembly of images, anthropological invariants crossing epochs and continents (from ancient Greece to the Florentine Renaissance, from the Roman Bacchantes to the Hopi people of Arizona), whose juxtaposition would allow, beyond discourse, a visual reading of cultural history. In this method of iconological interpretation, material representations and imagination are intimately linked in the process of anthropological knowledge: images are at once the source of knowledge and its vehicle. The term "pathos formula" that Warburg proposes thus expresses the ideal-typical character of the imaginary motif repeated from representation to representation across epochs, spaces and cultures. His proposal to place the detail at the heart of the research process, insisting on attention to discreet but persistent motifs, such as the form of a drapery or the line of a lightning bolt, would later rejoin one of the imperatives of the interpretive anthropology formulated by Geertz and the sustained effort of description its practice demands (1973).
It would also rejoin that of modal anthropology (Laplantine 2013), which argues for a minor mode of knowledge, like fireflies that shine at night only for those whose sensory acuity is placed at the service of such contemplation. Despite its radicality, the choice to consider images as the weave from which anthropology constitutes itself as knowledge is fascinating in that it inspires a number of current research projects. In a society saturated with the visual, in which screens partly forge our relation to the world, this original path finds a singular echo today in several major works. Georges Didi-Huberman (2011: 20) takes up the Warburgian challenge, that is, "the wager that images, assembled in a certain way, would offer us the possibility (or, better, the inexhaustible resource) of a rereading of the world". For his part, Hans Belting (2004: 18) insists that "we live with images and we understand the world in images. This living relation to the image continues, in a sense, in the external and concrete production of images that takes place in social space and that acts, with regard to mental representations, as both question and answer". The legacy of iconology has thus indeed crossed the 20th century to anchor itself in the contemporary and its new transversal themes. The themes of the experience and agency of images are among those redefining the contours of reflection on the subject, allowing it to nuance some of the epistemes that preceded it.
Defusing the epistemological division between a knowledge about images, which would testify to the representations conveyed by visual artefacts, and a knowledge with images, which would conceive of them as research partners, one now increasingly speaks of the agency of images, as much on the side of the cultural interpretation that can be made of them as of the work of the researchers who capture them and put them into narrative. Moreover, the fact that the image is "the reflection and expression of one's experience and practice in a given culture [and that in] this respect, discoursing on images is only another way of casting a glance at the images one has already interiorized" (Belting 2004: 74) also relativizes that other historical division between interior (mental) and exterior (representational) image, individual (idiosyncratic) and public (collective) image, rooted in a Western intellectual genealogy that is not universal but constructed and situated. The agency of images is then just as much the expression of their auratic force, that is, their capacity to present a sensible reality, to make a social situation, a cultural prism or a singular lived experience perceptible, as of their agency as artefacts in public space. In the first vein, the historian and artist Safia Belmenouar, by collecting and assembling hundreds of colonial postcards, the vernacular media in vogue from 1900 to 1930, shows in a book (2007) and an exhibition (2014) how feminine stereotypes reducing the women of colonized countries to exotic attributes of their culture are socially constructed, while questioning the gaze we now cast on these images of anonymous, unclothed women assigned the status of "natives". The performance of the image is here that of the opening of eyes induced by its mere presentation, in numbers and in order.
In the second vein, the ethnologist Cécile Boëx (2013) does not hesitate, in her contributions on the Syrian revolt, to show how people struggling against power use visual representations in support of their cause, appropriating the new image technologies and the virtual space of the Internet. Images are here understood as actors in the conflicts in which they take part. The experience of images, as Belting (2004) and Laplantine (2013) show, is thus also one we undergo as bodies. This somatic immersion is at the heart, for example, of the experimental film Leviathan (2012) by the anthropologists Lucien Castaing-Taylor and Véréna Paravel. Built from the images of a dozen GoPro cameras fixed to the bodies of deep-sea fishermen off the American coast of Cape Cod, the immersive documentary conveys the harsh experience of this ancestral trade. In the era of amateur photographic and filmic practices (selfies, cell-phone footage and editing) and of the explosion of digital environments for sharing (Instagram, Snapchat) and data storage (big data), the immersive potential of the image now passes through reinvented everyday practices in which capture and diffusion have become the business of all bodies, whatever their position in the social and cultural field. Criticized for their ambiguity and their capacity for falsification and manipulation, images also have the potential to challenge hegemonic norms of gender, class and ethnicity. Taken, shared and diffused ever more massively, they invite critical activity in order to conceive visuality in the diversity of its forms and its contemporary stakes (Mirzoeff 2016).
If, in a world traversed through and through by images, the anthropology of the image is today a research field in its own right, whose prerequisite is a keener attention to the sensible and sensory experience that singularizes it (Uhl 2015), iconology as a specific anthropological method responding to new fields and new alterities still has ground to cover and concepts to invent if it is not to remain confined to the instrumental register to which it is too often reduced. To think the image in the current context of its proliferation and the potential disorientation this induces, the attempt at a radical iconology, as initiated by Warburg, remains plainly topical.
8

Potter, Emily. "Calculating Interests: Climate Change and the Politics of Life". M/C Journal 12, no. 4 (October 13, 2009). http://dx.doi.org/10.5204/mcj.182.

Abstract:
There is a moment in Al Gore’s 2006 documentary An Inconvenient Truth devised to expose the sheer audacity of fossil fuel lobby groups in the United States. In their attempts to address significant scientific consensus and growing public concern over climate change, these groups are resorting to what Gore’s film suggests are grotesque distortions of fact. A particular example highlighted in the film is the Competitive Enterprise Institute’s (CPE—a lobby group funded by ExxonMobil) “pro” energy industry advertisement: “Carbon dioxide”, the ad states. “They call it pollution, we call it life.” While on the one hand employing rhetoric against the “inconvenient truth” that carbon dioxide emissions are ratcheting up the Earth’s temperature, these advertisements also pose a question – though perhaps unintended – that is worth addressing. Where does life reside? This is not an issue of essentialism, but relates to the claims, materials and technologies through which life as a political object emerges. The danger of entertaining the vested interests of polluting industry in a discussion of climate change and its biopolitics is countered by an imperative to acknowledge the ways in which multiple positions in the climate change debate invoke and appeal to ‘life’ as the bottom line, or inviolable interest, of their political, social or economic work. In doing so, other questions come to the fore that a politics of climate change framed in terms of moral positions or competing values will tend to overlook. These questions concern the manifold practices of life that constitute the contemporary terrain of the political, and the actors and instruments put in this employ. Who speaks for life? And who or what produces it? Climate change as a matter of concern (Latour) has gathered and generated a host of experts, communities, narratives and technical devices all invested in the administration of life. 
It is, as Malcom Bull argues, “the paradigmatic issue of the new politics,” a politics which “draws people towards the public realm and makes life itself subject to the caprices of state and market” (2). This paper seeks to highlight the politics of life that have emerged around climate change as a public issue. It will argue that these politics appear in incremental and multiple ways that situate an array of actors and interests as active in both contesting and generating the terms of life: what life is and how we come to know it. This way of thinking about climate change debates opposes a prevalent moralistic framework that reads the practices and discourses of debate in terms of oppositional positions alone. While sympathies may flow in varying directions, especially when it comes to such a highly charged and massively consequential issue as climate change, there is little insight to be had from charging the CPE (for example) with manipulating consumers, or misrepresenting well-known facts. Where new and more productive understandings open up is in relation to the fields through which these gathering actors play out their claims to the project of life. These fields, from the state, to the corporation, to the domestic sphere, reveal a complex network of strategies and devices that seek to secure life in constantly renovated terms.
Life Politics
Biopolitical scholarship in the wake of Foucault has challenged life as a pre-given uncritical category, and sought to highlight the means through which it is put under question and constituted through varying and composing assemblages of practitioners and practices. Such work regards the project of human well-being as highly complex and technical, and has undertaken to document this empirically through close attention to the everyday ecologies in which humans are enmeshed.
This is a political and theoretical project in itself, situating political processes in micro, as well as macro, registers, including daily life as a site of (self) management and governance. Rabinow and Rose refer to biopolitical circuits that draw together and inter-relate the multiple sites and scales operative in the administration of life. These involve not just technologies, rationalities and regimes of authority and control, but also politics “from below” in the form of rights claims and community formation and agitation (198). Active in these circuits, too, are corporate and non-state interests for whom the pursuit of maximising life’s qualities and capabilities has become a concern through which “market relations and shareholder value” are negotiated (Rabinow and Rose 211). As many biopolitical scholars argue, biopower—the strategies through which biopolitics are enacted—is characteristic of the “disciplinary neo-liberalism” that has come to define the modern state, and through which the conduct of conduct is practiced (Di Muzio 305). Foucault’s concept of governmentality describes the devolution of state-based disciplinarity and sovereignty to a host of non-state actors, rationalities and strategies of governing, including the self-managing subject, not in opposition to the state, but contributing to its form. According to Bratich, Packer and McCarthy, everyday life is thus “saturated with governmental techniques” (18) in which we are all enrolled. Unlike regimes of biopolitics identified with what Agamben terms “thanopolitics”—the exercise of biopower “which ultimately rests on the power of some to threaten the death of others” (Rabinow and Rose 198), such as the Nazi’s National Socialism and other eugenic campaigns—governmental arts in the service of “vitalist” biopolitics (Rose 1) are increasingly diffused amongst all those with an “interest” in sustaining life, from organisations to individuals. 
The integration of techniques of self-governance which ask the individual to work on themselves and their own dispositions with State functions has broadened the base by which life is governed, and foregrounded an unsettled terrain of life claims. Rose argues that medical science is at the forefront of these contemporary biopolitics, and to this effect “has […] been fully engaged in the ethical questions of how we should live—of what kinds of creatures we are, of the kinds of obligations that we have to ourselves and to others, of the kinds of techniques we can and should use to improve ourselves” (20). Asking individuals to self-identify through their medical histories and bodily specificities, medical cultures are also shaping new political arrangements, as communities connected by shared genetics or physical conditions, for instance, emerge, evolve and agitate according to the latest medical knowledge. Yet it is not just medicine that provokes ethical work and new political forms. The environment is a key site for life politics that entails a multi-faceted discourse of obligations and entitlements, across fields and scales of engagement.
Calculating Environments
In line with neo-liberal logic, environmental discourse concerned with ameliorating climate change has increasingly focused upon the individual as an agent of self-monitoring, to both facilitate government agendas at a distance, and to “self-fashion” in the mode of the autonomous subject, securing against external risks (Ong 501). Climate change is commonly represented as such a risk, to both human and non-human life. A recent letter published by the Royal Australasian College of Physicians in two leading British medical journals named climate change as the “biggest global health threat of the twenty-first century” (Morton).
As I have argued elsewhere (Potter), security is central to dominant cultures of environmental governance in the West; these cultures tie sustainability goals to various and interrelated regimes of monitoring which attach to concepts of what Clark and Stevenson call “the good ecological citizen” (238). Citizenship is thus practiced through strategies of governmentality which call on individuals to invest not just in their own well-being, but in the broader project of life. Calculation is a primary technique through which modern environmental governance is enacted; calculative strategies are seen to mediate risk, according to Foucault, and consequently to “assure living” (Elden 575). Rationalised schemes for self-monitoring are proliferating under climate change and the project of environmentalism more broadly, something which critics of neo-liberalism have identified as symptomatic of the privatisation of politics that liberal governmentality has fostered. As we have seen in Australia, an evolving policy emphasis on individual practices and the domestic sphere as crucial sites of environmental action—for instance, the introduction of domestic water restrictions, and the phasing out of energy-inefficient light bulbs in the home—provides a leading discourse of ethico-political responsibility. The rise of carbon dioxide counting is symptomatic of this culture, and indicates the distributed fields of life management in contemporary governmentality. Carbon dioxide, as the CPE is keen to point out, is crucial to life, but it is also—in too large an amount—a force of destruction. Its management, in vitalist terms, is thus established as an effort to protect life in the face of death. 
The concept of “carbon footprinting” has been promoted by governments, NGOs, industry and individuals as a way of securing this goal, and a host of calculative techniques and strategies are employed to this end, across a spectrum of activities and contexts all framed in the interests of life. The footprinting measure seeks to secure living via self-policed limits, which also—in classic biopolitical form—shift previously private practices into a public realm of count-ability and accountability. The carbon footprint, like its associates the ecological footprint and the water footprint, has developed as a multi-faceted tool of citizenship beyond the traditional boundaries of the state. Suggesting an ecological conception of territory and of our relationships and responsibilities to this, the footprint, as a measure of resource use and emissions relative to the Earth’s capacities to absorb these, calculates and visualises the “specific qualities” (Elden 575) that, in a spatialised understanding of security, constitute and define this territory. The carbon footprint’s relatively simple remit of measuring carbon emissions per unit of assessment—be that the individual, the corporation, or the nation—belies the ways in which life is formatted and produced through its calculations. A tangled set of devices, practices and discourses is employed to make carbon and thus life calculable and manageable.

Treading Lightly

The old environmental adage to “tread lightly upon the Earth” has been literalised in the metaphor of the footprint, which attempts both to symbolise environmental practice and to directly translate data in order to meaningfully communicate necessary boundaries for our living. 
The World Wildlife Fund’s Living Planet Report 2008 exemplifies the growing popularity of the footprint as a political and poetic hook: speaking in terms of our “ecological overshoot,” and the move from “ecological credit to ecological deficit”, the report urges an attendance to our “global footprint” which “now exceeds the world’s capacity to regenerate by about 30 per cent” (1). Angela Crombie’s A Lighter Footprint, an instruction manual for sustainable living, is one of a host of media through which individuals are educated in modes of footprint calculation and management. She presents a range of techniques, including carbon offsetting, shifting to sustainable modes of transport, eating and buying differently, recycling and conserving water, to mediate our carbon dioxide output, and to “show […] politicians how easy it is” (13). Governments, however, need no persuading from citizens that carbon calculation is an exercise to be harnessed. As governments around the world move (slowly) to address climate change, policies that instrumentalise carbon dioxide emission and reduction via an auditing of credits and deficits have come to the fore—for example, the European Union Emissions Trading Scheme and the Chicago Climate Exchange. In Australia, we have the currently-under-debate Carbon Pollution Reduction Scheme, a part of which is the Australian Emissions Trading Scheme (AETS) that will introduce a system of “carbon credits” and trading in a market-based model of supply and demand. This initiative will put a price on carbon dioxide emissions, and cap the amount of emissions any one polluter can produce without purchasing further credits. In readiness for the scheme, business initiatives are forming to take advantage of this new carbon market. 
Industries in carbon auditing and off-setting services are consolidating; hectares of trees, already active in the carbon sequestration market, are being cultivated as “carbon sinks” and key sites of compliance for polluters under the AETS. Governments are also planning to turn their tracts of forested public land into carbon credits worth billions of dollars (Arup 7). The attachment of emission measures to goods and services requires a range of calculative experts, and the implementation of new marketing and branding strategies, aimed at conveying the carbon “health” of a product. The introduction of “food mile” labelling (the amount of carbon dioxide emitted in the transportation of the food from source to consumer) in certain supermarkets in the United Kingdom is an example of this. Carbon risk analysis and management programs are being introduced across businesses in readiness for the forthcoming “carbon economy”. As one flyer selling “a suite of carbon related services” explains, “early action will give you the edge in understanding and mitigating the risks, and puts you in a prime position to capitalise on the rewards” (MGI Business Solutions Worldwide). In addition, lobby groups are working to ensure exclusions from or the free allocation of permits within the proposed AETS, with degrees of compulsion applied to different industries—the Federal Government, for instance, will provide a $3.9 billion compensation package for the electric power sector when the AETS commences, to enable their “adjustment” to this carbon regime.

Performing Life

Noortje Mares provides a further means of thinking through the politics of life in the context of climate change by complicating the distinction between public and private interest. 
Her study of “green living experiments” describes the rise of carbon calculation in the home in recent years, and the implementation of technologies such as the smart electricity meter that provides a constantly updating display of data relating to amounts and cost of energy consumed and the carbon dioxide emitted in the routines of domestic life. Her research tracks the entry of these personal calculative regimes into public life via internet forums such as blogs, where individuals notate or discuss their experiences of pursuing low-carbon lifestyles. On the one hand, these calculative practices of living and their public representation can be read as evidencing the pervasive neo-liberal governmentality at work in contemporary environmental practice, where individuals are encouraged to scrupulously monitor their domestic cultures. The rise of auditing as a technology of self, and more broadly as a technique of public accountability, has come under fire for its “immunity-granting role” (Charkiewicz 79), where internal audits become substituted for external compliance and regulation. Mares challenges this reading, however, by demonstrating the ways in which green living experiments “transform everyday material practices into practices of public involvement” (118) that don’t resolve or pin down relations between the individual, the non-human environment, and the social, or reveal a mappable flow of actions and effects between the public realm and the home. The empirical modes of publicity that these individuals employ, “the careful recording of measurements and the reliable descriptions of sensory observation, so as to enable ‘virtual witnessing’ by wider audiences”, open up to much more complex understandings than one of calculative self-discipline at work. 
As “instrument[s] of public involvement” (120), the experiments that Mares describes locate the politics of life in the embodied socio-material entanglements of the domestic sphere, in arrangements of humans and non-human technologies. Such arrangements, she suggests, are ontologically productive in that they introduce “not only new knowledge, but also new entities […] to society” (119), and as such these experiments and the modes of calculation they employ become active in the composition of reality. Recent work in economic sociology and cultural studies has similarly contended that calculation, far from either a naturalised or thoroughly abstract process, relies upon a host of devices, relations, and techniques: that is, as Gay Hawkins explains, calculative processes “have to be enacted” (108). Environmental governmentality in the service of securing life is a networked practice that draws in a host of actors, not a top-down imposition. The institution of carbon economies and carbon emissions as a new register of public accountability brings alternative ways to calculate the world into being, and consequently re-calibrates life as it emerges from these heterogeneous arrangements.

All That Gathers

Latour writes that we come to know a matter of concern by all the things that gather around it (Latour). This includes the human, as well as the non-human actors, policies, practices and technologies that are put to work in the making of our realities. Climate change is routinely represented as a threat to life, with predicted (and occurring) species extinction, growing numbers of climate change refugees, dispossessed from uninhabitable lands, and the rise of diseases and extreme weather scenarios that put human life in peril. 
There is no doubt, of course, that climate change does mean death for some: indeed, there are thanopolitical overtones in inequitable relations between the fall-out of impacts from major polluting nations on poorer countries, or those much more susceptible to rising sea levels. Biosocial equity, as Bull points out, is a “matter of being equally alive and equally dead” (2). Yet in the biopolitical project of assuring living, life is burgeoning around the problem of climate change. The critique of neo-liberalism as a blanketing system that subjects all aspects of life to market logic, and in which the cynical techniques of industry seek to appropriate ethico-political stances for their own material ends, is an insufficient response to what is actually unfolding in the messy terrain of climate change and its biopolitics. What this paper has attempted to show is that there is no particular purchase on life that can be had by any one actor who gathers around this concern. Varying interests, ambitions, and intentions, without moral hierarchy, stake their claim in life as a constantly constituting site in which they participate, and from this perspective, the ways in which we understand life to be both produced and managed expand. This is to refuse either an opposition or a conflation between the market and nature, or the market and life. It is also to argue that we cannot essentialise human-ness in the climate change debate. For while human relations with animals, plants and weathers may make us what we are, so too do our relations with (in a much less romantic view) non-human things, technologies, schemes, and even markets—from carbon auditing services, to the label on a tin on the supermarket shelf. As these intersect and entangle, the project of life, in the new politics of climate change, is far from straightforward.

References

An Inconvenient Truth. Dir. Davis Guggenheim. Village Roadshow, 2006.
Arup, Tom. “Victoria Makes Enormous Carbon Stocktake in Bid for Offset Billions.” The Age 24 Sep. 2009: 7.
Bratich, Jack Z., Jeremy Packer, and Cameron McCarthy. “Governing the Present.” Foucault, Cultural Studies and Governmentality. Ed. Bratich, Packer and McCarthy. Albany: State University of New York Press, 2003. 3-21.
Bull, Malcolm. “Globalization and Biopolitics.” New Left Review 45 (2007). 12 May 2009 ‹http://newleftreview.org/?page=article&view=2675›.
Charkiewicz, Ewa. “Corporations, the UN and Neo-liberal Bio-politics.” Development 48.1 (2005): 75-83.
Clark, Nigel, and Nick Stevenson. “Care in a Time of Catastrophe: Citizenship, Community and the Ecological Imagination.” Journal of Human Rights 2.2 (2003): 235-246.
Crombie, Angela. A Lighter Footprint: A Practical Guide to Minimising Your Impact on the Planet. Carlton North, Vic.: Scribe, 2007.
Di Muzio, Tim. “Governing Global Slums: The Biopolitics of Target 11.” Global Governance 14.3 (2008): 305-326.
Elden, Stuart. “Governmentality, Calculation and Territory.” Environment and Planning D: Society and Space 25 (2007): 562-580.
Hawkins, Gay. The Ethics of Waste: How We Relate to Rubbish. Sydney: University of New South Wales Press, 2006.
Latour, Bruno. “Why Has Critique Run Out of Steam?: From Matters of Fact to Matters of Concern.” Critical Inquiry 30.2 (2004): 225-248.
Mares, Noortje. “Testing Powers of Engagement: Green Living Experiments, the Ontological Turn and the Undoability of Involvement.” European Journal of Social Theory 12.1 (2009): 117-133.
MGI Business Solutions Worldwide. “Carbon News.” Adelaide. 2 Aug. 2009.
Ong, Aihwa. “Mutations in Citizenship.” Theory, Culture and Society 23.2-3 (2006): 499-505.
Potter, Emily. “Footprints in the Mallee: Climate Change, Sustaining Communities, and the Nature of Place.” Landscapes and Learning: Place Studies in a Global World. Ed. Margaret Somerville, Kerith Power and Phoenix de Carteret. Sense Publishers. Forthcoming.
Rabinow, Paul, and Nikolas Rose. “Biopower Today.” Biosocieties 1 (2006): 195-217.
Rose, Nikolas. “The Politics of Life Itself.” Theory, Culture and Society 18.6 (2001): 1-30.
World Wildlife Fund. Living Planet Report 2008. Switzerland, 2008.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Dominey-Howes, Dale. "Tsunami Waves of Destruction: The Creation of the “New Australian Catastrophe”". M/C Journal 16, no. 1 (18 March 2013). http://dx.doi.org/10.5204/mcj.594.

Full text of the source
Abstract:
Introduction

The aim of this paper is to examine whether recent catastrophic tsunamis have driven a cultural shift in the awareness of Australians to the danger associated with this natural hazard and whether the media have contributed to the emergence of “tsunami” as a new Australian catastrophe. Prior to the devastating 2004 Indian Ocean Tsunami disaster (2004 IOT), tsunamis as a type of hazard capable of generating widespread catastrophe were not well known by the general public and had barely registered within the wider scientific community. As a university based lecturer who specialises in natural disasters, I always started my public talks or student lectures with an attempt at a detailed description of what a tsunami is. With little high quality visual and media imagery to use, this was not easy. The Australian geologist Ted Bryant was right when he named his 2001 book Tsunami: The Underrated Hazard. That changed on 26 December 2004 when the third largest earthquake ever recorded occurred northwest of Sumatra, Indonesia, triggering the most catastrophic tsunami ever experienced. The 2004 IOT claimed at least 220,000 lives—probably more—injured tens of thousands, destroyed widespread coastal infrastructure and left millions homeless. Beyond the catastrophic impacts, this tsunami was conspicuous because, for the first time, such a devastating tsunami was widely captured on video and other forms of moving and still imagery. This occurred for two reasons. Firstly, the tsunami took place during daylight hours in good weather conditions—factors conducive to capturing high quality visual images. Secondly, many people—both local residents and westerners who were on beachside holidays and at the coast at multiple locations impacted by the tsunami—were able to capture images of the tsunami on their cameras, videos, and smart phones. 
The extensive media coverage—including horrifying television, video, and still imagery that raced around the globe in the hours and days after the tsunami, filling our television screens, homes, and lives regardless of where we lived—had a dramatic effect. This single event drove a quantum shift in the wider cultural awareness of this type of catastrophe and acted as a catalyst for improved individual and societal understanding of the nature and effects of disaster landscapes. Since this event, there have been several notable tsunamis, including the March 2011 Japan catastrophe. Once again, this event occurred during daylight hours and was widely captured by multiple forms of media. These events have resulted in a cascade of media coverage across television, radio, movie, and documentary channels, in the print media, online, and in the popular press and on social media—very little of which was available prior to 2004. Much of this has been documentary and informative in style, but there have also been numerous television dramas and movies. For example, an episode of the popular American television series CSI Miami entitled Crime Wave (Season 3, Episode 7) featured a tsunami, triggered by a volcanic eruption in the Atlantic and impacting Miami, as the backdrop to a standard crime-filled episode ("CSI," IMDb; Wikipedia). In 2010, Warner Bros Studios released the supernatural drama fantasy film Hereafter directed by Clint Eastwood. In the movie, a television journalist survives a near-death experience during the 2004 IOT in what might be the most dramatic, and probably accurate, cinematic portrayal of a tsunami ("Hereafter," IMDb; Wikipedia). Thus, these creative and entertaining forms of media, influenced by the catastrophic nature of tsunamis, are impetuses for creativity that also contribute to a transformation of cultural knowledge of catastrophe. 
The transformative potential of creative media, together with national and intergovernmental disaster risk reduction activity such as community education, awareness campaigns, community evacuation planning and drills, may be indirectly inferred from rapid and positive community behavioural responses. By this I mean many people in coastal communities who experience strong earthquakes are starting a process of self-evacuation, even if regional tsunami warning centres have not issued an alert or warning. For example, when people in coastal locations in Samoa felt a large earthquake on 29 September 2009, many self-evacuated to higher ground or sought information and instruction from relevant authorities because they expected a tsunami to occur. When interviewed, survivors stated that the memory of television and media coverage of the 2004 IOT acted as a catalyst for their affirmative behavioural response (Dominey-Howes and Thaman 1). Thus, individual and community cultural understandings of the nature and effects of tsunami catastrophes are incredibly important for shaping resilience and reducing vulnerability. However, this cultural shift is not playing out evenly.

Are Australia and Its People at Risk from Tsunamis?

Prior to the 2004 IOT, there was little discussion about, research into, or awareness about tsunamis and Australia. Ted Bryant from the University of Wollongong had controversially proposed that Australia had been affected by tsunamis much bigger than the 2004 IOT six to eight times during the last 10,000 years and that it was only a matter of when, not if, such an event repeated itself (Bryant, "Second Edition"). Whilst his claims had received some media attention, his ideas did not achieve widespread scientific, cultural, or community acceptance. Notwithstanding this, Australia has been affected by more than 60 small tsunamis since European colonisation (Dominey-Howes 239). 
Indeed, the 2004 IOT and 2006 Java tsunami caused significant flooding of parts of the Northern Territory and Western Australia (Prendergast and Brown 69). However, the affected areas were sparsely populated and experienced very little in the way of damage or loss. Thus they did not cross any sort of critical threshold of “catastrophe” and failed to achieve meaningful community consciousness—they were not agents of cultural transformation. Regardless of the risk faced by Australia’s coastline, Australians travel to, and holiday in, places that experience tsunamis. In fact, 26 Australians were killed during the 2004 IOT (DFAT) and five were killed by the September 2009 South Pacific tsunami (Caldwell et al. 26).

What Role Do the Media Play in Preparing for and Responding to Catastrophe?

Regardless of the type of hazard/disaster/catastrophe, the key functions the media play include (but are not limited to): pre-event community education, awareness raising, and planning and preparations; during-event preparation and action, including status updates, evacuation warnings and notices, and recommendations for affirmative behaviours; and post-event responses and recovery actions to follow, including where to gain aid and support. Further, the media also play a role in providing a forum for debate and post-event analysis and reflection, as a mechanism to hold decision makers to account. From time to time, the media also provide a platform for examining who, if anyone, might be to blame for losses sustained during catastrophes and can act as a powerful conduit for driving socio-cultural, behavioural, and policy change. Many of these functions are elegantly described, and a series of best practices outlined, by the Caribbean Disaster Emergency Management Agency in a tsunami-specific publication freely available online (CDEMA 1). 
What Has Been the Media Coverage in Australia about Tsunamis and Their Effects on Australians?

A manifest content analysis of media material covering tsunamis over the last decade using the framework of Cox et al. reveals that coverage falls into distinctive and repetitive forms or themes. After tsunamis, I have collected articles (more than 130 to date) published in key Australian national broadsheets (e.g., The Australian and Sydney Morning Herald) and tabloid (e.g., The Telegraph) newspapers and have watched on television and monitored on social media, such as YouTube and Facebook, the types of coverage given to tsunamis either affecting Australia, or Australians domestically and overseas. In all cases, I continued to monitor and collect these stories and accounts for a fixed period of four weeks after each event, commencing on the day of the tsunami. The themes raised in the coverage include: the nature of the event (for example, where, when, why did it occur, how big was it, and what were the effects); what emergency response and recovery actions are being undertaken by the emergency services and how these are being provided; exploration of how the event was made worse or better by poor/good planning and prior knowledge, action or inaction, confusion and misunderstanding; the attribution of blame and responsibility; the good news story—often the discovery and rescue of an “iconic victim/survivor”—usually a child days to weeks later; and follow-up reporting weeks to months later and on anniversaries. This coverage generally focuses on how things are improving and is often juxtaposed with the ongoing suffering of victims. 
I select the word “victims” purposefully for the media frequently prefer this over the more affirmative “survivor.” The media seldom carry reports of “behind the scenes” disaster preparatory work such as community education programs, the development and installation of warning and monitoring systems, and ongoing training and policy work by response agencies and governments since such stories tend to be less glamorous in terms of the disaster gore factor and less newsworthy (Cox et al. 469; Miles and Morse 365; Ploughman 308). With regard to Australians specifically, the manifest content analysis reveals that coverage can be described as follows. First, it focuses on those Australians killed and injured. Such coverage provides elements of a biography of the victims, telling their stories, personalising these individuals so we build empathy for their suffering and the suffering of their families. The Australian victims are not unknown strangers—they are named and pictures of their smiling faces are printed or broadcast. Second, the media describe and catalogue the loss and ongoing suffering of the victims (survivors). Third, the media use phrases to describe Australians such as “innocent victims in the wrong place at the wrong time.” This narrative establishes the sense that these “innocents” have been somehow wronged and transgressed and that suffering should not be experienced by them. The fourth theme addresses the difficulties Australians have in accessing Consular support and in acquiring replacement passports in order to return home. It usually goes on to describe how they have difficulty in gaining access to accommodation, clothing, food, and water and any necessary medicines and the challenges associated with booking travel home and the complexities of communicating with family and friends. The last theme focuses on how Australians were often (usually?) 
not given relevant safety information by “responsible people” or “those in the know” in the place where they were at the time of the tsunami. This establishes a sense that Australians were left out and not considered by the relevant authorities. This narrative pays little attention to the wide scale impact upon and suffering of resident local populations who lack the capacity to escape the landscape of catastrophe.

How Does Australian Media Coverage of (Tsunami) Catastrophe Compare with Elsewhere?

A review of the available literature suggests media coverage of catastrophes involving domestic citizens is similar globally. For example, Olofsson (557), in an analysis of newspaper articles in Sweden about the 2004 IOT, showed that the tsunami was framed as a Swedish disaster heavily focused on Sweden, Swedish victims, and Thailand, and that there was a division between “us” (Swedes) and “them” (others or non-Swedes). Olofsson (557) described two types of “us” and “them.” At the international level Sweden, i.e. “us,” was glorified and contrasted with “inferior” countries such as Thailand, “them.” Olofsson (557) concluded that mediated frames of catastrophe are influenced by stereotypes and nationalistic values. Such nationalistic approaches preface one type of suffering in catastrophe over others and delegitimise the experiences of some survivors. Thus, catastrophes are not evenly experienced. Importantly, Olofsson, although not explicitly using the term, explains that the underlying reason for this construction of “them” and “us” is a form of imperialism and colonialism. Sharp refers to “historically rooted power hierarchies between countries and regions of the world” (304)—this is especially so of western news media reporting on catastrophes within and affecting “other” (non-western) countries. 
Sharp goes much further in relation to western representations and imaginations of the “war on terror” (arguably a global catastrophe) by explicitly noting the near universal western-centric dominance of this representation and the construction of the “west” as good and all “non-west” as not (299). Like it or not, the western media, including elements of the mainstream Australian media, adhere to this imperialistic representation. Studies of tsunami and other catastrophes drawing upon different types of media (still images, video, film, camera, and social media such as Facebook, Twitter, and the like) and from different national settings have explored the multiple functions of media. These functions include: providing information, questioning the authorities, and offering a chance for transformative learning. Further, they alleviate pain and suffering, providing new virtual communities of shared experience and hearing that facilitate resilience and recovery from catastrophe. Lastly, they contribute to a cultural transformation of catastrophe—both positive and negative (Hjorth and Kyoung-hwa "The Mourning"; "Good Grief"; McCargo and Hyon-Suk 236; Brown and Minty 9; Lau et al. 675; Morgan and de Goyet 33; Piotrowski and Armstrong 341; Sood et al. 27).

Has Extensive Media Coverage Resulted in an Improved Awareness of the Catastrophic Potential of Tsunami for Australians?

In playing devil’s advocate, my simple response is NO! This is because I have been interviewing Australians about their perceptions and knowledge of tsunamis as a catastrophe, after events have occurred. These events have triggered alerts and warnings by the Australian Tsunami Warning System (ATWS) for selected coastal regions of Australia. Consequently, I have visited coastal suburbs and interviewed people about tsunamis generally and those events specifically. 
Formal interviews (surveys) and informal conversations have revolved around what people perceived about the hazard, the likely consequences, what they knew about the warning, where they got their information from, how they behaved and why, and so forth. I have undertaken this work after the 2007 Solomon Islands, 2009 New Zealand, 2009 South Pacific, the February 2010 Chile, and March 2011 Japan tsunamis. I have now spoken to more than 800 people. Detailed research results will be presented elsewhere, but of relevance here, I have discovered that, to begin with, Australians have a reasonable and shared cultural knowledge of the potential catastrophic effects that tsunamis can have. They use terms such as “devastating; death; damage; loss; frightening; economic impact; societal loss; horrific; overwhelming and catastrophic.” Secondly, when I ask Australians about their sources of information about tsunamis, they describe the television (80%); Internet (85%); radio (25%); newspaper (35%); and social media including YouTube (65%). This tells me that the media are critical to underpinning knowledge of catastrophe and are a powerful transformative medium for the acquisition of knowledge. 
Thirdly, when asked about where people get information about live warning messages and alerts, Australians stated the “television (95%); Internet (70%); family and friends (65%).” Fourthly and significantly, when individuals were asked what they thought being caught in a tsunami would be like, responses included “fun (50%); awesome (75%); like in a movie (40%).” Fifthly, when people were asked about what they would do (i.e., their “stated behaviour”) during a real tsunami arriving at the coast, responses included “go down to the beach to swim/surf the tsunami (40%); go to the sea to watch (85%); video the tsunami and sell to the news media people (40%).” An independent and powerful representation of the disjunct between Australians’ knowledge of the catastrophic potential of tsunamis and their “negative” behavioural response can be found in viewing live television news coverage broadcast from Sydney beaches on the morning of Sunday 28 February 2010. The Chilean tsunami had taken more than 14 hours to travel from Chile to the eastern seaboard of Australia and the ATWS had issued an accurate warning and had correctly forecast the arrival time of the tsunami (approximately 08.30 am). The television and radio media had dutifully broadcast the warning issued by the State Emergency Services. The message was simple: “Stay out of the water, evacuate the beaches and move to higher ground.” As the tsunami arrived, those news broadcasts showed volunteer State Emergency Service personnel and Surf Life Saving Australia lifeguards “begging” with literally hundreds (probably thousands up and down the eastern seaboard of Australia) of members of the public to stop swimming in the incoming tsunami and to evacuate the beaches. On that occasion, Australians were lucky and the tsunami was inconsequential. What do these responses mean? Clearly Australians recognise and can describe the consequences of a tsunami. 
However, they are not associating the catastrophic nature of tsunami with their own lives or experience. They are avoiding or disallowing the reality; they normalise and dramatise the event. Thus in Australia, to date, a cultural transformation about the catastrophic nature of tsunami has not occurred, for reasons that are not entirely clear but are the subject of ongoing study.
The Emergence of Tsunami as a “New Australian Catastrophe”?
As a natural disaster expert with nearly two decades’ experience, in my mind tsunami has emerged as a “new Australian catastrophe.” I believe this has occurred for a number of reasons. Firstly, the 2004 IOT was devastating and did impact northwestern Australia, raising the flag on this hitherto unknown threat. Australia is now known to be vulnerable to the tsunami catastrophe. The media have played a critical role here. Secondly, in the 2004 IOT and other tsunamis since, Australians have died and their deaths have been widely reported in the Australian media. Thirdly, the emergence of various forms of social media has facilitated an explosion in information and material that can be consumed, digested, reimagined, and normalised by Australians hungry for the gore of catastrophe—it feeds our desire for catastrophic death and destruction. Fourthly, catastrophe has been creatively imagined and retold for a story-hungry viewing public. Whether through regular television shows easily consumed from a comfy chair at home, or whilst eating popcorn at a cinema, tsunami catastrophe is being fed to us in a way that reaffirms its naturalness. Juxtaposed against this idea though is that, despite all the graphic imagery of tsunami catastrophe, especially images of dead children in other countries, Australian media do not, and culturally cannot, display images of dead Australian children. 
Such images are widely considered too gruesome, but are well known to drive changes in cultural behaviour because of the iconic significance of the child within our society. As such, a cultural shift has not yet occurred and so the potential of catastrophe remains waiting to strike. Fifthly and significantly, the fact that large numbers of Australians have not died during recent tsunamis means that, again, the catastrophic potential of tsunamis is not yet realised and has not resulted in cultural changes to more affirmative behaviour. Lastly, Australians are probably more aware of “regular or common” catastrophes such as floods and bush fires that are normal to the Australian climate system and which are endlessly experienced individually and culturally and covered by the media in all forms. The Australian summer of 2012–13 has again been dominated by floods and fires. If this idea is accepted, the media construct a uniquely Australian imaginary of catastrophe and cultural discourse of disaster. The familiarity with these common climate catastrophes makes us “culturally blind” to the catastrophe that is tsunami. The consequences of a major tsunami affecting Australia at some point in the future are likely to be of a scale not yet comprehensible.
References
Australian Broadcasting Corporation (ABC). "ABC Net Splash." 20 Mar. 2013 ‹http://splash.abc.net.au/media?id=31077›. Brown, Philip, and Jessica Minty. “Media Coverage and Charitable Giving after the 2004 Tsunami.” Southern Economic Journal 75 (2008): 9–25. Bryant, Edward. Tsunami: The Underrated Hazard. First Edition, Cambridge: Cambridge UP, 2001. ———. Tsunami: The Underrated Hazard. Second Edition, Sydney: Springer-Praxis, 2008. Caldwell, Anna, Natalie Gregg, Fiona Hudson, Patrick Lion, Janelle Miles, Bart Sinclair, and John Wright. “Samoa Tsunami Claims Five Aussies as Death Toll Rises.” The Courier Mail 1 Oct. 2009. 20 Mar. 
2013 ‹http://www.couriermail.com.au/news/samoa-tsunami-claims-five-aussies-as-death-toll-rises/story-e6freon6-1225781357413›. CDEMA. "The Caribbean Disaster Emergency Management Agency. Tsunami SMART Media Web Site." 18 Dec. 2012. 20 Mar. 2013 ‹http://weready.org/tsunami/index.php?Itemid=40&id=40&option=com_content&view=article›. Cox, Robin, Bonita Long, and Megan Jones. “Sequestering of Suffering – Critical Discourse Analysis of Natural Disaster Media Coverage.” Journal of Health Psychology 13 (2008): 469–80. “CSI: Miami (Season 3, Episode 7).” International Movie Database (IMDb). ‹http://www.imdb.com/title/tt0534784/›. 9 Jan. 2013. "CSI: Miami (Season 3)." Wikipedia. ‹http://en.wikipedia.org/wiki/CSI:_Miami_(season_3)#Episodes›. 21 Mar. 2013. DFAT. "Department of Foreign Affairs and Trade Annual Report 2004–2005." 8 Jan. 2013 ‹http://www.dfat.gov.au/dept/annual_reports/04_05/downloads/2_Outcome2.pdf›. Dominey-Howes, Dale. “Geological and Historical Records of Australian Tsunami.” Marine Geology 239 (2007): 99–123. Dominey-Howes, Dale, and Randy Thaman. “UNESCO-IOC International Tsunami Survey Team Samoa Interim Report of Field Survey 14–21 October 2009.” No. 2. Australian Tsunami Research Centre. University of New South Wales, Sydney. "Hereafter." International Movie Database (IMDb). ‹http://www.imdb.com/title/tt1212419/›. 9 Jan. 2013. "Hereafter." Wikipedia. ‹http://en.wikipedia.org/wiki/Hereafter (film)›. 21 Mar. 2013. Hjorth, Larissa, and Yonnie Kyoung-hwa. “The Mourning After: A Case Study of Social Media in the 3.11 Earthquake Disaster in Japan.” Television and News Media 12 (2011): 552–59. ———. “Good Grief: The Role of Mobile Social Media in the 3.11 Earthquake Disaster in Japan.” Digital Creativity 22 (2011): 187–99. Lau, Joseph, Mason Lau, and Jean Kim. “Impacts of Media Coverage on the Community Stress Level in Hong Kong after the Tsunami on 26 December 2004.” Journal of Epidemiology and Community Health 60 (2006): 675–82. 
McCargo, Duncan, and Lee Hyon-Suk. “Japan’s Political Tsunami: What’s Media Got to Do with It?” International Journal of Press-Politics 15 (2010): 236–45. Miles, Brian, and Stephanie Morse. “The Role of News Media in Natural Disaster Risk and Recovery.” Ecological Economics 63 (2007): 365–73. Morgan, Olive, and Charles de Goyet. “Dispelling Disaster Myths about Dead Bodies and Disease: The Role of Scientific Evidence and the Media.” Revista Panamericana de Salud Publica-Pan American Journal of Public Health 18 (2005): 33–6. Olofsson, Anna. “The Indian Ocean Tsunami in Swedish Newspapers: Nationalism after Catastrophe.” Disaster Prevention and Management 20 (2011): 557–69. Piotrowski, Chris, and Terry Armstrong. “Mass Media Preferences in Disaster: A Study of Hurricane Danny.” Social Behavior and Personality 26 (1998): 341–45. Ploughman, Penelope. “The American Print News Media Construction of Five Natural Disasters.” Disasters 19 (1995): 308–26. Prendergast, Amy, and Nick Brown. “Far Field Impact and Coastal Sedimentation Associated with the 2006 Java Tsunami in West Australia: Post-Tsunami Survey at Steep Point, West Australia.” Natural Hazards 60 (2012): 69–79. Sharp, Joanne. “A Subaltern Critical Geopolitics of The War on Terror: Postcolonial Security in Tanzania.” Geoforum 42 (2011): 297–305. Sood, Rahul, Geoffrey Stockdale, and Everett Rogers. “How the News Media Operate in Natural Disasters.” Journal of Communication 37 (1987): 27–41.
Estilos ABNT, Harvard, Vancouver, APA, etc.

Teses / dissertações sobre o assunto "Stockage virtuel"

1

Nguetchouang, Ngongang Kevin. "Efficient Storage Virtualization in Cloud Environments". Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0034.

Texto completo da fonte
Resumo:
L’avènement du Cloud computing a révolutionné le paysage de la technologie moderne, offrant aux organisations une flexibilité, une capacité de passage à l’échelle et une accessibilité sans précédent dans l’exploitation des ressources informatiques à distance via Internet. Des innovations clés telles que le serverless, les conteneurs et les machines virtuelles ont joué des rôles cruciaux dans le remodelage de la manière dont les entreprises opèrent et fournissent des services. À notre époque de prise de décision basée sur les données, des solutions de stockage efficaces sont primordiales. La virtualisation du stockage émerge comme un facilitateur de la gestion efficace du stockage dans les environnements cloud, permettant une évolutivité et une optimisation des ressources sans faille. Cependant, la croissance rapide des données et l’évolution des besoins des utilisateurs ont amené de nouveaux défis tels que la gestion des sauvegardes, la réduction de la latence d’accès au stockage et la tolérance aux pannes dans les environnements de stockage distribué. Cette thèse présente trois contributions significatives pour relever ces défis : un format de disque virtuel évolutif pour résoudre le problème de passage à l’échelle de disques virtuels composés de longues chaînes de snapshots ; un système de mise en cache opportuniste des données pour les plateformes FaaS ; un système de stockage distribué qui tire parti de l’existence de répliques secondaires de données pour garantir l’équilibre de la charge des demandes et l’équité de la gestion des ressources. Nous avons construit un prototype de chacune de nos contributions et validé leur efficacité
The advent of cloud computing has revolutionized the modern technology landscape, providing organizations with unprecedented flexibility, scalability and accessibility in operating computing resources remotely over the Internet. Key innovations such as serverless, containers and virtual machines have played crucial roles in reshaping the way businesses operate and deliver services. In this age of data-driven decision-making, efficient storage solutions are paramount. Storage virtualization is emerging as an enabler for efficient storage management in cloud environments, enabling seamless scalability and resource optimization. However, rapid data growth and evolving user needs have brought new challenges such as managing backups, reducing storage access latency, and fault tolerance in distributed storage environments. This thesis presents three significant contributions to address these challenges: a scalable virtual disk format to solve the problem of scaling virtual disks composed of long snapshot chains; an opportunistic data caching system for FaaS platforms; and a distributed storage system that takes advantage of the existence of secondary replicas of data to ensure request load balancing and fairness of resource management. We built a prototype of each of our contributions and validated their effectiveness
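The scalability problem behind the first contribution can be illustrated with a minimal sketch (the structures and names below are hypothetical, not the thesis's actual disk format): a naive copy-on-write disk resolves each read by walking the snapshot chain, so read cost grows with the chain's length, while a flattened block index restores constant-time lookups.

```python
# Minimal sketch of why long snapshot chains hurt reads: a naive
# backing-chain lookup walks snapshots until one contains the block,
# so cost is O(chain length); a flattened block index is O(1).

class Snapshot:
    def __init__(self, blocks, parent=None):
        self.blocks = blocks          # {block_id: data} written in this snapshot
        self.parent = parent          # next link in the backing chain

def chain_read(snap, block_id):
    """Walk the backing chain: O(chain length) per read."""
    while snap is not None:
        if block_id in snap.blocks:
            return snap.blocks[block_id]
        snap = snap.parent
    return b"\0"                      # unallocated blocks read as zeroes

def build_index(head):
    """Flatten the chain once into a single block->data map."""
    chain = []
    while head is not None:
        chain.append(head)
        head = head.parent
    index = {}
    for snap in reversed(chain):      # newer snapshots override older ones
        index.update(snap.blocks)
    return index

base = Snapshot({0: b"base0", 1: b"base1"})
snap1 = Snapshot({1: b"v2"}, parent=base)
head = Snapshot({2: b"new"}, parent=snap1)

assert chain_read(head, 1) == b"v2"
assert build_index(head) == {0: b"base0", 1: b"v2", 2: b"new"}
```

A real format would persist such an index and update it incrementally on writes; the sketch only shows why read latency degrades as snapshot chains grow.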
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Nikolaidis, Fotios. "Tromos : a software development kit for virtual storage systems". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV033/document.

Texto completo da fonte
Resumo:
Les applications modernes tendent à diverger à la fois par leur profil d'E/S et par leurs exigences de stockage. Associer une application scientifique ou commerciale à un système "general-purpose" produit très probablement un résultat sous-optimal. Même en présence de systèmes "purpose-specific", les applications comportant plusieurs classes de charges de travail doivent encore distribuer le travail au système approprié. Cependant, cette stratégie n'est pas triviale, car les différentes plateformes visent des objectifs diversifiés et exigent par conséquent que l'application intègre plusieurs chemins de code. L'implémentation de ces chemins n'est pas triviale : elle demande beaucoup d'effort et de compétences en programmation. Le problème s'aggrave quand les applications doivent exploiter plusieurs data-stores en parallèle. Dans cette thèse, nous introduisons les "storage containers" comme la prochaine étape logique, mais révolutionnaire. Un "storage container" est une infrastructure virtuelle qui découple une application de ses data-stores sous-jacents, de la même manière que Docker découple le runtime de l'application des serveurs physiques. En particulier, un "storage container" est un middleware qui sépare les changements apportés au code de l'application par les utilisateurs scientifiques de ceux apportés aux actions d'E/S par les développeurs ou les administrateurs. Pour faciliter le développement et le déploiement d'un "storage container", nous introduisons un cadre appelé Tromos. À travers celui-ci, tout ce qu'un architecte d'application doit faire pour construire une solution de stockage est de modéliser l'environnement voulu dans un fichier de définition et de laisser le reste au logiciel. Tromos est livré avec un dépôt de plugins parmi lesquels l'architecte peut choisir afin d'optimiser le conteneur pour l'application visée. 
Parmi les options disponibles figurent des transformations de données, des politiques de placement des données, des méthodes de reconstruction des données, la gestion de l'espace de noms et la gestion de la cohérence à la demande. Comme preuve de concept, nous utilisons Tromos pour créer des environnements de stockage personnalisés que nous comparons à Gluster, un système de stockage bien établi et polyvalent. Les résultats montrent que les "storage containers" adaptés aux applications, même auto-produits, peuvent surpasser des systèmes "general-purpose" plus matures en supprimant simplement la surcharge inutile des fonctionnalités inutilisées
Modern applications tend to diverge both in their I/O profile and in their storage requirements. Matching a scientific or commercial application with a general-purpose system will most likely yield suboptimal performance. Even in the presence of purpose-specific systems, applications with multiple classes of workloads still need to disseminate the workload to the right system. This strategy, however, is not trivial, as different platforms aim at diversified goals and therefore require the application to incorporate multiple codepaths. Implementing such codepaths is non-trivial, requires a lot of effort and programming skills, and is error-prone. The hurdles get worse when applications need to leverage multiple data-stores in parallel. In this dissertation, we introduce "storage containers" as the next logical step in the storage evolution. A "storage container" is a virtual infrastructure that decouples the application from the underlying data-stores in the same way Docker decouples the application runtime from the physical servers. In other words, it is middleware that separates changes made to application code by science users from changes made to I/O actions by developers or administrators. To facilitate the development and deployment of a "storage container" we introduce a framework called Tromos. Through its lens, all it takes for an application architect to spin up a custom storage solution is to model the target environment in a definition file and let the framework handle the rest. Tromos comes with a repository of plugins from which the architect can choose to optimize the container for the application at hand. Available options include data transformations, data placement policies, data reconstruction methods, namespace management, and on-demand consistency handling. As a proof of concept, we use Tromos to prototype customized storage environments which we compare against Gluster, a well-established and versatile storage system. 
The results show that application-tailored "storage containers", even if auto-produced, can outperform more mature general-purpose systems by merely removing the unnecessary overhead of unused features
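The definition-file idea can be sketched as follows (the plugin names, definition keys, and driver below are invented for illustration and are not Tromos's actual API): a declarative description selects plugins from a repository, and a generic driver composes them into an I/O path.

```python
# Illustrative sketch only: a tiny plugin repository and a driver that
# assembles a "storage container" from a declarative definition.
import zlib

PLUGINS = {
    "transform": {
        "compress": (zlib.compress, zlib.decompress),
        "none": (lambda d: d, lambda d: d),
    },
    "placement": {
        # deterministic hash placement over the available data-stores
        "hashed": lambda key, stores: stores[hash(key) % len(stores)],
    },
}

def build_container(definition, stores):
    """Compose selected plugins into put/get entry points."""
    enc, dec = PLUGINS["transform"][definition["transform"]]
    place = PLUGINS["placement"][definition["placement"]]
    def put(key, data):
        place(key, stores)[key] = enc(data)
    def get(key):
        return dec(place(key, stores)[key])
    return put, get

stores = [{}, {}]   # two toy in-memory data-stores
put, get = build_container(
    {"transform": "compress", "placement": "hashed"}, stores)
put("obj", b"hello world" * 100)
assert get("obj") == b"hello world" * 100
```

The design point being illustrated is that the application only sees `put`/`get`, while every I/O concern (transformation, placement) is swapped by editing the definition, not the application code.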
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Coutinho, Sofia de Sousa. "Étude et analyse des propriétés fondamentales du dititanate de rubidium Rb2Ti2O5 pour des applications de stockage d’énergie". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS087.

Texto completo da fonte
Resumo:
Un système memristif est un composant dont la valeur de la résistance est fonction de tout l’historique électrique, soit des charges ou du flux qui l’a traversé. Cette propriété peut être employée pour le stockage de l’information et dans certaines conditions de l’énergie. Or, le dititanate de rubidium (RTO) présente des propriétés memristives et de stockage intrinsèques. Cette thèse a pour objet l’étude et l’analyse des propriétés fondamentales du RTO pour des applications de stockage d’énergie. L’étude fondamentale a été conduite par l’intermédiaire d’expériences de résonance magnétique nucléaire, de spectroscopie d’impédance ou encore de distribution de charge. Elle a permis la mise en évidence de propriétés intrinsèques telles que l’accumulation d’espèces ioniques négatives à l’interface anodique associée à l’existence d’une cathode virtuelle et le rôle fondamental de l’eau dans les propriétés remarquables du RTO. Ces résultats expérimentaux, associés à une étude théorique, ont permis d’aboutir à un modèle microscopique de dissociation de l’eau et de conduction des espèces mobiles au sein du RTO via un mécanisme de Grotthuss. Des dispositifs analogues à des supercondensateurs pour le stockage de l’énergie électrique ont été réalisés et caractérisés. Les résultats obtenus confirment l’intérêt du RTO pour ce genre d’application avec de nombreuses pistes d’améliorations possibles
A so-called memristive system is a component whose resistance value is a function of its entire electrical history, that is, of all the charge or flux that has passed through it. This property can be used for information storage and, under certain conditions, for energy storage. Notably, rubidium dititanate (RTO) exhibits intrinsic memristive and storage properties. The purpose of this thesis is to study and analyze the fundamental properties of RTO for energy storage applications. The fundamental study was carried out through nuclear magnetic resonance experiments, impedance spectroscopy, and charge distribution measurements. It demonstrated intrinsic properties such as the accumulation of negative ionic species at the anodic interface, associated with the existence of a virtual cathode, and the fundamental role of water in the remarkable properties of RTO. These experimental results, combined with a theoretical study, led to a microscopic model of water dissociation and conduction of mobile species within the RTO via a Grotthuss-type mechanism. Supercapacitor-like devices for electrical energy storage were then developed and characterized. The results obtained confirm the interest of RTO for this type of application, with many possible avenues for improvement
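The history dependence described above can be illustrated with a generic charge-controlled memristor model (the parameters below are arbitrary illustrative values, not fitted to Rb2Ti2O5): the resistance at any instant depends on the total charge that has flowed through the device.

```python
# Generic charge-controlled memristor sketch: R is a function of the
# accumulated charge q, i.e. of the device's electrical history.
import math

R_ON, R_OFF, Q_MAX = 100.0, 16e3, 1e-2   # ohms, ohms, coulombs (assumed)

def resistance(q):
    x = min(max(q / Q_MAX, 0.0), 1.0)     # normalised internal state
    return R_OFF - (R_OFF - R_ON) * x

q, dt = 0.0, 1e-5
history = []
for step in range(2000):
    v = 1.5 * math.sin(2 * math.pi * 50 * step * dt)   # 50 Hz drive
    r = resistance(q)
    i = v / r
    q += i * dt                            # charge history accumulates
    history.append(r)

# The resistance drifts away from its initial value as charge flows:
assert min(history) < history[0]
```

Plotting current against voltage for such a loop would give the pinched hysteresis that is the signature of memristive behaviour.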
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Tran, Van Giang. "Conception optimale d’une centrale électrique virtuelle intégrant des énergies renouvelables". Perpignan, 2010. http://www.theses.fr/2010PERP0998.

Texto completo da fonte
Resumo:
Le travail réalisé et présenté dans ce manuscrit répond au nécessaire développement d’une centrale électrique virtuelle, permettant de gérer les systèmes de production d’électricité et de promouvoir les énergies renouvelables au sein de la communauté d’agglomération Perpignan Méditerranée (Pyrénées-Orientales). Dans un premier temps, sont présentés le contexte énergétique global, l’état de l’art concernant l’implantation de centrales électriques virtuelles dans le monde ainsi que l’approche proposée pour la gestion des ressources énergétiques de la communauté d’agglomération. Dans un deuxième temps, des modules de prédiction à court terme de la charge du réseau électrique et de paramètres météorologiques, tels que la vitesse moyenne de vent et l’irradiation solaire globale, ont été développés et intégrés à l’outil développé. Des scénarios prévisionnels et plusieurs stratégies de gestion énergétique ont été proposés afin de répondre au mieux à la demande d’électricité. Est considérée la possibilité de stocker de l’énergie ainsi que de vendre ou d’acheter sur le marché Powernext. Enfin, l’outil développé offre la possibilité de dimensionner de façon optimale de nouveaux systèmes de production et d’étudier la pertinence de leur implantation. Avec le développement rapide du marché concurrentiel de l'électricité et la nécessaire réduction des émissions de gaz à effet de serre, la centrale électrique virtuelle proposée a pour objectif principal d’améliorer l’efficacité économique et de favoriser la protection de l'environnement
The present work addresses the necessary development of a virtual power plant for managing energy production systems and promoting renewable energy for the Perpignan Méditerranée agglomeration community (Pyrénées-Orientales, France). First, the worldwide energy context, the state of the art on virtual power plants, and the proposed approach for managing energy resources are presented. Next, a methodology for forecasting the electric load and meteorological parameters, such as the mean wind speed and the global solar irradiation, is proposed and integrated as a module in the virtual power plant. Scenarios and energy management strategies were developed with the purpose of satisfying the electricity demand using renewable energy. Storing energy, as well as buying or selling on the Powernext market, was also considered. Finally, the proposed tool opens the possibility of optimally sizing new production systems. Given both the rapid growth of the competitive electricity market and the need to reduce greenhouse gas emissions, the developed virtual power plant focuses on improving economic efficiency and favouring environmental protection
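A toy version of such an energy-management strategy might look like this (the greedy store-then-trade rule, the units, and all names are assumptions for illustration, not the thesis's actual strategies): at each time step, forecast production covers demand, storage absorbs the gap, and only the remainder is traded on the market.

```python
# Toy dispatch rule: charge/discharge storage first, trade the remainder.
def dispatch(production, demand, soc, capacity):
    """Return (new_soc, bought, sold) in MWh for one time step."""
    surplus = production - demand
    if surplus >= 0:
        stored = min(surplus, capacity - soc)       # charge storage first
        return soc + stored, 0.0, surplus - stored  # sell what is left
    deficit = -surplus
    released = min(deficit, soc)                    # discharge storage first
    return soc - released, deficit - released, 0.0  # buy the remainder

soc = 2.0                                           # MWh initially stored
plan = []
for prod, dem in [(5.0, 3.0), (1.0, 4.0), (0.0, 2.0)]:
    soc, bought, sold = dispatch(prod, dem, soc, capacity=4.0)
    plan.append((round(soc, 1), bought, sold))

# surplus fills storage, then storage covers deficits, then the market:
assert plan == [(4.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

In a real virtual power plant the decision would weigh forecast uncertainty and market prices rather than apply a fixed priority order; the sketch only shows how forecasts, storage and the market interlock.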
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Knaff, Alain. "Conception et réalisation d'un service de stockage fiable et extensible pour un système réparti à objets persistants". Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004998.

Texto completo da fonte
Resumo:
Cette thèse décrit la conception et la mise en oeuvre d'un service de stockage fiable et extensible. Les travaux ont été faits dans le cadre de Sirac, un système réparti à objets persistants. L'objectif de Sirac est de fournir des services pour le support d'objets persistants répartis et pour la construction d'applications réparties. Les deux idées qui ont dirigé cette étude sont la souplesse des services offerts et la coopération entre les sous-systèmes. La souplesse, rendue possible par la conception modulaire du système, améliore les performances, étant donné que les applications doivent seulement payer le prix des services qu'elles utilisent. La coopération (par exemple entre le stockage et la pagination) permet aux différents modules de prendre des décisions en connaissance de cause. La thèse présente dans le second chapitre un état de l'art en trois parties. La première partie s'attache à étudier la manière dont un grand espace de stockage unique peut être présenté aux applications. La deuxième partie analyse la mise en oeuvre du stockage fiable en étudiant notamment différentes réalisations de l'atomicité. La troisième partie enfin montre comment ces deux aspects sont mariés dans les systèmes modernes. Dans le troisième chapitre, nous faisons un rapide tour d'horizon d'Arias et de ses différents sous-systèmes : protection, cohérence, synchronisation et stockage. Au sein des différents services, nous distinguons d'un côté des modules génériques de bas niveau, et d'un autre côté des modules spécifiques aux applications. Les modules génériques mettent en oeuvre les mécanismes tandis que les modules spécifiques définissent la politique. Certains sous-systèmes sont toujours présents, comme la gestion de la cohérence et de la synchronisation, alors que d'autres, comme par exemple la gestion de la protection ou la gestion de la permanence, sont optionnels. Dans les quatrième et cinquième chapitres, nous nous concentrons sur le service de stockage. 
Le service générique de stockage est subdivisé en deux parties : d'abord un gestionnaire de volume, qui assure la pérennité des données, et puis un service de journalisation, qui assure l'atomicité des transactions. Ce système a été mis en oeuvre au-dessus d'AIX, et la coopération entre les différents modules s'appuie sur le mécanisme des streams. Les performances de notre système sont bonnes, et s'approchent des limites imposées par le matériel dans les cas favorables. Les projets futurs incluent la fourniture d'un vaste éventail de protocoles de journalisation spécifiques, le support de volumes dupliqués ainsi que l'optimisation du gestionnaire du volume.
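The division of labour between the volume manager (durability) and the journaling service (transaction atomicity) can be sketched minimally as a redo log (in memory here, with invented names; the actual service journals to stable storage and cooperates with the other modules through streams):

```python
# Minimal redo-log sketch of transaction atomicity: updates are logged
# first and applied to the volume only at commit, so an aborted
# transaction leaves no partial state behind.
class Journal:
    def __init__(self, volume):
        self.volume = volume           # the "volume manager": block -> data
        self.pending = []              # redo records of the open transaction

    def write(self, block, data):
        self.pending.append((block, data))   # log only; volume untouched

    def commit(self):
        for block, data in self.pending:     # replay the log in order
            self.volume[block] = data
        self.pending.clear()

    def abort(self):
        self.pending.clear()                 # nothing ever reached the volume

vol = {0: b"old"}
j = Journal(vol)
j.write(0, b"new")
j.abort()
assert vol[0] == b"old"                      # aborted txn left no trace
j.write(0, b"new")
j.commit()
assert vol[0] == b"new"
```

A disk-based journal would additionally force the log records to stable storage before commit, so that replay after a crash yields the same all-or-nothing outcome.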
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Knaff, Alain. "Conception et réalisation d'un service de stockage fiable et extensible pour un système réparti à objets persistants". Phd thesis, Grenoble 1, 1996. https://theses.hal.science/tel-00004998.

Texto completo da fonte
Resumo:
Cette thèse décrit la conception et la mise en oeuvre d'un service de stockage fiable et extensible. Les travaux ont été faits dans le cadre de Sirac, un système réparti à objets persistants. L'objectif de Sirac est de fournir des services pour le support d'objets persistants répartis et pour la construction d'applications réparties. Les deux idées qui ont dirigé cette étude sont la souplesse des services offerts et la coopération entre les sous-systèmes. La souplesse, rendue possible par la conception modulaire du système, améliore les performances, étant donné que les applications doivent seulement payer le prix des services qu'elles utilisent. La coopération (par exemple entre le stockage et la pagination) permet aux différents modules de prendre des décisions en connaissance de cause. La thèse présente dans le second chapitre un état de l'art en trois parties. La première partie s'attache à étudier la manière dont un grand espace de stockage unique peut être présenté aux applications. La deuxième partie analyse la mise en oeuvre du stockage fiable en étudiant notamment différentes réalisations de l'atomicité. La troisième partie enfin montre comment ces deux aspects sont mariés dans les systèmes modernes. Dans le troisième chapitre, nous faisons un rapide tour d'horizon d'Arias et de ses différents sous-systèmes : protection, cohérence, synchronisation et stockage. Au sein des différents services, nous distinguons d'un côté des modules génériques de bas niveau, et d'un autre côté des modules spécifiques aux applications. Les modules génériques mettent en oeuvre les mécanismes tandis que les modules spécifiques définissent la politique. Certains sous-systèmes sont toujours présents, comme la gestion de la cohérence et de la synchronisation, alors que d'autres, comme par exemple la gestion de la protection ou la gestion de la permanence, sont optionnels. Dans les quatrième et cinquième chapitres, nous nous concentrons sur le service de stockage. 
Le service générique de stockage est subdivisé en deux parties : d'abord un gestionnaire de volume, qui assure la pérennité des données, et puis un service de journalisation, qui assure l'atomicité des transactions. Ce système a été mis en oeuvre au-dessus d'AIX, et la coopération entre les différents modules s'appuie sur le mécanisme des streams. Les performances de notre système sont bonnes, et s'approchent des limites imposées par le matériel dans les cas favorables. Les projets futurs incluent la fourniture d'un vaste éventail de protocoles de journalisation spécifiques, le support de volumes dupliqués ainsi que l'optimisation du gestionnaire du volume
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Ouarnoughi, Hamza. "Placement autonomique de machines virtuelles sur un système de stockage hybride dans un cloud IaaS". Thesis, Brest, 2017. http://www.theses.fr/2017BRES0055/document.

Texto completo da fonte
Resumo:
Les opérateurs de cloud IaaS (Infrastructure as a Service) proposent à leurs clients des ressources virtualisées (CPU, stockage et réseau) sous forme de machines virtuelles (VM). L’explosion du marché du cloud les a contraints à optimiser très finement l’utilisation de leurs centres de données afin de proposer des services attractifs à moindre coût. En plus des investissements liés à l’achat des infrastructures et de leur coût d’utilisation, la consommation énergétique apparaît comme un point de dépense important (2% de la consommation mondiale) et en constante augmentation. Sa maîtrise représente pour ces opérateurs un levier très intéressant à exploiter. D’un point de vue technique, le contrôle de la consommation énergétique s’appuie essentiellement sur les méthodes de consolidation. Or la plupart d'entre elles ne prennent en compte que l’utilisation CPU des machines physiques (PM) pour le placement de VM. En effet, des études récentes ont montré que les systèmes de stockage et les E/S disque constituent une part considérable de la consommation énergétique d’un centre de données (entre 14% et 40%). Dans cette thèse nous introduisons un nouveau modèle autonomique d’optimisation de placement de VM inspiré de MAPE-K (Monitor, Analyze, Plan, Execute, Knowledge), et prenant en compte en plus du CPU, les E/S des VM ainsi que les systèmes de stockage associés. Ainsi, notre première contribution est relative au développement d’un outil de trace des E/S de VM multi-niveaux. Les traces collectées alimentent, dans l’étape Analyze, un modèle de coût étendu dont l’originalité consiste à prendre en compte le profil d’accès des VM, les caractéristiques du système de stockage, ainsi que les contraintes économiques de l’environnement cloud. Nous analysons par ailleurs les caractéristiques des deux principales classes de stockage, pour aboutir à un modèle hybride exploitant au mieux les avantages de chacune. 
En effet, les disques durs magnétiques (HDD) sont des supports de stockage à la fois énergivores et peu performants comparés aux unités de calcul. Néanmoins, leur prix par gigaoctet et leur longévité peuvent jouer en leur faveur. Contrairement aux HDD, les disques SSD à base de mémoire flash sont plus performants et consomment peu d’énergie. Leur prix élevé par gigaoctet et leur courte durée de vie (comparés aux HDD) représentent leurs contraintes majeures. L’étape Plan a donné lieu, d’une part, à une extension de l’outil de simulation CloudSim pour la prise en compte des E/S des VM et du caractère hybride du système de stockage, ainsi qu’à la mise en oeuvre du modèle de coût proposé dans l’étape Analyze. Nous avons proposé, d’autre part, plusieurs heuristiques se basant sur notre modèle de coût, que nous avons intégrées dans CloudSim. Nous montrons finalement que notre approche permet d’améliorer d’un facteur trois le coût de placement de VM obtenu par les approches existantes
IaaS cloud providers offer virtualized resources (CPU, storage, and network) as Virtual Machines(VM). The growth and highly competitive nature of this economy has compelled them to optimize the use of their data centers, in order to offer attractive services at a lower cost. In addition to investments related to infrastructure purchase and cost of use, energy efficiency is a major point of expenditure (2% of world consumption) and is constantly increasing. Its control represents a vital opportunity. From a technical point of view, the control of energy consumption is mainly based on consolidation approaches. These approaches, which exclusively take into account the CPU use of physical machines (PM) for the VM placement, present however many drawbacks. Indeed, recent studies have shown that storage systems and disk I/O represent a significant part of the data center energy consumption (between 14% and 40%).In this thesis we propose a new autonomic model for VM placement optimization based on MAPEK (Monitor, Analyze, Plan, Execute, Knowledge) whereby in addition to CPU, VM I/O and related storage systems are considered. Our first contribution proposes a multilevel VM I/O tracer which overcomes the limitations of existing I/O monitoring tools. In the Analyze step, the collected I/O traces are introduced in a cost model which takes into account the VM I/O profile, the storage system characteristics, and the cloud environment constraints. We also analyze the complementarity between the two main storage classes, resulting in a hybrid storage model exploiting the advantages of each. Indeed, Hard Disk Drives (HDD) represent energy-intensive and inefficient devices compared to compute units. However, their low cost per gigabyte and their long lifetime may constitute positive arguments. 
Unlike HDD, flash-based Solid-State Disks (SSD) are more efficient and consume less power, but their high cost per gigabyte and their short lifetime (compared to HDD) are major constraints. The Plan phase first resulted in an extension of CloudSim that takes into account VM I/O, the hybrid nature of the storage system, and the cost model proposed in the Analyze step. We then proposed several heuristics based on our cost model, which we integrated and evaluated in CloudSim. Finally, we showed that our contribution improves on existing VM placement optimization approaches by a factor of three.
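The abstract only names the cost-model-driven heuristics without detail; purely as an illustrative sketch (the class names, weights, and the cost formula below are invented here, not taken from the thesis or from CloudSim's API), a greedy placement that scores each candidate physical machine by combined CPU and energy-weighted I/O pressure could look like:

```python
# Hypothetical greedy VM placement that considers both CPU use and storage
# I/O on hybrid (SSD/HDD) hosts. All names, weights, and the cost formula
# are illustrative assumptions, not the thesis's actual model.
from dataclasses import dataclass, field

@dataclass
class PM:                      # physical machine with a local storage tier
    name: str
    cpu_cap: float             # remaining CPU capacity (normalized)
    iops_cap: float            # remaining IOPS of its storage device
    tier_watt_per_iops: float  # energy cost of one IOPS on this tier
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    cpu: float
    iops: float                # I/O demand, e.g. measured by an I/O tracer

def placement_cost(pm: PM, vm: VM, w_cpu=1.0, w_io=1.0) -> float:
    """Toy marginal cost: CPU pressure plus energy-weighted I/O pressure."""
    return (w_cpu * vm.cpu / pm.cpu_cap
            + w_io * vm.iops * pm.tier_watt_per_iops / pm.iops_cap)

def greedy_place(vms, pms):
    """Place each VM (most I/O-hungry first) on the cheapest feasible PM."""
    for vm in sorted(vms, key=lambda v: v.iops, reverse=True):
        feasible = [p for p in pms
                    if p.cpu_cap >= vm.cpu and p.iops_cap >= vm.iops]
        best = min(feasible, key=lambda p: placement_cost(p, vm))
        best.vms.append(vm.name)
        best.cpu_cap -= vm.cpu          # reserve the consumed capacity
        best.iops_cap -= vm.iops
    return {p.name: p.vms for p in pms}

ssd = PM("ssd-host", cpu_cap=8.0, iops_cap=50000, tier_watt_per_iops=0.0001)
hdd = PM("hdd-host", cpu_cap=8.0, iops_cap=2000, tier_watt_per_iops=0.01)
plan = greedy_place([VM("db", 2.0, 1500), VM("web", 1.0, 50)], [ssd, hdd])
print(plan)
```

With these made-up numbers the I/O-heavy "db" VM lands on the SSD-backed host while the CPU-light, I/O-light "web" VM goes to the HDD host: the kind of trade-off a CPU-only consolidation approach cannot express.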
8

Azoti, Wiyao Leleng. "Conception et amélioration des propriétés amortissantes des composites auxétiques basés sur l'utilisation des outils de la micromécanique". Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0218/document.

Abstract:
La conception de matériaux composites à particules, fibres ou structures sandwichs, faits de renforts auxétiques en vue de l'amélioration des propriétés amortissantes, est analysée dans cette thèse. Pour une telle analyse, le comportement auxétique décrivant un coefficient de Poisson négatif nécessite d'être compris tant d'un point de vue « effet structure » que « effet matériau ». Ce dernier point c'est-à-dire l'« effet matériau », faisant référence à la forme, aux orientations et différentes propriétés des phases constitutives du matériau, reste peu documenté dans la littérature scientifique. Ainsi partant d'un formalisme micromécanique basé sur l'équation cinématique intégrale de Dederichs et Zeller, nous explorons dans un premier temps et analytiquement le domaine de validité du matériau composite auxétique par le schéma mono-site de Mori-Tanaka. Ensuite des microstructures plus complexes, à l'instar de la microstructure du vide multi-enrobé et celle d'un cluster réentrant d'inclusions ellipsoïdales prenant en compte les interactions de ces dernières, sont étudiées et validées par des simulations Eléments Finis. Les résultats de ces analyses nous indiquent, par ailleurs dans le cas des matériaux isotropes que le comportement auxétique n'est atteint que si et seulement si une des phases du composite est initialement auxétique. Aussi, la nécessité d'introduire des liaisons ou inter-connections au niveau des inclusions ellipsoïdales est montrée comme étant la méthode conduisant à un effet auxétique au niveau de la microstructure du cluster réentrant. Outre cette analyse préliminaire sur le domaine de validité du comportement auxétique dans les composites, l'effet de l'introduction d'inclusions auxétiques dans une matrice viscoélastique en l'occurrence le PolyVinyle de Butyral (PVB) d'une part et l'utilisation de couches viscoélastiques et auxétiques dans les structures sandwichs d'autre part, ont été étudiés. 
Les réponses de ces matériaux en termes de propriétés amortissantes, telles que le module de stockage et le facteur de perte, sont alors déterminées et discutées par rapport aux composites à renforts non auxétiques (conventionnels).
The design of composite materials (particles, fibers, or sandwich structures) consisting of auxetic reinforcements, with enhanced damping properties, is studied herein. For such an analysis, the auxetic behavior describing a negative Poisson's ratio needs to be understood from a "structure effect" point of view as well as a "material effect" one. Indeed, the "material effect", which concerns the topological and morphological textures of the composite constituents, remains poorly documented in the literature. Based on the kinematic integral equation of Dederichs and Zeller, the design space of auxetic composite materials is first explored through an analytical one-site formulation of the Mori-Tanaka micromechanics scheme. Then, more complex microstructures are investigated through the micromechanics formalism as well as Finite Element Method (FEM) simulations: the multilayered hollow-cored microstructure, and a microstructure describing a cluster of re-entrant ellipsoidal inclusions in which the interaction among the inclusions is taken into account. The results of these investigations show, for instance in the case of isotropic materials, that auxeticity is achieved if and only if one of the material's constituents (inclusion or matrix) is initially auxetic. It is also noticed, in the case of the ellipsoidal inclusions describing the re-entrant cluster, that auxetic behavior can be recovered by introducing joints between inclusions; otherwise, favorable results are only expected with auxetic components. In addition to this preliminary analysis of the validity domain of auxetic behavior in composites, the effect of inserting auxetic reinforcements within a viscoelastic matrix, for instance PolyVinylButyral (PVB), on the one hand, and the use of auxetic and viscoelastic layers in sandwich structures on the other hand, are studied.
The response of these materials in terms of damping properties, such as the storage modulus and the loss factor, is then identified and discussed against composites with non-auxetic (conventional) reinforcements.
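As background for the damping quantities named above (these are the standard linear-viscoelasticity definitions, not formulas taken from the thesis itself): under harmonic loading the complex modulus separates into storage and loss parts, and the loss factor is their ratio,

$$E^{*} = E' + iE'', \qquad \eta = \tan\delta = \frac{E''}{E'},$$

where the storage modulus $E'$ measures the elastic energy stored per cycle, the loss modulus $E''$ measures the energy dissipated as heat, and a higher $\eta$ means stronger damping.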
9

Correia, Brum Rafaela. "Multi-FedLS : A Scheduler of Federated Learning Applications in a Multi-Cloud Environment". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS539.pdf.

Abstract:
L'apprentissage fédéré (AF) est un nouveau domaine de l'apprentissage machine distribué où l'apprentissage garantit la confidentialité des données. Chaque client a accès uniquement à son propre ensemble de données local et privé. Cette approche est attrayante dans divers domaines du savoir car elle permet à différentes institutions de collaborer sans partager leurs données confidentielles. Comme la quantité de données requises pour la formation a considérablement augmenté ces dernières années, la plupart des institutions ne peuvent pas se permettre des centres de données physiques pour stocker et manipuler l'ensemble de leurs données. Une option viable consiste à utiliser des services de stockage en nuage proposés par des fournisseurs offrant différentes garanties de confidentialité et de disponibilité des données. L'utilisateur est responsable du choix des régions où ses données sont stockées et du contrôle de leur accès. De plus, les fournisseurs de services en nuage offrent divers services pour exécuter une application. Ils permettent aux utilisateurs de créer des machines virtuelles (MV) avec différentes configurations, où les utilisateurs ont un contrôle total sur celles-ci. Ce type de service est appelé Infrastructure en tant que Service (IaaS). Ainsi, un environnement multi-cloud est propice à la collaboration de différentes institutions dans la création d'un modèle d'apprentissage machine grâce à l'apprentissage fédéré. Dans cette thèse, nous proposons Multi-FedLS, un framework robuste conçu pour exécuter des applications AF dans un environnement multi-cloud. Le framework prend en compte l'emplacement actuel des ensembles de données de chaque client, le délai de communication et le coût d'utilisation dans les nuages, en se concentrant sur la réduction des coûts et du temps d'exécution.
De plus, Multi-FedLS utilise des instances moins chères chaque fois que possible pour réduire les coûts, même si elles peuvent être révoquées à tout moment par le fournisseur de services en nuage. Ainsi, pour assurer l'exécution réussie des applications AF, le cadre utilise des techniques de tolérance aux pannes telles que les points de contrôle et la migration des tâches pour reprendre la formation sur une autre MV après une révocation. Multi-FedLS comprend quatre modules : Pre-Scheduling, Initial Mapping, Fault Tolerance et Dynamic Scheduler. Les résultats obtenus démontrent la faisabilité de l'exécution d'applications dans des environnements multi-cloud avec des MV peu coûteuses, en s'appuyant sur une formulation mathématique, des techniques de tolérance aux pannes et des heuristiques simples pour la sélection de nouvelles MV. Le framework a obtenu une réduction des coûts de 56,92 % par rapport à l'exécution de l'application sur des MV plus coûteuses, avec seulement une augmentation de 5,44 % du temps d'exécution sur les fournisseurs de services en nuage commerciaux.
Federated Learning (FL) is a new area of distributed Machine Learning (ML) where learning ensures data privacy. Each client has access only to its own local and private dataset. This approach is attractive in various domains of knowledge because it allows different institutions to collaborate without sharing their confidential data. As the amount of data required for training has grown significantly in recent years, most institutions cannot afford physical data centers to store and manipulate all their data. A viable option is to utilize cloud storage services offered by providers with different data privacy and availability guarantees. The user is responsible for choosing the regions where their data is stored and controlling access to it. Additionally, cloud providers offer various services to execute an application. They provide users with the ability to create Virtual Machines (VMs) with different configurations, where users have full control over them. This type of service is known as Infrastructure-as-a-Service (IaaS). Thus, a multi-cloud environment is conducive to the collaboration of different institutions in creating a Machine Learning model through Federated Learning. In this thesis, we propose Multi-FedLS, a robust framework designed to execute FL applications in a multi-cloud environment. The framework considers the current location of each client's datasets, communication delay, and cost of utilization in the clouds, focusing on cost and runtime reduction. Moreover, Multi-FedLS utilizes cheaper instances whenever possible to reduce costs, even though they may be revoked at any time by the cloud provider. Thus, to ensure the successful execution of FL applications, the framework employs fault-tolerance techniques such as checkpoints and work migration to resume training on another VM after a revocation. Multi-FedLS comprises four modules: Pre-Scheduling, Initial Mapping, Fault Tolerance, and Dynamic Scheduler.
The obtained results demonstrate the feasibility of executing applications in multi-cloud environments using low-cost VMs, employing a mathematical formulation, fault-tolerance techniques, and simple heuristics for selecting new VMs. The framework achieved a cost reduction of 56.92% compared to running the application on more expensive VMs, with only a 5.44% increase in runtime on commercial cloud providers.
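The checkpoint-and-resume mechanism for revocable (spot) instances can be sketched minimally as follows; every name here is invented for illustration, and the real framework's four modules are far richer than this single loop:

```python
# Toy illustration of checkpoint-based fault tolerance for FL rounds on
# preemptible VMs. The function names and the revocation model are
# assumptions made for this sketch, not Multi-FedLS's actual API.
import copy

def run_fl(rounds, checkpoint, train_round, revoked):
    """Run `rounds` FL rounds, restarting from the last checkpoint on revocation."""
    state = copy.deepcopy(checkpoint["model"])
    r = checkpoint["round"]
    while r < rounds:
        if revoked(r):                                   # provider reclaimed the VM
            state = copy.deepcopy(checkpoint["model"])   # resume on a new VM
            r = checkpoint["round"]
            continue
        state = train_round(state, r)                    # one aggregation round
        r += 1
        checkpoint["model"] = copy.deepcopy(state)       # persist progress
        checkpoint["round"] = r
    return state

# Simulate: the "model" is just a round counter; round 3 is revoked once.
failures = {3}
def revoked(r):
    if r in failures:
        failures.discard(r)   # each revocation happens only once
        return True
    return False

ckpt = {"model": 0, "round": 0}
final = run_fl(5, ckpt, train_round=lambda m, r: m + 1, revoked=revoked)
print(final)  # all 5 rounds complete despite the mid-training revocation
```

The point is that a revocation costs only the rounds since the last checkpoint, which is what makes cheap revocable VMs usable for long federated trainings.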
10

Paniah, Crédo. "Approche multi-agents pour la gestion des fermes éoliennes offshore". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112067/document.

Abstract:
La raréfaction des sources de production conventionnelles et leurs émissions nocives ont favorisé l’essor notable de la production renouvelable, plus durable et mieux répartie géographiquement. Toutefois, son intégration au système électrique est problématique. En effet, la production renouvelable est peu prédictible et issue de sources majoritairement incontrôlables, ce qui compromet la stabilité du réseau, la viabilité économique des producteurs et rend nécessaire la définition de solutions adaptées pour leur participation au marché de l’électricité. Dans ce contexte, le projet scientifique Winpower propose de relier par un réseau à courant continu les ressources de plusieurs acteurs possédant respectivement des fermes éoliennes offshore (acteurs EnR) et des centrales de stockage de masse (acteurs CSM). Cette configuration impose aux acteurs d’assurer conjointement la gestion du réseau électrique.Nous supposons que les acteurs participent au marché comme une entité unique : cette hypothèse permet aux acteurs EnR de tirer profit de la flexibilité des ressources contrôlables pour minimiser le risque de pénalités sur le marché de l’électricité, aux acteurs CSM de valoriser leurs ressources auprès des acteurs EnR et/ou auprès du marché et à la coalition de faciliter la gestion des déséquilibres sur le réseau électrique, en agrégeant les ressources disponibles. Dans ce cadre, notre travail s’attaque à la problématique de la participation au marché EPEX SPOT Day-Ahead de la coalition comme une centrale électrique virtuelle ou CVPP (Cooperative Virtual Power Plant). 
Nous proposons une architecture de pilotage multi-acteurs basée sur les systèmes multi-agents (SMA) : elle permet d'allier les objectifs et contraintes locaux des acteurs et les objectifs globaux de la coalition. Nous formalisons alors l'agrégation et la planification de l'utilisation des ressources comme un processus décisionnel de Markov (MDP), un modèle formel adapté à la décision séquentielle en environnement incertain, pour déterminer la séquence d'actions sur les ressources contrôlables qui maximise l'espérance des revenus effectifs de la coalition. Toutefois, au moment de la planification des ressources de la coalition, l'état de la production renouvelable n'est pas connu et le MDP n'est pas résoluble en l'état : on parle de MDP partiellement observable (POMDP). Nous décomposons le POMDP en un MDP classique et un état d'information (la distribution de probabilités des erreurs de prévision de la production renouvelable) ; en extrayant cet état d'information de l'expression du POMDP, nous obtenons un MDP à état d'information (IS-MDP), pour la résolution duquel nous proposons une adaptation d'un algorithme de résolution classique des MDP, le Backwards Induction. Nous décrivons alors un cadre de simulation commun pour comparer dans les mêmes conditions nos propositions et quelques autres stratégies de participation au marché, dont l'état de l'art dans la gestion des ressources renouvelables et contrôlables. Les résultats obtenus confortent l'hypothèse de la minimisation du risque associé à la production renouvelable grâce à l'agrégation des ressources et confirment l'intérêt de la coopération des acteurs EnR et CSM dans leur participation au marché de l'électricité. Enfin, l'architecture proposée offre la possibilité de distribuer le processus de décision optimale entre les différents acteurs de la coalition : nous proposons quelques pistes de solution dans cette direction.
Renewable Energy Sources (RES) have grown remarkably in the last few decades. Compared to conventional energy sources, renewable generation is more widely available, sustainable, and environment-friendly: for example, there are no greenhouse-gas emissions during energy generation. However, while electrical network stability requires that production and consumption be equal, and the electricity market constrains producers to contract future production a priori and respect their supply commitments or pay substantial penalties, RES are mainly uncontrollable and their behavior is difficult to forecast accurately. De facto, they jeopardize the stability of the physical network and renewable producers' competitiveness in the market. The Winpower project aims to design realistic, robust, and stable control strategies for offshore networks connecting renewable sources and controllable storage devices owned by different autonomous actors to the main electricity system. Each actor must embed its own local physical device control strategy, but a global network management mechanism, jointly decided between connected actors, should be designed as well. We assume market participation of the actors as a single entity (the coalition of actors connected by the Winpower network), allowing the coalition to facilitate network management through resource aggregation, renewable producers to take advantage of controllable sources' flexibility to handle market penalty risks, and storage device owners to leverage their resources on the market and/or in the management of renewable imbalances. This work tackles the market participation of the coalition as a Cooperative Virtual Power Plant.
For this purpose, we describe a multi-agent architecture through the definition of intelligent agents managing and operating the actors' resources and the description of these agents' interactions; it allows the alliance of local constraints and objectives with the global network management objective. We formalize the aggregation and planning of resource utilization as a Markov Decision Process (MDP), a formal model suited to sequential decision making in uncertain environments. Its aim is to define the sequence of actions that maximizes the expected actual incomes of market participation, while decisions over controllable resources have uncertain outcomes. However, the market participation decision is made prior to actual operation, when renewable generation is still uncertain; the Markov Decision Process is thus intractable, as its state in each decision time-slot is not fully observable. To solve such a Partially Observable MDP (POMDP), we decompose it into a classical MDP and an information state (a probability distribution over renewable generation errors). The Information State MDP (IS-MDP) obtained is solved with an adaptation of Backwards Induction, a classical MDP resolution algorithm. We then describe a common simulation framework to compare our proposed methodology to other strategies, including the state of the art in renewable generation market participation. Simulation results validate the resource aggregation strategy and confirm that cooperation is beneficial to renewable producers and storage device owners when they participate in the electricity market. The proposed architecture is designed to allow the distribution of decision making between the coalition's actors through the implementation of a suitable coordination mechanism; we propose some distribution methodologies to this end.
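Backwards Induction, mentioned above, is the textbook dynamic-programming solver for finite-horizon MDPs; the sketch below shows the recursion on a tiny made-up two-state "charge/discharge" storage MDP (an invented example, not the thesis's market model):

```python
# Finite-horizon backwards induction: V_T(s) = 0, then stepping backwards,
#   V_t(s) = max_a sum_{s'} P(s'|s,a) * ( R(s,a,s') + V_{t+1}(s') ).
# The two-state storage MDP below is a made-up example for illustration.

def backwards_induction(states, actions, P, R, horizon):
    """Return value functions V[t][s] and a greedy policy pi[t][s]."""
    V = [{s: 0.0 for s in states} for _ in range(horizon + 1)]  # V[horizon] = 0
    pi = [dict() for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s in states:
            # Q-value of each action: expected reward plus next-stage value
            q = {a: sum(p * (R[(s, a, s2)] + V[t + 1][s2])
                        for s2, p in P[(s, a)].items())
                 for a in actions}
            pi[t][s] = max(q, key=q.get)
            V[t][s] = q[pi[t][s]]
    return V, pi

states, actions = ["low", "high"], ["charge", "discharge"]
# Transition probabilities P[(state, action)] -> {next_state: prob}
P = {("low", "charge"): {"high": 0.9, "low": 0.1},
     ("low", "discharge"): {"low": 1.0},
     ("high", "charge"): {"high": 1.0},
     ("high", "discharge"): {"low": 1.0}}
# Rewards R[(state, action, next_state)]: charging costs 1, selling earns 3
R = {("low", "charge", "high"): -1.0, ("low", "charge", "low"): -1.0,
     ("low", "discharge", "low"): 0.0,
     ("high", "charge", "high"): 0.0,
     ("high", "discharge", "low"): 3.0}

V, pi = backwards_induction(states, actions, P, R, horizon=2)
print(pi[0]["low"], V[0]["low"])
```

At the horizon the value is zero; stepping backwards, each state's value is the best expected immediate reward plus the value of the following stage, which here tells the "low" storage state to charge first so it can sell later.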

Books on the topic "Stockage virtuel"

1

Georgeot, Cédric. Bonnes Pratiques, Planification et Dimensionnement des Infrastructures de Stockage et de Serveur en Environnement Virtuel. Books on Demand GmbH, 2011.

2

Schulz, Greg. Cloud and Virtual Data Storage Networking. Taylor & Francis Group, 2011.

3

Schulz, Greg. Cloud and Virtual Data Storage Networking. Auerbach Publishers, Incorporated, 2011.

4

Cloud and virtual data storage networking. Boca Raton, FL: CRC Press, 2012.

5

Software-Defined Data Infrastructure Essentials: Cloud, Converged, and Virtual Fundamental Server Storage I/o Tradecraft. Auerbach Publishers, Incorporated, 2017.

6

Software-Defined Data Infrastructure Essentials: Cloud, Converged, and Virtual Fundamental Server Storage I/o Tradecraft. Taylor & Francis Group, 2017.


Book chapters on the topic "Stockage virtuel"

1

Harris, Joseph. "Philanthropic Misanthropy". In Misanthropy in the Age of Reason, 174–203. Oxford University PressOxford, 2022. http://dx.doi.org/10.1093/oso/9780192867575.003.0007.

Abstract:
This chapter explores certain Enlightenment attempts to revalorize misanthropy as something positive—perhaps not a virtue in itself, but rather a distortion or misrecognition of an essentially virtuous impulse. The moral ambivalence of misanthropy, and indeed of the supposed slur 'misanthrope', is borne out by two eighteenth-century attempts to rehabilitate the reputations of two key figures who are often dubbed misanthropic. Attempting to exculpate Jonathan Swift from accusations of misanthropy, Percival Stockdale strategically rewrites misanthropy as a stance of benign indulgence towards human weakness. Conversely, Jean-Jacques Rousseau stoutly rejects the label for himself, presenting his misanthropic reputation as a slanderous misrecognition of his true philanthropic nature. Finally, this chapter explores how the emergent idea of the misanthrope as a disillusioned idealist is dramatized in Schiller's unfinished play Der Menschenfeind.