Academic literature on the topic 'Data restore'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data restore.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Data restore"

1

Lehmann, Christine. "Psychiatrist Urges Congress to Restore Consent Mandate For Genetic Data." Psychiatric News 37, no. 20 (October 18, 2002): 9–10. http://dx.doi.org/10.1176/pn.37.20.0009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xia, Ruofan, Xiaoyan Yin, Javier Alonso Lopez, Fumio Machida, and Kishor S. Trivedi. "Performance and Availability Modeling of IT Systems with Data Backup and Restore." IEEE Transactions on Dependable and Secure Computing 11, no. 4 (July 2014): 375–89. http://dx.doi.org/10.1109/tdsc.2013.50.

3

Liu, Lianshan, Xiaoli Wang, Lingzhuang Meng, Gang Tian, and Ting Wang. "Reversible Data Hiding in a Chaotic Encryption Domain Based on Odevity Verification." International Journal of Digital Crime and Forensics 13, no. 6 (November 2021): 1–14. http://dx.doi.org/10.4018/ijdcf.20211101.oa9.

Abstract:
On the premise of guaranteeing the visual effect, in order to improve the security of the image containing digital watermarking and restore the carrier image without distortion, reversible data hiding in chaotic encryption domain based on odevity verification was proposed. The original image was scrambled and encrypted by Henon mapping, and the redundancy between the pixels of the encrypted image was lost. Then, the embedding capacity of watermarking can be improved by using odevity verification, and the embedding location of watermarking can be randomly selected by using logistic mapping. When extracting the watermarking, the embedded data was judged according to the odevity of the pixel value of the embedding position of the watermarking, and the carrier image was restored nondestructively by odevity check image. The experimental results show that the peak signal-to-noise ratio (PSNR) of the original image is above 53 decibels after the image is decrypted and restored after embedding the watermarking in the encrypted domain, and the invisibility is good.
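The parity ("odevity") idea in the abstract above can be sketched in a few lines: embed one bit per selected pixel by forcing the pixel's parity, and keep a parity map of the originals so the carrier restores exactly. This is an illustrative simplification, not the paper's chaotic-encryption scheme; function names are hypothetical, and pixel values are assumed to stay below 255 so embedding can always add 1.

```python
def embed_parity(pixels, bits, positions):
    # Force each selected pixel's parity (even = bit 0, odd = bit 1).
    # The original parity map is the side information needed to restore
    # the carrier losslessly (values assumed < 255 so +1 never overflows).
    stego = list(pixels)
    parity_map = [pixels[i] % 2 for i in positions]
    for pos, bit in zip(positions, bits):
        if stego[pos] % 2 != bit:
            stego[pos] += 1
    return stego, parity_map

def extract_and_restore(stego, positions, parity_map):
    # Read each bit back from the parity, then undo any +1 made at embed time.
    bits = [stego[i] % 2 for i in positions]
    restored = list(stego)
    for pos, parity in zip(positions, parity_map):
        if restored[pos] % 2 != parity:
            restored[pos] -= 1
    return bits, restored
```

In the paper the verification data itself is hidden reversibly and the embedding positions come from a logistic map; here they are passed in explicitly to keep the parity mechanics visible.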
4

Arrieta-Ibarra, Imanol, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, and E. Glen Weyl. "Should We Treat Data as Labor? Moving Beyond “Free”." AEA Papers and Proceedings 108 (May 1, 2018): 38–42. http://dx.doi.org/10.1257/pandp.20181003.

Abstract:
In the digital economy, user data is typically treated as capital created by corporations observing willing individuals. This neglects users' roles in creating data, reducing incentives for users, distributing the gains from the data economy unequally, and stoking fears of automation. Instead, treating data (at least partially) as labor could help resolve these issues and restore a functioning market for user contributions, but may run against the near-term interests of dominant data monopsonists who have benefited from data being treated as “free.” Countervailing power, in the form of competition, a data labor movement, and/or thoughtful regulation could help restore balance.
5

Rosano, Andi, and Djadjat Sudaradjat. "Manajemen Backup Data untuk Penyelamatan Data Nasabah pada Sistem Informasi Perbankan (Studi Kasus : PT Bank XYZ)." REMIK (Riset dan E-Jurnal Manajemen Informatika Komputer) 4, no. 2 (April 1, 2020): 1. http://dx.doi.org/10.33395/remik.v4i2.10507.

Abstract:
The online information system data at PT Bank XYZ is a database stored on the bank's web/email server that users can access online. Several internal factors have caused server failures, leaving the system inoperable. One way to protect data from loss or damage is through data backup management carried out on a regular schedule. The solution to this problem is to combine the full backup and incremental backup methods in data backup management, as this combination is very easy to use and economical. The step following data backup is the restore process, the recovery of data, which is critically important when data is damaged. This paper discusses data backup methods as a means of protecting online data. The choice of backup method depends heavily on system reliability and performance, so that data recovery can be carried out accurately and safely. Keywords: backup, database, incremental, online, recovery, restore, server
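The combined strategy the abstract describes (periodic full backups plus incrementals of changed files, replayed in order on restore) can be sketched as follows. The dict-of-files model is a stand-in for a real filesystem, and file deletions are ignored for brevity:

```python
def full_backup(state):
    # Snapshot every file.
    return dict(state)

def incremental_backup(state, last_snapshot):
    # Keep only files that are new or changed since the last snapshot.
    return {name: data for name, data in state.items()
            if last_snapshot.get(name) != data}

def restore(full, incrementals):
    # Start from the full backup, then replay incrementals chronologically.
    state = dict(full)
    for inc in incrementals:
        state.update(inc)
    return state
```

The trade-off the paper weighs is visible here: incrementals are small and cheap to take, but a restore must replay every incremental since the last full backup, so restore time grows with the backup chain.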
6

Salviati, Leonardo, Evelyn Hernandez-Rosa, Winsome F. Walker, Sabrina Sacconi, Salvatore DiMauro, Eric A. Schon, and Mercy M. Davidson. "Copper supplementation restores cytochrome c oxidase activity in cultured cells from patients with SCO2 mutations." Biochemical Journal 363, no. 2 (April 8, 2002): 321–27. http://dx.doi.org/10.1042/bj3630321.

Abstract:
Human SCO2 is a nuclear-encoded Cu-binding protein, presumed to be responsible for the insertion of Cu into the mitochondrial cytochrome c oxidase (COX) holoenzyme. Mutations in SCO2 are associated with cardioencephalomyopathy and COX deficiency. Studies in yeast and bacteria have shown that Cu supplementation can restore COX activity in cells harbouring mutations in genes involving Cu transport. Therefore we investigated whether Cu supplementation could restore COX activity in cultured cells from patients with SCO2 mutations. Our data demonstrate that the COX deficiency observed in fibroblasts, myoblasts and myotubes from patients with SCO2 mutations can be restored to almost normal levels by the addition of CuCl2 to the growth medium.
7

Adhiwibowo, Whisnumurti, M. Sani Suprayogi, and Atmoko Nugroho. "PENGAMANAN DATA PADA APLIKASI SIJALU UNIVERSITAS SEMARANG DENGAN METODE REMOTE BACKUP & RESTORE." Jurnal Pengembangan Rekayasa dan Teknologi 14, no. 1 (January 16, 2019): 24. http://dx.doi.org/10.26623/jprt.v14i1.1217.

Abstract:
Security of web applications should be addressed in a variety of ways, one of which is data security. For websites that already have many users, it is proper to consider a backup and restore strategy to prevent data loss. Besides scheduled backups, backup and restore should also be possible at any time, so proper planning and the right tools are needed to make implementation easier. The Journal of Information Systems (SIJALU) of the University of Semarang contains data on scientific publications from researchers at the University of Semarang and other campuses. SIJALU does not yet have a strategy for preventing data loss; this study designs and delivers data security for SIJALU using remote backup and restore, so that the data stored in SIJALU can be preserved.
8

Chen, Songmao, Abderrahim Halimi, Ximing Ren, Aongus McCarthy, Xiuqin Su, Stephen McLaughlin, and Gerald S. Buller. "Learning Non-Local Spatial Correlations To Restore Sparse 3D Single-Photon Data." IEEE Transactions on Image Processing 29 (2020): 3119–31. http://dx.doi.org/10.1109/tip.2019.2957918.

9

Skedsmo, Kristian. "Multiple Other-Initiations of Repair in Norwegian Sign Language." Open Linguistics 6, no. 1 (December 13, 2020): 532–66. http://dx.doi.org/10.1515/opli-2020-0030.

Abstract:
Not all other-initiations of repair (OIR) are instantly followed by a functional self-repair that restores the progress of the conversation. Despite previous observations of OIRs generally leading to restored progress after one single-repair initiation, data from a multiperson conversational corpus of Norwegian Sign Language (NTS) show that 68% of 112 individual repair initiations occur in multiple OIR sequences. This article identifies three different trajectories of multiple OIR sequences in the NTS data, which are as follows: (1) a trouble source being targeted by more than one repair initiation, (2) the self-repair becomes a new trouble source, or (3) the repair initiation becomes a new trouble source. The high frequency of multiple OIR sequences provides an opportunity to quantitatively investigate how the various formats of repair initiation are distributed in single- and multiple-OIR sequences, how they occur as first or subsequent, and whether they restore the progress of the conversation or are followed by another repair initiation.
10

Price, Tristan, Nicola Brennan, Geoff Wong, Lyndsey Withers, Jennifer Cleland, Amanda Wanner, Thomas Gale, Linda Prescott-Clements, Julian Archer, and Marie Bryce. "Remediation programmes for practising doctors to restore patient safety: the RESTORE realist review." Health Services and Delivery Research 9, no. 11 (May 2021): 1–116. http://dx.doi.org/10.3310/hsdr09110.

Abstract:
Background: An underperforming doctor puts patient safety at risk. Remediation is an intervention intended to address underperformance and return a doctor to safe practice. Used in health-care systems all over the world, it has clear implications for both patient safety and doctor retention in the workforce. However, there is limited evidence underpinning remediation programmes, particularly a lack of knowledge as to why and how a remedial intervention may work to change a doctor’s practice.
Objectives: To (1) conduct a realist review of the literature to ascertain why, how, in what contexts, for whom and to what extent remediation programmes for practising doctors work to restore patient safety; and (2) provide recommendations on tailoring, implementation and design strategies to improve remediation interventions for doctors.
Design: A realist review of the literature underpinned by the Realist And MEta-narrative Evidence Syntheses: Evolving Standards quality and reporting standards.
Data sources: Searches of bibliographic databases were conducted in June 2018 using the following databases: EMBASE, MEDLINE, Cumulative Index to Nursing and Allied Health Literature, PsycINFO, Education Resources Information Center, Database of Abstracts of Reviews of Effects, Applied Social Sciences Index and Abstracts, and Health Management Information Consortium. Grey literature searches were conducted in June 2019 using the following: Google Scholar (Google Inc., Mountain View, CA, USA), OpenGrey, NHS England, North Grey Literature Collection, National Institute for Health and Care Excellence Evidence, Electronic Theses Online Service, Health Systems Evidence and Turning Research into Practice. Further relevant studies were identified via backward citation searching, searching the libraries of the core research team and through a stakeholder group.
Review methods: Realist review is a theory-orientated and explanatory approach to the synthesis of evidence that seeks to develop programme theories about how an intervention produces its effects. We developed a programme theory of remediation by convening a stakeholder group and undertaking a systematic search of the literature. We included all studies in the English language on the remediation of practising doctors, all study designs, all health-care settings and all outcome measures. We extracted relevant sections of text relating to the programme theory. Extracted data were then synthesised using a realist logic of analysis to identify context–mechanism–outcome configurations.
Results: A total of 141 records were included. Of the 141 studies included in the review, 64% related to North America and 14% were from the UK. The majority of studies (72%) were published between 2008 and 2018. A total of 33% of articles were commentaries, 30% were research papers, 25% were case studies and 12% were other types of articles. Among the research papers, 64% were quantitative, 19% were literature reviews, 14% were qualitative and 3% were mixed methods. A total of 40% of the articles were about junior doctors/residents, 31% were about practising physicians, 17% were about a mixture of both (with some including medical students) and 12% were not applicable. A total of 40% of studies focused on remediating all areas of clinical practice, including medical knowledge, clinical skills and professionalism. A total of 27% of studies focused on professionalism only, 19% focused on knowledge and/or clinical skills and 14% did not specify. A total of 32% of studies described a remediation intervention, 16% outlined strategies for designing remediation programmes, 11% outlined remediation models and 41% were not applicable. Twenty-nine context–mechanism–outcome configurations were identified. Remediation programmes work when they develop doctors’ insight and motivation, and reinforce behaviour change. Strategies such as providing safe spaces, using advocacy to develop trust in the remediation process and carefully framing feedback create contexts in which psychological safety and professional dissonance lead to the development of insight. Involving the remediating doctor in remediation planning can provide a perceived sense of control in the process and this, alongside correcting causal attribution, goal-setting, destigmatising remediation and clarity of consequences, helps motivate doctors to change. Sustained change may be facilitated by practising new behaviours and skills and through guided reflection.
Limitations: Limitations were the low quality of included literature and the limited number of UK-based studies.
Future work: Future work should use the recommendations to optimise the delivery of existing remediation programmes for doctors in the NHS.
Study registration: This study is registered as PROSPERO CRD42018088779.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research programme and will be published in full in Health Services and Delivery Research; Vol. 9, No. 11. See the NIHR Journals Library website for further project information.

Dissertations / Theses on the topic "Data restore"

1

Kavánková, Iva. "Zálohování dat a datová úložiště." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2021. http://www.nusl.cz/ntk/nusl-444687.

Abstract:
This diploma thesis deals with data backup and subsequent data archiving in the real environment of a specific IT company engaged in software development. Theoretical knowledge concerning data backup and data storage is described, along with the current state of data backup and the problems with the current solution. Suggestions for improving the current situation, including an economic evaluation, are given in order to achieve efficient and, most importantly, secure data backup.
2

Stanković, Saša. "Monitor and manage system and application configuration files at kernel level in GNU/Linux." Thesis, Högskolan Väst, Avdelningen för data-, elektro- och lantmäteriteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-8381.

Abstract:
The aim of this study is to investigate whether there is a way a computer can accurately and automatically react to altered configuration file(s) with a minimum of resource utilization, and by what means the developer(s) of an application can perform a check of the altered configuration file for their application. In a typical GNU/Linux installation the configuration files are literally counted by the thousands; monitoring these files is a task that for the most part exceeds any system administrator's abilities. Each file has its own syntax that needs to be known by the administrator. Either one of these two tasks could give any system administrator nightmares concerning the difficulty level, especially when both tasks are combined. The system administrator could attempt to automate the monitoring tasks together with the syntax checking. There are some tools in the repositories of each distribution for monitoring files, but none that lets one monitor and take (predefined or user-defined) actions based on what the file monitor reports, the type of file and its contents. A complete tool is not presented in this study, merely a proof of concept showing that monitoring and taking actions via plugins are quite possible on a version 2.6.13 (or newer) GNU/Linux kernel with relatively small computer resources. During this study some questions arose that are worth taking into consideration when a complete monitoring tool is to be developed, among others: add a trusted user, add both textual and graphical user interfaces, and monitor more than one file path. This study was performed on the GNU/Linux CentOS 6 distribution; all programming was done in BASH with an effort to minimize used/installed programs.
3

Dagfalk, Johanna, and Ellen Kyhle. "Listening in on Productivity : Applying the Four Key Metrics to measure productivity in a software development company." Thesis, Uppsala universitet, Avdelningen för datalogi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440147.

Abstract:
Software development is an area in which companies not only need to keep up with the latest technology, but they additionally need to continuously increase their productivity to stay competitive in the industry. One company currently facing these challenges is Storytel - one of the strongest players on the Swedish audiobook market - with about a fourth of all employees involved with software development, and a rapidly growing workforce. With the purpose of understanding how the Storytel Tech Department is performing, this thesis maps Storytel’s productivity defined through the Four Key Metrics - Deployment Frequency, Delivery Lead Time, Mean Time To Restore and Change Fail Rate. A classification is made into which performance category (Low, Medium, High, Elite) the Storytel Tech Department belongs to through a deep-dive into the raw system data existing at Storytel, mainly focusing on the case management system Jira. A survey of the Tech Department was conducted, to give insights into the connection between human and technical factors influencing productivity (categorized into Culture, Environment, and Process) and estimated productivity. Along with these data collections, interviews with Storytel employees were performed to gather further knowledge about the Tech Department, and to understand potential bottlenecks and obstacles. All Four Key Metrics could be determined based on raw system data, except the metric Mean Time To Restore which was complemented by survey estimates. The generalized findings of the Four Key Metrics conclude that Storytel can be minimally classified as a ‘medium’ performer. The factors, validated through factor analysis, found to have an impact on the Four Key Metrics were Generative Culture, Efficiency (Automation and Shared Responsibility) and Number of Projects. Lastly, the major bottlenecks found were related to Architecture, Automation, Time Fragmentation and Communication. 
The thesis contributes with interesting findings from an expanding, middle-sized, healthy company in the audiobook streaming industry - but the results can be beneficial for other software development companies to learn from as well. Performing a similar study with a greater sample size, and additionally enabling comparisons between teams, is suggested for future research.
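The Four Key Metrics the thesis maps (Deployment Frequency, Delivery Lead Time, Change Fail Rate, Mean Time To Restore) reduce to simple arithmetic over a deployment and incident log. A minimal sketch, with a hypothetical record layout chosen for illustration rather than taken from Storytel's Jira data:

```python
from datetime import datetime, timedelta

def four_key_metrics(deployments, incidents):
    """deployments: list of (deployed_at, lead_time_hours, caused_failure)
       incidents:   list of (failed_at, restored_at)"""
    times = [d[0] for d in deployments]
    span_days = max((max(times) - min(times)).days, 1)
    deployment_frequency = len(deployments) / span_days            # per day
    delivery_lead_time = sum(d[1] for d in deployments) / len(deployments)
    change_fail_rate = sum(d[2] for d in deployments) / len(deployments)
    mean_time_to_restore = sum(
        (restored - failed).total_seconds() / 3600                 # hours
        for failed, restored in incidents
    ) / len(incidents)
    return (deployment_frequency, delivery_lead_time,
            change_fail_rate, mean_time_to_restore)
```

As the thesis notes, the fourth metric is the hard one in practice: restore times are rarely logged cleanly in a case-management system, which is why the authors complemented it with survey estimates.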
4

Yu, Wei, and 余韡. "Reverse Top-k search using random walk with restart." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/197515.

Abstract:
With the increasing popularity of social networking applications, large volumes of graph data are becoming available. Large graphs are also derived by structure extraction from relational, text, or scientific data (e.g., relational tuple networks, citation graphs, ontology networks, protein-protein interaction graphs). Node-to-node proximity is the key building block for many graph based applications that search or analyze the data. Among various proximity measures, random walk with restart (RWR) is widely adopted because of its ability to consider the global structure of the whole network. Although RWR-based similarity search has been well studied before, there is no prior work on reverse top-k proximity search in graphs based on RWR. We discuss the applicability of this query and show that the direct application of existing methods on RWR-based similarity search to solve reverse top-k queries has very high computational and storage demands. To address this issue, we propose an indexing technique, paired with an on-line reverse top-k search algorithm. In the indexing step, we compute from the graph G a graph index, which is based on a K × |V| matrix, containing in each column v the K largest approximate proximity values from v to any other node in G. K is application-dependent and represents the highest value of k in a practical reverse top-k query. At each column v of the index, the approximate values are lower bounds of the K largest proximity values from v to all other nodes. Given the graph index and a reverse top-k query q (k ≤ K), we prove that the exact proximities from any node v to query q can be efficiently computed by applying the power method. By comparing these with the corresponding lower bounds taken from the k-th row of the graph index, we are able to determine which nodes are certainly not in the reverse top-k result of q.
For some of the remaining nodes, we may also be able to determine that they are certainly in the reverse top-k result of q, based on derived upper bounds for the k-th largest proximity value from them. Finally, for any candidate that remains, we progressively refine its approximate proximities, until based on its lower or upper bound it can be determined not to be or to be in the result. The proximities refined during a reverse top-k are used to update the graph index, making its values progressively more accurate for future queries. Our experimental evaluation shows that our technique is efficient and has manageable storage requirements even when applied on very large graphs. We also show the effectiveness of the reverse top-k search in the scenarios of spam detection and determining the popularity of authors.
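The power-method computation of RWR proximities that the abstract relies on can be sketched as follows. This is a plain dense-matrix version for illustration; the thesis's graph index and lower/upper-bound pruning machinery are omitted, and the graph is assumed to have no zero-degree nodes:

```python
import numpy as np

def rwr_proximity(adj, source, restart=0.15, tol=1e-10, max_iter=1000):
    # Random walk with restart: iterate p = (1 - c) * W p + c * e until
    # convergence (the power method), where e restarts the walk at `source`
    # and W is the column-stochastic transition matrix of the graph.
    n = adj.shape[0]
    W = adj / adj.sum(axis=0)          # assumes every node has degree > 0
    e = np.zeros(n)
    e[source] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * (W @ p) + restart * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next
```

The resulting vector gives the proximity of every node to `source`; a reverse top-k query asks for which sources the query node lands among the k largest entries, which is what the index in the thesis prunes efficiently.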
5

Thornton, Victor. "DETERMINING TIDAL CHARACTERISTICS IN A RESTORED TIDAL WETLAND USING UNMANNED AERIAL VEHICLES AND DERIVED DATA." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5369.

Abstract:
Unmanned aerial vehicle (UAV) technology was used to determine tidal extent in Kimages Creek, a restored tidal wetland located in Charles City County, Virginia. A Sensefly eBee Real-Time Kinematic UAV equipped with the Sensor Optimized for Drone Applications (SODA) camera (20-megapixel RGB sensor) was flown during a single high and low tide event in Summer 2017. Collectively, over 1,300 images were captured and processed using Pix4D. Horizontal and vertical accuracy of models created using ground control points (GCP) ranged from 0.176 m to 0.363 m. The high tide elevation model was subtracted from the low tide using the ArcMap 10.5.1 raster calculator. The positive difference was displayed to show the portion of high tide that was above the low tide. These results show that UAVs offer numerous spatial and temporal advantages, but further research is needed to determine the best method of GCP placement in areas of similar forest structure.
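The raster-calculator step described in the abstract (high-tide surface minus low-tide surface, displaying only the positive part) reduces to an array subtraction. A minimal sketch; the nodata convention and function name are assumptions, not taken from the thesis workflow:

```python
import numpy as np

def tidal_inundation(high_dem, low_dem, nodata=-9999.0):
    # Difference the two elevation surfaces and keep only cells where the
    # high-tide surface sits above the low-tide one (the inundated extent).
    high = np.where(high_dem == nodata, np.nan, high_dem)
    low = np.where(low_dem == nodata, np.nan, low_dem)
    diff = high - low
    return np.where(diff > 0, diff, np.nan)   # positive differences only
```

In a GIS workflow the same operation runs over the full photogrammetric DEMs (here, Pix4D outputs loaded as arrays), and the NaN cells are rendered transparent when the result is displayed.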
6

Sarabia, Vicente Jesús. "Estudio de la quimioluminiscencia medida por luminometría y su aplicación en la estimación de la data de los restos óseos." Doctoral thesis, Universidad de Murcia, 2016. http://hdl.handle.net/10803/362935.

Abstract:
Objective: to demonstrate that chemiluminescence diminishes proportionately as the post-mortem interval of bone remains increases when they come into contact with luminol, enabling this technique to be used to calculate the post-mortem interval of these remains for medico-legal purposes. The basis is that bone remains show traces of haematin, which is present as an organic substance in the bone but whose concentration diminishes steadily after death. Given the high sensitivity of luminol in the presence of this molecule, it is possible to measure the intensity of the reaction in relative light units (RLU) and so establish the date of death. Methodology: we used 102 samples of cortical bone dust belonging to long bones, in this case femurs, with a known date of death. The luminol used is that proposed by Weber, which comprises an oxidant (hydrogen peroxide), a reductant (luminol itself) and sodium hydroxide, which alkalises the medium. The reagent (0.1 ml) is reacted with 30 mg of dust to produce, in the presence of a catalyst such as haematin, chemiluminescence, which is measured with a luminometer (Fluostar Galaxy). This provides the results for each studied sample on a spreadsheet, which shows the chemiluminescence intensity every 5 seconds for 5 minutes. The obtained values are transferred to a statistical program, in this case SPSS 20.0, for analysis. Results: based on a bivariate correlation comparing the chemiluminescence values as the independent variable with the known interval as the dependent variable, the Pearson coefficient and the significance level allow us to affirm in all cases that the intensity of chemiluminescence and the post-mortem interval are inversely proportional. A regression study was made using the same variables to establish the most predictive variable, which was the chemiluminescence at 15 seconds.
A curvilinear estimation indicated that the exponential model was the best-fitting model, while a stepwise discriminant study showed that the best results were obtained with the chemiluminescence variable at 20 seconds. The ROC curve results confirm the usefulness of this technique for determining the post-mortem interval. The validity of this work is limited because the results were not combined with others already known in this field, and because of the need to quantify the Fe present in the sample beforehand. However, it opens the way to studying in greater detail the kinetic analysis of the resulting curves for each sample. Conclusions: we found a negative correlation between chemiluminescence measured by luminometry and the age of bone remains, the chemiluminescence decreasing as the age increases. The mathematical model that best fits the relation between chemiluminescence and post-mortem interval is exponential. The chemiluminescence reading at 15 seconds (QL15s) is the best predictor of the interval, while the reading at 20 seconds (QL20s) has the best discriminant capacity (classification capacity) of all the times analysed. The results of the study lead us to propose luminometry on bone remains as a method for calculating their age, since it is a simple, low-cost technique with an acceptable classification capacity.
7

Silva, Andre Gustavo Pereira da. "Uma abordagem dirigida por modelos para desenvolvimento de middlewares auto-adaptativos para transmiss?o de fluxo de dados baseado em restri??es de QoS." Universidade Federal do Rio Grande do Norte, 2010. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18011.

Abstract:
The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from using these components, we highlight distributed systems, where it is necessary to allow communication between software components located on different physical machines. An important issue related to the communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middlewares that provide to an application the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters, or by replacing it with another mechanism when the specified quality-of-service restrictions are not being guaranteed. To this end, the communication state is monitored (applying techniques such as a feedback control loop) and related performance metrics are analyzed. The Model Driven Development paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes; to this end, the metamodel associated with the process of configuring a communication was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware into Java code, and from the configuration model into XML.
A utilização da tecnologia de middleware em diversos tipos de sistemas, com a finalidade de abstrair detalhes de baixo nível relacionados com a distribuição da lógica da aplicação, é cada vez mais frequente. Dentre diversos sistemas que podem ser beneficiados com a utilização desses componentes, podemos destacar os sistemas distribuídos, onde é necessário viabilizar a comunicação entre componentes de software localizados em diferentes máquinas físicas. Uma importante questão relacionada à comunicação entre componentes distribuídos é o fornecimento de mecanismos para gerenciamento da qualidade de serviço. Este trabalho apresenta um metamodelo para modelagem de middlewares baseados em componentes que provêem à aplicação a abstração da comunicação entre componentes envolvidos em um fluxo de dados, independente da sua localização. Outra característica do metamodelo é a possibilidade de auto-adaptação relacionada ao mecanismo de comunicação utilizado, seja através da atualização dos valores dos seus parâmetros de configuração, ou através da sua substituição por outro mecanismo, caso as restrições de qualidade de serviço especificadas não estejam sendo garantidas. Nesse propósito, é previsto o monitoramento do estado da comunicação (aplicações de técnicas do tipo feedback control loop), analisando-se métricas de desempenho relacionadas. O paradigma de Desenvolvimento Dirigido por Modelos foi utilizado para gerar a implementação de um middleware que servirá como prova de conceito do metamodelo, e as políticas de configuração e reconfiguração relacionadas com o processo de adaptação dinâmica; neste sentido, foi definido o metamodelo associado ao processo de configuração de uma comunicação. A aplicação da técnica de MDD corresponde ainda à definição das seguintes transformações: do modelo arquitetural do middleware para código em linguagem Java, e do modelo de configuração para código XML.
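The self-adaptation loop described in this abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (the class names, parameters, and thresholds are assumptions, not the dissertation's API): a monitor compares measured latency against a QoS restriction, first retunes the current communication mechanism, and replaces it with a fallback once retuning is exhausted.

```python
class CommunicationMechanism:
    """Stand-in for a middleware transport with one tunable parameter."""
    def __init__(self, name, buffer_size):
        self.name = name
        self.buffer_size = buffer_size

class QoSAdapter:
    """Feedback control loop: monitor -> analyze -> reconfigure or replace."""
    def __init__(self, mechanism, fallback, max_latency_ms, max_retunes=2):
        self.mechanism = mechanism
        self.fallback = fallback
        self.max_latency_ms = max_latency_ms
        self.retunes = 0
        self.max_retunes = max_retunes

    def observe(self, measured_latency_ms):
        if measured_latency_ms <= self.max_latency_ms:
            return "ok"
        if self.retunes < self.max_retunes:
            # First try reconfiguration: enlarge the transport buffer.
            self.mechanism.buffer_size *= 2
            self.retunes += 1
            return "retuned"
        # Reconfiguration exhausted: swap in the fallback mechanism.
        self.mechanism, self.fallback = self.fallback, self.mechanism
        self.retunes = 0
        return "replaced"

adapter = QoSAdapter(CommunicationMechanism("rtp", 4096),
                     CommunicationMechanism("tcp-tuned", 8192),
                     max_latency_ms=50)
print(adapter.observe(30))     # ok
print(adapter.observe(80))     # retuned
print(adapter.observe(90))     # retuned
print(adapter.observe(95))     # replaced
print(adapter.mechanism.name)  # tcp-tuned
```

Escalating from parameter updates to mechanism replacement mirrors the two adaptation paths the metamodel distinguishes.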
APA, Harvard, Vancouver, ISO, and other styles
8

Bentria, Dounia. "Combining checkpointing and other resilience mechanisms for exascale systems." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0971/document.

Full text
Abstract:
Dans cette thèse, nous nous sommes intéressés aux problèmes d'ordonnancement et d'optimisation dans des contextes probabilistes. Les contributions de cette thèse se déclinent en deux parties. La première partie est dédiée à l’optimisation de différents mécanismes de tolérance aux pannes pour les machines de très large échelle qui sont sujettes à une probabilité de pannes. La seconde partie est consacrée à l’optimisation du coût d’exécution des arbres d’opérateurs booléens sur des flux de données. Dans la première partie, nous nous sommes intéressés aux problèmes de résilience pour les machines de future génération dites « exascales » (plateformes pouvant effectuer 10^18 opérations par seconde). Dans le premier chapitre, nous présentons l’état de l’art des mécanismes les plus utilisés dans la tolérance aux pannes et des résultats généraux liés à la résilience. Dans le second chapitre, nous étudions un modèle d’évaluation des protocoles de sauvegarde de points de reprise (checkpoints) et de redémarrage. Le modèle proposé est suffisamment générique pour contenir les situations extrêmes : d’un côté le checkpoint coordonné, et de l’autre toute une famille de stratégies non-coordonnées. Nous avons proposé une analyse détaillée de plusieurs scénarios, incluant certaines des plateformes de calcul existantes les plus puissantes, ainsi que des anticipations sur les futures plateformes exascales. Dans les troisième, quatrième et cinquième chapitres, nous étudions l'utilisation conjointe de différents mécanismes de tolérance aux pannes (réplication, prédiction de pannes et détection d'erreurs silencieuses) avec le mécanisme traditionnel de checkpoints et de redémarrage. Nous avons évalué plusieurs modèles au moyen de simulations.
Nos résultats montrent que ces modèles sont bénéfiques pour un ensemble de modèles d'applications dans le cadre des futures plateformes exascales. Dans la seconde partie de la thèse, nous étudions le problème de la minimisation du coût de récupération des données par des applications lors du traitement d’une requête exprimée sous forme d'arbres d'opérateurs booléens appliqués à des prédicats sur des flux de données de senseurs. Le problème est de déterminer l'ordre dans lequel les prédicats doivent être évalués afin de minimiser l'espérance du coût du traitement de la requête. Dans le sixième chapitre, nous présentons l'état de l'art de la seconde partie et dans le septième chapitre, nous étudions le problème pour les requêtes exprimées sous forme normale disjonctive. Nous considérons le cas plus général où chaque flux peut apparaître dans plusieurs prédicats et nous étudions deux modèles, le modèle où chaque prédicat peut accéder à un seul flux et le modèle où chaque prédicat peut accéder à plusieurs flux.
In this thesis, we are interested in scheduling and optimization problems in probabilistic contexts. The contributions of this thesis come in two parts. The first part is dedicated to the optimization of different fault-tolerance mechanisms for very large scale machines that are subject to a probability of failure and the second part is devoted to the optimization of the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. In the first chapter, we present the related work of the first part and then we introduce some new general results that are useful for resilience on exascale systems. In the second chapter, we study a unified model for several well-known checkpoint/restart protocols. The proposed model is generic enough to encompass both extremes of the checkpoint/restart space, from coordinated approaches to a variety of uncoordinated checkpoint strategies. We propose a detailed analysis of several scenarios, including some of the most powerful currently available HPC platforms, as well as anticipated exascale designs. In the third, fourth, and fifth chapters, we study the combination of different fault-tolerance mechanisms (replication, fault prediction and detection of silent errors) with the traditional checkpoint/restart mechanism. We evaluated several models using simulations. Our results show that these models are useful for a set of models of applications in the context of future exascale systems. In the second part of the thesis, we study the problem of minimizing the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates.
The problem is to determine the order in which predicates should be evaluated so as to shortcut part of the query evaluation and minimize the expected cost. In the sixth chapter, we present the related work of the second part and in the seventh chapter, we study the problem for queries expressed as a disjunctive normal form. We consider the more general case where each data stream can appear in multiple predicates and we consider two models, the model where each predicate can access a single stream and the model where each predicate can access multiple streams.
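The ordering problem in the second part has a classical special case that is easy to sketch: for a single disjunction of independent predicates, evaluating them in increasing order of cost divided by probability of being true minimizes the expected short-circuit cost. The sketch below uses invented cost/probability values (not the thesis's data) and checks the greedy order against brute-force enumeration.

```python
from itertools import permutations

def expected_cost(order):
    """Expected cost of short-circuit OR evaluation: a predicate is paid
    for only if every earlier predicate evaluated to false."""
    total, p_all_false = 0.0, 1.0
    for cost, p_true in order:
        total += p_all_false * cost
        p_all_false *= (1.0 - p_true)
    return total

# (evaluation cost, probability the predicate is true) — illustrative values.
preds = [(4.0, 0.9), (1.0, 0.3), (2.0, 0.5)]

greedy = sorted(preds, key=lambda cp: cp[0] / cp[1])  # c/p ascending
best = min(permutations(preds), key=expected_cost)    # brute force

print(expected_cost(greedy), expected_cost(best))  # both 3.8 for these values
```

The c/p rule is the single-stream baseline; the thesis's contribution is the harder setting where streams are shared across predicates, for which this greedy argument no longer applies directly.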
APA, Harvard, Vancouver, ISO, and other styles
9

Ou, Ting-Wei, and 歐庭瑋. "A Data Temperature Restore Scheduling System with Container-based Virtualization for Cloud Applications." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/22809986435557630954.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
104
As cloud computing grows in popularity, IDC reports that the amount of data on the Internet is doubling every two years and will reach 44 zettabytes by 2020. IDC also notes that downtime caused by infrastructure failures, data loss, and human error costs $1 million per hour, a tremendous loss for business. This shows that high availability and reliability are important for enterprises. Previous studies on high availability mostly adopted continuous data protection (CDP) to ensure that data will not be lost. However, they do not consider the data access frequency of users' demands in cloud applications, resulting in a higher overhead for backup operations. Recently, virtualization environments such as virtual machines (VMs) have prevailed in the cloud. Other studies considered the importance of system environment backup and proposed backup-and-recovery schemes based on VMs. Nevertheless, this reduces the elasticity of system deployment. Therefore, this thesis proposes a data temperature restore scheduling system with container-based virtualization (DTRS). The algorithm analyzes users' demands according to the data access rate of cloud applications to enhance recovery performance. In the experiments, using the YCSB (Yahoo! Cloud Serving Benchmark) testing tool, this thesis shows that DTRS can effectively shorten the mean time to recovery and meet the high-availability requirements of cloud applications.
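The core scheduling idea, restoring the most frequently accessed ("hottest") data first so that user-visible service resumes sooner, can be sketched as a priority queue ordered by access rate. This is a hypothetical illustration of the concept only; the item names, rates, and restore bandwidth are invented, not taken from the thesis.

```python
import heapq

def restore_order(items):
    """Schedule restores hottest-first; items are (name, accesses_per_min, size_gb)."""
    heap = [(-rate, name) for name, rate, _ in items]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

def mean_wait(items, order, gb_per_min=10.0):
    """Access-weighted mean waiting time until each item is back online."""
    sizes = {name: size for name, _, size in items}
    rates = {name: rate for name, rate, _ in items}
    finish, t = {}, 0.0
    for name in order:
        t += sizes[name] / gb_per_min
        finish[name] = t
    return sum(rates[n] * finish[n] for n in order) / sum(rates.values())

items = [("cold-archive", 1, 100), ("hot-db", 120, 20), ("warm-logs", 30, 40)]
hot_first = restore_order(items)
print(hot_first)  # ['hot-db', 'warm-logs', 'cold-archive']
print(mean_wait(items, hot_first))                                # ~2.89 min
print(mean_wait(items, ["cold-archive", "warm-logs", "hot-db"]))  # ~15.56 min
```

Weighting each item's recovery time by its access rate shows why temperature-aware ordering shortens the recovery time that users actually experience.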
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Zhi-ming, and 王志明. "Reversible Data Hiding Schemes for ABTC-EQ and AMBTC Restored Images." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/pb3e8n.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Information Engineering
104
Hiding a message in compression codes can reduce transmission costs and simultaneously make the transmission more secure. In this thesis, we propose two novel reversible data hiding schemes that hide secret data in block truncation codes and their restored images, respectively. In the first method, we use adaptive block truncation coding based on edge-based quantization (ABTC-EQ) to compress the image and obtain its compression code. A characteristic left unused by ABTC-EQ, together with zero-point fixed histogram shifting (ZPF-HS), is then used to embed the secret data into the compression code. In the second method, we employ absolute moment block truncation coding (AMBTC) to obtain the restored images. Secret information is then adaptively embedded into the pixels of each AMBTC-restored block, except for the positions of the two replaced quantization levels, using the quantization level difference (QLD) and an interpolation technique. The performance of the two proposed schemes is compared with previous image steganography methods. The experimental results show that our approaches outperform the referenced ones.
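Both schemes build on block truncation coding, in which each block is represented by a bitmap plus two quantization levels. A minimal AMBTC compress/restore round trip (a generic illustration of the base codec, not the thesis's embedding method) looks like this:

```python
import numpy as np

def ambtc_compress(block):
    """AMBTC: keep a bitmap plus low/high quantization levels (group means)."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, low, high

def ambtc_restore(bitmap, low, high):
    """Rebuild the block: high where the bitmap is set, low elsewhere."""
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 199],
                  [ 9, 14, 201, 208],
                  [12, 10, 198, 207]], dtype=float)

bitmap, low, high = ambtc_compress(block)
restored = ambtc_restore(bitmap, low, high)
print(low, high)  # 11.375 203.5
```

Because every pixel of a restored block carries only one of two values, embedding schemes like the second method above perturb those pixels while keeping the two quantization levels recoverable.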
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data restore"

1

Leber, Jody. Windows NT backup & restore. Beijing: O'Reilly, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Leber, Jody. Windows NT backup & restore. Sebastopol, CA: O'Reilly, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Miroslav, Klivansky, and Barto Michael, eds. Backup and restore practices for Sun Enterprise servers. Palo Alto, CA: Sun Microsystems Press, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Optimizing restore and recovery solutions with DB2 Recovery Expert for z/OS V2.1. [United States?]: IBM, International Technical Support Organization, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

honouree, Hustinx P. J., ed. Data protection anno 2014 : how to restore trust?: Contributions in honour of Peter Hustinx, European Data Protection Supervisor (2004-2014). Cambridge [England]: Intersentia, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Blokdijk, Gerard. ITIL practitioner support and restore (IPSR) all-in-one help desk exam guide and certification work book: Define, implement, manage and review service support with service desk, incident management and problem management. [United States]: Gerard Blokdijk, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hobson, Anthony. Lanterns that lit our world: How to identify, date, and restore old railroad, marine, fire, carriage, farm, and other lanterns. Spencertown, N.Y: Golden Hill Press, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Redbooks, IBM. DB2 Recovery on VSE and Vm Using the Data Restore Feature. Ibm, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Beard, Bradley. Beginning Backup and Restore for SQL Server: Data Loss Management and Prevention Techniques. Apress, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lo, Meng-chen, Marie-France Marin, Alik S. Widge, and Mohammed R. Milad. Device-Based Treatment for PTSD. Edited by Frederick J. Stoddard, David M. Benedek, Mohammed R. Milad, and Robert J. Ursano. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190457136.003.0025.

Full text
Abstract:
Device-based neuromodulation is an emerging tool with great potential for significant scientific and clinical implications for a number of mental disorders. Neuromodulation techniques deliver electromagnetic pulses into the brain via invasive or noninvasive electrodes, with various timing and stimulation parameters. The stimulation is thought to work as a “brain pacemaker” that either activates or inactivates targeted brain regions to restore normal homeostasis. There have been significant recent efforts to explore the clinical utility of device-based approaches for the treatment of mood and anxiety disorders and, to a limited extent, posttraumatic stress disorder (PTSD). This chapter outlines the scientific underpinnings and rationale for various device-based treatments of PTSD, highlights positive results of studies in other mental disorders, and summarizes the limited clinical data related specifically to the treatment of PTSD and other trauma- and stressor-related disorders to date.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data restore"

1

Finsel, Josef. "How Do I Back Up and Restore My Data?" In The Handbook for Reluctant Database Administrators, 69–92. Berkeley, CA: Apress, 2001. http://dx.doi.org/10.1007/978-1-4302-1146-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Raineri, Paolo, and Francesco Molinari. "Innovation in Data Visualisation for Public Policy Making." In The Data Shake, 47–59. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63693-7_4.

Full text
Abstract:
Abstract In this contribution, we propose a reflection on the potential of data visualisation technologies for (informed) public policy making in an increasingly complex and fast-changing landscape—epitomized by the situation created after the outbreak of the Covid-19 pandemic. Based on the results of an online survey of more than 50 data scientists from all over the world, we highlight five application areas seeing the biggest needs for innovation according to the domain specialists. Our main argument is that we are facing a transformation of the business cases supporting the adoption and implementation of data visualisation methods and tools in government, which the conventional view of the value of Business Intelligence does not capture in full. Such evolution can drive a new wave of innovations that preserve (or restore) the human brain’s centrality in a decision making environment that is increasingly dominated—for good and bad—by artificial intelligence. Citizen science, design thinking, and accountability are mentioned as triggers of civic engagement and participation that can bring a community of “knowledge intermediaries” into the daily discussion on data supported policy making.
APA, Harvard, Vancouver, ISO, and other styles
3

Solanki, Manishkumar R. "SOLID: A Web System to Restore the Control of Users’ Personal Data." In Advances in Intelligent Systems and Computing, 257–67. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-8289-9_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fujii, Tomohiro, and Masao Hirokawa. "A Data Concealing Technique with Random Noise Disturbance and a Restoring Technique for the Concealed Data by Stochastic Process Estimation." In International Symposium on Mathematics, Quantum Theory, and Cryptography, 103–24. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5191-8_11.

Full text
Abstract:
Abstract We propose a technique to conceal data on the physical layer by disturbing them with random noise, together with a technique to restore the concealed data to the original by stochastic process estimation. Our concealing-restoring system manages the data on the physical layer from the data link layer. In addition to these proposals, we show simulation results and some applications of our concealing-restoring technique.
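The conceal-then-restore idea can be illustrated with a toy sketch. This is an assumption-laden stand-in for the chapter's scheme (which uses stochastic process estimation, not simple averaging): conceal a signal by adding zero-mean Gaussian noise to each transmitted copy, then restore it by averaging many copies, since the sample mean of the noise converges to zero.

```python
import numpy as np

rng = np.random.default_rng(42)

def conceal(signal, copies=400, noise_std=5.0):
    """Each transmitted copy is the signal plus fresh Gaussian noise."""
    return signal + rng.normal(0.0, noise_std, size=(copies, signal.size))

def restore(concealed):
    """Estimate the original signal as the sample mean across copies."""
    return concealed.mean(axis=0)

signal = np.array([1.0, -2.0, 3.5, 0.0])
copies = conceal(signal)

single_err = np.abs(copies[0] - signal).max()          # one copy: heavily disturbed
restored_err = np.abs(restore(copies) - signal).max()  # averaged: close to original
print(single_err, restored_err)
```

Any single intercepted copy is dominated by noise, while the legitimate receiver who accumulates copies recovers the data, which is the essence of concealment on the physical layer.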
APA, Harvard, Vancouver, ISO, and other styles
5

Ting, I.-Hsien, Chris Kimble, and Daniel Kudenko. "A Pattern Restore Method for Restoring Missing Patterns in Server Side Clickstream Data." In Web Technologies Research and Development - APWeb 2005, 501–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31849-1_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bouden, D., and I. Lemahieu. "Use of Blind Deconvolution to Restore Eddy Current Data from Non-Destructive Testing of Defects in Welds." In Review of Progress in Quantitative Nondestructive Evaluation, 743–50. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-4791-4_95.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sallin, Marc, Martin Kropp, Craig Anslow, James W. Quilty, and Andreas Meier. "Measuring Software Delivery Performance Using the Four Key Metrics of DevOps." In Lecture Notes in Business Information Processing, 103–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78098-2_7.

Full text
Abstract:
Abstract The Four Key Metrics of DevOps have become very popular for measuring IT performance and DevOps adoption. However, the measurement of the four metrics (deployment frequency, lead time for changes, time to restore service, and change failure rate) is often done manually and through surveys, with only a few data points. In this work we evaluated how the Four Key Metrics can be measured automatically and developed a prototype for their automatic measurement. We then evaluated whether the measurement is valuable for practitioners in a company. The analysis shows that the chosen measurement approach is suitable and that the results are valuable to the team with respect to measuring and improving software delivery performance.
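As a concrete illustration, the four metrics can be computed from a deployment log. The sketch below assumes a simple hypothetical record format; the field names and figures are invented for illustration and are not the paper's data model.

```python
from datetime import datetime

# Hypothetical deployment records: commit time, deploy time, whether the
# deployment caused a failure, and how long that failure took to restore.
deploys = [
    {"committed": datetime(2021, 5, 3, 9),  "deployed": datetime(2021, 5, 3, 15),
     "failed": False, "restore_hours": 0.0},
    {"committed": datetime(2021, 5, 4, 10), "deployed": datetime(2021, 5, 5, 10),
     "failed": True,  "restore_hours": 2.0},
    {"committed": datetime(2021, 5, 6, 8),  "deployed": datetime(2021, 5, 6, 20),
     "failed": False, "restore_hours": 0.0},
    {"committed": datetime(2021, 5, 7, 9),  "deployed": datetime(2021, 5, 7, 11),
     "failed": True,  "restore_hours": 4.0},
]

def four_key_metrics(deploys, window_days=7):
    deploy_frequency = len(deploys) / window_days           # deploys per day
    lead_time_h = sum((d["deployed"] - d["committed"]).total_seconds() / 3600
                      for d in deploys) / len(deploys)      # mean hours
    failures = [d for d in deploys if d["failed"]]
    change_failure_rate = len(failures) / len(deploys)
    mttr_h = (sum(d["restore_hours"] for d in failures) / len(failures)
              if failures else 0.0)                         # mean time to restore
    return deploy_frequency, lead_time_h, change_failure_rate, mttr_h

freq, lead, cfr, mttr = four_key_metrics(deploys)
print(freq, lead, cfr, mttr)  # → 0.571... 11.0 0.5 3.0
```

Automating exactly this kind of aggregation over CI/CD event streams is what replaces the survey-based measurement criticized in the abstract.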
APA, Harvard, Vancouver, ISO, and other styles
8

Sippu, Seppo, and Eljas Soisalon-Soininen. "Transaction Rollback and Restart Recovery." In Data-Centric Systems and Applications, 65–99. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12292-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Yaowei, Dongsheng Luo, Jingchao Ni, Hongliang Fei, Wei Fan, Xiong Yu, John Yen, and Xiang Zhang. "Local Graph Clustering by Multi-network Random Walk with Restart." In Advances in Knowledge Discovery and Data Mining, 490–501. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93040-4_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rajachandrasekar, Raghunath, Xiangyong Ouyang, Xavier Besseron, Vilobh Meshram, and Dhabaleswar K. Panda. "Can Checkpoint/Restart Mechanisms Benefit from Hierarchical Data Staging?" In Euro-Par 2011: Parallel Processing Workshops, 312–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29740-3_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data restore"

1

Cui, Jian, Gang Li, Pu Qi Zhou, and Jia Qi Zhang. "Restore the Original data by Embedded data." In 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2019. http://dx.doi.org/10.1109/iaeac47372.2019.8997598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hilprecht, Benjamin, and Carsten Binnig. "ReStore - Neural Data Completion for Relational Databases." In SIGMOD/PODS '21: International Conference on Management of Data. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3448016.3457264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bardis, Nikolaos, Nikolaos Doukas, and Oleksandr P. Markovskyi. "Effective method to restore data in distributed data storage systems." In MILCOM 2015 - 2015 IEEE Military Communications Conference. IEEE, 2015. http://dx.doi.org/10.1109/milcom.2015.7357617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Long, Harigovind Ramasamy, Valentina Salapura, Robin Arnold, Xu Wang, Senthil Bakthavachalam, Phil Coulthard, et al. "System Restore in a Multi-cloud Data Pipeline Platform." In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks – Industry Track. IEEE, 2019. http://dx.doi.org/10.1109/dsn-industry.2019.00012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bahls, Daniel, Benjamin Zapilko, and Klaus Tochtermann. "A Data Restore Model for Reproducibility in Computational Statistics." In the 13th International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2494188.2494205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shriwas, M. S., N. Gupta, and A. Sinhal. "Efficient Method for Backup and Restore Data in Android." In 2013 International Conference on Communication Systems and Network Technologies (CSNT 2013). IEEE, 2013. http://dx.doi.org/10.1109/csnt.2013.148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yin, Xiaoyan, Javier Alonso, Fumio Machida, Ermeson C. Andrade, and Kishor S. Trivedi. "Availability Modeling and Analysis for Data Backup and Restore Operations." In 2012 IEEE 31st International Symposium on Reliable Distributed Systems (SRDS). IEEE, 2012. http://dx.doi.org/10.1109/srds.2012.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dagnaw, Girum, Wang Hua, and Ke Zhou. "SSD Assisted Caching for Restore Optimization in Distributed Deduplication Environment." In 2020 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS). IEEE, 2020. http://dx.doi.org/10.1109/hpbdis49115.2020.9130572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhu, Chunbiao, Yuanqi Chen, Yiwei Zhang, Shan Liu, and Ge Li. "ResGAN: A Low-Level Image Processing Network to Restore Original Quality of JPEG Compressed Images." In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Forbes, Florence, and Wojciech Pieczynski. "New trends in Markov models and related learning to restore data." In 2009 IEEE International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2009. http://dx.doi.org/10.1109/mlsp.2009.5306255.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data restore"

1

Berkowitz, Jacob, Christine VanZomeren, Nia Hurst, and Kristina Sebastian. An evaluation of soil phosphorus storage capacity (SPSC) at proposed wetland restoration locations in the western Lake Erie Basin. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42108.

Full text
Abstract:
Historical loss of wetlands coupled with excess phosphorus (P) loading at watershed scales have degraded water quality in portions of the western Lake Erie Basin (WLEB). In response, efforts are underway to restore wetlands and decrease P loading to surface waters. Because wetlands have a finite capacity to retain P, researchers have developed techniques to determine whether wetlands function as P sources or sinks. The following technical report evaluates the soil P storage capacity (SPSC) at locations under consideration for wetland restoration in collaboration with the Great Lakes Restoration Initiative (GLRI) and the H2Ohio initiative. Results indicate that the examined soils display a range of P retention capacities, reflecting historic land-use patterns and management regimes. However, the majority of study locations exhibited some capacity to sequester additional P. The analysis supports development of rankings and comparative analyses of areas within a specific land parcel, informing management through design, avoidance, removal, or remediation of potential legacy P sources. Additionally, the approaches described herein support relative comparisons between multiple potential wetland development properties. These results, in conjunction with other data sources, can be used to target, prioritize, justify, and improve decision-making for wetland management activities in the WLEB.
APA, Harvard, Vancouver, ISO, and other styles
2

Bedford, Philip, Alexis Long, Thomas Long, Erin Milliken, Lauren Thomas, and Alexis Yelvington. Legal Mechanisms for Mitigating Flood Impacts in Texas Coastal Communities. Edited by Gabriel Eckstein. Texas A&M University School of Law Program in Natural Resources Systems, May 2019. http://dx.doi.org/10.37419/eenrs.mitigatingfloodimpactstx.

Full text
Abstract:
Flooding is a major source of concern for Texas’ coastal communities. It affects the quality of infrastructure, the lives of citizens, and the ecological systems upon which coastal communities in Texas rely. To plan for and mitigate the impacts of flooding, Texas coastal communities may implement land use tools such as zoning, drainage utility systems, eminent domain, exactions, and easements. Additionally, these communities can benefit from understanding how flooding affects water quality and the tools available to restore water bodies to healthy water quality levels. Finally, implementing additional programs for education and ecotourism will help citizens develop knowledge of the impacts of flooding and ways to plan for and mitigate coastal flooding. Land use tools can help communities plan for and mitigate flooding. Section III addresses zoning, a land use tool that most municipalities already utilize to organize development. Zoning can help mitigate flooding, drainage, and water quality issues, which Texas coastal communities continually battle. Section IV discusses municipal drainage utility systems, which are a mechanism available to municipalities to generate dedicated funds that can help offset costs associated with providing stormwater management. Section V addresses land use and revenue-building tools such as easements, eminent domain, and exactions, which are vital for maintaining existing and new developments in Texas coastal communities. Additionally, Section VI addresses conservation easements, which are a flexible tool that can enhance community resilience through increasing purchase power, establishing protected legal rights, and minimizing hazardous flood impacts. Maintaining good water quality is important for sustaining the diverse ecosystems located within and around Texas coastal communities. Water quality is regulated at the federal level through the Clean Water Act.
As discussed in Section VII, the state of Texas is authorized to implement and enforce these regulations by implementing point source and nonpoint source pollutants programs, issuing permits, implementing stormwater discharge programs, collecting water quality data, and setting water quality standards. The state of Texas also assists local communities with implementing restorative programs, such as Watershed Protection Programs, to help local stakeholders restore impaired water bodies. Section VIII addresses ecotourism and how these distinct economic initiatives can help highlight the importance of ecosystem services to local communities. Section IX discusses the role of education in improving awareness within the community and among visitors, and how making conscious decisions can allow coastal communities to protect their ecosystem and protect against flooding.
APA, Harvard, Vancouver, ISO, and other styles
3

Halker Singh, Rashmi B., Juliana H. VanderPluym, Allison S. Morrow, Meritxell Urtecho, Tarek Nayfeh, Victor D. Torres Roldan, Magdoleen H. Farah, et al. Acute Treatments for Episodic Migraine. Agency for Healthcare Research and Quality (AHRQ), December 2020. http://dx.doi.org/10.23970/ahrqepccer239.

Full text
Abstract:
Objectives. To evaluate the effectiveness and comparative effectiveness of pharmacologic and nonpharmacologic therapies for the acute treatment of episodic migraine in adults. Data sources. MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, PsycINFO®, Scopus, and various grey literature sources from database inception to July 24, 2020. Comparative effectiveness evidence about triptans and nonsteroidal anti-inflammatory drugs (NSAIDs) was extracted from existing systematic reviews. Review methods. We included randomized controlled trials (RCTs) and comparative observational studies that enrolled adults who received an intervention to acutely treat episodic migraine. Pairs of independent reviewers selected and appraised studies. Results. Data on triptans were derived from 186 RCTs summarized in nine systematic reviews (101,276 patients; most studied was sumatriptan, followed by zolmitriptan, eletriptan, naratriptan, almotriptan, rizatriptan, and frovatriptan). Compared with placebo, triptans resolved pain at 2 hours and 1 day, and increased the risk of mild and transient adverse events (high strength of the body of evidence [SOE]). Data on NSAIDs were derived from five systematic reviews (13,214 patients; most studied was ibuprofen, followed by diclofenac and ketorolac). Compared with placebo, NSAIDs probably resolved pain at 2 hours and 1 day, and increased the risk of mild and transient adverse events (moderate SOE). For other interventions, we included 135 RCTs and 6 comparative observational studies (37,653 patients). Compared with placebo, antiemetics (low SOE), dihydroergotamine (moderate to high SOE), ergotamine plus caffeine (moderate SOE), and acetaminophen (moderate SOE) reduced acute pain.
Opioids were evaluated in 15 studies (2,208 patients). Butorphanol, meperidine, morphine, hydromorphone, and tramadol in combination with acetaminophen may reduce pain at 2 hours and 1 day, compared with placebo (low SOE). Some opioids may be less effective than some antiemetics or dexamethasone (low SOE). No studies evaluated instruments for predicting risk of opioid misuse, opioid use disorder, or overdose, or evaluated risk mitigation strategies to be used when prescribing opioids for the acute treatment of episodic migraine. Calcitonin gene-related peptide (CGRP) receptor antagonists improved headache relief at 2 hours and increased the likelihood of being headache-free at 2 hours, at 1 day, and at 1 week (low to high SOE). Lasmiditan (the first approved 5-HT1F receptor agonist) restored function at 2 hours and resolved pain at 2 hours, 1 day, and 1 week (moderate to high SOE). Sparse and low SOE suggested possible effectiveness of dexamethasone, dipyrone, magnesium sulfate, and octreotide. Compared with placebo, several nonpharmacologic treatments may improve various measures of pain, including remote electrical neuromodulation (moderate SOE), magnetic stimulation (low SOE), acupuncture (low SOE), chamomile oil (low SOE), external trigeminal nerve stimulation (low SOE), and eye movement desensitization reprocessing (low SOE). However, these interventions, including the noninvasive neuromodulation devices, have been evaluated only by single or very few trials. Conclusions. A number of acute treatments for episodic migraine exist with varying degrees of evidence for effectiveness and harms. Use of triptans, NSAIDs, antiemetics, dihydroergotamine, CGRP antagonists, and lasmiditan is associated with improved pain and function. The evidence base for many other interventions for acute treatment, including opioids, remains limited.
APA, Harvard, Vancouver, ISO, and other styles
