Academic literature on the topic 'Shared Data Software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Shared Data Software.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Shared Data Software"

1

Mandrykin, M. U., and A. V. Khoroshilov. "Towards deductive verification of C programs with shared data." Programming and Computer Software 42, no. 5 (September 2016): 324–32. http://dx.doi.org/10.1134/s0361768816050054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Robinson, Patrick G., and James D. Arthur. "Distributed process creation within a shared data space framework." Software: Practice and Experience 25, no. 2 (February 1995): 175–91. http://dx.doi.org/10.1002/spe.4380250205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martin, Bruce. "Concurrent programming vs. concurrency control: shared events or shared data." ACM SIGPLAN Notices 24, no. 4 (April 1989): 142–44. http://dx.doi.org/10.1145/67387.67426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baldwin, Adrian, and Simon Shiu. "Enabling shared audit data." International Journal of Information Security 4, no. 4 (February 8, 2005): 263–76. http://dx.doi.org/10.1007/s10207-004-0061-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Skaf, Hala, Francois Charoy, and Claude Godart. "Maintaining Shared Workspaces Consistency during Software Development." International Journal of Software Engineering and Knowledge Engineering 9, no. 5 (October 1999): 623–42. http://dx.doi.org/10.1142/s0218194099000334.

Full text
Abstract:
The development of large software is always done by teams of people working together and struggling to produce quality software within their budget. Each person in these teams generally knows their job and wants to do it without being bothered by other people. However, when people work towards a common goal they have to exchange data and thereby create dependencies on each other regarding these data. If these people have to follow a process, cooperating and synchronizing with co-workers while trying to reach one's own goal becomes too difficult to manage. This may lead to frustration, lower productivity and reluctance to follow the predefined process. This is why some support is needed to avoid common mistakes that occur when people exchange data. In this paper, a hybrid approach to support cooperation is presented. The originality of this approach is the ability to enforce general properties on cooperative interactions while using the semantics of applications to fit particular situations or requirements. This paper gives a brief overview of the general properties enforced on activity interactions. It describes in detail the semantic rules that control activity results, the impact of cooperation on these rules, and how both dimensions interact.
APA, Harvard, Vancouver, ISO, and other styles
6

Saifan, Ahmad A., and Zainab Lataifeh. "Privacy preserving defect prediction using generalization and entropy-based data reduction." Intelligent Data Analysis 25, no. 6 (October 29, 2021): 1369–405. http://dx.doi.org/10.3233/ida-205504.

Full text
Abstract:
The software engineering community produces data that can be analyzed to enhance the quality of future software products, and data regarding software defects can be used by data scientists to create defect predictors. However, sharing such data raises privacy concerns, since sensitive software features are usually considered business assets that should be protected in accordance with the law. Early research efforts on protecting the privacy of software data found that applying conventional data anonymization to mask sensitive attributes of software features degrades the quality of the shared data. In addition, data produced by such approaches is not immune to attacks such as inference and background-knowledge attacks. This research proposes a new approach to sharing a protected release of software defect data that can still be used in data science algorithms. We created a generalization (clustering)-based approach to anonymize sensitive software attributes. Tomek link and AllNN data reduction approaches were used to discard noisy records that may affect the usefulness of the shared data. The proposed approach considers the diversity of sensitive attributes an important factor in avoiding inference and background-knowledge attacks on the anonymized data; therefore, discarded data is removed from both defective and non-defective records. We conducted experiments on several benchmark software defect datasets, using both data quality and privacy measures to evaluate the proposed approach. Our findings showed that the proposed approach outperforms existing well-known techniques on both accuracy and privacy measures.
APA, Harvard, Vancouver, ISO, and other styles
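The Tomek-link reduction step mentioned in the abstract above can be illustrated with a minimal sketch. This is plain Python with Euclidean distance; the paper's actual feature space and distance metric are not specified here, so both are illustrative assumptions:

```python
from math import dist

def tomek_links(points, labels):
    """Return index pairs (i, j) that form Tomek links: mutual nearest
    neighbors that carry opposite class labels (a common noise-removal rule)."""
    def nearest(i):
        # index of the point closest to point i (excluding i itself)
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: dist(points[i], points[j]))
    links = set()
    for i in range(len(points)):
        j = nearest(i)
        if labels[i] != labels[j] and nearest(j) == i:
            links.add(tuple(sorted((i, j))))
    return sorted(links)

def drop_tomek_links(points, labels):
    """Discard BOTH members of every Tomek link, mirroring the paper's choice
    to remove noisy records from defective and non-defective sets alike."""
    drop = {i for pair in tomek_links(points, labels) for i in pair}
    keep = [i for i in range(len(points)) if i not in drop]
    return [points[i] for i in keep], [labels[i] for i in keep]
```

Production implementations (e.g. imbalanced-learn's `TomekLinks`) typically remove only the majority-class member; removing both, as above, matches the diversity-preserving choice the abstract describes.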
7

Morris, Donald G., and David K. Lowenthal. "Accurate data redistribution cost estimation in software distributed shared memory systems." ACM SIGPLAN Notices 36, no. 7 (July 2001): 62–71. http://dx.doi.org/10.1145/568014.379570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Focardi, Riccardo, Roberto Lucchi, and Gianluigi Zavattaro. "Secure shared data-space coordination languages: A process algebraic survey." Science of Computer Programming 63, no. 1 (November 2006): 3–15. http://dx.doi.org/10.1016/j.scico.2005.07.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Osterbye, Kasper. "Abstract data types with shared operations." ACM SIGPLAN Notices 23, no. 6 (June 1988): 91–96. http://dx.doi.org/10.1145/44546.44554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Wei Feng. "Software Design for High-Speed Data Capture." Applied Mechanics and Materials 536-537 (April 2014): 536–39. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.536.

Full text
Abstract:
10G Ethernet technology is widely used in modern high-speed communication systems. As a result, program design for high-speed data capture on 10G Ethernet, the first and most important step in a network monitoring and analysis system, has become a challenging task. This paper proposes a high-speed data capture method based on WinPcap and shared memory pool technology that features high speed, a low packet loss rate, high efficiency and good portability. System tests and data analysis show that the proposed method can effectively capture data at a speed of 6 Gbps while stably keeping the packet loss rate under 0.03%.
APA, Harvard, Vancouver, ISO, and other styles
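The shared-memory-pool idea in the abstract above (preallocate buffers once so the capture path never allocates or copies more than necessary) can be sketched language-neutrally in Python. The capture library itself is a C-level concern; the pool sizes and the packet source below are made-up illustration values:

```python
import queue

def make_pool(nbufs, bufsize):
    """Preallocated buffer pool: the capture hot path only recycles buffers,
    never allocates, which is the point of the shared-memory-pool design."""
    free = queue.Queue()
    for _ in range(nbufs):
        free.put(bytearray(bufsize))
    return free

def capture(packets, free, filled):
    """Producer side: copy each packet into a pool buffer and hand it off."""
    for pkt in packets:
        buf = free.get()          # blocks (or drops) when the pool is exhausted
        n = len(pkt)
        buf[:n] = pkt
        filled.put((buf, n))

def analyze(free, filled, count):
    """Consumer side: process filled buffers and recycle them into the pool."""
    out = []
    for _ in range(count):
        buf, n = filled.get()
        out.append(bytes(buf[:n]))
        free.put(buf)             # return the buffer for reuse
    return out
```

In a real system `capture` and `analyze` run in separate threads; `queue.Queue` is thread-safe, so the sketch carries over directly.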
More sources

Dissertations / Theses on the topic "Shared Data Software"

1

Tarhan, Faik Aras. "Distance Adaptive Shared Path Protection for Elastic Optical Networks under Dynamic Traffic." Thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-141703.

Full text
Abstract:
Recently, internet traffic demand has been rising rapidly as a result of the increase in the number of users as well as the data demand per user. Elastic Optical Networks (EONs), which employ Orthogonal Frequency Division Multiplexing (OFDM), have therefore been proposed to scale with these demands by utilizing the spectrum efficiently, as they provide finer spectrum granularity and distance-adaptive modulation formats. Not only efficiency and scalability but also survivability of the network is significant, since even a single-link failure may cause the loss of a huge volume of data, considering that a single channel's bandwidth may vary between 1 Gb/s and 1 Tb/s. Hence, we propose a heuristic algorithm to increase spectrum efficiency in EONs employing Shared Path Protection (SPP) as the recovery scheme, given that the traffic demand is dynamic and the modulation format is distance adaptive. Our algorithm, Primary First-Fit Modified Backup Last-Fit (PF-MBL), follows a two-step approach to Routing and Spectrum Assignment (RSA). In the first step, a k-shortest-path algorithm is applied and candidate paths are found for routing, regardless of spectrum availability. In the second step, spectrum is assigned to working paths and backup paths starting from opposite ends of the links' frequency domain, so as to group working-path and backup-path resources separately. For working-path spectrum assignment, a First-Fit strategy is employed. For backup-path spectrum assignment, the algorithm chooses, according to a formula, among the candidate paths with available spectrum widths found by a Last-Fit strategy. In this manner, we expect to provide a less fragmented spectrum for the backup paths as well as the network as a whole, thereby increasing their shareability and thus the spectrum efficiency. We compare our algorithm with the two current solutions by simulation.
Results show that PF-MBL can improve performance in terms of blocking and bandwidth blocking probability by 24% to 59% compared to the best current algorithm when the bandwidth acceptance ratio of the system varies from 90% to 99.9% under different loads. Moreover, it achieves savings of 41% to 59% over the best current algorithm when the bandwidth acceptance ratio varies from 99% to 99.9%.
APA, Harvard, Vancouver, ISO, and other styles
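The First-Fit/Last-Fit slot searches at the heart of the abstract above can be sketched in a few lines. Representing the per-link spectrum as a boolean occupancy list is an assumption for illustration; the thesis's full algorithm additionally handles routing, sharing of backup slots, and the selection formula:

```python
def first_fit(spectrum, width):
    """Lowest-index contiguous block of `width` free slots (working paths).
    `spectrum` is a list of booleans, True meaning the slot is occupied."""
    for s in range(len(spectrum) - width + 1):
        if all(not spectrum[s + k] for k in range(width)):
            return s
    return None  # blocked: no block of that width is free

def last_fit(spectrum, width):
    """Highest-index contiguous block of `width` free slots (backup paths).
    Searching from the opposite end keeps working and backup resources
    grouped at opposite ends of the frequency domain, as PF-MBL intends."""
    for s in range(len(spectrum) - width, -1, -1):
        if all(not spectrum[s + k] for k in range(width)):
            return s
    return None
```

Because the two searches grow toward each other, fragmentation in the middle of the spectrum is deferred, which is what the abstract credits for the improved shareability of backup slots.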
2

Ganpaa, Gayatri. "An R*-Tree Based Semi-Dynamic Clustering Method for the Efficient Processing of Spatial Join in a Shared-Nothing Parallel Database System." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/298.

Full text
Abstract:
The growing importance of geospatial databases has made it essential to perform complex spatial queries efficiently. To achieve acceptable performance levels, database systems have been increasingly required to make use of parallelism. The spatial join is a computationally expensive operator. Efficient implementation of the join operator is, thus, desirable. The work presented in this document attempts to improve the performance of spatial join queries by distributing the data set across several nodes of a cluster and executing queries across these nodes in parallel. This document discusses a new parallel algorithm that implements the spatial join in an efficient manner. This algorithm is compared to an existing parallel spatial-join algorithm, the clone join. Both algorithms have been implemented on a Beowulf cluster and compared using real datasets. An extensive experimental analysis reveals that the proposed algorithm exhibits superior performance both in declustering time as well as in the execution time of the join query.
APA, Harvard, Vancouver, ISO, and other styles
3

Guler, Sevil. "Secure Bitcoin Wallet." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177587.

Full text
Abstract:
Virtual currencies and mobile banking are technology advancements that are receiving increased attention in the global community because of their accessibility, convenience and speed. However, this popularity comes with growing security concerns, such as the increasing frequency of identity theft, leading to bigger problems that put user anonymity at risk. One possible solution to these problems is using cryptography to enhance the security of Bitcoin or other decentralised digital currency systems and to decrease the frequency of attacks on either communication channels or system storage. This report outlines various methods and solutions targeting these issues and aims to understand their effectiveness. It also describes Secure Bitcoin Wallet, a standard Bitcoin transaction client enhanced with various security features and services.
APA, Harvard, Vancouver, ISO, and other styles
4

Blas, Foix Xavier de. "Proyecto Chronojump-Boscosystem. Herramienta informática libre para el estudio cinemático del salto vertical: medición del tiempo, detección del ángulo de flexión sin marcadores y elaboración de tablas de percentiles." Doctoral thesis, Universitat Ramon Llull, 2012. http://hdl.handle.net/10803/83302.

Full text
Abstract:
Measuring the height of the vertical jump is an indicator of the strength and power of the lower body. Many tools can be found for measuring this jump, either by using distance, duration, acceleration, or by filming. With the sole exception of the video analysis software Kinovea, the rest appear to be closed black boxes, which are impervious to inspection by third parties. In the Chronojump-Boscosystem project, we propose the creation of a FLOSS (Free/Libre/Open-Source Software) measurement system that consists of free hardware and software combined with data-sharing among stakeholders. Objectives: 1) To create, validate and distribute a FLOSS tool that measures the contact and flight times of the vertical jump by using a contact platform. 2) To develop and validate a FLOSS tool for measuring the angle of flexion of the knee joint prior to a CMJ (Counter Movement Jump) based on a two-dimensional image without using markers. 3) To develop a FLOSS tool integrated with the ones described above which facilitates data-sharing among reviewers in order to build percentile tables. Methodology: Regarding the first objective: A microcontroller was created using KiCad, and a square wave generator was employed to validate the results. The data-capture was compared with that of an oscilloscope. Two types of contact-platforms were developed using different materials: rigid fiberglass and flexible polyester foam. These platforms were validated by the minimum pressure required for activation at different points by a load cell, together with the on/off time of our platforms in respect of the Gold Standard by a sample of 8 subjects performing submaximal jumps with one foot on each platform. Management software was developed and validated according to the principles of agile methodologies. Volunteers were used to translate the software. Regarding the second objective: OpenCV was used to develop and validate tracking software. 
A linear prediction model of the knee flexion angle was developed from the analysis of the percentage height of the legs with respect to maximum extension, and the location of the patella with respect to the previous variable. This analysis used data obtained from the purpose-built tracking software, from a sample of 35 jumps by 13 subjects. The model was validated by comparison with the angle obtained from video frames not used in its training. Regarding the third objective: reliability criteria for the data were established based on group discussion. Software was developed to share data using web services. Results: All the technological tools have been created under FLOSS licenses and are, in that sense, free. The margin of error of the microcontroller is 0.1%. The validity of the fiberglass platform is 0.95 (ICC). The management software runs close to 110,000 lines of code and is available in 7 languages. The mean prediction error for knee flexion is 2.6°. To date, 3462 jumps from 751 people have been shared by 24 different professionals. 16 scientific publications by other authors using the tools developed during the project have been found. The FLOSS licenses granted allow any interested party to review the designed tools in depth, and to purchase and/or build them at low cost without violating any ethical or legal standards.
APA, Harvard, Vancouver, ISO, and other styles
5

Rajamani, Karthick. "Automatic data aggregation for software distributed shared memory systems." Thesis, 1997. http://hdl.handle.net/1911/17126.

Full text
Abstract:
Software Distributed Shared Memory (DSM) provides a shared-memory abstraction on distributed memory hardware, making a parallel programmer's task easier. Unfortunately, software DSM is less efficient than the direct use of the underlying message-passing hardware. The chief reason for this is that hand-coded and compiler-generated message-passing programs typically achieve better data aggregation in their messages than programs using software DSM. Software DSM has poorer data aggregation because the system lacks the knowledge of the application's behavior that a programmer or compiler analysis can provide. We propose four new techniques to perform automatic data aggregation in software DSM. Our techniques use run-time analysis of past data-fetch accesses made by a processor, to aggregate data movement for future accesses. They do not need any additional compiler support. We implemented our techniques in the TreadMarks software DSM system. We used a test suite of four applications--3D-FFT, Barnes-Hut, Ilink and Shallow. For these applications we obtained 40% to 66% reduction in message counts which resulted in 6% to 19% improvement in execution times.
APA, Harvard, Vancouver, ISO, and other styles
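The run-time aggregation idea in the abstract above, using past data-fetch accesses to aggregate future data movement, can be caricatured in a few lines. This is a toy sketch; TreadMarks' actual interval and page-fault machinery is far more involved, and `fetch_many` stands in for a hypothetical one-message bulk fetch:

```python
class AggregatingFetcher:
    """Toy run-time aggregation: remember which pages a processor fetched
    during the previous interval and request them together, in a single
    aggregated message, at the start of the next interval."""
    def __init__(self, fetch_many):
        self.fetch_many = fetch_many   # callable: fetch a set of pages in one message
        self.cur = set()

    def access(self, page):
        # record a data-fetch access observed in the current interval
        self.cur.add(page)

    def new_interval(self):
        # at an interval boundary, prefetch last interval's pages in bulk
        prev, self.cur = self.cur, set()
        if prev:
            self.fetch_many(sorted(prev))
```

The payoff the abstract reports (40% to 66% fewer messages) comes from replacing many per-page fetch messages with one bulk request per interval, at the cost of occasionally prefetching pages that are not reused.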
6

Mihailescu, Madalin. "Low-cost Data Analytics for Shared Storage and Network Infrastructures." Thesis, 2013. http://hdl.handle.net/1807/35909.

Full text
Abstract:
Data analytics used to depend on specialized, high-end software and hardware platforms. Recent years, however, have brought forth the data-flow programming model, i.e., MapReduce, and with it a flurry of sturdy, scalable open-source software solutions for analyzing data. In essence, the commoditization of software frameworks for data analytics is well underway. Yet, up to this point, data analytics frameworks are still regarded as standalone, dedicated components; deploying these frameworks requires companies to purchase hardware to meet storage and network resource demands, and system administrators to handle management of data across multiple storage systems. This dissertation explores the low-cost integration of frameworks for data analytics within existing, shared infrastructures. The thesis centers on smart software being the key enabler for holistic commoditization of data analytics. We focus on two instances of smart software that aid in realizing the low-cost integration objective. For an efficient storage integration, we build MixApart, a scalable data analytics framework that removes the dependency on dedicated storage for analytics; with MixApart, a single, consolidated storage back-end manages data and services all types of workloads, thereby lowering hardware costs and simplifying data management. We evaluate MixApart at scale with micro-benchmarks and production workload traces, and show that MixApart provides faster or comparable performance to an analytics framework with dedicated storage. For an effective sharing of the networking infrastructure, we implement OX, a virtual machine management framework that allows latency-sensitive web applications to share the data center network with data analytics through intelligent VM placement; OX further protects all applications from hardware failures.
The two solutions allow the reuse of existing storage and networking infrastructures when deploying analytics frameworks, and substantiate our thesis that smart software upgrades can enable the end-to-end commoditization of analytics.
APA, Harvard, Vancouver, ISO, and other styles
7

Pranavadatta, DN. "Checking Compatability of Programs on Shared Data." Thesis, 2011. http://etd.iisc.ac.in/handle/2005/3899.

Full text
Abstract:
A large software system is built by composing multiple programs, possibly developed independently. The component programs communicate by sharing data. Data sharing involves the creation of instances of the shared data by one program, called the producer, and their interpretation by another program, called the consumer. Valid instances of shared data and their correct interpretation are usually specified by a protocol or standard that governs the communication. If a consumer misinterprets or does not handle some instances of data produced by a producer, it is called a data compatibility bug. Such bugs manifest as various forms of runtime errors that are difficult to find and fix. In this work, we define various compatibility relations, between both producer-consumer programs and version-related programs, that characterize various subtle requirements for correct sharing of data. We design and implement a static analysis to infer types and guards over elements of shared data, and the results are used for automatic compatibility checking. As case studies, we consider two widely used kinds of shared data: the TIFF structure, used to store TIFF directory attributes in memory, and the IEEE 802.11 MAC frame header, which forms the layer-2 header in wireless LAN communication. We analyze and check the compatibility of 6 pairs of producer-consumer programs drawn from the transmit-receive code of Linux WLAN drivers from 3 different vendors. In the setting of version-related programs, we analyze a total of 48 library and utility routines from 2 pairs of TIFF image library (libtiff) versions. We successfully identify 5 known bugs and 1 new bug. For two of the known bugs, fixes are available and we verify that they resolve the compatibility issues.
APA, Harvard, Vancouver, ISO, and other styles
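The core compatibility relation in the abstract above (a consumer must interpret every instance a producer can emit) reduces, once guards have been inferred, to a set-containment check per shared field. The static analysis that infers these sets is the thesis's hard part; the sketch below takes them as given, and the field names are hypothetical:

```python
def compatibility_gaps(producer_emits, consumer_handles):
    """Toy data-compatibility check: for each shared field, every value the
    producer can emit must be one the consumer handles; any leftover values
    are potential data compatibility bugs. Inputs map field name -> value set."""
    gaps = {}
    for field, emitted in producer_emits.items():
        handled = consumer_handles.get(field, set())
        missing = set(emitted) - set(handled)
        if missing:
            gaps[field] = sorted(missing)
    return gaps  # empty dict means the pair is compatible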
9

Chakraborty, Abhirup. "Processing Exact Results for Queries over Data Streams." Thesis, 2010. http://hdl.handle.net/10012/5048.

Full text
Abstract:
In a growing number of information-processing applications, such as network-traffic monitoring, sensor networks, financial analysis, and data mining for e-commerce, data takes the form of continuous data streams rather than traditional stored databases/relational tuples. These applications share some common features: the need for real-time analysis, huge volumes of data, and unpredictable, bursty arrivals of stream elements. In all of these applications, it is infeasible to process queries over data streams by loading the data into a traditional database management system (DBMS) or into main memory; such an approach does not scale with high stream rates. As a consequence, systems that can manage streaming data have gained tremendous importance. The need to process a large number of continuous queries over bursty, high-volume online data streams, potentially in real time, makes it imperative to design algorithms that use limited resources. This dissertation focuses on producing exact results for join queries over high-speed data streams using limited resources, and proposes several novel techniques for processing join queries that incorporate secondary storage and non-dedicated computers. Existing approaches to stream joins either (a) deal with memory limitations by shedding load, and therefore cannot produce exact or highly accurate results for stream joins over data streams with time-varying arrivals of stream tuples, or (b) suffer from large I/O overheads due to random disk accesses. The proposed techniques exploit the high bandwidth of a disk subsystem by rendering the data access pattern largely sequential, eliminating small, random disk accesses. This dissertation proposes an I/O-efficient algorithm to process hybrid join queries that join a fast, time-varying or bursty data stream and a persistent disk relation. Such a hybrid join is the crux of a number of common transformations in an active data warehouse.
Experimental results demonstrate that the proposed scheme reduces output response time by exploiting spatio-temporal locality within the input stream, and minimizes disk overhead through disk-I/O amortization. The dissertation also proposes an algorithm to parallelize a stream join operator over a shared-nothing system. The proposed algorithm distributes the processing load across a number of independent, non-dedicated nodes, based on a fixed or predefined communication pattern; dynamically maintains the degree of declustering in order to minimize communication and processing overheads; and presents mechanisms for reducing storage and communication overheads while scaling over a large number of nodes. We present experimental results showing the efficacy of the proposed algorithms.
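The batched, scan-based idea behind such stream–disk joins can be sketched in a few lines. This is an illustrative simplification under assumed data layouts (lists of dicts standing in for stream batches and disk blocks), not the dissertation's algorithm: bursty stream tuples are buffered into a batch, indexed in memory, and the relation is scanned sequentially once per batch, so each disk pass is amortized over many stream tuples instead of issuing small random accesses.

```python
from collections import defaultdict

def hybrid_join(stream_batches, relation_blocks, key):
    """Join buffered stream batches against a disk-resident relation.

    Buffering amortizes each sequential scan of the relation over a
    whole batch of stream tuples, avoiding small random disk accesses.
    """
    results = []
    for batch in stream_batches:          # bursty arrivals buffered into batches
        index = defaultdict(list)         # in-memory hash index on the join key
        for t in batch:
            index[t[key]].append(t)
        for block in relation_blocks:     # one sequential scan per batch
            for r in block:
                for t in index.get(r[key], ()):
                    results.append({**t, **r})
    return results
```

For example, joining one batch of two stream tuples against a two-block relation on key `"k"` emits one merged result tuple per match, in relation-scan order.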
APA, Harvard, Vancouver, ISO, and other styles
10

Masson, Constantin. "Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency." Thèse, 2018. http://hdl.handle.net/1866/22532.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Shared Data Software"

1

McCarthy, Michele, ed. Software for your head: Core protocols for creating and maintaining shared vision. Boston, MA: Addison-Wesley, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

SHARE, ed. Proceedings of SHARE 73: August 20-25, 1989, the Peabody Orlando, Orlando, Florida. Chicago, Ill: SHARE, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

National Credit Union Share Insurance Fund (U.S.), ed. The year 2000 date change: What the year 2000 date change means to you and your insured credit union. [Washington, D.C.?]: NCUA, National Credit Union Share Insurance Fund, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

McCarthy, Michele, and Jim McCarthy. Software for Your Head: Core Protocols for Creating and Maintaining Shared Vision. Addison-Wesley Professional, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Full text
Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm, to a parallel solution algorithm and an actual parallel program written in the communication library BSPlib. Numerical experiments are presented for parallel programs on modern parallel computers ranging from desktop computers to massively parallel supercomputers. The introductory chapter of the book gives a complete overview of BSPlib, so that the reader is able to write his/her own parallel programs at an early stage. Furthermore, it treats BSP benchmarking and parallel sorting by regular sampling. The next three chapters treat basic numerical linear algebra problems such as linear system solving by LU decomposition, sparse matrix-vector multiplication (SpMV), and the fast Fourier transform (FFT). The final chapter explores parallel algorithms for big data problems such as graph matching. The book is accompanied by a software package BSPedupack, freely available online from the author’s homepage, which contains all programs of the book and a set of test programs.
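The superstep structure of the BSP model described above (local computation, then communication, then a barrier synchronization) can be illustrated with a sequentially simulated inner product. This sketch only mimics the pattern of a BSPlib program; it does not use BSPlib itself, and the block distribution is an assumption for illustration:

```python
def bsp_inner_product(x, y, p):
    """BSP-style inner product of vectors x and y on p simulated processors.

    Superstep 1: each processor computes a partial sum over its block of
    the data. After an (implicit) barrier, superstep 2 exchanges all
    partial sums so that every processor holds the full result.
    """
    n = len(x)
    # Superstep 1: local computation on block-distributed data
    partials = []
    for s in range(p):                    # iteration s plays processor s
        lo, hi = s * n // p, (s + 1) * n // p
        partials.append(sum(xi * yi for xi, yi in zip(x[lo:hi], y[lo:hi])))
    # Barrier, then superstep 2: all-to-all exchange of partial sums
    return [sum(partials)] * p            # every processor holds the result
```

In a real BSPlib program the exchange step would be expressed with communication primitives and an explicit synchronization call rather than a shared Python list.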
APA, Harvard, Vancouver, ISO, and other styles
6

Wright, Dawn J., and Christian Harder, eds. GIS for Science, Volume 3: Maps for Saving the Planet. Esri Press, 2021. http://dx.doi.org/10.17128/9781589486713.

Full text
Abstract:
GIS for Science: Maps for Saving the Planet, Volume 3, highlights real-world examples of scientists creating maps about saving life on Earth and preserving biodiversity. With Earth and the natural world at risk from various forces, geographic information system (GIS) mapping is essential for driving scientifically conscious decision-making about how to protect life on Earth. In volume 3 of GIS for Science, explore a collection of maps from scientists working to save the planet through documenting and protecting its biodiversity. In this volume, learn how GIS and data mapping are used in tandem with global satellite observation, forestry, marine policy, artificial intelligence, conservation biology, and environmental education to help preserve and chronicle life on Earth. This volume also spotlights important global action initiatives incorporating conservation, including Half-Earth, 30 x 30, AI for Earth, the Blue Nature Alliance, and the Sustainable Development Solutions Network. The stories presented in this third volume are ideal for the professional scientist and conservationist and anyone interested in the intersection of technology and the conservation of nature. The book’s contributors include scientists who are applying geographic data gathered from the full spectrum of remote sensing and on-site technologies. The maps and data are brought to life using ArcGIS® software and other spatial data science tools that support research, collaboration, spatial analysis, and science communication across many locations and within diverse communities. The stories shared in this book and its companion website present inspirational ideas so that GIS users and scientists can work toward preserving biodiversity and saving planet Earth before time runs out.
APA, Harvard, Vancouver, ISO, and other styles
7

SAS Institute. Communications Access Methods for SAS/CONNECT 9.1 and SAS/SHARE 9.1. SAS, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Reengineering the customer registration process for earnings, growth, and higher share of customer: Best practice models and prescriptives for success in the aftermarket for software, information, and interactive entertainment. Larkspur, Calif: GISTICS Incorporated, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guide, NAS. DIY NAS Guide: NAS Configuration Guide with Open Source Software on Raspberry Pi or PC for Network Hard Disk Drive, Backup and Data Share. A lot of screenshots. Independently Published, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Advances and Applications of DSmT for Information Fusion (Collected works). Am. Res. Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Shared Data Software"

1

Codd, E. F. "A Relational Model of Data for Large Shared Data Banks." In Software Pioneers, 263–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-59412-0_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Codd, E. F. "A Relational Model of Data for Large Shared Data Banks." In Pioneers and Their Contributions to Software Engineering, 61–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-48354-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dobson, S., and C. P. Wadsworth. "Towards a theory of shared data in distributed systems." In Software Engineering for Parallel and Distributed Systems, 170–82. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-0-387-34984-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Keerup, Kalmer, Dan Bogdanov, Baldur Kubo, and Per Gunnar Auran. "Privacy-Preserving Analytics, Processing and Data Management." In Big Data in Bioeconomy, 157–68. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_12.

Full text
Abstract:
Typically, data cannot be shared among competing organizations due to confidentiality or regulatory restrictions. We present several technological alternatives to solve the problem: secure multi-party computation (MPC), trusted execution environments (TEE) and multi-key fully homomorphic encryption (MKFHE). We compare these privacy-enhancing technologies from a deployment and performance point of view and explain how we selected the technology and machine learning methods. We introduce a demonstrator built in the DataBio project for securely combining private and public data for planning of fisheries. The secure machine learning of best catch locations is a web solution utilizing Intel® Software Guard Extensions (Intel® SGX)-based TEE and built with the Sharemind HI (Hardware Isolation) development tools. Knowing where to go fishing is a competitive advantage that a fishery is not interested in sharing with competitors. Therefore, joint intelligence from public and private sector data while protecting secrets of each contributing organization is an important enabler. Finally, we discuss the wider business impact of secure machine learning in situations where data confidentiality is a concern.
APA, Harvard, Vancouver, ISO, and other styles
5

Takahashi, Kazumasa, and Shinji Sugawara. "In-Advance Replica Arrangement of Shared Data over Hybrid Peer-to-Peer Network According to Users’ Locations and Preferences." In Complex, Intelligent and Software Intensive Systems, 548–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50454-0_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ahle, Ulrich, and Juan Jose Hierro. "FIWARE for Data Spaces." In Designing Data Spaces, 395–417. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_24.

Full text
Abstract:
This chapter describes how smart applications from multiple domains can participate in the creation of data spaces based on FIWARE software building blocks. Smart applications participating in such data spaces share digital twin data in real time using a common standard API like NGSI-LD and relying on standard data models. Each smart solution contributes to building a complete digital twin data representation of the real world by sharing its data. At the same time, they can exploit data shared by other applications. Relying on FIWARE Data Marketplace components, smart applications can publish data under concrete terms and conditions which include pricing or data usage/access policies. A federated cloud infrastructure and mechanisms supporting data sovereignty are necessary to create data spaces. However, additional elements have to be added to ease the creation of data value chains and the materialization of a data economy. Standard APIs, combined with standard data models, are crucial to support effective data exchange enabling loose coupling between parties as well as reusability and replaceability of data resources and applications. Similarly, data spaces need to incorporate mechanisms for publication, discovery, and trading of data resources. These are elements that FIWARE implements, and they can be combined with IDSA architecture elements like the IDS Connector to create data spaces supporting trusted and effective data sharing. The GAIA-X project, started in 2020, is aimed at creating a federated form of data infrastructure in Europe which strengthens the ability to both access and share data securely and confidently. FIWARE is bringing mature technologies, compatible with IDS and CEF Building Blocks, which will accelerate the delivery of GAIA-X to the market.
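For a concrete sense of the NGSI-LD data model the chapter refers to, an entity pairs a URN identifier and a type with attributes that are themselves typed (Property, GeoProperty, Relationship). The following sketch, with invented identifiers and values (not taken from any FIWARE deployment), shows roughly what such a digital-twin entity looks like as a Python dict:

```python
# A hypothetical NGSI-LD-style digital-twin entity. Identifiers and
# values are invented for illustration.
parking_spot = {
    "id": "urn:ngsi-ld:ParkingSpot:example:spot-42",
    "type": "ParkingSpot",
    "status": {                    # attributes carry their own type metadata
        "type": "Property",
        "value": "free",
        "observedAt": "2024-01-01T12:00:00Z",
    },
    "location": {
        "type": "GeoProperty",     # geo-located attributes use GeoJSON values
        "value": {"type": "Point", "coordinates": [-3.80356, 43.46296]},
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}
```

In practice such an entity would be serialized as JSON-LD and published to a context broker over the NGSI-LD API; the `@context` entry is what lets different applications agree on the meaning of attribute names.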
APA, Harvard, Vancouver, ISO, and other styles
7

Hori, Masakazu, Yoichi Shinoda, and Koichiro Ochimizu. "Shared data management mechanism for distributed software development based on a reflective object-oriented model." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 362–82. Cham: Springer International Publishing, 1996. http://dx.doi.org/10.1007/3-540-61292-0_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Ze Shi, and Colin Werner. "Ongoing Challenges and Solutions of Managing Data Privacy for Smart Cities." In Smart Cities in Asia, 23–32. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1701-1_3.

Full text
Abstract:
Smart cities represent the epitome of utilizing data sourced from sensors and devices in a city to make informed decisions. Facilitating the massive breadth of data are millions and billions of “smart” devices interconnected through high-speed telecommunication networks, so naturally software organizations began specializing in various parts of the smart city data spectrum. In a smart city, new business opportunities are created for software organizations to process, manage, utilize, examine, and generate data. While smart cities support the ability to make rational and prudent decisions based on real data, the privacy of the data cannot be overlooked. In particular, there are privacy challenges regarding the collection, analysis, and dissemination of data. More precisely, we recognize that there are a multitude of challenges facing software organizations, which include obtaining a shared understanding of privacy and achieving compliance with privacy regulations.
APA, Harvard, Vancouver, ISO, and other styles
9

Holfelder, Wieland, Andreas Mayer, and Thomas Baumgart. "Sovereign Cloud Technologies for Scalable Data Spaces." In Designing Data Spaces, 419–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_25.

Full text
Abstract:
The cloud has changed the way we consume technology either as individual users or in a business context. However, cloud computing can only transform organizations, create innovation, or provide the ability to scale digital business models if there is trust in the cloud and if the data that is being generated, processed, exchanged, and stored in the cloud has the appropriate safeguards. Therefore, sovereignty and control over data and its protection are paramount. Data spaces provide organizations with additional capabilities to govern strict data usage rules over the whole life cycle of information sharing with others and enable new use cases and new business models where data can be securely shared among a defined set of collaborators and with clear and enforceable usage rights attached to create new value. Open and sovereign cloud technologies will provide the necessary transparency, control, and the highest levels of privacy and security that are required to fully leverage the potential of such data spaces. Digital sovereignty, however, still means many things to many people. So to make it more concrete, in this article, we will look at digital sovereignty across three layers: data sovereignty, operational sovereignty, and software sovereignty. With these layers, we will create a spectrum of solutions that enable scalable data spaces that will be critical for the digital transformation of the European economy.
APA, Harvard, Vancouver, ISO, and other styles
10

Bux, Tobias, Oliver Riedel, and Armin Lechler. "Security Analysis of a Blockchain Based Data Collection Method for Cross Company Information Sharing." In Advances in Automotive Production Technology – Towards Software-Defined Manufacturing and Resilient Supply Chains, 230–39. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27933-1_22.

Full text
Abstract:
Digitization within medium-sized enterprises has advanced in recent years. Collecting and analyzing data to optimize internal production processes is therefore the current state of many companies. The next step of digitization is using this collected data not only for internal processes but for cross-company business models along the value network. This step brings new requirements for how data is collected, stored and shared. In this paper those requirements are listed and explained. Afterwards, an implemented solution for data collection fulfilling the requirements is analyzed. The focus of the analysis lies on security issues within the data flow between data creation and cross-company usage. Therefore, the timespan between data creation on a sensor, processing the data within local IT systems and reliably storing data within a blockchain is considered. A threat modeling approach considering attack vectors along the described data flow is used to quantitatively compare the proposed solution to regular industrial solutions. The analysis highlights the differences between the compared solutions on topics like data integrity and immutability. Lastly, an outlook on industrial usage of the analyzed solution is given.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Shared Data Software"

1

Tudor, Dacian, and Vladimir Cretu. "Experiences on Grid Shared Data Programming." In 2008 International Conference on Complex, Intelligent and Software Intensive Systems. IEEE, 2008. http://dx.doi.org/10.1109/cisis.2008.118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tudor, Dacian, Georgiana Macariu, Wolfgang Schreiner, and Vladimir Cretu. "Shared Data Grid Programming Improvements Using Specialized Objects." In 2010 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS). IEEE, 2010. http://dx.doi.org/10.1109/cisis.2010.35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"PARALLEL PROCESSING OF ”GROUP-BY JOIN” QUERIES ON SHARED NOTHING MACHINES." In 1st International Conference on Software and Data Technologies. SciTePress - Science and Technology Publications, 2006. http://dx.doi.org/10.5220/0001316003010307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"PIPELINED PARALLELISM IN MULTI-JOIN QUERIES ON HETEROGENEOUS SHARED NOTHING ARCHITECTURES." In 3rd International Conference on Software and Data Technologies. SciTePress - Science and Technology Publications, 2008. http://dx.doi.org/10.5220/0001889901270134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bilgin, Enes, and Tolga Ovatman. "Coordinated Access to Shared Data Sources for Geo-replicated State Machines." In 17th International Conference on Software Technologies. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011269600003266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Haiyang, Tingting Hu, and Zongyan Qiu. "Automatic fine-grained locking generation for shared data structures." In 2017 International Symposium on Theoretical Aspects of Software Engineering (TASE). IEEE, 2017. http://dx.doi.org/10.1109/tase.2017.8285633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cecchinel, Cyril, Sebastien Mosser, and Philippe Collet. "Automated Deployment of Data Collection Policies over Heterogeneous Shared Sensing Infrastructures." In 2016 23rd Asia-Pacific Software Engineering Conference (APSEC). IEEE, 2016. http://dx.doi.org/10.1109/apsec.2016.053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gornish, Edward, and Alexander Veidenbaum. "An Integrated Hardware/Software Data Prefetching Scheme for Shared-Memory Multiprocessors." In 1994 International Conference on Parallel Processing (ICPP'94). IEEE, 1994. http://dx.doi.org/10.1109/icpp.1994.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Morris, Donald G., and David K. Lowenthal. "Accurate data redistribution cost estimation in software distributed shared memory systems." In the eighth ACM SIGPLAN symposium. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/379539.379570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Bo, Weilin Wang, and Xipeng Shen. "Software-level scheduling to exploit non-uniformly shared data cache on GPGPU." In the ACM SIGPLAN Workshop. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2492408.2492421.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Shared Data Software"

1

Juden, Matthew, Tichaona Mapuwei, Till Tietz, Rachel Sarguta, Lily Medina, Audrey Prost, Macartan Humphreys, et al. Process Outcome Integration with Theory (POInT): academic report. Centre for Excellence and Development Impact and Learning (CEDIL), March 2023. http://dx.doi.org/10.51744/crpp5.

Full text
Abstract:
This paper describes the development and testing of a novel approach to evaluating development interventions – the POInT approach. The authors used Bayesian causal modelling to integrate process and outcome data to generate insights about all aspects of the theory of change, including outcomes, mechanisms, mediators and moderators. They partnered with two teams who had evaluated or were evaluating complex development interventions: The UPAVAN team had evaluated a nutrition-sensitive agriculture intervention in Odisha, India, and the DIG team was in the process of evaluating a disability-inclusive poverty graduation intervention in Uganda. The partner teams’ theory of change was adapted into a formal causal model, depicted as a directed acyclic graph (DAG). The DAG was specified in the statistical software R, using the CausalQueries package, after extending the package to handle large models. Using a novel elicitation strategy covering many more parameters than has previously been possible, the partner teams’ beliefs about the nature and strength of causal links in the causal model (priors) were elicited and combined into a single set of shared prior beliefs. The model was updated on data alone as well as on data plus priors to generate posterior models under different assumptions. Finally, the prior and posterior models were queried to learn about estimates of interest, and the relative role of prior beliefs and data in the combined analysis.
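The core mechanic the abstract describes, updating a prior model on data to obtain a posterior, can be illustrated with the simplest conjugate case. This beta-binomial sketch is a generic illustration of Bayesian updating, not the CausalQueries machinery or the POInT model; the parameter names are invented:

```python
def posterior_beta(prior_a, prior_b, successes, trials):
    """Combine a Beta(prior_a, prior_b) prior with binomial data
    (successes out of trials) via conjugate updating."""
    return prior_a + successes, prior_b + (trials - successes)

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)
```

For instance, a weakly informative Beta(2, 2) prior on the probability that a causal link is active, updated on 8 positive cases out of 10, yields a Beta(10, 4) posterior; comparing its mean to the prior mean shows how much the data moved the elicited belief.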
APA, Harvard, Vancouver, ISO, and other styles
2

Park, Donghyun, and Kwanho Shin. Technology and Wage Share of Older Workers. Asian Development Bank, May 2023. http://dx.doi.org/10.22617/wps230088-2.

Full text
Abstract:
This paper examines the impact of technological change on the wage share of older workers, using data from 30 countries experiencing population aging. It finds that recent technological developments centered on information and communication technology, software, and robots do not adversely affect older workers. This suggests that older workers may be more open to learning and adopting new technologies than widely presumed.
APA, Harvard, Vancouver, ISO, and other styles
3

Harris, Melissa, and Alexia Pretari. Going Digital – Computer-Assisted Telephone Interviewing (CATI): Lessons learned from a pilot study. Oxfam GB, May 2021. http://dx.doi.org/10.21201/2021.7581.

Full text
Abstract:
In this sixth instalment of the Going Digital Series, we share our experiences of using computer-assisted telephone interviewing (CATI) software, which was researched and piloted following the outbreak of COVID-19 and the subsequent need for improved remote data collection practices. CATI is a survey technique in which interviews are conducted via a phone call, using an electronic device to follow a survey script and enter the information collected. This paper looks at the experience of piloting the technique in phone interviews with women in Kirkuk Governorate, Iraq.
APA, Harvard, Vancouver, ISO, and other styles
4

Albanesi, Stefania, António Dias da Silva, Juan F. Jimeno, Ana Lamo, and Alena Wabitsch. New technologies and jobs in Europe. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33414.

Full text
Abstract:
We examine the link between labour market developments and new technologies such as artificial intelligence (AI) and software in 16 European countries over the period 2011-2019. Using data for occupations at the 3-digit level in Europe, we find that on average employment shares have increased in occupations more exposed to AI. This is particularly the case for occupations with a relatively higher proportion of younger and skilled workers. This evidence is in line with the Skill-Biased Technological Change theory. While there is heterogeneity across countries, very few countries show a decline in the employment shares of occupations more exposed to AI-enabled automation. Country heterogeneity for this result appears to be linked to the pace of technology diffusion and education, but also to the level of product market regulation (competition) and employment protection laws. In contrast to the findings for employment, we find little evidence for any correlation between wages and potential exposures to new technologies.
APA, Harvard, Vancouver, ISO, and other styles
5

Duque, Earl, Steve Legensky, Brad Whitlock, David Rogers, Andrew Bauer, Scott Imlay, David Thompson, and Seiji Tsutsumi. Summary of the SciTech 2020 Technical Panel on In Situ/In Transit Computational Environments for Visualization and Data Analysis. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/40887.

Full text
Abstract:
At the AIAA SciTech 2020 conference, the Meshing, Visualization and Computational Environments Technical Committee hosted a special technical panel on In Situ/In Transit Computational Environments for Visualization and Data Analytics. The panel brought together leading experts from industry, software vendors, the Department of Energy, the Department of Defense and the Japan Aerospace Exploration Agency (JAXA). In situ and in transit methodologies enable computational fluid dynamics (CFD) simulations to avoid the excessive overhead associated with data I/O at large scales, especially as simulations scale to millions of processors. These methods either share the data analysis/visualization pipelines with the memory space of the solver or efficiently offload the workload to alternate processors. Using these methods, simulations can scale and have the promise of enabling the community to satisfy the Knowledge Extraction milestones as envisioned by the CFD Vision 2030 study for "on demand analysis/visualization of a 100 Billion point unsteady CFD simulation". This paper summarizes the presentations, providing a discussion of how the community can achieve the goals set forth in the CFD Vision 2030.
APA, Harvard, Vancouver, ISO, and other styles
6

Coulson, Saskia, Melanie Woods, Drew Hemment, and Michelle Scott. Report and Assessment of Impact and Policy Outcomes Using Community Level Indicators: H2020 Making Sense Report. University of Dundee, 2017. http://dx.doi.org/10.20933/100001192.

Full text
Abstract:
Making Sense is a European Commission H2020 funded project which aims at supporting participatory sensing initiatives that address environmental challenges in areas such as noise and air pollution. The development of Making Sense was informed by previous research on a crowdfunded open source platform for environmental sensing, SmartCitizen.me, developed at the Fab Lab Barcelona. Insights from this research identified several deterrents to wider uptake of participatory sensing initiatives due to social and technical matters. For example, the participants struggled with the lack of social interactions, a lack of consensus and shared purpose amongst the group, and a limited understanding of the relevance the data had in their daily lives (Balestrini et al., 2014; Balestrini et al., 2015). As such, Making Sense seeks to explore if open source hardware, open source software and open design can be used to enhance data literacy and maker practices in participatory sensing. Further to this, Making Sense tests methodologies aimed at empowering individuals and communities through developing a greater understanding of their environments and by supporting a culture of grassroots initiatives for action and change. To do this, Making Sense identified a need to underpin sensing with community building activities and develop strategies to inform and enable those participating in data collection with appropriate tools and skills. As Fetterman, Kaftarian and Wanderman (1996) state, citizens are empowered when they understand evaluation and connect it in a way that has relevance to their lives. Therefore, this report examines the role that these activities have in participatory sensing. Specifically, we discuss the opportunities and challenges in using the concept of Community Level Indicators (CLIs), which are measurable and objective sources of information gathered to complement sensor data. 
We describe how CLIs are used to develop a more in-depth understanding of the environmental problem at hand, and to record, monitor and evaluate the progress of change during initiatives. We propose that CLIs provide one way to move participatory sensing beyond a primarily technological practice and towards a social and environmental practice. This is achieved through an increased focus on the participants’ interests and concerns, and with an emphasis on collective problem solving and action. We position our claims against the following four challenge areas in participatory sensing: 1) generating and communicating information and understanding (c.f. Loreto, 2017), 2) analysing and finding relevance in data (c.f. Becker et al., 2013), 3) building community around participatory sensing (c.f. Fraser et al., 2005), and 4) achieving or monitoring change and impact (c.f. Cheadle et al., 2000). We discuss how the use of CLIs can tend to these challenges. Furthermore, we report and assess six ways in which CLIs can address these challenges and thereby support participatory sensing initiatives: (i) accountability, (ii) community assessment, (iii) short-term evaluation, (iv) long-term evaluation, (v) policy change, and (vi) capability. The report then returns to the challenge areas and reflects on the learnings and recommendations that are gleaned from three Making Sense case studies. After this, there is an exposition of approaches and tools developed by Making Sense for the purposes of advancing participatory sensing in this way. Lastly, the authors speak to some of the policy outcomes that have been realised as a result of this research.
APA, Harvard, Vancouver, ISO, and other styles
7

Eastman, Brittany. Legal Issues Facing Automated Vehicles, Facial Recognition, and Privacy Rights. SAE International, July 2022. http://dx.doi.org/10.4271/epr2022016.

Full text
Abstract:
Facial recognition software (FRS) is a form of biometric security that detects a face, analyzes it, converts it to data, and then matches it with images in a database. This technology is currently being used in vehicles for safety and convenience features, such as detecting driver fatigue, ensuring ride share drivers are wearing a face covering, or unlocking the vehicle. Public transportation hubs can also use FRS to identify missing persons, intercept domestic terrorism, deter theft, and achieve other security initiatives. However, biometric data is sensitive and there are numerous remaining questions about how to implement and regulate FRS in a way that maximizes its safety and security potential while simultaneously ensuring individuals’ rights to privacy, data security, and technology-based equality. Legal Issues Facing Automated Vehicles, Facial Recognition, and Individual Rights seeks to highlight the benefits of using FRS in public and private transportation technology and addresses some of the legitimate concerns regarding its use by private corporations and government entities, including law enforcement, in public transportation hubs and traffic stops. Constitutional questions, including First, Fourth, and Ninth Amendment issues, also remain unanswered. FRS is now a permanent part of transportation technology and society; with meaningful legislation and conscious engineering, it can make future transportation safer and more convenient.
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Yingjie, Selim Gunay, and Khalid Mosalam. Hybrid Simulations for the Seismic Evaluation of Resilient Highway Bridge Systems. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/ytgv8834.

Full text
Abstract:
Bridges often serve as key links in local and national transportation networks. Bridge closures can result in severe costs, not only in the form of repair or replacement, but also in the form of economic losses related to medium- and long-term interruption of businesses and disruption to surrounding communities. In addition, continuous functionality of bridges is very important after any seismic event for emergency response and recovery purposes. Considering the importance of these structures, the associated structural design philosophy is shifting from collapse prevention to maintaining functionality in the aftermath of moderate to strong earthquakes, referred to as "resiliency" in earthquake engineering research. Moreover, the associated construction philosophy is being modernized with the utilization of accelerated bridge construction (ABC) techniques, which strive to reduce the impact of construction on traffic, society, economy, and on-site safety. This report presents two bridge systems that target the aforementioned issues. A study that combined numerical and experimental research was undertaken to characterize the seismic performance of these bridge systems. The first part of the study focuses on the structural system-level response of highway bridges that incorporate a class of innovative connecting devices called the "V-connector," which can be used to connect two components in a structural system, e.g., the column and the bridge deck, or the column and its foundation. This device, designed by ACII, Inc., results in an isolation surface at the connection plane via a connector rod placed in a V-shaped tube that is embedded into the concrete. Energy dissipation is provided by friction between a special washer located around the V-shaped tube and a top plate.
Because of the period elongation due to the isolation layer and the limited amount of force transferred by the relatively flexible connector rod, bridge columns are protected from experiencing damage, thus leading to improved seismic behavior. The V-connector system also facilitates the ABC by allowing on-site assembly of prefabricated structural parts including those of the V-connector. A single-column, two-span highway bridge located in Northern California was used for the proof-of-concept of the proposed V-connector protective system. The V-connector was designed to result in an elastic bridge response based on nonlinear dynamic analyses of the bridge model with the V-connector. Accordingly, a one-third scale V-connector was fabricated based on a set of selected design parameters. A quasi-static cyclic test was first conducted to characterize the force-displacement relationship of the V-connector, followed by a hybrid simulation (HS) test in the longitudinal direction of the bridge to verify the intended linear elastic response of the bridge system. In the HS test, all bridge components were analytically modeled except for the V-connector, which was simulated as the experimental substructure in a specially designed and constructed test setup. Linear elastic bridge response was confirmed according to the HS results. The response of the bridge with the V-connector was compared against that of the as-built bridge without the V-connector, which experienced significant column damage. These results justified the effectiveness of this innovative device. The second part of the study presents the HS test conducted on a one-third scale two-column bridge bent with self-centering columns (broadly defined as “resilient columns” in this study) to reduce (or ultimately eliminate) any residual drifts. The comparison of the HS test with a previously conducted shaking table test on an identical bridge bent is one of the highlights of this study. 
The concept of resiliency was incorporated in the design of the bridge bent columns characterized by a well-balanced combination of self-centering, rocking, and energy-dissipating mechanisms. This combination is expected to lead to minimum damage and low levels of residual drifts. The ABC is achieved by utilizing precast columns and end members (cap beam and foundation) through an innovative socket connection. In order to conduct the HS test, a new hybrid simulation system (HSS) was developed, utilizing commonly available software and hardware components in most structural laboratories including: a computational platform using Matlab/Simulink [MathWorks 2015], an interface hardware/software platform dSPACE [2017], and MTS controllers and data acquisition (DAQ) system for the utilized actuators and sensors. Proper operation of the HSS was verified using a trial run without the test specimen before the actual HS test. In the conducted HS test, the two-column bridge bent was simulated as the experimental substructure while modeling the horizontal and vertical inertia masses and corresponding mass proportional damping in the computer. The same ground motions from the shaking table test, consisting of one horizontal component and the vertical component, were applied as input excitations to the equations of motion in the HS. Good matching was obtained between the shaking table and the HS test results, demonstrating the appropriateness of the defined governing equations of motion and the employed damping model, in addition to the reliability of the developed HSS with minimum simulation errors. The small residual drifts and the minimum level of structural damage at large peak drift levels demonstrated the superior seismic response of the innovative design of the bridge bent with self-centering columns. 
The reliability of the developed HS approach motivated performing a follow-up HS study focusing on the transverse direction of the bridge, where the entire two-span bridge deck and its abutments represented the computational substructure, while the two-column bridge bent was the physical substructure. This investigation was effective in shedding light on the system-level performance of the entire bridge system that incorporated innovative bridge bent design beyond what can be achieved via shaking table tests, which are usually limited by large-scale bridge system testing capacities.
APA, Harvard, Vancouver, ISO, and other styles