Dissertations / Theses on the topic 'Elaborazione dei dati'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations / theses for your research on the topic 'Elaborazione dei dati.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Schipilliti, Luca. "Progetto del software di acquisizione ed elaborazione dei dati di un Sonar multibeam." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21608/.
Faedi, Alberto. "Elaborazione di dati da sensori inerziali per monitoraggio strutturale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14247/.
Chiarelli, Rosamaria. "Elaborazione ed analisi di dati geomatici per il monitoraggio del territorio costiero in Emilia-Romagna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19783/.
Matrone, Erika. "Elaborazione dei dati inerenti alla raccolta dei rifiuti della Regione Emilia Romagna finalizzata all'individuazione dei più performanti sistemi di raccolta." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12934/.
Guidetti, Mattia. "Ricostruzione di flussi veicolari su scala regionale: analisi dei dati disponibili." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5957/.
Fioravanti, Matteo. "Sviluppo di tecniche di elaborazione di dati elettroanatomici per l'analisi dei pattern di attivazione elettrica in fibrillazione atriale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Lo Piccolo, Salvatore. "La digestione anaerobica dei fanghi prodotti dal depuratore di Savignano sul Rubicone: elaborazione dei dati sperimentali di impianto e simulazione del processo tramite il modello ADM1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Baietta, Alessia. "Preparazione dei dati e generazione delle mappe di TC perfusionale nel cancro al polmone." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9279/.
SPALLANZANI, MATTEO. "Un framework per l'analisi dei sistemi di apprendimento automatico." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2020. http://hdl.handle.net/11380/1200571.
Making predictions is about getting insights into the patterns of our environment. We access the physical world through media and measuring instruments, which provide us with data in which we hope to find useful patterns. The development of computing machines has made it possible to store large data sets and process them at high speed. Machine learning studies systems which can automate the detection of patterns in large data sets using computers. Machine learning lies at the core of data science and artificial intelligence, two research fields which are changing the economy and the society in which we live. Machine learning systems are usually trained and deployed on powerful computer clusters composed of hundreds or thousands of machines. Nowadays, the miniaturisation of computing devices is making it possible to deploy them on battery-powered systems embedded in diverse environments. Compared with computer clusters, these devices are far less powerful, but they have the advantage of being nearer to the source of the data. On the one hand, this increases the number of applications of machine learning systems; on the other hand, the physical limitations of the computing machines require identifying proper metrics to assess the fitness of different machine learning systems in a given context. In particular, these systems should be evaluated not only according to their modelling and statistical properties, but also according to their algorithmic costs and their fitness to different computer architectures. In this thesis, we analyse modelling, algorithmic and architectural properties of different machine learning systems. We present the fingerprint method, a system which was developed to solve a business intelligence problem where statistical accuracy was more important than latency or energy constraints.
Then, we analyse artificial neural networks and discuss their appealing computational properties; we also describe an example application, a model we designed to identify the objective causes of subjective driving perceptions. Finally, we describe and analyse quantized neural networks, artificial neural networks which use finite sets for the parameters and step activation functions. These limitations pose challenging mathematical problems, but quantized neural networks can be executed extremely efficiently on dedicated hardware accelerators, making them ideal candidates for deploying machine learning on edge computers. In particular, we show that quantized neural networks are equivalent to classical artificial neural networks (at least on the set of targets represented by continuous functions defined on compact domains); we also present a novel gradient-based learning algorithm for them, named additive noise annealing, based on the regularisation effect of additive noise on the argument of discontinuous functions, reporting state-of-the-art results on image classification benchmarks.
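To make the notion of a quantized network concrete, here is a minimal sketch, assuming nothing about the thesis' actual model or training algorithm: a tiny feed-forward network with ternary weights and a step activation (biases are kept real-valued here for simplicity), computing XOR.

```python
# Illustrative sketch of a quantized neural network: weights are
# restricted to {-1, 0, 1} and the activation is a step function.
# The architecture and values below are invented for the example.

def step(x):
    """Heaviside step activation: 1.0 if x >= 0 else 0.0."""
    return 1.0 if x >= 0 else 0.0

def qnn_forward(x, layers):
    """Apply a sequence of (ternary weight matrix, bias vector) layers."""
    for W, b in layers:
        x = [step(sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i)
             for row, b_i in zip(W, b)]
    return x

# XOR: hidden units compute (x1 OR x2) and (x1 AND x2),
# output fires when OR is true but AND is not.
layers = [
    ([[1, 1], [1, 1]], [-0.5, -1.5]),   # hidden layer
    ([[1, -1]], [-0.5]),                # output layer
]
for a in (0, 1):
    for b in (0, 1):
        print((a, b), qnn_forward([a, b], layers))
```

Because every unit outputs only 0 or 1 and every weight is a small integer, such a forward pass maps naturally onto the dedicated hardware accelerators the abstract mentions.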
PARMIGGIANI, Nicolò. "Metodi per l’analisi e la gestione dei dati dell’astrofisica gamma in tempo reale." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1239980.
The context of this Ph.D. is data analysis and management for gamma-ray astronomy, which involves the observation of gamma rays, the most energetic form of electromagnetic radiation. From the gamma-ray observations performed by telescopes or satellites, it is possible to study catastrophic events involving compact objects, such as white dwarfs, neutron stars, and black holes. These events are called gamma-ray transients. To understand these phenomena, they must be observed during their evolution. For this reason, speed is crucial, and automated data analysis pipelines are developed to detect gamma-ray transients and generate science alerts during the astrophysical observations or immediately after. A science alert is an immediate communication from one observatory to other observatories that an interesting astrophysical event is occurring in the sky. The astrophysical community is experiencing a new era called "multi-messenger astronomy", in which astronomical sources are observed by different instruments collecting different signals: gravitational waves, electromagnetic radiation, and neutrinos. In the multi-messenger era, astrophysical projects share science alerts through different communication networks. The coordination of different projects through shared science alerts is mandatory to understand the nature of these physical phenomena. Observatories have to manage the follow-up of these external science alerts by developing dedicated software. During this Ph.D., the research activity focused mainly on the AGILE space mission, currently in operation, and on the Cherenkov Telescope Array Observatory (CTA), currently in the construction phase. The follow-up of external science alerts received from Gamma-Ray Burst (GRB) and Gravitational Wave (GW) detectors is one of the AGILE Team's current major activities.
Future generations of gamma-ray observatories like the CTA or the ASTRI Mini-Array can take advantage of the technologies developed for AGILE. This research aims to develop analysis and management software for gamma-ray data that fulfills the requirements of this context. The first chapter of this thesis describes the web platform used by AGILE researchers to prepare the Second AGILE Catalog of Gamma-ray Sources. The analyses performed for this catalog are stored in a dedicated database, and the web platform queries this database. This was preparatory work to understand how to manage detections of gamma-ray sources and light curves for the subsequent phase: the development of a scientific pipeline to manage gamma-ray detections and science alerts in real time. The second chapter presents a framework designed to facilitate the development of real-time scientific analysis pipelines. The framework provides a common pipeline architecture and automatisms that observatories can use to develop their own pipelines. This framework was used to develop the pipelines for the AGILE space mission and a prototype of the scientific pipeline of the Science Alert Generation system of the CTA Observatory. The third chapter describes a new method to detect GRBs in AGILE-GRID data using a Convolutional Neural Network. With this Deep Learning technology, it is possible to improve the detection capabilities of AGILE. This method was also integrated as a science tool in the AGILE pipelines. The last chapter of the thesis shows the scientific results obtained with the software developed during the Ph.D. research activities. Part of the results was published in refereed journals; the remaining part was shared with the scientific community through The Astronomer's Telegram or the Gamma-ray Coordination Network.
Merante, Brunella. "Sviluppo e validazione sperimentale di un software di acquisizione ed elaborazione dati provenienti da un sistema di eye tracking per lo studio dei movimenti oculari." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7941/.
ROLLO, FEDERICA. "Verso soluzioni di sostenibilità e sicurezza per una città intelligente." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1271183.
A smart city is a place where technology is exploited to help public administrations make decisions. Technology can contribute to the management of multiple aspects of everyday life, offering more reliable services to citizens and improving the quality of life. However, technology alone is not enough to make a smart city; suitable methods are needed to analyze the data collected by technology and manage them in such a way as to generate useful information. Some examples of smart services are apps that let users reach a destination through the least busy route or find the nearest parking spot, or apps that suggest better paths for a walk based on air quality. This thesis focuses on two aspects of smart cities: sustainability and safety. The first aspect concerns studying the impact of vehicular traffic on air quality through the development of a network of traffic and air quality sensors and the implementation of a chain of simulation models. This work is part of the TRAFAIR project, co-financed by the European Union, the first project with the aim of monitoring and predicting air quality in real time on an urban scale in 6 European cities, including Modena. The project required the management of a large amount of heterogeneous data and their integration on a complex and scalable data platform shared by all the partners of the project. The data platform is a PostgreSQL database, suitable for dealing with spatio-temporal data, and contains more than 60 tables and 435 GB of data (for Modena alone). All the processes of the TRAFAIR pipeline, the dashboards and the mobile apps exploit the database to get the input data and, eventually, store the output, generating big data streams. The simulation models, executed on HPC resources, use the sensor data and provide results in real time (as soon as the sensor data are stored in the database).
Therefore, the anomaly detection techniques applied to the sensor data need to run in real time. After a careful study of the distribution of the sensor data and the correlation among the measurements, several anomaly detection techniques have been implemented and applied to the sensor data. A novel approach for traffic data that employs a flow-speed correlation filter, STL decomposition and IQR analysis has been developed. In addition, an innovative framework that implements 3 algorithms for anomaly detection in air quality sensor data has been created. The results of the experiments have been compared with those of an LSTM autoencoder, and the performances have been evaluated after the calibration process. The safety aspect of the smart city is related to a crime analysis project: analytical processes aimed at providing timely and pertinent information to assist the police in crime reduction, prevention, and evaluation. Due to the lack of official data with which to produce the analysis, this project exploits the news articles published in online newspapers. The goal is to categorize the news articles based on the crime category, geolocate the crime events, detect the date of the event, and identify some features (e.g. what was stolen during a theft). A Java application has been developed for the analysis of news articles, the extraction of semantic information through NLP techniques, and the connection of entities to Linked Data. The emerging technology of Word Embeddings has been employed for text categorization, while Question Answering with BERT has been used to extract the 5W+1H. News articles referring to the same event have been identified by applying cosine similarity to the shingles of the news articles' text. Finally, a tool has been developed to show the geolocalized events and provide some statistics and annual reports.
This is the only project in Italy that, starting from news articles, provides analyses of crimes and makes them available through a visualization tool.
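The event-matching step mentioned in the abstract, cosine similarity over shingles of article text, can be sketched as follows. This is a minimal Python illustration with invented example sentences, not the project's actual Java implementation.

```python
# Hedged sketch: near-duplicate detection via cosine similarity
# over character 3-gram shingles. Texts below are invented.
import math
from collections import Counter

def shingles(text, k=3):
    """Bag of character k-grams of a whitespace-normalized text."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + k] for i in range(len(text) - k + 1))

def cosine(a, b):
    """Cosine similarity between two shingle bags."""
    dot = sum(a[s] * b[s] for s in a if s in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

t1 = "Theft reported in the city centre of Modena last night"
t2 = "A theft was reported last night in Modena city centre"
t3 = "New traffic sensors installed across the urban area"
print(cosine(shingles(t1), shingles(t2)))  # high: same event, reworded
print(cosine(shingles(t1), shingles(t3)))  # low: unrelated article
```

Articles whose pairwise similarity exceeds a chosen threshold would then be grouped as reports of the same crime event.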
Ciandrini, Giovanni. "Elaborazione del linguaggio naturale nell'IA e tecnologie moderne: Sentiment Analysis come caso di studio." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8966/.
PAGANELLI, MATTEO. "Gestione ed Analisi di Big Data: Sfide e Opportunità nell'Integrazione e nell'Estrazione di Conoscenza dai Dati." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1239979.
In the Big Data era, the adequate management and consumption of data is one of the most challenging activities, due to a series of critical issues usually categorized into 5 key concepts: volume, velocity, variety, veracity and variability. In response to these needs, a large number of algorithms and technologies have been proposed in recent years; however, many open problems remain and new challenges have emerged. Among these, just to name a few, there is the need to have annotated data for the training of machine learning techniques, to interpret the logic of the systems used, to reduce the impact of their management in production (i.e. the so-called technical debt) and to provide tools to support human-machine interaction. In this thesis, the challenges affecting the areas of data integration and the modern management (in terms of readjustment with respect to new requirements) of relational DBMSs are studied in depth. The main problem affecting data integration concerns its evaluation in real contexts, which typically requires the costly and time-consuming involvement of domain experts. In this perspective, tools for the support and automation of this critical task, as well as its unsupervised resolution, would be very useful. In this context, my contribution can be summarized in the following points: 1) the realization of techniques for the unsupervised evaluation of data integration tasks and 2) the development of automatic approaches for the configuration of rule-based matching models. As for relational DBMSs, they have proved to be, over the last few decades, the workhorse of many companies, thanks to their simplicity of governance, security, auditability and high performance. Today, however, we are witnessing a partial rethinking of their use compared to the original design. For example, they are used to solve more advanced tasks, such as classification, regression and clustering, typical of the machine learning field.
The establishment of a symbiotic relationship between these two research fields could be essential to solve some of the critical issues listed above. In this context, my main contribution was to verify the possibility of performing in-DBMS inference of machine learning pipelines at serving time.
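As an illustration of the in-DBMS inference idea, a trained model can be compiled into a SQL expression so that scoring happens inside the database engine rather than in an external service. The sketch below, which is not the thesis' actual system, evaluates a toy logistic regression in SQLite; the table, column names and coefficients are invented for the example.

```python
# Sketch: serving a logistic-regression model inside the DBMS.
# SQLite lets us register a Python scalar function (sigmoid) and
# push the whole scoring expression into the SELECT statement.
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (x1 REAL, x2 REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(0.5, 1.0), (-2.0, 0.3), (1.5, -0.5)])

# Pretend these coefficients come from an already-trained model.
w1, w2, bias = 1.2, -0.7, 0.1
conn.create_function("sigmoid", 1, lambda z: 1.0 / (1.0 + math.exp(-z)))

rows = conn.execute(
    "SELECT sigmoid(? * x1 + ? * x2 + ?) FROM samples",
    (w1, w2, bias),
).fetchall()
for (p,) in rows:
    print(round(p, 3))
```

Scoring in-database avoids moving the data out of the DBMS, which is one motivation for the symbiosis the abstract describes.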
FABBRI, MATTEO. "Sfruttare i Dati Sintetici per Migliorare la Comprensione del Comportamento Umano." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1239978.
Most recent Deep Learning techniques require large volumes of training data in order to achieve human-like performance. Especially in Computer Vision, datasets are expensive to create because they usually require a considerable manual effort that cannot be automated. Indeed, manual annotation is error-prone, inconsistent for subjective tasks (e.g. age classification), and not applicable to particular data (e.g. high frame-rate videos). For some tasks, like pose estimation and tracking, an alternative to manual annotation is the use of wearable sensors. However, this approach is not feasible in some circumstances (e.g. in crowded scenarios), since the need to wear sensors limits its application to controlled environments. To overcome all the aforementioned limitations, we collected a set of synthetic datasets exploiting a photorealistic videogame. By relying on a virtual simulator, the annotations are error-free and always consistent, as there is no manual annotation involved. Moreover, our data is suitable for in-the-wild applications, as it contains multiple scenarios and a high variety of people's appearances. In addition, our datasets are privacy compliant, as no real human was involved in the data acquisition. Leveraging this newly collected data, extensive studies have been conducted on a plethora of tasks. In particular, for 2D pose estimation and tracking, we propose a deep network architecture that jointly extracts people's body parts and associates them across short temporal spans. Our model explicitly deals with occluded body parts by hallucinating plausible solutions for joints that are not visible. For 3D pose estimation, we propose to use high-resolution volumetric heatmaps to model joint locations, devising a simple and effective compression method to drastically reduce the size of this representation.
For attribute classification, we overcome a common problem in surveillance, namely people occlusion, by designing a network capable of hallucinating occluded people with a plausible appearance. From a more practical point of view, we design an edge-AI system capable of evaluating in real time the COVID-19 contagion risk of a monitored area by analyzing video streams. As synthetic data might suffer from domain-shift-related problems, we further investigate image translation techniques for the tasks of head pose estimation, attribute recognition and face landmark localization.
FERRARETTI, Denis. "Data Mining for Petroleum Geology." Doctoral thesis, Università degli studi di Ferrara, 2012. http://hdl.handle.net/11392/2389427.
VIRGILI, LUCA. "Graphs behind data: A network-based approach to model different scenarios." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295088.
Nowadays, the number and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we aim to propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, as well as the concepts and approaches associated with them, can represent a solution to the problem mentioned above. In fact, we believe that they can be a unique and unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and/or approaches have the potential to address different open issues in different contexts.
Dashdorj, Zolzaya. "Semantic Enrichment of Mobile Phone Data Records Exploiting Background Knowledge." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367796.
Dashdorj, Zolzaya. "Semantic Enrichment of Mobile Phone Data Records Exploiting Background Knowledge." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1612/1/PhD-Thesis.pdf.
Amelio, Alessia, Luigi Palopoli, and Clara Pizzuti. "Pattern extraction from data with application to image processing." Thesis, 2012. http://hdl.handle.net/10955/1171.
The term Information Extraction refers to the automatic extraction of structured information from data. In such a context, the task of pattern extraction plays a key role, as it allows the identification of particular trends and recurring structures of interest to a given user. For this reason, pattern extraction techniques are available in a wide range of applications, such as enterprise applications, personal information management, web-oriented and scientific applications. In this thesis, the analysis is focused on pattern extraction techniques from images and from political data. Patterns in image processing are defined as features derived from the subdivision of the image into regions or objects, and several techniques have been introduced in the literature for extracting these kinds of features. Specifically, image segmentation approaches divide an image into "uniform" region patterns, and both boundary detection and region-clustering based algorithms have been adopted to solve this problem. A drawback of these methods is that the number of clusters must be predetermined. Furthermore, evolutionary techniques have been successfully applied to the problem of image segmentation. However, one of the main problems of such approaches is the determination of the number of regions, which cannot be changed during execution. Consequently, we formalize a new genetic graph-based image segmentation algorithm that, thanks to a new fitness function, a new concept of pixel neighborhood and the genetic representation, is able to partition images without the need to set the number of segments a priori. On the other hand, some image compression algorithms recently proposed in the literature extract image patterns for performing compression, such as extensions to 2D of the classical Lempel-Ziv parses, where repeated occurrences of a pattern are substituted by a pointer to that pattern.
However, they require a preliminary linearization of the image and a consequent extraction of linear patterns. This can miss some recurrent 2D structures present inside the image. We propose here a new image compression technique which extracts 2D motif patterns from the image, in which some pixels may also be omitted in order to increase the compression gain, and which uses these patterns to perform compression. Pattern extraction in political science consists of detecting voter profiles, ideological positions and political interactions from political data. Some proposed pattern extraction techniques analyze the Finnish Parliament and the United States Senate in order to discover political trends. Specifically, hierarchical clustering has been employed to discover meaningful groups of senators inside the United States Senate. Furthermore, different methods of community detection, based on the concept of modularity, have been used to detect the hierarchical and modular design of the networks of U.S. parliamentarians. In addition, SVD has been applied to analyze the votes of the U.S. House of Representatives. In this thesis, we analyze the Italian Parliament using different tools from Data Mining and Network Analysis, with the aim of characterizing the changes that occurred inside the Parliament, without any prior knowledge of the ideology or political affiliation of its representatives, considering only the votes cast by each parliamentarian.
Università della Calabria
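The vote-based analysis described in the abstract above can be illustrated with a toy sketch: a pairwise agreement score between parliamentarians computed from their roll-call votes, the kind of similarity from which clusterings are typically built. The names and ballots below are invented; the thesis' actual analysis uses real votes and richer Data Mining and Network Analysis tools.

```python
# Toy sketch: pairwise voting agreement (+1 yea, -1 nay, 0 absent).
# All data is invented for illustration.

def agreement(a, b):
    """Fraction of ballots where both voted and cast the same vote."""
    both = [(x, y) for x, y in zip(a, b) if x and y]
    if not both:
        return 0.0
    return sum(x == y for x, y in both) / len(both)

votes = {
    "MP_A": [1, 1, -1, 1, 0],
    "MP_B": [1, 1, -1, -1, 1],
    "MP_C": [-1, -1, 1, -1, 1],
}
print(agreement(votes["MP_A"], votes["MP_B"]))  # 0.75
print(agreement(votes["MP_A"], votes["MP_C"]))  # 0.0
```

A matrix of such scores can then be fed to hierarchical clustering or turned into a network for modularity-based community detection, as in the related work the abstract cites.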
Reale, Maria. "Il procedimento amministrativo informatico." Thesis, 2012. http://eprints.bice.rm.cnr.it/12719/1/Reale_Proc_amm_informatico_def.pdf.
MASSARO, CRESCENZO. "Elaborazione di procedure per la bonifica e gestione dei rifiuti contenenti amianto, anche sulla base di dati analitici ottenuti con tecniche innovative." Doctoral thesis, 2023. https://hdl.handle.net/11573/1666824.
Coppola, Fabrizio. "Le unità di controllo del supercalcolatore A.P.E." Thesis, 1987. http://eprints.bice.rm.cnr.it/3855/1/Coppola-Tesi-Fisica-Pisa-1987.pdf.
NOBILE, ALESSIA. "I sistemi a scansione 3D per la documentazione metrica e lo studio diagnostico dei Beni Culturali. Dalla scala edilizia alla scala urbana. I casi studio della Basilica dell'Umiltà di Pistoia e delle Torri di San Gimignano." Doctoral thesis, 2013. http://hdl.handle.net/2158/797885.
Triggiani, Maurizio. "Integration of machine learning techniques in chemometrics practices." Doctoral thesis, 2022. http://hdl.handle.net/11589/237998.