Theses on the topic "Gestione intelligente dei dati"
Consult the 50 best theses for your research on the topic "Gestione intelligente dei dati".
Pisanò, Lorenzo. "IoT e Smart Irrigation: gestione dei Big Data attraverso un sistema di notifica intelligente". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23531/.
Cavallin, Riccardo. "Approccio blockchain per la gestione dei dati personali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21604/.
Bertocchi, Dario. "Analisi dei dati per la gestione della destinazione turistica". Doctoral thesis, Università IUAV di Venezia, 2017. http://hdl.handle.net/11578/278745.
Ugliotti, Francesca Maria. "BIM and Facility Management for smart data management and visualization". Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2696432.
BIM is for all buildings. As a disruptive technology, BIM completely changes the traditional way of working of the construction industry, starting from the design stage. The challenge, however, is to establish a framework that brings together methods and tools for the building lifecycle, focusing on the management of existing buildings. A smart city means smart data, and therefore also the intelligent use of real estate information. Involving Facility Management in the process is the key to ensuring the availability of the proper dataset, supporting the idea of a BIM-based knowledge management system. Under this approach, BIM management is achievable by applying a reverse engineering process to guarantee the effectiveness of BIM and to provide Facility 4.0 smart services.
Restuccia, Martina <1992>. "Le app e la politica di protezioni dei dati. Analisi della percezione degli utenti in merito alla protezione dei dati". Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/18515.
Mazziol, Antonella <1965>. "La interoperabilità nella gestione dei dati della Pubblica Amministrazione: il caso dei Comuni italiani". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/9401.
Basso, Andrea <1991>. "La gestione informatica dei dati personali nei servizi di sharing economy". Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17907.
Texto completoZaccherini, Giovanni. "Innovazione e digitalizzazione a supporto dei processi di gestione dei trasporti. Il caso UNILOG". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Manzone, Francesca. "Analisi dei dati della gestione dei rifiuti urbani di alcuni Comuni campione della Regione Emilia Romagna". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Leoni, Anna Giulia. "Gestione di un data lake strutturato attraverso il riconoscimento semantico dei dati acquisiti". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18048/.
Lava, Gianluca <1988>. "Il controllo di gestione e l’analisi dei dati economici nel contesto delle PMI". Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/3441.
Texto completoPARMIGGIANI, Nicolò. "Metodi per l’analisi e la gestione dei dati dell’astrofisica gamma in tempo reale". Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1239980.
The context of this Ph.D. is data analysis and management for gamma-ray astronomy, which involves the observation of gamma rays, the most energetic form of electromagnetic radiation. From the gamma-ray observations performed by telescopes or satellites, it is possible to study catastrophic events involving compact objects such as white dwarfs, neutron stars, and black holes. These events are called gamma-ray transients. To understand these phenomena, they must be observed during their evolution. For this reason, speed is crucial, and automated data analysis pipelines are developed to detect gamma-ray transients and generate science alerts during the astrophysical observations or immediately after. A science alert is an immediate communication from one observatory to other observatories that an interesting astrophysical event is occurring in the sky. The astrophysical community is experiencing a new era called "multi-messenger astronomy", in which astronomical sources are observed by different instruments collecting different signals: gravitational waves, electromagnetic radiation, and neutrinos. In the multi-messenger era, astrophysical projects share science alerts through different communication networks, and coordinating different projects by sharing science alerts is mandatory to understand the nature of these physical phenomena. Observatories have to manage the follow-up of these external science alerts by developing dedicated software. During this Ph.D., the research activity focused mainly on the AGILE space mission, currently in operation, and on the Cherenkov Telescope Array Observatory (CTA), currently in the construction phase. The follow-up of external science alerts received from Gamma-Ray Burst (GRB) and Gravitational Wave (GW) detectors is one of the AGILE Team's current major activities.
Future generations of gamma-ray observatories such as the CTA or the ASTRI Mini-Array can take advantage of the technologies developed for AGILE. This research aims to develop analysis and management software for gamma-ray data that fulfills these requirements. The first chapter of this thesis describes the web platform used by AGILE researchers to prepare the Second AGILE Catalog of Gamma-ray sources. The analyses performed for this catalog are stored in a dedicated database, which the web platform queries. This was preparatory work for understanding how to manage detections of gamma-ray sources and light curves in the subsequent phase: the development of a scientific pipeline to manage gamma-ray detections and science alerts in real time. The second chapter presents a framework designed to facilitate the development of real-time scientific analysis pipelines. The framework provides a common pipeline architecture and automatisms that observatories can use to develop their own pipelines; it was used to develop the pipelines of the AGILE space mission and a prototype of the scientific pipeline of the Science Alert Generation system of the CTA Observatory. The third chapter describes a new method to detect GRBs in AGILE-GRID data using a Convolutional Neural Network. With this deep learning technique, it is possible to improve the detection capabilities of AGILE. This method was also integrated as a science tool in the AGILE pipelines. The last chapter of the thesis presents the scientific results obtained with the software developed during the Ph.D. research activities. Part of the results was published in refereed journals; the remainder was shared with the scientific community through The Astronomer's Telegram or the Gamma-ray Coordination Network.
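The detection task this abstract describes, finding a transient source as a localized excess over background in a binned counts map, can be illustrated in spirit with a much simpler sketch than a CNN: slide a box kernel over the map and flag windows whose total significantly exceeds the Poisson background expectation. This is not the thesis's method or the AGILE pipeline; the function, kernel size, and threshold below are illustrative assumptions only.

```python
import numpy as np

def detect_excess(counts, kernel_size=3, threshold=5.0):
    """Flag pixels whose local counts exceed the expected background
    by more than `threshold` standard deviations (Poisson approximation).
    Toy stand-in for CNN-based source detection on a counts map."""
    bg = np.median(counts)                      # crude background estimate
    half = kernel_size // 2
    padded = np.pad(counts, half, mode="edge")
    candidates = []
    for i in range(counts.shape[0]):
        for j in range(counts.shape[1]):
            window = padded[i:i + kernel_size, j:j + kernel_size]
            total = window.sum()
            expected = bg * kernel_size ** 2
            sigma = (total - expected) / np.sqrt(max(expected, 1e-9))
            if sigma > threshold:
                candidates.append((i, j, sigma))
    return candidates

# Toy map: flat background of ~2 counts/pixel with one injected source.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(20, 20)).astype(float)
counts[10, 10] += 50                            # injected point source
hits = detect_excess(counts)
```

A real pipeline would replace the box kernel and median background with a trained model and a proper instrument response, but the input/output shape of the problem is the same.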
Romano, Paolo. "Progettazione e realizzazione di una web application per la gestione dei dati dei fornitori, basata sul framework Competitoor". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12936/.
Gambarelli, Nicolo'. "Progettazione ed implementazione di un'applicazione mobile per la gestione dei dati di vendita aziendali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8226/.
Gallo, Ilaria <1993>. "La gestione dei dati. Come creare un blog in Wordpress e portarlo al successo?". Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/13427.
Kirschner, Paolo. "Progetto ADaM - Archeological DAta Management Progetto per la creazione di una banca dati relazionale per la gestione dei dati di scavo". Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425584.
Paci, Simone <1981>. "Sviluppo di un'applicazione bioinformatica per la gestione dei dati di antibiotico sensibilità di isolati clinici". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4580/1/paci_simone_tesi.pdf.
Drug resistance is a huge problem for healthcare, and facing it requires a monitoring system based on the collection and analysis of laboratory epidemiological data. The PhD project focused on the development of a web application, called ResMon2, for the management of such data (drug resistance of bacteria present in clinical isolates) usable in a hospital. A web platform associated with a relational database was created in order to have an application that is easy to update with new data without directly editing its HTML pages. The open-source MySQL database was chosen for its many assets: it is extremely stable, performs well, is supported by a large online community, and is free. The dynamic content of the web application is generated using a scripting language, PHP: an open-source language developed precisely for building dynamic web pages and well suited to automating the insertion, editing, deletion and display of data. It also integrates easily with MySQL thanks to its many built-in functions for dynamic data manipulation. A new database was designed, creating tables and the relations among them: registries, samples, isolated microorganisms and antibiogram data (sensitive, resistant, intermediate). Once the database was defined, the PHP and HTML code composing the main functions of the application was written. These functions are: manual insertion of a single antibiogram; import of multiple antibiograms from instrument data files; editing/deletion of previously inserted antibiograms; and data analysis for detecting trends in the prevalence of specific microorganism species and in their drug resistance, with accompanying pie charts and histograms. All functions were tested with real sample clinical data and provided with specific controls, and simple, clean graphics were added to the application.
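The registries/samples/isolates/antibiograms structure described in this abstract can be sketched with a few relational tables. The thesis used MySQL and PHP; the SQLite snippet below is only an illustrative reconstruction, and every table and column name in it is an assumption, not the thesis's actual schema.

```python
import sqlite3

# Minimal sketch of the schema: patients, samples, isolated
# microorganisms, and antibiogram results (S/I/R).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sample  (id INTEGER PRIMARY KEY,
                      patient_id INTEGER REFERENCES patient(id),
                      collected_on TEXT);
CREATE TABLE isolate (id INTEGER PRIMARY KEY,
                      sample_id INTEGER REFERENCES sample(id),
                      species TEXT);
CREATE TABLE antibiogram (
    isolate_id INTEGER REFERENCES isolate(id),
    antibiotic TEXT,
    result     TEXT CHECK (result IN ('S', 'I', 'R'))
);
""")
db.execute("INSERT INTO patient VALUES (1, 'anon')")
db.execute("INSERT INTO sample VALUES (1, 1, '2011-05-02')")
db.execute("INSERT INTO isolate VALUES (1, 1, 'E. coli')")
db.executemany("INSERT INTO antibiogram VALUES (?, ?, ?)",
               [(1, 'ampicillin', 'R'), (1, 'gentamicin', 'S')])

# Trend-style query: resistant counts per species and antibiotic,
# the kind of aggregation behind the application's charts.
rows = db.execute("""
    SELECT i.species, a.antibiotic, SUM(a.result = 'R') AS resistant
    FROM antibiogram a JOIN isolate i ON i.id = a.isolate_id
    GROUP BY i.species, a.antibiotic
""").fetchall()
```

The normalized layout is what lets a single GROUP BY query produce the prevalence and resistance trends the abstract mentions.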
Sansò, Federica. "Analisi di dati telerilevati ottici e radar per la gestione dei disastri: le alluvioni nel Bangladesh". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/2302/.
Esposito, Salvatore. "La raccolta ragionata dei dati e delle informazioni per la gestione, la manutenzione ed il monitoraggio dei beni culturali architettonici". Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2510123.
Bova, Matteo. "Miglioramento del Sistema Gestione Qualità rivolto all’acquisizione, conservazione e ricerca dei dati di produzione e configurazione prodotto". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Bellò, Alice <1995>. "Il museo diffuso di Asolo e il Covid-19. Un'analisi dei dati sui visitatori e della gestione". Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19725.
Cesaroni, Maurizio. "Armonizzazione dei dati per l’addestramento di reti neurali ricorrenti: applicazione per la gestione delle promozioni nel settore retail". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23194/.
Aldini, Benedetta. "La gestione dei cambiamenti e la validazione dei sistemi computerizzati nell'industria di processo: un caso di studio". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Lucidi, Alighiero. "Il dato di rilievo digitale per la conoscenza, valutazione e gestione del patrimonio storico costruito". Doctoral thesis, Università Politecnica delle Marche, 2021. http://hdl.handle.net/11566/289599.
The evolution of the tools and techniques for surveying the historical built heritage has reached technological levels that allow the acquisition of a huge volume of data in an increasingly limited time. This inversely proportional relationship between capability and acquisition time places the issue of managing survey data ever more at the center of research. From this perspective, this work aims to offer systems and processes that can optimize the management, use, and analysis of large amounts of data in the field of historical architecture. In detail, the problem was addressed in two distinct areas: at the urban scale and at the scale of the building. At the urban scale, the goal was to implement and improve an existing management system within the company that hosted the PhD. Through case studies on historical centers such as Ascoli Piceno and Venice, the instruments were optimized by testing them in the particularly challenging context of the representation of historical architecture. At the scale of the building, the research investigated the possibility of using the point cloud for finite element structural analysis, through a semi-automatic process that exploits the point cloud to generate the structural model. The case studies on which the methodology was tested focused on the typology of historic masonry towers. In both research areas, the proposed solutions have been validated: in the urban environment the platform was tested through its use in a professional setting, while in the building environment the results provided by the structural model obtained through the proposed methodology were compared with the experimental data recorded in the field.
De Gironimo, Simone. "IHE Technology Framework: un modello di ottimizzazione della gestione dei dati clinici e integrazione tra le apparecchiature mediche in ambito di Cardiologia". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3715/.
Ricciardelli, Filippo. "Gestione di una base dati per la mappatura e l’ottimizzazione di processi logistici di una filiera agroalimentare". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Rollo, Federica. "Verso soluzioni di sostenibilità e sicurezza per una città intelligente". Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1271183.
A smart city is a place where technology is exploited to help public administrations make decisions. Technology can contribute to the management of multiple aspects of everyday life, offering more reliable services to citizens and improving the quality of life. However, technology alone is not enough to make a smart city; suitable methods are needed to analyze the data it collects and manage them in such a way as to generate useful information. Some examples of smart services are apps that let users reach a destination via the least congested route or find the nearest parking slot, or apps that suggest better walking paths based on air quality. This thesis focuses on two aspects of smart cities: sustainability and safety. The first aspect concerns studying the impact of vehicular traffic on air quality through the development of a network of traffic and air quality sensors and the implementation of a chain of simulation models. This work is part of the TRAFAIR project, co-financed by the European Union, the first project aimed at monitoring and predicting air quality in real time on an urban scale in 6 European cities, including Modena. The project required the management of a large amount of heterogeneous data and their integration on a complex and scalable data platform shared by all the partners of the project. The data platform is a PostgreSQL database, suitable for dealing with spatio-temporal data, and contains more than 60 tables and 435 GB of data (for Modena alone). All the processes of the TRAFAIR pipeline, the dashboards and the mobile apps exploit the database to get their input data and, eventually, store their output, generating big data streams. The simulation models, executed on HPC resources, use the sensor data and provide results in real time (as soon as the sensor data are stored in the database).
The anomaly detection techniques applied to sensor data therefore need to run in real time. After a careful study of the distribution of the sensor data and the correlation among the measurements, several anomaly detection techniques were implemented and applied to sensor data. A novel approach for traffic data was developed that employs a flow-speed correlation filter, STL decomposition and IQR analysis. In addition, an innovative framework implementing 3 algorithms for anomaly detection in air quality sensor data was created. The results of the experiments were compared with those of an LSTM autoencoder, and the performances were evaluated after the calibration process. The safety aspect of the smart city is related to a crime analysis project, i.e., analytical processes directed at providing timely and pertinent information to assist the police in crime reduction, prevention, and evaluation. Due to the lack of official data on which to base the analysis, this project exploits the news articles published in online newspapers. The goal is to categorize the news articles by crime category, geolocate the crime events, detect the date of each event, and identify some features (e.g., what was stolen during a theft). A Java application was developed for the analysis of news articles, the extraction of semantic information through NLP techniques, and the connection of entities to Linked Data. Word embeddings were employed for text categorization, while question answering with BERT was used to extract the 5W+1H. News articles referring to the same event were identified by applying cosine similarity to the shingles of the articles' text. Finally, a tool was developed to show the geolocalized events and provide statistics and annual reports.
This is the only project in Italy that, starting from news articles, attempts to provide analyses of crimes and makes them available through a visualization tool.
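The deduplication step this abstract mentions, cosine similarity over shingles of the articles' text, can be sketched in a few lines. The shingle size, the normalization, and the example headlines below are illustrative assumptions, not the thesis's actual implementation.

```python
from collections import Counter
import math

def shingles(text, k=3):
    """Character k-shingles (with counts) of a whitespace-normalized, lowercased text."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + k] for i in range(len(text) - k + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two shingle-count vectors."""
    dot = sum(a[s] * b[s] for s in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two reports of the same (fictitious) event, and one unrelated article.
art1 = "Thieves broke into a shop in Modena on Monday night"
art2 = "A shop in Modena was broken into by thieves on Monday"
art3 = "The city council approved the new budget yesterday"

same = cosine_similarity(shingles(art1), shingles(art2))
diff = cosine_similarity(shingles(art1), shingles(art3))
```

Articles whose similarity exceeds a tuned threshold would be grouped as one event; character shingles make the comparison robust to word-order changes like the pair above.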
Bettio, Cinzia. "Gestione e analisi dei dati del Registro Nazionale Italiano per la distrofia muscolare facio-scapolo-omerale e tecniche avanzate per la predizione della malattia: un passo verso un approccio clinico su misura del paziente". Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1278857.
The Italian National Registry for FSHD (INRF) collects data from patients suffering from Facioscapulohumeral Muscular Dystrophy (FSHD) (MIM #158900), a rare hereditary myopathy. A platform for data management, the MOMIS FSHD Web Platform, has made it possible to study and integrate information from different sources. Up to 2020, the INRF collected molecular data on 7485 subjects and identified 3396 individuals carrying a D4Z4 Reduced Allele (DRA), the genetic marker associated with FSHD. Clinical data are collected through standardized and validated evaluation protocols. Since 2009, 3574 clinical forms have been systematically collected, including demographic data, clinical history and neurological evaluation; they allow the description of phenotypes and their classification into categories: classical phenotype, incomplete phenotype, asymptomatic/healthy subjects, and atypical/complex phenotype. The integration of this information demonstrated both the wide phenotypic variability and the incomplete penetrance of FSHD, supporting the idea that the expression and evolution of the disease are determined by several factors. In fact, the marker of FSHD is a common polymorphism, detected in 3% of the healthy population, and 23.4% of DRA carriers in our cohort are asymptomatic. The present thesis aims to underline the importance of the systematic collection of standardized clinical data. This approach improved the understanding of the expression of this rare disease and made its complexity explicit. Chapter 1 provides a broad overview of FSHD, disease registries and the INRF. Chapter 2 describes the methods for molecular diagnosis of the disease and the materials and methods of the studies carried out in this thesis.
The results of the studies are described in Chapter 4 as follows: 1) INRF: analysis of clinical and molecular data of the registry cohort, description of its structure, its organization and its contribution in the European context; 2) genotype/phenotype correlation study of DRA carriers with 9-10 RU: 46.0% of index cases do not show the classic FSHD phenotype, 10.0% of relatives show a classical phenotype, and 70.9% of carrier relatives show no motor impairment; 3) characterization of 125 subjects with incomplete phenotype (absence of facial weakness): this phenotype is significantly milder than the classical phenotype; out of 33 families with a proband with incomplete phenotype, in 18 (54.5%) the proband was the only one expressing a myopathic profile, and 36% of these 125 subjects were not DRA carriers, suggesting that other elements may underlie this phenotype; 4) study of environmental factors that may play a role in disease progression or onset: a retrospective study on a cohort of DRA carriers confirms that the regular practice of physical activity in young people is not harmful but, on the contrary, is associated with milder clinical severity in adulthood; 5-6) evaluation of couples in prenatal counselling and assessment of the genetic risk associated with FSHD: with the aim of improving genetic counselling by providing an approach personalized to the family history of the consultands, an innovative tool (classifier) based on Machine Learning technology has been developed. The classifier is able to predict the probability that a newborn will develop a myopathic phenotype based on family history and molecular data, achieving 80% sensitivity and over 70% specificity. This software has the potential to increase the quality of counselling and become a reliable ally in risk assessment in personalised medicine.
Machì, Gaetano. "Nuove tecnologie e gestione del mercato del lavoro: profili giuridici". Doctoral thesis, Università di Siena, 2023. https://hdl.handle.net/11365/1226414.
The characteristics of the labor market require an increasing personalization of labor market management interventions (training, guidance, passive and active labor policies), with the aim of satisfying the recipients of the measures also considering their heterogeneity and emerging needs, including those of an extra-work nature. This study aims to assess the possible use of innovative labor market management tools within the national context in order to meet these emerging needs. To this end, a survey of the main instruments adopted, including experimental ones, for labor market management was conducted, highlighting their common features and defining their positive and critical effects. Secondly, an analysis of the regulatory framework of reference at the national and supranational level was carried out in order to understand the application limitations of the investigated instruments and to identify possible interventions to adapt current legislation. Consideration was given to European, national and in some cases regional regulatory acts that rule how labor supply and demand are matched, rulings of national and European courts, decisions and guidelines of independent authorities, especially data protection authorities, and collective bargaining. The thesis developed from the analysis of national and international scientific literature related to the subjects under research: labor market, IT law, and data governance. The normative and literature analysis activity was accompanied by experiential data related to the apprenticeship, thanks to which it was possible to develop an in-depth knowledge of the main tools useful for labor market management. The analysis carried out showed that the new technologies in use for labor market management focus mainly on the utilization of data in various forms and allow an increase in the quality of services and greater personalization of interventions.
They fit within a complex, multilevel regulatory framework, in some ways still in its embryonic stage, where some technical and organizational aspects have only recently been taken into account by European law and subsequently by Italian law. The priorities that emerge so that all labor market actors can successfully benefit from the new technologies used to manage the labor market are the initiation of a widespread digital literacy process and the creation of an IT and organizational infrastructure. This thesis is integrated within a multi-disciplinary discussion on labor market management policy development that links issues specific to labor market law with those of legal informatics and the inherent regulation of data circulation. The cross-disciplinary nature of the paper allowed for a broader reading of the topic of the use of new technologies for labor market management, involving branches of law and disciplines that will be increasingly closely related to each other in the future.
Fratta, Andrea. "Nuove tecnologie applicate alla comunicazione della ricerca archeologica. Dal trattamento dei dati alla gestione efficiente per la fruizione e la condivisione su piattaforme web". Doctoral thesis, Università di Foggia, 2016. http://hdl.handle.net/11369/353975.
Texto completoBillet, Benjamin. "Système de gestion de flux pour l'Internet des objets intelligents". Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS012V/document.
The Internet of Things (IoT) is currently characterized by an ever-growing number of networked Things, i.e., devices which have their own identity together with advanced computation and networking capabilities: smartphones, smart watches, smart home appliances, etc. In addition, these Things are being equipped with more and more sensors and actuators that enable them to sense and act on their environment, enabling the physical world to be linked with the virtual world. Specifically, the IoT raises many challenges related to its very large scale and high dynamicity, as well as the great heterogeneity of the data and systems involved (e.g., powerful versus resource-constrained devices, mobile versus fixed devices, continuously-powered versus battery-powered devices, etc.). These challenges require new systems and techniques for developing applications that are able to (i) collect data from the numerous data sources of the IoT and (ii) interact both with the environment using the actuators, and with the users using dedicated GUIs. To this end, we defend the following thesis: given the huge volume of data continuously being produced by sensors (measurements and events), we must consider (i) data streams as the reference data model for the IoT and (ii) continuous processing as the reference computation model for processing these data streams. Moreover, knowing that privacy preservation and energy consumption are increasingly critical concerns, we claim that all the Things should be autonomous and work together in restricted areas as close as possible to the users rather than systematically shifting the computation logic into powerful servers or into the cloud. For this purpose, our main contribution can be summarized as designing and developing a distributed data stream management system for the IoT. In this context, we revisit two fundamental aspects of software engineering and distributed systems: service-oriented architecture and task deployment.
We address the problems of (i) accessing data streams through services and (ii) deploying continuous processing tasks automatically, according to the characteristics of both tasks and devices. This research work led to the development of a middleware layer called Dioptase, designed to run on the Things and abstract them as generic devices that can be dynamically assigned communication, storage and computation tasks according to their available resources. In order to validate the feasibility and the relevance of our work, we implemented a prototype of Dioptase and evaluated its performance. In addition, we show that Dioptase is a realistic solution which can work in cooperation with legacy sensor and actuator networks currently deployed in the environment.
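The thesis's central claim, data streams as the reference data model and continuous processing as the reference computation model, can be illustrated with a minimal generator-based pipeline: each operator consumes an (in principle unbounded) iterator of readings and emits a new stream. This is only a sketch of the idea, not the Dioptase API; the operator names, window size, and alert threshold are assumptions.

```python
from collections import deque

def moving_average(stream, window=3):
    """Continuously emit the mean of the last `window` readings."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

def threshold_alerts(stream, limit):
    """Emit only the (index, value) pairs exceeding `limit`."""
    for i, value in enumerate(stream):
        if value > limit:
            yield i, value

# Simulated sensor feed; in a real deployment this would be unbounded.
readings = [20.1, 20.3, 20.2, 35.0, 36.2, 20.4]
alerts = list(threshold_alerts(moving_average(readings), limit=25.0))
```

Because each operator is lazy, the chain processes one reading at a time with bounded memory, which is exactly what makes this model viable on resource-constrained Things.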
Govoni, Irene. "Analisi e progettazione delle funzionalità di un software per la gestione delle competenze: sviluppo del modello di processo e del prototipo funzionale. Struttura di analisi e definizione dei dati per la gestione delle Risorse umane". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1007/.
Palaia, Gaetano. "Riqualificazione del data warehouse di una pubblica amministrazione in ambito agricolo". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25073/.
Radicati, Eleonora. "Efficienza energetica globale: sviluppo di uno strumento innovativo per la gestione dei dati e la definizione di possibili scenari. Il caso studio del nuovo Tecnopolo di Bologna". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/4815/.
Osmani, Laura. "Database relazionali e applicazioni gis e webgis per la gestione, l'analisi e la comunicazione dei dati territoriali di un'area protetta. Il Parco Regionale del Conero come caso applicativo". Doctoral thesis, Università degli studi di Trieste, 2010. http://hdl.handle.net/10077/3637.
The research rests on a multidisciplinary methodological framework: on one side, an introductory theoretical-geographical analysis of territorial and landscape governance in protected areas; on the other, a more strictly cartographic-digital review of the state of the art, at the EU and national level, on building spatial data infrastructures and on publishing and sharing GIS services, with particular attention to the communication of environmental data (including that of protected areas). Both overviews, enriched by a description of the multi-scalar regulatory framework, contextualise the work and lead to a screening of how the Italian bodies managing protected areas, specifically national and regional parks, communicate territorial data through webGIS platforms. These theoretical, legislative and factual elements then guided the applied phase of the research, which developed applications dedicated to the Conero Park area following a field survey, a structured collection of territorial data (base data and Park Plan data) and subsequent spatial analyses. The aim is to support, through the applications developed, the management, study and communication tasks that a protected-area authority must define and implement in light of the themes examined in the theoretical section.
The tangible result is an architecture that, starting from a relational database, passing through a geodatabase and arriving at dynamic, interactive webGIS platforms, supports the coordination, analysis and dissemination of selected territorial data on the Conero Park and its main planning instrument (the Park Plan), thereby enabling both better-informed management and decision-making processes and structured information and participation channels. The final text is divided into two parts of three chapters each. Part one, "Territorial governance and the sharing of information and cartographic data. Evolutionary scenarios towards the development of participatory dynamics", sets out the theoretical, regulatory and factual framework of the research. Chapter one briefly introduces the recent dynamics affecting the concepts, definitions and regulations concerning territorial and landscape governance in protected areas, focusing on Italian regional natural parks and on the landscape forms to be protected. This excursus examines the national and international geographical literature on the subject, revealing heterogeneous, continuously evolving positions that are nonetheless consistent with recent EU guidelines as re-read at the national scale, and underlines the need for an adequate cartographic-taxonomic representation of the different landscape and park typologies, units and categories.
Classification is indeed one of the fundamental lines of international and national debate on the subject. Chapter two, linking GIS and protected areas through the publication and sharing of spatial and environmental data, briefly outlines the state of the art in building dedicated infrastructures, drafting metadata for territorial datasets and series, and providing services for them, with reference to EU directives, regulations and decisions and their national transpositions. Chapter three moves from the conceptual framework to a more empirical one: a screening of how the managing bodies of Italian national and regional parks communicate their most relevant territorial data through webGIS platforms, assessing what has been done in Italy to disseminate such data and what room for future improvement remains. The analysis is accompanied by detailed charts and tables, with comments on the results in both absolute and percentage terms, and bridges the theoretical section and the case study. Part two, "A territorial application: the Conero Park. From a contextual geographical analysis to a detailed one through GIS-analyst tools.
Database Management System and Web Service Application for management and communication", presents the case applied to the Conero protected area. Chapter four provides a territorial overview of the study area through analyses carried out with GIS-analyst tools (ArcGIS - ArcToolbox), enriched by a field survey of the park's trail network, whose data-acquisition and post-processing phases are described. The survey, made necessary because the network had only been digitised from paper maps, completed the analysis of pedestrian routes within the park, highlighting their touristic and landscape value and integrating the collected data with the Park Plan data already held by the authority, so as to build spatial-analysis models (ESRI Model Builder) applicable to later territorial assessments or to the planning of targeted interventions on critical trail sections. These models are versatile and adaptable to any territory, protected or not, crossed by trails, routes or touristic-cultural and naturalistic itineraries. The chapter closes by describing the purposes and structure of the models themselves.
In chapter five the alphanumeric data, the trail-survey data, the Plan data and the bibliographic sources are integrated into an MS Access relational database designed for consultation by non-GIS users as well. This database links to both an ESRI personal geodatabase and a PostgreSQL spatial database (PostGIS extension), in which the spatial data intended for GIS specialists are stored; the territorial datasets archived and updated in them are then described. Chapter six is devoted to the testing and development (localhost) of a UMN MapServer webGIS application with a dynamic P.Mapper front-end containing a selection of the above spatial data, outlining its founding characteristics, query categories and the parameters of the information layers to be displayed, in the awareness that publishing a territorial information system on the web is not merely a shift from local use to shared multi-user access to the spatial data and database, but also makes it a tool to support, foster and activate information sharing and collective decision-making through both top-down and bottom-up dynamics.
After the concluding remarks, the work closes with the usual bibliographic and webographic references and three annexes: two synoptic tables on the national and regional park screening presented in chapter three, an extract of some information layers included in the MapServer .map file, and a list of the abbreviations and acronyms used in the text.
XXII Ciclo
FACCIA, ALESSIO. "Analisi dei dati RICA finalizzati all'approfondimento del tema della gestione del rischio in agricoltura. Misurazione delle performance finanziarie e patrimoniali delle aziende agrarie e relativa definizione di un modello di rating". Doctoral thesis, Università Politecnica delle Marche, 2012. http://hdl.handle.net/11566/242051.
This study has determined a rating algorithm (a weighted ratio) to assess the creditworthiness of farms based on a single source of information: the RICA data set. This source was chosen for the following reasons: • ease of use; • reliability of the data contained therein; • significant depth of the available data sets (6 years of data). The variables used in the proposed rating algorithm differ substantially from the two methods used in previous similar studies in the agricultural sector (Moody's ISMEA and Altman's EM score). The weighted ratio is determined as follows: Q = (15% * A) + (30% * B) + (25% * C) + (30% * D), with: A: Sup_TOT; B: Cap_FOND_TOT / Sup_TOT; C: Inv_FOND_NEW / Cap_FOND_TOT; D: Cap_ESE_PROP / Sup_TOT. The ratio can vary between a minimum of 0.75 and a maximum of 3. The probability of default associated with the different rating classes is reported in a dedicated table of the thesis. The empirical evidence showed a bell-shaped, Gaussian-like distribution, with the frequency of firms concentrated mainly around the central values. The rating model relies exclusively on quantitative variables, as they are the only ones that combine the following features: objective measurement, presence in the RICA archive, and expression of the strength, development opportunities and intrinsic potential of the business system. Rather than focusing primarily on the potential risks that could cause a default, the association with a Probability of Default was pursued through the ambitious objective of building a calculation system guided by positive terms, i.e., by the development opportunities and potential inherent in the business system, given the available resources.
The valuation of development opportunities and of the intrinsic potential of the business system, insofar as it marks a clear reduction of insolvency risk, allowed a Probability of Default range to be associated, mainly indirectly, with each rating class.
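The weighted ratio above is straightforward to compute. A minimal sketch, assuming each of the four variables has already been mapped to a class score in the stated [0.75, 3] range (the thesis's scoring of the raw RICA values is not reproduced here):

```python
def weighted_ratio(a: float, b: float, c: float, d: float) -> float:
    """Q = 15%*A + 30%*B + 25%*C + 30%*D, with
    A = Sup_TOT, B = Cap_FOND_TOT / Sup_TOT,
    C = Inv_FOND_NEW / Cap_FOND_TOT, D = Cap_ESE_PROP / Sup_TOT,
    each assumed already scaled to a score in [0.75, 3]."""
    return 0.15 * a + 0.30 * b + 0.25 * c + 0.30 * d

# The weights sum to 1, so Q inherits the score bounds:
q_min = weighted_ratio(0.75, 0.75, 0.75, 0.75)  # ≈ 0.75
q_max = weighted_ratio(3, 3, 3, 3)              # ≈ 3.0
```

Because the weights sum to exactly 100%, Q is a convex combination of the four scores, which is consistent with the stated minimum of 0.75 and maximum of 3.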
Peloso, Pietro. "Riqualificazione del Data Mart della Contabilità di UniBO: mappatura concettuale e revisione della reportistica". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Lebrun, Philippe. "Comment les "intelligences articulées" peuvent-elles améliorer la gestion des savoir-faire? : une application au supply chain management". Caen, 2015. http://www.theses.fr/2015CAEN0508.
This research looks at the problems of creating and improving processes for the management of know-how. In response, we advance the concept of "hinged intelligences", which rests on four main principles. The first is based on an adaptation of agile methods; the second draws on the concepts of intervention research; the third transposes business-intelligence computing techniques; and the fourth takes into account the actors' acceptance of change. The proposed approach thus acts as a double vector: management, and support in the conduct of change. In this study, the managerial dimension is addressed pragmatically, with the dissemination of information positioned as the central element. Indeed, by strengthening communication and transparency through the use of steering and simulation tools, we seek to enhance the performance of managers. These elements are applied to the case of Supply Chain Management in the industrial sector. To conduct our work, we build on the principles of intervention research, which prove appropriate to the development of this type of project. The research is organized as follows: first, a presentation of the conceptual framework behind the approach; then a description of the experimental field through two case studies. The first concerns the design of a human-resource allocation tool, which allows staff to be assigned according to actual conditions. The second focuses on the redesign of a logistics organization in an industrial workshop, with the implementation of a "milk run" for the management of physical flows on the production site. After presenting these cases, we propose a discussion formalizing the concepts around the themes studied.
The research then recommends elements for implementing the approach in the management of improvement projects, one that values the collaborative dimension and the emergence of know-how.
Marinaro, Isabella. "Analisi e Progettazione del Data Mart Ricerca e Terza Missione per l'Università di Bologna". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Breslas, Grigore. "Riqualificazione del Data Warehouse UniBO: un dashboard per il Piano Strategico d'Ateneo". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20756/.
Castellano, Mattia. "Business Process Management e tecniche per l'applicazione del Process Mining. Il caso Università degli Studi di Parma". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Bouabdallaoui, Yassine. "Introduction de l'intelligence artificielle dans le secteur de la construction : études de cas du Facility Management". Electronic Thesis or Diss., Centrale Lille Institut, 2021. http://www.theses.fr/2021CLIL0022.
The Facility Management (FM) industry has advanced rapidly over the last decades, leading to a large expansion of FM activities. FM organisations have evolved from the traditional role of providing maintenance services to include complex and interconnected activities involving people, processes and technologies. As a consequence of this exponential growth, facility managers are dealing with growing and varied challenges, ranging from energy efficiency and environmental concerns to service customisation and customer satisfaction. The development of Artificial Intelligence (AI) is offering academics and practitioners a new set of tools to address these challenges. AI enables multiple solutions such as automation, improved predictability and forecasting, and service customisation. The Facility Management industry can benefit from these new techniques to better manage its assets and improve its processes. However, the integration of AI into the FM ecosystem is a challenging task that needs to overcome the gap between the business drivers and AI. To unlock the full potential of data analytics and AI in the FM industry, significant work is needed to overcome the issues of data quality and data management in the FM sector. The overall aim of this thesis is to conceptualise the theoretical and practical understanding and implementation of artificial intelligence and data-driven technologies in Facility Management activities, in order to leverage data and optimise facility usage. The promises of AI implementation are presented along with the challenges and barriers limiting the development of AI in the FM sector. To resolve these issues, a framework is proposed to improve data management and leverage AI in FM. Multiple case studies were selected to address this framework, covering predictive maintenance, virtual assistants and natural-language-processing applications.
The results of this work demonstrate the potential of AI to address FM challenges such as maintenance management and waste management. However, multiple barriers limiting the development of AI in the FM sector were identified, including data availability issues.
El, Haddadi Anass. "Fouille multidimensionnelle sur les données textuelles visant à extraire les réseaux sociaux et sémantiques pour leur exploitation via la téléphonie mobile". Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1378/.
Competition is a fundamental concept of the liberal economic tradition that requires companies to resort to Competitive Intelligence (CI) in order to be advantageously positioned on the market, or simply to survive. Nevertheless, it is well known that it is not the strongest of organizations that survives, nor the most intelligent, but the one most adaptable to change, the dominant factor in society today. Companies are therefore required to remain constantly in a wakeful state, watching for any change in order to devise appropriate solutions in real time. However, for a successful watch, we should not merely monitor opportunities but, above all, anticipate risks. External risk factors have never been so numerous: extremely dynamic and unpredictable markets, new entrants, mergers and acquisitions, sharp price reductions, rapid changes in consumption patterns and values, and the fragility of brands and their reputation. To face all these challenges, our research proposes a Competitive Intelligence System (CIS) designed to provide online services. Through descriptive and exploratory statistical methods, Xplor EveryWhere displays, in a very short time, new strategic knowledge such as: the profile of the actors, their reputation, their relationships, their sites of action, their mobility, emerging issues and concepts, terminology, promising fields, etc. The need for security in Xplor EveryWhere arises from the strategic nature, and substantial value, of the information conveyed. Such security should not be considered an additional option that a CIS provides merely to distinguish itself from others, especially as the leakage of this information is not the result of inherent weaknesses in corporate computer systems but is, above all, an organizational issue. With Xplor EveryWhere we completed the reporting service, especially the mobility aspect.
Lastly, the system provides real-time access to updated information from the strategic database server, itself fed daily by watchers, who can enter information at trade shows, during customer visits or after meetings.
GALLO, Giuseppe. "Architettura e second digital turn, l’evoluzione degli strumenti informatici e il progetto". Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/514731.
The digital condition that has gradually hybridized our lives, transforming atoms into bits, has now cemented itself in our society, enriching post-modernity and determining a new form of liquidity that has sharpened with the advent of the internet. It is a historical moment marked by a new digital maturity, evident in our changed relationship to data and in the spread of advanced machine-learning methods, which both promise a new understanding of contemporary complexity and contribute to the propagation of the technical apparatus throughout the world. These changes, profound enough to affect our culture, are changing our way of perceiving space, and therefore of inhabiting it: conditions that undoubtedly have repercussions on architectural design as a human activity geared towards human beings. The increased complexity that touched our discipline with Postmodernism meanwhile found new support in Derridian deconstruction, in a historical moment marked by great emphasis on the opportunities offered by digital tools. These are means we first welcomed into our discipline exclusively as tools for representation, and which then themselves determined the emergence of new approaches based on the inclusive potential of continuity and variation. None of the protagonists of the first digital turn could probably have imagined the effects that digital culture is now having on architectural design: a digital culture that has grown increasingly stronger through almost thirty years of methodological and formal experimentation, as well as organizational and instrumental changes, from the rise of BIM to the new algorithmic possibilities represented by visual programming languages and numerical simulations.
These have been the primary tools of the push towards digital, which today has reached a second turn in the field of architecture, identified by Carpo in new design approaches now possible thanks to the greater availability of data. This condition inevitably affects both science and architectural design, but nevertheless fails to fully share a contemporaneity in which technology spreads its wings as far as architecture is concerned, thus affecting the meaning of our role within society. With these multifaceted considerations as a starting point, and fully aware of how complex a dialogue we must engage in to reconstruct as neutral, historical and organic a vision as possible of the phase architecture is experiencing, I believe a holistic approach must be established: one that is inclusive and capable of expanding to the point of acquiring a philosophical perspective, while also attending to technical, operational, methodological, instrumental and relational details. I have striven to keep this objective alive throughout the three years of my doctoral research, which in its various phases looks at the mutations that digital technology is producing in society and therefore in architectural design. The research is enriched by ten interviews with prominent protagonists of contemporary architecture, for whose time and availability I am grateful. These testimonies allowed me to see the complexities of contemporary design up close, and they represent a central part of this thesis, which aims equally to provide a historical interpretation of the challenges posed by contemporaneity and to identify the responsibilities we must uphold for human beings to remain at the centre of our work.
Kubler, Sylvain. "Premiers travaux relatifs au concept de matière communicante : Processus de dissémination des informations relatives au produit". Phd thesis, Université Henri Poincaré - Nancy I, 2012. http://tel.archives-ouvertes.fr/tel-00759600.
Contreras Ochando, Lidia. "Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge". Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160724.
Data science is essential for the extraction of value from data. However, the most tedious part of the process, data wrangling, implies a range of mostly manual formatting, identification and cleansing manipulations. Data wrangling still resists automation partly because the problem strongly depends on domain information, which becomes a bottleneck for state-of-the-art systems as the diversity of domains, formats and structures of the data increases. In this thesis we focus on generating algorithms that take advantage of domain knowledge for the automation of parts of the data wrangling process. We illustrate the way in which general program-induction techniques, instead of domain-specific languages, can be applied flexibly to problems where knowledge is important, through the dynamic use of domain-specific knowledge. More generally, we argue that a combination of knowledge-based and dynamic learning approaches leads to successful solutions. We propose several strategies to automatically select or construct the appropriate background knowledge for several data wrangling scenarios. The key idea is to choose the best specialised background primitives according to the context of the particular problem to solve. We address two scenarios. In the first one, we handle personal data (names, dates, telephone numbers, etc.) that are presented in very different string formats and have to be transformed into a unified format. The problem is how to build a compositional transformation from a large set of primitives in the domain (e.g., handling months, years, days of the week, etc.). We develop a system (BK-ADAPT) that guides the search through the background knowledge by extracting several meta-features from the examples characterising the column domain. In the second scenario, we face the transformation of data matrices in generic programming languages such as R, using an input matrix and some cells of the output matrix as examples.
We also develop a system guided by a tree-based search (AUTOMAT[R]IX) that uses several constraints, prior primitive probabilities and textual hints to efficiently learn the transformations. With these systems, we show that combining inductive programming with the dynamic selection of appropriate primitives from the background knowledge improves on the results of other state-of-the-art, more specific data wrangling approaches.
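The idea of picking domain primitives from meta-features of the example column can be caricatured in a few lines. The meta-features, domains and primitive names below are invented for illustration; they are not BK-ADAPT's actual features or primitive sets.

```python
def meta_features(values):
    """Crude meta-features characterising a column of example strings."""
    return {
        "has_digits": any(ch.isdigit() for v in values for ch in v),
        "has_separators": any(sep in v for v in values for sep in "/:"),
    }

# Hypothetical domain-specific primitive sets an induction search could use:
DOMAIN_PRIMITIVES = {
    "date": ["get_day", "get_month", "get_year"],
    "name": ["get_first", "get_last", "get_initials"],
}

def select_primitives(values):
    """Pick the specialised primitives whose domain matches the examples,
    so the search only composes primitives relevant to the column."""
    f = meta_features(values)
    domain = "date" if f["has_digits"] and f["has_separators"] else "name"
    return domain, DOMAIN_PRIMITIVES[domain]

domain, primitives = select_primitives(["25/03/1997", "04/12/2001"])  # -> "date"
```

The design point is that the background knowledge is chosen per problem instance, keeping the induction search tractable as the number of available primitives grows.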
This research was supported by the Spanish MECD Grant FPU15/03219, and partially by the Spanish MINECO grants TIN2015-69175-C4-1-R (Lobass) and RTI2018-094403-B-C32-AR (FreeTech), and by the ERC Advanced Grant Synthesising Inductive Data Models (Synth) in Belgium.
Contreras Ochando, L. (2020). Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160724
Francq, Pascal. "Structured and collaborative search: an integrated approach to share documents among users". Doctoral thesis, Universite Libre de Bruxelles, 2003. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211315.
Today, document management is one of the most important problems in computer science. The aim of this thesis is to propose a document management system based on an approach called structured and collaborative search. Its essential characteristics are:
Since users have several centres of interest, they are described by profiles, each profile corresponding to a particular interest. This is the structured part of the system.
To build a description of the profiles, users assess documents according to their interest.
The system groups similar profiles to form a number of virtual communities.
Once the virtual communities are defined, documents judged interesting by some users of a community can be shared throughout the community. This is the collaborative part of the system.
The system was validated on several document corpora using a precise methodology and offers promising results.
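The grouping of similar profiles into virtual communities can be sketched with a simple similarity threshold. The cosine measure and the greedy assignment below are illustrative assumptions, not the clustering method actually used in the thesis.

```python
def cosine(u, v):
    """Cosine similarity between two interest vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def group_profiles(profiles, threshold=0.8):
    """Greedy grouping: a profile joins the first community whose first
    member (its representative) it resembles, otherwise it starts a new one."""
    communities = []  # each community: list of (name, vector)
    for name, vec in profiles:
        for members in communities:
            if cosine(vec, members[0][1]) >= threshold:
                members.append((name, vec))
                break
        else:
            communities.append([(name, vec)])
    return communities

# Interest vectors over two topics; p1 and p2 share interests, p3 does not:
profiles = [("p1", (1.0, 0.0)), ("p2", (0.9, 0.1)), ("p3", (0.0, 1.0))]
communities = group_profiles(profiles)  # two communities
```

Once communities exist, a document rated highly by one member can be recommended to the rest of that member's community, which is the collaborative part described above.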
Doctorate in applied sciences
Mnie, Filali Imane. "Distribution multi-contenus sur Internet". Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4068/document.
In this study, we focus on peer-to-peer (P2P) protocols, which represent a promising solution for low-cost data dissemination and content delivery on the Internet. We first performed a behavioural study of various P2P protocols for file sharing (content distribution without time constraints) and live streaming. Concerning file sharing, we showed the impact of Hadopi on users' behaviour and discussed the effectiveness of protocols according to content type, based on users' choices. BitTorrent appeared as the most efficient approach during our study, especially for large content. As for streaming, we studied the quality of service of Sopcast, a live distribution network that accounts for more than 60% of P2P-broadcast live events. Our in-depth analysis of these two distribution modes led us to focus on the BitTorrent protocol because of its proven efficiency in file sharing and the fact that it is open source. In the second part of the thesis, we proposed and implemented a new protocol based on BitTorrent, in a controlled environment. The modifications we propose increase the efficiency of the protocol through improved dissemination of metadata (the rarest piece), both for live streaming and file sharing. An enhanced version is introduced with a push method, where nodes that lag behind receive extra service so as to improve the overall performance.
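The "rarest piece" heuristic the modified protocol builds on can be illustrated in a few lines. This is a generic sketch of BitTorrent-style rarest-first selection, not the thesis's actual implementation.

```python
from collections import Counter

def rarest_first(have, peers_have):
    """Among the pieces we still need, pick the one announced by the
    fewest peers; ties break on the lower piece index."""
    counts = Counter()
    for pieces in peers_have:
        counts.update(pieces)  # tally how many peers hold each piece
    candidates = [(count, piece) for piece, count in counts.items()
                  if piece not in have]
    return min(candidates)[1] if candidates else None

# Three peers announce their pieces; piece 2 is the rarest one we lack:
pick = rarest_first(have={0}, peers_have=[{0, 1}, {0, 1, 2}, {1}])  # -> 2
```

Prioritising rare pieces keeps every piece well replicated in the swarm, which is why improving the dissemination of this availability metadata benefits both file sharing and live distribution.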
PANTINI, SARA. "Analysis and modelling of leachate and gas generation at landfill sites focused on mechanically-biologically treated waste". Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2013. http://hdl.handle.net/2108/203393.