Theses on the topic "Intelligent Edge Networks"
Consult the theses listed below for your research on the topic "Intelligent Edge Networks".
Mestoukirdi, Mohamad. "Reliable and Communication-Efficient Federated Learning for Future Intelligent Edge Networks". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS432.
In the realm of future 6G wireless networks, integrating the intelligent edge through the advent of AI signifies a momentous leap forward, promising revolutionary advancements in wireless communication. This integration fosters a harmonious synergy, capitalizing on the collective potential of these transformative technologies. Central to this integration is the role of federated learning, a decentralized learning paradigm that upholds data privacy while harnessing the collective intelligence of interconnected devices. By embracing federated learning, 6G networks can unlock a myriad of benefits for both wireless networks and edge devices. On one hand, wireless networks gain the ability to exploit data-driven solutions, surpassing the limitations of traditional model-driven approaches. In particular, leveraging real-time data insights will empower 6G networks to adapt, optimize performance, and enhance network efficiency dynamically. On the other hand, edge devices benefit from personalized experiences and tailored solutions catered to their specific requirements, with improved performance and reduced latency through localized decision-making, real-time processing, and reduced reliance on centralized infrastructure.

In the first part of the thesis, we tackle the problem of statistical heterogeneity in federated learning stemming from divergent data distributions across devices' datasets. Rather than training a conventional one-model-fits-all, which often performs poorly with non-IID data, we propose a user-centric set of rules that produces personalized models tailored to each user's objectives. To mitigate the prohibitive communication overhead associated with training a distinct personalized model for each user, users are partitioned into clusters based on the similarity of their objectives. This enables the collective training of cohort-specific personalized models, reducing the total number of personalized models trained and, in turn, the wireless resources consumed in transmitting model updates across bandwidth-limited wireless channels.

In the second part, our focus shifts towards integrating remote IoT devices into the intelligent edge by leveraging unmanned aerial vehicles (UAVs) as federated learning orchestrators. While previous studies have extensively explored the potential of UAVs as flying base stations or relays in wireless networks, their use in facilitating model training is still a relatively new area of research. In this context, we exploit UAV mobility to bypass the unfavorable channel conditions of rural areas and establish learning grounds for remote IoT devices. However, UAV deployment poses challenges in terms of scheduling and trajectory design. To this end, a joint optimization of UAV trajectory, device scheduling, and learning performance is formulated and solved using convex optimization techniques and graph theory.

In the third and final part of this thesis, we take a critical look at the communication overhead imposed by federated learning on wireless networks. While compression techniques such as quantization and sparsification of model updates are widely used, they often achieve communication efficiency at the cost of reduced model performance. To overcome this limitation, we employ over-parameterized random networks that approximate target networks through parameter pruning rather than direct optimization. This approach has been demonstrated to require transmitting no more than a single bit of information per model parameter. We show that state-of-the-art methods fail to capitalize on the full attainable advantages of this approach in terms of communication efficiency. Accordingly, we propose a regularized loss function that accounts for the entropy of the transmitted updates, resulting in notable improvements to communication and memory efficiency during federated training on edge devices without sacrificing accuracy.
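The clustered personalization described in the first part can be pictured compactly: group users by the similarity of their local updates, then aggregate one model per cohort. The sketch below is a minimal illustration of that idea, assuming cosine similarity over flattened update vectors; the function names and toy data are hypothetical, not the thesis's actual algorithm.

```python
# Illustrative sketch of cluster-based personalized federated learning:
# users are grouped by the similarity of their local updates, and one
# model is trained per cohort instead of one per user. The cosine
# similarity criterion and all names here are assumptions for
# illustration, not the thesis's exact method.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_users(updates: np.ndarray, num_clusters: int) -> np.ndarray:
    """Group users whose flattened model updates point in similar directions."""
    tree = linkage(updates, method="average", metric="cosine")
    return fcluster(tree, t=num_clusters, criterion="maxclust")

def fedavg(updates: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted FedAvg-style aggregation of the updates inside one cohort."""
    return np.average(updates, axis=0, weights=weights)

# Toy example: 6 users, 4-parameter models, two latent objectives.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=4), rng.normal(size=4)
updates = np.stack([base_a + 0.1 * rng.normal(size=4) for _ in range(3)]
                   + [base_b + 0.1 * rng.normal(size=4) for _ in range(3)])
labels = cluster_users(updates, num_clusters=2)
for c in np.unique(labels):
    members = updates[labels == c]
    print(f"cohort {c}: {members.shape[0]} users ->",
          fedavg(members, np.ones(members.shape[0])))
```

With two cohorts instead of six per-user models, only two sets of updates cross the bandwidth-limited channel per round, which is the communication saving the abstract refers to.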
Sigwele, Tshiamo, Yim Fun Hu, M. Ali, Jiachen Hou, M. Susanto and H. Fitriawan. "An intelligent edge computing based semantic gateway for healthcare systems interoperability and collaboration". IEEE, 2018. http://hdl.handle.net/10454/17552.
The use of Information and Communications Technology (ICT) in healthcare has the potential to minimize medical errors, reduce healthcare costs and improve collaboration between healthcare systems, which can dramatically improve healthcare service quality. However, interoperability between different healthcare systems (clinics/hospitals/pharmacies) remains an open research issue due to the lack of collaboration and exchange of healthcare information. To solve this problem, cross-healthcare-system collaboration is required. This paper proposes a conceptual semantic-based healthcare collaboration framework built on Internet of Things (IoT) infrastructure that offers secure, seamless cross-system exchange of information and knowledge between different healthcare systems, in a form readable by both machines and humans. In the proposed framework, an intelligent semantic gateway is introduced, in which a web application with a RESTful Application Programming Interface (API) exposes the healthcare information of each system for collaboration. A case study in which patient data was exposed between two different healthcare systems was practically demonstrated, allowing a pharmacist to access a patient's electronic prescription from the clinic.
This work was supported by a British Council Institutional Links grant under the BEIS-managed Newton Fund.
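The gateway concept is easiest to picture as a small RESTful service. The following Python/Flask sketch is purely illustrative: the route, the X-Api-Key check, and the record fields are assumptions made for this example, not the paper's actual API.

```python
# Hypothetical sketch of a semantic-gateway endpoint: one healthcare
# system exposes a patient's electronic prescription to an authorized
# partner system over a RESTful API. Route, auth scheme, and fields
# are illustrative assumptions, not the paper's implementation.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Stand-in for the clinic's health-record store.
PRESCRIPTIONS = {
    "patient-001": {"medication": "amoxicillin", "dose_mg": 500,
                    "issued_by": "clinic-A"},
}
TRUSTED_SYSTEMS = {"pharmacy-key-123"}  # pre-shared keys of partner systems

@app.route("/api/v1/prescriptions/<patient_id>", methods=["GET"])
def get_prescription(patient_id: str):
    if request.headers.get("X-Api-Key") not in TRUSTED_SYSTEMS:
        abort(401)  # only collaborating healthcare systems may read
    record = PRESCRIPTIONS.get(patient_id)
    if record is None:
        abort(404)
    # JSON keeps the exchange readable by both machines and humans.
    return jsonify({"patient_id": patient_id, "prescription": record})

if __name__ == "__main__":
    app.run(port=8080)
```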
Hasanaj, Enis, Albert Aveler and William Söder. "Cooperative edge deepfake detection". Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.
Kattadige, Chamara Manoj Madarasinghe. "Network and Content Intelligence for 360 Degree Video Streaming Optimization". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/29904.
Abernot, Madeleine. "Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.
In the last decades, the multiplication of edge devices in many industrial domains has drastically increased the amount of data to process and the complexity of the tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot handle this large amount of data efficiently. Thus, novel neuromorphic computing paradigms with distributed memory are being explored, mimicking the structure and data representation of biological neural networks. Lately, most neuromorphic-paradigm research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural and low-energy continuous data computation. Recently, oscillatory neural networks (ONNs) have appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators that emulate the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices, encourage renewed attention to ONNs for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections, performing pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the scope of ONN applications. The focus of this thesis is to study whether and how ONNs can solve meaningful edge AI applications, using a proof-of-concept digital implementation of the ONN paradigm on FPGA. First, it explores novel learning algorithms for OHNs, both unsupervised and supervised, to improve accuracy and to provide continual on-chip learning. Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design performing edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, and image classification, to robotics applications with obstacle avoidance.
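To make the OHN concept concrete, the following sketch simulates a tiny oscillatory Hopfield network with Kuramoto-style phase dynamics: binary patterns are stored in Hebbian couplings, and retrieval reads out relative phases (in-phase = +1, anti-phase = -1). This is a common mathematical abstraction of ONNs, not the thesis's digital FPGA design.

```python
# Minimal oscillatory Hopfield network sketch: phase oscillators with
# Hebbian couplings retrieve a stored binary pattern from a noisy probe.
# Dynamics and constants are textbook-style assumptions for illustration.
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: W = P^T P / N, with no self-coupling."""
    w = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def retrieve(w: np.ndarray, probe: np.ndarray, steps=200, dt=0.1):
    """Relax oscillator phases; read out phases relative to neuron 0."""
    rng = np.random.default_rng(1)
    theta = np.pi * (probe < 0) + 0.1 * rng.normal(size=probe.size)
    for _ in range(steps):
        # dtheta_i/dt = sum_j W_ij * sin(theta_j - theta_i)
        theta += dt * (w * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.sign(np.cos(theta - theta[0]))

pattern = np.array([[1, 1, -1, -1, 1, -1]], dtype=float)
w = store(pattern)
noisy = pattern[0].copy()
noisy[0] *= -1  # corrupt one bit of the probe
print("recovered:", retrieve(w, noisy))
print("target:   ", pattern[0])
```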
Laroui, Mohammed. "Distributed edge computing for enhanced IoT devices and new generation network efficiency". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7078.
Traditional cloud infrastructure faces a series of challenges due to the centralization of computing, storage, and networking in a small number of data centers, and the long distance between connected devices and remote data centers. To meet these challenges, edge computing is a promising approach that provides resources closer to IoT devices. In the cloud computing model, compute resources and services are often centralized in large data centers that end users access over the network. This model has important economic value and efficient resource-sharing capabilities. New forms of end-user experience, such as the Internet of Things, require computing resources close to end-user devices at the network edge. To meet this need, edge computing relies on a model in which computing resources are distributed to the edge of the network as needed, decentralizing data processing from the cloud to the edge as far as possible. Thus, actionable information based on time-varying data can be obtained quickly. In this thesis, we propose novel optimization models for resource utilization at the network edge along two edge computing research directions: service offloading and vehicular edge computing. We study different use cases in each research direction. First, for service offloading, we propose optimal algorithms for service placement at the network edge (tasks, Virtual Network Functions (VNFs), and Service Function Chains (SFCs)) that take computing resource constraints into account. Second, for vehicular edge computing, we propose exact models for maximizing the coverage of vehicles by both taxis and Unmanned Aerial Vehicles (UAVs) for online video streaming applications. In addition, we propose optimal edge-autopilot VNF offloading at the network edge for autonomous driving. The evaluation results show the efficiency of the proposed algorithms in small-scale networks in terms of time, cost, and resource utilization. To deal with dense networks with large numbers of devices, and with scalability issues, we propose large-scale algorithms that support huge numbers of devices, data, and user requests. Heuristic algorithms are proposed for SFC orchestration and for maximizing the coverage of mobile edge servers (vehicles). Moreover, artificial intelligence algorithms (machine learning, deep learning, and deep reinforcement learning) are used for 5G VNF slice placement, edge-autopilot VNF placement, and autonomous UAV navigation. The numerical results are close to those of the exact algorithms while being far more time-efficient.
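As an illustration of the service-placement problems studied here, the sketch below places the VNFs of a service function chain on capacity-limited edge nodes with a simple greedy heuristic. It is a toy stand-in under assumed inputs, not one of the thesis's exact or heuristic algorithms.

```python
# Toy SFC placement: assign each VNF in a chain to an edge node with
# enough spare CPU, trying larger-capacity nodes first. Illustrative
# only; the thesis uses exact optimization models and richer heuristics.
from typing import Dict, List, Optional

def place_sfc(chain: List[Dict], nodes: Dict[str, float]) -> Optional[Dict[str, str]]:
    """Return a VNF -> node mapping, or None if capacity is exhausted."""
    free = dict(nodes)  # remaining CPU per node
    placement = {}
    for vnf in chain:
        # Try nodes with the most spare CPU first (best-fit-decreasing style).
        candidates = sorted(free, key=lambda n: free[n], reverse=True)
        chosen = next((n for n in candidates if free[n] >= vnf["cpu"]), None)
        if chosen is None:
            return None  # no feasible placement under the capacity limits
        free[chosen] -= vnf["cpu"]
        placement[vnf["name"]] = chosen
    return placement

chain = [{"name": "firewall", "cpu": 2.0},
         {"name": "nat", "cpu": 1.0},
         {"name": "video-optimizer", "cpu": 3.0}]
edge_nodes = {"edge-1": 4.0, "edge-2": 3.5}
print(place_sfc(chain, edge_nodes))
```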
Minerva, Roberto. "Will the Telco survive to an ever changing world ? Technical considerations leading to disruptive scenarios". Thesis, Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0011/document.
The telecommunications industry is going through a difficult phase because of profound technological changes, mainly driven by the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos). In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos), and the growing importance of software-based solutions that introduce flexibility (compared to the static systems of telecom operators). This thesis develops, proposes and compares plausible future scenarios based on solutions and approaches that will be technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.
PELUSO, VALENTINO. "Optimization Tools for ConvNets on the Edge". Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2845792.
Busacca, Fabio Antonino. "AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge". Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.
Lanzarone, Lorenzo Biagio. "Manutenzione predittiva di macchinari industriali tramite tecniche di intelligenza artificiale: una valutazione sperimentale". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22853/.
Barreto, Ricardo Manuel Carriço. "IoT Edge Computing Neural Networks on Reconfigurable Logic". Master's thesis, 2019. http://hdl.handle.net/10316/87970.
In recent years, we have seen the emergence of AI in ever wider application areas and in more devices. However, in the IoT ecosystem there is a tendency to use cloud computing to store and process the vast amounts of information generated by these devices, due to their limited local resources. This dissertation proposes the implementation of smart IoT devices able to provide specific information derived from the raw data produced by a sensor, e.g. a camera or microphone, instead of the raw data itself. The focus is embedded image processing using Convolutional Neural Networks (CNNs). This approach is clearly distinct from the current trend of IoT devices using cloud computing to process the collected data. We propose a twist on the established paradigm and pursue an edge computing approach. Since we are targeting small and simple devices, we need a low-power solution for the CNN computation. SoC devices have gained popularity due to their heterogeneity. In our work, we use a system that combines an ARM processing unit with an FPGA, maintaining low power consumption while taking advantage of the FPGA to achieve high performance. HADDOC2 is used as a tool to convert the CNN into VHDL code to be synthesized on the FPGA, while the ARM runs a system that manages the entire process, using communication bridges to the FPGA and IoT communication protocols to send the processed information. The result is a system with a CNN implemented on the FPGA, with the HPS managing the entire process and communicating with the outside world through MQTT.
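The "send results, not raw data" pattern at the heart of this work can be shown in a few lines of MQTT. The sketch below, using the paho-mqtt client, publishes a classification result instead of the camera frame; the broker address, topic, and payload fields are illustrative assumptions.

```python
# Hypothetical edge-device publisher: after the FPGA-side CNN classifies
# a frame, only the compact result crosses the network, not the image.
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.org", "iot/camera-01/inference"

def publish_inference(label: str, confidence: float) -> None:
    client = mqtt.Client()
    client.connect(BROKER, 1883)           # plain MQTT; TLS omitted for brevity
    payload = json.dumps({"label": label, "confidence": confidence})
    client.publish(TOPIC, payload, qos=1)  # deliver at least once
    client.disconnect()

publish_inference("person", 0.93)
```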
Wong, Jun Hua. "Efficient Edge Intelligence in the Era of Big Data". Thesis, 2021. http://hdl.handle.net/1805/26385.
Smart wearables, an emerging paradigm for capturing vital big data, have been attracting intensive attention. However, one crucial problem is their power hunger: continuous data streaming consumes energy dramatically and requires devices to be charged frequently. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in the era of smart health. In recent years, there has also been growing interest in edge intelligence for emerging instantaneous big data inference. However, inference algorithms, especially deep learning, usually have heavy computation requirements, which greatly limits their deployment on the edge. We take special interest in smart-health wearable big data mining and inference. Targeting deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, by transferring knowledge from a teacher model to an on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. Besides, we propose to further introduce adversarial robustness to the student model by stimulating it to correctly identify inputs that have adversarial perturbations. Experiments demonstrate that the knowledge-distillation student model has comparable performance to the heavy teacher model but a substantially smaller model size. With adversarial learning, the student model effectively preserves its robustness. In this way, we demonstrate that the framework with knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference but also preserve robustness in the face of perturbed inputs.
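The knowledge-distillation objective described above combines a softened teacher-matching term with the usual cross-entropy. Below is a minimal PyTorch sketch, with illustrative temperature and mixing values rather than the thesis's exact recipe.

```python
# Standard distillation loss: the student matches the teacher's softened
# output distribution while still fitting the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    # Soft-target term: KL between softened teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients back to the usual magnitude
    # Hard-target term: ordinary cross-entropy against the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```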
Gupta, Rachana Ashok. "An edge-detection and HPF based intelligent space: a network based integrated navigation system". 2006. http://www.lib.ncsu.edu/theses/available/etd-05072006-132525/unrestricted/etd.pdf.
(10716561), Sanket Ramesh Joshi. "HBONEXT: AN EFFICIENT DNN FOR LIGHT EDGE EMBEDDED DEVICES". Thesis, 2021.
"Study of Knowledge Transfer Techniques For Deep Learning on Edge Devices". Master's thesis, 2018. http://hdl.handle.net/2286/R.I.49325.
Kalgaonkar, Priyank B. "AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources". Thesis, 2021. http://dx.doi.org/10.7912/C2/64.
The research work presented within this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to prune (remove) redundant and insignificant elements that are either irrelevant or do not affect the performance of the network. Cardinality, a new dimension in addition to the existing spatial dimensions, and a class-balanced focal loss function, a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm in order to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets, CIFAR-10, CIFAR-100 and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in RTMaps Remote Studio's console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error) and ImageNet (7.91% single-model, single-crop top-5 error), and up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% loss in accuracy. It thus performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
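The depthwise separable convolution that CondenseNeXt substitutes for group convolutions factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 channel mixer, cutting parameters and FLOPs. A minimal PyTorch sketch follows (channel sizes are arbitrary; this is not the actual CondenseNeXt block).

```python
# Depthwise separable convolution: per-channel spatial filtering
# (depthwise) followed by 1x1 cross-channel mixing (pointwise).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch -> one spatial filter per input channel (depthwise).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1 convolution mixes information across channels (pointwise).
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(32, 64)
standard = nn.Conv2d(32, 64, 3, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(block), "vs", count(standard), "parameters")  # ~2.4k vs ~18.5k
```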
Ganin, Iaroslav. "Natural image processing and synthesis using deep learning". Thèse, 2019. http://hdl.handle.net/1866/23437.
In the present thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled using heavily hand-engineered ad hoc methods. Until recently, a typical computer vision system consisted of a sequence of independent modules that barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes major advantage of the researcher's domain expertise. This strength turns into a weakness if some input scenarios are overlooked in the algorithm design process. With the rapidly increasing volumes and varieties of data, and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular task of either image analysis or synthesis and presents a solution based on a "deep" backbone. In the first article, we deal with the classic low-level vision problem of edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In our second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be copied directly from the input. This technique bears several significant advantages over the naive black-box neural approach: it retains the level of detail of the original images, does not introduce artifacts due to insufficient capacity of the underlying neural network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we diverge slightly from pure computer vision and study the more general problem of domain adaptation. There, we introduce a novel training-time algorithm (i.e., adaptation is attained by using an auxiliary objective in addition to the main one). We seek to extract features that maximally confuse a dedicated network called the domain classifier while remaining useful for the task at hand. The domain classifier is learned simultaneously with the features and attempts to tell whether those features come from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural-network-based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by the off-the-shelf rendering software used by humans to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus more on global structure. The latent space of our model is easily interpretable by design and provides means for predictable image manipulation.
We test our approach on several popular datasets and demonstrate its power and flexibility.
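The domain-adaptation scheme in the third article, where features are trained to confuse a jointly learned domain classifier, is commonly implemented with a gradient reversal layer: identity in the forward pass, negated gradient in the backward pass. Below is a minimal PyTorch sketch of that mechanism in isolation, not the thesis's full training pipeline.

```python
# Gradient reversal layer: forward is the identity, but gradients flowing
# back into the feature extractor are negated (and scaled by lambd), so
# features are optimized to fool the domain classifier.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negated, scaled gradient for x; no gradient for lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Toy check: the feature's gradient is the negative of the usual one.
feat = torch.ones(3, requires_grad=True)
grad_reverse(feat, lambd=1.0).sum().backward()
print(feat.grad)  # tensor([-1., -1., -1.])
```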