Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of sixth-generation (6G) communication systems. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to a wide range of mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by exploiting over-the-air computation. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the effectiveness of the proposed algorithm and, notably, show improved communication efficiency without compromising accuracy.
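The abstract gives no implementation details; the following minimal NumPy sketch only illustrates the "pretraining-and-fine-tuning" pattern, with over-the-air aggregation modeled as a noisy analog sum of simultaneously transmitted client updates. The Reptile-style meta-update, the least-squares tasks, and names such as `over_the_air_aggregate` are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_fine_tune(w, X, y, lr=0.05, steps=5):
    # A few local SGD steps on a client's least-squares task (personalization).
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def over_the_air_aggregate(updates, noise_std=0.01):
    # Analog over-the-air computation: clients transmit simultaneously, the
    # multiple-access channel superimposes (sums) their signals, and receiver
    # noise is added; the server recovers a noisy average in one channel use.
    superimposed = updates.sum(axis=0) + rng.normal(0.0, noise_std, updates.shape[1])
    return superimposed / updates.shape[0]

# Synthetic non-IID clients: each has its own optimum near a shared direction.
d, n_clients = 5, 8
shared = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    w_star = shared + 0.3 * rng.normal(size=d)
    clients.append((X, X @ w_star))

w_meta = np.zeros(d)
for _ in range(30):  # "pretraining": Reptile-style meta-rounds of FL
    deltas = np.stack([local_fine_tune(w_meta.copy(), X, y) - w_meta
                       for X, y in clients])
    w_meta = w_meta + over_the_air_aggregate(deltas)

# "fine-tuning": each client personalizes the meta-initialization locally.
X0, y0 = clients[0]
w_personal = local_fine_tune(w_meta.copy(), X0, y0)
print("client-0 MSE:", np.mean((X0 @ w_personal - y0) ** 2))
```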
This paper presents the dependency-aware offloading framework (DeAOff), an algorithm designed to optimize the deployment of generative-AI decoder models in mobile edge computing (MEC) environments. Such decoder models pose significant challenges due to their inter-layer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process as a directed acyclic graph (DAG) and applying constraint relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that our method reduces task completion time by up to 20% and energy consumption by approximately 30% compared with traditional methods. These outcomes underscore the robustness of our solution in managing complex sequential dependencies and dynamic MEC conditions, thereby enhancing quality of service. Our work thus provides a practical and efficient resource optimization strategy for deploying such models in resource-constrained MEC scenarios.
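As a rough illustration of the dependency-aware allocation phase, the sketch below topologically orders a toy decoder DAG and greedily places each layer on the device or the edge, charging a transfer cost whenever a dependency crosses the boundary. The layer costs, the greedy placement rule, and the omission of the energy-optimization phase are simplifying assumptions, not DeAOff itself.

```python
from collections import deque

# Hypothetical per-layer costs: (local run time, edge run time, transfer time).
layers = {
    "embed":   (4.0, 1.0, 0.5),
    "block1":  (6.0, 1.5, 0.8),
    "block2":  (6.0, 1.5, 0.8),
    "lm_head": (3.0, 0.8, 0.4),
}
# Chain-like decoder DAG with one skip edge for illustration.
deps = {"embed": [], "block1": ["embed"], "block2": ["block1"],
        "lm_head": ["block2", "embed"]}

def topo_order(deps):
    # Kahn's algorithm over the layer-dependency DAG.
    indeg = {v: len(p) for v, p in deps.items()}
    succ = {v: [] for v in deps}
    for v, ps in deps.items():
        for p in ps:
            succ[p].append(v)
    q = deque(v for v, d in indeg.items() if d == 0)
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for s in succ[v]:
            indeg[s] -= 1
            if indeg[s] == 0:
                q.append(s)
    return order

def schedule(layers, deps):
    # Phase 1 (dependency-aware allocation): place each layer on the site
    # (device or edge) that minimizes its ready time plus run time, paying a
    # transfer cost when a dependency crosses the device/edge boundary.
    finish, place = {}, {}
    for v in topo_order(deps):
        t_local, t_edge, t_xfer = layers[v]
        best = None
        for site, run in (("device", t_local), ("edge", t_edge)):
            ready = 0.0
            for p in deps[v]:
                arrive = finish[p] + (t_xfer if place[p] != site else 0.0)
                ready = max(ready, arrive)
            if best is None or ready + run < best[0]:
                best = (ready + run, site)
        finish[v], place[v] = best
    return finish, place

finish, place = schedule(layers, deps)
print(place, "makespan:", max(finish.values()))
```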
The advent of the Internet-of-Everything era has led to the increased use of mobile edge computing. The rise of artificial intelligence offers many possibilities for meeting users' low-latency task-offloading demands, but existing approaches rigidly assume that a terminal has only one task to offload in each time slot. In practical scenarios, a terminal often has numerous computing tasks to execute, so subsequent task offloading incurs a cumulative delay. Efficiently processing multiple computing tasks on the terminal has therefore become highly challenging. To address the low-latency offloading requirements of multiple computational tasks on terminal devices, we propose a terminal multitask parallel offloading algorithm based on deep reinforcement learning. Specifically, we first establish a mobile edge computing system model consisting of a single edge server and multiple terminal users. We then model the task-offloading decision problem as a Markov decision process and solve it with the Dueling Deep Q-Network algorithm to obtain the optimal offloading strategy. Experimental results demonstrate that, under the same constraints, our proposed algorithm reduces the average system latency.
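A minimal PyTorch sketch of the dueling architecture at the core of this approach is shown below: a shared trunk feeds separate value and advantage heads, combined as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a). The state encoding (per-task sizes plus channel state) and the action space are assumptions for illustration, not the paper's MDP formulation.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    # Dueling architecture: shared trunk, separate value and advantage heads,
    # combined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv = nn.Linear(hidden, n_actions)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# Illustrative state: queued-task sizes plus channel state; actions: which of
# the queued tasks to offload in this slot (an assumed encoding).
state_dim, n_actions = 8, 4
qnet = DuelingDQN(state_dim, n_actions)
s = torch.randn(1, state_dim)
action = qnet(s).argmax(dim=-1).item()
print("greedy offloading action:", action)
```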
Federated semi-supervised learning (FSSL) faces two major challenges: the scarcity of labeled data across clients and the non-independent and identically distributed (non-IID) nature of data among clients. To address these issues, we propose diffusion model-based data synthesis aided FSSL (DDSA-FSSL), a novel approach that leverages a diffusion model (DM) to generate synthetic data, thereby bridging the gap between heterogeneous local data distributions and the global data distribution. In the proposed DDSA-FSSL, each client addresses the scarcity of labeled data by using a classifier trained via federated learning to pseudo-label its unlabeled data. The DM is then collaboratively trained on both the labeled and the precision-optimized pseudo-labeled data, enabling clients to generate synthetic samples for classes that are absent from their labeled datasets. As a result, the disparity between local and global distributions is reduced, and clients can create enriched synthetic datasets that better align with the global data distribution. Extensive experiments on various datasets and non-IID scenarios demonstrate the effectiveness of DDSA-FSSL; for example, accuracy on the CIFAR-10 dataset with 10% labeled data improves from 38.46% to 52.14%.
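The sketch below illustrates two steps of this pipeline under stated assumptions: confidence-thresholded ("precision-optimized") pseudo-labeling, and synthesis of samples for classes missing from a client's labeled set. A toy softmax classifier and a Gaussian sampler stand in for the FL-trained classifier and the collaboratively trained diffusion model, and the threshold value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_label(classifier, X_unlabeled, threshold=0.6):
    # Keep only high-confidence predictions ("precision-optimized" selection).
    probs = classifier(X_unlabeled)              # (n, n_classes) softmax scores
    conf, labels = probs.max(axis=1), probs.argmax(axis=1)
    keep = conf >= threshold
    return X_unlabeled[keep], labels[keep]

def synthesize_missing_classes(sampler, local_classes, all_classes, n_per_class=100):
    # Query the generator for classes absent from the client's labeled set,
    # pulling the local distribution toward the global one.
    X_out, y_out = [], []
    for c in sorted(set(all_classes) - set(local_classes)):
        X_out.append(sampler(c, n_per_class))
        y_out.append(np.full(n_per_class, c))
    return np.concatenate(X_out), np.concatenate(y_out)

# Stand-ins: a toy softmax "classifier" and a Gaussian "diffusion" sampler.
W = rng.normal(size=(5, 3))
def toy_classifier(X):
    z = X @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def toy_sampler(c, n):  # placeholder for the collaboratively trained DM
    return rng.normal(loc=float(c), scale=0.5, size=(n, 5))

X_u = rng.normal(size=(200, 5))
X_p, y_p = pseudo_label(toy_classifier, X_u)
X_s, y_s = synthesize_missing_classes(toy_sampler, local_classes=[0],
                                      all_classes=[0, 1, 2])
print(f"{len(y_p)} pseudo-labeled samples kept, {len(y_s)} synthesized")
```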
The intelligent operation and management of distribution services is crucial for the stability of power systems. Integrating large language models (LLMs) with 6G edge intelligence enables customized management solutions. However, the adverse effects of false data injection (FDI) attacks on LLM performance cannot be overlooked. We therefore propose an FDI attack detection and LLM-assisted resource allocation algorithm for 6G edge-intelligence-empowered distribution power grids. First, we formulate a resource allocation optimization problem whose objective is to minimize the weighted sum of the global loss function and the total LLM fine-tuning delay under long-term privacy-entropy and energy-consumption constraints. We then decouple the long-term problem using virtual queues. An LLM-assisted deep Q-network (DQN) learns the resource allocation strategy, and an FDI attack detection mechanism ensures that fine-tuning stays on the correct path. Simulations demonstrate that the proposed algorithm achieves excellent convergence, delay, and security performance.
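To make the virtual-queue decoupling concrete, the sketch below replaces the LLM-assisted DQN with a simple greedy drift-plus-penalty rule and pairs it with a residual-threshold FDI check. All budgets, per-slot costs, and the detector are illustrative assumptions rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Virtual queues turn the long-term energy and privacy-entropy constraints into
# per-slot backlogs (Lyapunov drift-plus-penalty); all budgets are illustrative.
E_BUDGET, H_MIN, V = 1.0, 0.5, 10.0
Q_energy = Q_entropy = 0.0

def detect_fdi(measured, predicted, k=3.0, sigma=0.1):
    # Residual test: flag false data injection when a reported measurement
    # deviates from the model prediction by more than k standard deviations.
    return abs(measured - predicted) > k * sigma

for t in range(50):
    # Reported grid measurement, with an injected attack at slot 25.
    reported = 1.0 + rng.normal(0.0, 0.1) + (2.0 if t == 25 else 0.0)
    if detect_fdi(reported, predicted=1.0):
        continue  # discard the poisoned sample before it reaches fine-tuning

    # Candidate allocations and their per-slot (delay, energy, entropy) costs.
    actions = [(rng.uniform(0.5, 1.5),   # fine-tuning delay
                rng.uniform(0.3, 1.2),   # energy consumption
                rng.uniform(0.3, 0.9))   # privacy entropy
               for _ in range(4)]

    # Drift-plus-penalty: trade the delay objective against queue backlogs.
    delay, energy, entropy = min(
        actions, key=lambda a: V * a[0] + Q_energy * a[1] - Q_entropy * a[2])

    # Backlogs grow whenever a slot overshoots its long-term budget.
    Q_energy = max(Q_energy + energy - E_BUDGET, 0.0)
    Q_entropy = max(Q_entropy + H_MIN - entropy, 0.0)

print("final backlogs:", round(Q_energy, 3), round(Q_entropy, 3))
```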