April 2024 Vol. 21 No. 4  
  
  • PHYSICAL AND FUNDAMENTALS
  • PHYSICAL AND FUNDAMENTALS
    Jiahao Huo, Jin Zhu, Zuqing Zhu, Xiaoying Zhang, Shaonan Liu, Jianlong Tao, Huangfu Wei
    This paper proposes two algorithms, DA-M and RF-M, for reducing the impact of multipath interference (MPI) on intensity modulation direct detection (IM-DD) systems, particularly four-level pulse amplitude modulation (PAM4) systems. DA-M reduces the fluctuation by averaging the signal in blocks, while RF-M estimates the MPI by subtracting the decision value of the corresponding block from the mean value of a signal block, and then generates interference-reduced samples by subtracting the product of the corresponding MPI estimate and the weighting factor from the signal. This paper first proposes separating the signal into multiple blocks before decision-making, which significantly reduces the complexity of DA-M and RF-M. Simulation results show that the MPI noise of a 28 GBaud IM-DD system under linewidths of 1e5 Hz, 1e6 Hz and 1e7 Hz can be effectively alleviated.
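A minimal numpy sketch of the block-averaging idea described above, assuming the received PAM4 samples are split into fixed-size blocks and the slowly varying MPI-induced offset is estimated from each block's mean; the block size, signal model, and function name are illustrative assumptions, not the authors' implementation of DA-M.

```python
import numpy as np

def da_m_block_average(rx, block_size=64):
    """Illustrative DA-M-style fluctuation reduction: remove the slowly varying
    per-block offset (attributed to MPI) while keeping the PAM4 levels."""
    n_blocks = len(rx) // block_size
    x = rx[:n_blocks * block_size].reshape(n_blocks, block_size)
    block_means = x.mean(axis=1, keepdims=True)     # slow fluctuation estimate per block
    corrected = x - (block_means - rx.mean())       # re-center each block on the global mean
    return corrected.ravel()

# Toy PAM4 signal with a slowly varying MPI-like offset
rng = np.random.default_rng(0)
symbols = rng.choice([-3, -1, 1, 3], size=4096).astype(float)
mpi = 0.4 * np.sin(2 * np.pi * np.arange(4096) / 1500)   # slow interference term
rx = symbols + mpi + 0.1 * rng.standard_normal(4096)
cleaned = da_m_block_average(rx)
```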
  • PHYSICAL AND FUNDAMENTALS
    Jiwei Zhao, Jiacheng Chen, Zeyu Sun, Yuhang Shi, Haibo Zhou, Xuemin Shen
    As the demand for high-quality services proliferates, an innovative network architecture, the fully-decoupled RAN (FD-RAN), has emerged for more flexible spectrum resource utilization and lower network costs. However, with the decoupling of uplink base stations and downlink base stations in FD-RAN, the traditional transmission mechanism, which relies on real-time channel feedback, is not suitable, as the receiver is unable to feed back accurate and timely channel state information to the transmitter. This paper proposes a novel transmission scheme that does not rely on physical layer channel feedback. Specifically, we design a radio map based complex-valued precoding network (RMCPNet) model, which outputs the base station precoding based on user location. RMCPNet comprises multiple subnets, with each subnet responsible for extracting unique modal features from diverse input modalities. Furthermore, the multi-modal embeddings derived from these distinct subnets are integrated within the information fusion layer, culminating in a unified representation. We also develop a specific RMCPNet training algorithm that employs the negative spectral efficiency as the loss function. We evaluate the performance of the proposed scheme on the public DeepMIMO dataset and show that RMCPNet can achieve 16% and 76% performance improvements over the conventional real-valued neural network and statistical codebook approaches, respectively.
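As a rough illustration of the negative spectral efficiency loss mentioned above, the sketch below evaluates -log2 det(I + H W W^H H^H / sigma^2) for a complex channel H and precoder W in numpy; the array shapes, noise power, and the way RMCPNet would map a user location to W are assumptions rather than the paper's actual model.

```python
import numpy as np

def negative_spectral_efficiency(H, W, noise_power=1.0):
    """Negative spectral efficiency for a MIMO link.
    H: (n_rx, n_tx) complex channel, W: (n_tx, n_streams) precoder."""
    n_rx = H.shape[0]
    cov = H @ W @ W.conj().T @ H.conj().T / noise_power
    se = np.log2(np.linalg.det(np.eye(n_rx) + cov).real)
    return -se  # minimizing this loss maximizes spectral efficiency

# Toy example: random channel and precoder standing in for RMCPNet's output
rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = (rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))) / np.sqrt(2)
print(negative_spectral_efficiency(H, W))
```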
  • PHYSICAL AND FUNDAMENTALS
    Yue Ma, Ruiqian Ma, Zhi Lin, Weiwei Yang, Yueming Cai, Chen Miao, Wen Wu
    In this paper, the covert age of information (CAoI), which characterizes the timeliness and covertness performance of communication, is investigated for the first time in short-packet covert communication with a time modulated retrodirective array (TMRDA). Specifically, the TMRDA is designed to maximize the antenna gain in the target direction while the side lobes are sufficiently suppressed. On this basis, the covertness constraint and the CAoI are derived in closed form. To facilitate the covert transmission design, the transmit power and block-length are jointly optimized to minimize the CAoI, which demonstrates the trade-off between covertness and timeliness. Our results illustrate that there exists an optimal block-length that yields the minimum CAoI, and that the presented optimization results achieve enhanced performance compared with the fixed block-length case. Additionally, we observe that a smaller beam pointing error at Bob leads to improvements in the CAoI.
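The joint optimization of transmit power and block-length can be pictured as a two-dimensional search under a covertness constraint. The sketch below grid-searches hypothetical caoi() and covertness_ok() stand-ins, since the paper's closed-form expressions are not reproduced here; every constant in it is illustrative.

```python
import numpy as np

def caoi(power, blocklength):
    """Hypothetical stand-in for the closed-form covert AoI:
    longer blocks add delay, but very short blocks fail decoding more often."""
    decode_fail = np.exp(-0.05 * power * blocklength)      # illustrative only
    return blocklength * (1.0 + 2.0 * decode_fail)

def covertness_ok(power, blocklength, budget=0.1):
    """Hypothetical covertness constraint: detection risk grows with power and block-length."""
    return power * np.sqrt(blocklength) * 1e-3 <= budget

best = None
for p in np.linspace(0.1, 5.0, 50):            # transmit power grid
    for n in range(50, 2001, 50):              # block-length grid (channel uses)
        if covertness_ok(p, n):
            val = caoi(p, n)
            if best is None or val < best[0]:
                best = (val, p, n)
print("min CAoI %.2f at power %.2f, block-length %d" % best)
```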
  • PHYSICAL AND FUNDAMENTALS
    Fan Jiang, Junwei Qin, Lei Liu, Hui Tian
    The Internet of Medical Things (IoMT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing capability of edge devices and the coupling relationships among tasks, IoMT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computation offloading policy for multiple user devices (UDs) that exploits device-to-device (D2D) communication and the multi-access edge computing (MEC) technique in the IoMT scenario. Specifically, to minimize the total delay and energy consumption with respect to the requirements of IoMT, we first analyze and model in detail the local execution, MEC execution, D2D execution, and associated-task offloading exchange models. Consequently, the offloading scheme for the associated tasks of multiple UDs is formulated as a mixed-integer nonconvex optimization problem. Considering the advantages of deep reinforcement learning (DRL) in processing tasks with coupling relationships, a Double DQN based associative tasks computing offloading (DDATO) algorithm is then proposed to obtain the optimal solution, which can make the best offloading decision under the condition that the tasks of UDs are associative. Furthermore, to reduce the complexity of the DDATO algorithm, a cache-aided procedure is introduced before the data training process, which avoids redundant offloading and computing for tasks that have already been cached by other UDs. In addition, we use a dynamic ε-greedy strategy in the action selection stage of the algorithm, thus preventing the algorithm from falling into a locally optimal solution. Simulation results demonstrate that, compared with other existing methods for associative task models with different structures in the IoMT network, the proposed algorithm can lower the total cost more effectively and efficiently while also providing a tradeoff between delay and energy consumption tolerance.
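A small sketch of a dynamic ε-greedy action selection of the kind mentioned above, where ε decays over training so exploration gradually gives way to exploitation; the decay schedule, limits, and Q-value shapes are assumptions for illustration, not the DDATO settings.

```python
import numpy as np

class DynamicEpsilonGreedy:
    """Epsilon decays from eps_start to eps_end, reducing the chance of
    getting stuck in a locally optimal offloading policy early in training."""
    def __init__(self, n_actions, eps_start=1.0, eps_end=0.05, decay=0.995, seed=0):
        self.n_actions = n_actions
        self.eps = eps_start
        self.eps_end = eps_end
        self.decay = decay
        self.rng = np.random.default_rng(seed)

    def select(self, q_values):
        action = (int(self.rng.integers(self.n_actions))
                  if self.rng.random() < self.eps
                  else int(np.argmax(q_values)))
        self.eps = max(self.eps_end, self.eps * self.decay)   # anneal after each step
        return action

policy = DynamicEpsilonGreedy(n_actions=4)
action = policy.select(q_values=np.array([0.1, 0.7, 0.3, 0.2]))
```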
  • PHYSICAL AND FUNDAMENTALS
    Chuan Feng, Xu Zhang, Pengchao Han, Tianchun Ma, Xiaoxue Gong
    Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Due to the heterogeneity of the resources provided by devices, offloading multi-resource task requests to the edge cloud while maximizing benefits is a challenging problem. To address this issue, we mathematically model task requests with multiple subtasks. Then, the offloading problem for multi-resource task requests is proved to be NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm can effectively improve the benefit of task offloading with higher resource utilization compared with baseline algorithms.
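To make "task requests consisting of multiple subtasks that require different resource types" concrete, here is a small illustrative data model; the resource categories echo the abstract (computation, network, radio, location-based), but the field names and the feasibility check are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Subtask:
    # Required amount per resource type, e.g. {"cpu": 2.0, "bandwidth": 10.0, "radio": 1.0}
    demand: Dict[str, float]

@dataclass
class TaskRequest:
    user_id: int
    subtasks: List[Subtask] = field(default_factory=list)

def fits(node_capacity: Dict[str, float], subtask: Subtask) -> bool:
    """Check whether an edge node can host a subtask given its remaining capacity."""
    return all(node_capacity.get(r, 0.0) >= amount for r, amount in subtask.demand.items())

req = TaskRequest(user_id=1, subtasks=[
    Subtask({"cpu": 2.0, "bandwidth": 5.0}),
    Subtask({"cpu": 1.0, "radio": 1.0, "location": 1.0}),
])
node = {"cpu": 4.0, "bandwidth": 20.0, "radio": 2.0, "location": 1.0}
print([fits(node, s) for s in req.subtasks])
```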
  • MAC AND NETWORKS
  • MAC AND NETWORKS
    Honghan She, Yufan Cheng, Wenzihan Zhang, Yaohui Zhang, Yuheng Zhao, Haoran Shen, Ying Mou
    As modern electromagnetic environments become increasingly complex, the anti-interference performance of synchronization acquisition is becoming vital in wireless communications. With the rapid development of digital signal processing technologies, several synchronization acquisition algorithms for hybrid direct-sequence (DS)/frequency hopping (FH) spread spectrum communications have been proposed. However, these algorithms do not focus on the analysis and design of synchronization acquisition under typical interferences. In this paper, a synchronization acquisition algorithm based on frequency hopping pulses combining (FHPC) is proposed. Specifically, the proposed algorithm is composed of two modules: an adaptive interference suppression (IS) module and an adaptive combining decision module. The adaptive IS module mitigates the effect of interfered samples in the time domain or the frequency domain, and the adaptive combining decision module utilizes each frequency hopping pulse to construct an anti-interference decision metric and generates an adaptive acquisition decision threshold to complete the acquisition. Theory and simulation demonstrate that the proposed algorithm significantly enhances the anti-interference and anti-noise performance of synchronization acquisition for hybrid DS/FH communications.
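Under one plausible reading of the combining decision, a per-pulse correlation metric is normalized by an interference/noise estimate, summed across frequency hopping pulses, and compared against an adaptive threshold. The sketch below follows that generic non-coherent reading; it is not the paper's exact metric or threshold rule.

```python
import numpy as np

def fhpc_acquisition(pulses, local_pn, noise_floors, scale=4.0):
    """pulses: complex baseband snapshots, one per frequency hopping pulse;
    local_pn: local DS spreading sequence; noise_floors: per-pulse noise/interference
    power estimates. Returns the combined decision metric and the acquisition decision."""
    metric = 0.0
    for pulse, nf in zip(pulses, noise_floors):
        corr_energy = np.abs(np.vdot(local_pn, pulse)) ** 2   # per-pulse correlation energy
        metric += corr_energy / max(nf, 1e-12)                # heavily interfered pulses weigh less
    threshold = scale * len(pulses)                           # threshold adapts to pulses combined
    return metric, metric > threshold

# Toy check: eight aligned pulses in mild noise
rng = np.random.default_rng(2)
pn = np.sign(rng.standard_normal(128))
pulses = [pn + 0.3 * (rng.standard_normal(128) + 1j * rng.standard_normal(128)) for _ in range(8)]
print(fhpc_acquisition(pulses, pn, noise_floors=[0.09 * 128] * 8))
```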
  • MAC AND NETWORKS
    Hongchang Ke, Hui Wang, Hongbin Sun, Halvin Yang
    Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of the request tasks from one MWE over a long-term time period, it is vital to predeploy the particular service cachings required by the request tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server that hosts the service caching required by the tasks. Furthermore, we propose a multiagent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS can learn the near-optimal offloading and migrating decision-making policy by centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that the proposed MBOMS converges well after training and outperforms five baseline algorithms.
  • MAC AND NETWORKS
    Xiaoge Huang, Hongbo Yin, Bin Cao, Yongsheng Wang, Qianbin Chen, Jie Zhang
    Fog computing is considered a solution to accommodate the booming requirements from a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper, we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from the FNs to participate in the consensus process. With the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithms and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm can efficiently reduce the network overhead and obtain a considerable performance improvement compared to related algorithms in the previous literature.
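As an illustration of how a fog node could verify another node's identity from its public key before consensus, the sketch below signs and verifies a message with RSA using the standard Python cryptography package; the key size, padding choice, and message are generic examples, not parameters taken from the paper.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Each blockchain node holds an RSA key pair; the public key is known network-wide.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"block proposal from FN-12"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# A verifying fog node checks the signature against the sender's public key.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("identity verified: accept message for consensus")
except InvalidSignature:
    print("verification failed: reject as a potential malicious node")
```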
  • MAC AND NETWORKS
    Xinxin He, Xuan Qi, Wei Meng, Wei Liu, Changchuan Yin
    Wireless Power Transfer (WPT) technology can provide real-time power to many terminal devices in the Internet of Things (IoT) through millimeter wave (mmWave) transmission to support applications with large capacity and low latency. Although an intelligent reflecting surface (IRS) can be adopted to create effective virtual links to address the mmWave blockage problem, conventional solutions only adopt the IRS in the downlink from the Base Station (BS) to the users to enhance the received signal strength. In practice, the reflection of the IRS is also applicable to the uplink to improve the spectral efficiency. It is challenging to jointly optimize IRS beamforming and system resource allocation for wireless energy acquisition and information transmission. In this paper, we first design a Low-Energy Adaptive Clustering Hierarchy (LEACH) clustering protocol for clustering and data collection. Then, the problem of maximizing the minimum system spectral efficiency is formulated by jointly optimizing the transmit power of the sensor devices, the uplink and downlink transmission times, the active beamforming at the BS, and the IRS dynamic beamforming. To solve this non-convex optimization problem, we propose an alternating optimization (AO)-based joint solution algorithm. Simulation results show that the use of IRS dynamic beamforming can significantly improve the spectral efficiency of the system and ensure the reliability of device communication and the sustainability of energy supply under non-line-of-sight (NLOS) links.
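For reference, the classic LEACH cluster-head election uses a rotating probabilistic threshold; the sketch below implements that textbook rule, which the paper builds on for clustering and data collection (the paper's specific adaptation is not reproduced here).

```python
import random

def leach_elect_cluster_heads(node_ids, p=0.1, round_idx=0, recent_chs=None, seed=None):
    """Textbook LEACH: a node not chosen as CH in the last 1/p rounds becomes CH
    with probability T(n) = p / (1 - p * (r mod 1/p))."""
    rng = random.Random(seed)
    recent_chs = recent_chs or set()
    threshold = p / (1 - p * (round_idx % round(1 / p)))
    return [n for n in node_ids if n not in recent_chs and rng.random() < threshold]

chs = leach_elect_cluster_heads(list(range(100)), p=0.1, round_idx=3, seed=42)
print("elected cluster heads:", chs)
```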
  • MAC AND NETWORKS
    Tianhao Lin, Zhiyong Luo
    Satellite-terrestrial networks possess the ability to transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which is an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of low earth orbit (LEO) satellites, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites are limited in computing and communication resources, and the channels are time-varying and complex, which makes extracting state information a daunting task. Therefore, we explore the dynamic resource management problem of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this problem, we transform it into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state features to enhance the training process. Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
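A compact numpy sketch of the scaled dot-product self-attention that SABDRM-style state feature extraction relies on; the single-head, unmasked form and the input dimensions are simplifying assumptions rather than the paper's network design.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model) state features, e.g. per-user or per-link observations."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])               # pairwise relevance between state entries
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the sequence
    return weights @ V                                    # attended state representation

rng = np.random.default_rng(3)
d_model, d_k = 16, 8
X = rng.standard_normal((10, d_model))                    # 10 state entries (users/links)
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
features = self_attention(X, Wq, Wk, Wv)                  # shape (10, 8)
```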
  • MAC AND NETWORKS
    Ying Yang, Lidong Zhu, Changjie Cao
    In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and high costs. This paper proposes a support data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by exploiting the label information during the early stages of model training, as some training samples are frequently identified as support data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is no larger than 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
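One way to read the "support data identified during early training" idea is to count, per sample, how often it sits near the decision boundary (low prediction margin) during the first few epochs and keep the most frequently flagged fraction; the margin criterion and keep fraction below are assumptions, not the paper's border sample selector.

```python
import numpy as np

def select_core_set(margin_history, keep_fraction=0.3):
    """margin_history: (n_epochs, n_samples) array of per-sample prediction margins
    recorded during early training. Samples that are most often low-margin
    ('support-like') are kept as the core set."""
    low_margin_counts = (margin_history < np.median(margin_history)).sum(axis=0)
    n_keep = max(1, int(keep_fraction * margin_history.shape[1]))
    return np.argsort(low_margin_counts)[::-1][:n_keep]    # indices of the retained subset

rng = np.random.default_rng(4)
margins = rng.random((5, 1000))          # stand-in for margins logged over 5 early epochs
core_idx = select_core_set(margins, keep_fraction=0.3)
print(len(core_idx), "samples kept out of 1000")
```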
  • MAC AND NETWORKS
    Xiaoge Huang, Lingzhi Wang, Yong He, Qianbin Chen
    Wireless sensor networks (WSNs) are widely utilized in large-scale distributed unmanned detection scenarios due to their low cost and flexible installation. However, WSN data collection encounters challenges in scenarios lacking communication infrastructure. Unmanned aerial vehicles (UAVs) offer a novel solution for WSN data collection, leveraging their high mobility. In this paper, we present an efficient UAV-assisted data collection algorithm aimed at minimizing the overall power consumption of the WSN. Firstly, a two-layer UAV-assisted data collection model is introduced, comprising a ground layer and an aerial layer. In the ground layer, the cluster members (CMs) sense environmental data and transmit it to the cluster heads (CHs), which forward the collected data to the UAVs. The aerial layer consists of multiple UAVs that collect, store, and forward data from the CHs to the data center for analysis. Secondly, an improved clustering algorithm based on K-Means++ is proposed to optimize the number and locations of the CHs. Moreover, an Actor-Critic based algorithm is introduced to optimize the UAV deployment and the association with the CHs. Finally, simulation results verify the effectiveness of the proposed algorithms.
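The K-Means++ seeding that the improved clustering builds on picks each new center with probability proportional to its squared distance from the nearest already-chosen center; a plain numpy version over sensor-node coordinates is sketched below (the paper's modifications to it are not shown).

```python
import numpy as np

def kmeans_pp_init(points, k, seed=0):
    """Standard K-Means++ seeding over sensor-node coordinates.
    points: (n, 2) array of node positions; returns k initial CH locations."""
    rng = np.random.default_rng(seed)
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                       # farther nodes are more likely to seed a new CH
        centers.append(points[rng.choice(len(points), p=probs)])
    return np.array(centers)

nodes = np.random.default_rng(5).uniform(0, 1000, size=(200, 2))   # 200 CM positions in a 1 km square
initial_chs = kmeans_pp_init(nodes, k=8)
```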
  • EMERGING TECHNOLOGIES AND SERVICES
  • EMERGING TECHNOLOGIES AND SERVICES
    Liwei Shao, Liping Qian, Mengru Wu, Yuan Wu
    The development of the Internet of Things (IoT) requires better performance from wireless sensor networks (WSNs), such as larger coverage, longer lifetime, and lower latency. However, the large amount of data generated by monitoring and long-distance transmission places a heavy burden on sensor nodes with limited battery power. To this end, we investigate an unmanned aerial vehicle assisted mobile wireless sensor network (UAV-assisted WSN) to prolong the network lifetime in this paper. Specifically, we use UAVs to assist the WSN in collecting data. In current UAV-assisted WSNs, the clustering and routing schemes are determined sequentially. However, such a separate treatment might not maximize the lifetime of the whole WSN due to the mutual coupling of clustering and routing. To efficiently prolong the lifetime of the WSN, we propose an integrated scheme that jointly optimizes clustering and routing. Over the whole network space, it is intractable to efficiently obtain the optimal integrated clustering and routing scheme. Therefore, we propose the Monte-Las search strategy, based on Monte Carlo and Las Vegas ideas, which generates a chain matrix to guide the algorithm toward the solution faster. Unnecessary point-to-point collection leads to long collection paths, so a triangle optimization strategy is then proposed that finds a compromise path to shorten the collection path based on the geometric distribution and energy of the sensor nodes. To avoid the coverage holes caused by the death of sensor nodes, the deployment of mobile sensor nodes and a preventive mechanism design are indispensable. An emergency data transmission mechanism is further proposed to reduce the latency of collecting latency-sensitive data in the absence of UAVs. Compared with existing schemes, the proposed scheme can prolong the lifetime of the UAV-assisted WSN by at least 360% and shorten the collection path of the UAVs by 56.24%.
  • EMERGING TECHNOLOGIES AND SERVICES
    Jie Liu, Sibo Chen, Yuqin Liu, Zhiwei Mo, Yilin Lin, Hongmei Zhu, Yufeng He
    To ensure the extreme performance of new 6G services, applications will be deployed at the deep edge, resulting in a serious challenge for distributed application addressing. This paper traces the latest developments in mobile network application addressing, analyzes two novel addressing methods in carrier networks, and puts forward a 6G endogenous application addressing scheme by integrating some of their essence into the 6G network architecture, combining the new 6G capabilities of computing & network convergence, endogenous intelligence, and communication-sensing integration. This paper further illustrates how the proposed method works in 6G networks and gives a preliminary experimental verification.
  • EMERGING TECHNOLOGIES AND SERVICES
    Shangguang Wang, Qiyang Zhang, Ruolin Xing, Fei Qi, Mengwei Xu
    Recent advancements in satellite technologies and the declining cost of access to space have led to the emergence of large satellite constellations in Low Earth Orbit (LEO). However, these constellations often rely on a bent-pipe architecture, resulting in high communication costs. Existing onboard inference architectures suffer from low accuracy and inflexibility in the deployment and management of in-orbit applications. To address these challenges, we propose a cloud-native-based satellite design specifically tailored for Earth observation tasks, enabling diverse computing paradigms. In this work, we present a case study of a satellite-ground collaborative inference system deployed in the Tiansuan constellation, demonstrating a remarkable 50% accuracy improvement and a substantial 90% data reduction. Our work also sheds light on in-orbit energy consumption, showing that in-orbit computing accounts for 17% of the total onboard energy consumption. Our approach represents a significant advancement in cloud-native satellites, aiming to enhance the accuracy of in-orbit computing while simultaneously reducing communication costs.
  • EMERGING TECHNOLOGIES AND SERVICES
    Wenjing Xu, Wei Wang, Zuguang Li, Qihui Wu, Xianbin Wang
    Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, and also consider a completion-time-based primary node selection to avoid monopolization by any single edge node and to enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm, and the total delay can be reduced by up to 40% compared with the non-cooperative case.
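A minimal sketch of reputation-plus-completion-time primary node selection under one plausible reading of the abstract: nodes are ranked by reputation, recent completion time nudges the score, and very recent primaries are skipped so no single ECS monopolizes the role; the scoring weights and fields are assumptions.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class EdgeNode:
    node_id: str
    reputation: float             # accumulated from past consensus behaviour
    last_completion_time: float   # seconds to finish its most recent task

def select_primary(nodes: List[EdgeNode], recent_primaries: Set[str], alpha: float = 0.8) -> EdgeNode:
    """Score = alpha * reputation - (1 - alpha) * completion time;
    nodes that were primary very recently are skipped to avoid monopolization."""
    candidates = [n for n in nodes if n.node_id not in recent_primaries] or nodes
    return max(candidates, key=lambda n: alpha * n.reputation - (1 - alpha) * n.last_completion_time)

nodes = [EdgeNode("ecs-a", 0.92, 1.4), EdgeNode("ecs-b", 0.88, 0.6), EdgeNode("ecs-c", 0.95, 2.1)]
primary = select_primary(nodes, recent_primaries={"ecs-c"})
print(primary.node_id)
```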
  • EMERGING TECHNOLOGIES AND SERVICES
    Xiaohan Lin, Yuan Liu, Fangjiong Chen, Yang Huang, Xiaohu Ge
    As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
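A compact numpy illustration of the two compression steps discussed above: top-k sparsification of a stochastic gradient followed by uniform quantization of the surviving entries; the sparsity ratio and bit-width are arbitrary examples, not the paper's optimized allocation.

```python
import numpy as np

def compress_gradient(grad, keep_ratio=0.01, n_bits=4):
    """Top-k sparsification followed by uniform quantization of a stochastic gradient."""
    k = max(1, int(keep_ratio * grad.size))
    idx = np.argsort(np.abs(grad))[-k:]                     # keep the k largest-magnitude entries
    values = grad[idx]
    scale = np.abs(values).max() or 1.0
    levels = 2 ** (n_bits - 1) - 1
    q = np.round(values / scale * levels).astype(np.int8)   # n_bits-level uniform quantizer
    return idx, q, scale

def decompress_gradient(idx, q, scale, size, n_bits=4):
    """Reconstruct a dense gradient estimate from the compressed representation."""
    levels = 2 ** (n_bits - 1) - 1
    grad = np.zeros(size)
    grad[idx] = q.astype(float) / levels * scale
    return grad

g = np.random.default_rng(6).standard_normal(10_000)
idx, q, scale = compress_gradient(g)
g_hat = decompress_gradient(idx, q, scale, g.size)
```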