Archive

  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Zheng Huang, Zeyu Mi, Zhichao Hua
    2020, 17(9): 1-10.
    Cloud computing has been exploited to manage large-scale IoT systems. IoT cloud servers usually handle a large number of requests from various IoT devices. Due to the fluctuating and heavy workload, the servers require the cloud to provide high scalability, stable performance, low prices and the necessary functionalities. However, traditional clouds usually offer computing service under the abstraction of the virtual machine (VM), which can hardly meet these requirements. Meanwhile, different cloud vendors provide different performance stability and pricing models, which fluctuate with the dynamic workload. A single cloud cannot satisfy all the requirements of the IoT scenario well. The JointCloud computing model empowers cooperation among multiple public clouds. However, it is still difficult to dynamically schedule the workload across different clouds based on the VM abstraction. This paper introduces HCloud, a trusted JointCloud platform for IoT systems using the serverless computing model. HCloud allows an IoT server to be implemented with multiple serverless functions and schedules these functions on different clouds based on a scheduling policy. The policy is specified by the client and includes the required functionalities, execution resources, latency, price and so on. HCloud collects the status of each cloud and dispatches serverless functions to the most suitable cloud based on the scheduling policy. By leveraging blockchain technology, we further ensure that our system can neither fake the cloud status nor wrongly dispatch the target functions. We have implemented a prototype of HCloud and evaluated it by simulating multiple cloud providers. The evaluation results show that HCloud can greatly improve the performance of serverless workloads with negligible costs.
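The policy-driven dispatch step described in this abstract can be sketched in simplified form. The field names (`features`, `latency_ms`, `price`) and the filter-then-cheapest rule below are hypothetical illustrations, not HCloud's actual interface:

```python
def dispatch(clouds, policy):
    """Pick the most suitable cloud for a serverless function:
    keep clouds that offer the required functionality within the
    latency bound, then choose the cheapest one."""
    feasible = [c for c in clouds
                if policy["feature"] in c["features"]
                and c["latency_ms"] <= policy["max_latency_ms"]]
    if not feasible:
        raise RuntimeError("no cloud satisfies the schedule policy")
    return min(feasible, key=lambda c: c["price"])

# Hypothetical per-cloud status records, as a scheduler might collect them.
clouds = [
    {"name": "A", "features": {"gpu"}, "latency_ms": 50, "price": 3.0},
    {"name": "B", "features": {"gpu", "tpu"}, "latency_ms": 30, "price": 2.0},
    {"name": "C", "features": {"cpu"}, "latency_ms": 10, "price": 1.0},
]
best = dispatch(clouds, {"feature": "gpu", "max_latency_ms": 60})
```

In the paper, the status feeding this decision is recorded via blockchain so it cannot be faked; here it is just an in-memory list.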
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Bin Jia, Yongquan Liang
    2020, 17(9): 11-24.
    With the rapid development of blockchain technology, blockchain and its security theory, research and practical applications have become crucial. A new DDoS attack has recently arisen: the DDoS attack within the blockchain network. This attack is harmful to blockchain technology and many of its application scenarios. However, traditional and existing DDoS attack detection and defense means mainly rely on centralized tactics and solutions. To address this problem, this paper proposes a virtual-reality parallel anti-DDoS chain design philosophy and a distributed anti-D chain detection framework based on hybrid ensemble learning. AdaBoost and Random Forest are used as the ensemble learning strategies, and several different lightweight classifiers, such as CART and ID3, are integrated into the same ensemble learning algorithm. The detection framework in the blockchain scenario has much stronger generalization performance, universality and complementarity for accurately identifying the attack features of DDoS attacks in P2P networks. Extensive experimental results confirm that the distributed heterogeneous anti-D chain detection method performs better on six important indicators (Precision, Recall, F-Score, True Positive Rate, False Positive Rate, and the ROC curve).
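The ensemble idea — several lightweight classifiers combined so their errors complement each other — can be illustrated with a minimal majority-vote sketch. The threshold rules below are invented stand-ins for the paper's CART/ID3 base learners, and the real framework uses AdaBoost and Random Forest rather than plain voting:

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Combine lightweight base classifiers by simple majority vote.
    Each classifier maps a feature dict to 'ddos' or 'benign'."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical single-feature threshold rules standing in for trained weak learners.
by_rate    = lambda s: "ddos" if s["pkt_rate"] > 1000 else "benign"
by_entropy = lambda s: "ddos" if s["src_ip_entropy"] < 0.2 else "benign"
by_syn     = lambda s: "ddos" if s["syn_ratio"] > 0.8 else "benign"

ensemble = [by_rate, by_entropy, by_syn]
flood = {"pkt_rate": 5000, "src_ip_entropy": 0.05, "syn_ratio": 0.95}
verdict = majority_vote(ensemble, flood)  # -> "ddos"
```

A voting ensemble only helps when the base learners make different mistakes, which is why the paper integrates heterogeneous classifiers rather than copies of one model.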
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Zhen Wang
    2020, 17(9): 25-33.
    Prediction markets help to trade on the outcomes of events, which is very useful for risk hedging, especially when people are making business ventures. However, the traditional prediction market is centralized: the platform is completely controlled by just one person or team. This centralization makes it hard for the platform to obtain trust from massive numbers of users. It is therefore important for the prediction market industry to decentralize the traditional platform, taking control away from a few people and giving it back to the many people in the prediction market community. Blockchain, as a popular decentralizing technology, is very promising for making this possible. However, existing blockchain technologies like Bitcoin or Ethereum bring new problems such as high energy consumption, expensive fees, and very low system capacity, which are not suitable for the current prediction market. To solve these problems, we propose to combine masternode technology with blockchain to serve as decentralized nodes, with each masternode deployed and running on an edge server in a mobile edge network. The network of masternodes together serves as the decentralized prediction market platform. Since it is not easy to run a masternode, the number of masternodes is much smaller than the number of full nodes of the blockchain. As a result, it is easier to reach consensus among these masternodes. Theoretical analysis and experimental results show that the proposed method is useful.
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Jianbo Xu, Xiangwei Meng, Wei Liang, Hongbo Zhou, Kuan-Ching Li
    2020, 17(9): 34-49.
    Wireless Body Area Networks (WBANs) are small-scale sensor networks consisting of sensor devices mounted on the surface of the body or implanted in it; such networks are employed to harvest physiological data of the human body or to act as an assistant regulator of several specific physiological indicators. The sensor devices transmit the harvested physiological data to the local node via a public channel. Before transmitting data, the sensor device and the local node should perform mutual authentication and key agreement. This paper proposes a blockchain-based secure mutual authentication scheme for WBANs. Formal and informal security analyses are used to analyze the security of the scheme, and its computation and communication costs are compared with those of relevant schemes. Experimental results reveal that the proposed scheme exhibits more effective control over energy consumption and is promising.
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Mengnan Bi, Yingjie Wang, Zhipeng Cai, Xiangrong Tong
    2020, 17(9): 50-65.
    With the development of the Internet of Things (IoT), the delay caused by network transmission has led to low data processing efficiency. At the same time, the limited computing power and available energy of IoT terminal devices are important bottlenecks that restrict the application of blockchain; edge computing can solve this problem. The emergence of edge computing can effectively reduce the delay of data transmission and improve data processing capacity. However, user data in edge computing is usually stored and processed by honest-but-curious authorized entities, which leads to the leakage of users' private information. To solve these problems, this paper proposes a location data collection method that satisfies local differential privacy to protect users' privacy. A Voronoi diagram constructed by the Delaunay method is used to divide the road network space and determine the Voronoi grid region where the edge nodes are located. A random disturbance mechanism that satisfies local differential privacy is utilized to perturb the original location data in each Voronoi grid. In addition, the effectiveness of the proposed privacy-preserving mechanism is verified through comparative experiments. Compared with existing privacy-preserving methods, the proposed mechanism not only better meets users' privacy needs but also provides higher data availability.
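The random disturbance step can be illustrated with k-ary randomized response, a standard mechanism satisfying ε-local differential privacy; the paper's actual perturbation over Voronoi grids may differ:

```python
import math
import random

def k_rr(true_region, regions, epsilon, rng=random):
    """k-ary randomized response: report the true Voronoi region with
    probability e^eps / (e^eps + k - 1), otherwise report a uniformly
    chosen other region.  This satisfies epsilon-local differential
    privacy: changing the input changes any output's probability by
    a factor of at most e^eps."""
    k = len(regions)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return true_region
    return rng.choice([r for r in regions if r != true_region])

# Hypothetical grid-region labels for three edge nodes.
regions = ["cell-1", "cell-2", "cell-3"]
noisy = k_rr("cell-1", regions, epsilon=1.0, rng=random.Random(0))
```

Smaller ε means stronger privacy but noisier reports, which is the privacy/data-availability trade-off the comparison experiments measure.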
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Hongman Wang, Yingxue Li, Xiaoqi Zhao, Fangchun Yang
    2020, 17(9): 66-76.
    Reasonable allocation of storage and computing resources is the basis of building a big data system. With the development of the IoT (Internet of Things), more data will be generated. A three-layer architecture comprises the smart device layer, the edge cloud layer and the blockchain-based distributed cloud layer. Blockchain is used in the IoT to build a distributed, decentralized P2P architecture that addresses security issues, while edge computing deals with the increasing volume of data. Edge caching is one of the important application scenarios. In order to allocate edge cache resources reasonably, improve the quality of service and reduce the waste of bandwidth resources, this paper proposes a content selection algorithm for edge cache nodes. The algorithm adopts a Markov chain model, improves the utilization of cache space and reduces content transmission delay. A hierarchical caching strategy is adopted, and the secondary cache stores slices of content to expand the coverage of cached content and to reduce user waiting time. Regional node cooperation is adopted to expand the cache space and to support regional preferences in cached content. Compared with classical replacement algorithms, simulation results show that the proposed algorithm achieves a higher cache hit ratio and higher space utilization.
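The Markov chain idea — rank contents for caching by their estimated next-request probability given the current request — can be sketched as a first-order transition counter. This is a simplified illustration; the paper's model and its hierarchical/cooperative extensions are richer:

```python
from collections import defaultdict

class MarkovCacheSelector:
    """Estimate request transitions from an observed request stream and
    rank contents for edge caching by predicted next-request probability
    (a first-order Markov chain over content IDs)."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, content_id):
        """Record one request; count the transition from the previous one."""
        if self.prev is not None:
            self.counts[self.prev][content_id] += 1
        self.prev = content_id

    def top_candidates(self, current, cache_size):
        """Contents most likely to be requested next, best first."""
        nxt = self.counts[current]
        total = sum(nxt.values()) or 1
        ranked = sorted(nxt, key=lambda c: nxt[c] / total, reverse=True)
        return ranked[:cache_size]

sel = MarkovCacheSelector()
for cid in ["a", "b", "a", "b", "a", "c"]:
    sel.observe(cid)
```

Prefetching `top_candidates` into the cache before the next request arrives is what raises the hit ratio relative to purely reactive replacement.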
  • FEATURE TOPIC: BLOCKCHAIN FOR INTERNET OF THINGS
    Zhen Huang, Feng Liu, Mingxing Tang, Jinyan Qiu, Yuxing Peng
    2020, 17(9): 77-89.
    To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on the variance reduction method, a lightweight, low-cost and parallelized scheme for the model training process. To validate the claims, we have conducted several experiments on multiple classical datasets. Results show that the proposed computing framework steadily accelerates the training process of the solver in both local and distributed modes.
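The variance-reduction idea (as in SVRG-type methods) corrects each stochastic gradient with the same sample's gradient at a periodic snapshot plus the snapshot's full gradient: the correction has zero mean, so the estimator stays unbiased while its variance shrinks near the snapshot. A minimal one-dimensional sketch under simple quadratic losses, not the paper's L-BFGS framework:

```python
import random

def svrg_step(w, snapshot, mu, grad_i, i, lr):
    """One variance-reduced update: stochastic gradient at w, corrected by
    the same sample's gradient at the snapshot plus the full snapshot
    gradient mu."""
    g = grad_i(w, i) - grad_i(snapshot, i) + mu
    return w - lr * g

def svrg(grads, w0, lr=0.1, outer=20, inner=10, rng=random.Random(0)):
    """Minimize (1/n) * sum_i f_i(w) where grads[i] is the gradient of f_i.
    Here each f_i is a 1-D quadratic, so the minimizer is the mean target."""
    n = len(grads)
    grad_i = lambda w, i: grads[i](w)
    w = w0
    for _ in range(outer):
        snapshot = w
        mu = sum(g(snapshot) for g in grads) / n  # full gradient at snapshot
        for _ in range(inner):
            i = rng.randrange(n)
            w = svrg_step(w, snapshot, mu, grad_i, i, lr)
    return w

# f_1(w) = 0.5*(w-1)^2, f_2(w) = 0.5*(w-3)^2; the average is minimized at w = 2.
grads = [lambda w: w - 1.0, lambda w: w - 3.0]
w_star = svrg(grads, 0.0)
```

For these quadratics the corrected gradient is exactly the full gradient, which makes the acceleration easy to see; for general losses the benefit is a reduced, not zero, variance.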
  • FEATURE TOPIC: 6G MOBILE NETWORKS: EMERGING TECHNOLOGIES AND APPLICATIONS
    2020, 17(9): 90-91.
  • FEATURE TOPIC: 6G MOBILE NETWORKS: EMERGING TECHNOLOGIES AND APPLICATIONS
    Guangyi Liu, Yuhong Huang, Na Li, Jing Dong, Jing Jin, Qixing Wang, Nan Li
    2020, 17(9): 92-104.
    With the 5th Generation (5G) mobile network being rolled out gradually in 2019, research on the next generation mobile network has started, targeting 2030. To pave the way for the development of the 6th Generation (6G) mobile network, the vision and requirements should be identified first to enable the identification of potential key technologies and a comprehensive system design. This article first identifies the vision of societal development towards 2030 and the new application scenarios for mobile communication, and then derives the key performance requirements from the service and application perspective. Taking into account the convergence of information technology, communication technology and big data technology, a logical mobile network architecture is proposed to address the lessons learned from 5G network design. To balance the cost, capability and flexibility of the network, features of the 6G mobile network are proposed based on the latest progress and applications in the relevant fields, namely on-demand fulfillment, lite network, soft network, native AI and native security. Ultimately, this article is intended to serve as a basis for stimulating more promising research on 6G.
  • FEATURE TOPIC: 6G MOBILE NETWORKS: EMERGING TECHNOLOGIES AND APPLICATIONS
    Yi Liu, Xingliang Yuan, Zehui Xiong, Jiawen Kang, Xiaofei Wang, Dusit Niyato
    2020, 17(9): 105-118.
    As 5G communication networks are widely deployed worldwide, both industry and academia have started to move beyond 5G and explore 6G communications. It is generally believed that 6G will be built on ubiquitous Artificial Intelligence (AI) to achieve data-driven Machine Learning (ML) solutions in heterogeneous and massive-scale networks. However, traditional ML techniques require centralized data collection and processing by a central server, which is becoming a bottleneck for large-scale implementation in daily life due to significantly increasing privacy concerns. Federated learning, an emerging distributed AI approach with a privacy-preserving nature, is particularly attractive for various wireless applications and is treated as one of the vital solutions for achieving ubiquitous AI in 6G. In this article, we first introduce the integration of 6G and federated learning and provide potential federated learning applications for 6G. We then describe key technical challenges, the corresponding federated learning methods, and open problems for future research on federated learning in the context of 6G communications.
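The privacy-preserving core of federated learning is that devices share model parameters, never raw data, and a server merely averages them. The FedAvg aggregation step can be sketched as a dataset-size-weighted average (a generic illustration, not tied to any particular 6G deployment):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter across clients,
    weighting every client's model by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

# Two hypothetical clients: one trained on 1 sample, one on 3.
global_model = fed_avg([[1.0, 1.0], [3.0, 3.0]], [1, 3])  # -> [2.5, 2.5]
```

In a wireless setting the interesting problems start here: which devices to sample each round, and how to compress or schedule the uploads over the radio link.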
  • FEATURE TOPIC: 6G MOBILE NETWORKS: EMERGING TECHNOLOGIES AND APPLICATIONS
    Rui Chen, Hong Zhou, Wen-Xuan Long, Marco Moretti
    2020, 17(9): 119-127.
    Future wireless communication systems require not only high spectral efficiency (SE) but also high energy efficiency (EE). Radio orbital angular momentum (OAM) provides a new perspective of mode multiplexing to improve SE. However, there are few studies on the EE performance of OAM mode multiplexing. In this paper, we investigate the SE and EE of a misaligned uniform concentric circle array (UCCA)-based multi-carrier multi-mode OAM and multiple-input multiple-output (MCMM-OAM-MIMO) system in the line-of-sight (LoS) channel, in which two transceiver architectures implemented by radio-frequency (RF) analog synthesis and baseband digital synthesis are considered. Distance and angle-of-arrival (AoA) estimation are utilized for channel estimation and signal detection, whose training overhead is much less than that of traditional MIMO systems. Simulation results validate that the UCCA-based MCMM-OAM-MIMO system is superior to the conventional MIMO-OFDM system in EE and SE performance.
  • FEATURE TOPIC: 6G MOBILE NETWORKS: EMERGING TECHNOLOGIES AND APPLICATIONS
    Jiaxin Zhang, Xing Zhang, Peng Wang, Liangjingrong Liu, Yuanjun Wang
    2020, 17(9): 128-146.
    The efficient integration of satellite and terrestrial networks has become an important component of 6G wireless architectures for providing highly reliable and secure connectivity over a wide geographical area. As satellite and cellular networks have been developed separately in recent years, the integrated network should synergize the communication, storage and computation capabilities of both sides towards an intelligent system, rather than merely considering coexistence. This has motivated us to develop double-edge intelligent integrated satellite and terrestrial networks (DILIGENT). Leveraging the rapid development of multi-access edge computing (MEC) technology and artificial intelligence (AI), the framework is endowed with systematic learning and adaptive network management of satellite and cellular networks. In this article, we provide a brief review of state-of-the-art contributions from the perspective of academic research and standardization. We then present the overall design of the proposed DILIGENT architecture, whose advantages are discussed and summarized. Strategies for task offloading, content caching and distribution are presented. Numerical results show that the proposed network architecture outperforms existing integrated networks.
  • COVER PAPER
    Zhenyu Xiao, Lipeng Zhu, Xiang-Gen Xia
    2020, 17(9): 147-166.
    The unmanned aerial vehicle (UAV) has been widely used in many fields and is attracting global attention. As the resolution of the sensors equipped on UAVs becomes higher and the tasks become more complicated, much higher data rates and longer communication ranges will be required in the foreseeable future. As the millimeter-wave (mmWave) band can provide more abundant frequency resources than the microwave band, a much higher achievable rate can be guaranteed to support UAV services such as video surveillance, hotspot coverage, and emergency communications. Flexible mmWave beamforming can be used to overcome the high path loss caused by the long propagation distance. In this paper, we study three typical application scenarios for mmWave-UAV communications, namely the communication terminal, access point, and backbone link. We present several key enabling techniques for UAV communications, including beam tracking, multi-beam forming, joint Tx/Rx beam alignment, and full-duplex relay techniques. We show the coupling relation between mmWave beamforming and UAV positioning for mmWave-UAV communications. Lastly, we summarize the challenges and research directions of mmWave-UAV communications in detail.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Hua Qing, Hua Yu, Yun Liu, Wei Duan, Miaowen Wen, Fei Ji
    2020, 17(9): 167-176.
    This paper considers a multi-relay distributed cooperative system in which not only the source communicates with the destination, but the relays also have communication requests to the destination. To meet the requirement of simultaneous communication for the source and relays, we propose a distributed cooperative system based on orthogonal frequency division multiplexing with index modulation (OFDM-IM). In this system, a relay can communicate with the destination by superimposing its own signal over the inactive subcarriers of the decoded OFDM-IM signal. Upper bounds on the bit error rates of the source and the active relay are derived in closed form, and their tightness is verified through simulation results.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Zhitong Xing, Kaiming Liu, Kaiyuan Huang, Bihua Tang, Yuanan Liu
    2020, 17(9): 177-192.
    In this paper, a novel efficient continuous piecewise nonlinear companding scheme is proposed for reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) systems. In the proposed companding transform, signal samples with large amplitudes are clipped for peak power reduction, signal samples with medium amplitudes are nonlinearly transformed with power compensation, and signal samples with small amplitudes remain unchanged. The whole companding function is continuous and smooth over the positive numbers, which helps preserve the bit error rate (BER) and power spectral density (PSD) performance. The scheme achieves a significant reduction in PAPR while causing little degradation in BER and PSD performance. Simulation results indicate the superiority of the proposed scheme over existing companding schemes.
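A continuous piecewise companding transform of the kind described — pass small amplitudes, compress medium ones, clip large ones — can be sketched as below. The breakpoints and the square-root middle segment are hypothetical choices made only to show how continuity at both knees is arranged; the paper's actual function differs:

```python
def compand(x, a1=0.5, a2=1.5, clip=1.0):
    """Continuous piecewise companding of one signal sample x
    (hypothetical shape, not the paper's exact function):
      |x| <= a1        : unchanged
      a1 < |x| <= a2   : compressed by a square-root curve
      |x| > a2         : clipped to `clip`
    Continuity holds at both breakpoints: f(a1) = a1 and f(a2) = clip."""
    sign = 1.0 if x >= 0 else -1.0
    r = abs(x)
    if r <= a1:
        y = r
    elif r <= a2:
        # rises from a1 at r = a1 to clip at r = a2, steep first then flat
        y = a1 + (clip - a1) * ((r - a1) / (a2 - a1)) ** 0.5
    else:
        y = clip
    return sign * y
```

Matching the segment values at each breakpoint is what keeps the function continuous, which in turn limits the spectral regrowth (PSD damage) that a hard clip alone would cause.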
  • COMMUNICATIONS THEORIES & SYSTEMS
    Yali Zheng, Yitian Zhang, Yang Wang, Jie Hu, Kun Yang
    2020, 17(9): 193-209.
    To satisfy the ever-increasing energy appetite of massive battery-powered and battery-less communication devices, radio frequency (RF) signals have been relied upon for transferring wireless power to them. The joint coordination of wireless power transfer (WPT) and wireless information transfer (WIT) yields simultaneous wireless information and power transfer (SWIPT) as well as the data and energy integrated communication network (DEIN). However, although DEIN is a promising technique, little effort has been invested in its hardware implementation. In order to make DEIN a reality, this paper focuses on the hardware implementation of a DEIN. It first provides a brief tutorial on SWIPT, summarising the latest hardware designs of WPT transceivers and existing commercial solutions. Then, a prototype DEIN with a full protocol stack is elaborated, followed by its performance evaluation.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Yonggang He, Xiongzhu Bu, Ming Jiang, Maojun Fan
    2020, 17(9): 210-219.
    In view of the limited bandwidth of underwater video image transmission, a low-bit-rate underwater video compression coding method is proposed. Based on a preprocessing stage of wavelet transform and coefficient down-sampling, the visual redundancy of underwater images is removed and the number of computed coefficients and coding bits is reduced. At the same time, combining multi-level wavelet decomposition, inter-frame motion compensation, entropy coding and other methods, the number of calculations is reduced and the coding efficiency is improved according to the characteristics of the different frame types. Experimental results show that the reconstructed image quality meets visual requirements, and the average compression ratio of underwater video meets the requirements of the underwater acoustic channel transmission rate.
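The wavelet preprocessing rests on the fact that one decomposition level splits a signal into low-frequency averages and high-frequency details, and the detail coefficients can then be down-sampled or coarsely coded with little visible loss. A one-level Haar sketch on a 1-D signal (real codecs apply multi-level 2-D transforms to each frame):

```python
def haar_1level(signal):
    """One-level Haar wavelet decomposition of an even-length signal:
    returns (averages, details).  Dropping or coarsely quantizing the
    details is the basis of the coefficient down-sampling step."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_1level: interleave a+d and a-d."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out
```

Applying `haar_1level` again to the averages gives the next decomposition level, which is how the multi-level structure in the abstract is built.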
  • EMERGING TECHNOLOGIES & APPLICATIONS
    Xiaolan Liu, Jiadong Yu, Zhiyong Feng, Yue Gao
    2020, 17(9): 220-236.
    To support popular Internet of Things (IoT) applications such as virtual reality and mobile games, edge computing provides a front-end distributed computing archetype of centralized cloud computing with low latency and distributed data processing. However, it is challenging for multiple users to offload their computation tasks because they compete for spectrum and computation resources as well as Radio Access Technology (RAT) resources. In this paper, we investigate the computation offloading mechanism of multiple selfish users with resource allocation in IoT edge computing networks by formulating it as a stochastic game. Each user is a learning agent observing its local network environment to learn optimal decisions on either local computing or edge computing, with the goal of minimizing the long-term system cost by choosing its transmit power level, RAT and sub-channel without knowing any information about the other users. Since users' decisions are coupled at the gateway, we define the reward function of each user by considering the aggregated effect of the other users. A multi-agent reinforcement learning framework is therefore developed to solve the game with the proposed Independent Learners based Multi-Agent Q-learning (IL-based MA-Q) algorithm. Simulations demonstrate that the proposed IL-based MA-Q algorithm is feasible for solving the formulated problem and is more energy-efficient without extra cost for channel estimation at the centralized gateway. Finally, compared with three benchmark algorithms, it achieves better system cost performance and distributed computation offloading.
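An independent learner in this setting maintains its own Q-table over local observations and treats the other users simply as part of the environment. A minimal sketch of such an agent; the state and action encodings below are placeholders, not the paper's:

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """One selfish user in an IL-style multi-agent Q-learning scheme:
    it learns Q(state, action) from its local observations only, without
    seeing the other users' choices (their effect shows up in the reward)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1, rng=None):
        self.q = defaultdict(float)
        self.actions = actions  # e.g. (power level, RAT, sub-channel) tuples
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = rng or random.Random(0)

    def act(self, state):
        """Epsilon-greedy action selection over the local Q-table."""
        if self.rng.random() < self.eps:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        """Standard Q-learning update on the local table."""
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[(s, a)])
```

Each user runs its own instance of this loop, which is what makes the offloading decision fully distributed: the gateway never needs to estimate anyone's channel on their behalf.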
  • EMERGING TECHNOLOGIES & APPLICATIONS
    Guo Yu, Zhenxing Dong, Yan Zhu
    2020, 17(9): 237-258.
    Previous research on deep-space networks based on delay-tolerant networking (DTN) has mainly focused on the performance of DTN protocols in simple networks; hence, research on complex networks is lacking. In this paper, we focus on network evaluation and protocol deployment for complex DTN-based deep-space networks and apply the results to a novel complex deep-space network based on the Universal Interplanetary Communication Network (UNICON-CDSN), proposed by the National Space Science Center (NSSC), for simulation and verification. A network evaluation method based on network capacity and memory analysis is proposed. Based on a performance comparison between the Licklider Transmission Protocol (LTP) and the Transmission Control Protocol (TCP) with the Bundle Protocol (BP) in various communication scenarios, a transport protocol configuration proposal is developed and used to construct an LTP deployment scheme for UNICON-CDSN. For the LTP deployment scheme, a theoretical model of file delivery time over complex deep-space networks is built. A network evaluation with the proposed method proves that UNICON-CDSN satisfies the requirements of the 2020 Mars exploration mission Curiosity. Moreover, simulation results from a universal space communication network testbed (USCNT) that we designed show that the LTP deployment scheme is suitable for UNICON-CDSN.