Archive

  • FEATURE TOPIC: SELECTED PAPERS FROM CICC
    Tao Yu, Longfei Yu, Diyue Chen, Hongyan Cui, Jilong Wang
    2020, 17(6): 1-12.
    The rise of Software-Defined Networking (SDN) has injected new impetus into the development of traditional networks. However, due to the complexity of the network itself and its programmability, SDN networks are at risk of routing loops. This paper proposes a loop detection mechanism based on the observation that the Time To Live (TTL) values of packets trapped in the same loop are approximately periodic. We use sFlow to count the number of packets corresponding to each TTL value at a switch in the loop over a period of time, and apply a discrete Fourier transform to the resulting finite-length sequence to examine its frequency-domain behavior and determine whether periodic features are present. In this way, the existence of a routing loop is determined, achieving passive detection of routing loops. Compared to existing algorithms, the mechanism has advantages in real-time performance, scalability, and false positive rate. The experimental results show that the proposed TTL-statistics-based routing loop detection algorithm maintains high detection accuracy even under lower flow sampling rates and shorter detection periods.
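    A minimal sketch of the TTL-histogram idea described above, assuming hypothetical per-TTL packet counts collected from sFlow samples; the dominance threshold and the toy traffic mix are illustrative, not values from the paper.
```python
import numpy as np

def has_periodic_ttl_pattern(ttl_counts, dominance_threshold=10.0):
    """Heuristic loop check on a per-TTL packet histogram.

    ttl_counts[t] is the number of sFlow-sampled packets observed at a switch
    with TTL == t during one detection period. Packets circling a loop of
    length L reappear with TTL values spaced L apart, so the histogram is
    approximately periodic and its DFT shows a non-DC peak that stands well
    above the typical bin.
    """
    x = np.asarray(ttl_counts, dtype=float)
    x = x - x.mean()                              # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    noise_floor = np.median(spectrum[1:]) + 1e-9
    k = int(np.argmax(spectrum[1:])) + 1          # strongest non-DC bin
    if spectrum[k] / noise_floor < dominance_threshold:
        return False, None
    return True, round(len(x) / k)                # rough loop-length estimate

# Toy example: a 5-hop loop makes every 5th TTL bin heavy.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, 256).astype(float)      # background traffic
counts[::5] += 200                                # looped packets
print(has_periodic_ttl_pattern(counts))
```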
  • FEATURE TOPIC: SELECTED PAPERS FROM CICC
    Yuyu Zhao, Guang Cheng, Weici Zhang, Xin Chen, Jin Li
    2020, 17(6): 13-25.
    Delay and throughput are the two network indicators that users care about most. Traditional congestion control methods aggressively occupy the buffer until packet loss is detected, causing high delay and high delay variation. Using AQM and ECN can greatly reduce the packet drop rate and delay, but they may also lead to low utilization. Proper management of router queue size therefore matters greatly to congestion control. Keeping the traffic volume varying around the bottleneck bandwidth creates some degree of persistent queue in the router, which introduces additional delay into the network, but cooperation between sender and router can keep it under control. A proper persistent queue not only keeps routers fully utilized at all times, but also lowers the variation of throughput and delay, achieving a balance between delay and utilization. In this paper, we present BCTCP (Buffer Controllable TCP), a congestion control protocol based on explicit feedback from routers. It requires the sender, receiver, and routers to cooperate with each other: senders adjust their sending rates according to multi-bit load factor information from routers. It keeps the queue length of the bottleneck under control, leading to very good delay and utilization results and making it more applicable to complex network environments.
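    The sender-side reaction to router load-factor feedback can be illustrated with a small sketch; the feedback encoding, thresholds, and gain constants below are illustrative assumptions, not the BCTCP specification.
```python
def update_cwnd(cwnd, load_factor, mss=1.0, beta=0.85):
    """Adjust the congestion window from multi-bit load-factor feedback.

    load_factor: bottleneck utilization reported by routers, where 1.0 means
    the arrival rate exactly matches the bottleneck bandwidth. Values below
    1.0 leave headroom, so the sender probes more aggressively; values above
    1.0 mean a standing queue is building, so the sender backs off in
    proportion to the overload instead of waiting for packet loss.
    """
    if load_factor < 0.95:          # under-utilized: grow toward the bottleneck
        return cwnd + mss * (1.0 - load_factor) * cwnd
    if load_factor <= 1.05:         # near the target: keep a small persistent queue
        return cwnd + mss
    return max(2 * mss, cwnd * beta / load_factor)   # overloaded: back off

cwnd = 10.0
for lf in (0.5, 0.8, 1.0, 1.3):
    cwnd = update_cwnd(cwnd, lf)
    print(f"load factor {lf:.1f} -> cwnd {cwnd:.1f}")
```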
  • FEATURE TOPIC: SELECTED PAPERS FROM CICC
    Weijin Jiang, Xiaoliang Liu, Xingbao Liu, Yang Wang, Sijian Lv, Fang Ye
    2020, 17(6): 26-36.
    In recent Smart Home (SH) research, intelligent service recommendation based on behavior recognition has been widely favored by researchers. However, most current research uses semantic recognition to construct the user's basic behavior model. This approach is usually restricted by environmental factors, and the way these models are built makes it impossible for them to dynamically match the services that might be provided in the user's environment. To solve this problem, this paper proposes Semantic Behavior Assistance (SBA). By adding a semantic model to the intelligent gateway and building an SBA model, a logical interconnection network for the smart home is established. At the same time, a behavior assistance method based on the SBA model is proposed, in which the entities, sensors, and devices related to the user's environment, together with user-related knowledge models, exist in the logical interconnection network of the SH system through the semantic model. Data simulation experiments are carried out on the method. The experimental results show that the SBA model outperforms the knowledge-based pre-defined model.
  • FEATURE TOPIC: SELECTED PAPERS FROM CICC
    Jiahua Zhu, Xianliang Jiang, Yan Yu, Guang Jin, Haiming Chen, Xiaohui Li, Long Qu
    2020, 17(6): 37-50.
    With the emerging diverse applications in data centers, the demands on quality of service in data centers also become diverse, such as high throughput for elephant flows and low latency for deadline-sensitive flows. However, traditional TCPs are ill-suited to such situations and often result in inefficient data transfers (e.g., missed flow deadlines, throughput collapse). This further degrades the user-perceived quality of service (QoS) in data centers. To reduce the flow completion time of mice and deadline-sensitive flows while promoting the throughput of elephant flows, an efficient and deadline-aware priority-driven congestion control (PCC) protocol, which grants mice and deadline-sensitive flows the highest priority, is proposed in this paper. Specifically, PCC computes the priority of different flows according to the size of transmitted data, the remaining data volume, and the flows' deadlines. PCC then adjusts the congestion window according to the flow priority and the degree of network congestion. Furthermore, switches in data centers control the input/output of packets based on the flow priority and the queue length. Different from existing TCPs, to speed up the data transfers of mice and deadline-sensitive flows, PCC provides an effective method to compute and encode the flow priority explicitly. According to the flow priority, switches can manage packets efficiently and ensure the data transfers of high-priority flows through weighted priority scheduling with minor modifications. The experimental results show that PCC can improve the data transfer performance of mice and deadline-sensitive flows while guaranteeing the throughput of elephant flows.
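    A sketch of how a flow priority could be computed from the quantities the abstract names (bytes already sent, remaining volume, and deadline); the scoring rule and its normalization are illustrative assumptions rather than the PCC formula.
```python
import time

def flow_priority(sent_bytes, remaining_bytes, deadline=None, now=None):
    """Smaller score = higher priority.

    Mice flows (little data sent, little remaining) and deadline-sensitive
    flows (tight time budget per remaining byte) get small scores, so
    switches can serve them first; large elephant flows get large scores.
    In a real scheduler the two score types would be normalized onto one
    scale before being encoded into packets.
    """
    now = time.time() if now is None else now
    if deadline is not None:
        slack = max(deadline - now, 1e-3)          # seconds left
        return remaining_bytes / slack             # bytes that must move per second
    return sent_bytes + remaining_bytes            # non-deadline flows: shortest-remaining-first

# Example: a 50 KB mice flow vs. a 100 MB elephant vs. a deadline flow.
print(flow_priority(10_000, 40_000))                       # small score
print(flow_priority(50_000_000, 50_000_000))               # large score
print(flow_priority(1_000_000, 4_000_000, deadline=time.time() + 0.5))
```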
  • COVER PAPER
    Jinkang Zhu, Ming Zhao, Sihai Zhang, Wuyang Zhou
    2020, 17(6): 51-67.
    The 5th generation (5G) mobile networks have been put into service across a number of markets, aiming to provide subscribers with high bit rates, low latency, high capacity, and many new services and vertical applications. The research and development of 6G have therefore been put on the agenda. Regarding the demands and characteristics of future 6G, artificial intelligence (A), big data (B) and cloud computing (C) will play indispensable roles in achieving the highest efficiency and the largest benefits. Interestingly, the initials of these three aspects remind us of the significance of vitamins A, B and C to the human body. In this article we expound on the three elements of ABC and the relationships between them. We analyze the basic characteristics of wireless big data (WBD) and the corresponding technical actions in A and C, namely the high-dimensional features and spatial separation, the predictive ability, and the characteristics of knowledge. Based on the abilities of WBD, a new learning approach for wireless AI called knowledge + data-driven deep learning (KD-DL), together with a layered computing architecture for mobile networks integrating cloud/edge/terminal computing, is proposed, and their achievable efficiency is discussed. This progress will be conducive to the development of future 6G.
  • REVIEW PAPER
    Shibing Zhang, Lili Guo, Hongjun Li, Zhihua Bao, Xiaoge Zhang, Yonghong Chen
    2020, 17(6): 68-79.
    With the development of society and the progress of technology, more and more ocean activities are being carried out, resulting in a boom in deep-sea diving. The use of a helium-oxygen mixture as the breathing gas solves the physiological problems of divers in saturation diving, but it brings about the helium speech communication problem: a drop in speech intelligibility. There is no doubt that effective speech communication must be provided to support the life and work of divers in the deep sea. This paper describes the mechanism by which helium speech is formed, discusses the effects of pressure and the helium environment on the speech spectrum, compares the pros and cons of time-domain and frequency-domain unscrambling techniques, and presents the challenges in helium speech communications. Finally, it briefly introduces deep learning and points out that deep learning / machine learning may provide an effective unscrambling technique.
  • INVITED PAPER
    Weiyu Chen, Haiyang Ding, Shilian Wang, Daniel Benevides da Costa, Fengkui Gong, Pedro Henrique Juliano Nardelli
    2020, 17(6): 80-100.
    In this paper, we investigate the performance of commensal ambient backscatter communications (AmBC) that ride on a non-orthogonal multiple access (NOMA) downlink transmission, in which a backscatter device (BD) splits part of the signals it receives from the base station (BS) for energy harvesting and backscatters the remaining signals to transmit information to a cellular user. Specifically, under the power consumption constraint at the BD and the peak transmit power constraint at the BS, we derive the optimal reflection coefficient at the BD, the optimal total transmit power at the BS, and the optimal power allocation at the BS for each transmission block to maximize the ergodic capacity of the ambient backscatter transmission while preserving the outage performance of the NOMA downlink transmission. Furthermore, we consider a scenario where the BS is restricted by a maximum allowed average transmit power and the reflection coefficient at the BD is fixed due to the BD's low-complexity nature. An algorithm is developed to determine the optimal total transmit power and power allocation at the BS for this scenario. In addition, a low-complexity algorithm is proposed for this scenario to reduce the computational complexity and the signaling overhead. Finally, the performance of the derived solutions is studied and compared via numerical simulations.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Sheng Liu, Jing Zhao
    2020, 17(6): 101-108.
    In this paper, a two-dimensional (2D) direction-of-arrival (DOA) estimation algorithm with increased degrees of freedom for two parallel linear arrays is presented. Different from the conventional two-parallel linear array, the proposed array consists of two uniform linear arrays with unequal inter-element spacing. The propagator method (PM) is used to obtain a special matrix that can be utilized to increase the number of virtual elements of one of the uniform linear arrays. Then, the PM algorithm is used again to obtain automatically paired elevation and azimuth angles. The simulation results and complexity analysis show that the proposed method can increase the number of distinguishable signals and improve the estimation precision without increasing the computational complexity.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Leila Nouri, Salah Yahya, Abbas Rezaei
    2020, 17(6): 109-120.
    This work presents a compact lowpass-bandpass microstrip diplexer with a novel configuration. It consists of a lowpass filter integrated with a bandpass filter via a simple compact junction. The proposed bandpass filter consists of four rectangular patch cells and some thin strips, while stepped-impedance structures with a radial cell are applied to achieve the lowpass frequency response. The lowpass channel of the introduced diplexer has a 2.64 GHz cut-off frequency, whereas the bandpass channel is centered at 3.73 GHz for WiMAX applications and covers 3.31 GHz to 4 GHz. In addition to their novel structures, both filters offer high return loss, low insertion loss, and high selectivity. The presented microstrip diplexer has a compact size of 29 mm × 13.8 mm × 0.762 mm, calculated at 2.64 GHz. The obtained insertion losses are 0.20 dB (first channel) and 0.25 dB (second channel), which make the proposed diplexer suitable for energy harvesting. The stopband properties of both the bandpass and lowpass filters are improved by creating several transmission zeros. The comparison results show that the proposed diplexer achieves the lowest insertion losses, the minimum gap between channels, good return losses, and good isolation.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Xue Jiang, Baoyu Zheng, Weiping Zhu, Lei Wang, Xiaoyun Hou
    2020, 17(6): 121-130.
    The large system analysis (LSA) has recently been shown to be a very useful tool for computing the average achievable rate. In this paper, we use LSA to derive the users’ average achievable rate of multi-antenna two-way relay networks with interference alignment (IA), and we then derive the rate expressions under both equal power allocation and optimal power allocation. It is shown that the obtained closed-form rate expressions are functions of the average signal-to-noise ratio (SNR) for each data stream. Extensive simulation studies show that the average achievable rate expressions derived through LSA provide accurate estimates of the average achievable rate for two-way relay networks with interference alignment.
  • COMMUNICATIONS THEORIES & SYSTEMS
    Yongxin Liu, Ming Zhao, Limin Xiao, Shidong Zhou
    2020, 17(6): 131-144.
    We propose pilot-domain non-orthogonal multiple access (NOMA) for uplink grant-free random access with massive numbers of devices in massive multiple-input multiple-output (MIMO) maritime communication systems. These scenarios are characterized by numerous devices with sporadic access behavior, so only a subset of them are active at any time. Due to the massive number of potential devices in the network, it is infeasible to assign a unique orthogonal pilot to each device in advance, and pilot decontamination becomes a crucial problem. In this paper, the devices are randomly assigned non-orthogonal pilots constructed as linear combinations of orthogonal pilots. We show that a bipartite graph can conveniently describe the interference cancellation (IC) process of pilot decontamination. High spectrum efficiency (SE) and low outage probability can be obtained by selecting the numbers of orthogonal pilots according to a given probability distribution. Numerical evaluations show that the proposed pilot-domain NOMA decreases the outage probability from 20% to 2e-12 at an SE of 4 bits/s/Hz for a single device, compared to the conventional slotted ALOHA method with 1024 antennas at the BS, and increases the spectrum efficiency from 1.2 bits/s/Hz to 4 bits/s/Hz at an outage probability of 2e-12 in contrast with Welch bound equality (WBE) non-orthogonal pilots.
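    The bipartite-graph view of pilot interference cancellation can be illustrated with a toy peeling procedure, assuming each active device transmits a small random combination of orthogonal pilots and that a pilot carrying exactly one unresolved device can be decoded and cancelled; this is a simplified stand-in for the scheme in the paper.
```python
import random

def peel(device_pilots):
    """Iteratively resolve devices whose pilot set contains a singleton pilot.

    device_pilots: dict device_id -> set of orthogonal-pilot indices (the
    linear combination it transmits). A pilot observed by exactly one
    unresolved device is decodable; that device's contribution is then
    removed from all of its pilots (interference cancellation), possibly
    creating new singletons.
    """
    remaining = {d: set(p) for d, p in device_pilots.items()}
    pilot_users = {}
    for d, pilots in remaining.items():
        for p in pilots:
            pilot_users.setdefault(p, set()).add(d)
    resolved, progress = [], True
    while progress:
        progress = False
        for users in pilot_users.values():
            if len(users) == 1:
                d = next(iter(users))
                resolved.append(d)
                for q in remaining[d]:            # cancel d from every pilot it used
                    pilot_users[q].discard(d)
                remaining[d] = set()
                progress = True
    return resolved

# 8 active devices, 12 orthogonal pilots, each device combines 2 pilots.
random.seed(1)
devices = {d: set(random.sample(range(12), 2)) for d in range(8)}
print(sorted(peel(devices)))
```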
  • NETWORKS & SECURITY
    Xinfang Song, Wei Jiang, Zheng Li, Lijing Liu, Shenggen Wu
    2020, 17(6): 145-152.
    In recent years, with the rapid development of the Internet of Things (IoT), RFID tags, industrial controllers, sensor nodes, smart cards and other small computing devices have been deployed ever more widely. Lightweight cryptography came into being to help protect such low-power, low-cost IoT devices. In order to establish standard cryptographic algorithms suitable for constrained environments, NIST started its lightweight cryptography standardization process in 2016 and published the second round of candidate algorithms in August 2019. SKINNY-Hash, built on the sponge construction, is one of the second-round candidates, as is SKINNY-AEAD; the tweakable block cipher SKINNY is the basic component of both. Although cryptanalysts have proposed several cryptanalysis results on SKINNY and SKINNY-AEAD, there are no cryptanalysis results on SKINNY-Hash. Based on differential cryptanalysis and mixed integer linear programming (MILP), we perform differential cryptanalysis of SKINNY-Hash. The core task is to set up the inequalities of the MILP model. In fact, it is hard to obtain the inequalities of the substitution layer (i.e., the S-box) by following previous methods. Through a careful study of the permutation, we partition the substitution into a nonlinear part and a linear part, and then obtain a series of inequalities for the MILP model that describe the high-probability differentials. As a result, we propose a differential hash collision path for 3-round SKINNY-tk3-Hash. By adjusting the bit rate of SKINNY-tk3-Hash, we propose a 7-round collision path for the simplified algorithm. The cryptanalysis in this paper will help to promote the NIST lightweight cryptography standardization process.
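    Building an MILP model for an S-box typically starts from its differential distribution table (DDT); the sketch below enumerates a DDT and lists the high-probability transitions that the linear inequalities must retain while cutting off impossible ones. The 4-bit S-box here is a generic placeholder, not the actual SKINNY S-box.
```python
def ddt(sbox):
    """Differential distribution table of a 4-bit S-box."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

# Placeholder 4-bit permutation (NOT the SKINNY S-box).
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

table = ddt(SBOX)
# Transitions with probability count/16: these (input-difference,
# output-difference) pairs are the feasible points that the inequalities
# over the S-box's difference bits must keep, while excluded pairs
# (count == 0) must violate at least one inequality.
for dx in range(1, 16):
    for dy in range(16):
        if table[dx][dy] >= 4:
            print(f"dx={dx:#x} -> dy={dy:#x}  count={table[dx][dy]}")
```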
  • NETWORKS & SECURITY
    Laicheng Cao, Yifan Kang, Qirui Wu, Rong Wu, Xian Guo, Tao Feng
    2020, 17(6): 153-163.
    Ciphertext-policy attribute-based encryption (CP-ABE) can provide fine-grained access control for cloud storage. However, problems such as attribute privacy protection, ciphertext search, and data update need to be solved in its application. Therefore, based on CP-ABE, this paper proposes a dynamically updatable searchable encryption cloud storage (DUSECS) scheme. Using the properties of homomorphic encryption, encrypted data are compared to achieve efficient policy hiding. Meanwhile, by adopting a linked-list structure, the DUSECS scheme realizes dynamic data update and integrity detection, and searchable encryption resistant to keyword-guessing attacks is achieved by combining homomorphic encryption with an aggregation algorithm. The analysis of security and performance shows that the scheme is secure and efficient.
  • NETWORKS & SECURITY
    Wenlong Ke, Yong Wang, Miao Ye
    2020, 17(6): 164-179.
    The proliferation of the global datasphere has forced cloud storage systems to evolve more complex architectures for different applications. The emergence of application session requests and system daemon services has created large persistent flows with diverse performance requirements that need to coexist with other types of traffic. Current routing methods such as equal-cost multipath (ECMP) and Hedera take into consideration neither specific traffic characteristics nor performance requirements, which makes it difficult for them to meet the quality of service (QoS) requirements of high-priority flows. In this paper, we formulate the selection of the best route for different kinds of cloud storage flows as an integer programming problem and utilize grey relational analysis (GRA) to solve this optimization problem. The resulting method is a GRA-based service-aware flow scheduling (GRSA) framework that considers the requested flow types and the network status to select appropriate routing paths for flows in cloud storage datacenter networks. The results of experiments carried out on a real traffic trace show that the proposed GRSA method can better balance traffic loads, conserve flow-table space, and reduce the average transmission delay for high-priority flows compared to ECMP and Hedera.
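    Scoring candidate paths with grey relational analysis can be sketched as follows; the metric names, weights, and reference choice are illustrative assumptions, not the exact GRSA formulation.
```python
import numpy as np

def grey_relational_grades(metrics, larger_is_better, weights=None, rho=0.5):
    """Score candidate paths with grey relational analysis.

    metrics: (num_paths, num_metrics) array, e.g. columns = residual
    bandwidth, hop count, queuing delay, flow-table occupancy.
    larger_is_better: per-column flags used to normalize every metric so
    that 1.0 is best and 0.0 is worst before comparing each path against
    the ideal reference sequence of all ones.
    """
    x = np.asarray(metrics, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    norm = np.where(larger_is_better, (x - lo) / span, (hi - x) / span)
    diff = np.abs(1.0 - norm)                      # distance to the ideal path
    coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    w = np.full(x.shape[1], 1.0 / x.shape[1]) if weights is None else np.asarray(weights)
    return coeff @ w                                # grey relational grade per path

# Three candidate paths: [residual bandwidth (Mb/s), hops, delay (ms), table usage (%)]
paths = [[400, 3, 1.2, 60],
         [900, 5, 2.0, 20],
         [650, 4, 0.8, 45]]
grades = grey_relational_grades(paths, larger_is_better=[True, False, False, False])
print(grades, "-> pick path", int(np.argmax(grades)))
```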
  • NETWORKS & SECURITY
    Zhongnan Zhao, Huiqiang Wang, Jian Wang, Hongwei Guo
    2020, 17(6): 180-195.
    The all-optical network, as a new backbone network, features high-speed and large-capacity transmission. However, it may fail due to various faults while providing high-performance transmission services, so more effective fault repair methods are required. In this paper, a routing and wavelength assignment method based on SDN is designed and analyzed from the perspective of service function chaining. A multi-objective integer linear programming model based on impairment awareness and scheduling time is constructed by combining the unified control of the control plane with the resource allocation mode of service function virtualization. Meanwhile, an improved firefly algorithm is adopted to solve the model and obtain a better scheduling scheme, so that resources are allocated on demand in a more flexible and efficient way, which effectively improves the self-recovery capability of the network. In the simulation experiments, compared with methods based on centralization and distribution, the proposed method is superior in terms of survivability, blocking probability, and link recovery time; it presents better scheduling performance and gives the system a stronger self-healing ability in the face of failures.
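    A minimal continuous firefly-algorithm optimizer, shown only to illustrate the metaheuristic the paper adapts; the toy objective, parameters, and the mapping to routing/wavelength decisions are illustrative and not the paper's improved variant.
```python
import numpy as np

def firefly_minimize(objective, dim, n_fireflies=20, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0), seed=0):
    """Basic firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_fireflies, dim))
    cost = np.array([objective(p) for p in pos])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:              # j is brighter (lower cost)
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    step = alpha * (rng.random(dim) - 0.5)
                    pos[i] += beta * (pos[j] - pos[i]) + step
                    pos[i] = np.clip(pos[i], lo, hi)
                    cost[i] = objective(pos[i])
        alpha *= 0.97                              # gradually reduce randomness
    best = int(np.argmin(cost))
    return pos[best], cost[best]

# Toy objective standing in for the weighted multi-objective RWA cost.
sol, val = firefly_minimize(lambda p: np.sum(p ** 2), dim=4)
print(sol, val)
```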
  • EMERGING TECHNOLOGIES & APPLICATIONS
    Xiaohan Yang, Xiaojuan Li, Yong Guan, Jiadong Song, Rui Wang
    2020, 17(6): 196-210.
    Errors or drift are frequently produced in pose estimation by monocular visual odometry (VO) based on geometric feature detection and tracking when the camera moves faster than 1.5 m/s. Meanwhile, in most VO methods based on deep learning, the weight factors take fixed values, which easily leads to overfitting. A new measurement system for monocular visual odometry, named Deep Learning Visual Odometry (DLVO), is proposed based on neural networks. In this system, a Convolutional Neural Network (CNN) is used to extract features and perform feature matching, and a Recurrent Neural Network (RNN) is used for sequence modeling to estimate the camera's 6-DoF poses. Instead of fixed CNN weight values, Bayesian distributions over the weight factors are introduced to effectively address network overfitting. The 18,726 frames in the KITTI dataset are used to train the network. This system increases the generalization ability of the network model in the prediction process. Compared with the original Recurrent Convolutional Neural Network (RCNN), our method reduces the test loss by 5.33%, and it is more effective than traditional VO methods in improving the robustness of the translation and rotation estimates.
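    The Bayesian treatment of weights can be illustrated in isolation with a tiny Monte Carlo sketch (a single linear layer standing in for the CNN, with hypothetical parameter names); it shows only the idea of replacing fixed weights with distributions, not the DLVO architecture.
```python
import numpy as np

def bayesian_linear_predict(x, w_mu, w_rho, n_samples=50, rng=None):
    """Predict with a layer whose weights are distributions, not fixed values.

    Each weight has a learned mean w_mu and a scale derived from w_rho via a
    softplus; at inference we draw several weight samples and average the
    outputs, which regularizes the model compared with one fixed weight matrix.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.log1p(np.exp(w_rho))                 # softplus keeps scales positive
    outs = []
    for _ in range(n_samples):
        w = w_mu + sigma * rng.standard_normal(w_mu.shape)
        outs.append(x @ w)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)      # prediction and its spread

rng = np.random.default_rng(0)
x = rng.random((1, 8))                               # stand-in for a CNN feature vector
w_mu, w_rho = rng.random((8, 6)) * 0.1, np.full((8, 6), -3.0)
mean_pose, pose_std = bayesian_linear_predict(x, w_mu, w_rho, rng=rng)
print(mean_pose.round(3), pose_std.round(3))        # 6-DoF pose estimate and uncertainty
```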
  • EMERGING TECHNOLOGIES & APPLICATIONS
    Canghong Jin, Guangjie Zhang, Minghui Wu, Shengli Zhou, Taotao Fu
    2020, 17(6): 211-222.
    Text analysis is a popular technique for extracting the most significant information from texts, including semantic, emotional, and other hidden features, and it has become a research hotspot in the last few years. In particular, there are text analysis tasks involving judgment reports, such as analyzing the criminal process and predicting prison terms. Traditional research on text analysis is generally based on special feature selection and ontology model generation, or requires legal experts to provide external knowledge. All of these methods require a lot of time and labor. Therefore, in this paper, we creatively use textual data such as judgment reports to perform prison term prediction without external legal knowledge. We propose a framework that combines value-based rules and fuzzy text representations to predict the target prison term. The procedure in our framework includes information extraction, term fuzzification, and document vector regression. We carry out experiments with real-world judgment reports and compare our model's performance with those of ten traditional classification and regression models and two deep learning models. The results show that our model achieves competitive results compared with the other models as evaluated by the RMSE and R-squared metrics. Finally, we implement a prototype system with a user-friendly GUI that can be used to predict prison terms from legal text input by the user.
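    The fuzzification-plus-regression pipeline can be sketched with a toy example; the membership breakpoints, feature layout, and training data below are invented for illustration and do not come from the paper.
```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_amount(amount_yuan):
    """Map an extracted case value (e.g. the amount involved) to fuzzy degrees.

    The breakpoints are illustrative; in practice they would come from the
    value-based rules mined from judgment reports.
    """
    return np.array([
        triangular(amount_yuan, -1, 0, 30_000),                  # "small"
        triangular(amount_yuan, 10_000, 50_000, 200_000),        # "medium"
        triangular(amount_yuan, 100_000, 500_000, 10_000_000),   # "large"
    ])

# Toy training set: document-vector features + fuzzy degrees -> prison term (months).
doc_vectors = np.random.default_rng(0).random((6, 4))
amounts = [5_000, 20_000, 60_000, 150_000, 400_000, 800_000]
months = np.array([6, 10, 24, 36, 60, 84], dtype=float)
X = np.hstack([doc_vectors, np.array([fuzzify_amount(a) for a in amounts])])
X = np.hstack([X, np.ones((len(X), 1))])           # bias term
w, *_ = np.linalg.lstsq(X, months, rcond=None)     # document vector regression
query = np.hstack([np.random.default_rng(1).random(4), fuzzify_amount(80_000), [1.0]])
print(f"predicted prison term: {query @ w:.1f} months")
```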
  • EMERGING TECHNOLOGIES & APPLICATIONS
    Vahid Abbasi, Mahrokh G. Shayesteh
    2020, 17(6): 223-234.
    Improving the power distribution characteristics of space-time block codes (STBCs), namely the peak-to-average power ratio (PAPR), the average-to-minimum power ratio (Ave/min), and the probability of an antenna transmitting "zero", makes their practical implementation easier. To this end, this study proposes multiplying a full-diversity STBC by a non-singular matrix in multiple-input multiple-output (MIMO) or multiple-input single-output (MISO) systems with linear or maximum-likelihood (ML) receivers. It is proved that the obtained code achieves full diversity and that the order of detection complexity does not change. The proposed method is applied to different types of STBCs. The bit error rate (BER) and power distribution characteristics of the new codes demonstrate the superiority of the introduced method. Furthermore, lower and upper bounds on the BER of the obtained STBCs are derived for all receivers. The proposed method provides a trade-off among PAPR, spectral efficiency, energy efficiency, and BER.
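    The core operation, multiplying every STBC codeword matrix by a fixed non-singular matrix, together with the PAPR and zero-transmission statistics mentioned above, can be sketched as follows; the toy code with zero entries and the particular rotation matrix are illustrative choices, not the paper's optimized design.
```python
import numpy as np

def toy_stbc(s1, s2):
    """Toy 2x2 codeword with zero entries (rows = time slots, cols = antennas)."""
    return np.array([[s1, 0.0],
                     [0.0, s2]])

def power_stats(codewords):
    """Peak-to-average power ratio and fraction of zero transmit samples."""
    p = np.abs(np.stack(codewords)) ** 2
    return p.max() / p.mean(), float(np.mean(p < 1e-12))

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], (200, 2)) + 1j * rng.choice([-1, 1], (200, 2))) / np.sqrt(2)

# Fixed non-singular transform applied to every codeword: X' = X @ T.
theta = 0.5
T = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])     # rotation: det = 1, power preserved

plain = [toy_stbc(a, b) for a, b in qpsk]
shaped = [c @ T for c in plain]
print("original    PAPR, P(zero):", power_stats(plain))
print("transformed PAPR, P(zero):", power_stats(shaped))
```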