
To Learn or Not to Learn: Deep Learning Assisted Wireless Modem Design

ZTE Communications, No. 4, 2019

XUE Songyan, LI Ang, WANG Jinfei, YI Na, MA Yi, Rahim TAFAZOLLI, and Terence DODGSON

(1. Institute for Communication Systems, University of Surrey, Guildford, GU2 7XH, the United Kingdom; 2. Airbus Defence and Space, Portsmouth, PO3 5PU, the United Kingdom)

Abstract: Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring to us, particularly in physical layer design, and what penalties there may be. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, as well as multiuser and multiantenna detection. In addition, we discuss the fundamental bottlenecks of machine learning as well as their potential solutions in this paper.

Keywords: deep learning; neural networks; machine learning; modulation and coding

1 Introduction

With the launch of commercial 5G mobile networks in 2019, the research of wireless communications is now well on the way towards Vision 2030 and beyond. Today, the picture of future wireless communications is becoming much clearer than ever. According to the ITU Network 2030 Working Group [1], future networks should be architected to support holographic communications and smart connectivity, providing seemingly zero latency, guaranteed ultra-reliability (e.g., 99.9999%), massive Internet of Things (IoT) connectivity, and Tbit/s wireless speed. Communication networks are no longer only a medium for information flow, but also act as distributed computers to form over-the-top (OTT)-like platforms that provide services (such as computing-as-a-service and design-as-a-service) for vertical users. To achieve this goal, wireless technologies should be fundamentally re-designed to be able to fully exploit the spectrum; as such, this is driving the development of extreme physical-layer (PHY) technologies, which are able to handle wireless systems with many nonlinearities, due to the use of very high-order modulations, unexploited mmWave or THz bands, and/or low-cost electronic components (such as low-noise amplifiers (LNAs), mixers, oscillators and low-resolution analog-to-digital converters (ADCs)). Moreover, PHY solutions should be made scalable to the number of connected devices, and they should be parallel computing ready, as future high-performance computing technologies (including future quantum computing technology) rely heavily on parallel computing power.

With such a big picture in mind, machine learning, or more specifically deep learning, can play a significant role in the PHY design, at least from the following five aspects:

1) Conventional PHY algorithms, particularly for wireless receivers, are mostly not parallel computing ready. For instance, most of the linear or nonlinear coherent receivers (such as linear zero-forcing, minimum mean-square error, lattice reduction, and sphere decoding) require either channel matrix inversions or channel matrix decompositions, which are difficult to execute in an efficient and parallel manner. This can cause a bottleneck for the implementation of advanced channel equalizers or multiuser detectors at the receiver side. An exception could be the matched-filter algorithm, which has low complexity and a parallel computing architecture. On the other hand, matched filtering is often too suboptimum for most wireless applications. One might also argue for the parallel computing abilities of brute-force search, likelihood ascent search, or Tabu search. However, those algorithms trade off complexity for parallel computing, and thus they are not cost-effective solutions. In this paper, we will study the merits of deep-learning assisted solutions, with a specific focus on their inherent parallel computing ability.

2) Conventional hand-engineered PHY algorithms face the fundamental trade-off between performance and complexity. Optimum algorithms are often too complex to implement, and low-complexity algorithms are often too suboptimum. Deep-learning assisted PHY algorithms have the potential to achieve (near-)optimum performance with low computational complexity. We argue for the merits of the performance-complexity trade-off when using deep learning.

3) Current PHY technologies are designed for linear communication channels and they are not optimized for future wireless systems, which will often operate in nonlinear conditions. Nonlinear systems are often much harder to analyze mathematically, and in general, we do not even know their channel capacities. Hand-engineered approaches for PHY design and optimization are currently very challenging in this regime; this is where deep learning can be of much assistance.

4) Sensing and communication is an emerging concept in the scope of network automation. Basically, wireless networks are able to capture environmental changes through local and remote sensors or even live video recordings, based on which networks can adapt their operating states for optimum use of their local radio resources. On the PHY layer, environmental information can be translated into channel-side information through machine learning [2], and this can be useful for advanced modem functions such as adaptive modulation, coding and beamforming. In addition, machine learning can play a central role in building and reconfiguring state machines for local networks through extensive online background learning.

5) Since Shannon’s ground-breaking work on communication theory reported in 1948, most telecommunications research effort has been targeting the Level A problem, i.e., how accurately can the information-bearing symbols be conveyed from one point to another? In the academic domain, this research problem has almost been saturated. In the industrial domain, it is very challenging to apply the outcome of Level A research so as to satisfy the growing demand of future wireless networks in terms of smart connectivity, providing seemingly zero latency and perceived infinite capacity. Therefore, it is perhaps the right time to revisit or invest more research effort in the Level B problem, i.e., how precisely do the symbols of communication convey the desired meaning? This problem goes well beyond traditional source encoding practices; from now on, source encoders are expected to understand the meaning of objects instead of just their probability distribution. A simple example of the Level B problem is illustrated in Fig. 1, where the picture on the left-hand side is the original picture for transmission. Instead of compressing the picture using current codec processing methods, source encoders that have been trained to understand the meaning of the picture could send a textual description, such as “a white background picture, with a mother kangaroo carrying her baby in her pouch.” The receiver then rebuilds the picture based on the meaning of the received symbols; this can be termed semantic communications, which involves heavy use of artificial intelligence/machine learning in semantic source encoding and decoding.

Certainly, we shall be able to find more merits and interesting topics when applying artificial intelligence/machine learning in wireless communications; some are already under fast development and some are just emerging. In the following sections, our discussion will be mainly focused on points 1), 2), and 3), as they are suitable for both current and future communication networks. We will also discuss fundamental bottlenecks when applying deep learning to wireless modem design.

The rest of this paper is organized as follows. Section 2 outlines the principles of deep learning assisted modem design in the wireless communication physical layer. Section 3 provides the design details of three practical physical layer applications. Section 4 provides further discussions and open research problems. Section 5 draws the conclusion.

2 Principles of Deep Learning Assisted Modem Design

▲Figure 1.A simple example of the Level B communication problem (semantic communication).

By deep learning, we often mean machine learning through deep artificial neural networks (ANNs). An ANN is called deep when it has two or more hidden layers. Mathematically, the main function of each hidden layer is to perform classification of input vectors, which might be referred to as perception in the artificial intelligence domain. If each output neuron yields a binary-type output, a hidden layer consisting of L neurons is able to classify at least L clusters. When a hidden layer is trained according to the nearest-neighbor rule, the machine is able to learn optimum classifications [3]. One might also employ the k-nearest neighbor rule to train the hidden layer, and in this case, the machine can form at most 2^L clusters. This is a possible way to scale up an ANN when input vectors have to be partitioned into clusters whose number grows exponentially. However, we will have to trade off the classification accuracy.

Prior to studying deep learning assisted wireless modem design, let us have a brief review of the PHY procedure of point-to-point communications (Fig. 2). Basically, signal waveforms are drawn from a finite-alphabet set, say A, of size J. After going through the fading channel, the received waveforms in their discrete-time equivalent form are vectors forming an infinite set. The role of receivers is to map the received vectors back onto the finite-alphabet set A. This procedure mimics the ANN-based classification procedure described above. Indeed, it is rather straightforward to replace the receiver box in Fig. 2 with an ANN black-box. The input vectors are formed by received waveforms combined with the channel state information, as they together form a bijection to the original waveform set A. Alternatively, the input vectors can be channel-equalized signals, which also form a bijection with the original waveform set A. The bijection allows the ANN black-box to be trained through supervised learning. In fact, this example is not the only way to apply deep learning for modem designs. It is also possible to replace both the transmitter and receiver with their corresponding ANN black-boxes, so as to form an autoencoder which can be trained end-to-end for joint transmitter and receiver design [4], [5]. Theoretically, a shallow ANN (i.e., an ANN with a single hidden layer) would be sufficient to perform signal classification at the receiver side, as a receiver is normally a single-task classifier. Joint transmitter and receiver designs (autoencoders) are different, as they need at least one hidden layer at the transmitter side to construct the waveform set and another hidden layer at the receiver side to carry out the corresponding signal classification. Here, the implication is that a deep ANN is more meaningful when a PHY module or procedure can be broken down into two or more different tasks; otherwise, a shallow ANN would be more than enough. This issue will be further elaborated in Section 3.
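As a simple illustration of this supervised-learning setup, the sketch below (our own illustrative example, not code from the paper; a single-antenna Rayleigh fading link with a QPSK alphabet is assumed) shows how labeled training pairs can be generated: each input stacks the received waveform with the channel state information, and the label is the index of the transmitted waveform in the finite-alphabet set A.

```python
# Generating supervised training pairs for an ANN-based receiver:
# input = (received waveform, CSI), label = index of the transmitted symbol in A.
import numpy as np

rng = np.random.default_rng(0)

# Finite alphabet A: QPSK constellation, J = 4 waveforms
A = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def make_training_set(num_samples, snr_db):
    """Return (inputs, labels) for supervised training of the receiver ANN."""
    labels = rng.integers(0, len(A), size=num_samples)        # transmitted symbol indices
    x = A[labels]                                             # transmitted waveforms
    h = (rng.standard_normal(num_samples) +                   # Rayleigh fading coefficients
         1j * rng.standard_normal(num_samples)) / np.sqrt(2)
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))            # per-symbol SNR, unit symbol energy
    n = noise_std * (rng.standard_normal(num_samples) +
                     1j * rng.standard_normal(num_samples))
    y = h * x + n                                             # received waveforms
    # Input vector = received waveform combined with the CSI (real-valued features),
    # which together form a bijection with the alphabet A.
    inputs = np.stack([y.real, y.imag, h.real, h.imag], axis=1)
    return inputs, labels

X_train, y_train = make_training_set(num_samples=100_000, snr_db=10)
print(X_train.shape, y_train.shape)   # (100000, 4) (100000,)
```

Any standard classifier trained on such pairs then plays the role of the receiver box in Fig. 2.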

▲Figure 2.Block diagram of the physical-layer (PHY) procedure of point-to-point communication.

In addition to the ANN architecture, ANN training algorithms or methods are crucial for improving machine learning efficiency. Analogous to ANN-assisted machine learning practices in the general artificial intelligence domain, it is always important to pay particular attention to the following three aspects:

1) Weighting vectors (including biases) in each hidden layer should be carefully initialized. They are often randomly generated according to a certain independent probability distribution within a certain range, which can vary from case to case in practical applications. Specific to modem design, we should bear in mind that those weighting vectors become, during training, reference vectors for the eventual signal classification. Therefore, they should be initialized in a way that helps machines capture the characteristics of communication signals.

2) Activation functions must be carefully selected to improve the optimality or efficiency of ANN-assisted machine learning. For instance, Softmax(.) is suitable for small-scale ANNs that adopt the nearest-neighbor rule in machine learning. This enables Euclidean-distance optimality when training a hidden layer. Moreover, Softmax(.) allows machines to produce soft outputs that are often useful for soft-demodulation and decoding practices. Alternatively, we can employ Sigmoid(.) to scale up ANNs when they are expected to handle massive-region classifications. Certainly, we will have to pay a price in classification optimality. For more information, a relatively comprehensive list of activation functions as well as their descriptions can be found in [6].

3) Backpropagation (BP) is essential at the ANN training stage to recursively update neuron weighting vectors, with the aim of minimizing a loss function such as the mean-square error, mean absolute error or categorical cross-entropy between the ANN output and the labeled training target, depending on the application. A commonly used BP method is called mini-batch gradient descent, which randomly picks a certain number of training samples from the entire training data set on each training iteration. Compared to another commonly used BP algorithm called batch gradient descent, mini-batch gradient descent can significantly reduce computational complexity, particularly when the path to the desired minima is quite noisy. These three aspects are tied together in the sketch below.
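The following small NumPy sketch (illustrative sizes and hyper-parameters, not taken from any of the cited works) combines the three points: random weight initialization, a ReLU hidden layer with a softmax output producing soft class scores, and mini-batch gradient descent driven by backpropagation of the categorical cross-entropy.

```python
# Minimal one-hidden-layer classifier trained with mini-batch gradient descent.
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_mlp(X, y, hidden=32, classes=4, lr=0.05, epochs=20, batch=256):
    n, d = X.shape
    # 1) random initialization of weights and biases
    W1 = 0.1 * rng.standard_normal((d, hidden)); b1 = np.zeros(hidden)
    W2 = 0.1 * rng.standard_normal((hidden, classes)); b2 = np.zeros(classes)
    Y = np.eye(classes)[y]                          # one-hot labeled training targets
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):            # 3) mini-batch gradient descent
            b = idx[start:start + batch]
            H = np.maximum(0, X[b] @ W1 + b1)       # ReLU hidden layer
            P = softmax(H @ W2 + b2)                # 2) softmax soft outputs
            # backpropagation of the cross-entropy gradient
            dZ2 = (P - Y[b]) / len(b)
            dW2, db2 = H.T @ dZ2, dZ2.sum(0)
            dH = (dZ2 @ W2.T) * (H > 0)
            dW1, db1 = X[b].T @ dH, dH.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Example: params = train_mlp(X_train, y_train)  # pairs from the earlier data-generation sketch
```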

3 Deep Learning Assisted Modem Designs and Their Merits

In this section, we will offer three case studies on deep-learning assisted wireless modem design and argue for their advantages in computing latency reduction, remarkable complexity-performance trade-off, as well as robustness to nonlinear physical distortions.

3.1 Case Study 1: Deep Learning Assisted Parallel Decoding of Convolutional Codes

Error-control codes are serial in their computing architecture by nature, due to correlations amongst codeword bits. This fact challenges the design of parallel-computing ready decoding algorithms. Recent advances towards ANN-assisted decoders are mainly based on recurrent neural networks [7], [8], and they show clear advantages in the performance-complexity trade-off. Here, we review a more recent contribution in this domain, which proposes the employment of feed-forward neural networks for low-complexity parallel decoding of convolutional codes [9].

The basic idea is to partition a long convolutional codeword into a number of pieces, forming so-called sub-codewords. When the length of the sub-codewords is sufficiently long, there exists a bijection between sub-codewords and their corresponding original information bits, subject to an initial-state uncertainty. As depicted in Fig. 3a, sub-codewords are first decoded in parallel using a list maximum-likelihood decoder (List-MLD), and then the initial-state uncertainties are removed through the sub-codeword merging process; this is referred to as a two-stage decoding process that can be implemented in parallel. In this case study, the role of the ANN is to replace the List-MLD algorithm at the sub-codeword decoding stage, as the latter has very high computational complexity. Fig. 3b illustrates the ANN training procedure, where the sub-codeword decoder is modelled as a deep-ANN black-box. The input vector is the noisy version of all possible sub-codewords, and the output vector is the corresponding estimate of the original information bits. It is worth highlighting that the training set of input vectors should be carefully defined so as to incorporate the effect of initial-state uncertainty (as detailed in [9]), as this is crucial for the sub-codeword merging stage. Moreover, it is suggested to partition a long convolutional codeword evenly, as in this case we only need to train one ANN black-box and can reuse it for all sub-codewords, resulting in an efficient way to reduce the training complexity.
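To make the training-set construction concrete, the sketch below (our own illustrative example; a rate-1/2 non-recursive code with generator taps (7, 5) in octal is assumed, and the exact construction in [9] may differ) generates one training pair for the ANN sub-codeword decoder: the information bits and the initial encoder state are both drawn at random, so that the initial-state uncertainty is represented in the training data.

```python
# One (noisy sub-codeword, information bits) training pair with a random initial state.
import numpy as np

rng = np.random.default_rng(2)
G = [(1, 1, 1), (1, 0, 1)]          # generator taps (7, 5) in octal, constraint length 3

def conv_encode(bits, state=(0, 0)):
    """Rate-1/2 non-recursive convolutional encoder starting from a given shift-register state."""
    s1, s2 = state
    out = []
    for b in bits:
        for g in G:
            out.append((g[0] * b + g[1] * s1 + g[2] * s2) % 2)
        s1, s2 = b, s1              # shift the register
    return np.array(out), (s1, s2)

def make_subcodeword_sample(info_len, snr_db):
    """Random information bits AND a random initial state, encoded, BPSK-mapped and noised."""
    info = rng.integers(0, 2, info_len)
    init_state = tuple(rng.integers(0, 2, 2))          # represents the initial-state uncertainty
    coded, _ = conv_encode(info, init_state)
    bpsk = 1.0 - 2.0 * coded                           # 0 -> +1, 1 -> -1
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))         # per-coded-symbol SNR (illustrative)
    noisy = bpsk + sigma * rng.standard_normal(bpsk.size)
    return noisy, info

noisy_subword, info_bits = make_subcodeword_sample(info_len=16, snr_db=2.0)
```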

Fig. 4 illustrates the bit reliability of convolutional decoders in additive white Gaussian noise (AWGN), considering a half-rate non-recursive convolutional code with a codeword length of 64. The illustrated simulation results are only for Eb/N0 = 4 dB, and similar conclusions can be drawn for other Eb/N0 values [9]. The ANN black-box was trained at Eb/N0 = 2 dB. When comparing the parallel decoder with the conventional MLD, it can be seen from Fig. 4 that they have no difference in bit reliability; thus, the parallel decoder is optimum. Moreover, due to its parallel computing nature, the parallel decoder has the potential to reduce computing latency, subject to the number of sub-codewords. When the sub-codeword decoder is realized through the ANN black-box described in Fig. 3b, we can see a little bit of a performance loss in bit reliability (around 0.03%); this is mainly due to using an insufficient number of epochs during the ANN training stage. Nevertheless, the computation time for sub-codeword decoding is reduced by around 95%. It is clear that the ANN helps to achieve a very good complexity-performance trade-off. In addition, the ANN decoder can be executed fully in parallel, and this is an additional advantage for latency reduction.

▲Figure 3.An artificial neural network (ANN)-assisted parallel decoder: (a) two-stage parallel decoding and (b) ANN-assisted sub-codeword decoder. There are three hidden layers, each employing the Rectified Linear Unit (ReLU) activation function; the output layer is equipped with a sigmoid activation function, which outputs the estimated original information bits.

3.2 Case Study 2: Deep Learning Assisted Multiuser OFDMA Frequency Synchronization

▲Figure 4.Bit reliability and latency evaluation for decoding of half-rate non-recursive convolutional codes with a codeword length of 64.

Consider a multiuser frequency-synchronization problem in the context of orthogonal frequency-division multiple-access (OFDMA) uplink communications, where transmitters experience independently generated carrier frequency offsets (CFOs) due to oscillator instability or Doppler-induced random frequency modulations. This problem involves two sub-problems: one is multiuser CFO estimation, and the other is multiuser detection (MUD) or multiuser interference (MUI) cancellation given the CFO estimates. Multiuser CFO estimation can be implemented by employing either pilot-assisted approaches or blind approaches that exploit statistical behaviors inherent in the signal waveforms. When CFO estimates are assumed to be available at the transmitter side, each transmitter can carry out CFO pre-compensation individually. However, link-level latency will be a considerable issue due to the CFO feedback delay. Alternatively, multiuser frequency synchronization can also be carried out in each individual user domain (e.g., subband) using the filterbank approach, which can be combined with iterative parallel interference cancellation (PIC). However, such a method is sensitive to the CFO estimation accuracy and could introduce extra baseband processing latency into the system.

Fig. 5 illustrates a deep-learning assisted multiuser frequency synchronization approach, named classification-and-then-MUD (CAT-MUD) in [10]. The deep ANN has two functional layers: one is responsible for multiuser CFO classification and the other for MUI cancellation. The CFO classifier is employed to tell which CFO sub-range the transmitters' CFOs fall into. This is very different from conventional CFO estimation in the sense that the classifier only estimates the CFO range instead of the CFOs themselves. With the estimated CFO sub-range index, the received signals are then fed into the MUD layer for MUI cancellation; please find a detailed introduction of CAT-MUD in [10].
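The control flow can be sketched as below (a structural sketch only; the classifier and MUD branches are stand-in placeholders rather than the trained networks of [10]): a CFO-range classifier selects an index, and the received block is routed to the matching pre-trained MUD branch, optionally together with its adjacent branches.

```python
# Structural sketch of the classify-then-switch control flow of CAT-MUD.
import numpy as np

rng = np.random.default_rng(3)
NUM_CFO_RANGES = 8          # assumed number of CFO sub-ranges (illustrative)

def cfo_classifier(rx_block):
    """Placeholder for the trained CFO-classification layer: index of the likeliest sub-range."""
    scores = rng.random(NUM_CFO_RANGES)          # stand-in for the softmax scores
    return int(np.argmax(scores))

def mud_branch(rx_block, cfo_index):
    """Placeholder for the MUD layer trained for one CFO sub-range."""
    return rx_block                               # stand-in for the multiuser symbol estimates

def cat_mud(rx_block, num_adjacent=3):
    idx = cfo_classifier(rx_block)
    half = num_adjacent // 2
    # Turn on the selected branch plus its neighbours to be robust to
    # classification errors (the 3-branch model discussed in the text).
    branches = [i for i in range(idx - half, idx + half + 1) if 0 <= i < NUM_CFO_RANGES]
    return [mud_branch(rx_block, i) for i in branches]

outputs = cat_mud(rng.standard_normal(32))
```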

Fig. 6 illustrates the overall system performance (in block-error rate, BLER) for OFDMA systems, where 4 transmitters evenly share 32 subcarriers. The original information bits are first modulated into 16-QAM symbols and then transmitted through an 8-tap frequency-selective Rayleigh fading channel (3GPP Channel Model A). To be more robust to CFO classification errors, the switch depicted in Fig. 5 can simultaneously turn on multiple adjacent MUD branches. Fig. 6 shows that the 3-branch model achieves the best performance-complexity trade-off. It outperforms the conventional PIC approach by around 3 dB in Eb/N0 and offers comparable performance with the CFO-free case at low and moderate SNRs (such as Eb/N0 < 15 dB).

▲Figure 5.Block diagram of the deep artificial neural network (deep-ANN) assisted classification-and-then-multiuser-detection (CAT-MUD).

3.3 Case Study 3: Deep Learning Assisted Coherent MIMO Detection

Multiuser multiple-input multiple-output (MU-MIMO) signal detection over a noisy fading channel is mathematically an integer least-squares (ILS) problem, which aims to minimize the pairwise Euclidean distance between the transmitted signal multiplied by the channel matrix and the received signal [11]. Since the optimal MLD solution is computationally expensive, the usual practice is to employ linear channel equalization algorithms, such as the matched filter (MF), zero forcing (ZF) and linear minimum mean-square error (LMMSE), trading optimality for lower computational complexity. However, linear algorithms are often too suboptimum due to their use of symbol-by-symbol detection. Therefore, enormous research effort has been devoted in the last two decades to achieving the best performance-complexity trade-off through the use of non-linear algorithms (e.g., Vertical-Bell Laboratories Layered Space-Time (V-BLAST) [12], Linear Minimum Mean-Square Error-Successive Interference Cancellation (LMMSE-SIC) [13], and so on). The problem is that most of the non-linear algorithms are too complex for current DSP technology and do not lend themselves well to parallel computing. This goes against the trend of computing technology development.
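For reference, the ILS formulation and the two most common linear detectors can be written down in a few lines of NumPy (a real-valued BPSK toy example with illustrative dimensions): MLD searches over all candidate vectors, while ZF and LMMSE invert (or regularize and invert) the channel and then slice symbol by symbol.

```python
# ILS detection: y = Hx + n, with MLD, ZF and LMMSE solutions.
from itertools import product
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr = 4, 8                                   # 4 single-antenna users, 8 receive antennas
alphabet = np.array([1.0, -1.0])                # BPSK for simplicity

x = rng.choice(alphabet, Nt)                    # transmitted vector
H = rng.standard_normal((Nr, Nt))               # real-valued channel matrix (illustrative)
sigma2 = 0.1
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(Nr)

# Optimal MLD: arg min_x ||y - Hx||^2 over all |A|^Nt candidates (exponential cost)
cands = np.array(list(product(alphabet, repeat=Nt)))
x_mld = cands[np.argmin(np.linalg.norm(y - cands @ H.T, axis=1))]

# Linear zero forcing: pseudo-inverse of H, then symbol-by-symbol slicing
x_zf = np.sign(np.linalg.pinv(H) @ y)

# LMMSE: (H^T H + sigma^2 I)^{-1} H^T y, then slicing (unit symbol energy assumed)
x_lmmse = np.sign(np.linalg.solve(H.T @ H + sigma2 * np.eye(Nt), H.T @ y))
```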

▲Figure 6.BLER of classification-and-then-multiuser-detection (CAT-MUD) as a function of Eb/N0 (dB) over Rayleigh fading channels.

Deep learning assisted solutions have demonstrated their potential for offering computational complexity close to linear receivers, without compromising the detection performance. Moreover, most deep learning algorithms are parallel computing ready. According to the way channel state information at the receiver side (CSIR) is utilized, deep-learning solutions can be divided into two categories: the channel equalization and learning (CE-L) mode (Fig. 7a) and the direct learning (Direct-L) mode (Fig. 7b). The difference is that the CE-L mode employs the ANN black-box after channel equalization, whereas the Direct-L mode takes both the CSIR and the received signal as the input vector for signal classification.

▲Figure 7.Block diagram of deep-learning assisted multiuser multiple-input multiple-output (MU-MIMO) detection algorithms.

A major advantage of the CE-L mode lies in the use of channel equalization for multiuser signal orthogonalization. Hence, the input vector to the ANN black-box is effectively a noisy version of the transmitted signal vector. By such means, the CE-L mode turns the ANN classification problem from the vector level to the symbol level. However, the performance of the CE-L mode is limited by the symbol-by-symbol MLD bound. Theoretically, the Direct-L mode is able to achieve the optimum MLD performance for vector-level classification. In addition, the Direct-L mode does not need channel equalization. This is a remarkable advantage, as channel equalizers often require channel matrix inversions, which do not support parallel computing. On the other hand, the Direct-L mode does not scale with the MIMO size, due to the ANN's reduced classification ability as the multiuser interference grows.
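The practical difference between the two modes is mainly in how the ANN input vector is formed, as in the following sketch (illustrative, real-valued features; function names are ours): CE-L equalizes first and classifies each symbol separately, while Direct-L concatenates the received vector with the vectorized CSIR and classifies the whole transmitted vector without any matrix inversion.

```python
# Input-vector construction for the CE-L and Direct-L modes.
import numpy as np

def cel_inputs(y, H):
    """Channel-equalization-and-learning: one ANN input per transmitted symbol."""
    x_eq = np.linalg.pinv(H) @ y                 # ZF equalization (needs a matrix inversion)
    return [np.array([xi]) for xi in x_eq]       # symbol-level classification inputs

def directl_input(y, H):
    """Direct learning: a single vector-level input built from y and vec(H)."""
    return np.concatenate([y, H.reshape(-1)])    # no equalization, no inversion
```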

Fig. 8 illustrates a novel deep-ANN approach, where a multi-layer modularized ANN is combined with PIC to scale up the Direct-L mode. This approach is called DNN-PIC in [14]. Basically, the entire ANN consists of a number of cascaded PIC layers, with each layer employing a group of identical pre-trained DNN-PIC modules for signal classification and interference cancellation. Therefore, the multiuser interference decreases linearly with the feed-forward procedure, and the last layer is able to provide a better classification of the MU-MIMO signals.
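The layered cancel-and-redetect structure can be sketched as follows (a structural sketch under our own simplifying assumption that each per-user module is a matched filter with a hard slicer; in DNN-PIC the module would be a pre-trained DNN, but the cascade has the same shape).

```python
# Cascaded PIC layers: each layer re-detects every user after subtracting the
# currently estimated contributions of all other users, fully in parallel per user.
import numpy as np

def detect_user(residual, h_k):
    """Stand-in for one pre-trained per-user module (matched filter + hard slicer)."""
    return np.sign(h_k @ residual)

def pic_layers(y, H, num_layers=3):
    Nt = H.shape[1]
    x_hat = np.array([detect_user(y, H[:, k]) for k in range(Nt)])   # initial estimates
    for _ in range(num_layers):                                      # cascaded PIC layers
        new_hat = np.empty(Nt)
        for k in range(Nt):
            interference = H @ x_hat - H[:, k] * x_hat[k]            # other users' contributions
            new_hat[k] = detect_user(y - interference, H[:, k])      # re-detect user k
        x_hat = new_hat                                              # feed forward to the next layer
    return x_hat

# Example: x_hat = pic_layers(y, H), with y and H as in the earlier ILS sketch.
```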

Fig. 9 compares the average bit-error-rate (BER) performance of conventional MU-MIMO receivers and the DNN-assisted solutions. For the CE-L and Direct-L modes, the ANN was trained at Eb/N0 = 5 dB. For the DNN-PIC approach, the ANN was trained at three different Eb/N0 points (i.e., Eb/N0 = 0 dB, 5 dB and 10 dB), and the trained models were optimally selected during the communication procedure in order to obtain the best achievable performance. Simulation results show that deep learning modules largely improve the detection performance of the MF-based receiver (around 8 dB at a BER of 10^-3) due to better use of the sequence-detection gain. For both the ZF and LMMSE receivers, the sequence-detection gain vanishes since channel equalization orthogonalizes the multiuser signals. Meanwhile, the Direct-L mode significantly outperforms all CE-L modes, and this result confirms the accuracy of the theoretical analysis. Finally, the proposed DNN-PIC approach further improves the BER performance of the Direct-L approach by around 1.5 dB. The performance gap between the DNN-PIC and the MLD receiver is only about 0.2 dB. Again, it should be emphasized that the DNN-PIC approach is parallel computing ready.

4 Discussion and Research Challenges

Although deep learning has achieved widespread empirical success in many areas, the application of deep learning to wireless communication physical layer design is still at an early stage of research and engineering implementation. In this section, we list several fundamental bottlenecks together with potential future research directions.

▲Figure 8.Block diagram of the DNN-PIC approach.

▲Figure 9.Average BER as a function of Eb/N0 for an uncoded 4-by-8 multiuser single-input multiple-output (MU-SIMO) system with BPSK modulation.

1) Training set overfitting.

Overfitting is a modeling problem which occurs when a function fits a limited data set too closely [15]. In the PHY, it could refer to the case where an ANN-assisted receiver trained for a specific wireless environment (or channel model) is not suitable for another environment (or channel model). It is a severe problem, since a deep learning solution with limited generalization capability is less useful in real practice. However, this issue can be viewed more positively if deep learning algorithms can be used to optimize wireless receivers integrated into access points based on their local environments.

2) Scalability of DL-based solutions.

In machine learning theory, scalability refers to the effect of an increasing training data set on the computational complexity of a learning algorithm. For instance, the ANN solution in Fig. 7b has its learning capacity rapidly degraded with the growth of the number of transmit antennas [14]. The current approach to mitigate this problem is to train the ANN with channel-equalized signals (Fig. 7a). However, in this case, ANN-assisted receivers are not able to maximally exploit the spatial diversity gain, due to the multiuser orthogonalization enabled by channel equalizers, and the performance is far from optimum. To tackle this issue, novel deep learning algorithms with good scalability are required (and expected) in the future.

3) Training strategies and performance evaluation.

Deep learning for wireless communication is a new research area, and people lack experience in training strategies. For example, the optimal training SNR points for different PHY scenarios remain unknown [15]. In [9], it can be observed that training an ANN at relatively high SNRs gives excellent generalization performance in the low-SNR regime of an AWGN channel. However, when the wireless channel is a fading channel [14], the PHY feature learned in the high-SNR regime can no longer represent the feature at low SNRs. A potential solution is to train ANNs at different SNR regimes separately and then merge the results together, but this introduces additional training complexity and requires SNR estimation. A related question is whether there is a more appropriate way to measure the training process for PHY solutions. It is well known that ANN training aims to minimize a given loss function, and we consider an ANN well trained if the loss converges to an ideal state. On the other hand, PHY performance is normally measured by BER or SER. In most of the ANN-assisted PHY solutions, we make a hard decision on the ANN outputs to obtain the bit-level (or symbol-level) estimates. However, the loss function might not be able to accurately indicate the training progress when complicated PHY scenarios are considered (e.g., high-order modulation and a fast fading channel). In [14], the authors introduce a method which measures the training progress by computing the average BER/SER over the last few training epochs, and the estimated BER performance is shown to be very close to the validation performance. In general, training strategies, especially for PHY applications, are worthy of investigation in future research.
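A minimal version of such a BER-based progress measure is sketched below (the window length and interfaces are our own assumptions, not those of [14]): the BER on a validation batch is logged every epoch and averaged over the last few epochs to smooth out epoch-to-epoch fluctuations.

```python
# BER-based training-progress measure: average validation BER over recent epochs.
import numpy as np

def ber(bits_hat, bits_true):
    """Bit error rate between hard-decision outputs and the true bits."""
    return float(np.mean(np.asarray(bits_hat) != np.asarray(bits_true)))

def average_recent_ber(ber_per_epoch, window=5):
    """Average the per-epoch validation BER over the last `window` epochs."""
    recent = ber_per_epoch[-window:]
    return float(np.mean(recent)) if recent else float("nan")
```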

4)Hardware implementation.

Currently, most ANN-assisted PHY solutions are still at the software simulation stage, but hardware implementation normally requires more practical considerations [16]–[18]. Apart from the channel model and data set that we have discussed in the previous sections, power consumption also needs to be considered, since the ANN training process often involves very high computation cost. The aim of reducing ANN learning expenses has recently motivated a new research area on non-von-Neumann computing architectures.

5 Conclusions

This paper presents several promising ANN-assisted PHY applications. The idea lies in the use of ANNs to replace parts of the conventional signal processing blocks in the communication chain. It is shown that ANN-assisted approaches achieve competitive performance in terms of both reliability and latency in various applications. More importantly, deep learning offers us a fundamentally new way to design and optimize conventional communication systems. A wide range of open challenges need to be solved, and theoretical analysis is also expected in future research.
