
RecCac: Recommendation-Empowered Cooperative Edge Caching for Internet of Things

ZTE Communications, 2021, No. 2

HAN Suning, LI Xiuhua, SUN Chuan, WANG Xiaofei, Victor C. M. LEUNG

(1. Chongqing University, Chongqing 400000, China; 2. Tianjin University, Tianjin 300072, China; 3. Shenzhen University, Shenzhen 518000, China; 4. The University of British Columbia, Vancouver V6T 1Z4, Canada)

Abstract: Edge caching is an emerging technology for supporting massive content access in mobile edge networks to address rapidly growing Internet of Things (IoT) services and content applications. However, edge servers have limited computation and storage capacity, which leads to a low cache hit ratio. Cooperative edge caching, which jointly exploits neighboring edge servers, is regarded as a promising technique to improve the cache hit ratio and reduce network congestion. Further, recommender systems can provide personalized content services to meet users' requirements in entertainment-oriented mobile networks. Therefore, we investigate the issue of joint cooperative edge caching and recommender systems to achieve additional cache gains through a soft caching framework. To measure the cache profits, the optimization problem is formulated as a 0-1 Integer Linear Programming (ILP) problem, which is NP-hard. Specifically, the method of processing content requests is defined as server actions, and we determine the server actions to maximize the quality of experience (QoE). We propose a cache-friendly heuristic algorithm to solve the problem. Simulation results demonstrate that the proposed framework has superior performance in improving the QoE.

Keywords: IoT; recommender systems; cooperative edge caching; soft caching

1 Introduction

As the development trend of future networks, the Internet of Things (IoT) has become a hot research topic in industry and academia in recent years[1]. The emergence of the IoT paradigm makes various IoT sensors (e.g., smart cameras and temperature sensors) universally accessible, and thus enables intelligent services that improve the quality of human life[2]. Billions of IoT devices (IDs) generate a tremendous amount of monitoring data while a great many end users consume these data. However, countless electronic devices are anticipated to generate a sheer volume of traffic, and the aggregate load on core networks is expected to be large. Therefore, it is important for network providers to reduce congestion and transmission delay[3-5].

As stated above, mobile edge networks are faced with the challenge of the explosive growth of IoT data requests from the IDs, especially in current backhaul networks[6]. According to existing research, most of the high load in mobile networks is generated by downloading the same content and data. To solve this problem, it is necessary to put forward revolutionary methods in network structure and data transmission[7]. As one of the rapidly developing technologies, edge caching has drawn growing attention. Edge caching can reduce repeated downloading and transmission by caching contents in advance[8]. However, as content providers (CPs) supply ever more content and the storage and computing capacity of a cell (e.g., an edge server) is limited, we still face great challenges in solving the above problems. Many researchers are looking for additional cache gains in this area. Some current research (e.g., FemtoCaching[9]) focuses on caching contents in the edge servers of base stations (BSs). However, it only considers the basic per-cell cache, and inter-cell cooperation has not been explored in depth.

Besides, how to use the cached contents to achieve more cache gains is also a problem we have to consider. It is difficult to improve caching performance only by focusing on content popularity in entertainment-oriented mobile networks. To solve this problem, recommender systems provide an effective method that offers personalized content recommendations based on historical behavior, e.g., users may have evaluated or scored different contents. Moreover, some related contents, such as two similar comedy movies or two short videos of the same type, might have similar utility for a user. We use the term soft caching[10], which means that if the local BS does not cache the requested content, the BS can send other relevant contents available locally. If the user likes or accepts the relevant contents (within a certain threshold) instead of the content that was originally requested, a soft cache hit occurs. This scheme may give up some content relevance, but it avoids the "expensive" connection of the IDs to fetch the requested content from the backhaul network. Actually, some recent experimental evidence suggests that IDs may be willing to trade off some content relevance for a better quality of experience (QoE)[11].

More specifically, in this paper, cooperative edge caching and recommender systems are used to alleviate the pressure on the backhaul network and to provide related contents for soft caching, respectively. We combine cooperative edge caching with recommender systems to improve the QoE. Recently, some researchers have considered the interaction between edge caching and recommender systems to optimize caching or recommendation[10-17]. However, most of the research only focuses on one side of the problem, e.g., caching-friendly recommendations[10, 12-13, 15, 17] or recommendation-aware caching policies[16]. A real joint treatment of both is attempted in Refs. [11] and [14], but their studies on hierarchical mobile edge networks are not deep enough.

To sum up, different from the existing studies on edge caching and recommender systems, we focus on improving the QoE by judiciously selecting server actions. Our main contributions are summarized as follows:

1) We combine cooperative edge caching with soft caching for IoT systems. To measure cache profits, we propose a generic metric of QoE that depends on the quality of service (QoS) and the quality of recommendation (QoR).

2) We formulate the problem of optimally choosing the server actions towards maximizing the QoE. Although such joint caching and recommendation problems have been proved to be NP-hard, we propose a cache-friendly hierarchical heuristic algorithm.

3) Trace-driven evaluation results demonstrate that our proposed scheme has superior performance in improving the cache hit ratio and the QoE.

The remainder of this paper is organized as follows. Section 2 discusses the proposed hierarchical cooperative edge caching model and formulates the optimization problem. Section 3 introduces a cache-friendly hierarchical heuristic algorithm to solve the problem. Section 4 evaluates the performance of the proposed framework and Section 5 concludes this paper.

2 System Model and Problem Formulation

In this section, we introduce the system model of edge caching. Specifically, we present the hierarchical cooperative edge caching architecture and topology in Section 2.1. Section 2.2 introduces the recommendation-aware content request processing model. Then we propose a QoE model considering delay and recommendation in Section 2.3. Finally, Section 2.4 gives the problem formulation. Some key parameters are listed in Table 1.

▼Table 1. Key parameters

2.1 Hierarchical Cooperative Edge Caching Model

The proposed system is a cooperative Cloud-Edge-End computing system with a cloud server (CS), several discrete BSs, and IDs. As shown in Fig. 1, we consider a cooperative edge caching scenario for IoT networks. The CS has sufficient computing and caching capacity and stores all data and contents. Each BS is equipped with an edge server, which has limited caching and computing capability. Each ID, as a content requester, generates a request at each time slot. In the proposed system, each BS communicates with the CS through backhaul links. To enhance the utilization of the BSs and alleviate the pressure on the backhaul networks, each BS can communicate with all cooperative BSs through fronthaul links instead of working individually[18]. Besides, as the contents are cached in the BSs or the CS, IDs can fetch their requested contents either from edge servers via wireless links or by downloading the contents from the CS through the BSs.

The proposed system consists of $\mathcal{N} = \{1, 2, \dots, N\}$ fully connected BSs, each with a finite cache size $C$, and $\mathcal{M} = \{1, 2, \dots, M\}$ IDs distributed in the service area of the BSs. In addition, we denote $a_{m,n} \in [0, 1]$ as the association probability between BS $n$ and ID $m$. We assume that each ID requests a content or a set of data from a catalogue $\mathcal{F} = \{1, 2, \dots, F\}$ at each time slot, and we denote the size of content $f$ as $D_f$.

We assume that ID $m$ requests content $f$ with a standard content request probability. Hence, we can obtain the content popularity $p_f$ as in Ref. [9]. Furthermore, we assume that the content popularity $p_f$ changes slowly.
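The popularity expression itself is not reproduced in this version of the text. Purely as an illustrative placeholder (Ref. [9] and much of the caching literature adopt a Zipf law, but the exact form used here is not shown, and the skewness parameter $\gamma$ is our own), a Zipf-type popularity would read:

$p_f = \dfrac{f^{-\gamma}}{\sum_{f'=1}^{F} (f')^{-\gamma}}, \quad f \in \mathcal{F}.$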

For the cache state, we focus on whether the content has been cached in the BSs. The content cache state is denoted as $s_{n,f} \in \{0, 1\}, \forall n \in \mathcal{N}, \forall f \in \mathcal{F}$. Here, $s_{n,f} = 1$ represents that BS $n$ has cached content $f$; otherwise, $s_{n,f} = 0$.

2.2 Recommendation-Aware Request Processing Model

We define a score $w_{m,f}$ to represent the ID's preference for the content or data $f$. As for $p_f$, it denotes the probability of ID $m$ requesting content $f$. Specifically, given the scores $w_{m,f}$, a reasonable choice could be their normalized values:
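The normalization formula is missing from this version of the text; a minimal sketch of what "normalized scores" would give (the per-ID notation $p_{m,f}$ is our own) is:

$p_{m,f} = \dfrac{w_{m,f}}{\sum_{f' \in \mathcal{F}} w_{m,f'}}.$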

Since soft caching replaces the requested content with related contents or data available at the local BS, we rank the scores in descending order to get a recommendation list $K_m$ for ID $m$. When a content request $f$ generated by ID $m$ arrives at the local BS, there are three possible situations:

1) Local hits: The local BS processes the content request. Local hits are divided into direct cache hits and soft cache hits.

2) Neighboring hits: The content requested by an ID is obtained from its cooperative BSs, and the transmission delay is relatively small compared with downloading from the CS.

3) CS hits: The ID obtains the requested content from the CS. The transmission in this situation is known to be "expensive".

We model the server action for a content request as a tuple of three sub-decisions, denoted as $\pi_{m,n,f}$, whose binary components in $\{0, 1\}$ indicate whether the request is processed at the local BS, the cooperative BSs, or the CS. The three sub-decisions jointly determine how the request is processed. Different decisions affect the transmission delay and content satisfaction.

As the content is indivisible, for each $m \in \mathcal{M}$, only one of the three indicators can be 1. Similar to Ref. [19], the decision variable $\pi_{m,n,f}$ is constrained by:
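The constraint itself is omitted in this extraction; a minimal sketch, with hypothetical indicator names $x^{l}_{m,n,f}$, $x^{e}_{m,n,f}$, and $x^{c}_{m,n,f}$ for local, cooperative-edge, and cloud processing (these symbols are our own, not the paper's), is:

$x^{l}_{m,n,f} + x^{e}_{m,n,f} + x^{c}_{m,n,f} = 1, \quad x^{l}_{m,n,f}, x^{e}_{m,n,f}, x^{c}_{m,n,f} \in \{0, 1\}.$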

2.3 QoE Model

We define the QoE as a combination of the QoS and the QoR. The QoS and the QoR are measured by the transmission delay and content satisfaction, respectively. In the following, we discuss these two parts under different decisions in detail:

1) Delay: We consider the transmission delay as the time for an ID to receive the contents or data. In the proposed system, there are three delay components: the transmission delay for ID $m$ to receive the content from the local BS $n$, the transmission delay of cooperation among the BSs, and the transmission delay between the BS and the CS.

▲Figure 1. Cooperative edge caching supporting IoT architecture

Specifically, we assume that the wireless channel has been established. Similar to Ref. [20], we can obtain the transmission rate between ID $m$ and the local BS $n$ as follows:
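The rate expression is not reproduced in this version; a plausible reconstruction from the variables defined below, assuming the usual Shannon-capacity form of Ref. [20] (the symbol $v_{m,n}$ is our own), is:

$v_{m,n} = B \log_2 \left( 1 + \dfrac{P_m g_{m,n}}{\sigma^2} \right).$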

where $B$ denotes the channel bandwidth, $\sigma^2$ denotes the background noise power, and $P_m$ denotes the transmission power of BS $n$ to ID $m$. The channel gain $g_{m,n}$ is estimated from the distance $l_{m,n}$ between the local BS $n$ and ID $m$.

Thus, the delay of transferring content $f$ between ID $m$ and the local BS $n$ is denoted as:
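This delay expression is also missing from the extraction; under the natural size-over-rate assumption (the notation $t^{l}_{m,n,f}$ is our own), it would be:

$t^{l}_{m,n,f} = \dfrac{D_f}{v_{m,n}}.$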

The transmission among the cooperative BSs goes through fronthaul links with high bandwidth. In terms of the transmission between the CS and the BSs, the CS is usually deployed at a farther distance, and a large amount of traffic is transmitted through multiple intermediate nodes. We express these two parts in terms of average rates: $v_e$ denotes the average transmission rate between two BSs. Therefore, the transmission delay between cooperative BSs can be expressed as follows:
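A minimal sketch of this delay, again assuming content size over the average rate (the symbol $t^{e}_{f}$ is our own):

$t^{e}_{f} = \dfrac{D_f}{v_e}.$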

Similarly, $v_c$ denotes the average transmission rate between the BSs and the CS. The transmission delay between the BSs and the CS can be expressed as:
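Correspondingly, under the same size-over-rate assumption (the symbol $t^{c}_{f}$ is our own):

$t^{c}_{f} = \dfrac{D_f}{v_c}.$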

2) Recommendation: If the content requested by the ID is not cached locally, similar contents cached locally can be substituted.

Specifically, for local hits, considering soft caching, we define the content satisfaction as:

Similarly, for neighboring BS cache hits, we define the content satisfaction as:

For downloading content $f$ from the CS, we define the content satisfaction as:
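The three satisfaction definitions referenced above are omitted in this version of the text. Purely as an illustrative sketch (not the authors' exact definitions, and the symbol $q_{m,f}$ is our own), a common soft-caching convention assigns full satisfaction when the exact content $f$ is delivered (direct local hit, neighboring hit, or CS download) and the relative preference of the substitute when a soft hit on a recommended content $f' \in K_m$ occurs:

$q_{m,f} = \begin{cases} 1, & \text{exact content } f \text{ delivered}, \\ \dfrac{w_{m,f'}}{w_{m,f}}, & \text{soft hit with substitute } f' \in K_m. \end{cases}$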

2.4 Problem Formulation

In the proposed system, our goal is to find the best server actions to improve the QoE. As discussed above, the transmission delay and content satisfaction are the major factors. We express these two parts as follows:
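Eqs. (10) and (11) are not reproduced here. As a rough sketch of their shape, consistent with the description below (the QoS is the reciprocal of the incurred transmission delay and the QoR is the achieved content satisfaction; all symbols beyond those already defined in the text are our own), one could write:

$\mathrm{QoS}_{m,n,f} = \dfrac{1}{x^{l}_{m,n,f}\, t^{l}_{m,n,f} + x^{e}_{m,n,f}\, t^{e}_{f} + x^{c}_{m,n,f}\, t^{c}_{f}}, \qquad \mathrm{QoR}_{m,n,f} = q_{m,f}.$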

where Eq. (10) denotes the QoS, which is expressed as the reciprocal of the content transmission delay (i.e., the smaller the transmission delay, the larger the QoS), accounting for the transmission delay when the content is sent through the cooperative BSs and the transmission delay when the content is downloaded from the CS. Eq. (11) denotes the QoR.

To improve the QoE, we need to trade off the QoS and the QoR (i.e., find the balance between low transmission delay and high content satisfaction) by optimizing the server actions $\pi_{m,n,f}$. To maximize the QoE, we formulate the optimization problem as:

where $p_f$ denotes the probability that the content or data $f$ is requested. In Eq. (12b), $\alpha$ and $\beta$ are scalar parameters to balance transmission delay and content satisfaction. Eq. (12c) denotes the cache state. Eqs. (12d) and (12e) denote the constraints on the server actions. Eq. (12f) denotes the cache capacity.
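Since Eq. (12) itself is not shown in this version, the following is only a sketch of its general shape, assembled from the descriptions above (the weighted QoE, the request probabilities, the action-exclusivity constraint, and the per-BS cache capacity); the paper's exact constraint numbering and notation may differ:

$\max_{\pi} \ \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \sum_{f \in \mathcal{F}} a_{m,n}\, p_f \left( \alpha\, \mathrm{QoS}_{m,n,f} + \beta\, \mathrm{QoR}_{m,n,f} \right)$

$\text{s.t.} \quad x^{l}_{m,n,f} + x^{e}_{m,n,f} + x^{c}_{m,n,f} = 1, \quad x^{(\cdot)}_{m,n,f} \in \{0, 1\}, \quad \sum_{f \in \mathcal{F}} s_{n,f}\, D_f \le C, \ \forall n \in \mathcal{N}.$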

Combining the optimization objectives with the decision variables, the optimization objective of the problem in Eq. (12) can be expressed as:

Thus, the problem can be described as selecting optimal server actions for processing requests by jointly considering transmission delay and content satisfaction. This is a 0-1 ILP problem, which is NP-hard. Because the number of IDs, BSs, and contents can be large, it is of high complexity to obtain the optimal solution by using exact methods.

3 Proposed Framework Design

The proposed system is a hierarchical, cooperation-orchestrated computing topology. We focus on improving the QoE by judiciously selecting the server actions. Different server and content selections affect the final server actions. Thus, to address the complex optimization problem in Eq. (13), we decompose it into two simpler subproblems as below.

1) Inner algorithm for the recommendation list. First, we obtain the recommendation list $K_m$ for ID $m$ from the content or data catalogue, which is implemented by item-based collaborative filtering with inverse user frequency (ItemCF-IUF). The inner algorithm is mainly divided into two steps: calculating the similarity between two contents and generating the recommendation list. When calculating the similarity, we consider the influence of ID activity on content similarity. We use the improved cosine formula to calculate the similarity between contents $i$ and $f$ as:
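The similarity formula is not reproduced here. A standard ItemCF-IUF form consistent with the description below, where $N(u)$ denotes the set of contents liked by ID $u$ (our own notation), down-weights very active IDs:

$\mathrm{sim}(i, f) = \dfrac{\sum_{u \in N_i \cap N_f} \frac{1}{\log\left(1 + |N(u)|\right)}}{\sqrt{|N_i|\, |N_f|}}.$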

where $N_i$ denotes the set of IDs that like content $i$, $N_f$ denotes the set of IDs that like content $f$, and $|N_i \cap N_f|$ denotes the number of IDs that like both contents $i$ and $f$. Then the score of content $f$ is calculated.

Then we sort $w_{m,f}$ in descending order to generate the final recommendation list of ID $m$. The details of the proposed method for solving the inner problem are shown in Algorithm 1. The interior of the loop consists of $|F|$ calculations, and the complexity of the sorting step is $O(\log|F|)$ on a pre-ordered list. Since these steps are repeated for every ID $m$, the total complexity of the algorithm is $O(|M||F|)$.
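Algorithm 1 itself is not included in this version. The following Python sketch illustrates the two steps described above (ItemCF-IUF similarity followed by score aggregation and sorting); the function and variable names are our own, and the details may differ from the authors' implementation.

import math
from collections import defaultdict

def itemcf_iuf_recommend(user_items, ratings, top_k):
    """user_items: dict ID -> set of liked contents; ratings: dict (ID, content) -> score."""
    # Step 1: co-occurrence counts with inverse-user-frequency weighting.
    co = defaultdict(lambda: defaultdict(float))   # co[i][f]: IUF-weighted co-likes
    n_users = defaultdict(int)                     # n_users[i]: |N_i|
    for u, items in user_items.items():
        if not items:
            continue
        iuf = 1.0 / math.log(1.0 + len(items))     # penalize very active IDs
        for i in items:
            n_users[i] += 1
            for f in items:
                if i != f:
                    co[i][f] += iuf
    # Normalize to a cosine-style similarity sim(i, f).
    sim = {i: {f: c / math.sqrt(n_users[i] * n_users[f]) for f, c in fs.items()}
           for i, fs in co.items()}
    # Step 2: aggregate scores w_{m,f} over each ID's liked contents and sort.
    rec_lists = {}
    for u, items in user_items.items():
        scores = defaultdict(float)
        for i in items:
            for f, s in sim.get(i, {}).items():
                if f not in items:
                    scores[f] += s * ratings.get((u, i), 1.0)
        rec_lists[u] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return rec_lists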

2) Server actions. We optimize the server actions. As mentioned above, $\Pi$ has $3^{MNF}$ possible selections. It may be easy to find the optimal solution in a small scenario. However, since the number of IDs, BSs, and contents can be large, it would take a prohibitive amount of time to converge if we used general exhaustive methods (e.g., checking each combination of variables with a value of 0 or 1 and comparing the values of the objective function to obtain the optimal solution). To solve the problem, we propose a cache-friendly heuristic algorithm with the branch and bound (BNB) strategy.

Lemma 2: Eq. (13) can be divided into $M$ independent subproblems as:

Proof: For each ID $m$, we seek the best strategy to satisfy its request, which in turn benefits the whole cache system. Therefore, Eq. (13) can be separated, i.e., the sub-decision for each ID does not affect other IDs because there is no coupling between them.

Specifically, for a content or data request generated by ID $m$, we search the server and content selections $\Pi$ layer by layer. After initialization, we first determine whether a local direct hit occurs according to the cache state. If it does not, we consider whether a soft cache hit occurs. If neither of the above two situations occurs, request processing is completed through the cooperative BSs or the CS. This procedure is repeated until the cache is full. To reduce unnecessary searches, we use the BNB strategy. In Eq. (15), when a feasible solution is found by the heuristic algorithm, the value of $Z_m$ is calculated and recorded; this value is then added to the constraints as a lower bound on the target value. Any solution whose $Z_m$ is below this lower bound can be discarded without verifying whether it meets the other constraints. By continuously improving the lower bound on the target value, the constraint conditions are tightened and the amount of calculation is reduced.

The details of the proposed method for solving the whole problem are shown in Algorithm 2. The computational complexity of Algorithm 2 is $O(|M||N||K|)$.
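Algorithm 2 is likewise not reproduced here. The Python sketch below follows the layered search just described for a single ID (local direct hit, then soft hit from the recommendation list, then a cooperative BS, and finally the CS); the BNB lower-bound pruning is omitted for brevity, and all names, the delay/satisfaction helpers, and the exact scoring are our own assumptions rather than the authors' Algorithm 2.

def choose_action(m, f, local_bs, caches, rec_list, alpha, beta,
                  t_local, t_edge, t_cloud, satisfaction):
    """Layered server-action selection for ID m requesting content f.
    caches[n] is the set of contents cached at BS n; rec_list is the ranked
    substitute list K_m for (m, f); satisfaction(m, g, f) scores delivering g instead of f."""
    def qoe(delay, sat):
        # Weighted combination of QoS (reciprocal delay) and QoR (satisfaction).
        return alpha * (1.0 / delay) + beta * sat

    # Layer 1: local direct cache hit.
    if f in caches[local_bs]:
        return "local_direct", f, qoe(t_local, satisfaction(m, f, f))
    # Layer 2: soft cache hit with the best locally cached substitute.
    for g in rec_list:
        if g in caches[local_bs]:
            return "local_soft", g, qoe(t_local, satisfaction(m, g, f))
    # Layer 3: neighboring hit at any cooperative BS that caches f.
    if any(f in items for n, items in caches.items() if n != local_bs):
        return "neighbor", f, qoe(t_edge, satisfaction(m, f, f))
    # Layer 4: download from the cloud server.
    return "cloud", f, qoe(t_cloud, satisfaction(m, f, f))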

4 Simulation Results

For simulation purposes, all parameters are selected according to a real-world scenario. Numerical experiments are provided to evaluate the performance of the proposed scheme. We consider several BSs, each of which has a maximum coverage of a circle with a radius of 250 m, and more than 400 IDs randomly distributed within the coverage area of the BSs. We determine the local BS of each ID according to the association probability $a_{m,n}$. The channel gain is modeled as $g_{m,n} = 30.6 + 36.7 \log(l_{m,n})$ dB, where $l_{m,n}$ is the distance between ID $m$ and BS $n$. The distance is randomly set in [0, 250] m. The wireless bandwidth, the transmit power, and the noise power are set as 20 MHz, [1.0, 1.5] W, and $10^{-13}$ W, respectively.
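As a quick sanity check of these settings, the short Python snippet below computes the resulting wireless rate and local transmission delay for one ID; the Shannon-rate form, the base-10 logarithm, and the reading of $g_{m,n}$ as a path loss in dB are our assumptions.

import math

B = 20e6          # bandwidth: 20 MHz
P = 1.2           # transmit power in [1.0, 1.5] W
noise = 1e-13     # noise power: 10^-13 W
l = 100.0         # ID-BS distance in meters, drawn from [0, 250] m
D_f = 3.5e6       # content size in bits, drawn from [2, 5] Mbit

path_loss_db = 30.6 + 36.7 * math.log10(l)     # g_{m,n} model from the text
gain = 10 ** (-path_loss_db / 10)              # linear channel gain
rate = B * math.log2(1 + P * gain / noise)     # Shannon-capacity rate (bit/s)
delay = D_f / rate                             # local transmission delay (s)
print(f"rate = {rate / 1e6:.1f} Mbit/s, delay = {delay * 1e3:.1f} ms")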

For IoT data, we consider a real data set consisting of 457 users and more than 9 000 video contents, and these contents are randomly cached in the BSs. The content size is randomly set in [2, 5] Mbit. Further, the cache constraint of each BS is set to a percentage $\theta$ of the total storage size. Besides, we use ItemCF-IUF to get the recommendation list for each ID and the corresponding score $w_{m,f}$. The parameter of Algorithm 1 is set as $R = 2$. To verify the effectiveness of the recommendation algorithm, we calculate the precision, the recall, and their weighted harmonic mean, which are 0.4, 0.1311, and 0.1975, respectively.

To evaluate our proposed framework, we consider the following three baseline schemes:

1) File popularity distribution (FPD) strategy. As mentioned in Ref. [21], when a content request is generated by an ID, the cache system distributes popular contents according to content popularity. However, this strategy processes requests without considering content preferences and soft caching.

2) User-centric optimization (UCO) strategy. Similar to our paper, a simple QoE metric has been proposed for combining content caching with recommender systems in Ref. [11]. It weighs the QoS and the QoR, but cooperative edge caching is missing.

3) Random scheme. The content request is randomly processed at the local BS, cooperative BSs, or the CS. $\Pi$ is randomly set under the constraints in Eqs. (12d), (12e), and (12f).

▲Figure 2. QoE versus different numbers of contents

In Fig. 2, we study different server selection schemes with the number of contents ranging from 1 000 to 9 000, and eight independent simulations are considered (in this case, we set $N = 2$). For each scheme, we set the balance parameter $\alpha$ to 0.1 and 0.2, respectively. We observe that the QoE increases rapidly with the number of contents in our proposed scheme, mainly because a larger amount of content provides more accurate references for recommendation (e.g., more historical behaviors). In the random scheme, the result fluctuates noticeably because the decisions are random. Our proposed scheme also outperforms the other schemes. In particular, the proposed scheme achieves an overall performance improvement of about 30% compared with the FPD scheme. The reason is that soft caching fully considers content preferences, ensuring that content preferences are controllable and distortion is minimized.

Next, we investigate whether the proposed scheme achieves a better QoS-QoR trade-off, as shown in Fig. 3. The balance factor $\alpha$ is in the range of 0.1 to 0.9. According to the simulation, the QoE increases linearly with $\alpha$. When $\alpha = 0.1$ (i.e., the QoR is given priority), we observe that the performance of the FPD scheme and the UCO scheme is similar to that of the proposed scheme, mainly because cooperative caching has little effect on the additional cache gain. When $\alpha$ increases gradually (i.e., part of the QoR is sacrificed and the QoS is given priority), the performance of the proposed scheme is greatly improved compared with the FPD and UCO schemes. Due to the strong randomness of the random scheme, its performance improvement is not obvious.

We also evaluate the hit ratio under different numbers of BSs, as shown in Fig. 4. In the proposed scheme, cache hits are defined as local hits and neighboring hits. We study different server selection schemes with $N$ ranging from 1 to 4. The hit ratio of the proposed scheme fluctuates depending on the number of BSs. For instance, it achieves the best hit ratio when the number of BSs is 2, but when the number of BSs equals 3 or 4, the hit ratio decreases gradually, mainly because more BSs receive more content requests. In terms of improving the hit ratio, the performance of the proposed scheme is obviously better than that of the other three baseline schemes, mainly because the proposed scheme provides more cache hit possibilities.

▲Figure 3. QoE versus different balance parameters

▲Figure 4. Hit ratio versus different numbers of BSs

The proposed scheme considers soft caching and the cooperation among the BSs. Compared with the baseline schemes, our proposed scheme considers the content preferences of the IDs to meet their needs and exploits the BSs' cooperation to reduce the content transmission delay in the networks. Therefore, our scheme is superior to the other schemes in the above comparative experiments.

5 Conclusions

In this paper, we have investigated the joint problem of cooperative edge caching and recommender systems for IoT systems. We have used the concept of soft caching by shifting from satisfying the requests of IDs to satisfying their needs. Under the constraints of resources, computing conditions, etc., we choose appropriate server actions to improve the QoE, which is formulated as a 0-1 ILP problem. To solve it, we have proposed an uncomplicated, cache-friendly hierarchical heuristic algorithm with the BNB strategy. Simulation results have revealed the superior performance of the proposed scheme in increasing the QoE.
