
Content-Centric Networking: A New Approach to Big Data Distribution

2013-06-06
ZTE Communications, 2013, No. 2

Yi Zhu and Zhengkun Mi

(1. Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China;

2. School of Computer Science and Telecommunication Engineering, University of Jiangsu, Zhenjiang 212013, China)

Abstract In this paper, we explore network architecture and key technologies for content-centric networking (CCN), an emerging networking technology in the big-data era. We describe the structure and operation mechanism of a CCN node. Then we discuss mobility management, routing strategy, and caching policy in CCN. For better network performance, we propose a probability cache replacement policy that is based on content popularity. We also propose and evaluate a probability cache with evicted copy-up decision policy.

Keywords big data; content-centric networking; caching policy; mobility management; routing strategy

1 Introduction

With the development of new network technologies and information services, big data has become the focus of attention in IT [1]. The features of big data are volume, velocity and variety [2]. Volume refers to the massive amounts of data that have to be stored and processed. Velocity refers to the constant updating, caching, and delivery of data. Variety refers to the wide range of data and abundant forms of data representation.

In current IP networks, big data can cause congestion and server overload because IP architecture works in host-to-host mode. However, the problems caused by big data tend to affect the data itself, rather than the host or server. These problems may limit data availability, reduce delivery speed and quality, or compromise data security. Therefore, more efficient data-centric architectures need to be designed to solve the problems created by big data.

Since 2006, several new network architectures have been proposed. These architectures stem from next-generation research projects and include data-oriented network architecture (DONA) [3], proposed by the UC Berkeley RAD Lab; 4WARD [4], proposed as part of the EU's Seventh Framework Programme; publish-subscribe internet routing paradigm (PSIRP) [5] and content-centric networking (CCN) [6], [7], proposed by the Palo Alto Research Center; and named data networking (NDN), proposed as part of the National Science Foundation's Future Internet Architecture (FIA) project. Of these architectures, CCN represents a sophisticated technical advancement and also comes under the umbrella of NDN.

In CCN, each piece of content is uniquely named, and the content is separated from its location. If we replace traditional routing (based on host address) with new content-based routing, the requested content can be obtained from a nearby CCN node. This node caches the content, and there is no need to forward the request to the far-away content source. The caching mechanism is the key technology of CCN. It can reduce the response time for accessing content, and it can alleviate network congestion and server overload in a big-data environment.

2 CCN Architecture and Operating Mechanism

Unlike current IP networks, CCN uses the content name instead of the IP address for routing. A hierarchical naming mechanism similar to a URL is used. An example of this mechanism is /njupt.edu.cn/Video/Computer_Networks/Lecture_1.mpeg, where /njupt.edu.cn/Video/Computer_Networks is the prefix for retrieving and forwarding the content, /njupt.edu.cn represents the content provider, /Video/Computer_Networks represents the content type, and /Lecture_1.mpeg represents the content itself.
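To make the naming scheme concrete, the following Python sketch splits such a hierarchical name into its parts. The function and field names are our own illustration, not anything defined by CCN itself:

```python
# Illustrative sketch: decomposing a hierarchical CCN content name.
# CCN does not mandate this parsing; it only defines hierarchical names.
def parse_ccn_name(name):
    parts = name.strip("/").split("/")
    return {
        "provider": parts[0],                   # e.g. njupt.edu.cn
        "content_type": "/".join(parts[1:-1]),  # e.g. Video/Computer_Networks
        "content": parts[-1],                   # e.g. Lecture_1.mpeg
        "prefix": "/" + "/".join(parts[:-1]),   # used for retrieval/forwarding
    }

info = parse_ccn_name("/njupt.edu.cn/Video/Computer_Networks/Lecture_1.mpeg")
print(info["prefix"])  # /njupt.edu.cn/Video/Computer_Networks
```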

There are two kinds of packets in CCN: interest and data. Interest packets contain content identification, selector, and nonce. The selector comprises order preference, publisher filter, and scope. Data packets contain signature, signed info, key locator and stale time, and content. The signature comprises digest algorithm and witness, and the signed info comprises a publisher ID.

An interest packet with a content ID is sent by the requester, and CCN nodes forward the packet until it reaches a node that can provide the requested content according to the maximum matching principle. Then, the data packet is used to send the content back to the requester via the reverse interest-packet forwarding path. This completes the communication.

The key structure of a CCN node comprises a content store (CS), a pending interest table (PIT), and a forwarding information base (FIB). The CS stores content within the node cache. The PIT records received interest packets that are pending a response, together with their arrival faces and the requested content names. Here, “face” is the CCN terminology for interface. The FIB indicates the next hop for forwarding the interest packets. The requested content is cached as much as possible in the CCN nodes during backward delivery so that the content can be quickly provided to subsequent users. This is completely different from the way a traditional IP router works. Usually, a traditional IP router clears its cache after forwarding.

A maximum matching query is executed in the CS, PIT, and FIB in turn when an interest packet arrives at the node. If the requested content is found in the CS, the content is sent to the requester through the arrival face of the interest packet. If the requested content is not found in the CS, the PIT is queried. If the PIT contains the related content entry, this indicates that the content request has already been received and is awaiting a response, so the node adds the arrival face to the content's entry; otherwise, the FIB is queried further. If the FIB has the related content entry, the interest packet is forwarded through the face indicated by the FIB. If no match is found in the FIB, the interest packet is dropped.
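The CS → PIT → FIB lookup sequence above can be sketched in Python as follows. The class, dictionaries, and return tuples are our own simplifications for illustration; real CCN forwarding engines use more elaborate data structures:

```python
# Minimal sketch of CCN interest processing: query CS, then PIT, then FIB.
# Structures are simplified assumptions, not the reference implementation.

class CCNNode:
    def __init__(self):
        self.cs = {}    # content name -> cached data
        self.pit = {}   # content name -> set of arrival faces
        self.fib = {}   # name prefix -> outgoing face

    def longest_prefix_face(self, name):
        # Longest-prefix match against FIB entries.
        best, best_len = None, -1
        for prefix, face in self.fib.items():
            if name.startswith(prefix) and len(prefix) > best_len:
                best, best_len = face, len(prefix)
        return best

    def on_interest(self, name, arrival_face):
        if name in self.cs:                    # 1) CS hit: answer directly
            return ("data", self.cs[name], arrival_face)
        if name in self.pit:                   # 2) PIT hit: aggregate face
            self.pit[name].add(arrival_face)
            return ("aggregated", None, None)
        face = self.longest_prefix_face(name)  # 3) FIB lookup
        if face is not None:
            self.pit[name] = {arrival_face}
            return ("forwarded", None, face)
        return ("dropped", None, None)         # 4) no match: drop

node = CCNNode()
node.fib["/njupt.edu.cn/Video"] = 5
print(node.on_interest("/njupt.edu.cn/Video/Stochastic_Process/Lecture_1.mpeg", 2))
# ('forwarded', None, 5)
```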

The structure of a CCN node is shown in Fig. 1. Suppose an interest packet with content name /njupt.edu.cn/Video/Computer_Networks/Lecture_1.mpeg arrives. The content can be fetched from the CS and sent back to the requester. If an interest packet with content name /njupt.edu.cn/Video/Signal_System/Lecture_1.mpeg arrives from face 2, the PIT has to be queried because the CS does not have this content. The PIT already contains a request entry for this content, and the face number is 3. Therefore, face 2 is added to the face field of that entry. If an interest packet with content name /njupt.edu.cn/Video/Stochastic_Process/Lecture_1.mpeg arrives, and the content name is contained in neither the CS nor the PIT, then the FIB is queried. This indicates that face 5 is the correct forwarding face. Then, the interest packet is forwarded to the next node through face 5, and the requested content name and interest-packet arrival face are added to the PIT.

For the sake of network scalability, the FIB has a mechanism to aggregate multiple content prefixes into one entry. This reduces the size of the FIB. However, this mechanism cannot be used in the PIT. Reducing the size of the PIT is an important area of research in CCN because the PIT becomes excessively large for big-data applications. To scale the CS storage, an appropriate cache replacement policy should be used to free the cache so that it can hold newly obtained content that is of higher importance. Content is divided into chunks in the delivery process, and the CS stores and replaces content at chunk granularity.

For comparison, the content delivery processes for an IP network and a CCN network are shown in Fig. 2. With the IP client/server infrastructure, each piece of content delivered has a round trip from the requesting user to the source server. A request that involves a large amount of content generates a huge amount of network traffic that is likely to cause network congestion or server overload. With the CCN infrastructure, the user may obtain the content from the cache of a nearby node. This eliminates traffic further along the line to and from the source server. In Fig. 2, the request from user 1 goes to the source server (as in a conventional IP network). However, the content can be cached in routers R2, R4, R5 and R7 on its way back to user 1. If user 2 subsequently requests the same content, R2 can deliver it because there is a content copy in its cache. Similarly, the content can be cached in R1 on the way to user 2. When user 3 requests the same content, they can simply get it from the neighboring router R1. This only involves one hop. It can be seen that, as a distributed resource-caching and management infrastructure, CCN is a good fit for big data. Through the caching mechanism and content identification, terminal users can obtain content from a network node that is as near to the user as possible. This limits delay, congestion, and performance fluctuations caused by big data.

3 Key Aspects of CCN

Mobility management, routing strategy, and caching policy all affect the performance of a CCN.

3.1 Mobility Management

▲Figure 1. Structure of the CCN node.

For non-real-time services, such as web pages, email, and file sharing, the location of the source server is fixed, so the content name is unchanged. When moving to a new site, the user can request content as before. Although some retransmission delay may be incurred, it hardly affects the services because they are tolerant to delay.

▲Figure 2. a) IP-based network infrastructure and b) CCN-based network infrastructure.

For real-time services, such as internet telephony, instant messaging, and gaming, the situation is more complicated. Usually, both the source and the user are mobile, which means the prefix of the content name may change. CCN routers need to update their routing tables to guarantee correct forwarding. This causes additional time overhead, invalid forwarding entries in the FIB, and a huge FIB. Additional time overhead cannot be tolerated by real-time services, which are interrupted when delay is more than 150 ms. If many of the forwarding entries in the FIB are invalid, incorrect forwarding will result, and network resources will be wasted. If the FIB is huge, the content prefixes (which change as a result of the mobility of the content source) will be difficult to handle. The worst-case scenario occurs when the two parties in the communication move at the same time. As with the mechanism in session initiation protocol (SIP), a fixed registry server can be set up to exchange content names between the two sides. Of course, this involves additional registration time.

Several mobility management schemes have been proposed for CCN. Problems with managing the mobility of both user and content are outlined in [8], but no solution is given. In [9], [10], a proxy-based mobility (PBM) management scheme is proposed. In this scheme, the content requested by the user is cached by the proxy server before the user moves. Efficient use is made of CCN shared content resources to reduce delay during handover and acquisition. The drawback of this scheme is that content request and acquisition still rely on the traditional IP network. In [11], selective neighbor caching (SNC) is proposed for mobility support. A group of optimal neighboring proxy servers is selected to proactively request and store content that the user fails to receive while moving between proxy servers. When selecting a neighbor proxy, a tradeoff is made between the cost of acquiring the content and the cost of caching the content in proxies. SNC can reduce delay to a large extent but does not use CCN shared content resources. In [12], a partial route update scheme is proposed to reduce the negative effects caused by content provider mobility. After the movement path has been determined, routers are chosen, and their content prefixes are updated. The cost of updating routers is reduced. However, there has been no in-depth analysis of the tradeoff between the number of routers updated and the routing miss probability. In [13], a tunnel is set up between the CCN router in the home domain and the CCN router in the foreign domain in order to redirect the interest packet. This provides real-time support when the content source is mobile. The evaluation in [13] shows that the tunnel reduces delay in a network with many nodes.

3.2 Routing Strategy

The semantics and basic processing mechanisms of IP and CCN routing protocols are similar. Hierarchical identifier naming, longest-matching lookup, and the forwarding mechanism of an IP network can all be used in CCN [7].

Interior router protocols, such as open shortest path first (OSPF) and intermediate system to intermediate system (IS-IS), provide a type-length-value (TLV) option that can easily be used by CCN to publish the content prefix (even though the prefix is different for CCN and IP). Interior router protocols also ignore unknown messages, so a CCN node can connect directly to an IP network running IS-IS or OSPF. This does not adversely affect the network.

The situation for external routing is similar to that for an internal gateway protocol (IGP): border gateway protocol (BGP) can also use TLV for interdomain announcement of address information. Different CCNs can be interconnected by announcing content information to each other. This can be done by integrating the content prefixes of the domain into the BGP.

Although existing IP routing strategies can be used in CCN, a specially designed strategy inevitably improves the performance of CCN. Until now, there have been only a few studies on CCN routing strategy. The four strategies reported are: all forwarding, random forwarding, ant colony forwarding, and improved ant colony forwarding.

All forwarding is a basic strategy in which interest packets are forwarded to all the faces matching the prefix in the FIB. The advantage of this strategy is that there is less delay during data packet return. The drawback is the large amount of redundant traffic that results from dispatching multiple interest packet copies. This problem worsens as the network increases in size.

With random forwarding, one face is randomly chosen among the multiple matching faces indicated by the FIB. The chosen face forwards the interest packet. Random forwarding does not lead to any redundant traffic, but it cannot guarantee fast and stable network performance.

Ant colony forwarding is a distributed routing selection strategy in which the ant colony optimization algorithm sends out an exploratory packet to search for an optimal forwarding path [14]. An optimal path has the fewest hops to the source server or the lightest-loaded nodes. There may be a tradeoff between hops and load. The path is optimal in the traditional sense and is formed by the request node and all the optimal faces of the intermediate nodes. Traffic redundancy can be reduced to some extent, but the path may not be optimal for CCN because content caching in routing nodes is not taken into account by ant colony forwarding.

In [15], a neighbor cache explore (NCE) routing strategy is proposed. This strategy is an improvement on that in [14]. The shortest path is found using the ant colony algorithm under the condition of a non-cache network. Then, exploratory packets are sent to the nodes within a particular range (neighbor nodes) to determine their caching status. Finally, a decision is made on whether the requested content can be acquired from the nodes along the shortest path. With NCE, the caching capability of CCN is taken into account, but the shortest path found using the ant colony algorithm may not contain nodes that cache the requested content.

A reasonable CCN routing strategy helps find the node that caches the requested content and is also as near as possible to the requester. Using a traditional routing strategy to find the least-cost path first is not reasonable. Future research is needed into a probabilistic routing strategy for an opportunistic network. In such a routing strategy, the first step involves exploring the path that has the nearest possible node that can provide the content.

3.3 Caching Policy

There are two kinds of CCN caching policy: cache replacement and cache decision. The former involves selectively replacing cached content with newly arrived content. The latter involves making a decision about caching newly arrived content.

3.3.1 Cache Replacement Policy

There are four classes of cache replacement mechanism that can be found in existing caching policies: recency-based, frequency-based, utility-based, and probability-based. Existing CCN caching policies all originate from basic web caching policies.

A recency-based mechanism selects the content to be replaced when the cache is full. Content is selected according to how recently it has been used over a period of time. Least recently used (LRU) [16], [17] is the most common policy in this class, and other policies can be regarded as variations of it.

LRU stems from the web. Whenever there is a hit on a piece of content, the content is moved to the head of the cache so that less recently used content is replaced when the cache is full. The rationale for this is that recently used content will probably be requested again. LRU is easy to implement.
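The move-to-head-on-hit, evict-from-tail behavior described above can be sketched in a few lines of Python using an ordered dictionary. This is a generic LRU illustration, not CCN-specific code:

```python
from collections import OrderedDict

# Minimal LRU sketch: the head of the queue holds the most recently
# used content; the tail is evicted when the cache overflows.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # first item = most recently used

    def get(self, name):
        if name not in self.store:
            return None
        self.store.move_to_end(name, last=False)   # hit: move to head
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name, last=False)   # new content at head
        if len(self.store) > self.capacity:
            self.store.popitem(last=True)          # evict tail (least recent)

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recently used again
cache.put("c", 3)     # cache full: least recently used "b" is evicted
print(list(cache.store))  # ['c', 'a']
```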

Two LRU-derived replacement policies are used in CCN: most recently used (MRU) and most frequently used (MFU) [18], which target the multicache architecture of information-centric networking. Assuming that the cache decision policy is to cache everywhere, the requested content is stored in each node on the content delivery path. A hit in one node implies a high probability that a copy of the same content is being stored in neighbor nodes. The most recently used or most frequently used content should therefore be removed when the cache is full.

The frequency-based replacement mechanism is similar to the recency-based mechanism except that the former uses usage, specifically the number of visits to a piece of content, to determine which content is to be replaced. A side effect of this is called “cache pollution.” If a piece of content was popular in the past, it will stay in the cache for a long time, even if it is not used any more. This will remain the case until new content becomes more popular and more often visited. The most frequently used content prevents newer content from entering the cache. Several mechanisms, including an aging mechanism, have been proposed to solve this problem, but they all increase complexity.

LFU is used in the web [19], [20]. When new content arrives, the least-visited content is replaced. Most frequency-based replacement approaches have their foundations in the web and are not designed for the dynamic interests of CCN users. In [21], a new policy called recent usage frequency (RUF) is proposed for CCN. Unlike traditional web-based policies that count content visits at the output face, RUF counts visiting frequency using interest packets arriving at the node. The benefit of changing the checkpoint is that changes in user interests can be instantly detected, and the caching policy can be promptly adapted. Content with low instant popularity is removed when the cache is full.

The utility-based replacement mechanism uses a utility value, for example, content size or content lifetime, as an index to decide which content should be replaced. The choice of utility parameter and the calculation of the utility value affect the performance and complexity of the mechanism. An example of a utility-based replacement mechanism is the size-based policy used in the web [22]. Document size is taken as the utility parameter on the basis that a user tends to request small-sized content. With this rationale, larger content should be removed first. For content of similar size, an LRU policy is used.

An age-based cooperative (ABC) policy has been proposed for CCN [23]. The age of content is taken as the utility parameter. The distance of the content from the requester and the popularity of the content allow a node to calculate a unique age for each piece of content in its cache. This indicates the lifetime of the content. Only when a timeout event occurs is the corresponding content removed; otherwise, it is retained in the cache.

A probability-based replacement mechanism reduces the complexity of a policy without sacrificing too much performance. Evaluating performance, however, is a little complex because performance differs in different network environments. Uniform random replacement (UNIF) is a probability-based policy used in the web that randomly selects content to be replaced with uniform distribution.

Randomized replacement (RR) policy is one attempt at a probability-based replacement policy for CCN [24]. N pieces of content are selected randomly from the cache, and the least useful piece of content is removed. The usefulness of content can be determined by any utility function. The remaining M (M < N) pieces of content are retained in the cache. The next time round, N − M new samples are drawn from the cache, and the replacement mechanism is executed again.
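The sample-evict-carry-over cycle of RR can be sketched as follows. The utility function here (a hit count) is an assumption for illustration; [24] allows any utility function:

```python
import random

# Sketch of the RR idea: sample N chunks, evict the least useful one,
# and carry the surviving M = N - 1 samples into the next round.
def rr_replace(cache, carried_sample, n, utility):
    candidates = [c for c in cache if c not in carried_sample]
    fresh = random.sample(candidates, n - len(carried_sample))
    sample = carried_sample + fresh
    victim = min(sample, key=utility)   # least useful chunk is evicted
    sample.remove(victim)
    cache.remove(victim)
    return sample                       # survivors are reused next round

random.seed(7)
cache = [f"chunk{i}" for i in range(10)]
hits = {c: i for i, c in enumerate(cache)}   # assumed utility: hit count
survivors = rr_replace(cache, [], 4, hits.get)
print(len(cache), len(survivors))  # 9 3
```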

3.3.2 Cache Decision Policy

Cache decision policy is used to determine whether the arriving content should actually be stored in the cache. Less attention has been given to this type of policy than to cache replacement policy. Caching everywhere, also called leaving copy everywhere (LCE), is the default policy for CCN. However, LCE is not a good policy because it increases redundancy and increases the probability of misses. As with replacement policies, current CCN decision policies derive from existing web policies [25]. These web decision policies include LCE, leave copy down (LCD), move copy down (MCD), and leave copy probability (LCP).

LCE is a commonly used decision policy in multilevel cache architectures. All the middle nodes on the content-delivery path cache the content copy that is hit in the level i node.

With LCD, the content copy that is hit in the level i node is cached only in the downstream node (i.e., the level i−1 node). The content copy eventually goes down from level L to level 1 and is cached there after at least L−1 requests.

MCD is an improvement on LCD. The copy hit in the level i node is moved to the downstream node (the level i−1 node), and the copy in the level i node is deleted.

With LCP, the content copy hit in the level i node is cached in the nodes on the content-delivery path with probability p.
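The three web-derived decision policies can be sketched side by side on a tandem path. The list-of-sets representation and 1-based level convention are our own simplifications:

```python
import random

# Sketch of the web-derived cache decision policies on a tandem path.
# caches[0] is level 1 (nearest the user); a hit occurs at level i (1-based).

def lcd(caches, i, chunk):
    # Leave Copy Down: copy only into the node one level below the hit.
    if i > 1:
        caches[i - 2].add(chunk)

def mcd(caches, i, chunk):
    # Move Copy Down: as LCD, but the copy at the hit level is deleted.
    if i > 1:
        caches[i - 2].add(chunk)
        caches[i - 1].discard(chunk)

def lcp(caches, i, chunk, p):
    # Leave Copy with Probability: each downstream node caches with prob. p.
    for level in range(i - 1):
        if random.random() < p:
            caches[level].add(chunk)

caches = [set(), set(), {"x"}]   # 3-level path; chunk "x" cached at level 3
lcd(caches, 3, "x")
print(caches)  # [set(), {'x'}, {'x'}]
```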

Existing CCN decision policies include WAVE, less-for-more, and ProbCache. WAVE is an example of popularity-based and collaborative in-network caching for CCN [26]. With WAVE, content is divided into chunks (Fig. 3). If a user requests chunk x and gets a hit in node i, chunk x is sent back to the requester and, at the same time, is stored in the next node (node i−1). If a request for the same content gets another hit in node i−1, then chunk x is stored in node i−2, and chunks x+1 and x+2 are stored in node i−1. The number of stored chunks increases exponentially with an increase in the number of requests for chunk x. With this policy, the relevance of requests between chunks is taken into consideration. When a user requests a chunk of content, there is a high probability they will request the rest of the chunks. Therefore, pushing the rest of the chunks to the node nearest the user reduces access delay.

The less-for-more policy is an improvement on LCE proposed in [27]. Storing the content in specific nodes on the backward path achieves the goal of gaining maximum benefit with minimum copy storage.

Assuming there are M paths from node i to node j, and node x is on m of these paths, m/M is the importance of node x. When there is a backward delivery of content, the content copy is stored in the selected node according to the node's importance. In Fig. 4, v3 is the key node on the delivery path. When v3 stores the copy of the content, clients A to C can acquire the content via the path with the fewest hops. Thus, nodes v1 and v2 need not store a copy and remain free to store other content. Both hit probability and network utilization are high for CCN, and many different types of content copies are provided by the limited nodes.

▲Figure 3. WAVE CCN decision policy topology.

▲Figure 4. An example less-for-more topology.

In [28], a probabilistic algorithm for distributed content caching is proposed. This algorithm, called ProbCache, counts the number of nodes through which an interest packet and data packet have passed. It saves this number in the header of the packet in order to evaluate the capability of the path to cache content. The capability indicator, based on path length and multiplexed content flow, is a probabilistic value that can be used to decide whether content needs to be stored or not.
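A very simplified sketch of this idea follows: hop counters carried in the packets let each node estimate where it sits on the delivery path, and nodes nearer the requester cache with higher probability. The real weighting in [28] is more elaborate and also factors in cache capacity along the path; this function is only our illustration of the core intuition:

```python
# Simplified ProbCache-style sketch (assumption, not the formula in [28]):
# the caching probability grows as the data packet approaches the requester.
def cache_probability(hops_travelled_back, total_path_length):
    # hops_travelled_back: hops the data packet has covered so far;
    # total_path_length: hops the interest packet counted on its way.
    if total_path_length == 0:
        return 0.0
    return hops_travelled_back / total_path_length

# On a 4-hop path, the node next to the requester caches most eagerly.
print([round(cache_probability(h, 4), 2) for h in range(1, 5)])
# [0.25, 0.5, 0.75, 1.0]
```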

4 Improved CCN Caching Policies

Here, we describe two caching policies specifically for CCN. The first policy is a replacement policy that reduces the probability of missed requests for low-popularity content. The policy balances the stored proportions of different classes of content in the cache. The second policy is a decision policy that performs better than traditional cache decision policies because it extends content survival time in the network.

4.1 Replacement Policy Based on Content Popularity

Requests for content always follow a certain popularity distribution. To balance the load in a CCN, a good replacement policy needs to balance the proportions of content (with different popularities) cached in network nodes. Unfortunately, all the existing CCN replacement policies previously mentioned in this paper ignore the issue of content popularity, and this leads to relatively poor performance.

In [24], a replacement policy based on popularity preference is proposed. Two chunks are selected randomly, and the more popular chunk is replaced. The goal of this policy is to cache less-popular content longer and guarantee a uniform distribution of content with different popularities. A drawback of this policy is that less popular chunks may not be replaced for a long period of time.

We propose a replacement policy based on content popularity probability (PP) [29]. PP policy is suitable for highly concentrated content requests. It can improve performance for most content by sacrificing a little performance for the most popular content. Assuming that each replacement removes the chunk at the tail of the cache queue, a chunk's position indicates its replacement priority. If a new chunk is to be cached, PP decides where to insert it according to its popularity. The less popular a chunk is, the nearer to the head of the queue it is inserted. In this way, PP policy can extend the cache time of less popular content and thus reduce the probability that a request for this content will be missed. It solves the problems faced by the policy in [24].

4.1.1 Description of Proposed Policy

Assume the cache comprises C chunks. When a chunk with class-k popularity is hit, it is inserted at the i th position with probability pk(i), given by (1), where K is the total number of content popularity classes, and a and β are probability adjustment factors. The probability adjustment factor a is given by (2).

Existing chunks in the i th (or later) positions in the cache shift one place backward, and the chunk at the queue tail is removed if the cache is full. In a physical router, a chunk is shifted simply by modifying the pointer that indicates the position of the corresponding chunk. The chunk itself is not moved, so time overhead is reduced. In (1) and (2), the recommended value of β is in the range [1, 2]. As β increases, low-popularity content at the front of the cache queue is more likely to be replaced.
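The popularity-aware insertion idea can be sketched as follows. The actual insertion probability pk(i) from [29] is not reproduced here; as a stand-in, this sketch maps a chunk's class rank linearly to a queue position, which preserves the key property that less popular chunks land nearer the head and so survive longer:

```python
# Sketch of the PP insertion idea (stand-in for the p_k(i) of [29]):
# class 1 = most popular -> inserted near the tail;
# class K = least popular -> inserted near the head.
def pp_insert(queue, chunk, k, num_classes, capacity):
    if len(queue) >= capacity:
        queue.pop()              # evict the chunk at the tail first
    pos = round((1 - k / num_classes) * len(queue))
    queue.insert(pos, chunk)
    return queue

q = ["a", "b", "c", "d"]
pp_insert(q, "x", k=400, num_classes=400, capacity=4)
print(q)  # ['x', 'a', 'b', 'c'] -- least popular class goes to the head
```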

4.1.2 Evaluation of the Performance of the Proposed Policy

We compared PP and LRU for user-generated video, which is typical big data. We used Matlab to run a simulation on request miss probability (RMP). Fig. 5 shows the network topology.

The simulation parameters were taken from [30]. The network is a triple-level tandem architecture. The CCN provided a total of M content files, where M = 40,000. These files were divided into K classes, where K = 400. Each class had m content files, where m = 100, and each file was divided into 10 kB chunks. Requests for content in class k are generated according to a Poisson process with intensity given by (3), λk = λ1·qk,

where λ1 is the request rate at the first level (λ1 = 40 pieces of content per second in the simulation), and qk follows the Zipf distribution, which describes the popularity of arrival requests for content in class k [30]. The Zipf distribution is given by qk = k^(−α) / Σ(j=1..K) j^(−α),

where α is the concentration of content popularity. A larger α means the requests are more concentrated in the first several classes of content. Fig. 6 shows the performance of PP and LRU in the first-level CCN node.
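The request model above can be sketched as follows, assuming the standard Zipf form for qk and the per-class rate λk = λ1·qk (our reading of (3)):

```python
# Sketch of the simulated request model: class-k request intensity is
# the first-level rate lambda1 weighted by a Zipf popularity q_k.
def zipf_weights(num_classes, alpha):
    raw = [k ** (-alpha) for k in range(1, num_classes + 1)]
    total = sum(raw)
    return [w / total for w in raw]   # normalized so the q_k sum to 1

K, alpha, lam1 = 400, 1.2, 40.0       # parameters from the simulation
q = zipf_weights(K, alpha)
rates = [lam1 * qk for qk in q]       # lambda_k, in requests per second
print(round(sum(q), 6))               # 1.0 -- weights are normalized
print(rates[0] > rates[1] > rates[2]) # True -- popularity falls with k
```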

The RMP of content increases as k increases. PP and LRU perform well only for the first several classes (i.e., those comprising content with a small k, for example, k < 2). As α increases, LRU performs well only for the most popular classes but performs poorly for classes with class id k > 2. An increase in α means there are more concentrated requests for the first several classes, but LRU cannot adapt to variations in content popularity. PP increases cache hits because it adapts to the content popularity distribution. When α = 1.2, this improvement is slight, but as α increases, the improvement is greater. This means that PP is more suited to a network with highly concentrated content requests. When α = 2, the RMP for the k = 1 class is worse with PP than with LRU, while the RMP for k ≥ 2 classes shows definite improvement. This improvement is caused by the decrease in hits on the k = 1 class. PP sacrifices request hits on popular classes and slightly increases the hit distance of k = 1. It does this in order to improve performance for the other classes and shorten their hit distances [29].

▲Figure 6. Node performance of PP and LRU.

▲Figure 7. a) Source server hit probability for LCE, LCP and PCECU under UGC. b) Average hit distance for LCE, LCP and PCECU under UGC.

4.2 Probability Cache with Evicted Copy-Up Decision Policy

LCD and MCD reduce the number of copies in the cache and extend the time they are cached. However, in an L-level tandem network, a copy can be moved to the level 1 node only after at least L−1 requests. This makes it hard for users to access the content from a nearby node. LCP is potentially a better policy because it increases the amount of popular content cached in nearby nodes and reduces the hit distance. However, storage efficiency is reduced, and the caching time of the requested content is decreased because LCP leads to greater redundancy.

To tackle the above problems, we propose a policy called probability cache with evicted copy up (PCECU). This policy is designed to keep the copies as long as possible in order to increase the hit probability. In the meantime, it also increases the amount of popular content cached near the user in order to reduce hop count and delay during acquisition.

4.2.1 Description of Proposed Policy

If a request for a chunk receives a hit at level i, the chunk is moved to the level 1 node with probability p and is deleted from the level i node. (The chunk is not deleted if node i is the original server.) The chunk remains cached at level i with probability 1−p. Except for the level 1 node, no node on the backward delivery path caches the chunk while delivering it to level 1. If the content chunk cached in the level i node is eliminated from the cache to make way for new chunks, this chunk is moved to its upstream node (i.e., the level i+1 node). The chunk is then cached at the head of the level i+1 node's queue.
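The two PCECU rules above (probabilistic move-to-level-1 on a hit, copy-up on eviction) can be sketched on a tandem path. Representing caches as ordered lists with index 0 as the head is our own simplification:

```python
import random

# Sketch of PCECU on a tandem path: a hit at level i moves the chunk to
# level 1 with probability p (deleting it at level i unless i is the
# source), and an evicted chunk is pushed up to the head of level i+1.
# Caches are ordered lists (head = index 0); levels are 1-based.

def pcecu_hit(caches, i, chunk, p, source_level):
    if i == 1:
        return                                # already at level 1
    if random.random() < p:
        caches[0].insert(0, chunk)            # copy moved to level 1
        if i != source_level:
            caches[i - 1].remove(chunk)       # deleted at the hit level

def pcecu_evict(caches, i, chunk):
    # Evicted copy goes up: cache it at the head of the upstream node.
    caches[i - 1].remove(chunk)
    if i < len(caches):
        caches[i].insert(0, chunk)

random.seed(0)
caches = [[], [], ["x"]]                      # chunk "x" cached at level 3
pcecu_hit(caches, 3, "x", p=1.0, source_level=4)
print(caches)  # [['x'], [], []]
```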

4.2.2 Evaluation of the Performance of the Proposed Policy

In our Matlab simulation, we delivered UGC in a 3-level tandem CCN (Fig. 5). The performance parameters were the source server hit probability (SSHP) for the k th content chunk and the average hit distance (AHD). The CCN provided a total of M content files, where M = 40,000. These files were divided into K classes, where K = 400. Each class comprised m content files, where m = 100, and each file was divided into 10 kB chunks. Requests for content in class k were generated according to a Poisson process with intensity given by (3). The definitions of λ1 and qk were the same as in section 4.1. In this simulation, λ1 = 40 pieces of content per second, and 5×10^7 requests were randomly generated at the first level.

Fig. 7 shows that LCE always performs the worst, and LCP performs the second worst but improves as p decreases. PCECU clearly improves CCN performance because it allows probabilistic caching and extends caching time.

5 Conclusion

CCN is a promising network infrastructure for the big-data era. It is content-centric, not host-centric like the traditional IP network. A request for content can get a hit in a nearby node and does not need to travel far away to the source server. This alleviates network congestion and significantly reduces delay.

At present, key aspects of CCN being studied include caching policy, naming mechanism, content retrieval, routing strategy, mobility management, and security. Much attention has been paid to CCN at home and abroad. CCN could potentially revolutionize the internet by providing full-scale network support for big data.
