
A New Encryption-Then-Compression Scheme on Gray Images Using the Markov Random Field

Computers, Materials & Continua, 2018, Issue 7

Chuntao Wang, Yang Feng, Tianzheng Li, Hao Xie and Goo-Rak Kwon

Abstract: Compressing encrypted images remains a challenge. As illustrated in our previous work on the compression of encrypted binary images, it is preferable to exploit statistical characteristics at the receiver. Along this line, we characterize the statistical correlations between adjacent bit-planes of a gray image with the Markov random field (MRF), represent them with a factor graph, and integrate the constructed MRF factor graph into that for binary image reconstruction, which gives rise to a joint factor graph for gray image reconstruction (JFGIR). By exploiting the JFGIR at the receiver to facilitate the reconstruction of the original bit-planes and theoretically deriving the sum-product algorithm (SPA) adapted to the JFGIR, a novel MRF-based encryption-then-compression (ETC) scheme is proposed. After preferable universal parameters of the MRF between adjacent bit-planes are determined numerically, extensive experimental simulations show that the proposed scheme successfully compresses the first 3 most significant bit-planes (MSBs) for most test gray images and the first 4 MSBs for those with a large portion of smooth area. The proposed scheme thus achieves a significant improvement over the state-of-the-art scheme leveraging the 2-D Markov source model at the receiver and is comparable or somewhat inferior to that using the resolution-progressive recovery strategy.

Keywords: Encryption-then-compression, compressing encrypted image, Markov random field, compression efficiency, factor graph.

1 Introduction

Compressing encrypted signals is a technology that addresses the encryption-then-compression (ETC) problem in service-oriented scenarios such as distributed processing and cloud computing [Johnson, Ishwar and Prabhakaran (2004); Erkin, Piva, Katzenbeisser et al. (2007)]. In these scenarios, the content owner merely encrypts its signal and then sends it to the network or cloud service provider because of its limited computational resources. The service provider then compresses the encrypted signals, without access to the encryption key, to save bandwidth and storage space. The receiver finally performs the successive decompression and decryption to reconstruct the original signal.

As the encryption prior to the compression masks the original signal, one may intuitively believe that it would be intractable to compress the encrypted signal. By taking the encryption key as side information of the encrypted signal and further formulating the ETC problem as distributed coding with side information at the decoder, however, Johnson et al. [Johnson, Ishwar, Prabhakaran et al. (2004)] demonstrated via information theory that the ETC system sacrifices neither the compression efficiency nor the security achieved in the conventional compression-then-encryption (CTE) scenario, which compresses the original signal before encryption. According to [Johnson, Ishwar, Prabhakaran et al. (2004)], by taking the syndrome of a channel code as the compressed sequence, channel codes such as the low-density parity-check (LDPC) code can be exploited to compress the encrypted signal, and the DISCUS-style Slepian-Wolf decoder [Pradhan and Ramchandran (2003)] can then be used to recover the original signal. To illustrate this, Johnson et al. [Johnson, Ishwar, Prabhakaran et al. (2004)] also proposed two practical ETC schemes, which well demonstrate the feasibility and effectiveness of the ETC system.

From then on, many ETC schemes [Schonberg, Draper and Ramchandran (2005, 2006); Schonberg, Draper, Yeo et al. (2008); Lazzeretti and Barni (2008); Kumar and Makur (2008); Zhou, An, Zhai et al. (2014); Liu, Zeng, Dong et al. (2010); Wang, Xiao, Peng et al. (2018)] have been developed. These schemes compress the cipher-stream-encrypted signal by generating the syndrome of an LDPC code and perfectly reconstruct the original signal via joint LDPC decoding and decryption. A brief introduction to them is presented in the next section.

In contrast to these lossless compression schemes, a number of lossy compression approaches [Kumar and Makur (2009); Zhang (2011); Song, Lin and Shen (2013); Zhang (2015); Zhang, Ren, Feng et al. (2011); Zhang, Feng, Ren et al. (2012); Zhang, Sun, Shen et al. (2013); Zhang, Ren, Shen et al. (2014); Wang and Ni (2015); Wang, Ni and Huang (2015); Kumar and Vaish (2017); Wang, Ni, Zhang et al. (2018); Kang, Peng, Xu et al. (2013); Hu, Li and Yang (2014); Zhou, Liu, An et al. (2014)] have also been developed to improve the compression efficiency at the cost of tolerable quality loss. The approaches of Kumar et al. [Kumar and Makur (2009); Zhang, Ren, Feng et al. (2011); Song, Lin and Shen (2013); Zhang, Wong, Zhang et al. (2015)] use the compressive sensing (CS) technique [Donoho (2006)] to compress the stream-cipher-encrypted data and modify the basis pursuit (BP) algorithm to reconstruct the original signal. In an alternative way, the schemes of Zhang et al. [Zhang (2011); Zhang, Feng, Ren et al. (2012); Zhang, Sun, Shen et al. (2013); Zhang, Ren, Shen et al. (2014); Wang and Ni (2015); Wang, Ni and Huang (2015); Kumar and Vaish (2017); Wang, Ni, Zhang et al. (2018)] condense the stream-ciphered or permutation-ciphered signal mainly using a scalar quantizer, while the methods of [Kang, Peng, Xu et al. (2013); Hu, Li and Yang (2014); Zhou, Liu, An et al. (2014)] compress the encrypted signal via uniform down-sampling.

Similar to conventional compression approaches, ETC schemes also exploit the redundancy of the original signal to achieve good compression efficiency. For instance, the ETC methods of Lazzeretti et al. [Lazzeretti and Barni (2008); Kumar and Makur (2008); Zhou, Au, Zhai et al. (2014)] and [Wang, Ni, Zhang et al. (2018)] leverage the redundancy by generating prediction errors before encryption. The approaches of Liu et al. [Liu, Zeng, Dong et al. (2010); Zhang, Ren, Feng et al. (2011); Zhang, Sun, Shen et al. (2013); Wang and Ni (2015); Wang, Ni and Huang (2015); Kumar and Vaish (2017)] exploit the redundancy by optimizing the compressor with statistical characteristics of the original signal that are intentionally revealed by the content owner. These two categories, however, either remarkably increase the computational burden at the content owner or considerably degrade the security by disclosing statistical distributions to the service provider.

Given that the receiver has both the encryption key and feasible computational resources, it is preferable to make full use of the statistical correlations of the original signal at the receiver, as analyzed in our recent work [Wang, Ni, Zhang et al. (2018)]. To illustrate this, the work of Wang et al. [Wang, Ni, Zhang et al. (2018)] uses the Markov random field (MRF) to characterize the spatial statistical characteristics of a binary image and seamlessly integrates it with the LDPC decoding and decryption via the factor graph. By leveraging the MRF at the receiver side, the work of Wang et al. [Wang, Ni, Zhang et al. (2018)] achieves a significant improvement in compression efficiency over the method of Schonberg et al. [Schonberg, Draper and Ramchandran (2008)] that uses the 2-dimensional (2-D) Markov source model at the receiver.

In light of this, in this paper we extend our previous work [Wang, Ni, Zhang et al. (2018)] to gray images. Specifically, since each bit-plane of a gray image can be considered as a binary image, we apply the algorithm in Wang et al. [Wang, Ni, Zhang et al. (2018)] to each bit-plane of a gray image to achieve lossless compression of each bit-plane. Observing that adjacent bit-planes resemble each other, we further exploit the MRF to characterize the statistical correlations between adjacent bit-planes and incorporate it into the reconstruction of the corresponding bit-planes, aiming to achieve higher compression efficiency for gray images. By representing the MRF between adjacent bit-planes with a factor graph and further incorporating it into the joint factor graph for binary image reconstruction, we construct a joint factor graph for gray image reconstruction (JFGIR) and then theoretically derive the sum-product algorithm (SPA) adapted to the JFGIR. Together with the stream-cipher-based encryption, LDPC-based compression, and JFGIR-involved reconstruction, this gives rise to an MRF-based ETC scheme for gray images. Experimental results show that the proposed scheme achieves compression efficiency better than or comparable to the state-of-the-art schemes exploiting statistical correlations at the receiver.

The contribution of this work is two-fold: i) exploiting the MRF to characterize the statistical correlations between two adjacent bit-planes of a gray image; and ii) constructing a JFGIR to seamlessly integrate LDPC decoding, decryption, and the MRF within a bit-plane and between adjacent bit-planes, and theoretically deriving the SPA adapted to the constructed JFGIR.

The rest of the paper is organized as follows. Section 2 briefly reviews ETC schemes that perform lossless compression on encrypted images. Section 3 presents the construction of the JFGIR and the theoretical derivation of the SPA for the JFGIR. The proposed scheme for gray images is introduced in Section 4, and experimental results and analysis are given in Section 5. Section 6 finally draws the conclusion.

2 Prior arts

As this paper focuses on the lossless compression of encrypted images, in this section we mainly review ETC schemes that losslessly compress encrypted images. Brief introductions to these schemes are presented below.

Based on Johnson et al.'s work [Johnson, Ishwar, Ramchandran et al. (2004)], Schonberg et al. [Schonberg, Draper and Ramchandran (2005, 2006); Schonberg, Draper, Yeo et al. (2008)] further integrated the Markov model into image reconstruction. These schemes well exploit the statistical correlations between adjacent image pixels and thus significantly improve the compression efficiency.

A number of ETC approaches that generate prediction errors before encryption have also been proposed in the literature [Lazzeretti and Barni (2008); Kumar and Makur (2008); Zhou, Liu, Au et al. (2014)]. In Lazzeretti et al. [Lazzeretti and Barni (2008)], the authors extended Johnson et al.'s scheme [Johnson, Ishwar, Ramchandran et al. (2004)] to gray and color images by leveraging the spatial, cross-plane, and cross-band correlations before stream-cipher-based encryption, achieving good compression efficiency. By imposing the LDPC-based compression on encrypted prediction errors rather than directly on image pixels, Kumar and Makur obtained higher compression efficiency [Kumar and Makur (2008)]. Zhou et al. [Zhou, Au, Zhai et al. (2014)] obtained nearly the same compression efficiency as conventional compression schemes that take original, unencrypted images as input, through prediction error clustering and random permutation. In an alternative way, Liu et al. [Liu, Zeng, Au et al. (2010)] compressed the encrypted gray image in a progressive manner and exploited the low-resolution sub-image to learn source statistics for the high-resolution ones. Compared to the practical lossless ETC scheme in Johnson et al. [Johnson, Ishwar and Ramchandran (2004)], the work of Liu et al. [Liu, Zeng, Au et al. (2010)] achieves better compression efficiency.

Recently, Wang et al. [Wang, Ni, Zhang et al. (2018)] developed another ETC scheme using the MRF. They deployed the MRF [Li (1995)] to characterize the spatial statistical characteristics of a binary image, represented the MRF with a factor graph [Kschischang, Frey and Loeliger (2001)], elaborately integrated the factor graph for the MRF with those for the decryption and LDPC decoding to construct a joint factor graph for binary image reconstruction, and theoretically derived the SPA for the constructed joint factor graph. This MRF-based scheme achieves a significant improvement over the ETC approach using the 2-D Markov source model [Schonberg, Draper and Ramchandran (2006)].

3 Design of JFGIR and derivation of SPA

3.1 Characterization of statistical correlations between adjacent bitplanes

Let I(x, y) be an 8-bit image of size m×n. Then its kth (k = 1, ..., 8) bit-plane, say B_k(x, y), is obtained as:

B_k(x, y) = ⌊I(x, y) / 2^(k−1)⌋ mod 2.  (1)

Figure 1: Illustration of the 8 bit-planes of image “Man”, where the bit-planes from left to right are B_8(x, y), B_7(x, y), ..., and B_1(x, y), respectively

Fig. 1 illustrates the 8 bit-planes of the gray image “Man”. It is observed that any two adjacent bit-planes, B_k(x, y) and B_{k−1}(x, y) (k = 8, ..., 2), resemble each other. That is, if B_k(x, y) is equal to b (b = 0, 1), then B_{k−1}(x, y) = b holds with high probability. Therefore, there exist statistical correlations between B_k(x, y) and B_{k−1}(x, y). Similar results can also be found in other gray images.
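As a concrete illustration (not part of the original paper), the short Python sketch below implements the bit-plane division of Eq. (1) and measures how often two adjacent bit-planes agree; the random array only stands in for a real gray image such as “Man”, for which the agreement of the upper bit-planes is well above 0.5.

```python
import numpy as np

def bitplanes(img: np.ndarray) -> np.ndarray:
    """Bit-plane division of Eq. (1): B_k(x, y) = floor(I(x, y) / 2^(k-1)) mod 2.

    Returns an array of shape (8, m, n) whose slice k-1 holds B_k.
    """
    return np.stack([(img >> (k - 1)) & 1 for k in range(1, 9)], axis=0)

def adjacent_agreement(planes: np.ndarray, k: int) -> float:
    """Fraction of positions where B_k and B_{k-1} carry the same bit value."""
    return float(np.mean(planes[k - 1] == planes[k - 2]))

# A random array stands in for a real gray image such as "Man".
img = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
planes = bitplanes(img)
print([round(adjacent_agreement(planes, k), 3) for k in range(8, 1, -1)])
```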

As the MRF well characterizes the spatial statistical features of binary images, as demonstrated in Wang et al. [Wang, Ni, Zhang et al. (2018)], we deploy the MRF [Li (1995)] to model the statistical correlations between B_k(x, y) and B_{k−1}(x, y). Since the MRF within a bit-plane has, according to Wang et al. [Wang, Ni, Zhang et al. (2018)], already taken into account the spatial statistical correlations between pixels in the neighborhood, we mainly characterize the statistical correlations between the bits B_k(x, y) and B_{k−1}(x, y) at the same coordinate rather than modeling those between B_k(x, y) and B_{k−1}(N(x), N(y)), where N(x) denotes a set containing x and its neighborhood. Thus, the statistical correlations between the bits B_k(x, y) and B_{k−1}(x, y) can be characterized with the MRF as:

p(F_{k−1}(x, y) | F_k(x, y)) = (1/Z) · exp(−U(F_k) / T),  (2)

where p(·) is a probability function, and F_k(x, y) denotes a random variable for bit B_k(x, y) that takes values in the state space Φ = {0, 1}. The T in Eq. (2) is a temperature constant and Z is a normalizing constant defined as:

Z = Σ_{F_k ∈ Ω} exp(−U(F_k) / T),  (3)

where Ω = {F_k = (F_k(1,1), ..., F_k(x, y), ..., F_k(m, n)) | F_k(x, y) ∈ Φ} is the configuration set including all possible realizations of F_k. The U(F_k) in Eq. (3) is an energy function defined as:

U(F_k) = Σ_{c ∈ C} V_c(F_k),  (4)

where C is the set of cliques formed by the neighborhood system, and V_c(·) is a potential function defined on a given clique c (c ∈ C); in our case, the bits B_k(x, y) and B_{k−1}(x, y) form a clique. Eq. (2) calculates the probability of F_{k−1}(x, y) given F_k(x, y), and p(F_k(x, y) | F_{k−1}(x, y)) can be computed similarly.
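To make the cross-plane model concrete, the following sketch evaluates a per-bit form of Eqs. (2)-(4), normalizing over the two states of a single bit rather than over the full configuration set Ω; the potential V_example is a hypothetical stand-in introduced here for illustration, not the discontinuity-adaptive potential of Eq. (10) used in the experiments.

```python
import math

def conditional_prob(f_k_bit: int, T: float, V) -> dict:
    """Per-bit version of Eqs. (2)-(4): p(F_{k-1}(x, y) = b | F_k(x, y)) for b in {0, 1}.

    V(b1, b2) is the clique potential on the cross-plane clique and T is the
    temperature constant; the normalizer here runs over the two states of the
    single bit F_{k-1}(x, y) instead of the whole configuration set Omega.
    """
    unnorm = {b: math.exp(-V(f_k_bit, b) / T) for b in (0, 1)}
    Z = sum(unnorm.values())
    return {b: u / Z for b, u in unnorm.items()}

# Hypothetical potential that favours equal bits in the cross-plane clique.
V_example = lambda b1, b2: 0.0 if b1 == b2 else 1.0
print(conditional_prob(1, T=0.05, V=V_example))
```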

3.2 Design of JFGIR

To seamlessly integrate the MRF between adjacent bit-planes into the bit-plane reconstruction using the factor graph, we further represent the MRF between adjacent bit-planes with a factor graph [Kschischang, Frey and Loeliger (2001)]. By denoting F_k(x, y) and F_{k−1}(x, y) with variable nodes (VNs) and characterizing the statistical correlation in Eq. (2) with a factor node (FN), we construct a factor graph for the MRF between adjacent bit-planes, as shown in Fig. 2, where circles and squares stand for VNs and FNs, respectively.

Figure 2: Illustration of the factor graph for the MRF between adjacent bit-planes, where B_k and B_{k−1} denote two adjacent bit-planes and the factor node stands for the statistical correlation between F_k(x, y) and F_{k−1}(x, y)

According to our previous work [Wang, Ni, Zhang et al. (2018)], the factor graph for the reconstruction of each bit-plane can be constructed as in Fig. 3, where the bit-plane index k is omitted for simplicity. As shown in Fig. 3, the factor graphs in the boxes with solid lines, dotted lines, and dot-and-dash lines are those for the MRF within a bit-plane, the decryption, and the LDPC-based decompression, respectively. S_j (j = 1, ..., q) are the LDPC syndrome bits, which are taken as the encrypted and compressed bit sequence; Y_i (i = 1, ..., mn) is the decompressed but encrypted sequence; K_i is the encryption key sequence; F_i (i = (y−1)n + x) is the 1-D bit sequence converted from a given bit-plane; and F_{x,y} denotes the bits of the 2-D bit-plane. M_{x,y} and N_{x,y}, P_{x,y}, t_i, and g_j represent the constraints imposed by the MRF within a bit-plane, the image source prior, the decryption, and the LDPC code, respectively.

By merging the common VNs of Figs. 2 and 3, we can build the JFGIR for the reconstruction of two adjacent bit-planes, B_k(x, y) and B_{k−1}(x, y) (k = 8, ..., 2). As illustrated in Fig. 1, the randomness (i.e. entropy) of B_k(x, y) is less than that of B_{k−1}(x, y). Thus, B_k(x, y) achieves higher lossless compression efficiency than B_{k−1}(x, y) and provides more statistical information for B_{k−1}(x, y) than vice versa. Therefore, it is preferable to first reconstruct B_k(x, y) and then exploit its statistical correlation to recover B_{k−1}(x, y).

Figure 3: Illustration of the factor graph for the reconstruction of each bit-plane B_k of size m×n

3.3 Derivation of SPA adapted to JFGIR

By taking the probability distribution of each bit in a bit-plane as a marginal function in the MRF, each bit-plane can be effectively recovered by running the SPA on the constructed JFGIR. By using the log-likelihood ratio log(p(0)/p(1)) as the message passed between VNs and FNs, where p(0) and p(1) denote the probabilities of bit 0 and bit 1, respectively, we derive the SPA adapted to the JFGIR as follows.

Figure 4: Flowchart of the SPA on the JFGIR

Fig. 4 plots the flowchart of the SPA, where v_{VN→FN} and μ_{FN→VN} denote a message passed from a VN to an FN and from an FN to a VN, respectively. The initialization step initializes all v_{VN→FN}s according to the received syndrome S_p (p = 1, ..., q), the secret key K_i (i = 1, ..., mn), and the source prior P_{x,y} (x ∈ [1, n], y ∈ [1, m]). Via the v_{VN→FN}s, the messages μ_{FN→VN} are updated through the product operation of the SPA, and these are in turn used to yield new messages v_{VN→FN} by means of the sum operation of the SPA. To check whether convergence has been reached, the decompressed but encrypted sequence Ŷ is estimated from the v_{VN→FN}s and S′ = HŶ is calculated accordingly. If S′ is equal to the received syndrome S, then convergence is met and the original bit-plane B(x, y) can be perfectly recovered; otherwise, these update and estimation steps are executed until convergence is achieved or the predefined maximum number of iterations is reached (a small numerical illustration of this convergence test is given at the end of this subsection). Due to space limitations, the details of these steps for the JFGIR within a bit-plane are omitted here; readers are referred to our previous work [Wang, Ni, Zhang et al. (2018)]. The details involved for the JFGIR between adjacent bit-planes are presented below, where the superscript k indicating the bit-plane index is re-inserted to make the symbols clear.

1) Initialization. As B_k(x, y) has already been reconstructed before recovering B_{k−1}(x, y), the corresponding message (k = 8, ..., 2) (see also Fig. 2) is initialized as:

The derivation is omitted here due to space limitations.
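Returning to the convergence test of Fig. 4, the sketch below (an illustration added here, with a small dense numpy parity-check matrix standing in for a sparse LDPC one) performs the hard decision on LLR messages defined as log(p(0)/p(1)) and compares the recomputed syndrome S′ = HŶ with the received one.

```python
import numpy as np

def hard_decision(llr: np.ndarray) -> np.ndarray:
    """Map messages log(p(0)/p(1)) to bit estimates: a positive LLR means bit 0."""
    return (llr < 0).astype(np.uint8)

def converged(H: np.ndarray, llr: np.ndarray, S: np.ndarray) -> bool:
    """Convergence test of Fig. 4: recompute S' = H * Y_hat (mod 2) and compare with S."""
    y_hat = hard_decision(llr)
    return bool(np.array_equal((H @ y_hat) % 2, S))

# Toy check with a 2x4 parity-check matrix and made-up LLRs.
H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]], dtype=np.uint8)
llr = np.array([2.3, -1.1, 0.7, -0.4])                      # estimates bits [0, 1, 0, 1]
print(converged(H, llr, S=(H @ hard_decision(llr)) % 2))    # True by construction
```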

4 Proposed scheme

Fig. 5 illustrates the proposed MRF-based ETC scheme for gray images. Details of its steps are given below.

Figure 5: The proposed scheme

1) Bit-plane division. This step divides a gray image I(x, y) of size m×n into 8 bit-planes B_k(x, y) (k = 1, ..., 8) using Eq. (1).

2) Bit-plane encryption. First generate a pseudorandom Bernoulli(1/2) bit sequence of length mn, say K_k = {K_{k,i}, i = 1, ..., mn}, using the kth secret key KEY + 2^k, where KEY is a one-time-pad initial secret key. Then encrypt B_k(x, y) with the stream cipher, i.e.

Y_{k,i} = B_k(x, y) ⊕ K_{k,i},  i = (y − 1)n + x,

where ⊕ denotes the bitwise XOR operation.
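A minimal sketch of this XOR-based encryption is given below; numpy's seeded generator is only a stand-in for the keyed Bernoulli(1/2) stream cipher assumed by the scheme, with the seed playing the role of KEY + 2^k.

```python
import numpy as np

def encrypt_bitplane(plane: np.ndarray, key_seed: int) -> np.ndarray:
    """Stream-cipher-style encryption Y_k = B_k XOR K_k with a Bernoulli(1/2) key stream.

    numpy's seeded generator is only a stand-in for a keyed cryptographic
    stream cipher (seed KEY + 2^k in the paper); decryption is the same XOR.
    """
    rng = np.random.default_rng(key_seed)
    key_stream = rng.integers(0, 2, size=plane.shape, dtype=np.uint8)
    return plane ^ key_stream

plane = np.random.randint(0, 2, size=(100, 100), dtype=np.uint8)        # some bit-plane B_k
cipher = encrypt_bitplane(plane, key_seed=12345)
assert np.array_equal(encrypt_bitplane(cipher, key_seed=12345), plane)  # XOR is involutive
```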

3) Bit-plane compression. According to Johnson et al. [Johnson, Ishwar, Prabhakaran et al. (2004)], the service provider can compress each encrypted bit-plane, without access to the encryption key, using an LDPC channel code. In particular, Y_k is compressed as S_k = H·Y_k, where H is a parity-check matrix of size ((1 − R)·mn) × mn and R is the code rate of the LDPC code; a short sketch of this syndrome computation and of the resulting compression rate is given after this step.

To compress bit-planes with nearly equiprobable 0s and 1s, a doping technique is employed [Wang, Ni, Zhang et al. (2018)]. That is, a number of encrypted but uncompressed bits are sent to the receiver, and these doped bits are then used at the receiver as a “catalyst” to guide the SPA towards convergence. This is essentially equivalent to constructing the parity-check matrix in the doping case as follows [Wang, Ni, Zhang et al. (2018)]:

H_new = [H; D],  (8)

where D, with dp_rate × ((1 − R) × mn) rows, contains the doped rows, each of which consists of a single 1 at a random column and mn − 1 zeros. Here dp_rate denotes the doping rate, i.e. the ratio between the number of doped bits and (1 − R) × mn. Thus, the compression rate in terms of bits per bit (bpb) is computed as:

CR = ((1 − R) × mn + dp_rate × (1 − R) × mn) / mn = (1 − R) × (1 + dp_rate).  (9)
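The sketch below illustrates the compression side of this step: the syndrome computation S_k = H·Y_k and the compression rate of Eq. (9) as reconstructed above (i.e. assuming the doped bits are transmitted on top of the (1 − R)·mn syndrome bits); the random binary matrix is a toy stand-in for a PEG-constructed LDPC parity-check matrix.

```python
import numpy as np

def compress_bitplane(H: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Syndrome compression S_k = H * Y_k over GF(2); H has (1 - R) * mn rows."""
    return (H @ y) % 2

def compression_rate_bpb(R: float, dp_rate: float) -> float:
    """Eq. (9) as reconstructed above: the doped bits are sent on top of the
    (1 - R) * mn syndrome bits, so bpb = (1 - R) * (1 + dp_rate)."""
    return (1.0 - R) * (1.0 + dp_rate)

# Toy example: a random binary matrix stands in for a PEG-constructed LDPC code.
mn, R = 16, 0.5
H = (np.random.rand(int((1 - R) * mn), mn) < 0.3).astype(np.uint8)
y = np.random.randint(0, 2, size=mn, dtype=np.uint8)
print(compress_bitplane(H, y), compression_rate_bpb(R=0.95, dp_rate=0.1))
```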

4) Bit-plane reconstruction. First reconstruct the MSB, B_8(x, y), using the MRF-based method for a binary image [Wang, Ni, Zhang et al. (2018)] (see also Fig. 3), in which the secret key KEY + 2^8 is used, where KEY is sent from the content owner through a secure channel. Based on the reconstructed B_8(x, y), we then recover B_7(x, y) by running the SPA on the JFGIR (see also Figs. 2 and 4). After obtaining B_7(x, y), we proceed to recover B_6(x, y), and so on; a skeleton of this top-down recovery is sketched after step 5.

5) Gray-image reconstruction. By merging the 8 recovered bit-planes B_k(x, y) (k = 1, ..., 8), we reconstruct the original gray image I′(x, y).
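The following skeleton shows the top-down recovery order of step 4 and the merging of step 5. Since the full SPA is beyond a short example, recover_msb and recover_with_prior are hypothetical placeholder names standing in for the SPA on the factor graph of Fig. 3 and on the JFGIR of Figs. 2 and 4, respectively.

```python
import numpy as np

def recover_msb(syndrome, key, H_new):
    """Placeholder for the SPA on the single-bit-plane factor graph (Fig. 3)."""
    return np.zeros((100, 100), dtype=np.uint8)

def recover_with_prior(syndrome, key, H_new, prior_plane):
    """Placeholder for the SPA on the JFGIR (Figs. 2 and 4), which additionally
    uses the already-recovered plane above as cross-plane side information."""
    return prior_plane.copy()

def reconstruct_gray_image(syndromes, keys, H_new):
    """Steps 4 and 5: recover B_8 first, then B_7, ..., B_1, and merge them."""
    planes = {8: recover_msb(syndromes[8], keys[8], H_new)}
    for k in range(7, 0, -1):
        planes[k] = recover_with_prior(syndromes[k], keys[k], H_new, planes[k + 1])
    return sum(planes[k].astype(np.uint16) << (k - 1) for k in range(1, 9)).astype(np.uint8)

# Dummy call just to show the recovery order and merging; real inputs would be
# the per-bit-plane syndromes and keys.
print(reconstruct_gray_image({k: None for k in range(1, 9)},
                             {k: None for k in range(1, 9)}, None).shape)
```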

5 Experimental results and analysis

In this section, we evaluate the proposed scheme. We first set the parameters of the MRF and then compare the compression efficiency of the proposed scheme with that of prior arts.

5.1 Experimental setting

To characterize natural images with both smooth and textured areas, we deploy the discontinuity-adaptive potential function [AL-Shaykh and Mersereau (1998); Wang, Xiao, Peng et al. (2018)], i.e.

where F1 and F2 are essentially a pair of elements in a clique of the given random field, and δ is a model parameter that controls the sharpness of edges.

According to Eqs. (2)-(4) and (10), the MRF concerned has three parameters, i.e. δ, P, and T. As assessed in our previous work [Wang, Ni, Zhang et al. (2018)], δ = 45 and T = 0.00049 constitute a preferable setting, and P = 0.35 and P = 0.5 are used for the compression of encrypted binary images without and with doping, respectively. As each bit-plane of a gray image can be considered as a binary image, these MRF parameters from Wang et al. [Wang, Ni, Zhang et al. (2018)] are adopted in the reconstruction of each bit-plane.

Considering that the MRF between adjacent bit-planes may differ from that within a bit-plane, we further seek a feasible setting for the MRF between adjacent bit-planes. In more detail, the parameters δ and P are set to 45 and 0.5, respectively, and the parameter T is decreased gradually from 1. Extensive experimental simulation shows that T = 0.005 is desirable for the MRF between B_8(x, y) and B_7(x, y), and T = 0.05 is feasible for the MRF between adjacent bit-planes from B_7(x, y) to B_5(x, y). The T for the MRF between the other adjacent bit-planes, however, is intractable because the bit-planes from B_4(x, y) to B_1(x, y) cannot be compressed, as demonstrated below. This universal parameter setting works for all test gray images as it provides sufficient side information to guide the SPA towards convergence.

In the simulation, we test ten 100×100 gray images with diverse texture characteristics, as illustrated in Fig. 6. Each test gray image is encrypted, compressed, and reconstructed via the algorithm in Section 4 (see also Fig. 5), and the lossless compression performance is assessed with compression rates in terms of bpb (bits per bit) and bpp (bits per pixel), where the bpb is used for each bit-plane and the bpp for a gray image. In the compression stage, the LDPC code rates R are taken from [0.03, 0.95] with a step of 0.025, and the achieved minimum compression rate (MinCR) (see Eq. (9)) is taken as the compression rate (CR) for the involved bit-plane, where the minimum doping rate dp_rate corresponding to a given R is sought via a binary search, as sketched below.
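A possible form of that binary search is sketched here; decodes(dp_rate) is a hypothetical predicate that runs the SPA on the JFGIR for a fixed LDPC rate R and reports whether the bit-plane is perfectly recovered, and success is assumed to be monotone in the doping rate.

```python
def min_doping_rate(decodes, lo: float = 0.0, hi: float = 1.0,
                    tol: float = 1e-3) -> float:
    """Binary search for the smallest doping rate at which decoding succeeds.

    `decodes(dp_rate)` is a hypothetical predicate standing in for a full
    SPA run on the JFGIR at a fixed LDPC rate R; success is assumed to be
    monotone in dp_rate.
    """
    if not decodes(hi):
        raise ValueError("bit-plane not compressible at this LDPC rate")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if decodes(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy usage: pretend decoding works once dp_rate >= 0.12.
print(round(min_doping_rate(lambda r: r >= 0.12), 3))
```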

Each LDPC code has length 10000, its degree distribution is obtained from the LTHC website [Amraoui and Urbanke (2003)], and the H_new in Eq. (8) is constructed via the PEG method [Hu, Eleftheriou and Arnold (2005)].

5.2 Experimental results and analysis

With these settings, we run the proposed algorithm on the 10 test gray images. Table 1 summarizes the lossless compression rates for the 8 bit-planes of each test image. It is found that the first 3 MSBs of most test images can be successfully compressed while the other 5 bit-planes cannot. This is because the bit-planes from B_5(x, y) to B_1(x, y) are nearly random (see also Fig. 1) and thus cannot be well characterized with the MRF, which in turn makes the compression and reconstruction of these bit-planes difficult. Nevertheless, the bit-planes B_5(x, y) of the images “F16” and “Milkdrop”, which have a large portion of smooth area, are two exceptions that can be compressed by the proposed scheme.

Table 1: Compression rates (CRs) for the 8 bit-planes of each gray image and their summary CRs (SCRs), where CR_k (k = 8, ..., 1) denotes the CR for bit-plane B_k(x, y) in terms of bpb

Figure 6: The 10 test images of size 100×100

We further evaluate the proposed scheme by comparing it with the state-of-the-art schemes [Schonberg (2006); Schonberg (2007); Liu, Zeng, Dong et al. (2010)] that also exploit the statistical characteristics of natural images at the receiver. The work of Schonberg [Schonberg (2006, 2007)] incorporates the 2-D Markov source model into the reconstruction of binary images and successfully compresses the first 2 encrypted MSBs in a lossless way. In a resolution-progressive manner, the approach of Liu et al. [Liu, Zeng, Dong et al. (2010)] uses low-resolution sub-images to learn source statistics for high-resolution ones and can compress the first 4 encrypted MSBs. Given that the proposed scheme succeeds in compressing the first 3 encrypted MSBs for most test gray images and the first 4 encrypted MSBs for a few test images with a large portion of smooth area (e.g. “F16” and “Milkdrop”), it achieves a significant improvement in compression efficiency over the method of Schonberg et al. [Schonberg, Draper and Ramchandran (2006); Schonberg (2007)], while it is comparable or somewhat inferior to the approach of Liu et al. [Liu, Zeng, Dong et al. (2010)]. The improvement over the method of Schonberg et al. [Schonberg, Draper and Ramchandran (2006); Schonberg (2007)] comes from the fact that the MRF is better than the 2-D Markov source model at characterizing natural gray images with complex intrinsic structure, while the weakness in comparison to the scheme of Liu et al. [Liu, Zeng, Dong et al. (2010)] is attributed to the fact that the first 4 or 5 encrypted LSBs are difficult to model with the MRF.

Table 2: Compression rates (CRs) and numerical results of H_1(X) and H_∞(X) for the first 3 MSBs of each test gray image

In addition, we also examine the bound for the compression of encrypted gray images. As compressing an encrypted gray image is essentially equivalent to 8-bit-plane compression, the bound for the compression of encrypted binary images given in Wang et al. [Wang, Ni, Zhang et al. (2018)] can be used for the analysis here. As discussed in Wang et al. [Wang, Ni, Zhang et al. (2018)], the compression bound equals the entropy rate of the adopted MRF source, say H_∞(X); its derivation is omitted here due to space limitations and can be found in Wang et al. [Wang, Ni, Zhang et al. (2018)]. For convenience, the entropy of an independent and identically distributed (i.i.d.) source, namely H_1(X), is also compared. Tab. 2 lists the compression rates, H_1(X), and H_∞(X) for the first 3 MSBs of each test gray image, where the results for the 4th MSB are not given as the 4th MSB of most test images cannot be compressed. It is observed that the compression rates for the first 3 MSBs are far lower than H_1(X) due to the exploitation of the MRF in the reconstruction process, while there still exist sizeable gaps to the bound H_∞(X), leaving room for the proposed scheme to improve.
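For reference, the sketch below shows how the i.i.d. benchmark H_1(X) can be estimated empirically from a bit-plane (computing the MRF entropy rate H_∞(X) requires the model of Wang et al. and is not shown); the toy bit-plane with about 20% ones is only an illustration.

```python
import numpy as np

def h1(plane: np.ndarray) -> float:
    """Empirical i.i.d. entropy H_1(X) of a bit-plane, in bits per bit."""
    p = float(np.mean(plane))
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

toy_plane = (np.random.rand(100, 100) < 0.2).astype(np.uint8)   # toy bit-plane, ~20% ones
print(round(h1(toy_plane), 3))   # about 0.722 bpb for p = 0.2
```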

6 Conclusion

In this paper, we have presented a new ETC scheme for gray images using the MRF. We deployed the MRF to characterize the statistical correlations between adjacent bit-planes and within a bit-plane, represented them with factor graphs, and further seamlessly integrated the built MRF factor graphs with those for decryption and LDPC decoding, yielding the joint factor graph for gray image reconstruction (JFGIR). The SPA adapted to the JFGIR is then derived theoretically by applying the theory of factor graphs. Via the constructed JFGIR and the derived SPA, an MRF-based scheme for the compression of encrypted gray images is thus developed, which uses the stream cipher to encrypt each bit-plane, employs the LDPC code to compress each bit-plane, and exploits the JFGIR to facilitate inferring the original bit-plane. Numerical results show that a universal MRF parameter setting works well for all gray images as the setting provides sufficient side information to guide the SPA towards convergence. Extensive experimental simulation demonstrates that the proposed scheme successfully compresses the first 3 MSBs for most test gray images and the first 4 MSBs for a few test images with a large portion of smooth area, which constitutes a significant improvement in compression efficiency over the prior state-of-the-art using the 2-D Markov source model while being comparable or somewhat inferior to that adopting the resolution-progressive strategy.

Acknowledgement: This work is supported in part by the National Natural Science Foundation of China under contracts 61672242 and 61702199, in part by the China Spark Program under Grant 2015GA780002, in part by the National Key Research and Development Program of China under Grant 2017YFD0701601, and in part by the Natural Science Foundation of Guangdong Province under Grant 2015A030313413.
