A New PCA-Based Robust Color Image Watermarking Method

Arash Abadpour, Shohreh Kasaei

Sharif University of Technology, Tehran, Iran

Abstract

Although electronic storage and transmission have extended intellectual communication, the ease of copying and transmitting data over the internet has increased piracy and illegal use of copyrighted artworks. As a result, digital data watermarking has been under thorough investigation in recent years. The main aim of data hiding is to add a transparent signature to the data in an attack-resistant fashion, to be used for ownership claims. In this paper, a new PCA-based watermarking method is proposed for color images. Rather than the dominant spatial approaches and the few available semi-spectral ones, the proposed method uses the true redundancy in the spectral domain. The proposed watermarking method is resistant to common attacks, such as sophisticated geometrical transformations, artistic effects, lossy compression, frequency-domain filtering, motion blur, occlusion, and enhancement. Experimental results show the efficiency of the proposed algorithm.

1  Introduction

The spread of electronic data on the internet has created the need to take larger steps towards copyright protection. Since the development of multimedia systems and the cyberworld, there has been a need to watermark artworks for legal reasons [1]. Digital watermarking performs significantly better than conventional cryptography systems because it adds a transparent signature to the data. In this way, the artwork is not protected against use by authorized users, but the ownership is preserved. Some authors work on methods for general-purpose digital data watermarking (e.g., [2,3]). These approaches do not use the problem-specific constraints and possibilities. Also, as watermarking has a very tight relation to cryptography, some researchers have tried to solve both problems at once (e.g., see [2,4]). Here, we draw a line between the two different tasks of finding empty places in the image for embedding data and of encrypting that data. The main concern of data hiding is to find those portions of the image domain that are ignored by the human observer. Since the birth of the watermarking concept, many researchers have worked on the redundancy of natural images in the spatial domain, resulting in high-performance spatial watermarking techniques [1]. Although some researchers have worked on data hiding using spectral redundancy (e.g., see [3]), little work has been done on finding the truly redundant spectral data in the image. Generally, authors rely on some subjectively-investigated assumption about the color channels [3,5,6]. For example, in [3] the authors add the watermark to the blue plane. In [6] the authors use the CIE-Lab color space in a simple quantization method to add the color watermark in the least significant bits.
Many other authors have tried quantization techniques [3,4,6]; the embedded watermark then behaves like additive noise, and its robustness when attacked with noise-removal methods is doubtful. In this paper, we address the special problem of color image watermarking by using the perceptual gaps of natural images in the spectral domain, which can be filled with watermark data without disturbing the visual appeal of the original image, using the eigenimage theory.

2  Proposed Algorithm

2.1  Reconstruction Error-Based Homogeneity Criteria

In [7], the authors proposed to use the error made by neglecting the two less important principal components as a likelihood measure. The likelihood of the vector $\vec{c}$ to the cluster r is defined as $e_r(\vec{c}) = \|\vec{v}^T(\vec{c}-\vec{h})\vec{v} - (\vec{c}-\vec{h})\|$, where $\vec{v}$ is the direction of the first principal component, $\vec{h}$ is the expectation vector of the cluster, and $\|\vec{x}\|$ denotes the normalized L1 norm. In [7] the authors proposed to use the following stochastic norm as the region homogeneity measure: $\|f\|_{r,p} = \arg_e\left(P_{\vec{x}\in r}\{f(\vec{x}) \le e\} \ge p\right)$, where p is the inclusion percentage. The norm $\|e_r\|_{r,p}$ is proved to outperform the conventional Euclidean and Mahalanobis approaches [8]. The criterion is used for quad-tree decomposition [9] as a rough image segmentation tool.
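As a concrete illustration, the criterion above can be sketched in a few lines of NumPy (a sketch, not the authors' implementation; the p-quantile is used as the arg_e of the stochastic norm, and normalizing the L1 norm by the dimension is our assumption):

```python
import numpy as np

def reconstruction_error(colors, cluster):
    """e_r(c): residual of reconstructing each color from only the first
    principal component of `cluster` (both arrays are K x 3 RGB rows)."""
    h = cluster.mean(axis=0)                    # expectation vector h
    _, _, Vt = np.linalg.svd(cluster - h, full_matrices=False)
    v = Vt[0]                                   # first principal direction
    d = colors - h
    proj = np.outer(d @ v, v)                   # component along v
    return np.abs(proj - d).mean(axis=1)        # normalized L1 norm

def stochastic_norm(errors, p):
    """||f||_{r,p}: the smallest e with P{f(x) <= e} >= p,
    i.e. the p-quantile of the per-pixel errors."""
    return float(np.quantile(errors, p))
```

For a cluster that is genuinely elongated along one color direction, the norm stays small even at high inclusion percentages, which is exactly the behavior the segmentation relies on.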

2.2  Bi-tree Decomposition

Given a suitable homogeneity criterion, the image can be decomposed into homogeneous blocks. Starting with the entire image area, the tree is produced using the homogeneity criterion $\|e_r\|_{r,p} \le \varepsilon_1$, where ε₁ is a user-selected parameter, mostly in the range [1..10]. In a W×H image, the depth of a w×h block r is defined as $\rho_r = \max\{\log_2 W/w, \log_2 H/h\}$, and no block is permitted to exceed a preselected marginal depth ρ. During the decomposition stage, all the information is saved in a 22×N matrix called L, where N is the number of blocks and each column of L consists of x1, y1, x2, y2, h1, h2, h3, V_{ij} (i,j = 1,...,3), and some reserved parameters. Here, [h1,h2,h3]^T is the expectation of the color information and V_{ij} are the elements of the PCA matrix V, both corresponding to the block r. Assume that the image I is fed to the bi-tree decomposition method. If the block r is not homogeneous enough, then rather than the deterministic choice of sub-blocks of the quad-tree decomposition method, two alternative decompositions are proposed here. Assume that splitting r into two equal rectangles vertically gives the two regions r1 and r1′, while splitting horizontally results in r2 and r2′. Now, if $\|e_{r_1}\|_{r_1,p}+\|e_{r_1'}\|_{r_1',p} < \|e_{r_2}\|_{r_2,p}+\|e_{r_2'}\|_{r_2',p}$ and the depth limitation permits, the block is split vertically; otherwise (if the depth limitation is met) it is split horizontally. In the new method, the rectangular clipping is preserved while the block shape changes to best fit the image details.
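The split rule can be sketched as follows (a simplified sketch: `inhomogeneity` stands in for the norm of Sec. 2.1, and only the block coordinates are recorded, not the full 22-element columns of L):

```python
import numpy as np

def inhomogeneity(img, x1, y1, x2, y2, p=0.95):
    """p-quantile of the PCA reconstruction error over a block."""
    pix = img[y1:y2, x1:x2].reshape(-1, 3).astype(float)
    d = pix - pix.mean(axis=0)
    _, _, Vt = np.linalg.svd(d, full_matrices=False)
    err = np.abs(np.outer(d @ Vt[0], Vt[0]) - d).mean(axis=1)
    return np.quantile(err, p)

def bitree(img, eps1=5.0, rho_max=5, p=0.95):
    """Bi-tree decomposition: pick the split direction whose two halves
    have the smaller summed inhomogeneity, up to depth rho_max."""
    H, W = img.shape[:2]
    blocks = []

    def depth(x1, y1, x2, y2):
        return max(np.log2(W / (x2 - x1)), np.log2(H / (y2 - y1)))

    def split(x1, y1, x2, y2):
        small = (x2 - x1 < 2) or (y2 - y1 < 2)
        if small or depth(x1, y1, x2, y2) >= rho_max \
                 or inhomogeneity(img, x1, y1, x2, y2, p) <= eps1:
            blocks.append((x1, y1, x2, y2))
            return
        xm, ym = (x1 + x2) // 2, (y1 + y2) // 2
        # cost of the vertical vs. horizontal candidate splits
        vert = (inhomogeneity(img, x1, y1, xm, y2, p)
                + inhomogeneity(img, xm, y1, x2, y2, p))
        horz = (inhomogeneity(img, x1, y1, x2, ym, p)
                + inhomogeneity(img, x1, ym, x2, y2, p))
        if vert < horz:
            split(x1, y1, xm, y2); split(xm, y1, x2, y2)
        else:
            split(x1, y1, x2, ym); split(x1, ym, x2, y2)

    split(0, 0, W, H)
    return blocks
```

A perfectly flat image yields a single block, while a detailed image is split until either the criterion or the depth limit is met.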

2.3  Basis Vectors Polarization

Consider the space Rⁿ and a set of n basis vectors $\vec{v}_i$, i = 1,...,n. Storing this set of vectors needs n² memory cells when the redundancy of the data is neglected. Keeping in mind that a set of basis vectors is an orthonormal set, the actual needed memory can be reduced. In fact, a set of basis vectors of Rⁿ is a member of R^{n²} with n constraints of normality ($\|\vec{v}_i\| = 1$, i = 1,...,n) and n(n−1)/2 constraints of orthogonality ($\vec{v}_i \perp \vec{v}_j$, i,j = 1,...,n, i ≠ j). Thus, the above-mentioned set of basis vectors is an unconstrained member of an m-dimensional space, with m = n² − n − n(n−1)/2 = n(n−1)/2. Hence, storing a set of basis vectors of Rⁿ in n(n−1)/2 memory cells contains zero redundancy. To make this representation unique, it is crucial to make the set of basis vectors right-rotating (RR). In 2-D spaces, RR means $(\vec{v}_1 \times \vec{v}_2)\cdot\vec{j} > 0$, where × and · stand for the outer and inner products, respectively. In 3-D spaces, RR means $(\vec{v}_1 \times \vec{v}_2)\cdot\vec{v}_3 > 0$. Setting n = 2 leads to m = 1, which means that any set of RR basis vectors in the xy plane can be specified uniquely by a single angle. Similarly, the case of n = 3 results in m = 3, which is used in this paper. Note that in both cases the m parameters are angles between vectors and some fixed planes; hence, we call this method the polarization method. Consider the three right-rotating vectors $\vec{v}_1, \vec{v}_2, \vec{v}_3$ in R³; we define the three angles θ, φ, and ψ as follows. This representation is a manipulated version of the well-known set of Euler angles. Using $\vec{v}^{\,p}$ as the projection of $\vec{v}$ on the plane p (e.g., $\vec{v}_1^{\,xy}$), the three angles are defined as:


\theta = \angle\big(\vec{v}_1^{\,xy}, [1,0]^T\big), \quad
\phi = \angle\big((R_\theta^{xy}\vec{v}_1)^{xz}, [1,0]^T\big), \quad
\psi = \angle\big((R_\phi^{xz}R_\theta^{xy}\vec{v}_2)^{yz}, [1,0]^T\big)    (1)

where $\angle(\vec{v},\vec{u})$ stands for the angle between the two vectors $\vec{v},\vec{u} \in R^2$, and $R_a^p$ is the 3×3 matrix of an a-radian counter-clockwise rotation in the plane p. It can be easily proved, using 3-D geometrical concepts, that the 3×3 matrix V with $\vec{v}_i$ as its i-th column satisfies $R_\psi^{yz} R_\phi^{xz} R_\theta^{xy} V = I$. Keeping in mind that $(R_a^p)^{-1} = R_{-a}^p$, we have $V = R_{-\theta}^{xy} R_{-\phi}^{xz} R_{-\psi}^{yz}$. While equation (1) computes the three angles θ, φ, and ψ from the basis vectors (polarization), the above matrix multiplication reproduces the basis from θ, φ, and ψ (depolarization).
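The polarization/depolarization pair can be sketched as follows (a sketch; the sign convention for the ∠ operator is our assumption, chosen so that the round-trip identity $R_\psi^{yz} R_\phi^{xz} R_\theta^{xy} V = I$ holds):

```python
import numpy as np

def Rxy(a):  # counter-clockwise rotation by a radians in the xy plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def Rxz(a):  # rotation in the xz plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1.0, 0], [s, 0, c]])

def Ryz(a):  # rotation in the yz plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0, 0], [0, c, -s], [0, s, c]])

def polarize(V):
    """Angles (theta, phi, psi) for a right-rotating orthonormal basis V
    (columns v1, v2, v3).  Angle(v, [1,0]) is read as the angle that
    rotates v onto the +x axis of the projection plane."""
    v1, v2 = V[:, 0], V[:, 1]
    theta = -np.arctan2(v1[1], v1[0])        # v1 projected on xy
    v1r = Rxy(theta) @ v1
    phi = -np.arctan2(v1r[2], v1r[0])        # rotated v1, projected on xz
    v2r = Rxz(phi) @ Rxy(theta) @ v2
    psi = -np.arctan2(v2r[2], v2r[1])        # rotated v2, projected on yz
    return theta, phi, psi

def depolarize(theta, phi, psi):
    """Reproduce V = R^{xy}_{-theta} R^{xz}_{-phi} R^{yz}_{-psi}."""
    return Rxy(-theta) @ Rxz(-phi) @ Ryz(-psi)
```

A round trip through `polarize` and `depolarize` recovers the original basis, confirming that the three angles carry the full n(n−1)/2 = 3 degrees of freedom.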

2.4  Blockwise Interpolation

Assume a partition of the N_W×N_H plane into the set of rectangular regions {r_i | i = 1,...,n} with corresponding values {l_i | i = 1,...,n} satisfying $l_i = \arg_l(\forall \vec{c} \in r_i,\ f(\vec{c}) \cong l)$ for an unknown smooth function f: R² → R. The problem is to find $\tilde{f}$ as an approximation of f using r_i and l_i. We address this problem as the blockwise interpolation of the set {(r_i; l_i) | i = 1,...,n}. Note that in the case that the partition is a rectangular grid, the problem reduces to an ordinary 2-D interpolation task. Here, we use the same idea with some manipulations, using a reformulated version of the well-known low-pass Butterworth filter:


B_{t,N}(x) = \left(1 + \left(\frac{x}{t}\right)^{2N}\right)^{-1/2}    (2)

N = \mathrm{rnd}\left(\log_{\alpha/\beta}\frac{b\sqrt{1-a^2}}{a\sqrt{1-b^2}}\right), \quad t = \alpha\,\sqrt[2N]{\frac{a^2}{1-a^2}}    (3)

where rnd(x) is the nearest integer to x. The function B satisfies the two conditions $B_{t,N}(\alpha) \cong a$ and $B_{t,N}(\beta) \cong b$. The 2-D version of this function is defined as $B_{t,N}^{w,h}(x,y) = B_{wt,N}(x)\,B_{ht,N}(y)$, where w and h control the spread of the function in the x and y directions, respectively. Assuming that the region r_i has its center at (x_i, y_i), while its width and height are w_i and h_i, respectively, we propose the function $\tilde{f}$ as:


\tilde{f}(x,y) = \frac{\sum_i l_i\,B_{t,N}^{w_i/2,\,h_i/2}(x-x_i,\,y-y_i)}{\sum_i B_{t,N}^{w_i/2,\,h_i/2}(x-x_i,\,y-y_i)}    (4)

The function $\tilde{f}(x,y)$ proposed in (4) is a smooth version of the initial stair-case function $f^{\circ}(x,y) = l_i$, $[x,y]^T \in r_i$. Also, by setting proper values of the parameters a, b, α, and β, the function $\tilde{f}(x,y)$ satisfies the problem conditions. The proper set of parameters must force the corresponding kernel to be nearly one in the entire r_i, except for the borders, while also preventing r_i from intruding into the interior points of r_j for i ≠ j. Selecting a near-unity (but smaller) value for a and α limits the decline of the ceiling of the function, while setting β = 1 and a not-too-big value for b controls the effect of neighboring regions on each other. Setting a = 1⁻, α = 1, b = 0, and β = 1⁺ is the marginal choice leading to the non-smoothed stair-case function.
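Equations (2)-(4) can be sketched as follows (a sketch: the default design values and the reading of α, β as distances in units of the block half-size are our assumptions):

```python
import numpy as np

def butter_params(a, b, alpha, beta):
    """Eq. (3): solve B_{t,N}(alpha) ~= a and B_{t,N}(beta) ~= b."""
    ratio = (b * np.sqrt(1 - a**2)) / (a * np.sqrt(1 - b**2))
    N = max(1, round(np.log(ratio) / np.log(alpha / beta)))
    t = alpha * (a**2 / (1 - a**2)) ** (1.0 / (2 * N))
    return t, N

def B(x, t, N):
    """Eq. (2): reformulated low-pass Butterworth magnitude."""
    return (1.0 + (x / t) ** (2 * N)) ** -0.5

def blockwise_interpolate(blocks, values, W, H, a=0.95, b=0.05,
                          alpha=0.8, beta=1.5):
    """Eq. (4): weighted average of per-block constants l_i under the
    separable kernel B^{w_i/2, h_i/2}_{t,N}(x - x_i, y - y_i)."""
    t, N = butter_params(a, b, alpha, beta)
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for (x1, y1, x2, y2), l in zip(blocks, values):
        xc, yc = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        wh, hh = (x2 - x1) / 2.0, (y2 - y1) / 2.0   # half width / height
        k = B(xs - xc, wh * t, N) * B(ys - yc, hh * t, N)
        num += l * k
        den += k
    return num / den
```

Deep inside each block the surface sits at that block's value; near block borders the kernels overlap and the values blend smoothly.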

As a generalization of blockwise interpolation, consider the set of (m+1)-tuples {(r_i; l_{i1},...,l_{im}) | i = 1,...,n}, where the set of functions f_j, j = 1,...,m, is desired to satisfy $l_{ij} = \arg_l(\forall \vec{c} \in r_i,\ f_j(\vec{c}) \cong l)$ for a set of unknown functions f_j: R² → R, j = 1,...,m. In a similar fashion to (4), we propose:


\tilde{f}_j(x,y) = \frac{\sum_i l_{ij}\,B_{t,N}^{w_i/2,\,h_i/2}(x-x_i,\,y-y_i)}{\sum_i B_{t,N}^{w_i/2,\,h_i/2}(x-x_i,\,y-y_i)}    (5)

Here, because the set of base regions is the same for all $\tilde{f}_j$, the total performance is increased by computing each kernel just once, after which the problem reduces to computing m weighted averages.

In polar coordinates, because of the 2π discontinuity, ordinary algebraic operations on angular variables lead to spurious results. For example, (0 + 2π)/2 = π, while the average of 0 radians and 2π radians equals 0 ≡ 2π radians. To overcome this problem, we propose a new method: for the given problem {(r_i; θ_i) | i = 1,...,n}, solve the problem {(r_i; cos θ_i, sin θ_i) | i = 1,...,n} to find the two functions f_sin and f_cos, and then find θ using ordinary trigonometric methods. The interpolation is performed on both sin θ_i and cos θ_i to avoid ambiguity in the polar plane.
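The wrap-around-safe averaging can be sketched as follows (a hypothetical helper; any weighting, including the kernel weights of Eq. (4), can be plugged in):

```python
import numpy as np

def interpolate_angle(thetas, weights):
    """Average angular values by interpolating their (cos, sin) pairs
    and recovering the angle with atan2, avoiding the 2*pi wrap-around."""
    w = np.asarray(weights, dtype=float)
    c = np.sum(w * np.cos(thetas)) / w.sum()   # f_cos
    s = np.sum(w * np.sin(thetas)) / w.sum()   # f_sin
    return np.arctan2(s, c)
```

Averaging 0 and 2π now yields 0 (not π), and angles straddling the ±π cut average to the correct branch.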

2.5  The Eigenimage

Assume the PCA matrix $V_r$ and the expectation vector $\vec{h}_r$ corresponding to the homogeneous cluster r. For a color vector $\vec{c}$ belonging to r, $\vec{c}\,' = V_r^{-1}(\vec{c} - \vec{h}_r)$ gives the PCA coordinates. Assume that we can somehow find the color cluster $r_{\vec{c}}$ for each color vector $\vec{c}$, where $r_{\vec{c}}$ describes the color mood of $\vec{c}$, in the sense that $\vec{c}\,' = V_{r_{\vec{c}}}^{-1}(\vec{c} - \vec{h}_{r_{\vec{c}}})$ satisfies $\sigma_{c'_1} \gg \sigma_{c'_2} \gg \sigma_{c'_3}$, where $\vec{c}\,' = [c'_1, c'_2, c'_3]^T$. We call the images c'_1, c'_2, and c'_3 the eigenimages pc1, pc2, and pc3, respectively. The original image can be perfectly reconstructed from these channels, except for numerical errors, as $\vec{c}_3 = V_{r_{\vec{c}}}\vec{c}\,' + \vec{h}_{r_{\vec{c}}}$; thus $\vec{c} \cong \vec{c}_3$. It is proved in [7] that for homogeneous swatches, neglecting pc3, or both pc2 and pc3, gives good approximations of the original image. Here we generalize the results to all images. Note that the perfect reconstruction does not rely on the compaction condition, while the partial reconstructions do. The partial reconstructions are $\vec{c}_2 = V_{r_{\vec{c}}}[c'_1, c'_2, 0]^T + \vec{h}_{r_{\vec{c}}}$ and $\vec{c}_1 = V_{r_{\vec{c}}}[c'_1, 0, 0]^T + \vec{h}_{r_{\vec{c}}}$. Although this scheme gives a 1-D representation of a given color image, if the computation of $V_{r_{\vec{c}}}$ and $\vec{h}_{r_{\vec{c}}}$ requires embedding huge amounts of information into the original image, or vast computation, then the scheme, although theoretically promising, is not actually applicable. So we seek a method for describing $V_{r_{\vec{c}}}$ and $\vec{h}_{r_{\vec{c}}}$ in a simple way. Defining $r_{\vec{c}} = N_{\vec{c}}$ (the neighborhood) is automatically rejected, because computing $V_{r_{\vec{c}}}$ and $\vec{h}_{r_{\vec{c}}}$ would then need all the neighborhood points of $\vec{c}$, leading to too much redundancy and computational cost. Also, we are not interested in computing and embedding $V_{r_{\vec{c}}}$ and $\vec{h}_{r_{\vec{c}}}$ for each pixel, which leads to 1100% redundancy.
Here, we propose a fast method for computing the corresponding $V_{r_{\vec{c}}}$ and $\vec{h}_{r_{\vec{c}}}$ for all pixels. Assume feeding the given image I to the bi-tree (or, equivalently, the quad-tree) decomposition method. The output of the decomposition is the matrix L, containing the coordinates of the r_i along with the expectation vectors $\vec{h}_i$ and the polarized versions of the PCA matrices (θ_i, φ_i, ψ_i). Storing this portion of the L matrix needs 10n bytes. For ordinary values of n, about 200 in a 512×512 image, L takes about 1/4000 of the original image size. Now assume solving the problem {(r_i; x_i) | i = 1,...,n} using blockwise interpolation, where x_i is the row vector containing h_{i1}, h_{i2}, h_{i3}, θ_i, φ_i, and ψ_i. Note that θ_i, φ_i, and ψ_i are angular values. Denote the solutions of the problem by the functions $\tilde{h}_1$, $\tilde{h}_2$, $\tilde{h}_3$, $\tilde{\theta}$, $\tilde{\phi}$, and $\tilde{\psi}$. From these we compute the functions $\tilde{\vec{h}}: R^2 \to R^3$ and $\tilde{V}: R^2 \to R^9$, giving the value of the expectation vector and the PCA matrix at each pixel, respectively. Using the PCA projection, the three eigenimages pc1, pc2, and pc3 are computed as $[pc1(x,y), pc2(x,y), pc3(x,y)]^T = \tilde{V}(x,y)^{-1}[I(x,y) - \tilde{\vec{h}}(x,y)]$. We call the function $\tilde{\vec{h}}$ the expectation map (Emap) and the polarized version of $\tilde{V}$ the rotation map (Rmap). As the PCA theory states [10], we expect the standard deviations of the three planes to be descending, with σ_pc1 much larger than the others. From linear algebra, for the orthonormal transformation V_r we have $\sigma_{pc1}^2 + \sigma_{pc2}^2 + \sigma_{pc3}^2 = \sigma_r^2 + \sigma_g^2 + \sigma_b^2$. Thus, $k_i = \sigma_{pc_i}^2/(\sigma_{pc1}^2 + \sigma_{pc2}^2 + \sigma_{pc3}^2)$ gives the fraction of the information carried by the i-th eigenimage. Note that k1 + k2 + k3 = 1.
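The per-pixel projection and the energy fractions k_i can be sketched as follows (a sketch; storing the depolarized Rmap as an H×W×3×3 array of orthonormal matrices, so that the inverse is the transpose, is our assumption):

```python
import numpy as np

def eigenimages(img, Emap, Rmap):
    """[pc1, pc2, pc3]^T = V(x,y)^{-1} (I(x,y) - h(x,y)) at every pixel.
    img, Emap: H x W x 3;  Rmap: H x W x 3 x 3 (orthonormal columns)."""
    d = img - Emap
    # V^{-1} = V^T for an orthonormal basis; sum over the color index j
    return np.einsum('hwji,hwj->hwi', Rmap, d)

def energy_fractions(pcs):
    """k_i = var(pc_i) / sum_j var(pc_j); the fractions sum to one."""
    var = pcs.reshape(-1, 3).var(axis=0)
    return var / var.sum()
```

With a well-fitted Rmap, `energy_fractions` is strongly skewed toward the first eigenimage, which is the compaction the watermarking scheme exploits.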

2.6  Proposed Watermarking Method

Assume the image I with the three eigenimages pc1, pc2, and pc3. Although there is no orthogonality constraint in the eigenimage theory, the eigenimage approach can be adapted for watermarking purposes. Assume that the gray-scale image W is to be embedded into I as a watermark, and that I and W are of the same size. First, the dynamic range of W is fitted into the pc3 domain as $\tilde{W} = \sigma_{pc3}(W - h_W)/\sigma_W$. Replacing pc3 with the scaled version of the watermark ($\tilde{W}$), the watermarked image I′ is reconstructed. The process of extracting the watermark is the reverse: compute the eigenimages corresponding to the given image as pc1′, pc2′, and pc3′. Clearly pc3′, when normalized, contains the watermark. We propose the normalization scheme $W' = \frac{255}{2\sigma_{pc3'}}(pc3' - h_{pc3'} + \sigma_{pc3'})$.
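The fitting and normalization steps can be sketched as follows (a sketch; in `extract_watermark` the sign on σ is chosen so that the band [h−σ, h+σ] maps onto [0, 255], which is our reading of the normalization):

```python
import numpy as np

def embed_watermark(pc3, Wm):
    """Fit the watermark's dynamic range into the pc3 channel:
    W~ = sigma_pc3 * (W - mean(W)) / sigma(W); W~ then replaces pc3."""
    return pc3.std() * (Wm - Wm.mean()) / Wm.std()

def extract_watermark(pc3p):
    """Normalize the recovered third eigenimage to a displayable range:
    W' = 255 / (2 sigma) * (pc3' - mean(pc3') + sigma)."""
    s, m = pc3p.std(), pc3p.mean()
    return 255.0 / (2 * s) * (pc3p - m + s)
```

The embedded signal inherits exactly the mean and standard deviation of the pc3 channel it replaces, which is why the watermark stays below the visual threshold.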

3  Experimental Results

Consider the image shown in figure 1-a, which is decomposed with parameters p = 0.5, ε₁ = 5, and ρ = 5 into 91 blocks (see figure 1-b). Figures 1-c and 1-d show the corresponding Emap and Rmap, and figure 2 shows the three pc_i channels. Note that the dynamic range is exaggerated in all eigenimages for better visualization. The stochastic distribution of the pc_i is investigated in figure 2-d, which shows the histograms of the three eigenimages corresponding to the image shown in figure 1-a. In this example, the standard deviations of the pc planes are σ_pc1 = 52, σ_pc2 = 12, and σ_pc3 = 6, leading to k1 = 94%, k2 = 5%, and k3 = 1%.

Figure 1: (a) Original image (adopted from [11]). (b) Result of bi-tree decomposition. (c) Emap. (d) Rmap.

Figure 2: Eigenimages pc1, pc2, and pc3 of the sample image, and their histograms.

Figures 3-a, 3-b, and 3-c show the values of k1, k2, and k3 for the image in figure 1-a for different values of ε₁ and ρ. Except for the trivial cases of ρ ≤ 2 and ε₁ > 9 (which are never actually used), more than 90% of the image energy is compacted in pc1, while pc2 and pc3 hold about 9% and 1% of the energy, respectively. Keeping in mind that k_r = 38%, k_g = 32%, and k_b = 30% in the original image, the energy compaction of the proposed eigenimage extraction method is clear.

Figure 3: Energy distribution of the eigenimages for different values of ε₁ and ρ: (a) k1, (b) k2, (c) k3.

Figure 4 shows the results of reconstructing the sample image from its corresponding eigenimages. While figure 4-a shows the result of reconstructing the image using all three eigenimages, figures 4-b and 4-c show the results of ignoring pc3, and both pc3 and pc2, respectively. The resulting PSNR values are 60dB, 38dB, and 31dB. Note that the finite PSNR of 60dB when reconstructing the image from all eigenimages is caused only by numerical errors, while the two other PSNR values (38dB and 31dB) reflect the loss of information. Figure 5 shows the PSNR values obtained by reconstructing the image using all three channels (figure 5-a), only two channels (figure 5-b), and just one channel (figure 5-c), for different values of ε₁ and ρ. For ε₁ ≤ 8 and ρ ≥ 3, reconstructing the image using all eigenimages gives a high PSNR of about 60dB, while neglecting one and two eigenimages results in PSNR ≥ 35dB and PSNR ≥ 28dB, respectively.

Figure 4: Results of reconstructing the sample image from its eigenimages: (a) using all eigenimages (PSNR = 60dB), (b) ignoring one eigenimage (PSNR = 38dB), and (c) ignoring two eigenimages (PSNR = 31dB).

Figure 5: PSNR values of image reconstruction using (a) three eigenimages, (b) two eigenimages, and (c) one eigenimage, for different values of ε₁ and ρ.

Figure 6-b shows the result of embedding the watermark shown in figure 6-a into the image shown in figure 1-a, with a resulting PSNR of 35dB. Figure 6-c shows the exaggerated difference between the original image and the watermarked image, and figure 6-d shows the extracted watermark. Investigating figure 6-c shows where the method hides the data: at each pixel, the direction of the third principal component is the direction in which data can be placed without affecting the visual appeal of the image.

Figure 6: Results of the proposed watermarking method on the sample image. (a) Watermark signal. (b) Watermarked image with PSNR = 35dB. (c) Exaggerated difference between the original and watermarked images. (d) Extracted watermark.

To test the robustness of the proposed watermarking method against invasive attacks, 42 sample images (including Lena, Mandrill, Girl, Couple, Airplane, and Peppers) and 9 watermarks (the logos of Sharif University of Technology, IEEE, and Elsevier) are investigated. The watermarked images are attacked by various methods using Adobe Photoshop 6.0. Figure 7 shows some of the attacked watermarked images, and figure 8 shows the corresponding extracted watermarks.

Figure 7: The attacked watermarked images.

Figure 8: The extracted watermarks.

Investigating figure 8, along with numerous other tests, shows that the proposed watermarking method is robust against linear and nonlinear geometrical transformations, including rotation, scaling, cropping, and other geometrical distortions. It is also robust against occlusion, artistic effects, captioning, noise addition, enhancement operations such as brightening and contrast increase (even when performed locally), lossy compression, frequency-domain filtering, and different kinds of blurring.

Table 1 compares the proposed watermarking method with the best methods available in the literature. The table lists the watermark capacity of each method when embedding into a 512×512 color image, along with the domain in which the data is embedded. The attack resistance of the different approaches is also compared. It is observed in different experiments that the standard deviation of pc3 in a typical image is more than 4. Thus, using the proposed watermarking method on a 512×512 color image, at least a same-sized 2bpp image can be used as the watermark signal. This makes the watermark capacity of the proposed method 64KB, four times more than the highest capacity among the available approaches (the method by Barni et al. [12]). The only approaches using color vectors are those proposed by Chou et al. [6] and Tsai et al. [4]. Note that the method by Chou et al. [6] is the only other method showing resistance to linear point operations such as brightening and contrast enhancement. It must be emphasized that their method's resistance is limited to the global versions of such operations, while our proposed method is resistant even to local linear point operations (see figure 7-f,g). Unfortunately, little attention has been paid in the literature to nonlinear geometrical operations, such as elastic and perspective transformations, and to image editing processes, such as adding text, artistic effects, and occlusion. While many copyrighted images are used in books, posters, and websites, where they appear with some level of artistic manipulation, the inability of the available watermarking literature to deal with these attacks is a real shortcoming. Table 1 shows that the proposed method is the only available method resistant to all seven groups of attacks listed in its caption.

4  Conclusions

A new PCA-based watermarking method is proposed that uses the spectral redundancy of an image to embed a same-sized gray-scale image into it. The experimental results show that, while the method gives high PSNR values and no subjective artifacts, it is highly resistant to invasive attacks. The method responds promisingly when dealing with attacks in the spatial domain (linear and nonlinear geometrical transformations), the spectral domain (manipulating contrast and brightness, both globally and locally), and the frequency domain (filtering and blurring). To the best knowledge of the authors, no watermarking method with such robustness is available in the literature.

Acknowledgement

The first author wishes to thank Ms. Azadeh Yadollahi for her encouragement and invaluable ideas.

Table 1: Comparison of different watermarking methods with the proposed method for a 512×512 image. -: Not Resistant. ~: Partially Resistant. ✓: Completely Resistant. [Abbreviations: Res: Resistance, G: Grayscale, SCC: Single Color Component, CV: Color Vector, LG: Linear Geometrical Transformations, NLG: Nonlinear Geometrical Transformations, LPO: Linear Point Operations, NLPO: Nonlinear Point Operations, SO: Spatial Domain Operations, EO: Editing Operations, CMP: JPEG Compression].

Method     [13]  [14]  [15]  [6]   [3]   [12]  [16]  [17]   [18]  [19]  [4]   Proposed
Capacity   4KB   8KB   8KB   2KB   1KB   16KB  8B    0.5KB  60B   64B   2KB   64KB
Domain     G     G     G     CV    SCC   SCC   SCC   G      G     G     CV    CV
Res. LG    ~     ✓     ~     ~     ~     ✓     -     ✓      ✓     ~     ~     ✓
Res. NLG   -     -     -     -     -     -     -     -      -     -     -     ✓
Res. LPO   -     -     -     ~     -     -     -     -      -     -     -     ✓
Res. NLPO  -     ~     ~     ~     ~     ~     -     ~      -     -     ~     ✓
Res. SO    -     ~     ~     ~     ~     ~     -     ~      ~     ~     ~     ✓
Res. EO    -     -     -     -     -     -     -     -      -     -     -     ✓
Res. CMP   ✓     ✓     ✓     ✓     ✓     ✓     ~     ✓      ✓     ✓     ✓     ✓



References

[1]
M. Swanson, M. Kobayashi, and A. Tewfik, "Multimedia data-embedding and watermarking technologies," Proceedings of the IEEE, vol. 86(6), pp. 1064-1087, June 1998.
[2]
M. Kutter, F. Jordan, and F. Bossen, "Digital watermarking of color images using amplitude modulation," Electronic Imaging, vol. 7(2), pp. 326-332, April 1998.
[3]
P.-T. Yu, H.-H. Tsai, and J.-S. Lin, "Digital watermarking based on neural networks for color images," Signal Processing, vol. 81, pp. 663-671, 2001.
[4]
P. Tsai, Y.-C. Hu, and C.-C. Chang, "A color image watermarking scheme based on color quantization," Signal Processing, vol. 84, pp. 95-106, 2004.
[5]
S. Gilani, I. Kostopoulos, and A. Skodras, "Color image-adaptive watermarking," in 14th Int. Conf. on Digital Signal Processing(DSP2002), vol. 2, Santorini, Greece, 1-3 July 2002, pp. 721-724.
[6]
C.-H. Chou and T.-L. Wu, "Embedding color watermarks in color images," EURASIP Journal on Applied Signal Processing, vol. 1, pp. 327-332, 2001.
[7]
A. Abadpour and S. Kasaei, "A new parametric linear adaptive color space and its pca-based implementation," in The 9th Annual CSI Computer Conference, CSICC, Tehran, Iran, Feb. 2004, pp. 125-132.
[8]
--, "Performance analysis of three homogeneity criteria for color image processing," in IPM Workshop on Computer Vision, Tehran, Iran, 2004.
[9]
H. Samet, "Region representation: Quadtrees from boundary codes," Comm. ACM, vol. 21, pp. 163-170, March 1980.
[10]
H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, pp. 417-441, 1933.
[11]
S. T. Seifi and A. Qanavati, "Digital color image archive," Qnavati@mehr.sharif.edu.
[12]
M. Barni, F. Bartolini, and A. Piva, "Multichannel watermarking of color images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12(3), pp. 142-156, 2002.
[13]
J. Ruanaidh, W. Dowling, and F. Boland, "Watermarking digital images for copyright protection," IEE Proceedings on Vision, Signal and Image Processing, vol. 143(4), pp. 250-256, August 1996.
[14]
A. Nikolaidis and I. Pitas, "Robust watermarking of facial images based on salient geometric pattern matching," IEEE Transactions on Multimedia, vol. 2(3), pp. 172-184, September 2000.
[15]
M.-S. Hsieh, D.-C. Tseng, and Y.-H. Huang, "Hiding digital watermarks using multiresolution wavelet transform," IEEE Transactions on Industrial Electronics, vol. 48(5), pp. 875-882, October 2001.
[16]
M. Kutter and S. Winkler, "A vision-based masking model for spread-spectrum image watermarking," IEEE Transactions on Image Processing, vol. 11(1), pp. 16-25, January 2002.
[17]
C.-W. Tang and H. Hang, "A feature-based robust digital image watermarking scheme," IEEE Transactions on Signal Processing, vol. 51(4), pp. 950-959, April 2003.
[18]
X. Kang, J. Huang, Y. Q. Shi, and Y. Lin, "A dwt-dft composite watermarking scheme robust to both affine transform and jpeg compression," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13(8), pp. 776-786, August 2003.
[19]
S.-H. Wang and Y.-P. Lin, "Wavelet tree quantization for copyright protection watermarking," IEEE Transactions on Image Processing, vol. 13(2), pp. 154-165, February 2004.



