Although electronic storage and transmission have been beneficial in extending intellectual communication, the ease of copying and transmitting data over the internet has increased piracy and illegal use of copyrighted artworks. As a result, digital watermarking has been under thorough investigation in recent years. The main aim of data hiding is to add a transparent signature to the data in an attack-resistant fashion, to be used for ownership claims. In this paper, a new PCA-based watermarking method is proposed for color images. In contrast to the dominant spatial approaches and the few available semi-spectral ones, the proposed method uses the true redundancy in the spectral domain. The proposed watermarking method is resistant to the available attacks, including sophisticated geometrical transformations, artistic effects, lossy compression, frequency-domain filtering, motion blur, occlusion, and enhancement. Experimental results show the efficiency of the proposed algorithm.
The spread of electronic data on the internet has created the need for larger steps toward copyright protection. Since the development of multimedia systems and the cyberworld, there has been a need to watermark artworks for legal reasons [1]. Digital watermarking performs significantly better than conventional cryptography systems because it adds a transparent signature to the data: the artwork is not protected against use by authorized users, but the ownership is preserved. Some authors work on methods for general-purpose digital data watermarking (e.g., [2,3]). These approaches do not use problem-specific constraints and possibilities. Also, as watermarking is tightly related to cryptography, some researchers have tried to solve both problems at once (e.g., see [2,4]). Here, we draw a line between the two different tasks of finding empty places in the image for embedding data and encrypting that data. The main concern of data hiding is to find those portions of the image domain that are ignored by the human observer. Since the birth of the watermarking concept, many researchers have worked on the redundancy of natural images in the spatial domain, resulting in high-performance spatial watermarking techniques [1]. Although some researchers have worked on data hiding using spectral redundancy (e.g., see [3]), little work has been done on finding the truly redundant spectral data in the image. Generally, authors rely on subjectively investigated assumptions about the color channels [3,5,6]. For example, in [3] the authors add the watermark to the blue plane. In [6] the authors use the CIELab color space in a simple quantization method to add the color watermark in the least significant bits.
Many other authors have tried quantization techniques [3,4,6], which manifest themselves as additive noise, so their robustness against noise-removal attacks is doubtful. In this paper, we address the special problem of color image watermarking using the perceptual gaps of natural images in the spectral domain, which can be filled with watermark data, without disturbing the visual appeal of the original image, using the eigenimage theory.
In [7], the authors proposed to use the error made by neglecting the two less important principal components as a likelihood measure. The likelihood of the vector \vec{c} with respect to the cluster r is defined as e_r(\vec{c}) = ||(\vec{c} − \vec{h}) − \vec{v}\vec{v}'(\vec{c} − \vec{h})||, where \vec{v} is the direction of the first principal component, \vec{h} is the expectation vector, and ||·|| denotes the normalized L_1 norm. In [7] the authors also proposed the following stochastic norm as the region homogeneity: e_{r,p} = arg_e ( P_{\vec{x} ∈ r}{ e_r(\vec{x}) ≤ e } ≥ p ), where p is the inclusion percentage. The criterion e_{r,p} is proved to outperform the conventional Euclidean and Mahalanobis approaches [8]. The criterion is used for quadtree decomposition [9] as a rough image segmentation tool.
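As an illustration, the e_{r,p} criterion can be sketched as the p-quantile of the per-pixel residuals left after projecting a block's colors onto their first principal direction. This is a sketch under our reading of [7]; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def homogeneity(block, p=0.5):
    """Stochastic homogeneity e_{r,p} of a color block (a sketch).

    Each pixel's residual w.r.t. the first principal component of the block's
    colors is computed; the p-quantile of these residuals is returned. Small
    values mean the block is well explained by a single color axis.
    """
    colors = block.reshape(-1, 3).astype(float)
    h = colors.mean(axis=0)                      # expectation vector h
    centered = colors - h
    # first principal direction via SVD of the centered color cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v1 = vt[0]
    # residual after projecting onto the first principal component
    resid = centered - np.outer(centered @ v1, v1)
    errors = np.abs(resid).sum(axis=1) / 3.0     # normalized L1 norm per pixel
    return np.quantile(errors, p)
```

A block whose colors lie exactly on a line in RGB space yields a homogeneity of (numerically) zero, while a randomly colored block yields a large value.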
Having a suitable homogeneity criterion, the image can be decomposed into homogeneous blocks. Starting with the entire image area, the tree is produced using the homogeneity criterion e_{r,p} ≤ e_1, where e_1 is a user-selected parameter, mostly in the range [1..10]. In a W×H image, the depth of a w×h block r is defined as ρ_r = max{log_2(W/w), log_2(H/h)}, and no block is permitted to reach a depth greater than a preselected marginal depth value ρ. During the decomposition stage, all the information is saved in a 22×N matrix called L, where N is the number of blocks; each column of L consists of x_1, y_1, x_2, y_2, h_1, h_2, h_3, V_{ij} (i,j = 1,…,3), and some reserved parameters. Here, [h_1,h_2,h_3]^T is the expectation of the color information and the V_{ij} are the elements of the PCA matrix V, both corresponding to the block r. Assume that the image I is fed to the bitree decomposition method. If the block r is not homogeneous enough, then rather than the deterministic choice of sub-blocks of the quadtree decomposition method, two alternatives for decomposition are proposed here. Assume that splitting r vertically into two equal rectangles gives the two regions r_1 and r_1', while splitting horizontally results in r_2 and r_2'. Now, if e_{r_1,p} + e_{r_1',p} < e_{r_2,p} + e_{r_2',p} and the depth limitation permits, the block is split vertically; otherwise (if the depth limitation is met) it is split horizontally. In the new method, the rectangular clipping is preserved while the block shape changes to best fit the image details.
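The splitting rule above can be sketched as follows. Here `inhomog` is a stand-in for the e_{r,p} criterion (a simple total-standard-deviation proxy is used for illustration), and the per-dimension depth accounting is simplified to a recursion counter; all names are illustrative.

```python
import numpy as np

def inhomog(block):
    # proxy for the homogeneity criterion: per-channel std. dev., summed
    return float(block.reshape(-1, block.shape[-1]).std(axis=0).sum())

def bitree(image, x, y, w, h, eps=5.0, max_depth=5, depth=0, out=None):
    """Bitree decomposition sketch: split a block along the axis (vertical
    or horizontal) whose two halves have the lower summed inhomogeneity,
    until each block is homogeneous or the depth limit is reached."""
    if out is None:
        out = []
    block = image[y:y+h, x:x+w]
    if inhomog(block) <= eps or depth >= max_depth:
        out.append((x, y, w, h))
        return out
    # candidate splits: vertical (left/right) vs. horizontal (top/bottom)
    v_cost = inhomog(image[y:y+h, x:x+w//2]) + inhomog(image[y:y+h, x+w//2:x+w])
    h_cost = inhomog(image[y:y+h//2, x:x+w]) + inhomog(image[y+h//2:y+h, x:x+w])
    if v_cost < h_cost:
        bitree(image, x, y, w//2, h, eps, max_depth, depth+1, out)
        bitree(image, x+w//2, y, w-w//2, h, eps, max_depth, depth+1, out)
    else:
        bitree(image, x, y, w, h//2, eps, max_depth, depth+1, out)
        bitree(image, x, y+h//2, w, h-h//2, eps, max_depth, depth+1, out)
    return out
```

On an image whose left half is black and right half is white, the cost comparison correctly selects a single vertical split, whereas a quadtree would be forced to produce four sub-blocks.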
Consider the space R^n and a set of n basis vectors \vec{v}_i, i = 1,…,n. Storing this set of vectors needs n^2 memory cells when the redundancy of the data is neglected. Having in mind that a set of basis vectors is an orthonormal set, the actually needed memory can be reduced. In fact, a set of basis vectors of R^n is a member of R^{n^2}, with n constraints of normality (||\vec{v}_i|| = 1, i = 1,…,n) and n(n−1)/2 constraints of orthogonality (\vec{v}_i ⊥ \vec{v}_j, i,j = 1,…,n, i ≠ j). Thus, the above-mentioned set of basis vectors is an unconstrained member of an m-dimensional space, with m = n^2 − n − n(n−1)/2 = n(n−1)/2. Hence, storing a set of basis vectors of R^n in n(n−1)/2 memory cells contains zero redundancy. To make this representation unique, it is crucial to make the set of basis vectors right-rotating (RR). In 2-D spaces, RR means (\vec{v}_1 × \vec{v}_2)·\vec{k} > 0, where × and · stand for the outer and inner products, respectively, and \vec{k} is the unit normal of the plane. In 3-D spaces, RR means (\vec{v}_1 × \vec{v}_2)·\vec{v}_3 > 0. Setting n = 2 leads to m = 1, which means that any set of RR basis vectors in the xy plane can be specified uniquely by a single angle. Similarly, the case of n = 3 results in m = 3, which is used in this paper. Note that in both cases the m parameters are angles between vectors and some fixed planes; thus, we call this method the polarization method. Consider the three right-rotating vectors \vec{v}_1, \vec{v}_2, \vec{v}_3 in R^3; we define the three angles θ, φ, and ψ as follows. This representation is a manipulated version of the well-known set of Euler angles. Using \vec{v}^{p} as the projection of \vec{v} on the plane p (e.g., \vec{v}_1^{xy}), the three angles are defined as:
θ = ∠(\vec{v}_1^{xy}, [1,0]^T),  φ = ∠((R_{−θ}^{xy}\vec{v}_1)^{xz}, [1,0]^T),  ψ = ∠((R_{−φ}^{xz}R_{−θ}^{xy}\vec{v}_2)^{yz}, [1,0]^T)    (1)
where ∠(\vec{v},\vec{u}) stands for the angle between the two vectors \vec{v},\vec{u} ∈ R^2, and R_a^p is the 3×3 matrix of an a-radian counterclockwise rotation in the plane p. It can easily be proved, using 3-D geometrical concepts, that the 3×3 matrix V with \vec{v}_i as its ith column satisfies R_{−ψ}^{yz}R_{−φ}^{xz}R_{−θ}^{xy}V = I. Having in mind that (R_a^p)^{−1} = R_{−a}^p, we have V = R_θ^{xy}R_φ^{xz}R_ψ^{yz}. While equation (1) computes the three angles θ, φ, and ψ from the basis vectors (polarization), the above matrix multiplication reproduces the basis from θ, φ, and ψ (depolarization).
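The polarization/depolarization pair can be sketched as below. This follows the Euler-angle-like scheme of equation (1); the sign conventions are one consistent reading and may differ from the paper's, and all function names are illustrative.

```python
import numpy as np

def rot(plane, ang):
    """3x3 rotation by `ang` radians in the given coordinate plane."""
    i, j = {"xy": (0, 1), "xz": (0, 2), "yz": (1, 2)}[plane]
    R = np.eye(3)
    c, s = np.cos(ang), np.sin(ang)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

def polarize(V):
    """Reduce a right-rotating 3x3 orthonormal basis V (columns v1, v2, v3)
    to three angles (theta, phi, psi), as in eq. (1)."""
    theta = np.arctan2(V[1, 0], V[0, 0])          # angle of v1 in the xy plane
    u = rot("xy", -theta) @ V[:, 0]
    phi = np.arctan2(u[2], u[0])                  # angle of rotated v1 in xz
    w = rot("xz", -phi) @ rot("xy", -theta) @ V[:, 1]
    psi = np.arctan2(w[2], w[1])                  # angle of rotated v2 in yz
    return theta, phi, psi

def depolarize(theta, phi, psi):
    """Rebuild the basis: V = R_theta^xy R_phi^xz R_psi^yz."""
    return rot("xy", theta) @ rot("xz", phi) @ rot("yz", psi)
```

A round trip through polarize/depolarize reproduces any right-rotating (determinant +1) orthonormal basis, since the third column is fixed by the first two.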
Assume a partition of the N_W×N_H image plane into the set of rectangular regions {r_i | i=1,…,n}, with corresponding values {l_i | i=1,…,n} satisfying l_i = arg_l(∀\vec{c} ∈ r_i, f(\vec{c}) ≈ l), for an unknown smooth function f: R^2 → R. The problem is to find \tilde{f} as an approximation of f using the r_i and l_i. We address this problem as blockwise interpolation of the set {(r_i; l_i) | i=1,…,n}. Note that in the case that the partition is a rectangular grid, the problem reduces to an ordinary 2-D interpolation task. Here, we use the same idea with some manipulations, using a reformulated version of the well-known low-pass Butterworth filter:
B_{t,N}(x) = (1 + (x/t)^{2N})^{−1/2}    (2)
N = rnd( (1/2) log_{a/b}( (α^{−2} − 1) / (β^{−2} − 1) ) ),    t = a / (α^{−2} − 1)^{1/(2N)}    (3)
where rnd(x) is the nearest integer to x. The function B_{t,N}(x) satisfies the two conditions B_{t,N}(a) ≈ α and B_{t,N}(b) ≈ β. The 2-D version of this function is defined as B_{t,N}^{w,h}(x,y) = B_{wt,N}(x)B_{ht,N}(y), where w and h control the spread of the function in the x and y directions, respectively. Assuming that the region r_i has its center at (x_i,y_i), while its width and height are w_i and h_i, respectively, we propose the function \tilde{f} as:
\tilde{f}(x,y) = [ Σ_i l_i B_{t,N}^{w_i/2, h_i/2}(x − x_i, y − y_i) ] / [ Σ_i B_{t,N}^{w_i/2, h_i/2}(x − x_i, y − y_i) ]    (4)
The function \tilde{f}(x,y) proposed in (4) is a smooth version of the initial staircase function f_0(x,y) = l_i for [x,y]^T ∈ r_i. With proper values of the parameters a, b, α, and β, the function \tilde{f}(x,y) satisfies the problem conditions. The proper set of parameters must force the corresponding kernel to be nearly one in the whole of r_i except for the borders, while preventing r_i from intruding into the interior points of r_j for i ≠ j. Selecting a near-unity (but smaller) value for a, together with α close to one, limits the decline of the ceiling of the function, while setting b = 1 and a not-too-large value for β controls the effect of neighboring regions on each other. Setting a = 1^−, α = 1, b = 1^+, and β = 0 is the marginal choice, leading to the non-smoothed staircase function.
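Equations (2) through (4) can be sketched together as follows. The parameter defaults below are illustrative choices satisfying the design intent (plateau inside the block, fast fall-off past the border), not values taken from the paper.

```python
import numpy as np

def butter_params(a, b, alpha, beta):
    """Solve B_{t,N}(a)=alpha, B_{t,N}(b)=beta for the order N and cutoff t
    of the reformulated low-pass Butterworth kernel (eq. (3))."""
    ra = alpha ** -2 - 1.0
    rb = beta ** -2 - 1.0
    N = max(1, round(0.5 * np.log(ra / rb) / np.log(a / b)))
    t = a / ra ** (1.0 / (2 * N))
    return N, t

def B(x, t, N):
    """1-D kernel B_{t,N}(x) = (1 + (x/t)^{2N})^{-1/2}  (eq. (2))."""
    return (1.0 + (x / t) ** (2 * N)) ** -0.5

def blockwise_interp(blocks, shape, a=0.9, b=1.5, alpha=0.95, beta=0.05):
    """Blockwise interpolation (eq. (4)): a weighted average whose kernels
    are ~1 inside each block and fall off at its border.
    `blocks` is a list of (cx, cy, w, h, value) with block centers (cx, cy)."""
    N, t = butter_params(a, b, alpha, beta)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    num = np.zeros(shape)
    den = np.zeros(shape)
    for cx, cy, w, h, val in blocks:
        # per-block separable kernel, coordinates normalized by w/2 and h/2
        k = B((xs - cx) / (w / 2), t, N) * B((ys - cy) / (h / 2), t, N)
        num += val * k
        den += k
    return num / den
```

The result is flat near each block center, takes intermediate values at shared borders, and is smooth everywhere, which is exactly the smoothed-staircase behavior described above.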
As a generalization of blockwise interpolation, consider the set of (m+1)-tuples {(r_i; l_{i1},…,l_{im}) | i=1,…,n}, where the set of functions f_j, j=1,…,m, is desired to satisfy l_{ij} = arg_l(∀\vec{c} ∈ r_i, f_j(\vec{c}) ≈ l), for a set of unknown functions f_j: R^2 → R, j=1,…,m. In a solution similar to (4), we propose:
\tilde{f}_j(x,y) = [ Σ_i l_{ij} B_{t,N}^{w_i/2, h_i/2}(x − x_i, y − y_i) ] / [ Σ_i B_{t,N}^{w_i/2, h_i/2}(x − x_i, y − y_i) ]    (5)
Here, because the set of base regions is the same for all \tilde{f}_j, the total performance is increased by computing each kernel just once for all values of j. The problem then reduces to computing m weighted averages.
In polar coordinates, because of the 2π discontinuity, ordinary algebraic operations on angular variables lead to spurious results. For example, (0 + 2π)/2 = π, while the average of 0 radians and 2π radians actually equals 0 ≡ 2π radians. To overcome this problem, we propose a new method: for the given problem {(r_i; θ_i) | i=1,…,n}, solve the problem {(r_i; cos θ_i, sin θ_i) | i=1,…,n} to find the two functions f_sin and f_cos, and then find θ using ordinary trigonometric methods. The interpolation is performed on both sin θ_i and cos θ_i to avoid ambiguity in the polar plane.
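The sin/cos trick can be sketched in isolation as a weighted angular average; the kernel weights would come from the blockwise interpolation above, and the function name is illustrative.

```python
import numpy as np

def interp_angles(weights, angles):
    """Weighted interpolation of angular values without the 2*pi wrap-around
    problem: interpolate sin and cos separately, then recover the angle
    (in (-pi, pi]) with the quadrant-aware arctangent."""
    w = np.asarray(weights, float)
    a = np.asarray(angles, float)
    s = np.sum(w * np.sin(a)) / np.sum(w)
    c = np.sum(w * np.cos(a)) / np.sum(w)
    return np.arctan2(s, c)
```

Averaging 350 degrees and 10 degrees this way yields 0 degrees, whereas naive averaging of the raw values would yield the spurious 180 degrees.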
Assume the PCA matrix V_r and the expectation vector \vec{h}_r corresponding to the homogeneous cluster r. For a color vector \vec{c} belonging to r, \vec{c}' = V_r^{−1}(\vec{c} − \vec{h}_r) gives the PCA coordinates. Assume that we can somehow find the color cluster r_{\vec{c}} for each color vector \vec{c}, where r_{\vec{c}} describes the color mood of \vec{c}, in the sense that \vec{c}' = V_{r_{\vec{c}}}^{−1}(\vec{c} − \vec{h}_{r_{\vec{c}}}) satisfies σ_{c'_1} ≫ σ_{c'_2} ≫ σ_{c'_3}, where \vec{c}' = [c'_1, c'_2, c'_3]^T. We call the images c'_1, c'_2, and c'_3 the eigenimages pc_1, pc_2, and pc_3, respectively. The original image can be perfectly reconstructed from these channels, except for numerical errors, as \vec{c}_3 = V_{r_{\vec{c}}}\vec{c}' + \vec{h}_{r_{\vec{c}}}; thus, \vec{c} ≈ \vec{c}_3. It is proved in [7] that for homogeneous swatches, neglecting pc_3, or both pc_2 and pc_3, gives good approximations of the original image. Here we generalize the results to all images. Note that the perfect reconstruction does not rely on the compaction condition, while the partial reconstructions \vec{c}_2 = V_{r_{\vec{c}}}[c'_1, c'_2, 0]^T + \vec{h}_{r_{\vec{c}}} and \vec{c}_1 = V_{r_{\vec{c}}}[c'_1, 0, 0]^T + \vec{h}_{r_{\vec{c}}} do. Although this scheme gives a 1-D representation of a given color image, if the computation of V_{r_{\vec{c}}} and \vec{h}_{r_{\vec{c}}} requires embedding huge information into the original image, or vast computation, then the scheme, though theoretically promising, is not practically applicable. So we seek a method for describing V_{r_{\vec{c}}} and \vec{h}_{r_{\vec{c}}} in a simple way. Defining r_{\vec{c}} = N_{\vec{c}} (the neighborhood of \vec{c}) is automatically rejected, because computing V_{r_{\vec{c}}} and \vec{h}_{r_{\vec{c}}} would then need all the neighborhood points of \vec{c}, leading to too much redundancy and computational cost.
We are also not interested in computing and embedding V_{r_{\vec{c}}} and \vec{h}_{r_{\vec{c}}} for each pixel, which leads to enormous redundancy. Here, we propose a fast method for computing the corresponding V_{r_{\vec{c}}} and \vec{h}_{r_{\vec{c}}} for all pixels. Assume feeding the given image I to the bitree (or, equivalently, the quadtree) decomposition method. The output of the decomposition method is the matrix L containing the coordinates of the r_i along with the expectation vectors \vec{h}_i and the polarized versions of the PCA matrices (θ_i, φ_i, ψ_i). Storing this portion of the L matrix needs 10n bytes. For ordinary values of n, about 200 in a 512×512 image, L takes about 1/4000 of the original image size. Now assume solving the problem {(r_i; x_i) | i=1,…,n} using blockwise interpolation, where x_i is the row vector containing h_{i1}, h_{i2}, h_{i3}, θ_i, φ_i, and ψ_i. Note that the three values θ_i, φ_i, and ψ_i are angular. Denote the solutions of the problem by the functions \tilde{h}_1, \tilde{h}_2, \tilde{h}_3, \tilde{θ}, \tilde{φ}, and \tilde{ψ}. From these we compute the functions \tilde{\vec{h}}: R^2 → R^3 and \tilde{V}: R^2 → R^9, giving the value of the expectation vector and the PCA matrix at each pixel, respectively. Using the PCA projection, the three eigenimages pc_1, pc_2, and pc_3 are computed as [pc_1(x,y), pc_2(x,y), pc_3(x,y)]^T = \tilde{V}(x,y)^{−1}[I(x,y) − \tilde{\vec{h}}(x,y)]. We call the function \tilde{\vec{h}} the expectation map (Emap) and the polarized version of \tilde{V} the rotation map (Rmap). As the PCA theory states [10], we expect the standard deviations of the three planes to be descending, with σ_{pc_1} much larger than the others. From linear algebra we have that, for the orthonormal transformation V_r, σ_{pc_1}^2 + σ_{pc_2}^2 + σ_{pc_3}^2 = σ_r^2 + σ_g^2 + σ_b^2. Thus, k_i = σ_{pc_i}^2/(σ_{pc_1}^2 + σ_{pc_2}^2 + σ_{pc_3}^2) gives the fraction of the information available in the ith eigenimage. Note that k_1 + k_2 + k_3 = 1.
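The energy fractions k_i follow directly from the variances of the eigenimage planes; a minimal sketch (function name is illustrative):

```python
import numpy as np

def energy_fractions(pc):
    """Fractions k_i of the total color energy carried by each eigenimage.
    `pc` has shape (H, W, 3): the pc1, pc2, pc3 planes stacked."""
    var = pc.reshape(-1, 3).var(axis=0)   # per-plane variance sigma_{pc_i}^2
    return var / var.sum()                # k_1 + k_2 + k_3 = 1
```

With standard deviations of roughly 52, 12, and 6 (as in the example of the next section), this yields k_1 of about 94%, matching the paper's figures.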
Assume the image I with the three eigenimages pc_1, pc_2, and pc_3. Although there is no orthogonality constraint in the eigenimage theory, the eigenimage approach can be adapted for watermarking purposes. Assume that the grayscale image W is to be embedded into I as a watermark, and that I and W are of the same size. First, the dynamic range of W is fitted to the pc_3 domain as \tilde{W} = \tilde{σ}_{pc_3}(W − h_W)/σ_W. Replacing pc_3 with the scaled version of the watermark \tilde{W}, the watermarked image I' is reconstructed. The process of extracting the watermark is the reverse: compute the eigenimages corresponding to the given image as pc'_1, pc'_2, and pc'_3. Clearly, pc'_3, when normalized, contains the watermark. We propose the normalization scheme W' = 255(pc'_3 − h_{pc'_3} + σ_{pc'_3})/(2σ_{pc'_3}).
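The two scaling steps can be sketched as below. The extraction formula follows one plausible reading of the normalization above (mapping the mean-centered plane's ±1σ range to [0, 255], then clipping); function names are illustrative.

```python
import numpy as np

def fit_watermark(W, sigma_pc3):
    """Scale the watermark W to the dynamic range of the pc3 plane:
    zero mean, standard deviation matching pc3 (the embedding step)."""
    return sigma_pc3 * (W - W.mean()) / W.std()

def extract_watermark(pc3):
    """Normalize the recovered pc3 plane back to [0, 255]:
    W' = 255 (pc3 - mean + sigma) / (2 sigma), clipped to the 8-bit range."""
    mu, sigma = pc3.mean(), pc3.std()
    w = 255.0 * (pc3 - mu + sigma) / (2.0 * sigma)
    return np.clip(w, 0, 255)
```

The round trip is a monotone (affine, then clipped) map, so the extracted watermark remains strongly correlated with the original even though extreme values saturate.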
Assume the image shown in figure 1a, which is decomposed with the parameters p = 0.5, e_1 = 5, and ρ = 5 into 91 blocks (see figure 1b). Figures 1c and 1d show the corresponding Emap and Rmap, and figure 2 shows the three pc_i channels. Note that the dynamic range is exaggerated in all eigenimages for better visualization. The statistical distribution of the pc_i is investigated in figure 2d, which shows the histograms of the three eigenimages corresponding to the image shown in figure 1a. In this example, the standard deviations of the pc planes are computed as σ_{pc_1} = 52, σ_{pc_2} = 12, and σ_{pc_3} = 6, leading to k_1 = 94%, k_2 = 5%, and k_3 = 1%.
Figure 1: (a) Original Image (adopted from [11]). (b) Result of Bitree decomposition. (c) Emap. (d) Rmap.
Figure 2: Eigenimages of the sample image and their histogram.
Figures 3a, 3b, and 3c show the values of k_1, k_2, and k_3 for the image in figure 1a, for different values of e_1 and ρ. Apart from the trivial cases of ρ ≤ 2 and e_1 > 9 (which are never actually used), more than 90% of the image energy is compacted in pc_1, while pc_2 and pc_3 hold about 9% and 1% of the energy, respectively. Having in mind that k_r = 38%, k_g = 32%, and k_b = 30% in the original image, the energy compaction of the proposed eigenimage extraction method is clear.
Figure 3: The energy distribution of the eigenimages for different values of e_{1} and r. (a)k_{pc1}, (b)k_{pc2}, (c)k_{pc3}.
Figure 4 shows the results of reconstructing the sample image from its corresponding eigenimages. While figure 4a shows the result of reconstructing the image using all three eigenimages, figures 4b and 4c show the results of ignoring pc_3, and both pc_3 and pc_2, respectively. The resulting PSNR values are 60dB, 38dB, and 31dB. Note that the finite PSNR of 60dB (rather than infinity) when reconstructing the image using all eigenimages is caused only by numerical errors, while the two other PSNR values (38dB and 31dB) reflect the loss of information. Figure 5 shows the PSNR values obtained by reconstructing the image using all three channels (figure 5a), only two channels (figure 5b), and just one channel (figure 5c), for different values of e_1 and ρ. For e_1 ≤ 8 and ρ ≥ 3, reconstructing the image using all eigenimages gives a high PSNR of about 60dB, while neglecting one and two eigenimages results in PSNR ≥ 35dB and PSNR ≥ 28dB, respectively.
Figure 4: Results of reconstructing the sample image from its eigenimages, (a) using all eigenimages (PSNR=60dB), (b) ignoring one eigenimage (PSNR=38dB), and (c) ignoring two eigenimages (PSNR=31dB).
Figure 5: PSNR values of image reconstruction using (a) three eigenimages, (b) two eigenimages, and (c) one eigenimage for different values of e_{1} and r.
Figure 6b shows the result of embedding the watermark shown in figure 6a into the image shown in figure 1a, with a resulting PSNR of 35dB. Figure 6c shows the exaggerated difference between the original image and the watermarked image, and figure 6d shows the extracted watermark. Investigating figure 6c shows where the method hides the data: at each pixel, the direction of the third principal component shows the direction in which data can be placed without affecting the visual appeal of the image.
Figure 6: Results of the proposed watermarking method performed on the sample image. (a) Watermark signal. (b) Watermarked image with PSNR=35dB. (c) Exaggerated difference between the original image and the watermarked image. (d) Extracted watermark.
To test the robustness of the proposed watermarking method against invasive attacks, 42 sample images, including Lena, Mandrill, Girl, Couple, Airplane, and Peppers, and 9 watermarks, including the logos of Sharif University of Technology, IEEE, and Elsevier, are investigated. The watermarked images are attacked by various methods using Adobe Photoshop 6.0. Figure 7 shows some of the attacked watermarked images, and figure 8 shows the corresponding extracted watermarks.
Figure 7: The attacked watermarked images.
Figure 8: The extracted watermarks.
Investigating figure 8, along with numerous other tests, shows that the proposed watermarking method is robust against linear and nonlinear geometrical transformations, including rotation, scaling, cropping, and other geometrical distortions. It is also robust against occlusion, artistic effects, captioning, noise addition, enhancement operations such as brightening and contrast increase (even when performed locally), lossy compression, frequency-domain filtering, and different kinds of blurring.
Table 1 compares the proposed watermarking method with the best methods available in the literature. The table lists the watermark capacity of each method when embedding into a 512×512 color image, along with the domain in which the data is embedded; the attack resistance of the different approaches is also compared. It is observed in different experiments that the standard deviation of pc_3 in a typical image is more than 4. Thus, using the proposed watermarking method on a 512×512 color image, at least a same-sized 2-bpp image can be used as the watermark signal. This makes the watermark capacity of the proposed method 64KB, four times the highest capacity of the available approaches (the method by Barni et al. [12]). The only approaches using color vectors are those proposed by Chou et al. [6] and Piyu et al. [4]. Note that the method of Chou et al. [6] is the only method showing resistance to linear point operations such as brightening and contrast enhancement. It must be emphasized that their method's resistance is limited to the global versions of such operations, while our proposed method is resistant even to local linear point operations (see figures 7f and 7g). Unfortunately, no attention has been paid in the literature to nonlinear geometrical operations, such as elastic and perspective transformations, or to image editing processes, such as adding text, artistic effects, occlusion, and so on. While many copyrighted images are used in books, posters, and websites, where they appear with some level of artistic manipulation, the inability of the available watermarking literature to deal with these attacks is a real shortcoming. Table 1 shows that the proposed method is the only available method resistant to all seven groups of attacks listed in its caption.
A new PCA-based watermarking method is proposed that uses the spectral redundancy of an image to embed a same-sized grayscale image into it. The experimental results show that, while the method gives high PSNR values and no subjective artifacts, it is highly resistant against invasive attacks. The method responds promisingly when dealing with attacks in the spatial domain (linear and nonlinear geometrical transformations), the spectral domain (manipulating contrast and brightness, both globally and locally), and the frequency domain (filtering and blurring). To the best knowledge of the authors, no watermarking method with such robustness is available in the literature.
The first author wishes to thank Ms. Azadeh Yadollahi for her encouragement and invaluable ideas.
Table 1: Comparison of different watermarking methods with the proposed method on a 512×512 image. −: Not Resistant. ~: Partially Resistant. ✓: Completely Resistant. [Abbreviations: Res.: Resistance, G: Grayscale, SCC: Single Color Component, CV: Color Vector, LG: Linear Geometrical Transformations, NLG: Nonlinear Geometrical Transformations, LPO: Linear Point Operations, NLPO: Nonlinear Point Operations, SO: Spatial Domain Operations, EO: Editing Operations, CMP: JPEG Compression.]
Method     [13]  [14]  [15]  [6]   [3]   [12]  [16]  [17]   [18]  [19]  [4]   Proposed
Capacity   4KB   8KB   8KB   2KB   1KB   16KB  8B    0.5KB  60B   64B   2KB   64KB
Domain     G     G     G     CV    SCC   SCC   SCC   G      G     G     CV    CV
Res. LG    ~     ✓     ~     ~     ~     ✓     −     ✓      ✓     ~     ~     ✓
     NLG   −     −     −     −     −     −     −     −      −     −     −     ✓
     LPO   −     −     −     ~     −     −     −     −      −     −     −     ✓
     NLPO  −     ~     ~     ~     ~     ~     −     ~      −     −     ~     ✓
     SO    −     ~     ~     ~     ~     ~     −     ~      ~     ~     ~     ✓
     EO    −     −     −     −     −     −     −     −      −     −     −     ✓
     CMP   ✓     ✓     ✓     ✓     ✓     ✓     ~     ✓      ✓     ✓     ✓     ✓