Each image has its own color content, which greatly influences the
perception of a human observer. Being able to transfer the color
content of one image into another, while preserving other
features (like texture), opens a new horizon in
human-perception-based image processing. In this paper, after a
brief review of the few efficient works performed in the field, a
novel fuzzy principal component analysis (PCA) based color
transfer method is proposed. The proposed method accomplishes the
transformation based on a set of corresponding user-selected
regions in the images, along with a blending ratio parameter set by the
user. Results show higher robustness and speed when comparing
our proposed method with other available approaches.
Keywords: Image Recoloring, Principal Component Analysis (PCA).
Altering the color appearance of an image based on information
extracted from another image has been under investigation in
recent years. Reinhard et al. [1] argue
that removing a dominant and undesirable color cast (such as the
yellow in photos taken under incandescent illumination) would
be a handy tool. Chang et al. [2] state that, as
each artist has a distinctive style (like the "yellow that Van
Gogh likes to use"), it would be fascinating to be able to
transfer the feeling of a painting to a photograph, which they call
"giving the viewers the impression similar to that given by
the painting". Greenfield et al.'s
work [3] is similar to Chang et al.'s work,
except for the fact that in their method both images are
color paintings. They describe their method as "liquefy the
color of a painting in such a way that it could be poured to
another".
The notion of color transfer is not widespread in the
literature. Perhaps the first work in this field is the method
developed by Reinhard et al. [1]. They
designed a color transfer method by choosing a suitable color
space and applying the reference image's color appearance to the
source image by means of a few statistical parameters. The paper
emphasizes that the proper choice of color space is of
great importance [1], because when images of
nature are represented in many typical color spaces, there is a
high correlation between channels, making single-channel
alteration a difficult task. They chose the lαβ color
space of Ruderman et al. [4]. Reinhard
et al. note that the lαβ color space had not previously
been applied in this context or compared to other color spaces.
The authors are not aware of any other use of this color space
since then, other than
in [1,2,3,4,5,6].
Using the lαβ color space, Reinhard et al.
state their color transfer method as mapping the color vectors
using first- and second-order statistics. They note that, since
the method tries to transfer one image's appearance to another,
it is possible to select source and reference images
that do not work well together. To overcome this shortcoming, they proposed
the concept of swatches. In this variant, the user
selects several corresponding swatches in the two images (for example,
the grass, the sky, and so on). Then a classification task takes
place and each pixel is altered according to the statistics
of the corresponding swatches in the two images to which it belongs.
One of the main contributions of that work is the idea of
computing the altered color vector in each of the classes and
blending them inversely proportional to the distances from the
initial point to each of the clusters.
Chang et al. [2] used an early work by
Berlin et al. [7] that examined 98 languages
from several families and reported regularities in
the number of basic colors and their spread over the color space.
They stated that developed languages have eleven basic color
terms; in English, they are black, white,
red, green, yellow, blue,
brown, pink, orange, purple, and
gray. Chang et al.'s work [2], which is
implemented entirely in the CIEL*a*b* color space, uses later
works that defined the spread of these categories. Their method
begins by building the 11 loci of the points in the source image
that belong to each of the 11 clusters. Then, they generate the
convex hull that encloses all of the pixels within each
category. The same task is performed in the reference image. For
a given vector \vec{c}_1 in the i-th category, the recolored
vector is computed using a linear mapping between the convex
hulls. It is easy to see that Chang et al. [2] use
the same mapping idea as Reinhard et al.'s [1]
swatches, except that the swatches are
preselected in their method; this requires less user
supervision, but offers less adaptivity when dealing with special
problems. The method also leaves little room for user intervention.
Greenfield et al. [3] organized the images
into pyramids to produce a palette for each image by successively
downsampling it. The color transfer is then performed based on
the two palettes computed from the source and reference images.
Although that work seems to use a different method from
Reinhard et al.'s [1], when
they transfer the actual color they too apply the lαβ color
space and alter the α and β content of
points, leaving l unchanged, as Reinhard et
al. [1] proposed.
Figure 1: Typical result of the proposed color transfer method.
(a) Original Image. (b) Recolored Image.
When working with multispectral images, data dimension is an
important problem (showing itself as a factor that massively
increases the processing time). There are a few works on using
dimension-reduction methods in color images (e.g., see
[8]). Principal Component Analysis (PCA) is a fast
linear dimension-reduction method [9]. The basic idea
behind PCA is to find the linear transform giving the maximum
amount of variance out of the set of given vectors. The axis of
i-th maximum variance is denoted as \vec{v}_i. In practice,
the computation of \vec{v}_i is accomplished by using the
covariance matrix C = E\{(\vec{x} - \bar{\vec{x}})(\vec{x} - \bar{\vec{x}})^T\},
where \vec{v}_i is the eigenvector of C corresponding
to the i-th largest eigenvalue [9].
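As an illustrative sketch (not code from the papers cited above), the covariance-based PCA computation just described can be written in a few lines of NumPy; the function name `pca_basis` is our own:

```python
import numpy as np

def pca_basis(pixels):
    """Return the mean, the eigenvalues (descending), and the eigenvector
    matrix of the covariance C = E{(x - mean)(x - mean)^T} of the rows of
    `pixels`. Column i of the returned matrix is the axis of i-th maximum
    variance."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric C: real eigenpairs
    order = np.argsort(eigvals)[::-1]        # sort by descending variance
    return mean, eigvals[order], eigvecs[:, order]
```

For color images, `pixels` would be the N×3 array of a region's color vectors.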
In [10], the authors proposed a novel PCA-based
dimension-reduction method for natural color images, reducing the
3-D color space to a shifted version of a 1-D vector space.
The idea was developed further to define the linear partial
reconstruction error (LPRE) in [11]. The LPRE
likelihood measure, fuzzification scheme, and homogeneity decision
were shown to clearly outperform the conventional Euclidean and
Mahalanobis distances [11].
In this paper, a new color transfer method is first proposed
that transforms the color information from the reference image to
the source image, according to one set of corresponding regions
in the images. The general color transfer method then fuses
the set of single-region recolored results, using an
LPRE-based fuzzification technique. User contribution in our
proposed method is limited to selecting a few corresponding regions
in the reference and source images (like the swatch idea of
Reinhard et al. [1]) and tuning a
one-parameter membership function.
Some authors refer to the color transfer method as recoloring;
in this paper, the two terms are used interchangeably. We
call the image to be recolored the source image and the image
from which the color information is extracted the
reference image. In all formulas, variables
indexed as x_1 and x_2 belong to the source and the reference
images, respectively. Also, variables denoted as x_1' relate
to the source image after the recoloring task has taken
place. All vectors in this paper are assumed to be column vectors.
The rest of this paper is organized as follows:
Section II-A introduces the single-region recoloring
method, while Section II-B states the
fuzzification scheme. Section II-C describes the
proposed fusion technique for rendering the resulting image.
Section III states the experimental results and
discussion, and the conclusion is given in
Section IV.
Assume that the image I_1 is to be recolored using the
information in image I_2. Also, assume that the region r_1 in
I_1 should mimic the region r_2 in I_2. Let the vector
\vec{h}_{r_1} denote the expectation vector of r_1 and the
3×3 matrix V_{r_1} contain the eigenvectors of the
covariance matrix of r_1 as its columns, sorted by corresponding
eigenvalue in descending order. The vector \vec{h}_{r_2}
and the matrix V_{r_2} are defined in a similar way for the
region r_2 in the reference image (I_2). By modelling the
color information in r_1 as an ellipsoid spread around
\vec{h}_{r_1}, the proposed recoloring method for a single
pair of selected regions in the source image and the reference image is
defined as:
\vec{c}_1' = V_{r_2} V_{r_1}^{-1} \left( \vec{c}_1 - \vec{h}_{r_1} \right) + \vec{h}_{r_2}.    (1)
The linear transformation described in (1) first
subtracts the center of the ellipsoid from all points, moving it
to the origin. Then, using V_{r_1}, which is the PCA matrix of
r_1, the pixels of the source image are converted to the PCA
coordinates. Using V_{r_2} and \vec{h}_{r_2}, the
transformation then goes in the inverse direction.
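A minimal NumPy sketch of the mapping in (1) may help; this is our illustration, not the authors' implementation, and the function name is hypothetical:

```python
import numpy as np

def recolor_single_region(pixels, h1, V1, h2, V2):
    """Eq. (1): c' = V_{r2} V_{r1}^{-1} (c - h_{r1}) + h_{r2}, applied to
    every row of `pixels`. V1 and V2 hold the eigenvectors of the two
    regions' covariance matrices as columns; h1 and h2 are the means."""
    centered = pixels - h1                      # move the ellipsoid to the origin
    # Row-vector form of V2 V1^{-1} x is x @ inv(V1).T @ V2.T.
    return centered @ np.linalg.inv(V1).T @ V2.T + h2
```

Note that when V1 is orthonormal (as an eigenvector matrix of a covariance matrix is), V1^{-1} is simply V1^T.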
In [11], the authors proposed to use the error made by
neglecting the two less important principal components as a
likelihood measure. In that method, the LPRE likelihood of the
vector \vec{c} with respect to the cluster r is defined as:
e_r(\vec{c}) = \left\| \vec{v}^T (\vec{c} - \vec{h}) \, \vec{v} - (\vec{c} - \vec{h}) \right\|    (2)
where \vec{v} shows the direction of the first principal
component and \| \cdot \| denotes the normalized L_1 norm.
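As a sketch of (2) under one reading of the "normalized L_1 norm" (the L_1 norm divided by the vector dimension, which is our assumption), the LPRE could be computed as:

```python
import numpy as np

def lpre(colors, h, v):
    """Eq. (2): the error made by reconstructing (c - h) from its projection
    on the first principal direction v alone, for every row of `colors`.
    The norm here is the L1 norm divided by the dimension (an assumption)."""
    d = colors - h
    proj = np.outer(d @ v, v)                # v^T (c - h) v
    return np.abs(proj - d).sum(axis=1) / colors.shape[1]
```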
Investigating (2) makes it clear that, to make e_r(\vec{c})
comparable over different clusters, a normalization scheme is
crucial. In [11], the authors proposed to use the
following stochastic margin as the normalization factor:
f_{r,p} = \arg_e \left( P_{\vec{x} \in r} \left\{ e_r(\vec{x}) \le e \right\} \ge p \right)    (3)
where p is the inclusion percentage. Equation (3) leads
to the definition of the normalized likelihood function:
\tilde{e}_{r,p}(\vec{c}) = \frac{e_r(\vec{c})}{f_{r,p}}.    (4)
Also, f_{r,p} is a proper homogeneity
criterion [11].
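Equations (3) and (4) amount to normalizing the query errors by an empirical p-quantile of the errors inside the region itself; a short sketch (our reading, with a hypothetical function name):

```python
import numpy as np

def normalized_lpre(region_errors, query_errors, p=0.5):
    """Eqs. (3)-(4): f_{r,p} is the smallest e with P{e_r(x) <= e} >= p over
    the region's own pixels, approximated here by the empirical p-quantile;
    the query errors are then divided by it."""
    f_rp = np.quantile(region_errors, p)
    return query_errors / f_rp
```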
Note that \tilde{e}_{r,p}(\vec{c}) gives lower values for color
vectors similar to those that exist in r. Thus, a fuzzy
membership function is needed to map
[-1,1] \to [1, 1-\epsilon] and
(-\infty,-1] \cup [1,\infty) \to [1-\epsilon, 0].
In [11], the authors proposed a manipulated form of the
well-known low-pass Butterworth filter, for the sake of
tunability and simplicity, as:
B_{a,b}(x) = \left( 1 + \left( \frac{x}{t_{a,b}} \right)^{2N_{a,b}} \right)^{-1/2}    (5)
where N_{a, b} and t_{a, b} are defined
as:
N_{a,b} = \left[ \log_2 \left( \frac{a \sqrt{1-b^2}}{b \sqrt{1-a^2}} \right) \right]    (6)

t_{a,b} = a^{1/N_{a,b}} \left( 1 - a^2 \right)^{-1/(2N_{a,b})}    (7)
where [x] denotes the nearest integer to x. The function
is designed so that it satisfies B_{a,b}(1) = a
and B_{a,b}(2) = b. Note that selecting a large
member of ]0,1[ as the a value and a small member of
]0,a[ as the b value leads to the desired
fuzzification. Also, note that the above definition of membership
functions is in contrast with the common choice of
Gaussian functions.
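The reformulated Butterworth function of (5)-(7) can be sketched directly; note that with the rounding of N_{a,b} to an integer, B(1) = a holds exactly while B(2) = b holds only approximately:

```python
import numpy as np

def butterworth_membership(x, a=0.99, b=0.5):
    """Eqs. (5)-(7): low-pass Butterworth-style membership function with
    B(1) = a and B(2) approximately b (N is rounded to the nearest integer)."""
    N = int(round(float(np.log2(
        a * np.sqrt(1 - b**2) / (b * np.sqrt(1 - a**2))))))
    t = a**(1.0 / N) * (1 - a**2)**(-1.0 / (2 * N))
    return (1 + (x / t)**(2 * N))**-0.5
```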
Using the normalized reconstruction error and the reformulated
Butterworth function, the source image is fuzzified with respect
to the query region r as:
h_{r,a,b} = B_{a,b} \left( \tilde{e}_{r,p}(\vec{c}) \right).    (8)
Now, the points in r, and the points similar to them in the
color sense, mostly receive membership values in the range
[a,1], while color information unlike r is
ranked with poor values. Thus, we set a = 0.99 and p = 1/2.
Note that by tuning the b parameter, one can easily
control the spread of the membership function. This completes the
proposed fuzzification scheme.
Assume that the user has selected n corresponding regions in the
source image and the reference image, respectively (call them
r_{11}, …, r_{1n} and r_{21}, …, r_{2n}). As discussed in
Section II-A, each pair of corresponding regions
(r_{1i}, r_{2i}) yields a recolored image through (1).
We propose to blend these results using the fuzzification scheme
proposed in Section II-B; the proposed color transfer method is
thus formulated as the membership-weighted blend of the n
single-region results.
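The fusion step can be sketched as a per-pixel, membership-weighted average of the n single-region results; the normalized weighted sum below is our assumption, since the paper's exact blending formula is not reproduced in this excerpt:

```python
import numpy as np

def blend_recolored(recolored_list, memberships):
    """Blend n single-region recolored images (each (N, 3)) using their
    per-pixel fuzzy memberships (each (N,)), via a normalized weighted sum.
    This normalized-sum form is an assumption, not the paper's formula."""
    stacked = np.stack(recolored_list)              # (n, N, 3)
    w = np.stack(memberships).astype(float)         # (n, N)
    w = w / w.sum(axis=0, keepdims=True)            # normalize over regions
    return (w[..., None] * stacked).sum(axis=0)     # (N, 3)
```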
All algorithms are developed in MATLAB 6.5, on a 1100
MHz Pentium III personal computer with 256MB of
RAM. Some of the sample images used in this paper are shown
in Figure 2.
Figure 2: Some typical sample images. (a), (b), (c), (d), (i), and
(k) Source images. (e), (f), (g), (h), (j), and (l) Reference
images. (a), (b), (e), (f), (i), and (j) Adapted
from [1]. (c), (g) Adapted from [3].
(d), (h) Adapted from [2]. (k) "McCormick's Creek
State Park, Indiana" by Mike Briner,
mbphoto@spraynet.com, www.mikebrinerphoto.com
(used with permission of the author). (l) "Hanging
Lake" by Brent Reed, brent@reedservices.com
(used with permission of the author).
The proposed color transfer method is performed on the sample
images (illustrated in Figure 2). The results of our
method, along with the results of other methods, are shown in
Figure 3. The b value and a description of the
selected regions in the images are given in the caption of
Figure 3. It is worth mentioning that transferring the color
information of a 512×512 image into a 512×512
image according to four selected medium-sized regions only
takes about 4 seconds.
Figure 3: Results of the proposed color transfer method applied on the
sample images shown in Figure 2. Source images are
Figures 2a, 2b, 2c, 2d, 2i, and 2k, respectively. Reference images
are Figures 2e, 2f, 2g, 2h, 2j, and 2l, respectively.
Parameters are set to the values of (a) b = 0.5, r = {Sky, Water},
(b) b = 0.1, r = {Leaves, Sky}, (c) b = 0.8, r = {Skulls, Background},
(d) b = 0.95, r = {Leaves, Sky, Earth},
(e) b = 0.95, r = {Sky, Building, Pavement},
(f) b = 0.2, r = {Leaves, Bushes, Bark}.
Figure 4: Results of other available methods applied on sample
images shown in Figure 2. Source images are Figures
2a, 2b, 2c,
2d, and 2i, respectively. Reference
images are Figures 2e, 2f,
2g, 2h, and 2j
respectively. (a),(b),(e) Reinhard et al.'s
method [1]. (c) Greenfield et al.'s
method [3], (d) Chang et al.'s
method [2].
Comparing the results in Figure 3 (the results of our
proposed method) and Figure 4 (the results of other
methods) shows the performance of our proposed method. While there
is a discontinuity in the sky in Figure 4e, the sky in
Figure 3e simulates a real night sky, keeping in mind
that the photograph was taken in daytime. This fact is more visible
when one confirms that the recolored Figure 4e does
not pretend to be a night scene, while Figure 3e does.
This is also visible in Figure 4b, where the blue
sky has nothing to do with the entirely white sky of
Figure 2f; the sky in Figure 3b is more
whitened. Although Greenfield et al.'s
method [3] tries to transfer the cold colors
of Figure 2g to Figure 2c,
Figure 4c still contains the warm violet-blue
color on the topmost skull, whereas in Figure 3c the colors are
mostly cold. Figure 4d, made by Chang et
al.'s method [2], seems acceptable, in spite of the fact that
the method that produced it is a massively expensive
operation.
It should also be noted that using a reference image of an entirely
different scene from the source image does not force the
process to fail, but the results of such an operation must be
interpreted carefully. The same applies when the regions
of the source image and the reference image are given in a scattered
fashion, for example, trying to transfer the color information of
leaves to sand.
No exact time measurement is reported in other works. Considering
the 4-second record of our method, while other methods rely on
computationally expensive segmentation and convex-hull computations,
the outstanding performance of our proposed method is clear.
It must be emphasized that in the proposed method, using an image
as both the source image and the reference image at the same time
(recoloring an image with itself), while working on almost the same
regions in both roles, gives an image that cannot be distinguished
from the original image. In addition, when using the single-region
version of the method, the process is completely reversible.
Although in all samples discussed here the reference image was
unique, there is no limitation that prevents the user from using
two or more images as the reference. This option may be
useful when trying to recolor a source image based on, say, the sky
in a first reference image and the leaves in a second reference image.
A new fuzzy principal component analysis based color transfer method
is proposed and tested on different images, including 5 images
adapted from previous works for the sake of performance comparison.
The images synthesized by our method resemble the reference images
more closely than those of other available approaches.
Also, while other references did not report the time needed by
their proposed methods, our proposed method's computation cost of
a few seconds for 512×512 images is promising.
Acknowledgements
The first author wishes to thank Ms. Azadeh Yadollahi for
her encouragement and invaluable ideas.
E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between
images," IEEE Computer Graphics and Applications, vol. 21, no. 5,
pp. 34-41, Sep./Oct. 2001.
Y. Chang, S. Saito, and M. Nakajima, "A framework for transfer colors based on
the basic color categories," in Proceedings of the Computer Graphics
International (CGI'03), IEEE, 2003.
G. R. Greenfield and D. H. House, "Image recoloring induced by palette color
associations," Journal of WSCG, vol. 11, no. 1, Feb. 2003.
D. Ruderman, T. Cronin, and C. Chiao, "Statistics of cone responses to natural
images: implications for visual coding," Journal of the Optical Society of
America A, vol. 15, no. 8, pp. 2036-2045, 1998.
W.Q. Yan and M. S. Kankanhalli, "Colorizing infrared home videos," in
Proceedings of IEEE International Conference on Multimedia and Expo
(ICME 2003), Baltimore, July 2003.
T. Welsh, M. Ashikhmin, and K. Mueller, "Transferring color to grayscale
images," in Proceedings of ACM SIGGRAPH 2002, San Antonio, July 2002,
pp. 277-280.
S.-C. Cheng and S.-C. Hsia, "Fast algorithms for color image processing by
principal component analysis," Journal of Visual Communication and
Image Representation, vol. 14, pp. 184-203, 2003.
A. Abadpour and S. Kasaei, "A new parametric linear adaptive color space and
its PCA-based implementation," in The 9th Annual CSI Computer
Conference (CSICC), Tehran, Iran, Feb. 2004, pp. 125-132.