In computer vision, shadow identification (also known as shadow understanding or shadow segmentation) refers to the process of identifying shadow areas in digital image analysis. Identified shadows can be used to improve the performance of pattern recognition through shadow removal, or to obtain extra geometric information about the scene in the image. There are two major types of shadow identification approaches: the model-based approach, which calculates the shadow based on prior knowledge of the scene, and the property-based approach, which uses the spectral and geometric characteristics of shadows for identification[1].

Nature of Shadow

Spectral Properties

A shadow is an area that direct light from a light source cannot reach due to complete or partial obstruction by an object. It occupies the space behind an opaque object with light in front of it. A shadow consists of two parts: the self shadow and the cast shadow.

  • Self shadow is the part of the object that is not illuminated by the direct light.
  • Cast shadow is the area projected by the object in the direction of the direct light.

The part of a cast shadow where direct light is completely blocked by its object is called an umbra. The part of a cast shadow where direct light is only partially blocked is called a penumbra. An umbra can be generated by any kind of light source, while a penumbra can only be generated by an area source whose size cannot be neglected compared to the size of the occluding objects [2].

Figure 1. Umbra and Penumbra

Geometric Properties

The boundaries of a shadow can be subdivided into four types, which separate the object, the umbra and the penumbra.

  • Shadow Making Line separates the object from the umbra.
  • Occluding Line separates the object from its cast shadow.
  • Shadow Line separates the cast shadow from the background.
  • Hidden Shadow Line indicates the non-visible shadow making line.

These are shown in Figure 2 below.

File:ShadowLine.jpg
Figure 2. Shadow lines and shadow making lines

Approaches of Shadow Identification

Shadow identification techniques mainly divide into the following two types.

Model-Based Identification

Model-based techniques rely on prior knowledge of the object geometry and the illumination for shadow identification. Model-based approaches are mostly designed for specific applications, such as aerial image understanding and video surveillance. They are based on matching sets of geometric features such as edges, lines or corners to 3D object models[1]. Hence, model-based techniques generally handle simple objects and are only applicable to the specific application they are designed for.[3][4]

Property-Based Identification

Property-based approaches identify shadows based on more general features such as the geometry, brightness or color of shadows, which makes them more applicable for general identification. Property-based shadow identification mainly consists of two mechanisms, namely dark region extraction and feature identification[2]. Dark region extraction identifies dark areas of the image based on spectral properties of shadows such as luminance, chrominance and gradient intensity. The extracted segments are regarded as potential shadow areas which require further feature identification, as a dark region can be a shadow, a dark texture, or a combination of the two. Feature identification performs further analysis on those dark regions based on additional features of shadows, such as color invariance and edge connectivity, and uses them to distinguish real shadows from dark textures. Further geometric information, such as the direction of the light and the shape of the occluding objects, may also be obtained by such analysis [1].

Dark Region Extraction

The majority of dark region extraction techniques identify dark regions of the image based on the following hypothesis about the spectral properties of shadows:

  • Shadow areas give lower light intensity than neighbouring areas.

Consider the radiance of the light reflected at a given point p on a surface in 3D space, given some illumination and viewing geometry. It can be formulated as

  L(w, p) = La(w, p) + Ld(w, p) + Ls(w, p)  [5]

where La(w, p), Ld(w, p) and Ls(w, p) are the ambient light, diffuse reflection and surface reflection respectively, and w is the wavelength of the light. The ambient light is assumed to be the global light indirectly reflected among the surfaces in the environment and does not vary with geometry. If there is no diffuse reflection and no surface reflection because the object is obstructing the direct light, then the radiance of the reflected light, which represents the intensity of the reflected light at a point in a shadow region, is

  L(w, p) = La(w, p)
This leads to lower light intensity in shadow areas compared to their neighbouring areas. Hence, the dark region can be extracted by identifying such low light intensity areas; further constraints such as color invariance can also be applied to increase the accuracy of shadow extraction at this early stage. However, this hypothesis only holds for simple objects and backgrounds with uniform textures. Complex dark textures break the hypothesis, as do complex lighting conditions and dynamic environments (shadow identification for video). For more complex scenes, more advanced extraction techniques need to be taken into account [6].
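
The hypothesis can be illustrated with a short sketch that thresholds a grayscale intensity image to obtain candidate shadow pixels (Python with NumPy; the fixed threshold value and the function name are assumptions made for illustration, not part of the cited approaches):

  import numpy as np

  def extract_dark_regions(gray, threshold):
      """Return a boolean mask of candidate shadow pixels.

      gray      -- 2-D array of light intensities (e.g. 0-255 grayscale)
      threshold -- intensity below which a pixel counts as dark; its value
                   is scene dependent and is an assumption of this sketch
      """
      # Pixels darker than the threshold are only candidate shadows; they
      # may still be dark textures and need further feature identification.
      return gray < threshold

  # Usage on a synthetic image: a bright background with one darker patch.
  image = np.full((100, 100), 200, dtype=np.uint8)
  image[40:60, 40:60] = 60                  # darker area (shadow or dark texture)
  mask = extract_dark_regions(image, threshold=120)
  print(int(mask.sum()))                    # 400 candidate pixels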

Feature Identification

As noted above, the dark regions extracted by such techniques may also contain dark textures, so further identification of the actual shadows must be performed. To achieve this, several spectral and geometric properties of shadows can be taken into account.

  • The color and texture of a given surface are invariant between its shadowed area and its illuminated part.

The color of a point in the scene (image) can be represented by normalized red, green and blue values, denoted as

  nR(p) = R(p) / (R(p) + G(p) + B(p))
  nG(p) = G(p) / (R(p) + G(p) + B(p))
  nB(p) = B(p) / (R(p) + G(p) + B(p))

where p is the point and nR, nG, nB are the normalized red, green and blue components respectively. The normalized RGB color gives identical values for a surface of identical color, independent of the light intensity [7] (a code sketch of this test follows the list below).

  • Connectivity between the shadow lines and objects.

If the object is adjacent to the background, the shadow line should be connected with the corresponding object. Any dark region with no connection to a corresponding object is then regarded as dark texture instead of shadow. However, this hypothesis highly constrains the type of scene, which reduces its generality.

  • For multiple shadows in a single-light-source scene, all shadows should indicate a unique light source position or direction.

If the geometry of the occluding object is known or can be approximated, the direction of the light source can be obtained by combining it with the potential shadow projections. For a scene with a single light source, all shadows should indicate a unique position or direction of the light source. Any dark region that gives a significantly different indication can therefore be regarded as dark texture and filtered out [2]. However, applying such techniques requires prior knowledge of the scene.
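
The color-invariance feature above can be sketched as follows; the mean-color comparison, the tolerance value and the region masks are illustrative assumptions rather than the exact test used in the cited methods:

  import numpy as np

  def normalized_rgb(image):
      """Convert an RGB image (H x W x 3) to normalized RGB.

      Each channel is divided by R + G + B, so the result is (ideally)
      unaffected by the overall light intensity at a pixel.
      """
      rgb = image.astype(np.float64)
      total = rgb.sum(axis=2, keepdims=True)
      total[total == 0] = 1.0               # avoid division by zero on black pixels
      return rgb / total

  def looks_like_shadow(image, region_mask, background_mask, tol=0.05):
      """Heuristic test (assumption of this sketch): keep a dark region as a
      shadow candidate if its mean normalized color matches the background's."""
      n = normalized_rgb(image)
      region_color = n[region_mask].mean(axis=0)
      background_color = n[background_mask].mean(axis=0)
      return np.abs(region_color - background_color).max() < tol

  # Usage: a gray surface with one patch of the same color but darker (a
  # shadow) and one dark red patch (a dark texture).
  img = np.full((60, 60, 3), 180, dtype=np.uint8)
  img[10:20, 10:20] = (90, 90, 90)          # shadow: same normalized color
  img[40:50, 40:50] = (90, 30, 30)          # dark texture: different color
  shadow_mask = np.zeros((60, 60), dtype=bool)
  shadow_mask[10:20, 10:20] = True
  texture_mask = np.zeros((60, 60), dtype=bool)
  texture_mask[40:50, 40:50] = True
  background_mask = img.sum(axis=2) > 400   # the bright, lit background
  print(looks_like_shadow(img, shadow_mask, background_mask))   # True
  print(looks_like_shadow(img, texture_mask, background_mask))  # False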

For more details and other techniques of feature identification, see Further Readings.

Applications

  • Improving pattern (object) recognition[8][9].
  • Retrieving the direction of the light source by shadow identification[8].

Example Mechanism

The following example shows an approach to shadow identification for an image of a simple scene. This example references the shadow identification approach described in [1]. The scene's background is a flat or nearly flat non-textured surface, and the objects and their shadows lie within the image. There is only a single light source illuminating the viewed scene, and the light source emits parallel light rays. The original image is shown in Figure 3 below.

File:ShadowIdentification Origin.jpg
Figure 3. Original Image

Dark Region Extraction

For a light source that emits parallel light rays, which gives an approximately constant background light intensity, the dark regions can be extracted by setting a threshold value below the intensity of the background [2]. Instead of extracting the entire dark region, Salvador's approach applies edge detection to the image and extracts only the edge of the shadow. The extracted edge consists of the pixels which are adjacent to the background and give lower light intensity than the specified threshold. The extraction result is shown in Figure 4 below.

File:ShadowIdentification DAExtract.jpg
Figure 4. Extracted Edge of Dark Region
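
A rough sketch of this kind of edge-based extraction is shown below. It uses OpenCV's Canny detector as a stand-in for whichever edge detector the referenced approach uses; the thresholds, the one-pixel widening of the edge map and the function name are assumptions of this sketch:

  import cv2
  import numpy as np

  def dark_edge_pixels(gray, intensity_thresh, canny_low=50, canny_high=150):
      """Keep only edge pixels (and their immediate neighbours) that are
      darker than the given threshold.

      gray             -- 8-bit grayscale image
      intensity_thresh -- value below the roughly constant background intensity
      canny_low/high   -- Canny hysteresis thresholds (illustrative values)
      """
      edges = cv2.Canny(gray, canny_low, canny_high)     # binary edge map
      # Widen the edge map by one pixel so that the dark side of each
      # boundary, i.e. the pixels adjacent to the background, is included.
      edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
      return np.logical_and(edges > 0, gray < intensity_thresh)

  # Usage on a synthetic scene: bright background, one dark square.
  img = np.full((80, 80), 220, dtype=np.uint8)
  img[30:60, 30:60] = 70
  mask = dark_edge_pixels(img, intensity_thresh=150)
  print(bool(mask.any()))                   # True: the square's boundary is kept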

For a light source which does not emit parallel light rays, such as a light source in close proximity, the intensities on the background change smoothly and the threshold must be chosen adaptively. A technique called scanline is used in this example approach for adaptive threshold selection[2]. Denote I(x, y) as the light intensity of a pixel in an image of width w and height h, where x and y are the coordinates of the pixel, ranging over [0, w - 1] and [0, h - 1] respectively. If the image is scanned in the x direction at y = y0, the line at y = y0 with the intensities I(x, y0) for x = 0, ..., w - 1 is called a scanned line. The intensities along such a horizontal line L can be expressed by

  I_L(x) = I(x, y0),   x = 0, ..., w - 1

where, for the purpose of thresholding, I_L(x) represents the smoothly varying background intensity along the line. Reducing I_L one step further by an offset constant c derives

  T_L(x) = I_L(x) - c

which is the extraction threshold for the pixels on this line. All pixels lying on line y0 with light intensity lower than T_L(x) will be regarded as dark regions. This mechanism is then performed for every line of the image.
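
The scanline idea can be sketched as below, where the background intensity along each row is approximated by a wide one-dimensional median filter; that smoothing choice and the offset value are assumptions of this sketch, and the cited approach may estimate the background differently:

  import numpy as np
  from scipy.ndimage import median_filter

  def scanline_dark_regions(gray, c=30, window=31):
      """Per-row adaptive thresholding for a smoothly varying background.

      For each scanned line y0, the background intensity I_L(x) is estimated
      here with a wide 1-D median filter (an assumption of this sketch), the
      offset constant c is subtracted to obtain T_L(x) = I_L(x) - c, and the
      pixels on that line darker than T_L(x) are marked as dark regions.
      """
      gray = gray.astype(np.float64)
      mask = np.zeros(gray.shape, dtype=bool)
      for y0 in range(gray.shape[0]):
          line = gray[y0, :]                        # intensities I(x, y0)
          background = median_filter(line, size=window)
          mask[y0, :] = line < background - c       # compare with T_L(x)
      return mask

  # Usage: the background intensity ramps from left to right; a narrow
  # darker patch is still flagged even though no global threshold fits.
  img = np.tile(np.linspace(100, 220, 200), (100, 1))
  img[40:60, 95:105] -= 80
  print(bool(scanline_dark_regions(img).sum() > 0))  # True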

Feature Identification

The result of the first level of analysis is the identification of a set of candidate shadow pixels. As shown in the figure above, the analysis leads to the detection of both shadow pixels and object pixels, because object pixels also give lower light intensity than the corresponding background pixels. Further analysis is therefore required to confirm the real shadow. The invariant color feature of shadows is used to distinguish the shadow from the object. The original image is first converted into a normalized RGB image, then another edge detection is performed. The object is detected and isolated as in the following figure, since the actual shadow shows no difference from the background in normalized RGB, according to the invariant color feature of shadows.

File:ShadowIdentification InvColorObj.jpg
Figure 5. Extracted Object
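
A sketch of this conversion and second edge detection is given below, again with OpenCV's Canny detector applied per normalized channel; the detector choice, the thresholds and the channel combination are assumptions of this sketch rather than the exact procedure of the referenced approach:

  import cv2
  import numpy as np

  def object_edges(image, canny_low=50, canny_high=150):
      """Edges that survive in normalized RGB space.

      Shadow boundaries largely disappear after normalization, because a
      surface keeps its normalized color inside its shadow, so the remaining
      edges are attributed to the object. Threshold values are illustrative.
      """
      rgb = image.astype(np.float64)
      total = rgb.sum(axis=2, keepdims=True)
      total[total == 0] = 1.0                   # avoid division by zero
      scaled = (rgb / total * 255).astype(np.uint8)   # nR, nG, nB as 8-bit
      edges = np.zeros(image.shape[:2], dtype=np.uint8)
      for ch in range(3):                       # edge-detect each channel
          edges = cv2.bitwise_or(
              edges, cv2.Canny(scaled[:, :, ch], canny_low, canny_high))
      return edges

  # Usage: gray background, a gray shadow patch and a red object. Only the
  # object's boundary survives the normalization.
  img = np.full((80, 80, 3), 180, dtype=np.uint8)
  img[10:30, 10:30] = (90, 90, 90)              # shadow: same normalized color
  img[45:70, 45:70] = (200, 60, 60)             # object: different color
  e = object_edges(img)
  print(bool(e[5:35, 5:35].any()), bool(e[40:75, 40:75].any()))  # False True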

The object is then removed from the potential shadow. Further dilation is performed in the next step to clean up the noise, generating Figure 6. The occluding line of the shadow is finally generated based on the remaining edges of the shadow and the edges of the object (the occluding line), as shown in Figure 7.

File:ShadowIdentification RemoveCleanUp.jpg
Figure 6. Object Removed and Clean up
File:ShadowIdentification CleanCloseUp.jpg
Figure 7. Clean Up and Close Up
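
A minimal sketch of this removal and clean-up step is given below, using OpenCV morphology; the kernel size, the single dilation pass and the function name are assumptions for illustration:

  import cv2
  import numpy as np

  def clean_shadow_edges(candidate_edges, object_edges, kernel_size=3):
      """Remove object edges from the candidate edge map, then dilate.

      candidate_edges, object_edges -- 8-bit binary images (0 or 255)
      kernel_size                   -- structuring element size (assumption)
      """
      # Suppress everything identified as object boundary, leaving the
      # (noisy, possibly broken) shadow edges.
      shadow_edges = cv2.bitwise_and(candidate_edges,
                                     cv2.bitwise_not(object_edges))
      # Dilation thickens and joins the remaining shadow edges, closing the
      # small gaps left where the object edges were removed.
      kernel = np.ones((kernel_size, kernel_size), np.uint8)
      return cv2.dilate(shadow_edges, kernel, iterations=1)

  # Usage with two toy edge maps: a horizontal candidate edge, part of which
  # is marked as object boundary and removed before dilation.
  cand = np.zeros((20, 20), np.uint8)
  cand[5, 2:18] = 255
  obj = np.zeros((20, 20), np.uint8)
  obj[5, 8:12] = 255
  cleaned = clean_shadow_edges(cand, obj)
  print(int((cleaned > 0).sum()))               # dilated shadow-edge pixel count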

Further Readings

  • Salvador, E. (2004). Cast shadow segmentation using invariant color features. Computer Vision and Image Understanding, 95(2), 238-259. doi:10.1016/j.cviu.2004.03.008
  • Jiang, C., & Ward, M. O. (1992). Shadow identification. Proceedings of the Computer Vision and Pattern Recognition (pp. 606-612). IEEE Computer Society Press.
  • Waltz, D. (1975). Understanding line drawings of scenes with shadows. In P. H. Winston (Ed.), The Psychology of Computer Vision (pp. 19-91). McGraw-Hill.

References

  1. ^ a b c d Salvador, E. (2004). Cast shadow segmentation using invariant color features. Computer Vision and Image Understanding, 95(2), 238-259. doi:10.1016/j.cviu.2004.03.008
  2. ^ a b c d e Jiang, C., & Ward, M. O. (1992). Shadow identification. Proceedings of the Computer Vision and Pattern Recognition (pp. 606-612). IEEE Computer Society Press.
  3. ^ A. Huertas, R. Nevatia, Detecting buildings in aerial images, Comput. Vis. Graph. Image Process. 41(1988) 131–152
  4. ^ R. Irvin, D. Mckeown, Methods for exploiting the relationship between buildings and their shadows in aerial imagery, IEEE Trans. Syst., Man, Cybernet. 19 (1989) 1564–1575.
  5. ^ D.A. Forsyth, J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, New York, 2003.
  6. ^ S. Nadimi, B. Bhanu, Moving shadow detection using a physics-based approach, in: Proc. IEEE Int.Conf. Pattern Recognition, vol. 2, 2002, pp. 701–704.
  7. ^ J. Kender, Saturation, Hue, and Normalized colors: Calculation, Digitization Effects, and Use, Tech. Rep., Carnegie-Mellon University, 1976.
  8. ^ a b T. Gevers, A.W.M. Smeulders, Color-based object recognition, Pattern Recogn. 32 (1999) 453–464.
  9. ^ A. Cavallaro, T. Ebrahimi, Video object extraction based on adaptive background and statistical change detection, in: Proc. Visual Communications and Image Processing, 2001, pp. 465–475.

External links

Category:Image processing Category:Artificial intelligence