Before we begin to learn IPPR (image processing and pattern recognition), let us define a few things. A digital image is a representation of a two-dimensional image as a finite set of digital values called picture elements, or pixels. Pixel values may represent gray level, color, height, opacity, and so on. Digitization implies that a digital image is only an approximation of the real scene. An image format may use 1 sample per pixel (black & white or gray level), 3 samples (R, G, B), or 4 samples (R, G, B, alpha).
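
As a minimal sketch of these three sample formats (assuming NumPy is available; the arrays here are built by hand rather than read from a file):

```python
import numpy as np

# 1 sample per pixel: gray level (or black & white if only 0 and 255 are used).
gray = np.zeros((4, 4), dtype=np.uint8)          # shape (height, width)

# 3 samples per pixel: R, G, B.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)        # shape (height, width, 3)
rgb[..., 0] = 255                                # a solid red image

# 4 samples per pixel: R, G, B plus alpha (opacity).
rgba = np.zeros((4, 4, 4), dtype=np.uint8)       # shape (height, width, 4)
rgba[..., 3] = 128                               # half-transparent

for name, img in [("gray", gray), ("rgb", rgb), ("rgba", rgba)]:
    print(name, img.shape, img.dtype)
```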

Digital image processing focuses primarily on two tasks:

1. Improvement of pictorial information for human interpretation

2. Processing of image data for storage, transmission, and representation for autonomous machine perception.

Today, digital image processing is often discussed together with image analysis and computer vision, and the field is commonly divided into low-level, mid-level, and high-level processing.

Low-level processing has an image as both input and output, for example noise removal and image sharpening (a small sharpening sketch follows after these three categories).

Mid-level processing has an image as input and attributes as output, for example segmentation and object recognition.

High-level processing has attributes as input and understanding as output, for example scene understanding and autonomous navigation.
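
As an illustration of the low-level case (image in, image out), here is a minimal sharpening sketch. It assumes NumPy and SciPy are available and uses a synthetic image rather than a real photograph:

```python
import numpy as np
from scipy.ndimage import convolve

# Synthetic grayscale image: a bright square on a dark background.
img = np.zeros((64, 64), dtype=float)
img[20:44, 20:44] = 200.0

# A common 3x3 sharpening kernel: boosts the centre pixel relative to its neighbours.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

sharpened = convolve(img, kernel, mode="nearest")
sharpened = np.clip(sharpened, 0, 255)

# Flat regions keep their value; the edges of the square are exaggerated (then clipped).
print(float(img.max()), float(sharpened.max()))
```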

A brief history of image processing:

1. In the early 1920s, the newspaper industry used it: the Bartlane cable picture transmission system was used to send images between New York and London.

2. Through the mid and late 1920s, the Bartlane system was improved: the number of tones was increased and photographic reproduction techniques were introduced.

3. In the 1960s, computing technology improved greatly. In 1964, images of the Moon captured by the Ranger 7 probe were processed by computer. The Apollo missions and other space programs also used image processing.

4. In the 1970s, image processing was applied to medical problems. In 1979, Sir Godfrey Hounsfield and Allan Cormack received the Nobel Prize for computerized axial tomography (CAT) scanning.

5. From the 1980s to today, image processing has spread to a wide range of areas, such as image enhancement and restoration, artistic effects, medical visualization, industrial inspection, law enforcement, and human-computer interfacing.

There are key stages in digital image processing (a minimal pipeline sketch follows the list). They are:

1. Problem domain

2. Image acquisition

3. Image enhancement

4. Image restoration

5. Morphological analysis

6. Segmentation

7. Object recognition

8. Representation and description

9. Image compression

10. Color image processing
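
To make a few of these stages concrete, here is a minimal sketch that chains acquisition, enhancement, segmentation, and a crude representation-and-description step. It assumes OpenCV 4.x is installed and that "coins.png" is a hypothetical input file:

```python
import cv2

# Image acquisition: read a (hypothetical) image file as gray level.
img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("coins.png not found")

# Image enhancement: suppress noise with a Gaussian blur.
smoothed = cv2.GaussianBlur(img, (5, 5), 0)

# Segmentation: separate objects from background with Otsu thresholding.
_, mask = cv2.threshold(smoothed, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Representation and description: outline each object and report its area.
# (OpenCV 4.x returns (contours, hierarchy) here.)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    print(f"object {i}: area = {cv2.contourArea(c):.0f} pixels")
```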

In the human visual system, the retina contains 6-7 million cones, which are sensitive to color, and roughly 75-150 million rods, which are sensitive to low levels of illumination. The human eye can perceive on the order of 10^10 different light intensities.

A digital sensor, by contrast, can only measure a limited number of samples at a discrete, finite set of energy levels. Sampling limits the number of measurement points, and quantization converts the analog signal into a digital representation of the signal.
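
A minimal sketch of sampling and quantization on a synthetic one-dimensional signal (NumPy only; the sample count and bit depth are arbitrary illustration choices):

```python
import numpy as np

# "Analog" signal approximated by densely evaluating a sine wave on [0, 1].
t = np.linspace(0.0, 1.0, 1000)
analog = 0.5 * (1.0 + np.sin(2 * np.pi * 3 * t))   # values in [0, 1]

# Sampling: keep only a finite number of samples.
samples = analog[::50]                              # 20 samples

# Quantization: map each sample to one of 2**bits discrete levels.
bits = 3
levels = 2 ** bits
quantized = np.round(samples * (levels - 1)).astype(np.uint8)

print(samples[:5])      # continuous amplitudes
print(quantized[:5])    # integer codes in {0, ..., 7}
```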

Spatial resolution refers to the smallest discernible detail in an image. Vision specialists talk about pixel size, whereas graphic designers talk about dots per inch (DPI).
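
A minimal sketch of what losing spatial resolution looks like, using a synthetic checkerboard and plain NumPy:

```python
import numpy as np

# Synthetic 256x256 gray-level image: a fine checkerboard (4-pixel squares).
y, x = np.indices((256, 256))
img = (((x // 4) + (y // 4)) % 2 * 255).astype(np.uint8)

# Reduce spatial resolution: keep only every 8th pixel in each direction.
low_res = img[::8, ::8]                 # 32 x 32

# Zoom back to 256x256 by pixel replication (nearest neighbour).
zoomed = np.repeat(np.repeat(low_res, 8, axis=0), 8, axis=1)

print(img.shape, low_res.shape, zoomed.shape)
print(np.unique(low_res))   # the fine pattern is lost entirely (aliasing)
```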

Intensity-level resolution refers to the number of intensity levels used to represent the image, i.e., the number of bits used to store each pixel's intensity. For example, 1 bit gives 2 intensity levels, 2 bits give 4 levels, and in general b bits give 2^b levels.
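
A minimal sketch of reducing intensity-level resolution on a synthetic 8-bit gradient (NumPy only; the bit depths chosen are arbitrary):

```python
import numpy as np

# Synthetic 8-bit gradient: 256 gray levels from 0 to 255.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

def reduce_intensity_levels(image, bits):
    """Requantize an 8-bit image to 2**bits intensity levels."""
    levels = 2 ** bits
    step = 256 // levels
    return (image // step) * step      # e.g. bits=1 -> values {0, 128}

for b in (1, 2, 4, 8):
    reduced = reduce_intensity_levels(img, b)
    print(f"{b} bit(s): {len(np.unique(reduced))} intensity levels")
```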

That’s it.