Posts

Showing posts from 2011

Box Blur

As promised in earlier posts, we now return to image blurring. In this post I will explain Box Blur. Box blur is an image filter in which each output pixel is the average of its neighboring pixels in the input image. Since averaging pixel values is simple, the algorithm is easy to implement. A major advantage of box blur is that, applied repeatedly, it approximates a Gaussian blur.
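A minimal sketch of this neighborhood averaging, assuming a grayscale image stored as a 2-D int array (the class and method names here are my own, not from any post):

```java
public class BoxBlur {

    // 3x3 box blur: each output pixel is the average of its neighbors.
    // Border pixels are averaged over the neighbors that actually exist.
    public static int[][] blur(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] out = new int[h][w];
        for (int r = 0; r < h; r++) {
            for (int c = 0; c < w; c++) {
                int sum = 0, count = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        int nr = r + dr, nc = c + dc;
                        if (nr >= 0 && nr < h && nc >= 0 && nc < w) {
                            sum += src[nr][nc];
                            count++;
                        }
                    }
                }
                out[r][c] = sum / count;
            }
        }
        return out;
    }
}
```

Calling `blur` two or three times on the same image is the repeated application that approximates a Gaussian kernel.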

Watershed Segmentation

Watershed is a segmentation method in which the image is treated as a topographic surface and flooded with water: the low-lying areas (pixels) of the image are submerged, so that the remaining high-lying pixels form the edges. This gives clear region boundaries and is one of the more effective methods for identifying edges. Different approaches may be employed to use the watershed principle for image segmentation :-
- Local minima of the gradient of the image may be chosen as markers; in this case an over-segmentation is produced, and a second step involves region merging.
- Marker-based watershed transformation makes use of specific marker positions which have been either explicitly defined by the user or determined automatically with morphological operators or other methods.
How to obtain watershed lines :- Remove minima which are not required (irrelevant), then merge the regions obtained (sub-regions).
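The marker-based flooding can be sketched with a priority queue: seed pixels carry region labels, and water always rises to the lowest-intensity unlabeled neighbor first. This is a simplified illustration on a grayscale int array (no explicit ridge-line handling; all names are my own):

```java
import java.util.PriorityQueue;

public class WatershedDemo {

    // Marker-based flooding: labels[r][c] > 0 marks a seed region,
    // 0 means unlabeled. Labels are filled in place.
    public static void flood(int[][] gray, int[][] labels) {
        int h = gray.length, w = gray[0].length;
        // Queue entries are {intensity, row, col}; lowest intensity floods first.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        boolean[][] queued = new boolean[h][w];
        for (int r = 0; r < h; r++) {
            for (int c = 0; c < w; c++) {
                if (labels[r][c] > 0) {
                    pq.add(new int[]{gray[r][c], r, c});
                    queued[r][c] = true;
                }
            }
        }
        int[] dr = {-1, 1, 0, 0}, dc = {0, 0, -1, 1};
        while (!pq.isEmpty()) {
            int[] p = pq.poll();
            for (int k = 0; k < 4; k++) {
                int nr = p[1] + dr[k], nc = p[2] + dc[k];
                if (nr < 0 || nr >= h || nc < 0 || nc >= w || queued[nr][nc]) continue;
                labels[nr][nc] = labels[p[1]][p[2]]; // water from this basin reaches the neighbor
                pq.add(new int[]{gray[nr][nc], nr, nc});
                queued[nr][nc] = true;
            }
        }
    }
}
```

Pixels where two different labels meet are where a full implementation would draw the watershed lines.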

Combination of Edge Detection Methods : An Example

In this post, I would like to show a few examples of images treated with different combinations of edge detection methods such as Sobel, Prewitt, Roberts, and Frei-Chen. Each example applies one filter in the horizontal direction and one in the vertical direction (starting from the normal image; the result images are not reproduced here):

HORIZONTAL      VERTICAL
None            Sobel
None            Roberts
None            Prewitt
None            Frei-Chen
Frei-Chen       None
Frei-Chen       Frei-Chen
Frei-Chen       Prewitt
Frei-Chen       Roberts
Frei-Chen       Sobel
Prewitt         Sobel
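One way these horizontal/vertical pairs can be combined is by convolving with each kernel separately and taking the gradient magnitude. A sketch for the Sobel-horizontal / Prewitt-vertical row of the table above (assumed class and method names; interior pixels only, border left at zero):

```java
public class EdgeCombo {

    // Sobel kernel responding to horizontal edges (vertical change)
    public static final double[][] SOBEL_H =
            {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    // Prewitt kernel responding to vertical edges (horizontal change)
    public static final double[][] PREWITT_V =
            {{-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1}};

    // 3x3 correlation at interior pixels; the one-pixel border stays 0.
    public static double[][] apply(int[][] img, double[][] k) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int r = 1; r < h - 1; r++) {
            for (int c = 1; c < w - 1; c++) {
                double s = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        s += k[dr + 1][dc + 1] * img[r + dr][c + dc];
                out[r][c] = s;
            }
        }
        return out;
    }

    // Combine a horizontal and a vertical response into a gradient magnitude.
    public static double[][] combine(int[][] img, double[][] hk, double[][] vk) {
        double[][] gh = apply(img, hk), gv = apply(img, vk);
        double[][] mag = new double[gh.length][gh[0].length];
        for (int r = 0; r < mag.length; r++)
            for (int c = 0; c < mag[0].length; c++)
                mag[r][c] = Math.sqrt(gh[r][c] * gh[r][c] + gv[r][c] * gv[r][c]);
        return mag;
    }
}
```

Swapping in the other kernels from the table (Roberts, Frei-Chen, or `None` as an all-zero response) produces the other combinations.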

Basic Method to Detect Edges of an Image

As you may have studied in subjects like Operations Research, or may have experienced yourself, significant changes in an image occur at edges. To detect these edges we need to track drastic changes in pixel value: a boundary or edge shows a sudden, abrupt change in pixel value. In the first stage of edge detection, the image is cleaned of noise by applying some filters. But remember that not all edges show a distinct change from their neighboring pixels, so there are problems such as false edge detection, missed actual edges, etc. Two major methods are widely employed in edge detection :- Derivative method :- In this method, we take the derivative of the image intensity with respect to position. The first derivative reaches a maximum at locations that may correspond to an edge. If we take the derivative one more time, i.e. twice on the original, those maxima turn into zero crossings. Hence another approach is to take the second derivative and look for zero crossings. This is nothing but the Laplacian approach. Gradient
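The maximum-versus-zero-crossing distinction is easiest to see on a 1-D step signal. A small sketch (hypothetical class name; `diff` is plain finite differencing):

```java
public class DerivativeEdge {

    // Finite difference: d[i] = s[i+1] - s[i], a discrete first derivative.
    public static int[] diff(int[] s) {
        int[] d = new int[s.length - 1];
        for (int i = 0; i < d.length; i++) {
            d[i] = s[i + 1] - s[i];
        }
        return d;
    }
}
```

For the step signal {0, 0, 0, 10, 10, 10}, the first derivative {0, 0, 10, 0, 0} has its maximum at the edge, and the second derivative {0, 10, -10, 0} changes sign there: the zero crossing the Laplacian approach looks for.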

Introduction to Techniques of Edge Detection in Images

Edge detection is an important weapon in image processing. As the name suggests, it detects the edges of an image; more importantly, it is able to capture some important properties of an image. Important properties of edge detection :-
1) Not all pixels in an image are important for a particular application, and the less important pixels can be eliminated; edge detection helps in re-structuring an image.
2) It eliminates the need to process and store redundant pixel information.
3) It makes analyzing and interpreting the pictures much easier.
4) It helps immensely in pattern recognition.
Example of edge detection :- (images: before and after edge detection). In the next post I will explain a few techniques (Sobel, Prewitt, Roberts, etc.) of edge detection.

Segmentation

Segmentation is nothing but partitioning an image into multiple sets of pixels. It is meant to change the representation of an image: segmentation makes analysis simpler by giving a simpler view of the same image. It involves locating the edges of the image, objects, or any other contrasting regions in an image. The output of segmentation is used heavily in many applications. For example, in medical imaging, segmentation might give you the edge of a tumor cell, as it has greater contrast compared to the rest of the image. Different ways to achieve segmentation are :-
- Thresholding
- Clustering methods
- Compression-based methods
- Histogram-based methods
- Edge detection
- Region growing methods
- Split-and-merge methods
- Partial differential equation-based methods
- Graph partitioning methods
- Watershed transformation
- Model-based segmentation
- Multi-scale segmentation
- Semi-automatic segmentation
- Neural network segmentation
We will look into each of these methods in separate posts.
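Thresholding, the first method in the list above, is the simplest to sketch: every pixel is labeled object or background depending on whether it exceeds a fixed threshold (grayscale int array; the class name is my own):

```java
public class ThresholdSeg {

    // Label each pixel 1 (object) or 0 (background) against threshold t.
    public static int[][] threshold(int[][] img, int t) {
        int h = img.length, w = img[0].length;
        int[][] mask = new int[h][w];
        for (int r = 0; r < h; r++) {
            for (int c = 0; c < w; c++) {
                mask[r][c] = img[r][c] > t ? 1 : 0;
            }
        }
        return mask;
    }
}
```

In the tumor example, a high-contrast region would survive the threshold while the surrounding tissue falls below it.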

Overlapping The Images

This example will help you to understand the concept of Graphics2D better. Just go through the example provided next and see the result. I believe the example itself is self-explanatory. As usual, if you have any doubts or need clarifications, you can ask me. Program :-

package client;

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class OverlappingImage {
    public static void main(String[] args) {
        BufferedImage image1 = null;
        BufferedImage image2 = null;
        try {
            image1 = ImageIO.read(new File("C:/temp/Winter.jpg"));
            image2 = ImageIO.read(new File("C:/temp/Sunset.jpg"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        Graphics2D g = image1.createGraphics();
        g.drawImage(image1, 0, 0, null);
        g.drawImage(image2, 100, 10, null); // draw image2 on top of image1
        g.dispose();
        File file = new File("C:/temp/overLayedImage.jpg");
        try {
            ImageIO.write(image1, "jpg", file);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Screenshot of our program from our Program

Let's try to take a screenshot of our program from our program. It sounds weird, right? That's what we will try to attempt in this post. Normally the operating system takes the screenshot when we press PrtScn on the keyboard; this time let's make our Java program take the screenshot. As usual we will just brush up on the few libraries and methods required to create this Java program.
1) getScreenSize method in java.awt.Toolkit :- Gets the size of the screen. On systems with multiple displays, the primary display is used. Multi-screen aware display dimensions are available from GraphicsConfiguration and GraphicsDevice.
2) java.awt.Robot :- This class is used to generate native system input events for the purposes of test automation, self-running demos, and other applications where control of the mouse and keyboard is needed. The primary purpose of Robot is to facilitate automated testing of Java platform implementations. Using the class to generate input events differs from posting
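Putting the two pieces together, a screenshot sketch might look like this (the output file name is my own assumption; the headless check simply skips capture when no display is attached):

```java
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ScreenCapture {

    // The capture area: a rectangle covering the whole primary display.
    public static Rectangle fullScreenRect(Dimension size) {
        return new Rectangle(0, 0, size.width, size.height);
    }

    public static void main(String[] args) throws AWTException, IOException {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available; skipping capture.");
            return;
        }
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
        Robot robot = new Robot();
        BufferedImage shot = robot.createScreenCapture(fullScreenRect(screen));
        ImageIO.write(shot, "png", new File("screenshot.png")); // assumed output path
    }
}
```

`createScreenCapture` returns a BufferedImage, so everything from the earlier posts (blurring, edge detection) can be applied to the captured screen directly.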

Raster Image

A raster graphics image is a data structure representing a generally rectangular grid of pixels, or points of color, viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats. In Java, a raster image is a class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in the DataBuffer. A Raster defines values for pixels occupying a particular rectangular area of the plane, not necessarily including (0, 0). The rectangle, known as the Raster's bounding rectangle and available by means of the getBounds method, is defined by minX, minY, width, and height values. The minX and minY values define the coordinate of the upper-left corner of the Raster. References to pixels outside of the bounding rectangle may result in an exception being thrown, or may result in references to unintended elements of
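A short sketch of the Raster API described above, using a BufferedImage created in memory (the demo class name and dimensions are my own):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class RasterDemo {

    // Write one gray sample into a 4x3 raster and read it back.
    public static int sampleRoundTrip() {
        BufferedImage img = new BufferedImage(4, 3, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = img.getRaster();
        raster.setSample(2, 1, 0, 200); // x, y, band, sample value
        return raster.getSample(2, 1, 0);
    }

    // The bounding rectangle from the text: minX, minY, width, height.
    public static Rectangle bounds() {
        return new BufferedImage(4, 3, BufferedImage.TYPE_BYTE_GRAY)
                .getRaster().getBounds();
    }
}
```

For a Raster obtained from `BufferedImage.getRaster()`, minX and minY are 0, so the bounding rectangle starts at the upper-left corner (0, 0).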

Image Processing Basics 3 - Image Buffering

Buffered Image :- As the name suggests, it is the buffering of image data. java.awt.image.BufferedImage is responsible for buffering images in Java. A BufferedImage is essentially a collection of image data: pixels, RGB colors, etc. A BufferedImage basically contains two parts, a Raster part and a Color part. The color part is responsible for interpreting the colors of the image to be buffered, while the raster part holds the raw pixel data and its representation. Raster images are stored in image files in different formats. Some of the formats are :-
1) BitMap :- A bitmap or pixmap is a type of memory organization or image file format used to store digital images.
2) OpenRaster :- OpenRaster is a file format proposed for the common exchange of layered images between raster graphics editors. It is meant as a replacement for later versions of the Adobe PSD format. OpenRaster is still in development and so far is supported by a few programs. The default file extension for OpenRaster is .ora.
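The two parts can be seen directly in code: `getRaster()` exposes the raw pixel storage, and `getColorModel()` exposes how those samples are interpreted as colors. A small sketch (the demo class name is my own):

```java
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.Raster;

public class BufferedImageParts {

    public static String describe() {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);          // pure red, packed as 0xRRGGBB
        Raster raster = img.getRaster();     // the "raster part": raw pixel storage
        ColorModel cm = img.getColorModel(); // the "color part": samples -> colors
        int pixel = img.getRGB(0, 0) & 0xFFFFFF;
        return raster.getWidth() + "x" + raster.getHeight()
                + " components=" + cm.getNumComponents()
                + " pixel=" + Integer.toHexString(pixel);
    }
}
```

For TYPE_INT_RGB the color model has three components (red, green, blue), and the red value written via setRGB comes back unchanged through the raster.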