Posts

Watershed Segmentation

Watershed is a segmentation method in which water is flooded through the image: all the low-lying areas (pixels) of the image are submerged, so that the remaining high-lying areas form the edges. This gives clear boundaries within the image, and the method is one of the more efficient ways to identify edges. Different approaches may be employed to use the watershed principle for image segmentation :- Local minima of the gradient of the image may be chosen as markers; in this case an over-segmentation is produced and a second step involves region merging. Marker-based watershed transformation makes use of specific marker positions which have been either explicitly defined by the user or determined automatically with morphological operators or other means. How to obtain watershed lines :- Remove minima which are not required (irrelevant) and merge the regions (sub-regions) obtained...
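The marker-based variant described above can be sketched with OpenCV's Java bindings. This is only a minimal sketch under my own assumptions: the input path, the choice of Otsu-thresholded blobs as markers, and OpenCV being on the classpath are illustrative choices, not part of the original post.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class WatershedSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Hypothetical input path; the original post does not name one.
        Mat image = Imgcodecs.imread("C:/temp/coins.jpg");

        // Rough foreground markers via Otsu thresholding of the grayscale image.
        Mat gray = new Mat();
        Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);

        // Label each connected foreground blob; unlabeled pixels stay 0 ("unknown").
        Mat markers = new Mat();
        Imgproc.connectedComponents(binary, markers);
        markers.convertTo(markers, CvType.CV_32S);

        // Flood from the markers; watershed lines come back as -1 in 'markers'.
        Imgproc.watershed(image, markers);
        System.out.println("Watershed lines are the pixels labelled -1 in the marker image.");
    }
}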

Combination of Edge Detection Methods : An Example

In this post, I would like to post a few examples of images treated under different combinations of Edge Detection methods like Sobel, Prewitt, Roberts, Frei-Chen etc. [Gallery of result images: the normal image, followed by each horizontal/vertical filter combination such as None + Sobel, None + Roberts, ...]

Basic Method to Detect Edges of an Image

As you may have studied in subjects like Operations Research, or might have experienced yourself, changes occur only at edges. Hence, to detect these edges we need to track the drastic change in pixel value: a boundary or an edge will show a sudden, abrupt change in pixel value. In the first stage of edge detection, the image is cleaned of noise by applying some filters. We also need to remember that not all edges will show a distinct change from their neighbouring pixels, so there are problems with false edge detection, missing actual edges, etc. Two major methods widely employed in Edge Detection :- Derivative Method :- In this method, we take the derivative of the pixel intensity across the image. We know that the derivative yields a maximum at locations which might correspond to an edge. If we take the derivative one more time, i.e. twice on the original, that maximum turns into a zero crossing. Hence one more approach is to take the second derivative and look for...
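As a concrete illustration of the first-derivative approach, here is a minimal Java sketch that convolves a grayscale view of a BufferedImage with the Sobel kernels and keeps pixels whose gradient magnitude exceeds a threshold. The class name, file paths and the threshold value are my own illustrative choices, not from the original post.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class SobelSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical input/output paths.
        BufferedImage src = ImageIO.read(new File("C:/temp/Winter.jpg"));
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage edges = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);

        // Sobel kernels: horizontal and vertical first derivatives.
        int[][] kx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
        int[][] ky = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int rgb = src.getRGB(x + i, y + j);
                        // Quick grayscale value: average of R, G and B.
                        int gray = (((rgb >> 16) & 0xff) + ((rgb >> 8) & 0xff) + (rgb & 0xff)) / 3;
                        gx += kx[j + 1][i + 1] * gray;
                        gy += ky[j + 1][i + 1] * gray;
                    }
                }
                // Gradient magnitude; large values correspond to abrupt intensity changes.
                int mag = Math.min(255, (int) Math.sqrt(gx * gx + gy * gy));
                int v = mag > 100 ? 255 : 0;   // illustrative threshold
                edges.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        }
        ImageIO.write(edges, "png", new File("C:/temp/edges.png"));
    }
}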

Introduction to Techniques of Edge Detection in Images

Edge detection is an important weapon in Image Processing. As the name suggests, it detects the edges of an image. More importantly, it is able to capture some important properties of an image. Important properties of Edge Detection :- 1) As we know, in an image not all pixels are important for a particular application; some less important pixels can be eliminated, and edge detection helps in re-structuring the image. 2) It eliminates the need to process and store redundant pixel information. 3) It makes analyzing and interpreting the pictures much easier. 4) It helps immensely in Pattern Recognition. Example of Edge Detection :- Image (Before Edge Detection) Image (After Edge Detection). In the next post I will explain a few techniques (Sobel, Prewitt, Roberts etc.) of edge detection.

Segmentation

Segmentation is nothing but partitioning the image into multiple sets of pixels. It is meant to change the representation of an image. Segmentation makes analysis simpler as it gives another, simpler view of the same image. It involves locating the edges of the image, objects, or any other contrasting things in an image. The output of segmentation is used immensely in lots of applications. For example, in medical imaging segmentation might give you the edge of a tumor, as it has greater contrast compared to the rest of the image. Different ways to achieve segmentation are :- Thresholding, Clustering methods, Compression-based methods, Histogram-based methods, Edge detection, Region growing methods, Split-and-merge methods, Partial differential equation-based methods, Graph partitioning methods, Watershed transformation, Model-based segmentation, Multi-scale segmentation, Semi-automatic segmentation, and Neural network segmentation. We will look into...
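The simplest of the approaches listed above, thresholding, can be sketched in a few lines of Java. The file paths and the fixed threshold of 128 are assumptions for illustration; the original post does not give code for this.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ThresholdSegmentation {
    public static void main(String[] args) throws Exception {
        // Hypothetical input path.
        BufferedImage src = ImageIO.read(new File("C:/temp/scan.jpg"));
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage mask = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = src.getRGB(x, y);
                int gray = (((rgb >> 16) & 0xff) + ((rgb >> 8) & 0xff) + (rgb & 0xff)) / 3;
                // Pixels brighter than the threshold go to one segment (white), the rest to the other (black).
                mask.setRGB(x, y, gray > 128 ? 0xFFFFFF : 0x000000);
            }
        }
        ImageIO.write(mask, "png", new File("C:/temp/segmented.png"));
    }
}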

Overlapping The Images

This example will help you to understand the concept of Graphics2D better. Just go through the example provided next and see the result. I believe the example itself is self-explanatory. As usual, if you have any doubts, clarifications etc. you can ask me. Program :-

package client;

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class OverlappingImage {
    public static void main(String args[]) {
        BufferedImage image1 = null;
        BufferedImage image2 = null;
        try {
            image1 = ImageIO.read(new File("C:/temp/Winter.jpg"));
            image2 = ImageIO.read(new File("C:/temp/Sunset.jpg"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        Graphics2D g = image1.createGraphics();
        g.drawImage(image1, 0, 0, null);
        g.drawImage(image2, 100, 10, null);
        ...
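The listing above is cut off by the excerpt. A plausible ending, under my own assumptions about the output path and format (the full post may finish differently), would dispose of the graphics context and write the composed image back out:

        g.dispose();
        try {
            // Hypothetical output path; saves image2 drawn on top of image1.
            ImageIO.write(image1, "jpg", new File("C:/temp/Overlapped.jpg"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}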

Screenshot of Our Program from Our Program

Let's try to take a screenshot of our program from our program. It sounds weird, right? That's what we will attempt in this post. Normally the operating system takes a screenshot when we press PrtScn on the keyboard; this time let's make our Java program take the screenshot. As usual, we will just brush up on a few libraries and methods required to create this Java program. 1) getScreenSize method in java.awt.Toolkit : Gets the size of the screen. On systems with multiple displays, the primary display is used. Multi-screen aware display dimensions are available from GraphicsConfiguration and GraphicsDevice. 2) java.awt.Robot :- This class is used to generate native system input events for the purposes of test automation, self-running demos, and other applications where control of the mouse and keyboard is needed. The primary purpose of Robot is to facilitate automated testing of Java platform implementations. Using the class to generate input events ...
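Putting the two pieces mentioned above together, a minimal screen-capture sketch could look like this. The class name and output path are my own choices; the post's full program may differ.

import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ScreenCaptureSketch {
    public static void main(String[] args) throws Exception {
        // 1) Size of the primary display, via Toolkit.
        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
        // 2) Robot captures the whole screen into a BufferedImage.
        Robot robot = new Robot();
        BufferedImage capture = robot.createScreenCapture(new Rectangle(screenSize));
        // Hypothetical output path.
        ImageIO.write(capture, "png", new File("C:/temp/screenshot.png"));
    }
}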