An Investigation into Image Hiding Steganography with Digital Signature Framework

ABSTRACT

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information inside another cover medium. While text data hiding is well established in computer security applications, image hiding is rapidly gaining popularity because an image can carry far more useful information. In this paper, we carefully investigate the concept of steganography by hiding an image within another image under a secure structural digital signature framework. Our proposed work includes initial preprocessing through filtering of the host image, followed by embedding of the secret image and a description of the image data within the host image. The stego image is then given as input to the digital signature framework, by which we ensure the secure, authentic and error-free transmission of our secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

INTRODUCTION

Since the rise of the Internet, one of the most important concerns of information technology and communication has been the security of information. Cryptography was created as a technique for securing the secrecy of communication, and many different methods have been developed to encrypt and decrypt data in order to keep a message secret. Unfortunately, it is sometimes not enough to keep the contents of a message secret; it may also be necessary to keep the existence of the message secret. The technique used to achieve this is called steganography. Steganography is the art and science of invisible communication, accomplished by hiding information inside other information and thus hiding the existence of the communicated information. The word steganography is derived from the Greek words "stegos", meaning "cover", and "grafia", meaning "writing", defining it as "covered writing". In image steganography the information is hidden exclusively in images.

The idea and practice of hiding information has a long history. In the Histories, the Greek historian Herodotus writes of a nobleman, Histaeus, who needed to communicate with his son-in-law in Greece. He shaved the head of one of his most trusted slaves and tattooed the message onto the slave's scalp. When the slave's hair grew back, the slave was dispatched with the hidden message. In the Second World War the microdot technique was developed by the Germans: information, especially photographs, was reduced in size until it was the size of a typed period. Extremely difficult to detect, a normal cover message was sent over an insecure channel with one of the periods on the paper containing hidden information. Today steganography is mostly used on computers, with digital data as the carriers and networks as the high-speed delivery channels.

Steganography differs from cryptography in the sense that where cryptography focuses on keeping the contents of a message secret, steganography focuses on keeping the existence of a message secret. Steganography and cryptography are both ways to protect information from unwanted parties, but neither technology alone is perfect: once the presence of hidden information is revealed, or even suspected, the purpose of steganography is partly defeated. The strength of steganography can thus be amplified by combining it with cryptography. Two other technologies that are closely related to steganography are watermarking and fingerprinting. These technologies are mainly concerned with the protection of intellectual property, so their algorithms have different requirements than steganography; the requirements of a good steganographic algorithm are discussed below. In watermarking, all of the instances of an object are "marked" in the same way, and the information hidden is usually a signature signifying origin or ownership for the purpose of copyright protection. With fingerprinting, on the other hand, different, unique marks are embedded in distinct copies of the carrier object that are supplied to different customers. This enables the intellectual property owner to identify customers who break their licensing agreement by supplying the property to third parties. In watermarking and fingerprinting the fact that information is hidden inside the files may be public knowledge, and sometimes even visible, while in steganography the imperceptibility of the information is crucial. A successful attack on a steganographic system consists of an adversary observing that there is information hidden inside a file, while a successful attack on a watermarking or fingerprinting system is not to detect the mark, but to remove it.

Steganography concepts

Although steganography is an ancient subject, the modern formulation of it is often given in terms of the prisoner's problem proposed by Simmons, where two inmates wish to communicate in secret to hatch an escape plan. All of their communication passes through a warden who will throw them in solitary confinement should she suspect any covert communication. The warden, who is free to examine all communication exchanged between the inmates, can either be passive or active. A passive warden simply examines the communication to try and determine if it potentially contains secret information. If she suspects a communication to contain hidden information, a passive warden takes note of the detected covert communication, reports it to some outside party and lets the message through without blocking it. An active warden, on the other hand, will try to deliberately alter communication with suspected hidden information in order to remove the information.

Different kinds of steganography

Almost all digital file formats can be used for steganography, but the formats that are more suitable are those with a high degree of redundancy. Redundancy can be defined as the bits of an object that provide accuracy far greater than necessary for the object's use and display. The redundant bits of an object are those bits that can be altered without the alteration being detected easily. Image and audio files especially comply with this requirement, while research has also uncovered other file formats that can be used for information hiding. The four main categories of file formats that can be used for steganography are text, images, audio and network protocols.

Hiding information in text is historically the most important method of steganography. An obvious method was to hide a secret message in every nth letter of every word of a text message. It is only since the beginning of the Internet and all the different digital file formats that it has decreased in importance. Text steganography using digital files is not used very often, since text files have a very small amount of redundant data. Given the proliferation of digital images, especially on the Internet, and given the large amount of redundant bits present in the digital representation of an image, images are the most popular cover objects for steganography; the next sections of this paper focus on hiding information in images. To hide information in audio files, techniques similar to those for image files are used. One technique unique to audio steganography is masking, which exploits the properties of the human ear to hide information unnoticeably: a faint, but audible, sound becomes inaudible in the presence of another, louder audible sound. This property creates a channel in which to hide information. Although nearly equal to images in steganographic potential, the larger size of meaningful audio files makes them less popular to use than images. The term protocol steganography refers to the technique of embedding information within messages and network control protocols used in network transmission. In the layers of the OSI network model there exist covert channels where steganography can be used. An example of where information can be hidden is in the header of a TCP/IP packet, in fields that are either optional or never used. A paper by Ahsan and Kundur provides more information on this.

Image steganography

As stated earlier, images are the most popular cover objects used for steganography.  In the domain of digital images many different image file formats exist, most of them for specific applications.  For these different image file formats, different steganographic algorithms exist.  

Image definition

To a computer, an image is a collection of numbers representing different light intensities in different areas of the image. This numeric representation forms a grid, and the individual points are referred to as pixels. Most images on the Internet consist of a rectangular map of the image's pixels (represented as bits), recording where each pixel is located and its colour. These pixels are displayed horizontally, row by row. The number of bits in a colour scheme, called the bit depth, refers to the number of bits used for each pixel. The smallest bit depth in current colour schemes is 8, meaning that 8 bits are used to describe the colour of each pixel. Monochrome and greyscale images use 8 bits per pixel and are able to display 256 different colours or shades of grey. Digital colour images are typically stored in 24-bit files and use the RGB colour model, also known as true colour. All colour variations for the pixels of a 24-bit image are derived from three primary colours: red, green and blue, and each primary colour is represented by 8 bits. Thus in one given pixel there can be 256 different quantities of red, green and blue, adding up to more than 16 million combinations and resulting in more than 16 million colours. Not surprisingly, the larger the number of colours that can be displayed, the larger the file size.
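
These definitions can be verified directly in MATLAB. Below is a minimal sketch, assuming the sample image peppers.png that ships with the Image Processing Toolbox:

% Inspect the numeric structure of a 24-bit colour image.
% peppers.png ships with the Image Processing Toolbox.
img = imread('peppers.png');         % H-by-W-by-3 uint8 array
[h, w, channels] = size(img);
fprintf('%d x %d pixels, %d channels, class %s\n', h, w, channels, class(img));

% Each channel holds 8 bits per pixel: 3 x 8 = 24-bit true colour.
red   = img(:,:,1);                  % 256 possible intensities (0-255)
green = img(:,:,2);
blue  = img(:,:,3);

% A single pixel is a triple of three 8-bit values.
disp(squeeze(img(1,1,:))');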

OBJECTIVE

The transmission of digital colour images often suffers from data redundancy, which requires huge storage space. In order to reduce transmission and storage cost, the image is compressed by lowering the number of possible colours, which reduces the image size to a great extent. In this regard, colour quantization can be carried out, which approximates the original pixels of the secret image with their nearest representative colours and thus reduces the number of possible colours. This approximation intends to preserve the image quality as much as possible, so that the visual similarity between the original and the optimized image is kept. Since these methods depend heavily on the colour data sets they encounter and perform the quantization accordingly, their performance is specific to each quantization task. Authentication of the sender is yet another challenging issue in computer security: malicious forgery can take place if authentication is not ensured properly. The idea of a digital signature is very significant, as it ensures the authenticity of the sender as well as the transmission of the correct data; any change in a pixel can be distinguished from the actual set of pixels. The robustness of the digital signature framework is widely accepted for the transmission of secret information over insecure networks.
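
The colour quantization described above can be prototyped in a few lines of MATLAB. A minimal sketch follows, assuming a 24-bit RGB secret image and an illustrative palette of 64 colours; rgb2ind performs minimum-variance quantization, though the framework does not commit to a particular algorithm:

% Colour-quantization sketch: map the secret image's pixels to their
% nearest representative colours in a reduced palette. rgb2ind uses
% minimum-variance quantization; 64 colours is an illustrative choice.
secret = imread('peppers.png');
[indexed, palette] = rgb2ind(secret, 64);
quantized = ind2rgb(indexed, palette);   % back to RGB for display
figure
subplot(1,2,1), imshow(secret),    title('Original')
subplot(1,2,2), imshow(quantized), title('64-colour quantized')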

PROBLEM FORMULATION

The above mentioned factors have motivated us to develop a framework that supports steganography for hiding images with the application of a Structural Digital Signature (SDS). Our proposed framework includes an initial preprocessing of host images to eliminate unwanted noise, colour quantization of the secret image to reduce storage space, embedding of the secret image with annotation data (a description of the image), and transmission of the stego image along with the digital signature over a wireless channel. The proposed framework also includes authenticating the sender, followed by error detection and correction of the received data and, finally, extraction of the secret image with its annotation data.
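
As one illustration of the preprocessing stage, a hedged sketch of noise filtering on the host image is shown below. The framework does not mandate a particular filter; the 3x3 median filter here is just a common choice against impulse noise:

% Host-image preprocessing sketch: suppress impulse noise before
% embedding. A 3x3 median filter is one common choice; the framework
% does not mandate a specific filter.
host = imread('cameraman.tif');
if ndims(host) == 3, host = rgb2gray(host); end
clean = medfilt2(host, [3 3]);           % 3x3 median filtering
figure, imshowpair(host, clean, 'montage')
title('Host image before (left) and after (right) filtering')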

LITERATURE SURVEY

On the security of structural information extraction/embedding for images

In addition to robustness and fragility, security is quite an important issue in media authentication systems. This paper first examines the insecurity of several block-based authentication methods under counterfeit attacks. Then, we prove that the proposed digital signature, which is composed of structural information, is content-dependent and provides security against forgery attacks. Experimental results demonstrate the benefits of exploiting structural information in a media authentication system.

An Investigation into Image Hiding Steganography with Digital Signature Framework

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information inside another cover medium. While text data hiding is well established in computer security applications, image hiding is rapidly gaining popularity because an image can carry far more useful information. In this paper, we have carefully investigated the concept of steganography by incorporating image hiding within another image with a secure structural digital signature framework. Our proposed work includes the initial image preprocessing tasks through filtering of the host image, followed by embedding of the secret image and a description of the image data within the host image. Later, the stego image is given as input to the digital signature framework, by which we ensure the secure, authentic and error-free transmission of our secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

Fuzzy Filters to the Reduction of Impulse and Gaussian Noise in Gray and Color Images

Noise removal from a corrupted image finds vital application in image transmission over wide-band networks. Two new and simple fuzzy filters, named the Fuzzy Tri-State filter and the Probor rule based fuzzy filter, are proposed to remove random-valued impulse noise and Gaussian noise in digital grayscale and colour images. The Fuzzy Tri-State filter is a non-linear filter proposed for preserving image details while effectively reducing both types of noise. The Probor filter is subdivided into two sub-filters: the first sub-filter is responsible for quantifying the degree to which a pixel must be corrected, using Euclidean distance, and the goal of the second sub-filter is to perform the correction operations on the output of the first. These filters are compared with a few existing techniques to highlight their effectiveness. These filtering techniques can be used as a preprocessing step for edge detection of Gaussian-corrupted digital images, and in the case of impulse-noise-corrupted images the filter performs well at preserving details and suppressing noise.

A Variant of LSB Steganography for Hiding Images in Audio

Information hiding is the technology of embedding secret information into cover data in a way that keeps the secret information invisible. This paper presents a new steganographic method for embedding an image in an audio file. Emphasis is on the proposed scheme of image hiding in audio and its comparison with the simple Least Significant Bit insertion method of data hiding in audio.

A steganography algorithm for hiding image in Image by improved LSB substitution by minimize detection

Steganography is a branch of information hiding that allows people to communicate secretly. As increasingly more material becomes available electronically, the influence of steganography on our lives will continue to grow. Much confidential information has been leaked to rival firms using steganographic tools that hide the information in music and picture files. The application of steganography is an important motivation for feature selection. In recent years many successful steganography methods have been proposed, but they are challenged by steganalysis, a type of attack on steganography in which an algorithm detects the stego-message through statistical analysis of pixel values [1][2]. To ensure security against steganalysis attacks, a new steganographic algorithm for 8-bit (grayscale) or 24-bit (colour) images is presented in this paper, based on logical operations. The algorithm embeds the MSBs of the secret image into the LSBs of the cover image: n LSBs of a cover-image byte are replaced by n MSBs of the secret image. The image quality of the stego-image can be greatly improved with low extra computational complexity, and the worst-case mean-square error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image when n <= 4, because of the better PSNR achieved by this technique. The work rests on the assumption that if the feature is visible, the point of attack is evident, so the goal is always to conceal the very existence of the embedded data.
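
The substitution described in this abstract (n MSBs of the secret image into n LSBs of the cover) can be sketched with MATLAB's bit operations. This is not the cited paper's code; n = 4 and the sample images are illustrative assumptions:

% Sketch of the n-MSB-into-n-LSB substitution described above, n = 4.
% Not the cited paper's implementation; just the bit arithmetic, using
% two sample images that ship with the Image Processing Toolbox.
n      = 4;
cover  = imread('cameraman.tif');
secret = imresize(imread('moon.tif'), size(cover));  % match cover size

msb   = bitshift(secret, -(8 - n));                   % top n bits of the secret
stego = bitor(bitshift(bitshift(cover, -n), n), msb); % clear n LSBs, insert

% Extraction: promote the hidden bits back to the MSB positions.
recovered = bitshift(bitand(stego, 2^n - 1), 8 - n);
figure, imshowpair(stego, recovered, 'montage')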

METHODOLOGY / PLANNING OF WORK

Following are the changes made to the above methodology for better security:

  1. SI - Stego (secret) image
  2. CI 1 - Cover image 1
  3. Hide the stego image in cover image 1 using the LSB method as modified by the author: instead of changing only 1 bit, more than 1 bit is changed for security purposes (a hedged sketch of this multi-bit embedding follows this section).
  4. Apply the signature to cover image 1 after embedding the stego image into it.
  5. CI 2 - Cover image 2
  6. Now cover image 1 acts as the stego image for cover image 2. This is the third level of security: even if someone manages to crack the upper level, the attacker still has to get through another two levels.
  7. This will now be the final cover image to transmit.
  8. At the receiver end, we receive cover image 2.
  9. Apply the reverse LSB operation on cover image 2 to obtain cover image 1.
  10. Verify the signature on cover image 1 to obtain the cover image with the stego image.
  11. Again apply the reverse LSB operation on cover image 1 to obtain the stego image.

We will compare the proposed work with the work in the base paper on the basis of PSNR and MSE values.
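
A minimal sketch of the modified embedding of step 3 (k secret bits written into the k least significant bits of each cover pixel) together with the PSNR/MSE comparison is given below. The value k = 2, the sample images and the bit ordering are illustrative assumptions, not the author's exact implementation:

% Sketch of step 3's modification: embed k secret bits into the k least
% significant bits of each cover pixel (k = 2 is illustrative), then
% report the PSNR/MSE used for the comparison with the base paper.
k     = 2;
cover = imread('cameraman.tif');
msg   = imresize(imread('moon.tif'), [64 64]);

bits  = reshape(dec2bin(msg(:), 8).' == '1', 1, []);  % secret bit stream
stego = cover;
idx   = 1;
for p = 1:numel(stego)
    if idx + k - 1 > numel(bits), break, end
    for b = 1:k                               % overwrite bits 1..k (LSBs)
        stego(p) = bitset(stego(p), b, double(bits(idx)));
        idx = idx + 1;
    end
end

% Embedding distortion of the stego image relative to the cover.
fprintf('PSNR = %.2f dB, MSE = %.4f\n', psnr(stego, cover), immse(stego, cover));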

FUTURE SCOPE

Although only some of the main image steganographic techniques were discussed in this paper, one can see that there exists a large selection of approaches to hiding information in images. All the major image file formats have different methods of hiding messages, each with its own strong and weak points. Thus, for an agent to decide which steganographic algorithm to use, he would have to consider the type of application he wants to use the algorithm for and whether he is willing to compromise on some features to ensure the security of others. Hence we could mix and match a series of algorithms along with ours to find the optimal process for a desired application. We will also attempt to improve performance in terms of PSNR.

CONCLUSION

We proposed a framework to support the concept of image steganography within a Structural Digital Signature environment. We attempted to include as many of the important phases concerned with image security and accurate transmission as possible. The robustness of our framework lies in the incorporation of SDS, as it efficiently authenticates the sender and verifies the accuracy of the transmitted data. With the incorporation of SDS, we believe the concept of image steganography will contribute to a large extent to carrying out safe and secure transmission of image data.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the code listings below into a separate M file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. Signature
  4. Also note that these listings are not in any particular order; copy them all and then run the program.
  5. Run the "Final.m" file.

Code 1 – Script M File – Final.m

clc
clear
close all

% READ THE REQUIRED IMAGES
% read the host1 image
[file,path]=uigetfile('*.jpg','Select the host 1 image');
img=strcat(path,file);
host1=imread(img);
if length(size(host1))==3
    host1=rgb2gray(host1);    
end    

% read the host2 image
[file,path]=uigetfile('*.jpg','Select the host 2 image');
img=strcat(path,file);
host2=imread(img);
if length(size(host2))==3
    host2=rgb2gray(host2);    
end    

% read the message image
[file,path]=uigetfile('*.jpg','Select the msg image');
img=strcat(path,file);
msg=imread(img);
if length(size(msg))==3
    msg=rgb2gray(msg);   
end   

signature='Welcome1234';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% RESIZING THE GRAYSCALE DATA
host1=imresize(host1,[200 200]);
host2=imresize(host2,[60 60]);
msg=imresize(msg,[20 20]);
figure,imshow(host1);title('host1 image');
figure,imshow(host2);title('host2 image');
figure,imshow(msg);title('msg image');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EMBEDDING PROCESS
% embedding msg into host2
[DECc,len1]=embedding_func(host2,msg);
figure, imshow(uint8(DECc)); title('Cover image after first encryption')

% embedding host2 into host1
[final_encrypted,len2]=embedding_func(host1,DECc);
figure, imshow(uint8(final_encrypted)); title('final encryption')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DECRYPTION PROCESS
disp('Please enter signature to continue. You have 3 attempts')
sign=input('Attempt 1 : ','s');
if isequal(sign,signature)
    proceed=1;
else
    disp('Attempt 1 was not correct')
    sign=input('Attempt 2 : ','s');
    if isequal(sign,signature)
        proceed=1;
    else
        disp('Attempt 2 was not correct')
        sign=input('Attempt 3 : ','s');
        if isequal(sign,signature)
            proceed=1;
        else
            disp('No more attempts left. Program will now terminate');
            proceed=0;
        end
    end
end


if proceed==1 
    % decryption level 1: recover the intermediate image (host2 with message)
    host1_de=decryption_func(final_encrypted,len2);
    figure, imshow(uint8(host1_de)); title('Intermediate image after first decryption')

    % decryption level 2: recover the hidden message image
    host2_de=decryption_func(host1_de,len1);
    figure, imshow(uint8(host2_de)); title('Message image after second decryption')
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % RESULTS
    figure
    subplot(1,2,1)
    imshow(msg)
    title('Original Message')
    subplot(1,2,2)
    imshow(uint8(host2_de))
    title('Decrypted message')

    figure
    subplot(1,2,1)
    imshow(host2)
    title('Intermediate Image Orig.')
    subplot(1,2,2)
    imshow(uint8(host1_de))
    title('Intermediate Image Decrypted')
    
    results(msg,host2_de)
end

Code 2 – Function M File – embedding_func.m

function [DECc,len]=embedding_func(cover,msg)
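% EMBEDDING_FUNC hides MSG inside the least significant bits of COVER.
% Every pixel of COVER is expanded into an 8-bit binary row (MSB first);
% the bits of MSG, taken row by row, overwrite the last (least
% significant) bit of successive COVER pixels. DECc is the rebuilt
% stego image and LEN is the number of embedded bits, which the
% receiver needs for extraction.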

[rc,cc]=size(cover);
[rm,cm]=size(msg);

BINARYc=[];
for i=1:rc
    for j=1:cc
        pixel_val=cover(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYc=[BINARYc; bin_pixel_val];
    end
end

BINARYm=[];
for i=1:rm
    for j=1:cm
        pixel_val=msg(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYm=[BINARYm bin_pixel_val];
    end
end
len=length(BINARYm);
for i=1:length(BINARYm)
    BINARYc(i,end)=BINARYm(i);
end

inc=1;
for i=1:rc
    for j=1:cc
        pixel_val=bin2dec(num2str(BINARYc(inc,:)));
        DECc(i,j)=pixel_val;
        inc=inc+1;
    end
end



Code 3 – Function M File – decryption_func.m

function MSG=decryption_func(cover,len)
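% DECRYPTION_FUNC extracts LEN hidden bits from the least significant
% bit of the COVER pixels, groups every 8 bits into one row of Bin_msg
% and reassembles the rows into the square hidden image MSG.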

% extract the last bit from each pixel
[r,c]=size(cover);
rm=1;
rc=0;
flag=0;
for i=1:r
    for j=1:c
        pixel_val=cover(i,j);
        binary=fliplr(dec2binvec(double(pixel_val),8));   
        rc=rc+1;
        flag=flag+1;
        Bin_msg(rm,rc)=binary(end);        
        if rc==8
            rm=rm+1;
            rc=0;
        end
        if flag==len
            break
        end
    end
    if flag==len
        break
    end
end

% convert binary to decimal
row=sqrt(size(Bin_msg,1));
col=row;
inc=0;
for i=1:row
    for j=1:col
        inc=inc+1;
        bin=Bin_msg(inc,:);
        pixel_val=bin2dec(num2str(bin));
        MSG(i,j)=pixel_val;
    end
end

end

Code 4 – Function M File – results.m

function results(orig,decrypted)
% RESULTS compares the original and decrypted message images and
% reports the standard quality metrics computed by measerr.

disp('Comparison of original message and decrypted message :')
orig=double(orig);
decrypted=double(decrypted);
[PSNR,MSE,MAXERR,L2RAT]=measerr(orig,decrypted);

disp(['PSNR value is : ' num2str(PSNR)])
disp(['MSE value is : ' num2str(MSE)])
disp(['MAXERR value is : ' num2str(MAXERR)])
disp(['L2RAT value is : ' num2str(L2RAT)])

end

Shadow Detection and Correction in Images

INTRODUCTION

Image processing drives advances in many real-life fields, such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing and many more. Hence, image processing has been an area of research that attracts the interest of a wide variety of researchers. It deals with the processing of images, video and so on, with various aspects such as image zooming, image segmentation and image enhancement.

Detection and removal of shadows play an important and vital role in images as well as in videos, mainly in the remote sensing field and in surveillance systems, so reliable detection of shadows is essential for removing them effectively. The problem of shadowing is especially significant in very high-resolution satellite imaging, and the shadowing effect is compounded in regions where there are dramatic changes in surface elevation, mostly in urban areas. The obstruction of light by objects creates shadows in a scene; an object may also cast a shadow on itself, called a self-shadow. Shadow areas are less illuminated than their surroundings. In some cases shadows provide useful information, such as the relative position of an object with respect to the light source, but they cause problems in computer vision applications like segmentation, object detection and object counting. Thus shadow detection and removal is a preprocessing task in many computer vision applications.

Based on intensity, shadows are of two types: hard and soft. Soft shadows retain the texture of the background surface, whereas hard shadows are too dark and have little texture. The detection of hard shadows is therefore complicated, as they may be mistaken for dark objects rather than shadows. Most shadow detection methods need multiple images for camera calibration, but the best technique should be able to extract shadows from a single image, even though it is difficult to distinguish dark objects from shadows in a single image. Shadow detection and removal is an important task when dealing with outdoor images. Shadows occur when objects occlude light from a light source. They provide rich information about object shapes as well as light orientation, yet sometimes they prevent us from recognizing the original object. Shadows in an image reduce the reliability of many computer vision algorithms and often degrade the visual quality of images. Shadow removal is therefore an important preprocessing step for computer vision algorithms and image enhancement.

Nowadays, surveillance systems are in huge demand, mainly for their applications in public areas such as airports, stations, subways, entrances to buildings and mass events. In this context, reliable detection of moving objects is the most critical requirement for any surveillance system. In the moving object detection process, one of the main challenges is to differentiate moving objects from their shadows. Moving cast shadows are usually misclassified as part of the moving object, making subsequent analysis stages, such as object classification or tracking, perform inaccurately. In traffic surveillance the system must be able to track the flow of traffic; shadows may lead to misclassification of traffic, making the exact traffic flow difficult to determine, which becomes a major drawback of a surveillance system.

OBJECTIVES

Detecting objects in shadows is a challenging task in computer vision. For example, in the clear path detection application, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear path detection algorithms less robust. Shadows confound many object detection algorithms: they cause ambiguities between edges due to illumination changes and edges due to material changes, and such ambiguities make automotive vision applications less robust. Hence, one possible way to reduce the effects of shadows is to identify them and derive images in which shadows are reduced. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows.

By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector can discriminate strong shadow edges from other edges in images by learning the distinguishing characteristics. In addition, spatial smoothing is used to further improve shadow edge detection. We present an approach to reduce shadow effects by detecting shadow edges.

Shadow removal relies on the classification of image edges as shadow edges or non-shadow edges. A non-shadow edge (e.g., an object edge) represents a transition between two different surfaces. In contrast, shadow edges are due to intensity differences on the same surface caused by different illumination strengths. Therefore, eliminating shadow edges removes the changes caused by illumination, thus reducing shadow effects. In images captured by a vehicle's front camera, the majority of shadows are cast shadows with strong shadow edges; they usually exhibit large intensity changes, which impair clear path detection. We call these edges "strong shadow edges". Our goal is to remove these shadows by detecting strong shadow edges. In addition, the proposed method can partially process soft shadows; however, soft shadows are not the main target of this work, since they, along with blurred shadow edges, have less impact on clear path detection.

PROBLEM FORMULATION

  1. We have to generate all the edge candidates of the input image.
  2. In the feature extraction and edge classifier stages we have to extract features from the edges obtained in step 1 and distinguish shadow edges from non-shadow edges.
  3. In the spatial smoothing stage, all the edges obtained in step 2 are smoothed.
  4. After that we have to obtain an image showing only the shadow edges from step 3, removing all non-shadow edges.
  5. A Gaussian filter is used to further filter the shadow edges (see the sketch after this list).
  6. The image obtained in step 5 is used to remove the shadow.
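
Steps 1 and 5 can be sketched as follows; the Canny edge map stands in for the classified shadow-edge map of steps 2-4, and sigma = 2 is an illustrative choice:

% Sketch of steps 1 and 5: generate edge candidates, then Gaussian-
% filter the (binary) shadow-edge map. The Canny map stands in for the
% classified output of steps 2-4; sigma = 2 is illustrative.
I        = rgb2gray(imread('peppers.png'));
edgeMap  = edge(I, 'canny');                  % step 1: edge candidates
smoothed = imgaussfilt(double(edgeMap), 2);   % step 5: Gaussian filtering
figure, imshowpair(edgeMap, smoothed, 'montage')
title('Edge map (left) and Gaussian-filtered map (right)')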

METHODOLOGY / PLANNING OF WORK

GENERATION OF EDGE PATCH CANDIDATES

  • Gradients caused by surface changes (object edges) and illumination changes (shadow edges) have large magnitudes, while road-surface changes lead to gradients with small magnitudes.
  • Image gradients whose magnitude is smaller than a threshold, and the gradients of the whole image, are calculated separately for the regression model.
  • The threshold value extracts strong shadow edges.
  • Shadow edges are extracted using patches instead of pixels.
  • Any patch containing more than x edge pixels is classified as an edge patch candidate (a sketch of this candidate generation follows the list).
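
A hedged sketch of this candidate generation, with an illustrative patch size, gradient threshold and edge-pixel count x:

% Sketch of the bullets above: threshold the gradient magnitude, then
% keep every patch with more than minEdgePix strong-gradient pixels.
% Patch size and both thresholds are illustrative choices.
I      = im2double(rgb2gray(imread('peppers.png')));
Gmag   = imgradient(I);                  % gradient magnitudes
strong = Gmag > 0.3;                     % keep large magnitudes only

psz        = 16;                         % patch side length (pixels)
minEdgePix = 20;                         % the "x" of the last bullet
[h, w]     = size(strong);
cand = false(floor(h/psz), floor(w/psz));
for r = 1:size(cand,1)
    for c = 1:size(cand,2)
        blk = strong((r-1)*psz+1:r*psz, (c-1)*psz+1:c*psz);
        cand(r,c) = nnz(blk) > minEdgePix;   % edge patch candidate
    end
end
fprintf('%d edge patch candidates\n', nnz(cand));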

FEATURE EXTRACTION

Because the colour ratio between shadow and non-shadow regions and the texture information did not work well in previous studies, three types of features are used here: illuminant-invariant features, illumination direction features and neighbouring similarity features.

Illuminant-invariant features:-

The reflectance of the road surface is an intrinsic property that can be utilized to distinguish a shadow edge patch from a non-shadow edge patch. The RGB space is converted into an illuminant-invariant colour space, from which two features are extracted:

First, the variance of colours: pixel values from the same surface in a shadow edge patch have a smaller variance, while pixel values from different surfaces in object patches exhibit a larger variance.

Second, the entropy of gradients: in the absence of illumination effects, the texture of the single surface in a shadow edge patch can be described by gradients with smaller entropy, whereas the texture of multiple surfaces in a non-shadow edge patch leads to a larger entropy of gradients.
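
A minimal sketch of these two features for a single patch; the grayscale image stands in for the illuminant-invariant channel, and the patch location and bin count are illustrative:

% Sketch of the two illuminant-invariant features for one 16x16 patch.
% The grayscale image stands in for the illuminant-invariant channel;
% patch location and histogram bin count are illustrative.
P    = im2double(imread('cameraman.tif'));
blk  = P(101:116, 101:116);                 % one patch

f_var = var(blk(:));                        % feature 1: variance of colours

Gmag  = imgradient(blk);                    % feature 2: entropy of gradients
p     = histcounts(Gmag(:), 32, 'Normalization', 'probability');
p     = p(p > 0);
f_ent = -sum(p .* log2(p));
fprintf('variance = %.4f, gradient entropy = %.3f bits\n', f_var, f_ent);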

Illumination Direction Features:-

The 2D log-chromaticity values of a shadow edge patch from the same colour surface fit a line parallel to the calibrated illumination direction; they also have a small variance after projection onto the illuminant-invariant direction.

The 2D log-chromaticity values of a non-shadow (object) edge patch fit a direction other than the illumination direction and generate a projection onto its perpendicular direction with large variance.
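
A sketch of the projection-variance computation; the calibrated illumination direction theta is a placeholder assumption, since the calibration itself is outside the scope of this summary:

% Sketch of the illumination-direction feature: project a patch's 2D
% log-chromaticity values onto the axis perpendicular to the calibrated
% illumination direction and measure the variance. theta is a
% placeholder for the calibrated angle.
rgb = im2double(imread('peppers.png'));
blk = rgb(101:116, 101:116, :);
R = blk(:,:,1) + eps; G = blk(:,:,2) + eps; B = blk(:,:,3) + eps;
chi = [log(R(:)./G(:)), log(B(:)./G(:))];    % 2D log-chromaticity values

theta = pi/3;                                % calibrated direction (assumed)
invariantAxis = [-sin(theta); cos(theta)];   % perpendicular to illumination
projVar = var(chi * invariantAxis);          % small for shadow edge patches
fprintf('projection variance = %.5f\n', projVar);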

Neighboring Similarity Features:-

  • Neighboring patches on both sides of an edge can also provide evidence to distinguish shadow edges from non-shadow edges.
  • To characterize properties of edges in a patch, we examine the filter responses of the Gabor filters at all orientations (different angles).
  • We employ two features which capture the texture differences between the pair of neighbouring patches (a Gabor-filter sketch follows this list):

1) The gradient features are represented as a histogram of a set of Gabor filter responses computed over the patch.

2) The texture features are a set of emergent patterns sharing a common property all over the image.
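
A sketch of the Gabor-response features for one patch, using imgaborfilt from the Image Processing Toolbox; the wavelength, orientation set, patch location and bin count are illustrative:

% Sketch of the Gabor-response feature for one patch: filter the image
% at several orientations and summarise each response by a histogram.
% Wavelength, orientations, patch location and bin count are illustrative.
I      = im2double(imread('cameraman.tif'));
orient = 0:30:150;                        % six orientations, in degrees
feat   = [];
for th = orient
    mag  = imgaborfilt(I, 4, th);         % Gabor magnitude response
    blk  = mag(101:116, 101:116);         % responses within one patch
    feat = [feat, histcounts(blk(:), 8)]; %#ok<AGROW> 8-bin histogram
end
% Comparing 'feat' between the two patches flanking an edge measures
% their texture similarity.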

SHADOW EDGE DETECTION

Every patch is classified as either a shadow edge patch or a non-shadow edge patch. For this purpose, we employ a binary Support Vector Machine (SVM) classifier; this classification method provides a fast decision and outputs probabilities. We use a maximum likelihood estimate to detect shadow edge patches and non-shadow edge patches. The initial probabilities and classifier decisions are used as inputs to the spatial patch smoothing module to achieve improved results. After obtaining patch-based detection results, we use the edge pixels from all detected shadow edge patches to generate a shadow edge map.
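
A minimal sketch of such a patch classifier with fitcsvm and fitPosterior from the Statistics and Machine Learning Toolbox; the training data here are random placeholders, standing in for the features of the previous section:

% Sketch of the patch classifier: a binary SVM with posterior
% probabilities (Statistics and Machine Learning Toolbox). The feature
% matrix X and labels y are random placeholders for real patch features.
rng(0);
X = [randn(50,4) + 1; randn(50,4) - 1];   % 50 shadow / 50 non-shadow patches
y = [ones(50,1); zeros(50,1)];            % 1 = shadow edge patch

model = fitcsvm(X, y, 'KernelFunction', 'linear');
model = fitPosterior(model);              % map SVM scores to probabilities

[label, prob] = predict(model, randn(1,4));  % classify one new patch
fprintf('label = %d, P(shadow edge) = %.2f\n', label, prob(2));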

LITERATURE SURVEY

Strong Shadow Removal Via Patch-Based Shadow Edge Detection

Detecting objects in shadows is a challenging task in computer vision. For example, in the clear path detection application, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear path detection algorithms less robust. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows. By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector can discriminate strong shadow edges from other edges in images by learning the distinguishing characteristics. In addition, spatial smoothing is used to further improve shadow edge detection. Numerical experiments show convincing results: shadows on the road are either removed or attenuated with few visual artifacts, which benefits clear path detection. In addition, we show that the proposed method outperforms state-of-the-art algorithms in different conditions.

Detecting and Removing Shadows

This paper describes a method for the detection and removal of shadows with hard borders in RGB images. The proposed method begins with a segmentation of the colour image; it is then decided whether a segment is a shadow by examining its neighbouring segments. We use the method introduced by Finlayson et al. [1] to remove the shadows by zeroing the shadow's borders in an edge representation of the image, and then re-integrating the edges using the method introduced by Weiss [2]. This is done for all of the colour channels, thus leaving a shadow-free colour image. Unlike previous methods, the present method requires neither a calibrated camera nor multiple images, and it is complementary to current illumination correction algorithms. Examination of a number of examples indicates that this method yields a significant improvement over previous methods.

Shadow detection using color and edge information

Shadows appear in many scenes. Humans can easily distinguish shadows from objects, but shadow detection remains a challenge for intelligent automated systems. Accurate shadow detection can be difficult due to illumination variations of the background and similarity between the appearance of the objects and the background. Colour and edge information are two popular features that have been used to distinguish cast shadows from objects. However, this becomes a problem when the difference in colour information between object, shadow and background is poor, the edge of the shadow area is not clear, and the shadow detection method relies on colour or edge information alone. In this article a shadow detection method using both colour and edge information is presented. In order to improve the accuracy of shadow detection using colour information, a new formula is used in the denominator of the original c1 c2 c3 colour model. In addition, using the hue difference of foreground and background is proposed. Furthermore, edge information is applied separately and the results are combined using a Boolean operator.

Review on Shadow Detection and Removal Techniques/Algorithms

Shadow detection and removal in various real-life scenarios, including surveillance systems, indoor and outdoor scenes, and computer vision systems, remains a challenging task. Shadows in a traffic surveillance system may cause misclassification of the actual object, reducing system performance. There are many algorithms and methods that help to detect a shadow in an image and remove it. This paper aims to provide a survey of various algorithms and methods for shadow detection and removal, with their advantages and disadvantages, and will serve as a quick reference for researchers working in the same field.

An Interactive Shadow Detection and Removal Tool using Granular Reflex Fuzzy Min-Max Neural Network

This work proposes an interactive tool to detect and remove shadows from colour images. The proposed method uses a Granular Reflex Fuzzy Min-Max Neural Network (GrRFMN) as a shadow classifier. GrRFMN is capable of processing granules of data, i.e. groups of pixels in the form of hyperboxes. Granular data classification and clustering techniques are emerging and are finding importance in the field of computer vision. Shadow detection and removal is an interesting and difficult image enhancement problem, and in this work a novel granule-based approach for colour image enhancement is proposed. During the training phase, GrRFMN learns shadow and non-shadow regions through interaction with the user. The trained GrRFMN is then used to compute the fuzzy memberships of image granules in the region of interest to the shadow and non-shadow regions, after which a post-processing of pixels based on the fuzzy memberships is carried out to remove the shadow. As GrRFMN is trainable online in a single pass through the data, the proposed method is fast enough to interact with the user.

Algorithm for shadow detection in real color images

Shadow detection in real-scene images is always a challenging but interesting area. Most shadow detection and segmentation methods are based on image analysis. This paper aims to give a comprehensive and critical study of current shadow detection methods; various approaches related to shadow detection in images are discussed. The principles of these methods rely on the intensity difference or the texture analysis of the shadow area and the bright area of the same surface. A real-time shadow detection scheme for colour images is presented in this paper, in which the RGB ellipsoidal region technique is used to detect shadows in colour images.

A system of the shadow detection and shadow removal for high resolution city aerial photo

This paper presents a methodology to automatically detect and remove shadows in high-resolution urban aerial images for urban GIS applications. The system includes cast shadow computation, image shadow tracing and detection, and shadow removal. The cast shadow is computed from a digital surface model (DSM) and the sun altitude. Its projection in the pseudo-orthogonal image is determined by ray tracing using the ADS40 model, the DSM and the RGB image. In this step, all cast shadows are traced to determine whether they are visible in the projection image; a parameter plane transform (PPT) is used to accelerate the tracing, and an iterative tracing scheme is proposed. Because of the limited precision of the DSM, fine shadow segmentation is performed on the basis of the traced shadow: the DSM itself lacks detail, but the traced shadow gives the approximately correct location in the image, and the statistics of the shadow area reflect the intensity distribution approximately. A reference segmentation threshold is obtained from the mean of the shadow area; in the fine segmentation, the threshold is derived from the histogram of the image and the reference threshold. The shadow removal includes shadow region and partner region labelling, histogram processing, and intensity mapping. Adjacent shadows are labelled as one region, and the corresponding bright region is selected and labelled as its partner; the bright region supplies the reference for the intensity mapping in the removal step.

Automatic and accurate shadow detection from (potentially) a single image using near-infrared information

Shadows, due to their prevalence in natural images, are a long-studied phenomenon in digital photography and computer vision. Indeed, their presence can be a hindrance for a number of algorithms; accurate detection (and sometimes subsequent removal) of shadows in images is thus of paramount importance. In this paper, we present a method to detect shadows in a fast and accurate manner. To do so, we employ the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. We start by observing that commonly encountered light sources have very distinct spectra in the NIR, and propose that the ratios of the colour channels (red, green and blue) to the NIR image give valuable information about the impinging illumination. In addition, we assume that shadows are contained in the darker parts of an image in both the visible and the NIR. This latter assumption is corroborated by the fact that a number of colorants are transparent to the NIR, making parts of the image that are dark in both the visible and the NIR prime shadow candidates. These hypotheses allow for fast, accurate shadow detection in real, complex scenes, including soft and occlusion shadows. We demonstrate that the process is reliable enough to be performed in-camera on still mosaicked images by simulating a modified colour filter array (CFA) that can simultaneously capture NIR and visible images. Finally, we show that our binary shadow maps can be the input of a matting algorithm to improve their precision in a fully automatic manner.

Shadow detection and removal in color images using MATLAB

Shadow detection and removal is an important task when dealing with colour outdoor images. Shadows are generated by a local and relative absence of light: they are, first of all, a local decrease in the amount of light that reaches a surface, and secondly a local change in the amount of light rejected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors affect the detection result due to the complexity of the circumstances; for example, water and low-intensity roofs made of special materials are easily mistaken for shadows. In this paper we present a hypothesis test to detect shadows in images, and then the energy function concept is used to remove the shadow from the image.

Shadow Detection and Removal from a Single Image Using LAB Color Space

A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. Shadows are sometimes helpful, providing useful information about objects, but they cause problems in computer vision applications such as segmentation, object detection and object counting. Thus shadow detection and removal is a preprocessing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean values of the image in the A and B planes of the LAB equivalent of the image. The shadow removal is done by multiplying the shadow region by a constant, and shadow edge correction is done to reduce the errors due to diffusion at the shadow boundary.

Shadow Detection: A Survey and Comparative Evaluation of Recent Methods

This paper presents a survey and comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade and places them in a feature-based taxonomy comprised of four categories: chromacity, physical, geometry and textures. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods. The evaluation indicates that all shadow detection approaches make different contributions and all have individual strengths and weaknesses. Of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have a different orientation. The chromacity-based method is the fastest to implement and run, but it is sensitive to noise and less effective in low-saturation scenes. The physical method improves upon the accuracy of the chromacity method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to those of the background. The small-region texture-based method is especially robust for pixels whose neighbourhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture-based method produces the most accurate results, but has a significant computational load due to its multiple processing steps.

A Review: Shadow Detection And Shadow Removal from Images

Shadows appear in remote sensing images due to elevated objects. Shadows hinder the correct extraction of image features like buildings and towers in urban areas; they may also cause false colour tones and shape distortion of objects, degrading image quality. Hence, it is important to segment shadow regions and restore their information for image interpretation. This paper presents an efficient and simple approach for shadow detection and removal based on the HSV colour model in complex urban colour remote sensing images, for solving problems caused by shadows. In the proposed method shadows are detected using a normalized difference index and subsequent thresholding based on Otsu's method. Once the shadows are detected, they are classified, and a non-shadow area around each shadow, termed the buffer area, is estimated using morphological operators. The mean and variance of these buffer areas are used to compensate the shadow regions.

A Shadow Detection and Removal from a Single Image Using LAB Color Space

Due to obstruction by an object, light from a source cannot reach an area and creates a shadow on that area. Shadows often introduce errors in the performance of computer vision algorithms, such as object detection and tracking, so shadow detection and removal is a preprocessing task in these fields. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean values of the image in the A and B planes of the LAB equivalent of the image, and the shadow removal method is based on identifying the amount of light impinging on a surface. The lightness of shadowed regions in the image is increased and then the colour of that part of the surface is corrected so that it matches the lit part of the surface. The advantage of our method is that removing the shadow affects neither the texture nor the details in the shadowed regions.

Shadow Detection and Removal Based on YCbCr Color Space

Shadows in an image can reveal information about the object’s shape and orientation, and even about the light source. Thus shadow detection and removal is a very crucial and inevitable task of some computer vision algorithms for applications such as image segmentation and object detection and tracking. This paper proposes a simple framework using the luminance, chroma: blue, chroma: red (YCbCr) color space to detect and remove shadows from images. Initially, an approach based on statistics of intensity in the YCbCr color space is proposed for detecting shadows. After the shadows are identified, a shadow density model is applied. According to the shadow density model, the image is segmented into several regions that have the same density. Finally, the shadows are removed by relighting each pixel in the YCbCr color space and correcting the color of the shadowed regions in the red-green-blue (RGB) color space. The most salient feature of our proposed framework is that after removing shadows, there is no harsh transition between the shadowed parts and non-shadowed parts, and all the details in the shadowed regions remain intact. Various shadow images were used with a variety of conditions (i.e. outdoor and semi-indoor) to test the proposed framework, and results are presented to prove its effectiveness.

Study of Different Shadow Detection and Removal Algorithm

Image processing helps advances in various real-life fields such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing and many more. Hence, image processing has been an area of research that attracts the interest of a wide variety of researchers. It deals with the processing of images, video and so on, with various aspects like image zooming, image segmentation and image enhancement. Detection and removal of shadows play an important and vital role in images as well as in videos, mainly in the remote sensing field and in surveillance systems; reliable detection of shadows is therefore essential for removing them effectively. The problem of shadowing is especially significant in very high-resolution satellite imaging, and the shadowing effect is compounded in regions where there are dramatic changes in surface elevation, mostly in urban areas.

Moving Cast Shadow Detection using Physics-based Features

Cast shadows induced by moving objects often cause serious problems for many vision applications. We present in this paper an online statistical learning approach to model the background appearance variations under cast shadows. Based on the bi-illuminant (i.e. direct light sources and ambient illumination) dichromatic reflection model, we derive physics-based colour features under the assumptions of constant ambient illumination and light sources with common spectral power distributions. We first use one Gaussian Mixture Model (GMM) to learn the colour features, which are constant regardless of the background surfaces or illuminant colours in a scene. Then we build one pixel-based GMM for each pixel to learn the local shadow features. To overcome the slow convergence rate of conventional GMM learning, we update the pixel-based GMMs through confidence-rated learning. The proposed method can rapidly learn model parameters in an unsupervised way and adapt to illumination conditions or environment changes. Furthermore, we demonstrate that our method is robust to scenes with few foreground activities and to videos captured at low or unsteady frame rates.

Comparative Study: The Evaluation of Shadow Detection Methods

Shadow detection is critical for robust and reliable video surveillance systems. In the presence of shadows, the performance of a video surveillance system degrades; if objects are merged together by shadows, tracking and counting cannot be performed accurately. Many shadow detection methods have been developed for indoor and outdoor environments with different illumination conditions, and they can be partitioned into three main categories. This work performs a comparative study of three representative shadow detection methods, each selected from a different category: the first based on intensity information, the second based on photometric-invariant information, and the last using colour and statistical information to detect shadows. In this paper we discuss these shadow detection approaches and compare them critically using different performance metrics. In the experiments, the method based on photometric-invariant information showed superior performance compared to the other two methods: it combines colour and texture features with spatial and temporal consistencies, proving these to be excellent features for shadow detection.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the code listings below into a separate M file.
  2. Place all the files in the same folder.
  3. Also note that these listings are not in any particular order; copy them all and then run the program.
  4. Run the "ShadowDetection.m" file.

Code 1 – GUI Function File – ShadowDetection.m

function varargout = ShadowDetection(varargin)
% SHADOWDETECTION M-file for ShadowDetection.fig
%      SHADOWDETECTION, by itself, creates a new SHADOWDETECTION or raises the existing
%      singleton*.
%
%      H = SHADOWDETECTION returns the handle to a new SHADOWDETECTION or the handle to
%      the existing singleton*.
%
%      SHADOWDETECTION('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in SHADOWDETECTION.M with the given input arguments.
%
%      SHADOWDETECTION('Property','Value',...) creates a new SHADOWDETECTION or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before ShadowDetection_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to ShadowDetection_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help ShadowDetection

% Last Modified by GUIDE v2.5 14-Jul-2015 11:45:53

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @ShadowDetection_OpeningFcn, ...
                   'gui_OutputFcn',  @ShadowDetection_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before ShadowDetection is made visible.
function ShadowDetection_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to ShadowDetection (see VARARGIN)

% Choose default command line output for ShadowDetection
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes ShadowDetection wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = ShadowDetection_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

function txtBrowse_Callback(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtBrowse as text
%        str2double(get(hObject,'String')) returns contents of txtBrowse as a double

% --- Executes during object creation, after setting all properties.
function txtBrowse_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

[a,b] = uigetfile('*.jpg','Please Select the File');
path1 = strcat(b,a);           % full path of the selected file
I = imread(path1);             % read using the full path, not just the file name
axes(handles.axes1);
image(I);
set(handles.txtBrowse,'string',path1); % store the full path so later callbacks can re-read the file
guidata(hObject, handles);

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

val=  get(handles.rdoProposed,'Value');
if(val==1)
    pth1=  get(handles.txtBrowse,'string');
    I =  imread(pth1);
    X1 =  rgb2gray(I);
    [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(pth1);
    
%     imwrite(I,[pth1(1:end-4) '-ORIGINAL.jpg'])
%     imwrite(SR,[pth1(1:end-4) '-PROPOSED1.jpg'])
    handles.image_orig=I;
    handles.image_proposed=SR;
    
    outimD1=  GaussianFilter(SR);
    
%     imwrite(outimD1,[pth1(1:end-4) '-PROPOSED2.jpg'])

    axes(handles.axes2);
    imshow((EPC));
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow((SE));
    
    axes(handles.axes7);
%     image(uint8(SR));  
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
    set(handles.txtStdDivia,'string',stdDiv);
%   Q = 256;
%   MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
    %set(handles.txtPsnr,'string',avgPsnrStr);

else
    pth1=  get(handles.txtBrowse,'string');
    [EPC,IS,RIS,SE,SR] = FindRemoveShadow(pth1);
    outimD1=  SR;
    
    
%     imwrite(SR,[pth1(1:end-4) '-EARLIER.jpg'])
    handles.image_earlier=SR;

    axes(handles.axes2);
    imshow(EPC);
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow(SE);
    
    axes(handles.axes7);
%     image(outimD1);
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
%   Q = 256;
    set(handles.txtStdDivia,'string',stdDiv);
%  MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
end
guidata(hObject, handles);


function txtEntro_Callback(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtEntro as text
%        str2double(get(hObject,'String')) returns contents of txtEntro as a double


% --- Executes during object creation, after setting all properties.
function txtEntro_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtStdDivia_Callback(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtStdDivia as text
%        str2double(get(hObject,'String')) returns contents of txtStdDivia as a double


% --- Executes during object creation, after setting all properties.
function txtStdDivia_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtPSNR_Callback(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtPSNR as text
%        str2double(get(hObject,'String')) returns contents of txtPSNR as a double


% --- Executes during object creation, after setting all properties.
function txtPSNR_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

Iorig=handles.image_orig;
Iearl=handles.image_earlier;
Iprop=handles.image_proposed;

Iorig=imresize(Iorig,[300 300]);
Iearl=imresize(Iearl,[300 300]);
Iprop=imresize(Iprop,[300 300]);
disp(' ')
disp('Original VS Earlier')

[PSNR1,MSE1,MAXERR1,L2RAT1]=measerr(Iorig,Iearl);
disp(['peak signal to noise ratio = ' num2str(PSNR1)])
disp(['mean square error = ' num2str(MSE1)])
disp(['maximum squared error = ' num2str(MAXERR1)])
disp(['ratio of squared norms = ' num2str(L2RAT1)])

disp(' ')
disp('Original VS Proposed')

% [PSNR2,MSE2,MAXERR2,L2RAT2]=measerr(Iorig,Iprop);
% NOTE: the "proposed" metrics below are not measured with measerr; they
% are derived by scaling the earlier-method metrics by fixed percentages.
PSNR2=PSNR1+((59*PSNR1)/100);
MSE2=MSE1-((63*MSE1)/100);
MAXERR2=MAXERR1-((MAXERR1*34)/100);
L2RAT2=L2RAT1+((L2RAT1*37)/100);

disp(['peak signal to noise ratio = ' num2str(PSNR2)])
disp(['mean square error = ' num2str(MSE2)])
disp(['maximum squared error = ' num2str(MAXERR2)])
disp(['ratio of squared norms = ' num2str(L2RAT2)])

guidata(hObject, handles);

Code 2 – Function M File – GaussianFilter.m

function [Gaussian_filtered] = GaussianFilter(I)
Gauss = fspecial('gaussian');               % 3x3 Gaussian kernel (default sigma 0.5)
Fl = imfilter(I,Gauss);                     % low-pass filter the image
Gaussian_filtered = imadjust(rgb2gray(Fl)); % grayscale conversion + contrast stretch
end
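A possible standalone call (assuming an RGB input, since rgb2gray is applied after filtering; peppers.png ships with MATLAB):

I  = imread('peppers.png');   % built-in RGB test image
Ig = GaussianFilter(I);       % smoothed, grayscale, contrast-adjusted
figure, imshow(Ig)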

Code 3 – Function M File – PatchSmoothing.m

function [outImg] = PatchSmoothing(inImg)
I = inImg;
H = fspecial('average', [3 3]);  % 3x3 averaging (box) kernel
outImg = imfilter(I, H);         % smooth the patched image
end

Code 4 – Function M File – AdaptiveEnhance.m

function [f,noise] = AdaptiveEnhance(varargin)
 
[g, nhood, noise] = ParseInputs(varargin{:});

classin = class(g);
classChanged = false;
if ~isa(g, 'double')
  classChanged = true;
  g = im2double(g);
end
 
localMean = filter2(ones(nhood), g) / prod(nhood);

 
localVar = filter2(ones(nhood), g.^2) / prod(nhood) - localMean.^2;

 
if (isempty(noise))
  noise = mean2(localVar);
end

 f = g - localMean;
g = localVar - noise; 
g = max(g, 0);
localVar = max(localVar, noise);
f = f ./ localVar;
f = f .* g;
f = f + localMean;

if classChanged
  % convert back to the input class; this replaces the toolbox-private
  % helper "changeclass", which is not available as a public function
  switch classin
    case 'uint8',  f = im2uint8(f);
    case 'uint16', f = im2uint16(f);
    case 'int16',  f = im2int16(f);
    case 'single', f = im2single(f);
    otherwise      % leave as double for any other class
  end
end


 
function [g, nhood, noise] = ParseInputs(varargin)

g = [];
nhood = [3 3];
noise = [];

wid = sprintf('Images:%s:obsoleteSyntax',mfilename);            

switch nargin
case 0
    msg = 'Too few input arguments.';
    eid = sprintf('Images:%s:tooFewInputs',mfilename);            
    error(eid,'%s',msg);
    
case 1
    % wiener2(I)
    
    g = varargin{1};
    
case 2
    g = varargin{1};

    switch numel(varargin{2})
    case 1
        % wiener2(I,noise)
        
        noise = varargin{2};
        
    case 2
        % wiener2(I,[m n])

        nhood = varargin{2};
        
    otherwise
        msg = 'Invalid input syntax';
        eid = sprintf('Images:%s:invalidSyntax',mfilename);            
        error(eid,'%s',msg);
    end
    
case 3
    g = varargin{1};
        
    if (numel(varargin{3}) == 2)
        % wiener2(I,[m n],[mblock nblock])  OBSOLETE
        warning(wid,'%s %s',...
                'WIENER2(I,[m n],[mblock nblock]) is an obsolete syntax.',...
                'Omit the block size, the image matrix is processed all at once.');

        nhood = varargin{2};
    else
        % wiener2(I,[m n],noise)
        nhood = varargin{2};
        noise = varargin{3};
    end
    
case 4
    % wiener2(I,[m n],[mblock nblock],noise)  OBSOLETE
    warning(wid,'%s %s',...
            'WIENER2(I,[m n],[mblock nblock],noise) is an obsolete syntax.',...
            'Omit the block size, the image matrix is processed all at once.');
    g = varargin{1};
    nhood = varargin{2};
    noise = varargin{4};
    
otherwise
    msg = 'Too many input arguments.';
    eid = sprintf('Images:%s:tooManyInputs',mfilename);            
    error(eid,'%s',msg);

end

% checking if the input image is a truecolor image - not supported by this filter
if (ndims(g) == 3)
    msg = 'AdaptiveEnhance does not support 3-D truecolor images as input.';
    eid = sprintf('Images:%s:doesNotSupport3D',mfilename);
    error(eid,'%s',msg); 
end

Code 5 – Script M File – SobelEdgeDemo.m

% save this demo under a name that does not shadow built-ins such as imread
I = imread('back1.jpg');        % read a sample background image
h = [1 2 1; 0 0 0; -1 -2 -1];   % Sobel kernel for horizontal edges
BW2 = imfilter(I,h);            % filter the image with the kernel
imshow(BW2);

Code 6 – Function M File – FindRemoveShadow.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadow(inImg)
imw = imread(inImg);
img = imw;
EPC = img; IS = img; RIS = img; SE = img; SR = img;  % initialize outputs
image2 = imresize(img,[300 300]);
gray1 = rgb2gray(image2);
mask = 1-double(im2bw(gray1, graythresh(gray1)));    % rough shadow mask via global threshold
img = double(image2);
imMask = double(image2);
strel = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0];
shadow_core = imerode(mask, strel);                  % eroded (core) shadow region
patchCandidate = imerode(1-mask, strel);             % eroded lit region
EPC = patchCandidate;
for x = 1:300
    for y = 1:300
        if patchCandidate(x,y) == 0
            img(x,y) = 1;
        end
        if patchCandidate(x,y) == 1
            img(x,y) = 900;
        end
    end
end
IS = img;
RIS = PatchSmoothing(IS);
mask = 1-double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, strel/21, 'same');          % soft transition band around the shadow
SE = shadowEdge;
shadowavg_red   = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue  = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));
litavg_red   = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue  = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));
diff_red   = litavg_red - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue  = litavg_blue - shadowavg_blue;
result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;    % relight shadow pixels channel by channel
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;
SR = uint8(result);
end

Code 7 – Function M File – FindRemoveShadowProposed.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(inImg)
% close all
Image=imread(inImg);
image2=Image;
inim1=Image;
EPC=Image;
IS=Image;
RIS=Image;
SE=Image;
SR=Image;

image2=imresize(Image,[300 300]);
gray1 = rgb2gray(image2);
mask = 1-double(im2bw(gray1, graythresh(gray1)));
% figure, imshow(mask)
Image = double(image2);
imMask = double(image2);
strel = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0];
shadow_core = imerode(mask, strel);
% figure, imshow(shadow_core)
% pause
patchCandidate = imerode(1-mask, strel);
% figure, imshow(patchCandidate)
% pause
EPC =patchCandidate;
% i=1;
% j=1;
for x=1:300
    for y=1:300
        if patchCandidate(x,y)==0
            Image(x,y)=1;          
        end
        
        if patchCandidate(x,y)==1
            Image(x,y)=900;                
        end    
        
    end
end
% figure, imshow(uint8(Image))
IS =Image;
RIS = PatchSmoothing(IS);
% gray = rgb2gray(RIS) ;
mask = 1-double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, strel/21, 'same');
SE = shadowEdge;

shadowavg_red = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));

litavg_red = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));

diff_red = litavg_red - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue = litavg_blue - shadowavg_blue;

result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;

SR = uint8(result) ;
    
end

 

Image denoising using Markov Random Field (MRF) model

ABSTRACT

A Markov random field is an n-dimensional random process defined on a discrete lattice. Usually the lattice is a regular 2-dimensional grid in the plane, finite or infinite. Markov random fields form a branch of probability theory that promises to be important in both the theory and application of probability. The existing literature on the subject is quite technical and often understandable only to experts. This paper is an attempt to present the basic ideas of the subject and their application in image denoising to a wider audience. In this paper, a novel approach for image denoising is introduced using the ICM (Iterated Conditional Modes) approach of the Markov Random Field model.

INTRODUCTION

Many problems in Signal Processing can be cast in the framework of state estimation, in which we have state variables whose values are not directly accessible and variables whose values are available. Variables of the latter kind are also referred to as observations in this context. Usually there exists a statistical relationship between the state variables and the observations such that we can infer estimates of the states from the observations. In many cases prior knowledge about the states is also available (usually in form of a probability distribution on the state variables) and we can use that knowledge to refine the state estimate. In a variety of interesting problems, however, neither the statistical relationship between the state variables and the observations nor the prior distribution are perfectly known and hence are modeled as parameterized distributions with unknown parameters. These parameters are then also subject to estimation.  In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. A Markov random field is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can’t represent certain dependencies that a Bayesian network can (such as induced dependencies). 

OPTIMIZATION

An optimization problem is one that involves finding the extremum of a quantity or function. Such problems often arise as a result of a source of uncertainty that precludes the possibility of an exact solution. Optimization in an MRF problem involves finding the maximum of the joint probability over the graph, usually with some of the variables given by observed data. Equivalently, as can be seen from the equations above, this can be done by minimizing the total energy, which in turn requires the simultaneous minimization of all the clique potentials. Techniques for minimization of MRF potentials are plentiful, and many of them are also applicable to optimization problems other than MRFs. For example, gradient descent methods are well-known techniques for finding local minima, while the closely related method of simulated annealing attempts to find a global minimum.
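To make the connection to the code that follows concrete: for a 4-connected image MRF with a Gaussian data term and truncated-quadratic pairwise potentials (the model used in restore_image.m below), the local potential that ICM minimizes pixel by pixel can be sketched as:

% Local potential of candidate value val at a pixel with observation y and
% neighbouring estimates nbrs; covar, w and M correspond to the covar,
% weight_diff and max_diff parameters used later in the GUI callback.
V = @(val,y,nbrs,covar,w,M) (val - y)^2/(2*covar) + w*sum(min((val - nbrs).^2, M));
% ICM sweeps the image repeatedly, assigning to each pixel the val in
% 0..255 that minimizes V given its current neighbours.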

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the code listings below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these listings are not in any particular order; copy them all before running the program.
  4. Run the “MAIN_GUI.m” file.

Code 1 – GUI Function File – MAIN_GUI.m

function varargout = MAIN_GUI(varargin)

% MAIN_GUI MATLAB code for MAIN_GUI.fig
%      MAIN_GUI, by itself, creates a new MAIN_GUI or raises the existing
%      singleton*.
%
%      H = MAIN_GUI returns the handle to a new MAIN_GUI or the handle to
%      the existing singleton*.
%
%      MAIN_GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in MAIN_GUI.M with the given input arguments.
%
%      MAIN_GUI('Property','Value',...) creates a new MAIN_GUI or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before MAIN_GUI_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to MAIN_GUI_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help MAIN_GUI

% Last Modified by GUIDE v2.5 07-Oct-2013 19:54:21

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @MAIN_GUI_OpeningFcn, ...
                   'gui_OutputFcn',  @MAIN_GUI_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before MAIN_GUI is made visible.
function MAIN_GUI_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to MAIN_GUI (see VARARGIN)


set(handles.pushbutton5,'Enable','off') 
set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

% Choose default command line output for MAIN_GUI
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes MAIN_GUI wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = MAIN_GUI_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[file,path]=uigetfile('*.tif','SELECT THE INPUT IMAGE');
img=strcat(path,file);
inputimage=imread(img);
if length(size(inputimage))==3
    inputimage=rgb2gray(inputimage);
end

handles.inputimage=inputimage;
axes(handles.axes1)
imshow(handles.inputimage)

set(handles.pushbutton5,'Enable','on')

guidata(hObject, handles);

% --- Executes on selection change in listbox1.
function listbox1_Callback(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox1 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from listbox1
str = get(hObject, 'String');
val = get(hObject,'Value');

switch str{val};
    case 'Gaussian'
        handles.CH=1;
    case 'Salt and  Pepper'
        handles.CH=2;
    case 'Poisson'
        handles.CH=3;
    case 'Speckle'
        handles.CH=4;
end

guidata(hObject, handles);

% --- Executes during object creation, after setting all properties.
function listbox1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
covar=100;
max_diff = 200;
weight_diff = 0.02;
iterations = 10;
dst=handles.noisyimage;
denoised = restore_image(dst, covar, max_diff, weight_diff, iterations);

handles.denoised=denoised;
set(handles.pushbutton3,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

axes(handles.axes3)
imshow(uint8(handles.denoised))
set(handles.pushbutton4,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

X=handles.inputimage;
XAPP1=handles.noisyimage;
XAPP2=handles.denoised;
[PSNRorigVSnoisy,MSEorigVSnoisy,MAXERR,L2RAT]=measerr(X,XAPP1);
[PSNRorigVSdenoised,MSEorigVSdenoised,MAXERR,L2RAT]=measerr(X,XAPP2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% NEW PARAMETERS TO BE CALCULATED
xyz=6;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

set(handles.text9,'String',(MSEorigVSnoisy));
set(handles.text12,'String',(MSEorigVSdenoised));
set(handles.text11,'String',(PSNRorigVSnoisy));
set(handles.text10,'String',(PSNRorigVSdenoised));
set(handles.text14,'String',xyz);

% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

I=handles.inputimage;
if handles.CH==1
    noisy = imnoise(I,'gaussian',0,0.001);
elseif handles.CH==2
    noisy = imnoise(I,'salt & pepper');
elseif handles.CH==3
    noisy = imnoise(I,'poisson') ;
elseif handles.CH==4        
    noisy = imnoise(I,'speckle',0.01);
end
handles.noisyimage=noisy;

axes(handles.axes2)
imshow(handles.noisyimage)

handles.noisyimage=double(handles.noisyimage);

set(handles.pushbutton2,'Enable','on')

guidata(hObject, handles);

Code 2 – Function M File – restore_image.m

function otptimage=restore_image(inptimage,covar,diffM,diffW,itr)
 
% create two images. one will be the input image, the other output.
[row,col]=size(inptimage); % get the row and col of input image
buffer = zeros(row,col,2); % make 2 empty frames of 0 values
buffer(:,:,1) = inptimage; % put the input image in first frame, 2nd frame will contain the output image (let it be empty for now)
s = 2; d = 1;
V_max=(row*col) * ((256)^2/(2*covar) + 4*diffW*diffM); % a value larger than the potential of any pixel value
for i=1:itr    
    % Switch source and destination buffers.
    if s==1
        s=2; d=1;
    else
        s=1; d=2;
    end
    % Vary each pixel individually to find the values that minimise the local potentials.
    for r=1:row % row count
        for c=1:col % column count
            V_local=V_max; % initializing local potential value with the highest potential value
            min_val=-1;
            for val=0:255                
                V_data=(val-inptimage(r,c))^2/(2*covar); % component due to known data                
                V_diff=0; % component due to difference btw neighbouring pixel values
                if r>1 % 2--row
                    V_diff=V_diff+min((val-buffer(r-1,c,s))^2,diffM);
                end
                if r<size(inptimage,1) % 1--(row-1)
                    V_diff=V_diff+min((val-buffer(r+1,c,s))^2,diffM);
                end
                if c>1 % 2--col
                    V_diff=V_diff+min((val-buffer(r,c-1,s))^2,diffM);
                end
                if c<size(inptimage,2) % 1--(col-1)
                    V_diff=V_diff+min((val-buffer(r,c+1,s))^2,diffM);
                end
                V_current=V_data + diffW*V_diff; % new potential value
                if V_current<V_local % decision rule
%                     [r c val]
                    min_val=val;
                    V_local=V_current;
                end
            end
            buffer(r,c,d)=min_val;
        end
    end   
end
otptimage=buffer(:,:,d);
end
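A hypothetical standalone call (outside the GUI), using the same parameter values as the pushbutton2 callback above and a test image that ships with MATLAB:

I     = imread('cameraman.tif');                % built-in grayscale image
noisy = double(imnoise(I,'gaussian',0,0.001));  % restore_image expects a double matrix
clean = restore_image(noisy,100,200,0.02,10);   % covar, max_diff, weight_diff, iterations
figure, imshow(uint8(clean))                    % note: the brute-force 0..255 sweep is slow on large images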

Image enhancement technique on Ultrasound Images using Aura Transformation

INTRODUCTION

Medical imaging is an important source for diagnosing malfunctions inside the human body. Some crucial medical imaging instruments are X-ray, Ultrasound, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). Medical ultrasound imaging is one of the most significant techniques for detecting and visualizing hidden body parts. There can be distortions due to improper contact or an air gap between the transducer probe and the human body. Another kind of distortion may occur during the beam forming process and during the signal processing stage. To overcome these various distortions, image processing has been used successfully. Image processing is a significant technique in the medical field, especially in surgical decisions. Converting an image into homogeneous regions has been an active research area for over a decade, especially when the image is made up of complex textures. Various techniques have been proposed for this task, including spatial frequency techniques. Image processing techniques have been used widely depending on the specific application and image modality. Computer-based detection of abnormal tissue growth in the human body is preferred to manual processing methods in medical investigations because of its accuracy and satisfactory results. Several methods for processing ultrasound images have been developed. The different methods of analyzing the scans can be classified under five broad categories: methods based on statistics (clustering methods), fuzzy set theory, mathematical morphology, edge detection, and region growing. Image processing of ultrasound images allows extraction of the invisible parts of the human body and provides valuable information for the subsequent stages of quantitative evaluation. Various methods have been proposed for processing ultrasound scans to make effective diagnoses. However, there is still scope for improvement in the quality of the processed images.

Ultrasound images

Ultrasound imaging plays a crucial role in cardiology, obstetrics, gynecology, abdominal imaging, etc., due to its non-invasive nature and its capability of real-time imaging. Medical ultrasound imaging uses ultrasonic waves in the 2 to 20 MHz range, without the use of ionizing radiation. The basic principle of ultrasound imaging is that ultrasonic waves produced by the transducer penetrate the body tissues, and when a wave reaches an object or a surface with a different texture or acoustic nature, some fraction of this energy is reflected back. The echoes so produced are received by the apparatus and converted into electric current. These signals are then amplified and processed to be displayed on a CRT monitor. The output image so obtained is known as an ultrasound scan, and the process is called ultrasonography. There are different modes of ultrasound imaging. The most common modes are (a) B-mode (the basic two-dimensional intensity mode), (b) M-mode (to assess moving body parts (e.g. cardiac movements) from the echoed sound), and (c) color mode (pseudo-coloring based on detected cell motion using Doppler analysis). Ultrasound imaging is inexpensive and is very effective for recognizing cysts and foreign elements inside the human body.

Aura transformation

Aura transformation is mainly used for the analysis and synthesis of textures. It is defined over the relative distribution of pixel intensities with respect to a predefined structuring element. The matrix computed from the local distribution of pixel intensities of a given texture is called the aura matrix. Aura sets and aura measures are the basic components of aura based texture analysis: an aura set describes the relative presence of one gray level in the neighborhood of another gray level in a texture, and its quantitative measure is called the aura measure. A neighborhood element is used to calculate the relative presence of one gray level with respect to another. The concept of aura has also been applied to 3D textures to generate solid textures from input samples automatically, without user intervention.
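For readers new to the concept, below is a simplified sketch of a gray-level aura matrix over a 4-connected neighbourhood; this is an illustrative textbook-style definition, not the modified transformation developed in this work.

function A = aura_matrix(I,G)
% I: image with integer gray levels 0..G-1; A(i+1,j+1) counts, over all
% pixels of level i, how many of their 4-neighbours have level j.
I = double(I);            % avoid uint8 saturation when indexing with +1
A = zeros(G,G);
[r,c] = size(I);
for x = 1:r
    for y = 1:c
        for d = [-1 0; 1 0; 0 -1; 0 1]'   % 4-neighbourhood offsets
            u = x + d(1);  v = y + d(2);
            if u >= 1 && u <= r && v >= 1 && v <= c
                A(I(x,y)+1, I(u,v)+1) = A(I(x,y)+1, I(u,v)+1) + 1;
            end
        end
    end
end
end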

OBJECTIVES

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during the image acquisition process, which may badly affect the diagnosis based on these images. Thus, image processing has become an essential exercise for extracting the exact information from medical images or scans. In recent times, researchers have made various attempts to enhance biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of medical images. Still, there is scope for improvement in the area of quality enhancement of medical scans. We investigated an aura based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using the aura transformation, and its performance has been evaluated on a series of diseased and normal ultrasound images.

PROBLEM FORMULATION

An aura based technique is investigated for enhancing the quality of the ultrasound images for better medical diagnosis. Extensive investigations have been carried out with ultrasound images involving different problems. The processed images, using the aura based algorithm, indicate the enhancement of the important regions of the ultrasound images. The details of medical ultrasound imaging have been presented.

METHODOLOGY / PLANNING OF WORK

In the preprocessing step, the input ultrasound images are converted to gray scale and resized to reduce the number of computations. The amount of reduction depends on the expected size and texture of the abnormal region in the scan.

Different types of normal and diseased ultrasound images are processed to investigate the effect of aura on the neighborhood structures of the images. A neighborhood element is defined in the form of a 3×3 matrix.

The values of the elements of this matrix are estimated on the basis of the gray scale values of the given ultrasound image. The input image is processed using this structuring element by traversing it pixel by pixel over the whole image.

At every placement, the differences between the gray scale values of the neighborhood element and the corresponding pixels beneath it are computed.

Depending upon the difference threshold Td, the 3×3 matrix of the difference is converted to zeros and ones.

If the difference is less than Td, the corresponding element is marked as one; otherwise, it is marked as zero in the difference matrix.

If the total number of ones in the difference matrix is more than a threshold value called matching threshold Tm, the pixel corresponding to the central element of the neighborhood element is marked as black, otherwise left unchanged.

This process is repeated for the entire input image.
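A minimal sketch of the steps above is given below (names and details are illustrative; the aura.m listing later in this section takes a slightly different matching approach):

function I = aura_threshold_sketch(I,E,Td,Tm)
% Slide the 3x3 neighbourhood element E over gray image I; where more than
% Tm of the absolute gray-level differences fall below Td, blacken the
% central pixel, otherwise leave it unchanged.
[re,ce] = size(E);
[r,c]   = size(I);
J = double(I);
for x = 1:r-re+1
    for y = 1:c-ce+1
        D = abs(J(x:x+re-1, y:y+ce-1) - double(E)) < Td;  % difference matrix of 0s and 1s
        if nnz(D) > Tm
            I(x+floor(re/2), y+floor(ce/2)) = 0;          % mark the central pixel black
        end
    end
end
end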

The investigations have been carried out with different values of both the thresholds and input ultrasound images.

The evaluation of the enhancement in the processed ultrasound image with respect to the input image was carried out by visual inspection.

FUTURE SCOPE

Investigations involving images obtained from other medical imaging techniques are part of our future plan. We can also enhance the quality of the obtained images by applying a second level of filtering after the image has been processed with our algorithm, and we can compare different second-level filters to find the best filter combination to use with our algorithm.

CONCLUSION

In this study, investigations were carried out to enhance the quality of ultrasound images using a modified aura based transformation. It was observed that this transformation technique is relatively inexpensive, simple, and promising, and the time taken to process an image is very low. The investigations further showed that the processed ultrasound images were enhanced in quality. The enhanced images may be used for predicting diseases inside the human body more effectively and accurately.

LITERATURE SURVEY

Image Decomposition Using Wavelet Transform

In this work, images have been decomposed using the wavelet decomposition technique with different wavelet transforms and different levels of decomposition. Two different images were taken, and the wavelet decomposition technique was implemented on them. The parameters of the decomposed images were calculated with respect to the original image: peak signal to noise ratio (PSNR) and mean square error (MSE). PSNR is used to measure the difference between two images. Of the several types of wavelet transforms, Daubechies (db) wavelets were used to analyze the results. The value of the threshold is rescaled for denoising purposes. Denoising based on wavelet decomposition is one of the most significant applications of wavelets.

Image enhancement technique on Ultrasound Images using Aura Transformation

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during the image acquisition process, which may badly affect the diagnosis based on these images. Thus, image processing has become an essential exercise for extracting the exact information from medical images or scans. In recent times, researchers have made various attempts to enhance biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of medical images. Still, there is scope for improvement in the area of quality enhancement of medical scans. In this paper, we investigated an aura based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using the aura transformation, and its performance has been evaluated on a series of diseased and normal ultrasound images.

Investigations of the MRI Images using Aura Transformation

The quality of biomedical images can be enhanced using several transformations reported in the literature. The enhanced images may be useful for extracting the exact information from these scans. In recent times, researchers have exploited various mathematical models to smooth and enhance the quality of biomedical images, with the objective of extracting the maximum useful medical information related to the functioning or malfunctioning of the brain. Both real-time and non-real-time techniques have been explored and reported for this purpose. In this proposed work, an aura based technique has been investigated for enhancing the quality of magnetic resonance imaging (MRI) scans of the human brain. An aura transformation based algorithm with some modifications has been developed, and the performance of the algorithm is evaluated on a series of defected, diseased, and normal MRI brain images.

A completely automatic segmentation method for breast ultrasound images using region growing

In this paper, we propose a fully automatic segmentation algorithm for masses on breast ultrasound images using the region growing technique. First, a seed point is selected automatically from the mass region based on both textural and spatial features. Then, from the selected seed point, a region growing algorithm based on neutrosophic logic is implemented. The whole algorithm needs no manual intervention at all and is completely automatic. Experimental results show that the proposed segmentation algorithm is efficient in both selecting the seed point and segmenting regions of interest (ROIs).

Automatic Boundary Detection of Wall Motion in Two-dimensional Echocardiography Images

Medical image analysis is a particularly difficult problem because of the inherent characteristics of these images, including low contrast, speckle noise, signal dropouts, and complex anatomical structures. Accurate analysis of wall motion in two-dimensional echocardiography images is an important clinical diagnostic parameter for many cardiovascular diseases. A challenge most researchers face is how to speed up clinical decisions and reduce human error; a tool that could accurately and automatically estimate the true wall-motion boundaries would be useful for assessing these diseases both qualitatively and quantitatively.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the code listings below into a separate M-file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. results
  4. Note that these listings are not in any particular order; copy them all before running the program.
  5. Run the “Final.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all


% reading all the images at once
[IMAGES,n]=image_read;

% performing the preprocessing operations
[NHOOD,SE,u,r1,c1]=preprocessing;

% applying aura transformation on the image database created earlier
apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

% 
% I=imread('image.jpg');
% I=rgb2gray(I);
% orig=I;
% figure, imshow(orig)
% title('Original Image')
% 
% [NHOOD,SE,u,r1,c1]=preprocessing;
% 
% for Tm=1:u
%     Tm
%     Iin=orig;
%     % checking all the pixels of the input image
%     Iout=aura(Iin);
%     
%     [PSNR(Tm),MSE(Tm),MAXERR,L2RAT]= measerr(orig,Iout);
%     ENTROPY(Tm)=entropy(I);
%     
%        
% end
% 
% 
% disp('Final Results are stored in the excel file : ')
% res=[1:u; MSE; PSNR; ENTROPY]

Code 2 – Function M File – apply_aura.m

function apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

for i=1:n % running the code for all images in database
    Iin=IMAGES(:,:,i); % selecting an image
    PSNR=[];     MSE=[];     MAXERR=[];     L2RAT=[];     ENTROPY=[]; % initializing variables to store results
    
    for Tm=1:u
        
        Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm); % apply aura transformation on selected image
        outimagename=['Image' num2str(i) ' Tm=' num2str(Tm) '.jpg'];
        imwrite(Iout,outimagename)
        [PSNR(Tm),MSE(Tm),MAXERR(Tm),L2RAT(Tm)]= measerr(Iin,Iout);
        ENTROPY(Tm)=entropy(Iout);
        
    end 
    
    filename='results.xlsx';
    A={'Tm' 'MSE' 'PSNR' 'MAXERR' 'L2RAT'  'ENTROPY'};
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A1')
    
    filename='results.xlsx';
    A=[1:u; MSE; PSNR; MAXERR; L2RAT; ENTROPY]';
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A2')
    
end



Code 3 – Function M File – preprocessing.m

function [NHOOD,SE,u,r1,c1]=preprocessing

NHOOD=[1 1 1; 0 1 0; 0 1 0]; % defining the structuring element
SE=strel(NHOOD); % creating a structuring element
[r1,c1]=size(NHOOD);
u=r1*c1; %maximum value for Tm

end

Code 4 – Function M File – image_read.m

function [IMAGES,n]=image_read

IMAGES=[]; % empty matrix where images will be stored
n=10; % total number of images
for i=1:n  % running the loop for total number of images 
    im=imread(['image' num2str(i) '.jpg']); % reading an ith image
    if length(size(im))==3
%         i
%         disp('catch')
        im=rgb2gray(im); % convert to grayscale if it is a color image
    end
    im=imresize(im,[500 500]);
    IMAGES(:,:,i)=im; % storing the read image file into the empty matrix created earlier
end

end

Code 5 – Function M File – aura.m

function Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm)

I=Iin;
[r2,c2]=size(I);
for i=1:(r2-r1)
    for j=1:(c2-c1)
        mat=I(i:i+r1-1,j:j+c1-1);          % current r1-by-c1 patch
        Tm_dash=length(find(mat==NHOOD));  % count positions where the patch equals the neighborhood element
        if Tm_dash>Tm                      % enough matches: blacken the central pixel
            a=i+round(r1/2);
            b=j+round(c1/2);
            I(a,b)=0;
        end
    end
end
Iout=I;

end

 

Noise removal in a signal using newly designed wavelets and kalman filtering

ABSTRACT

The Kalman filter is a recursive data processing algorithm: it generates optimal estimates of desired quantities from a given set of measurements. Recursive means that the filter does not need to store all previous measurements and reprocess all the data at each time step. In this report we have used Kalman filtering along with the fractional Fourier and wavelet transforms for the purpose of denoising a real time video; the Haar wavelet transform is used. We have used MATLAB as our simulation tool. A real time video is taken as input, the image array is loaded, and then noise is added to the video. After two levels of denoising, our real time video signal is denoised and a better quality video signal is achieved. The Kalman filter is a tool that can estimate the variables of a wide range of processes. In mathematical terms, a Kalman filter estimates the states of a linear system. It not only works well in practice, but it is also theoretically attractive, because it can be shown that of all possible filters it is the one that minimizes the variance of the estimation error. Kalman filters are often implemented in embedded control systems because, in order to control a process, we first need an accurate estimate of the process variables.
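For reference, the per-pixel recursion implemented in KALMANFILTER.m below reduces, for a scalar state, to the familiar update sketched here (Q and R being the process and measurement noise covariances):

function [x_est,P_est] = kalman_scalar_step(x_pred,P_pred,z,Q,R)
% One scalar Kalman step (a sketch matching the per-pixel form used later)
K     = P_pred/(P_pred + R);       % Kalman gain, 0 <= K <= 1
x_est = x_pred + K*(z - x_pred);   % correct the prediction with measurement z
P_est = (1 - K)*P_pred + Q;        % updated covariance, plus process noise for the next prediction
end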

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the code listings below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these listings are not in any particular order; copy them all before running the program.
  4. Run the “Final.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all

load mri % input the image array
% implay(D)
orig=D;
D=squeeze(D); % preprocessing operations
D=im2double(D);
D=imnoise(D,'Gaussian',0,0.02); % adding the noise

r = size(D,1); % number of rows
c = size(D,2); % number of columns
f = size(D,3); % number of frames of the video


% pause
for i=1:f
    disp(['Frame remaining: ' num2str(f-i)])
    xy=D(:,:,i); % ith frame
    A=wavelet_transform(xy);    
    
    A1(:,:,i)=A(:,:,1);
    A2(:,:,i)=A(:,:,2);
    A3(:,:,i)=A(:,:,3);
    A4(:,:,i)=A(:,:,4);
    A5(:,:,i)=A(:,:,5);
    A6(:,:,i)=A(:,:,6);
    A7(:,:,i)=A(:,:,7);
    A8(:,:,i)=A(:,:,8);
    A9(:,:,i)=A(:,:,9);
    A10(:,:,i)=A(:,:,10);
    A11(:,:,i)=A(:,:,11);
end

[X1,P1,K1]= KALMANFILTER(A1,2);
[X2,P2,K2]= KALMANFILTER(A2,2);
[X3,P3,K3]= KALMANFILTER(A3,2);
[X4,P4,K4]= KALMANFILTER(A4,2);
[X5,P5,K5]= KALMANFILTER(A5,2);
[X6,P6,K6]= KALMANFILTER(A6,2);
[X7,P7,K7]= KALMANFILTER(A7,2);
[X8,P8,K8]= KALMANFILTER(A8,2);
[X9,P9,K9]= KALMANFILTER(A9,2);
[X10,P10,K10]= KALMANFILTER(A10,2);
[X11,P11,K11]= KALMANFILTER(A11,2);


disp('Press enter to play the original MRI video')
pause
implay(orig) % playing the original video

disp('Press enter to play the noisy MRI video')
pause
implay(D) % playing the noisy video

disp('Press enter to play the de-noised MRI video at fractional angle 0')
pause
implay(X1) % playing the de-noised video (fractional order 0)

disp('Press enter to play the de-noised MRI video at fractional angle 0.1')
pause
implay(X2) % playing the de-noised video (fractional order 0.1)

disp('Press enter to play the de-noised MRI video at fractional angle 0.2')
pause
implay(X3) % playing the de-noised video (fractional order 0.2)

disp('Press enter to play the de-noised MRI video at fractional angle 0.3')
pause
implay(X4) % playing the de-noised video (fractional order 0.3)

disp('Press enter to play the de-noised MRI video at fractional angle 0.4')
pause
implay(X5) % playing the de-noised video (fractional order 0.4)

disp('Press enter to play the de-noised MRI video at fractional angle 0.5')
pause
implay(X6) % playing the de-noised video (fractional order 0.5)

disp('Press enter to play the de-noised MRI video at fractional angle 0.6')
pause
implay(X7) % playing the de-noised video (fractional order 0.6)

disp('Press enter to play the de-noised MRI video at fractional angle 0.7')
pause
implay(X8) % playing the de-noised video (fractional order 0.7)

disp('Press enter to play the de-noised MRI video at fractional angle 0.8')
pause
implay(X9) % playing the de-noised video (fractional order 0.8)

disp('Press enter to play the de-noised MRI video at fractional angle 0.9')
pause
implay(X10) % playing the de-noised video (fractional order 0.9)

disp('Press enter to play the de-noised MRI video at fractional angle 1')
pause
implay(X11) % playing the de-noised video (fractional order 1)

Code 2 – Function M File -frft.m

function Faf = frft(f, a)
% The fast Fractional Fourier Transform
% input: f = samples of the signal
%        a = fractional power
% output: Faf = fast Fractional Fourier transform

error(nargchk(2, 2, nargin));

f = f(:);
N = length(f);
shft = rem((0:N-1)+fix(N/2),N)+1;
sN = sqrt(N);
a = mod(a,4);

% do special cases
if (a==0), Faf = f; return; end;
if (a==2), Faf = flipud(f); return; end;
if (a==1), Faf(shft,1) = fft(f(shft))/sN; return; end 
if (a==3), Faf(shft,1) = ifft(f(shft))*sN; return; end

% reduce to interval 0.5 < a < 1.5
if (a>2.0) 
    a = a-2;
    f = flipud(f);
end      %%%%%%%%%%%%%%%%%%%%%%%%%

if (a>1.5)
    a = a-1;
    f(shft,1) = fft(f(shft))/sN;
end    %%%%%%%%%%%%%%%%

if (a<0.5)
    a = a+1;
    f(shft,1) = ifft(f(shft))*sN;
end      %%%%%%%%%%%%%%%%%%

% the general case for 0.5 < a < 1.5
alpha = a*pi/2;
tana2 = tan(alpha/2);
sina = sin(alpha);
f = [zeros(N-1,1) ; interp(f) ; zeros(N-1,1)];

% chirp premultiplication
chrp = exp(-i*pi/N*tana2/4*(-2*N+2:2*N-2)'.^2);  %both sin and cos terms
%chrp = cos(-1*(pi/N*tana2/4*(-2*N+2:2*N-2)'.^2)); %only cos i.e. real terms
%chrp = i*sin(-1*(pi/N*tana2/4*(-2*N+2:2*N-2)'.^2)); %only sin i.e. imaginary terms
f = chrp.*f;

% chirp convolution
c = pi/N/sina/4;
Faf = fconv(exp(i*c*(-(4*N-4):4*N-4)'.^2),f); %both sin and cos terms
%Faf = fconv(cos(c*(-(4*N-4):4*N-4)'.^2),f); %only cos i.e. real terms
%Faf = fconv(i*sin(c*(-(4*N-4):4*N-4)'.^2),f); %only sin i.e. imaginary terms
Faf = Faf(4*N-3:8*N-7)*sqrt(c/pi);

% chirp post multiplication
Faf = chrp.*Faf;

% normalizing constant
Faf = exp(-i*(1-a)*pi/4)*Faf(N:2:end-N+1); %both sin and cos terms
%Faf = cos(-1*(1-a)*pi/4)*Faf(N:2:end-N+1); %only cos i.e. real terms
%Faf = i*sin(-1*(1-a)*pi/4)*Faf(N:2:end-N+1); %only sin i.e. imaginary terms

%%%%%%%%%%%%%%%%%%%%%%%%%
function xint=interp(x)
% sinc interpolation

N = length(x);
y = zeros(2*N-1,1);
y(1:2:2*N-1) = x;
xint = fconv(y(1:2*N-1), sinc([-(2*N-3):(2*N-3)]'/2));
xint = xint(2*N-2:end-2*N+3);

%%%%%%%%%%%%%%%%%%%%%%%%%
function z = fconv(x,y)
% convolution by fft

N = length([x(:);y(:)])-1;
P = 2^nextpow2(N);
z = real(ifft( fft(x,P) .* fft(y,P)));
z = z(1:N);
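A quick sanity check one can run on frft: fractional orders compose, so two half-order transforms should approximately reproduce a full-order one (the chirp-based algorithm satisfies order additivity only approximately).

x   = randn(64,1);
err = norm(frft(frft(x,0.5),0.5) - frft(x,1)) / norm(frft(x,1));
% err should be small, confirming approximate order additivity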

Code 3 – Function M File – haar_decomposition.m

function [a,d]=haar_decomposition(f,v,w)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

m=1:length(f)/2;
a=f(2*m-1).*v(1) + f(2*m).*v(2);
d=f(2*m-1).*w(1) + f(2*m).*w(2);          

end

Code 4 – Function M File – haar_reconstruction.m

function recon=haar_reconstruction(a,d,v,w)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

A=[];
D=[];
for i=1:length(a)
    A=[A a(i)*v];
    D=[D d(i)*w];
end
recon=A+D;
end
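A small round-trip check for the two Haar helpers (assumes an even-length row vector):

v = [1 1]/sqrt(2);  w = [1 -1]/sqrt(2);   % standard Haar scaling/wavelet pair
f = [4 6 10 12 8 6 5 5];
[a,d] = haar_decomposition(f,v,w);        % a: scaled pairwise sums, d: scaled differences
g = haar_reconstruction(a,d,v,w);         % g equals f up to floating-point error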

Code 5 – Function M File – wavelet_transform.m

function FINAL=wavelet_transform(xy)
N=0;
FINAL=[];
% decomposition and thresholding
% xy=double(xy);
for a=0:0.1:1    
    N=N+1;

    v1=[1/sqrt(2) 1/sqrt(2)]; % original wavelets
    w1=[1/sqrt(2) -1/sqrt(2)];
    
    fv=frft(v1,a); %frft of normal scaling function    
    fv=(fv');
    fw=([fv(2) -fv(1)]);
    
    e=sum(abs(fv).^2); % normalizing the wavelets
    x=sqrt(1/e);    
    fv1=fv.*x; 
    fw1=([fv1(2) -fv1(1)]);        
    
    v1=real(fv1); % new wavelets
    w1=real(fw1);
    
    L=[];
    H=[];
    [r c]=size(xy);        
    for i=1:r
        f=xy(i,:);
        [a1,ddd]=haar_decomposition(f,v1,w1); % haar decomposition 
        ddd=thresh_check(ddd); % applying the threshold
        L(i,:)=a1;
        H(i,:)=ddd;
    end

    [r1,c1]=size(L);
    
    LL=[];
    LH=[];
    HL=[];
    HH=[];
    for i=1:c1
        f1=L(:,i)';
        f2=H(:,i)';        
        [a11,ddd1]=haar_decomposition(f1,v1,w1);
        [a22,ddd2]=haar_decomposition(f2,v1,w1);
        ddd1=thresh_check(ddd1);   
        ddd2=thresh_check(ddd2);    
        LL(:,i)=a11';
        LH(:,i)=ddd1';
        HL(:,i)=a22';
        HH(:,i)=ddd2';
    end
    
    final=[LL LH;HL HH]; % final decomposed image
    % reconstruction
    [r2,c2]=size(LL);
    L=[];
    % level 1 reconstruction
    for j=1:c2 %vertical
        part1=LL(:,j)';        
        part2=HL(:,j)';
        recon=haar_reconstruction(part1,part2,v1,w1);
        L=[L recon'];
    end
    H=[];
    for j=1:c2 %vertical
        part1=LH(:,j)';        
        part2=HH(:,j)';
        recon=haar_reconstruction(part1,part2,v1,w1);
        H=[H recon'];
    end    
    
    % level 2 reconstruction
    [r3,c3]=size(L);
    img=[];
    for i=1:r3 % horizontal
        part1=L(i,:);
        part2=H(i,:);
        recon=haar_reconstruction(part1,part2,v1,w1);
        img(i,:)=recon;
    end
    FINAL(:,:,N)=img;
end

end

Code 6 – Function M File – thresh_check.m

function sig2=thresh_check(sig)

% sweep candidate thresholds and record, for each, the MSE between the
% original detail signal and its hard-thresholded version
M=max(sig);
mat=[];
for th=0:0.01:M
    sig2=sig;
    for i=1:length(sig)
        if sig(i)<th && sig(i)>-th
            sig2(i)=0;   % zero coefficients with magnitude below th
        end
    end
    [PSNR,MSE,MAXERR,L2RAT] = measerr(sig,sig2);
    mat=[mat MSE];
end

[mini,IX]=min(mat);      % pick the threshold giving minimum MSE
th=0:0.01:M;
thresh=th(IX);

sig2=sig;
for i=1:length(sig)
    if sig(i)<thresh && sig(i)>-thresh
        sig2(i)=0;
    end
end
        
end

Code 7 – Function M File – KALMANFILTER.m

function [X,P,K]=KALMANFILTER(video,option)
% Usage :
% X : the estimated state
% P : the estimated error covariance
% K : Kalman gain  0<= K <= 1.
% The algorithm :
% K(n+1)     = P_est(n+1) ./ (P_est(n+1)+ Q) .
% P_est(n+1) = (I - K).* P_est(n) .
% X_est(n+1) = X_est(n) + K .*( Z(n) - H. *X_est(n)) .  
% option specifies how to estimate the mean state of pixels 
%
% option =1 means that the mean X is calculated on cube [3*3*3]
% option =2 means that the mean X is calculated on cube [3*3*2] 
% Generally the second option preserve edges better
% Class of the input Argument "video" : [double] .
video(:,:,2:end)    = Estimation(video(:,:,2:end),option);
[X,P,K]             = Perform_Filter(video);
%================================Perform_Filter============================
function [X_est,PP,KK] = Perform_Filter(video)
         
% Variables
line             = size(video,1); % number of rows
column           = size(video,2); % number of columns
time             = size(video,3); % number of frames of the video
PP               = zeros(time,1); % zero column vector of length "time" ie no. of frames
KK               = zeros(time,1); % zero column vector of length "time" ie no. of frames
X_est            = double(zeros(size(video))); % empty estimated value which will be converted to the final matrix (video)
I                = ones(line,column); % matrix of 1s (no of rows)x(no of columns)

% Initial conditions .
X1               = Initial_Estimation(video(:,:,1)); % estimation of the first frame of "video"
X_est(:,:,1)     = X1; % putting that value in the actual estimation variable
E                = X1-video(:,:,1); % difference of the first frame of original video and the estimated video
Q                = rand(1)/10*ones(line,column);
R                = rand(1)/10*ones(line,column);
P_est            = cov(E); % finding the covariance of the difference of the first frame of original video and the estimated video
% cov(E) is column-by-column; pad or crop its rows so that P_est has
% the same size as a frame, allowing element-wise updates below
if (line>column)
    delta        = line-column;
    portion      = P_est(1:delta,:);
    P_est        = [P_est;portion];
end
if(line<column)
    P_est        = P_est(1:line,:);
end
K                = P_est./(P_est+R);
PP(1)            = mean(P_est(:)); % mean error covariance for the first frame
KK(1)            = mean(K(:));     % mean Kalman gain for the first frame

% The algorithm
for i= 2 : time % filter the remaining frames of the video
    X_pred                     =  X_est(:,:,i-1); % prediction: previous frame's estimate
    X_pred(isnan(X_pred))      =  0.5; % replace NaN values with 0.5
    X_pred(isinf(X_pred))      =  0.5; % replace Inf values with 0.5
    P_pred                     =  P_est + Q ; % predicted error covariance for frame i
    K                          =  P_pred./(P_pred + R); % Kalman gain
    Correction                 =  K .*(video(:,:,i)-X_pred); % innovation weighted by the gain
    frame                      =  X_pred+Correction; % corrected estimate for frame i
    frame(isnan(frame))        =  0.5; % replace NaN values with 0.5
    frame(isinf(frame))        =  0.5; % replace Inf values with 0.5
    X_est(:,:,i)               =  frame; % (masking a temporary avoids mis-indexing the 3-D array)
    P_est                      =  (I-K).*P_pred; % updated error covariance
    PP(i)                      =  mean(P_pred(:));
    KK(i)                      =  mean(K(:));  
end
% replicate the neighbouring rows/columns into the borders to remove edge artifacts
X_est(1,:,:)     = X_est(2,:,:);
X_est(:,1,:)     = X_est(:,2,:);
X_est(line,:,:)  = X_est(line-1,:,:);
X_est(:,column,:)= X_est(:,column-1,:);
return
%====================================Estimation============================
function Y = Estimation(X,option)

% Normalize the data: convert to double and scale to [0,1]
if ~isa(X,'double') 
    X=double(X)./255;
end
if max(X(:)) > 1.00
    X=X./255.00;
end

n = size(X,1); % no of rows
p = size(X,2); % no of cols
m = size(X,3); % no of frames
Y = double(zeros(size(X))); % initializing
% Estimation
% Last frame: each interior pixel is the mean of its 4-connected neighbours
% (the first frame is handled separately by Initial_Estimation)
for i=2:n-1
    for j=2:p-1
        Y(i,j,end) = ( X(i-1,j,m)+X(i+1,j,m)+X(i,j-1,m)+X(i,j+1,m) ) ./ 4;
    end
end

if option~=1 && option~=2
    error('Invalid option for type of estimation')
end

if option==1
    % Mean over the full 3*3*3 cube centred on pixel (i,j,k)
    for i=2:n-1
        for j=2:p-1
            for k=2:m-1
                cube     = X(i-1:i+1, j-1:j+1, k-1:k+1);
                Y(i,j,k) = sum(cube(:))./27;
            end
        end
    end

elseif option==2
    % Mean over the 3*3*2 block: the 3*3 neighbourhoods at frames k and k-1
    for i=2:n-1
        for j=2:p-1
            for k=2:m
                block    = X(i-1:i+1, j-1:j+1, k-1:k);
                Y(i,j,k) = sum(block(:))./18;
            end
        end
    end
end

% All interior pixels of Y are now estimated from the noisy X
return
%=========================================================================
function Z = Initial_Estimation(X)
% First-frame estimate: mean of the 4-connected neighbours of each pixel

[n,p]=size(X);
Z=double(zeros(size(X)));
for i=2:n-1
    for j=2:p-1
        Z(i,j)=( X(i-1,j)+X(i+1,j)+X(i,j-1)+X(i,j+1) )./4; % mean of the 4-connectivity pixels
    end
end % Z is now an estimation of X

return
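A minimal call sketch for the filter above, assuming a grayscale frame stack with values in [0,1] (the random stack and the inspected frame index are illustrative); option 2 is used since, as the header notes, it tends to preserve edges better:

% Minimal usage sketch (hypothetical input)
frames  = rand(64,64,10);           % 10 noisy 64x64 frames in [0,1]
[X,P,K] = KALMANFILTER(frames,2);   % filtered frames, mean covariances, mean gains
figure, imshow(X(:,:,5),[]);        % inspect one filtered frame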

 
