An Investigation into Image Hiding Steganography with Digital Signature Framework

ABSTRACT

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information within another cover medium. While text hiding is well established in computer security applications, image hiding is rapidly gaining popularity because an image can carry far more useful information. In this paper, we carefully investigate the concept of steganography by hiding an image within another image under a secure structural digital signature framework. Our proposed work includes initial preprocessing of the host image through filtering, followed by embedding of the secret image and a description of the image data within the host image. The stego image is then given as input to the digital signature framework, which ensures secure, authentic and error-free transmission of the secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

INTRODUCTION

Since the rise of the Internet, one of the most important factors in information technology and communication has been the security of information. Cryptography was created as a technique for securing the secrecy of communication, and many different methods have been developed to encrypt and decrypt data in order to keep the message secret. Unfortunately, it is sometimes not enough to keep the contents of a message secret; it may also be necessary to keep the existence of the message secret. The technique used to achieve this is called steganography. Steganography is the art and science of invisible communication. This is accomplished by hiding information in other information, thus concealing the existence of the communicated information. The word steganography is derived from the Greek words “stegos”, meaning “cover”, and “grafia”, meaning “writing”, defining it as “covered writing”. In image steganography the information is hidden exclusively in images.

The idea and practice of hiding information has a long history. In his Histories, the Greek historian Herodotus writes of a nobleman, Histaeus, who needed to communicate with his son-in-law in Greece. He shaved the head of one of his most trusted slaves and tattooed the message onto the slave’s scalp. When the slave’s hair grew back, the slave was dispatched with the hidden message. In the Second World War the microdot technique was developed by the Germans: information, especially photographs, was reduced in size until it was the size of a typed period. Extremely difficult to detect, a normal cover message was sent over an insecure channel with one of the periods on the paper containing hidden information. Today steganography is mostly used on computers, with digital data as the carriers and networks as the high-speed delivery channels.

Steganography differs from cryptography in that cryptography focuses on keeping the contents of a message secret, whereas steganography focuses on keeping the existence of a message secret. Steganography and cryptography are both ways to protect information from unwanted parties, but neither technology alone is perfect and both can be compromised. Once the presence of hidden information is revealed, or even suspected, the purpose of steganography is partly defeated. The strength of steganography can thus be amplified by combining it with cryptography. Two other technologies closely related to steganography are watermarking and fingerprinting. These are mainly concerned with the protection of intellectual property, so their algorithms have different requirements than steganography; the requirements of a good steganographic algorithm are discussed below. In watermarking, all instances of an object are “marked” in the same way, and the information hidden is usually a signature signifying origin or ownership for the purpose of copyright protection. With fingerprinting, on the other hand, different, unique marks are embedded in distinct copies of the carrier object that are supplied to different customers. This enables the intellectual property owner to identify customers who break their licensing agreement by supplying the property to third parties.
In watermarking and fingerprinting the fact that information is hidden inside the files may be public knowledge, and sometimes it may even be visible, while in steganography the imperceptibility of the information is crucial. A successful attack on a steganographic system consists of an adversary observing that there is information hidden inside a file, while a successful attack on a watermarking or fingerprinting system would not be to detect the mark, but to remove it.

Steganography concepts

Although steganography is an ancient subject, its modern formulation is often given in terms of the prisoners’ problem proposed by Simmons, where two inmates wish to communicate in secret to hatch an escape plan. All of their communication passes through a warden who will throw them in solitary confinement should she suspect any covert communication. The warden, who is free to examine all communication exchanged between the inmates, can be either passive or active. A passive warden simply examines the communication to try to determine if it potentially contains secret information. If she suspects a communication of containing hidden information, a passive warden takes note of the detected covert communication, reports it to some outside party and lets the message through without blocking it. An active warden, on the other hand, will deliberately try to alter communication suspected of carrying hidden information, in order to remove that information.

Different kinds of steganography

Almost all digital file formats can be used for steganography, but the most suitable formats are those with a high degree of redundancy. Redundancy can be defined as the bits of an object that provide accuracy far greater than necessary for the object’s use and display. The redundant bits of an object are those bits that can be altered without the alteration being detected easily. Image and audio files especially comply with this requirement, while research has also uncovered other file formats that can be used for information hiding. The following figure shows the four main categories of file formats that can be used for steganography.

Hiding information in text is historically the most important method of steganography. An obvious method was to hide a secret message in every nth letter of every word of a text message. It is only since the beginning of the Internet and all the different digital file formats that it has decreased in importance. Text steganography using digital files is not used very often, since text files have a very small amount of redundant data.

Given the proliferation of digital images, especially on the Internet, and given the large number of redundant bits present in the digital representation of an image, images are the most popular cover objects for steganography. This paper focuses on hiding information in images in the next sections.

To hide information in audio files, techniques similar to those for image files are used. One technique unique to audio steganography is masking, which exploits the properties of the human ear to hide information unnoticeably: a faint but audible sound becomes inaudible in the presence of another, louder audible sound. This property creates a channel in which to hide information. Although nearly equal to images in steganographic potential, the larger size of meaningful audio files makes them less popular to use than images.

The term protocol steganography refers to the technique of embedding information within messages and network control protocols used in network transmission. In the layers of the OSI network model there exist covert channels where steganography can be used. An example of where information can be hidden is in the header of a TCP/IP packet, in fields that are either optional or never used. A paper by Ahsan and Kundur provides more information on this.
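The nth-letter scheme mentioned above can be sketched in a few lines. The helper names below are illustrative, not taken from any library, and the sketch assumes every cover word is long enough to hold a secret letter:

```python
def hide_nth_letter(cover_words, secret, n=1):
    """Illustrative sketch: overwrite the n-th letter of each cover word
    with one letter of the secret message."""
    out = []
    for word, ch in zip(cover_words, secret):
        out.append(word[:n - 1] + ch + word[n:])  # replace the n-th letter
    return out + cover_words[len(secret):]        # remaining words unchanged

def reveal_nth_letter(words, length, n=1):
    """Read the n-th letter of the first `length` words."""
    return "".join(w[n - 1] for w in words[:length])

cover = ["sunny", "trips", "opens", "plans"]
stego = hide_nth_letter(cover, "hid", n=2)
print(reveal_nth_letter(stego, 3, n=2))  # -> hid
```

A real text-steganography scheme would of course pick cover text whose nth letters already spell the message, so the text itself is unaltered; the sketch only shows the encoding convention.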

Image steganography

As stated earlier, images are the most popular cover objects used for steganography.  In the domain of digital images many different image file formats exist, most of them for specific applications.  For these different image file formats, different steganographic algorithms exist.  

Image definition

To a computer, an image is a collection of numbers representing different light intensities in different areas of the image. This numeric representation forms a grid, and the individual points are referred to as pixels. Most images on the Internet consist of a rectangular map of the image’s pixels (represented as bits), recording where each pixel is located and its colour. These pixels are displayed horizontally, row by row. The number of bits in a colour scheme, called the bit depth, refers to the number of bits used for each pixel. The smallest bit depth in current colour schemes is 8, meaning that 8 bits are used to describe the colour of each pixel. Monochrome and greyscale images use 8 bits for each pixel and are able to display 256 different colours or shades of grey. Digital colour images are typically stored in 24-bit files and use the RGB colour model, also known as true colour. All colour variations for the pixels of a 24-bit image are derived from three primary colours: red, green and blue, each represented by 8 bits. Thus in one given pixel there can be 256 different quantities of red, green and blue, adding up to more than 16 million combinations and resulting in more than 16 million colours. Not surprisingly, the larger the number of colours that can be displayed, the larger the file size.
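The colour counts above follow directly from the bit depth; a quick arithmetic check:

```python
# Bit depth determines how many colours a pixel can represent.
bits_per_channel = 8
shades_per_channel = 2 ** bits_per_channel   # 256 levels of red, green or blue
grayscale_colours = 2 ** 8                   # 8-bit grayscale: 256 shades of grey
truecolour = shades_per_channel ** 3         # 24-bit RGB: 256 * 256 * 256
print(shades_per_channel, truecolour)        # -> 256 16777216
```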

OBJECTIVE

The transmission of digital colour images often suffers from data redundancy, which requires a huge storage space. In order to reduce transmission and storage costs, the image is compressed by lowering the number of possible colours, which in turn reduces the image size to a great extent. In this regard, colour quantization can be carried out, which approximates the original pixels of the secret image with their nearest representative colours and thus reduces the number of possible colours. This approximation intends to preserve the image quality as much as possible so that the visual similarity between the original and the optimized image is maintained. Since these methods depend heavily on the colour data sets they encounter and perform the quantization accordingly, their performance varies with the colour distribution of the images they process.
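One simple way to illustrate the quantization step described above is to snap each 8-bit channel value to the nearest of a small set of uniformly spaced representative levels. Practical systems usually use palette-based methods such as median-cut instead; this uniform sketch (with illustrative names) only conveys the idea:

```python
import numpy as np

def uniform_quantize(img, levels=8):
    """Map each 8-bit channel value to the nearest of `levels` uniformly
    spaced representative values (a simple stand-in for palette methods)."""
    step = 256 / levels
    idx = np.floor(img / step)            # which bin each pixel value falls in
    centers = (idx + 0.5) * step          # representative value of that bin
    return np.clip(np.round(centers), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
q = uniform_quantize(img, levels=8)
print(np.unique(q))  # at most 8 distinct representative values remain
```

Fewer distinct values means the image compresses better, at the cost of some visual fidelity, which is exactly the trade-off the text describes.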
Authentication of the sender is yet another challenging issue in computer security. Malicious forgery can take place if authentication is not ensured properly. The idea of the digital signature is very significant, as it ensures the authenticity of the sender as well as the transmission of the correct data. Any change in a pixel can be distinguished from the actual set of pixels. The robustness of the digital signature framework is widely accepted for the transmission of secret information over insecure networks.
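A keyed MAC over the stego bytes is a minimal stand-in for the signature idea described above: any change to a pixel changes the tag, so tampering is detected. This is only an illustrative sketch, not the Structural Digital Signature used later in the paper:

```python
import hashlib
import hmac

def sign(stego_bytes, key):
    """Compute an authentication tag over the stego-image bytes."""
    return hmac.new(key, stego_bytes, hashlib.sha256).hexdigest()

def verify(stego_bytes, key, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(stego_bytes, key), tag)

key = b"shared-secret"              # illustrative pre-shared key
stego = bytes(range(16))            # stand-in for stego-image pixel bytes
tag = sign(stego, key)
print(verify(stego, key, tag))                   # -> True
print(verify(stego[:-1] + b"\x00", key, tag))    # one byte altered -> False
```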

PROBLEM FORMULATION

The above-mentioned factors have motivated us to develop a framework supporting steganography for hiding images with the application of a Structural Digital Signature (SDS). Our proposed framework includes initial preprocessing of host images to eliminate unwanted noise, colour quantization of the secret image to reduce storage space, embedding of the secret image with annotation data (a description of the image), and transmission of the stego image along with the digital signature over a wireless channel. The proposed framework also includes authenticating the sender, followed by error detection and correction of the received data, and finally extraction of the secret image with its annotation data.

LITERATURE SURVEY

On the security of structural information extraction/embedding for images

In addition to robustness and fragility, security is quite an important issue in media authentication systems. This paper first examines the insecurity of several block-based authentication methods under counterfeit attacks. Then we prove that the proposed digital signature, which is composed of structural information, is content-dependent and provides security against forgery attacks. Experimental results demonstrate the benefits of exploiting structural information in a media authentication system.

An Investigation into Image Hiding Steganography with Digital Signature Framework

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information within another cover medium. While text hiding is well established in computer security applications, image hiding is rapidly gaining popularity because an image can carry more useful information. In this paper, the concept of steganography is investigated by incorporating image hiding within another image with a secure structural digital signature framework. The proposed work includes initial image preprocessing through filtering of the host image, followed by embedding of the secret image and a description of the image data within the host image. Later, the stego image is given as input to the digital signature framework, which ensures secure, authentic and error-free transmission of the secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

Fuzzy Filters to the Reduction of Impulse and Gaussian Noise in Gray and Color Images

Noise removal from a corrupted image finds vital application in image transmission over wide-band networks. Two new and simple fuzzy filters, named the Fuzzy Tri-State filter and the Probor rule based fuzzy filter, are proposed to remove random-valued impulse noise and Gaussian noise in digital grayscale and colour images. The Fuzzy Tri-State filter is a non-linear filter proposed for preserving image details while effectively reducing both types of noise. The Probor filter is subdivided into two sub-filters. The first sub-filter is responsible for quantifying the degree to which a pixel must be corrected, using Euclidean distance. The goal of the second sub-filter is to perform the correction operations indicated by the first sub-filter. These filters are compared with a few existing techniques to highlight their effectiveness. These filtering techniques can be used as a preprocessing step for edge detection of Gaussian-corrupted digital images, and in the case of impulse-noise-corrupted images the filter performs well in preserving details and suppressing noise.

A Variant of LSB Steganography for Hiding Images in Audio

Information hiding is the technology of embedding secret information into cover data in a way that keeps the secret information invisible. This paper presents a new steganographic method for embedding an image in an audio file. Emphasis is on the proposed scheme of image hiding in audio and its comparison with the simple least-significant-bit insertion method of data hiding in audio.

A steganography algorithm for hiding image in Image by improved LSB substitution by minimize detection

Steganography is a branch of information hiding. It allows people to communicate secretly. As more and more material becomes available electronically, the influence of steganography on our lives will continue to grow. Much confidential information has been leaked to rival firms using steganographic tools that hide the information in music and picture files. The application of steganography is an important motivation for feature selection. In recent years many successful steganography methods have been proposed, but they are challenged by steganalysis, a type of attack on steganographic algorithms that detects the stego-message through statistical analysis of pixel values [1][2]. To ensure security against steganalysis attacks, a new steganographic algorithm for 8-bit (grayscale) and 24-bit (colour) images, based on logical operations, is presented in this paper. The algorithm embeds the MSBs of the secret image into the LSBs of the cover image: the n LSBs of a byte of the cover image are replaced by the n MSBs of the secret image. The image quality of the stego-image can be greatly improved with low extra computational complexity, and the worst-case mean-square error between the stego-image and the cover image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n<=4, because of the better PSNR achieved by this technique. This rests on the assumption that if the feature is visible the point of attack is evident; thus the goal here is always to conceal the very existence of the embedded data.
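The n-MSB-into-n-LSB replacement this entry describes can be sketched per pixel with bit operations. This is a sketch of the general technique, not the cited paper's full algorithm:

```python
def embed_pixel(cover, secret, n=4):
    """Replace the n LSBs of an 8-bit cover pixel with the n MSBs
    of an 8-bit secret pixel."""
    msb = secret >> (8 - n)                     # top n bits of the secret pixel
    return ((cover & ~((1 << n) - 1)) & 0xFF) | msb

def extract_pixel(stego, n=4):
    """Recover the secret pixel from the n LSBs, to n-bit precision."""
    return (stego & ((1 << n) - 1)) << (8 - n)

stego = embed_pixel(0b10110110, 0b11010010, n=4)
print(bin(stego))                  # -> 0b10111101 (cover high nibble kept)
print(bin(extract_pixel(stego)))   # -> 0b11010000 (secret to 4-bit precision)
```

With n=4 the cover pixel changes by at most 15 grey levels, which is why the stego-image remains visually close to the cover, while the secret is recovered only to its top 4 bits of precision.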

METHODOLOGY / PLANNING OF WORK

The following changes are made to the above methodology for better security:

  1. SI - Stego image
  2. CI 1 - Cover image 1
  3. Hide the stego image in Cover image 1 using the LSB method as modified by the author: instead of changing only 1 bit, the author intends to change more than 1 bit for added security.
  4. Apply the signature to Cover image 1 after embedding the stego image into it.
  5. CI 2 - Cover image 2
  6. Now Cover image 1 will act as the stego image for Cover image 2. This gives three levels of security, so even if someone manages to crack the upper level, the attacker still has to break another two levels.
  7. This will now be the final cover image to transmit.
  8. At the receiver end, we receive Cover image 2.
  9. Apply the reverse LSB operation on Cover image 2 to obtain Cover image 1.
  10. Verify the signature on Cover image 1 to obtain the cover image containing the stego image.
  11. Again apply the reverse LSB operation on Cover image 1 to obtain the stego image.

We will compare the proposed work with the work in the base paper on the basis of PSNR and MSE values.
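The PSNR and MSE comparison follows the standard definitions; a minimal NumPy version (separate from the MATLAB code below, for reference):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

orig = np.array([[10, 20], [30, 40]], dtype=np.uint8)
recon = np.array([[10, 21], [30, 40]], dtype=np.uint8)  # one pixel off by 1
print(mse(orig, recon))   # -> 0.25
print(round(psnr(orig, recon), 2))
```

Higher PSNR (lower MSE) means the stego or reconstructed image is closer to the original, which is the criterion used for the comparison above.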

FUTURE SCOPE

Although only some of the main image steganographic techniques were discussed in this paper, one can see that there exists a large selection of approaches to hiding information in images. All the major image file formats have different methods of hiding messages, each with its own strong and weak points. Thus for an agent to decide which steganographic algorithm to use, he would have to decide on the type of application he wants to use the algorithm for and whether he is willing to compromise on some features to ensure the security of others. Hence we could mix and match a series of algorithms along with ours to find the optimal process for a desired application. We will also attempt to improve the performance in terms of PSNR.

CONCLUSION

We proposed a framework to support the concept of image steganography within a Structural Digital Signature environment. We attempted to include as many of the important phases concerned with image security and accurate transmission as possible. The robustness of our framework lies in the incorporation of SDS, as it efficiently authenticates the sender and checks the accuracy of the transmitted data. With the incorporation of SDS, we believe the concept of image steganography will contribute to a large extent to safe and secure transmission of image data.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. Signature
  4. Note that these codes are not in any particular order; copy them all before running the program.
  5. Run the “FINAL.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all

% READ THE REQUIRED IMAGES
% read the host1 image
[file,path]=uigetfile('*.jpg','Select the host 1 image');
img=strcat(path,file);
host1=imread(img);
if length(size(host1))==3
    host1=rgb2gray(host1);    
end    

% read the host2 image
[file,path]=uigetfile('*.jpg','Select the host 2 image');
img=strcat(path,file);
host2=imread(img);
if length(size(host2))==3
    host2=rgb2gray(host2);    
end    

% read the message image
[file,path]=uigetfile('*.jpg','Select the msg image');
img=strcat(path,file);
msg=imread(img);
if length(size(msg))==3
    msg=rgb2gray(msg);   
end   

signature='Welcome1234';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% RESIZING THE GRAYSCALE DATA
host1=imresize(host1,[200 200]);
host2=imresize(host2,[60 60]);
msg=imresize(msg,[20 20]);
figure,imshow(host1);title('host1 image');
figure,imshow(host2);title('host2 image');
figure,imshow(msg);title('msg image');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EMBEDDING PROCESS
% embedding msg into host2
[DECc,len1]=embedding_func(host2,msg);
figure, imshow(uint8(DECc)); title('Cover image after first encryption')

% embedding host2 into host1
[final_encrypted,len2]=embedding_func(host1,DECc);
figure, imshow(uint8(final_encrypted)); title('final encryption')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DECRYPTION PROCESS
disp('Please enter signature to continue. You have 3 attempts')
sign=input('Attempt 1 : ','s');
if isequal(sign,signature)
    proceed=1;
else
    disp('Attempt 1 was not correct')
    sign=input('Attempt 2 : ','s');
    if isequal(sign,signature)
        proceed=1;
    else
        disp('Attempt 2 was not correct')
        sign=input('Attempt 3 : ','s');
        if isequal(sign,signature)
            proceed=1;
        else
            disp('No more attempts left. Program will now terminate');
            proceed=0;
        end
    end
end


if proceed==1 
    % decryption lev1 
    host1_de=decryption_func(final_encrypted,len2);
    figure, imshow(uint8(host1_de)); title('Host 1 image after first decryption')

    % decryption lev2 
    host2_de=decryption_func(host1_de,len1);
    figure, imshow(uint8(host2_de)); title('Host 2 image (Final Message) after second decryption')
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % RESULTS
    figure
    subplot(1,2,1)
    imshow(msg)
    title('Original Message')
    subplot(1,2,2)
    imshow(uint8(host2_de))
    title('Decrypted message')

    figure
    subplot(1,2,1)
    imshow(host2)
    title('Intermediate Image Orig.')
    subplot(1,2,2)
    imshow(uint8(host1_de))
    title('Intermediate Image Decrypted')
    
    results(msg,host2_de)
end

Code 2 – Function M File – embedding_func.m

function [DECc,len]=embedding_func(cover,msg)

[rc,cc]=size(cover);
[rm,cm]=size(msg);

BINARYc=[];
for i=1:rc
    for j=1:cc
        pixel_val=cover(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYc=[BINARYc; bin_pixel_val];
    end
end

BINARYm=[];
for i=1:rm
    for j=1:cm
        pixel_val=msg(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYm=[BINARYm bin_pixel_val];
    end
end
len=length(BINARYm);
for i=1:length(BINARYm)
    BINARYc(i,end)=BINARYm(i);
end

inc=1;
for i=1:rc
    for j=1:cc
        pixel_val=bin2dec(num2str(BINARYc(inc,:)));
        DECc(i,j)=pixel_val;
        inc=inc+1;
    end
end



Code 3 – Function M File – decryption_func.m

function MSG=decryption_func(cover,len)

% extract the last bit from each pixel
[r,c]=size(cover);
rm=1;
rc=0;
flag=0;
for i=1:r
    for j=1:c
        pixel_val=cover(i,j);
        binary=fliplr(dec2binvec(double(pixel_val),8));   
        rc=rc+1;
        flag=flag+1;
        Bin_msg(rm,rc)=binary(end);        
        if rc==8
            rm=rm+1;
            rc=0;
        end
        if flag==len
            break
        end
    end
    if flag==len
        break
    end
end

% convert binary to decimal
row=sqrt(size(Bin_msg,1));
col=row;
inc=0;
for i=1:row
    for j=1:col
        inc=inc+1;
        bin=Bin_msg(inc,:);
        pixel_val=bin2dec(num2str(bin));
        MSG(i,j)=pixel_val;
    end
end

end

Code 4 – Function M File – results.m

function results(orig,decrypted)

disp('Comparison of original message and decrypted message :')
orig=double(orig);
decrypted=double(decrypted);
% measerr (Wavelet Toolbox) returns the actual error measures
[PSNR,MSE,MAXERR,L2RAT]=measerr(orig,decrypted);

disp(['PSNR value is : ' num2str(PSNR)])
disp(['MSE value is : ' num2str(MSE)])
disp(['MAXERR value is : ' num2str(MAXERR)])
disp(['L2RAT value is : ' num2str(L2RAT)])

end

Image denoising using Markov Random Field (MRF) model

ABSTRACT

A Markov random field is an n-dimensional random process defined on a discrete lattice. Usually the lattice is a regular two-dimensional grid in the plane, finite or infinite. Markov random fields form a comparatively new branch of probability theory that promises to be important both in the theory and in the applications of probability. The existing literature on the subject is quite technical and often understandable only to experts. This paper attempts to present the basic idea of the subject and its application to image denoising for a wider audience. In particular, a novel approach for image denoising is introduced using the ICM (Iterated Conditional Modes) algorithm on a Markov Random Field model.

INTRODUCTION

Many problems in Signal Processing can be cast in the framework of state estimation, in which we have state variables whose values are not directly accessible and variables whose values are available. Variables of the latter kind are also referred to as observations in this context. Usually there exists a statistical relationship between the state variables and the observations such that we can infer estimates of the states from the observations. In many cases prior knowledge about the states is also available (usually in form of a probability distribution on the state variables) and we can use that knowledge to refine the state estimate. In a variety of interesting problems, however, neither the statistical relationship between the state variables and the observations nor the prior distribution are perfectly known and hence are modeled as parameterized distributions with unknown parameters. These parameters are then also subject to estimation.  In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. A Markov random field is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can’t represent certain dependencies that a Bayesian network can (such as induced dependencies). 

OPTIMIZATION

An optimization problem is one that involves finding the extremum of a quantity or function. Such problems often arise as a result of a source of uncertainty that precludes the possibility of an exact solution. Optimization in an MRF problem involves finding the maximum of the joint probability over the graph, usually with some of the variables given by observed data. Equivalently, as can be seen from the equations above, this can be done by minimizing the total energy, which in turn requires the simultaneous minimization of all the clique potentials. Techniques for minimization of MRF potentials are plentiful, and many of them are also applicable to optimization problems other than MRFs. For example, gradient descent methods are well-known techniques for finding local minima, while the closely related method of simulated annealing attempts to find a global minimum.
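The ICM approach mentioned in the abstract is a greedy instance of this energy minimization: each pixel in turn is set to the label that minimizes its local energy (a data term plus a smoothness term over its neighbours), and sweeps repeat until convergence. A toy sketch for binary (+/-1) images, with illustrative parameter values, not the paper's exact model:

```python
import numpy as np

def icm_denoise(noisy, beta=1.5, iters=5):
    """Greedy ICM on a binary (+/-1) image: set each pixel to the label
    minimizing data cost + beta * disagreement with its 4-neighbours."""
    x = noisy.copy()
    h, w = x.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = x[i, j], np.inf
                for lab in (-1, 1):
                    e = float(lab != noisy[i, j])          # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (lab != x[ni, nj])  # smoothness term
                    if e < best_e:
                        best, best_e = lab, e
                x[i, j] = best
    return x

clean = np.ones((6, 6), dtype=int)
noisy = clean.copy()
noisy[2, 3] = -1
noisy[4, 1] = -1                       # two isolated flipped pixels
print((icm_denoise(noisy) == clean).all())  # -> True: the flips are removed
```

Because each update can only lower the total energy, ICM converges quickly but only to a local minimum, which is the trade-off against global methods such as simulated annealing mentioned above.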

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these codes are not in any particular order; copy them all before running the program.
  4. Run the “MAIN_GUI.m” file.

Code 1 – GUI Function File – MAIN_GUI.m

function varargout = MAIN_GUI(varargin)

% MAIN_GUI MATLAB code for MAIN_GUI.fig
%      MAIN_GUI, by itself, creates a new MAIN_GUI or raises the existing
%      singleton*.
%
%      H = MAIN_GUI returns the handle to a new MAIN_GUI or the handle to
%      the existing singleton*.
%
%      MAIN_GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in MAIN_GUI.M with the given input arguments.
%
%      MAIN_GUI('Property','Value',...) creates a new MAIN_GUI or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before MAIN_GUI_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to MAIN_GUI_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help MAIN_GUI

% Last Modified by GUIDE v2.5 07-Oct-2013 19:54:21

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @MAIN_GUI_OpeningFcn, ...
                   'gui_OutputFcn',  @MAIN_GUI_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before MAIN_GUI is made visible.
function MAIN_GUI_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to MAIN_GUI (see VARARGIN)


set(handles.pushbutton5,'Enable','off') 
set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

% Choose default command line output for MAIN_GUI
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes MAIN_GUI wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = MAIN_GUI_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[file,path]=uigetfile('*.tif','SELECT THE INPUT IMAGE');
if isequal(file,0)
    return % user cancelled the file dialog
end
inputimage=imread(fullfile(path,file));
if length(size(inputimage))==3
    inputimage=rgb2gray(inputimage);
end

handles.inputimage=inputimage;
axes(handles.axes1)
imshow(handles.inputimage)

set(handles.pushbutton5,'Enable','on')

guidata(hObject, handles);

% --- Executes on selection change in listbox1.
function listbox1_Callback(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox1 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from listbox1
str = get(hObject, 'String');
val = get(hObject,'Value');

switch str{val};
    case 'Gaussian'
        handles.CH=1;
    case 'Salt and  Pepper'
        handles.CH=2;
    case 'Poisson'
        handles.CH=3;
    case 'Speckle'
        handles.CH=4;
end

guidata(hObject, handles);

% --- Executes during object creation, after setting all properties.
function listbox1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
covar=100;
max_diff = 200;
weight_diff = 0.02;
iterations = 10;
dst=handles.noisyimage;
denoised = restore_image(dst, covar, max_diff, weight_diff, iterations);

handles.denoised=denoised;
set(handles.pushbutton3,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

axes(handles.axes3)
imshow(uint8(handles.denoised))
set(handles.pushbutton4,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

X=handles.inputimage;
XAPP1=handles.noisyimage;
XAPP2=handles.denoised;
[PSNRorigVSnoisy,MSEorigVSnoisy]=measerr(X,XAPP1);
[PSNRorigVSdenoised,MSEorigVSdenoised]=measerr(X,XAPP2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% NEW PARAMETERS TO BE CALCULATED
xyz=6;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

set(handles.text9,'String',num2str(MSEorigVSnoisy));
set(handles.text12,'String',num2str(MSEorigVSdenoised));
set(handles.text11,'String',num2str(PSNRorigVSnoisy));
set(handles.text10,'String',num2str(PSNRorigVSdenoised));
set(handles.text14,'String',num2str(xyz));

% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

I=handles.inputimage;
if ~isfield(handles,'CH')
    handles.CH=1; % default to Gaussian noise if no selection was made in the listbox
end
if handles.CH==1
    noisy = imnoise(I,'gaussian',0,0.001);
elseif handles.CH==2
    noisy = imnoise(I,'salt & pepper');
elseif handles.CH==3
    noisy = imnoise(I,'poisson') ;
elseif handles.CH==4        
    noisy = imnoise(I,'speckle',0.01);
end
handles.noisyimage=noisy;

axes(handles.axes2)
imshow(handles.noisyimage)

handles.noisyimage=double(handles.noisyimage);

set(handles.pushbutton2,'Enable','on')

guidata(hObject, handles);

Code 2 – Function M-File – restore_image.m

function otptimage=restore_image(inptimage,covar,diffM,diffW,itr)
 
% Maintain two image buffers: the source holds the current estimate and the
% destination receives the updated estimate; they are swapped every iteration.
[row,col]=size(inptimage); % get the rows and columns of the input image
buffer = zeros(row,col,2); % make 2 empty frames of 0 values
buffer(:,:,1) = inptimage; % frame 1 holds the input image; frame 2 will receive the output
s = 2; d = 1;
V_max=(row*col) * ((256)^2/(2*covar) + 4*diffW*diffM); % an upper bound on the potential of any pixel value
for i=1:itr    
    % Switch source and destination buffers.
    if s==1
        s=2; d=1;
    else
        s=1; d=2;
    end
    % Vary each pixel individually to find the values that minimise the local potentials.
    for r=1:row % row count
        for c=1:col % column count
            V_local=V_max; % initializing local potential value with the highest potential value
            min_val=-1;
            for val=0:255                
                V_data=(val-inptimage(r,c))^2/(2*covar); % component due to known data                
                V_diff=0; % component due to difference btw neighbouring pixel values
                if r>1 % 2--row
                    V_diff=V_diff+min((val-buffer(r-1,c,s))^2,diffM);
                end
                if r<size(inptimage,1) % 1--(row-1)
                    V_diff=V_diff+min((val-buffer(r+1,c,s))^2,diffM);
                end
                if c>1 % 2--col
                    V_diff=V_diff+min((val-buffer(r,c-1,s))^2,diffM);
                end
                if c<size(inptimage,2) % 1--(col-1)
                    V_diff=V_diff+min((val-buffer(r,c+1,s))^2,diffM);
                end
                V_current=V_data + diffW*V_diff; % new potential value
                if V_current<V_local % decision rule
%                     [r c val]
                    min_val=val;
                    V_local=V_current;
                end
            end
            buffer(r,c,d)=min_val;
        end
    end   
end
otptimage=buffer(:,:,d);
end

Multiresolution Bilateral Filters

ABSTRACT

The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, in which bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. Denoising is a challenging task in the field of signal and image processing. Denoising of natural images corrupted by Gaussian noise using wavelet techniques is very effective because of the wavelet transform's ability to capture the energy of a signal in a few large transform coefficients. The wavelet denoising scheme thresholds the wavelet coefficients arising from the standard discrete wavelet transform. Here, we analyze several methods of removing Gaussian noise from degraded images using adaptive wavelet thresholds (BayesShrink, NormalShrink and NeighShrink) and compare the results in terms of PSNR. Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is non-iterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. 
In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.
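To make the framework concrete, here is a hypothetical Python sketch on a 1-D signal. The paper operates on 2-D images with a full wavelet filter bank; the single-level Haar transform, the toy 1-D bilateral filter, and all parameter values below are illustrative assumptions, not the paper's implementation:

```python
import math

def haar_1d(x):
    """One-level Haar analysis: approximation (a) and detail (d) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_1d_inv(a, d):
    """Perfect reconstruction from the Haar approximation/detail pair."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def bilateral_1d(x, sigma_d=1.0, sigma_r=10.0, radius=2):
    """Toy 1-D bilateral filter: weights combine spatial and intensity distance."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_d ** 2)) * \
                math.exp(-((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

def soft_threshold(d, t):
    """Soft-threshold (shrink) the detail coefficients."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in d]

def mrbf_denoise(x, t=2.0):
    a, d = haar_1d(x)           # decompose into subbands
    a = bilateral_1d(a)         # bilateral filtering on the approximation subband
    d = soft_threshold(d, t)    # wavelet thresholding on the detail subband
    return haar_1d_inv(a, d)    # reconstruct
```

On a constant (noise-free) signal the pipeline is an identity up to floating-point error, since the bilateral filter leaves flat regions unchanged and all detail coefficients are zero.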

LITERATURE REVIEW

Image denoising using wavelets by Raghuram Rangarajan, Ramji Venkataraman, and Siddharth Shah

Wavelet transforms enable us to represent signals with a high degree of sparsity. This is the principle behind a non-linear wavelet-based signal estimation technique known as wavelet denoising. In this report the authors explore wavelet denoising of images using several thresholding techniques such as SUREShrink, VisuShrink and BayesShrink. Further, they use a Gaussian-based model to perform combined denoising and compression for natural images and compare the performance of these methods. Wavelet thresholding is shown to be an effective method of denoising noisy signals. The authors first tested hard and soft thresholding on noisy versions of the standard 1-D signals and found the best threshold. They then investigated several soft-thresholding schemes, viz. VisuShrink, SureShrink and BayesShrink, for denoising images. They found that subband-adaptive thresholding performs better than universal thresholding. Among these, BayesShrink gave the best results. This validates the assumption that the GGD is a very good model for the wavelet coefficient distribution in a subband. By weakening the GGD assumption and taking the coefficients to be Gaussian distributed, they obtained a simple model that facilitated both denoising and compression.

Adaptive Wavelet Thresholding for Image Denoising and Compression by S. Grace Chang, Bin Yu, Martin Vetterli

The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms Donoho and Johnstone’s SureShrink most of the time. The second part of the paper attempts to further validate recent claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen’s minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate were an additional concern to denoising.
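As an illustration, the BayesShrink rule for one detail subband can be sketched in Python: the threshold is T = σ²/σ_x, where σ² is the noise variance and σ_x = sqrt(max(σ_y² − σ², 0)) estimates the signal standard deviation from the subband variance σ_y². This is a sketch of the published rule, not the authors' code:

```python
import math

def bayes_shrink_threshold(detail, noise_sigma):
    """BayesShrink threshold for one detail subband: T = sigma^2 / sigma_x,
    with sigma_x^2 = max(var(detail) - sigma^2, 0)."""
    n = len(detail)
    var_y = sum(v * v for v in detail) / n       # subband variance (zero mean assumed)
    var_x = max(var_y - noise_sigma ** 2, 0.0)
    if var_x == 0.0:
        # noise dominates the subband: threshold every coefficient
        return max(abs(v) for v in detail)
    return noise_sigma ** 2 / math.sqrt(var_x)

def soft(v, t):
    """Soft-thresholding (shrinkage) of a single wavelet coefficient."""
    return math.copysign(max(abs(v) - t, 0.0), v)
```

Because T depends on each subband's own variance estimate, the threshold adapts per subband, which is the property the paper credits for BayesShrink's performance.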

Bilateral Filtering for Gray and Color Images by C. Tomasi and R. Manduchi

Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is non-iterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.

Image Denoising Using Wavelet Transform

An image is often corrupted by noise in its acquisition and transmission. Removing noise from the original image is still a challenging problem for researchers. In this work a new threshold function is developed for image denoising algorithms. It uses the wavelet transform in connection with threshold functions for removing noise. The Universal, VisuShrink, SureShrink, BayesShrink and NormalShrink thresholds are compared with the proposed threshold function, which improves the SNR efficiently. In the case where an image is corrupted with Gaussian noise, wavelet shrinkage denoising has proved to be nearly optimal. The output of the BayesShrink method is much closer to the high-quality image, and there is no blurring in the output image, unlike the other two methods. VisuShrink cannot denoise multiplicative noise, unlike BayesShrink. Denoising salt-and-pepper noise using VisuShrink and the Universal threshold has proved to be inefficient. Since selection of the right denoising procedure plays a major role, it is important to experiment with and compare the methods. As future research, the authors would like to work further on the comparison of the denoising techniques. Besides, the complexity of the algorithms can be measured according to CPU computing time in flops. This can produce a time-complexity standard for each algorithm. These two points would be considered as an extension of the present work.

Wavelet Thresholding for Image De-noising by Rohit Sihag, Rakesh Sharma, and Varun Setia

Denoising is a challenging task in the field of signal and image processing. Denoising of natural images corrupted by Gaussian noise using wavelet techniques is very effective because of the wavelet transform's ability to capture the energy of a signal in a few large transform coefficients. The wavelet denoising scheme thresholds the wavelet coefficients arising from the standard discrete wavelet transform. In this paper, the authors analyze several methods of removing Gaussian noise from degraded images using adaptive wavelet thresholds (BayesShrink, NormalShrink and NeighShrink) and compare the results in terms of PSNR. The experiments were conducted to study the suitability of different wavelet bases and also the different methods of thresholding or shrinkage. The results show that NeighShrink gives better results than BayesShrink and NormalShrink in terms of PSNR. However, in terms of processing time, NormalShrink is faster than the other two. The results also show that, among all wavelet bases, the coiflet wavelet gives the best result for image denoising because it achieves the maximum PSNR.

INTRODUCTION

Image noise is random (not present in the object imaged) variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that adds spurious and extraneous information.

The original meaning of “noise” was and remains “unwanted signal”; unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise (“static”). By analogy unwanted electrical fluctuations themselves came to be known as “noise”.[1] Image noise is, of course, inaudible.

The magnitude of image noise can range from almost imperceptible specks on a digital photograph taken in good light, to optical and radio-astronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing (a noise level that would be totally unacceptable in a photograph since it would be impossible to determine even what the subject was).

Gaussian noise

Principal sources of Gaussian noise in digital images arise during acquisition (e.g. sensor noise caused by poor illumination and/or high temperature) and/or transmission (e.g. electronic circuit noise).

The standard model of this noise is additive, independent at each pixel and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors (“kTC noise”). Amplifier noise is a major part of the “read noise” of an image sensor, that is, of the constant noise level in dark areas of the image. In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel.
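The additive, signal-independent model described above can be simulated directly. The following hypothetical Python sketch adds zero-mean Gaussian noise to 8-bit pixel values and clips to the valid range (the sigma value is an arbitrary choice for illustration):

```python
import random

def add_gaussian_noise(pixels, sigma=10.0, seed=0):
    """Additive, signal-independent Gaussian noise: y = x + n with n ~ N(0, sigma^2),
    then rounding and clipping to the valid 8-bit range [0, 255]."""
    rng = random.Random(seed)
    return [min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in pixels]
```

Because the noise is zero-mean, the sample mean of a noisy flat patch stays close to the true intensity while individual pixels fluctuate with standard deviation sigma.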

Salt-and-pepper noise

Fat-tail distributed or “impulsive” noise is sometimes called salt-and-pepper noise or spike noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc. It can be mostly eliminated by using dark frame subtraction and interpolating around dark/bright pixels. Dead pixels in an LCD monitor produce a similar, but non-random, display.
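The impulsive behaviour described above is easy to model: a minimal Python sketch, assuming each pixel is independently replaced by "pepper" (0) or "salt" (255) with equal probability, and left untouched otherwise:

```python
import random

def add_salt_pepper(pixels, density=0.05, seed=0):
    """Impulsive (salt-and-pepper) noise: with probability density/2 a pixel
    becomes 0 ('pepper'), with probability density/2 it becomes 255 ('salt'),
    otherwise it is unchanged."""
    rng = random.Random(seed)
    out = []
    for p in pixels:
        r = rng.random()
        if r < density / 2:
            out.append(0)
        elif r < density:
            out.append(255)
        else:
            out.append(p)
    return out
```

Unlike Gaussian noise, only a fraction of pixels is affected, but those pixels carry no information about the original value, which is why median-type and interpolation-based fixes work better than averaging here.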

Shot noise

The dominant noise in the lighter parts of an image from an image sensor is typically that caused by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given exposure level. This noise is known as photon shot noise. Shot noise has a root-mean-square value proportional to the square root of the image intensity, and the noises at different pixels are independent of one another. Shot noise follows a Poisson distribution, which is usually not very different from Gaussian.

In addition to photon shot noise, there can be additional shot noise from the dark leakage current in the image sensor; this noise is sometimes known as “dark shot noise” or “dark-current shot noise”. Dark current is greatest at “hot pixels” within the image sensor. The variable dark charge of normal and hot pixels can be subtracted off (using “dark frame subtraction”), leaving only the shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.
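Photon shot noise can be simulated by drawing each observed pixel from a Poisson distribution whose mean is the true intensity. The sketch below is purely illustrative and uses Knuth's sampling algorithm, which is adequate for the modest intensities assumed here; it shows the defining property that the variance equals the mean, so the RMS fluctuation grows like the square root of the signal:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm for sampling Poisson(lam); fine for modest lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def shot_noise(pixels, seed=0):
    """Photon shot noise: each observed count is Poisson with mean equal to the
    true intensity, so variance equals the mean intensity."""
    rng = random.Random(seed)
    return [poisson_sample(p, rng) for p in pixels]
```

For a flat patch of intensity 100, the sample mean stays near 100 while the sample variance is also near 100, matching the Poisson property stated above.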

Quantization noise (uniform noise)

The noise caused by quantizing the pixels of a sensed image to a number of discrete levels is known as quantization noise. It has an approximately uniform distribution. Though it can be signal dependent, it will be signal independent if other noise sources are big enough to cause dithering, or if dithering is explicitly applied.
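A quick sketch of the effect: mapping 8-bit values onto a smaller number of levels produces a rounding error bounded by half the quantization step, approximately uniform over that interval. A mid-rise quantizer is assumed here purely for illustration:

```python
def quantize(pixels, levels=16):
    """Quantize values in [0, 255] to a given number of discrete levels,
    reconstructing each value at the center of its bin; the resulting error
    is bounded by half the step size and approximately uniform."""
    step = 256 / levels
    return [round(min(p // step, levels - 1) * step + step / 2) for p in pixels]
```

With 16 levels the step is 16, so the quantization error never exceeds 8 gray levels, consistent with the uniform-noise model above.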

Film grain

The grain of photographic film is a signal-dependent noise, with similar statistical distribution to shot noise. If film grains are uniformly distributed (equal number per area), and if each grain has an equal and independent probability of developing to a dark silver grain after absorbing photons, then the number of such dark grains in an area will be random with a binomial distribution. In areas where the probability is low, this distribution will be close to the classic Poisson distribution of shot noise. A simple Gaussian distribution is often used as an adequately accurate model.

Film grain is usually regarded as a nearly isotropic (non-oriented) noise source. Its effect is made worse by the distribution of silver halide grains in the film also being random.

Anisotropic noise

Some noise sources show up with a significant orientation in images. For example, image sensors are sometimes subject to row noise or column noise.

SOURCES OF NOISE

There are different sources of noise in a digital image. Some noise components, such as dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU), display non-uniform spatial characteristics. This type of noise is often referred to as fixed pattern noise (FPN) because the underlying spatial pattern is not time varying. Temporal noise, on the other hand, does not have a fixed spatial pattern. Dark current and photon shot noise, read noise, and reset noise are examples of temporal noise. The overall noise characteristics in an image depend on many factors, including sensor type, pixel dimensions, temperature, exposure time, and ISO speed. Noise is in general space varying and channel dependent. The blue channel is typically the noisiest channel due to the low transmittance of blue filters. In single-chip digital cameras, demosaicking algorithms are used to interpolate missing color components; hence, noise is not necessarily uncorrelated across pixels. An often neglected characteristic of image noise is its spatial frequency. Noise may have low-frequency (coarse-grain) and high-frequency (fine-grain) fluctuations. High-frequency noise is relatively easy to remove; on the other hand, it is difficult to distinguish between real signal and low-frequency noise.

Many denoising methods have been developed over the years; among these methods, wavelet thresholding is one of the most popular approaches. In wavelet thresholding, a signal is decomposed into its approximation (low-frequency) and detail (high-frequency) subbands; since most of the image information is concentrated in a few large coefficients, the detail subbands are processed with hard or soft thresholding operations. The critical task in wavelet thresholding is the threshold selection. Various threshold selection strategies have been proposed, for example, VisuShrink, SureShrink, and BayesShrink. In the VisuShrink approach, a universal threshold that is a function of the noise variance and the number of samples is developed based on the minimax error measure. The threshold value in the SureShrink approach is optimal in terms of Stein's unbiased risk estimator. The BayesShrink approach determines the threshold value in a Bayesian framework, by modeling the distribution of the wavelet coefficients as Gaussian. These shrinkage methods have later been improved by considering inter-scale and intra-scale correlations of the wavelet coefficients. One such method, known as the BLS-GSM method, is one of the benchmarks in the denoising literature due to its outstanding PSNR performance. Some recent methods have surpassed its PSNR performance. Among these methods, one constructs a global field of Gaussian scale mixtures to model subbands of wavelet coefficients as a product of two independent homogeneous Gaussian Markov random fields, and develops an iterative denoising algorithm.
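The thresholding operations just described can be sketched directly. The universal (VisuShrink) threshold T = σ√(2 ln N) and the hard/soft rules below are standard definitions; this Python fragment is only an illustration, not any particular paper's implementation:

```python
import math

def universal_threshold(noise_sigma, n):
    """VisuShrink universal threshold: T = sigma * sqrt(2 * ln N),
    where N is the number of samples."""
    return noise_sigma * math.sqrt(2.0 * math.log(n))

def hard_threshold(coeffs, t):
    """Hard thresholding: keep coefficients whose magnitude exceeds t, zero the rest."""
    return [v if abs(v) > t else 0.0 for v in coeffs]

def soft_threshold(coeffs, t):
    """Soft thresholding: zero small coefficients and shrink the rest toward zero by t."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in coeffs]
```

Hard thresholding preserves the magnitude of surviving coefficients, while soft thresholding shrinks them, trading a small bias for smoother reconstructions.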
Some papers develop gray-scale and color image denoising algorithms based on sparse and redundant representations over learned dictionaries, where both training and denoising use the K-SVD algorithm; others group 2-D image fragments into 3-D data arrays and apply a collaborative filtering procedure, which consists of a 3-D transformation, shrinkage of the transform spectrum, and an inverse 3-D transformation. Another paper models an ideal image patch as a linear combination of noisy image patches and formulates a total least squares estimation algorithm. A recently popular denoising method is the bilateral filter. Although the term "bilateral filter" was coined relatively recently, variants of it had been published earlier as the sigma filter, the neighborhood filter, and the SUSAN filter. The bilateral filter takes a weighted sum of the pixels in a local neighborhood; the weights depend on both the spatial distance and the intensity distance. In this way, edges are preserved well while noise is averaged out.

Although the bilateral filter was first proposed as an intuitive tool, recent papers have pointed out its connections with some well-established techniques. It has been shown that the bilateral filter is identical to the first iteration of the Jacobi algorithm (diagonally normalized steepest descent) with a specific cost function. Some papers relate the bilateral filter to anisotropic diffusion. The bilateral filter can also be viewed as a Euclidean approximation of the Beltrami flow, which produces a spectrum of image enhancement algorithms ranging from linear diffusion to nonlinear flows. Buades et al. propose a nonlocal means filter, where the similarity of local patches is used in determining the pixel weights. When the patch size is reduced to one pixel, the nonlocal means filter becomes equivalent to the bilateral filter. Some papers extend this work by controlling the neighborhood of each pixel adaptively. In addition to image denoising, the bilateral filter has also been used in other applications, including tone mapping, image enhancement, volumetric denoising, exposure correction, shape and detail enhancement from multiple images, and retinex. Some papers describe a fast implementation of the bilateral filter based on a piecewise-linear approximation in the intensity domain and appropriate sub-sampling in the spatial domain. Later work derives an improved acceleration scheme by expressing the filter in a higher-dimensional space, where the signal intensity is added as a third dimension to the spatial domain. Although the bilateral filter is being used more and more widely, there is not much theoretical basis for selecting the optimal values of its spatial and range parameters; these parameters are typically selected by trial and error.

WAVELET TRANSFORM

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. Wavelet analysis decomposes sounds and images into component waves of varying durations, called wavelets. These wavelets are localized vibrations of a sound signal, or localized variations of detail in an image. They can be used for a wide variety of fundamental signal processing tasks, such as compression, removing noise, or enhancing a recorded sound or image in various ways. It can typically be visualized as a “brief oscillation” like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a “reverse, shift, multiply and integrate” technique called convolution, with portions of a known signal to extract information from the unknown signal. For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly a 32nd note. If this wavelet was to be convolved with a signal created from the recording of a song, then the resulting signal would be useful for determining when the Middle C note was being played in the song. Mathematically, the wavelet will resonate if the unknown signal contains information of similar frequency – just as a tuning fork physically resonates with sound waves of its specific tuning frequency. This concept of resonance is at the core of many practical applications of wavelet theory.

As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of “complementary” wavelets will decompose data without gaps or overlap so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet based compression/decompression algorithms where it is desirable to recover the original information with minimal loss.

In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions.

The heart of wavelet analysis is multiresolution analysis. Multiresolution analysis is the decomposition of a signal (such as an image) into sub signals (sub images) of different size resolution levels.

Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filter banks. These filter banks are called the wavelet and scaling coefficients in wavelet nomenclature, and they may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and the corresponding sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and an exact frequency (or scale) to that event. The product of the uncertainties of time and frequency (scale) has a lower bound. Thus, in the scalogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Discrete wavelet bases may likewise be considered in the context of other forms of the uncertainty principle.
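As a concrete example of a discrete wavelet filter bank, the following hypothetical Python sketch performs a one-level 2-D Haar decomposition into an approximation (LL) subband and detail (LH, HL, HH) subbands. Subband naming conventions vary, and a real implementation would normally use a library transform; this is only an illustration of the filter-then-downsample structure described above:

```python
def haar2d(img):
    """One-level 2-D Haar decomposition of an even-sized grayscale image into
    LL, LH, HL, HH subbands: rows are filtered and downsampled first, then columns."""
    def step(rows):
        # filter each row with the Haar low/high-pass pair, then downsample by 2
        lo = [[(r[2*j] + r[2*j+1]) / 2**0.5 for j in range(len(r) // 2)] for r in rows]
        hi = [[(r[2*j] - r[2*j+1]) / 2**0.5 for j in range(len(r) // 2)] for r in rows]
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    L, H = step(img)             # along rows
    LL, LH = step(transpose(L))  # along columns of the low band
    HL, HH = step(transpose(H))  # along columns of the high band
    return transpose(LL), transpose(LH), transpose(HL), transpose(HH)
```

For a perfectly flat image all detail subbands are exactly zero and the LL subband carries the full (rescaled) image energy, which is the sparsity property that wavelet thresholding exploits.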

BILATERAL FILTER

Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term “filtering”, the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location. For example, Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which the weights decrease with distance from the neighborhood center. Although formal and quantitative explanations of this weight fall-off can be given, the intuition is that images typically vary slowly over space, so near pixels are likely to have similar values, and it is therefore appropriate to average them together. The noise values that corrupt these nearby pixels are mutually less correlated than the signal values, so noise is averaged away while signal is preserved.
The assumption of slow spatial variations fails at edges, which are consequently blurred by linear low-pass filtering. How can we prevent averaging across edges, while still averaging within smooth regions? Many efforts have been devoted to reducing this undesired effect. Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing.

A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter for images. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels. These weights can be based on a Gaussian distribution. Crucially, the weights depend not only on the Euclidean distance between pixels, but also on the radiometric differences (e.g. range differences, such as color intensity or depth distance). This preserves sharp edges: the filter loops through each pixel and adjusts the weights of its neighbors accordingly. The bilateral filter is ubiquitous in computational photography applications and increasingly common in computer graphics research, with practical uses ranging from image editing and tone mapping to video processing and more.

Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term “filtering,” the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location. In particular, Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which, the weights decrease with distance from the neighborhood center. Although formal and quantitative explanations of this weight fall-off can be given, the intuition is that images typically vary slowly over space, so near pixels are likely to have similar values, and it is therefore appropriate to average them together. The noise values that corrupt these nearby pixels are mutually less correlated than the signal values, so noise is averaged away while signal is preserved. The assumption of slow spatial variations fails at edges, which are consequently blurred by low-pass filtering. Many efforts have been devoted to reducing this undesired effect. How can we prevent averaging across edges, while still averaging within smooth regions? Anisotropic diffusion is a popular answer: local image variation is measured at every point, and pixel values are averaged from neighborhoods whose size and shape depend on local variation. Diffusion methods average over extended regions by solving partial differential equations, and are therefore inherently iterative. Iteration may raise issues of stability and, depending on the computational architecture, efficiency.
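The dual weighting just described — a spatial Gaussian on pixel distance multiplied by a range Gaussian on intensity difference — can be sketched in a few lines. This is a plain-Python illustration with hypothetical parameter values, not part of the MATLAB code used later:

```python
import math

def bilateral_weight(dx, dy, dI, sigma_d, sigma_r):
    """Combined weight: spatial Gaussian on pixel offset (dx, dy)
    times range Gaussian on intensity difference dI."""
    spatial = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_d ** 2))
    rng = math.exp(-(dI * dI) / (2.0 * sigma_r ** 2))
    return spatial * rng

# A neighbour at nearly the same intensity keeps almost the full spatial weight...
same_side = bilateral_weight(1, 0, 2, sigma_d=3.0, sigma_r=10.0)
# ...while an equally near neighbour across a strong edge is suppressed.
across_edge = bilateral_weight(1, 0, 120, sigma_d=3.0, sigma_r=10.0)
```

The second pixel is just as close spatially, but its large intensity difference drives the range term to essentially zero — this is exactly why edges are not averaged across.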

IMPLEMENTATION METHODOLOGY

Now that the background has been reviewed, this section describes the computational approach. We follow the algorithm below for the full computation of the results shown in the next section.

  1. Input an image
  2. Add noise to the image (as per user specifications) to obtain a noisy image
  3. Perform wavelet transform of this noisy image to obtain the sub-bands
    1. LL
    2. LH
    3. HL
    4. HH
  4. Any wavelet may be used for the decomposition (the code below uses Haar)
  5. Apply bilateral filters on LL sub-band
  6. Apply normal thresholding to the filtered result obtained in step 5
  7. This yields a new LL sub-band (newLL)
  8. Reconstruct the image using newLL, LH, HL and HH
  9. Perform bilateral filtering again on this reconstructed image
  10. The result is the final denoised image

Finally, compare the results on parameters such as MSE and PSNR.
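The decomposition and reconstruction in steps 3 and 8 can be sketched in plain Python. This is an average/difference Haar variant for illustration only — the MATLAB code below uses `wavedec2`/`waverec2` instead:

```python
# One level of a Haar-style split into four sub-bands, and its exact inverse.
def haar_split(vals):
    avg  = [(vals[2*i] + vals[2*i+1]) / 2.0 for i in range(len(vals) // 2)]
    diff = [(vals[2*i] - vals[2*i+1]) / 2.0 for i in range(len(vals) // 2)]
    return avg, diff

def haar_merge(avg, diff):
    out = []
    for a, d in zip(avg, diff):
        out += [a + d, a - d]   # inverts the average/difference pair
    return out

def transpose(m):
    return [list(col) for col in zip(*m)]

def haar2d(img):
    # split every row, then the columns of each half, giving LL/LH/HL/HH
    lo, hi = zip(*[haar_split(r) for r in img])
    def col_split(block):
        a, d = zip(*[haar_split(c) for c in transpose(list(block))])
        return transpose(list(a)), transpose(list(d))
    LL, LH = col_split(lo)
    HL, HH = col_split(hi)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    def col_merge(a, d):
        return transpose([haar_merge(ca, cd)
                          for ca, cd in zip(transpose(a), transpose(d))])
    lo, hi = col_merge(LL, LH), col_merge(HL, HH)
    return [haar_merge(rl, rh) for rl, rh in zip(lo, hi)]

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
LL, LH, HL, HH = haar2d(img)
rec = ihaar2d(LL, LH, HL, HH)   # reproduces img exactly
```

With no processing of the sub-bands, the reconstruction is exact; denoising happens precisely because the LL band is bilaterally filtered and the detail bands are thresholded between steps 3 and 8.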

MATLAB SOURCE CODE

Instructions to run the code

1. Copy each of the codes below into a separate M file.
2. Place all the files in the same folder.
3. Note that the codes are not listed in any particular order; copy them all before running.
4. Run the “final.m” file.

Code 1 – Script M File – final.m

clc
clear all
close all

final_psnr=[]; % final value of PSNR will be computed in this variable
% SIGMA1=[];
% SIGMA2=[];
% 
sigma1=10; % these are the parameters to be used in the bilateral filtering later in the code
sigma2=20;
% for ii=1:10    

% read the image
% in_image=read_image;
for i=1:6 % loop to select the images
    in_image=imread(sprintf('%d.png',i)); % read image i (1.png ... 6.png)

    IN_IMAGE=in_image; % input image to be stored in this variable
    figure, imshow(in_image) % display the input image
    title('Input image')
    pause(2)
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %adding noise to image
    for j=1:4 % loop to add noise in the image 
        in_image=add_noise(IN_IMAGE,j); % user defined function to add distinct noise to the image
        % in_image=imnoise(in_image,'salt & pepper',0.02);
        
        figure, imshow(uint8(in_image)) % display the noisy image
        title('Input image with noise')
        in_image=double(in_image); % changing the datatype of the image
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%         MEDIAN FILTERING ON NOISY IMAGE
        med_filt=medfilt2(in_image);
        figure, imshow(uint8(med_filt))
        title('Median filtering of this image')
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % level 1 wavelet transform    
        level1=1; % level of wavelettransform
        [C1,S1] = wavedec2(in_image,level1,'haar'); % wavelet transform
        LL=appcoef2(C1,S1,'haar',level1); % approximation coeff
        [LH,HL,HH]=detcoef2('all',C1,S1,level1); % detailing coeff
        figure, imshow(uint8([LL LH; HL HH])) % wavelet transformed image
        title('Level 1 wavelet transform')
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % level 2 wavelet transform
        % level2=2;
        % [C2,S2] = wavedec2(in_image,level2,'haar'); % wavelet transform
        % LLL=appcoef2(C2,S2,'haar',level2); % approximation coeff
        % [LLH,LHL,LHH]=detcoef2('all',C2,S2,level2); % detailing coeff
        % figure, imshow(uint8([[LLL LLH; LHL LHH] LH;HL HH]))
        % title('Level 2 wavelet transform')
        % pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on LLL
        % st=(S2(1,1)^2)+1;
        % LLL=C2(1:st-1);
        % % sigma2=sigmar(C2,S2);
        % C2(1:st-1)=demo(LLL,sigma1,sigma2);
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % first thresholding operation and first reconstruction
        % normalC2=thresholding_normal(C2,S2,level2);% thresholding
        % normalpic2=waverec2(normalC2,S2,'haar'); % reconstruction 
        % [bayesC2]=thresholding_bayes(C2,S2); % thresholding
        % bayespic2=waverec2(bayesC2,S2,'haar'); % reconstruction
        % 
        % thresholdimage1=normalpic2;
        % % thresholdimage1=bayespic2;
        % figure, imshow(uint8(thresholdimage1))
        % title('1st Thresholding and 1st Bilateral Filtering')
        % image1=IN_IMAGE;
        % image2=thresholdimage1;
        % psnr=manual_psnr(image1,image2);
        % final_psnr=[final_psnr psnr];
        % pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on LL
        st=(S1(1,1)^2)+1; 
        LL=C1(1:st-1); % approximation band
        % sigma2=sigmar(C1,S1);
        C1(1:st-1)=demo(LL,sigma1,sigma2); 
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % second thresholding operation and second reconstruction
        normalC1=thresholding_normal(C1,S1,level1);% thresholding
        normalpic1=waverec2(normalC1,S1,'haar'); % reconstruction
        % [bayesC1]=thresholding_bayes(C1,S1); % thresholding
        % bayespic1=waverec2(bayesC1,S1,'haar'); % reconstruction

        thresholdimage2=normalpic1;
        % thresholdimage2=bayespic1;
        figure, imshow(uint8(thresholdimage2))
        title('2nd Thresholding and 1st Bilateral Filtering')
        image1=IN_IMAGE;
        image2=thresholdimage2;
        psnr=manual_psnr(image1,image2);
        final_psnr=[final_psnr psnr]; % collecting all the psnr values
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on final reconstructed image
        out_image=demo(thresholdimage2,sigma1,sigma2);
        figure, imshow(uint8(out_image))
        title('3rd Bilateral Filtering')
        image1=IN_IMAGE;
        image2=out_image;
        psnr=manual_psnr(image1,image2);
        final_psnr=[final_psnr psnr];
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % image1=double(IN_IMAGE);
        % image2=double(out_image);
        % 
        % [PSNR_formula,MSE,MAXERR,L2RAT] = measerr(image1,image2);
        % PSNR_manual=manual_psnr(image1,image2);
        final_psnr
        % final_psnr=[final_psnr PSNR];
        % SIGMA1=[SIGMA1 sigma1];
        % SIGMA2=[SIGMA2 sigma2];
        % 
        % sigma1=sigma1+10;
        % end
        % 
        % [final_psnr;SIGMA1;SIGMA2]
%         pause        
    end
    pause(2)
    close all
    pause(2)
end

Code 2 – Function M File – add_noise.m

function g=add_noise(f,ud)

% display('enter the type of noise:');
% display('    1    for salt & pepper');
% display('    2    for gaussian');    
% display('    3    for poisson');
% display('    4    for speckle');
% ud=input('enter the value:');
% while isempty(ud)
%     ud=input('enter the value:');
% end
switch ud
    case 1
%         display('enter the % of noise(Ex:0.2)');
%         ud1=input('pls enter:     ');
%         g=imnoise(f,'salt & pepper',ud1);
        g=imnoise(f,'salt & pepper');
    case 2   
%         display('enter the noise varience:  ');
%         va=input('enter between 0.01 to 0.09:   ');
%         g=imnoise(f,'gaussian',0,va);
        g=imnoise(f,'gaussian');
    case 3
        g=imnoise(f,'poisson');
    case 4
%         display('enter the varience of noise(Ex:0.02)');
%         ud1=input('pls enter:     ');
%         g=imnoise(f,'speckle',ud1);   
        g=imnoise(f,'speckle');   
end
end

Code 3 – Function M File – BAYES.m

function [LH,HL,HH]=BAYES(C,S,LH,HL,HH)

var=length(C)-S(size(S,1)-1,1)^2 +1;
sigmahat=median(abs(C(var:length(C))))/0.6745;

thr=bayes2(LH(:)',sigmahat);
LH=sthresh(LH,thr);

thr=bayes2(HL(:)',sigmahat);
HL=sthresh(HL,thr);

thr=bayes2(HH(:)',sigmahat);
HH=sthresh(HH,thr);

end

Code 4 – Function M File – bayes2.m

% Function to calculate Threshold for BayesShrink

function threshold=bayes2(X,sigmahat)

len=length(X);
sigmay2=sum(X.^2)/len;
sigmax=sqrt(max(sigmay2-sigmahat^2,0));
if sigmax==0
    threshold=max(abs(X));
else
    threshold=sigmahat^2./sigmax;
end

end
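
For readers without MATLAB, the same threshold computation can be expressed in plain Python (a sketch mirroring bayes2.m above):

```python
import math

def bayes_threshold(coeffs, sigma_hat):
    """BayesShrink-style threshold, mirroring bayes2.m:
    T = sigma_n^2 / sigma_x with sigma_x^2 = max(E[y^2] - sigma_n^2, 0)."""
    sigma_y2 = sum(c * c for c in coeffs) / len(coeffs)
    sigma_x = math.sqrt(max(sigma_y2 - sigma_hat ** 2, 0.0))
    if sigma_x == 0.0:
        # the band looks like pure noise: threshold everything away
        return max(abs(c) for c in coeffs)
    return sigma_hat ** 2 / sigma_x

thr = bayes_threshold([3.0, -4.0], 2.5)   # sigma_y2 = 12.5, sigma_x = 2.5, thr = 2.5
```

Note the two regimes: when the observed band energy exceeds the noise variance, the threshold is inversely proportional to the estimated signal strength; otherwise the band is treated as pure noise and suppressed entirely.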

Code 5 – Function M File – bfilter2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Pre-process input and select appropriate filter.
function B = bfilter2(A,w,sigma)

% Verify that the input image exists and is valid.
if ~exist('A','var') || isempty(A)
   error('Input image A is undefined or invalid.');
end
if ~isfloat(A) || ~sum([1,3] == size(A,3)) || ...
      min(A(:)) < 0 || max(A(:)) > 1
   error(['Input image A must be a double precision ',...
          'matrix of size NxMx1 or NxMx3 on the closed ',...
          'interval [0,1].']);      
end

% Verify bilateral filter window size.
if ~exist('w','var') || isempty(w) || ...
      numel(w) ~= 1 || w < 1
   w = 5;
end
w = ceil(w);

% Verify bilateral filter standard deviations.
if ~exist('sigma','var') || isempty(sigma) || ...
      numel(sigma) ~= 2 || sigma(1) <= 0 || sigma(2) <= 0
   sigma = [3 0.1];
end

% Apply either grayscale or color bilateral filtering.
if size(A,3) == 1
   B = bfltGray(A,w,sigma(1),sigma(2));
else
   B = bfltColor(A,w,sigma(1),sigma(2));
end


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implements bilateral filtering for grayscale images.
function B = bfltGray(A,w,sigma_d,sigma_r)

% Pre-compute Gaussian distance weights.
[X,Y] = meshgrid(-w:w,-w:w);
G = exp(-(X.^2+Y.^2)/(2*sigma_d^2));

% Create waitbar.
h = waitbar(0,'Applying bilateral filter...');
set(h,'Name','Bilateral Filter Progress');

% Apply bilateral filter.
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
   for j = 1:dim(2)
      
         % Extract local region.
         iMin = max(i-w,1);
         iMax = min(i+w,dim(1));
         jMin = max(j-w,1);
         jMax = min(j+w,dim(2));
         I = A(iMin:iMax,jMin:jMax);
      
         % Compute Gaussian intensity weights.
         H = exp(-(I-A(i,j)).^2/(2*sigma_r^2));
      
         % Calculate bilateral filter response.
         F = H.*G((iMin:iMax)-i+w+1,(jMin:jMax)-j+w+1);
         B(i,j) = sum(F(:).*I(:))/sum(F(:));
               
   end
   waitbar(i/dim(1));
end

% Close waitbar.
close(h);


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implements bilateral filter for color images.
function B = bfltColor(A,w,sigma_d,sigma_r)

% Convert input sRGB image to CIELab color space.
if exist('applycform','file')
   A = applycform(A,makecform('srgb2lab'));
else
   A = colorspace('Lab<-RGB',A);
end

% Pre-compute Gaussian domain weights.
[X,Y] = meshgrid(-w:w,-w:w);
G = exp(-(X.^2+Y.^2)/(2*sigma_d^2));

% Rescale range variance (using maximum luminance).
sigma_r = 100*sigma_r;

% Create waitbar.
h = waitbar(0,'Applying bilateral filter...');
set(h,'Name','Bilateral Filter Progress');

% Apply bilateral filter.
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
   for j = 1:dim(2)
      
         % Extract local region.
         iMin = max(i-w,1);
         iMax = min(i+w,dim(1));
         jMin = max(j-w,1);
         jMax = min(j+w,dim(2));
         I = A(iMin:iMax,jMin:jMax,:);
      
         % Compute Gaussian range weights.
         dL = I(:,:,1)-A(i,j,1);
         da = I(:,:,2)-A(i,j,2);
         db = I(:,:,3)-A(i,j,3);
         H = exp(-(dL.^2+da.^2+db.^2)/(2*sigma_r^2));
      
         % Calculate bilateral filter response.
         F = H.*G((iMin:iMax)-i+w+1,(jMin:jMax)-j+w+1);
         norm_F = sum(F(:));
         B(i,j,1) = sum(sum(F.*I(:,:,1)))/norm_F;
         B(i,j,2) = sum(sum(F.*I(:,:,2)))/norm_F;
         B(i,j,3) = sum(sum(F.*I(:,:,3)))/norm_F;
                
   end
   waitbar(i/dim(1));
end

% Convert filtered image back to sRGB color space.
if exist('applycform','file')
   B = applycform(B,makecform('lab2srgb'));
else  
   B = colorspace('RGB<-Lab',B);
end

% Close waitbar.
close(h);
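
The per-pixel computation in bfltGray above — domain weight times range weight, then normalization — can be illustrated on a 1-D signal. This plain-Python sketch is for intuition only and is not the code used in the experiments:

```python
import math

def bilateral_1d(signal, w, sigma_d, sigma_r):
    """1-D analogue of bfltGray: each sample is replaced by a
    spatially- and range-weighted average of its neighbourhood."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(i - w, 0), min(i + w, n - 1) + 1):
            g = math.exp(-((j - i) ** 2) / (2.0 * sigma_d ** 2))            # domain weight
            h = math.exp(-((signal[j] - signal[i]) ** 2) / (2.0 * sigma_r ** 2))  # range weight
            num += g * h * signal[j]
            den += g * h
        out.append(num / den)
    return out

# A step edge survives bilateral filtering almost untouched, because
# samples across the edge receive negligible range weight.
step = [0.0] * 8 + [100.0] * 8
smoothed = bilateral_1d(step, w=3, sigma_d=2.0, sigma_r=5.0)
```

A linear Gaussian filter with the same window would smear the step over several samples; here the two sides of the edge are averaged only with themselves.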

Code 6 – Function M File – demo.m

function outImg=demo(inImg,sigma1,sigma2)
%  inImg      :  grayscale image
%  sigma1     : width of spatial Gaussian (d)
%  sigma2     : width of range Gaussian (r)
%  [-w, w]^2  : domain of spatial Gaussian
%  tol        : truncation error
%Img    =  double( imread('./images/ckb.jpg') );
% Img    =  double( imread('./images/barbara.jpg') );
%[m, n] = size(Img);

% create noisy image
% sigma  =  40;
% inImg  =  Img + sigma * rand(m, n);

% filter parameters
% sigma1 = 20;
% sigma2 = 30;
tol    = 0.01;

% make odd
if (mod(sigma1,2) == 0)
  w  = sigma1 + 1;
else
  w  = sigma1;
end

% call bilateral filter
[outImg, param] =  shiftableBF(inImg, sigma1, sigma2, w, tol);

end

Code 7 – Function M File – haar_reconstruction.m

function recon=haar_reconstruction(f1,f2)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

recon=[];
for i=1:length(f1)
    recon=[recon (f1(i)+f2(i))/2  (f1(i)-f2(i))/2];
end

end

Code 8 – Function M File – manual_psnr.m

function psnr=manual_psnr(image1,image2)
image1=double(image1);
image2=double(image2);

[r c]=size(image1);
mse=sum((image1(:)-image2(:)).^2)/(r*c); % formula for MSE
psnr=10*log10(255*255/mse); % formula for PSNR

end
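
The same two quality metrics can be sketched in plain Python (note that for identical images the MSE is zero and the PSNR is infinite, which MATLAB's log10 also yields naturally):

```python
import math

def mse_psnr(image1, image2, peak=255.0):
    """MSE and PSNR for two equal-sized images given as flat pixel lists."""
    n = len(image1)
    mse = sum((a - b) ** 2 for a, b in zip(image1, image2)) / n
    if mse == 0:
        return 0.0, float('inf')   # identical images: PSNR is infinite
    return mse, 10.0 * math.log10(peak * peak / mse)

mse, psnr = mse_psnr([0, 0, 0, 0], [5, 5, 5, 5])   # mse = 25.0
```

Higher PSNR means the denoised image is closer to the original; the tables in the results section are built from exactly this quantity.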

Code 9 – Function M File – maxFilter.m

function  T  =   maxFilter(inImg , w)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  Computes the maximum 'local' dynamic range
%
%  inImg       : grayscale image
%  [-w, w]^2   : search window (w must be odd)
%  T           : maximum local dynamic range 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

T = -1;

sym    = (w - 1)/2;

[m, n] = size(inImg); % m-rows of input image, n-columns of input image

pad1 =  w*ceil(m/w) - m;   % ceil rounds the approximate value towards +ve infinity
pad2 =  w*ceil(n/w) - n;

inImg = padarray(inImg, [pad1 pad2], 'symmetric', 'post'); % refer help documentation of "padarray"

template = inImg;

m = m + pad1;
n = n + pad2;

% scan along row
for ii = 1 : m
    L     = zeros(n, 1);% zeros create a matrix of all 0 elements, dimensions as defined by the input values
    R     = zeros(n, 1);
    L(1)  = template(ii, 1);
    R(n)  = template(ii, n);
    
    for k = 2 : n
        if  mod(k - 1, w) == 0
            L(k)          = template(ii ,  k);
            R(n - k + 1)  = template(ii ,  n - k + 1);
        else
            L(k)          = max( L(k-1) , template(ii, k) );
            R(n - k + 1)  = max( R(n - k + 2), template(ii, n - k + 1) );
        end
    end
    
    for k = 1 : n
     p = k - sym;
     q = k + sym;
     if p < 1
         r = -1;
     else
         r = R(p);
     end
     if q > n
         l = -1;
     else
         l = L(q);
     end
        template(ii, k) = max(r,l);
    end
    
end

% scan along column
for jj = 1 : n
    
    L    = zeros(m, 1);
    R    = zeros(m, 1);
    L(1) = template(1, jj);
    R(m) = template(m, jj);
    
    for k = 2 : m 
        if  mod(k - 1, w) == 0
            L(k)          = template(k, jj);
            R(m - k + 1)  = template(m - k + 1, jj);
        else
            L(k)          = max( L(k - 1), template(k, jj) );
            R(m - k + 1)  = max( R(m - k + 2), template(m - k + 1, jj));
        end
    end
    
    for k = 1 : m
     p = k - sym;
     q = k + sym;
     if p < 1
         r = -1;
     else
         r = R(p);
     end
     if q > m
         l = -1;
     else
         l = L(q);
     end
        temp = max(r,l) - inImg(k, jj);
        if temp > T
            T = temp;
        end
    end
    
end


Code 10 – Function M File – read_image.m

function f=read_image

disp('select the image');
disp('1:lena.png');
disp('2:barbara.png');
disp('3:boat.png');
disp('4:house.png');
disp('5:peppers256.png');
disp('6:cameraman.jpg');
disp('7:hyderabad512.png');
ss1=input('enter your choice:');
while isempty(ss1)
    ss1=input('enter your choice:');
end
    
switch ss1
    case 1        
        f=imread('lena.png');       
    case 2
        f=imread('barbara.png');
    case 3
        f=imread('boat.png');
    case 4
        f=imread('house.png');
    case 5
        f=imread('peppers256.png');
    case 6
        f=imread('cameraman.jpg');
    case 7
        f=imread('hyderabad512.png');
end

end

Code 11 – Function M File – shiftable_jointBF.m

function [ outImg , param ]  =  shiftable_jointBF(inImg, rangeImg, sigmaS, sigmaR, w, tol)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%   Filtering operation
%
%   inImg      : MxNxB image to filter (single/double)
%   rangeImg   : MxN image used for pixel similarity (single/double)
%   outImg     : filtered image of size MxNxB
%   sigmaS     : width of spatial Gaussian
%   sigmaR     : width of range Gaussian
%   [-w, w]^2  : domain of spatial Gaussian
%   tol        : truncation error
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


% create spatial filter
filt     = fspecial('gaussian', [w w], sigmaS);

% set range interval and the order of raised cosine
T  =  maxFilter(rangeImg, w); %  Computes the maximum 'local' dynamic range
N  =  ceil( 0.405 * (T / sigmaR)^2 );  % ceil rounds the approximate value towards +ve infinity

gamma    =  1 / (sqrt(N) * sigmaR);
twoN     =  2^N;

% compute truncation
if tol == 0
    M = 0;
else
    if sigmaR > 40
        M = 0;
    elseif sigmaR > 10
        sumCoeffs = 0;
        for k = 0 : round(N/2)
            sumCoeffs = sumCoeffs + nchoosek(N,k)/twoN;
            % nchoosek(N,k) returns the binomial coefficient, defined as n!/((n-k)! k!).
            % This is the number of combinations of n items taken k at a time.
            if sumCoeffs > tol/2
                M = k;
                break;
            end
        end
    else
        M = ceil( 0.5 * ( N - sqrt(4*N*log(2/tol)) ) );
    end
end


% main filter
[m, n, b]  =  size(inImg);
outImg1    =  zeros(m, n, b); % zeros create a matrix of all 0 elements, dimensions as defined by the input values
outImg2    =  zeros(m, n, b);
outImg     =  zeros(m, n, b);

warning off; %#ok<WNOFF>

for k = M : N - M
    
    coeff = nchoosek(N,k) / twoN;
    
    temp1  = cos( (2*k - N) * gamma * rangeImg);
    temp2  = sin( (2*k - N) * gamma * rangeImg);
    
    if size(inImg, 3) > 1
      temp1 = repmat(temp1, [1 1 size(inImg, 3)]);
      % B = repmat(A,[m n p...]) produces a multidimensional array B composed
      % of copies of A. The size of B is [size(A,1)*m, size(A,2)*n, size(A,3)*p, ...].
      temp2 = repmat(temp2, [1 1 size(inImg, 3)]);
    end
    
    phi1 =  imfilter(inImg .* temp1, filt);
%     B = imfilter(A,h) filters the multidimensional array A with the multidimensional filter h. 
% The array A can be logical or a non sparse numeric array of any class and dimension. 
% The result B has the same size and class as A
    phi2 =  imfilter(inImg .* temp2, filt);
    phi3 =  imfilter(temp1, filt);
    phi4 =  imfilter(temp2, filt);
    
    outImg1 = outImg1 + coeff * ( temp1 .* phi1 +  temp2 .* phi2 );
    outImg2 = outImg2 + coeff * ( temp1 .* phi3 +  temp2 .* phi4 );
    
end

idx1 = find( outImg2 < 0.0001);
idx2 = find( outImg2 > 0.0001);

outImg( idx1 ) = inImg( idx1 );
outImg( idx2 ) = outImg1( idx2 ) ./ outImg2 (idx2 );

% save parameters in the structure "param"
param.T  = T;
param.N  = N;
param.M  = M;

Code 12 – Function M File – shiftableBF.m

function [ outImg , param ]  =  shiftableBF(inImg, sigmaS, sigmaR, w, tol)

%   Filtering operation
%
%   inImg      :  grayscale image
%   sigmaS     : width of spatial Gaussian
%   sigmaR     : width of range Gaussian
%   [-w, w]^2  : domain of spatial Gaussian
%   tol        : truncation error

% create spatial filter
filt = fspecial('gaussian', [w w], sigmaS);

% set range interval and the order of raised cosine
T  =  maxFilter(inImg, w); % maximum 'local' dynamic range
N  =  ceil( 0.405 * (T / sigmaR)^2 );

gamma    =  1 / (sqrt(N) * sigmaR);
twoN     =  2^N;

% compute truncation
if tol == 0
    M = 0;
else
    if sigmaR > 40
        M = 0;
    elseif sigmaR > 10
        sumCoeffs = 0;
        for k = 0 : round(N/2)
            sumCoeffs = sumCoeffs + nchoosek(N,k)/twoN;
            if sumCoeffs > tol/2
                M = k;
                break;
            end
        end
    else
        M = ceil( 0.5 * ( N - sqrt(4*N*log(2/tol)) ) );
    end
end


% main filter
[m, n]   =  size(inImg);
outImg1  =  zeros(m, n);
outImg2  =  zeros(m, n);
outImg   =  zeros(m, n);

h = waitbar(0, 'Computing filter ...'); 

warning('off'); %#ok<WNOFF>

for k = M : N - M
    
    waitbar((k - M + 1) / (N - 2*M + 1), h); % fraction of the k = M : N-M loop completed
    
    coeff = nchoosek(N,k) / twoN;
    
    temp1  = cos( (2*k - N) * gamma * inImg);
    temp2  = sin( (2*k - N) * gamma * inImg);
    
    phi1 =  imfilter(inImg .* temp1, filt);
    phi2 =  imfilter(inImg .* temp2, filt);
    phi3 =  imfilter(temp1, filt);
    phi4 =  imfilter(temp2, filt);
    
    outImg1 = outImg1 + coeff * ( temp1 .* phi1 +  temp2 .* phi2 );
    outImg2 = outImg2 + coeff * ( temp1 .* phi3 +  temp2 .* phi4 );
    
end

close(h);

% avoid division by zero
idx1 = find( outImg2 < 0.0001);
idx2 = find( outImg2 > 0.0001);

outImg( idx1 ) = inImg( idx1 );
outImg( idx2 ) = outImg1( idx2 ) ./ outImg2 (idx2 );

% save parameters
param.T  = T;
param.N  = N;
param.M  = M;

Code 13 – Function M File – wavelet_transform_rec.m

function img=wavelet_transform_rec(LL,LH,HL,HH)

[r c]=size(LL); 
L=[];
H=[];
for i=1:c % reconstruction level 1
    f1=LL(:,i)'; % LL band
    f2=HL(:,i)'; % HL band
    recon1=haar_reconstruction(f1,f2); % reconstruction operation
    L=[L recon1'];
    
    f3=LH(:,i)'; % LH band
    f4=HH(:,i)'; % HH band
    recon2=haar_reconstruction(f3,f4); % reconstruction operation
    H=[H recon2'];
end

[r c]=size(L);
img=[];
for i=1:r% reconstruction level 2
    f1=L(i,:); % L band
    f2=H(i,:); % H band
    recon=haar_reconstruction(f1,f2); % reconstruction operation
    img=[img; recon];
end

end

Code 14 – Function M File – thresholding_normal.m

function [normalC,sigmahat]=thresholding_normal(C,S,level)

st=(S(1,1)^2)+1; % initialising the indexing of coeff of LL, which will not be thresholded
normalC=[C(1:st-1),zeros(1,length(st:1:length(C)))]; % this variable will
% store the transform coefficients; it keeps accumulating newly thresholded
% coefficient values until all the values are thresholded (thresholding is
% done in the for loop below)

%Calculating sigmahat
var=length(C)-S(size(S,1)-1,1)^2 +1;
sigmahat=median(abs(C(var:length(C))))/0.6745;
for jj=2:size(S,1)-1
    %for the H detail coefficients
    coefh=C(st:st+S(jj,1)^2-1);
    len=length(coefh);
    sigmay2=sum(coefh.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;    
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefh,thr);
    st=st+S(jj,1)^2;    
    %for the V detail coefficients
    coefv=C(st:st+S(jj,1)^2-1);
    len=length(coefv);
    sigmay2=sum(coefv.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefv,thr);
    st=st+S(jj,1)^2;    
    %for diag detail coefficients
    coefd=C(st:st+S(jj,1)^2-1);
    len=length(coefd);
    sigmay2=sum(coefd.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefd,thr);
    st=st+S(jj,1)^2;
end

Code 15 – Function M File – thresholding_bayes.m

function [bayesC]=thresholding_bayes(C,S)

st=(S(1,1)^2)+1;
bayesC=[C(1:st-1),zeros(1,length(st:1:length(C)))];
%S(size(S,1)-1,1)^2
var=length(C)-S(size(S,1)-1,1)^2 +1;

%Calculating sigmahat
sigmahat=median(abs(C(var:length(C))))/0.6745;

for jj=2:size(S,1)-1
    %for the H detail coefficients
    coefh=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefh,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefh,thr);
    st=st+S(jj,1)^2;

    % for the V detail coefficients
    coefv=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefv,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefv,thr);
    st=st+S(jj,1)^2;

    %for Diag detail coefficients 
    coefd=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefd,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefd,thr);
    st=st+S(jj,1)^2;
end

end

Code 16 – Function M File – sthresh.m

function op=sthresh(X,T)

%A function to perform soft thresholding on a 
%given an input vector X with a given threshold T
% S=sthresh(X,T);
    ind=find(abs(X)<=T);
    ind1=find(abs(X)>T);
    X(ind)=0; % indexes below T are set to 0
    X(ind1)=sign(X(ind1)).*(abs(X(ind1))-T); 
    op=X;
end
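
The same soft-thresholding rule can be written in plain Python (a sketch mirroring sthresh.m above):

```python
def soft_threshold(x, t):
    """Soft thresholding (mirrors sthresh.m): zero every value whose
    magnitude is <= t, and shrink the rest toward zero by t."""
    out = []
    for v in x:
        if abs(v) <= t:
            out.append(0.0)
        else:
            sign = 1.0 if v > 0 else -1.0
            out.append(sign * (abs(v) - t))
    return out

vals = soft_threshold([3.0, -1.0, 0.5, -4.0], 2.0)   # [1.0, 0.0, 0.0, -2.0]
```

Unlike hard thresholding, the surviving coefficients are also shrunk by t, which avoids the discontinuity at the threshold and tends to produce smoother denoised images.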

Code 17 – Function M File – sigmar.m

function sig=sigmar(C,S)

var=length(C)-S(size(S,1)-1,1)^2 +1;
sig=median(abs(C(var:length(C))))/0.6745;
sig=sig*2;

end
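
The estimator in sigmar.m (before the final doubling) is the standard median-absolute-deviation noise estimate: for Gaussian noise, median(|d|)/0.6745 of the finest-scale detail coefficients approximates the noise standard deviation. A plain-Python sketch:

```python
def mad_sigma(coeffs):
    """Robust noise estimate from finest-scale wavelet coefficients:
    sigma_hat = median(|d|) / 0.6745 (the Gaussian MAD constant)."""
    mags = sorted(abs(c) for c in coeffs)
    n = len(mags)
    med = mags[n // 2] if n % 2 else 0.5 * (mags[n // 2 - 1] + mags[n // 2])
    return med / 0.6745

sigma = mad_sigma([-0.6745, 0.6745, 1.349])   # exactly 1.0
```

The median makes the estimate robust: a few large coefficients caused by genuine image edges barely move it, whereas they would badly inflate a mean-based estimate.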
