An Investigation into Image Hiding Steganography with Digital Signature Framework

ABSTRACT

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information within another cover medium. While text data hiding is well established in computer security applications, image hiding is rapidly gaining popularity owing to its prevailing applications, as an image has a far greater capacity to contain useful information. In this paper, we carefully investigate the concept of steganography by incorporating image hiding within another image under a secure structural digital signature framework. Our proposed work includes initial image preprocessing through filtering of the host image, followed by embedding of the secret image and a description of the image data within the host image. The stego image is then given as input to the digital signature framework, by which we ensure the secure, authentic and error-free transmission of our secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

INTRODUCTION

Since the rise of the Internet, one of the most important concerns of information technology and communication has been the security of information. Cryptography was created as a technique for securing the secrecy of communication, and many different methods have been developed to encrypt and decrypt data in order to keep a message secret. Unfortunately, it is sometimes not enough to keep the contents of a message secret; it may also be necessary to keep the existence of the message secret. The technique used to achieve this is called steganography. Steganography is the art and science of invisible communication. This is accomplished by hiding information within other information, thus hiding the existence of the communicated information. The word steganography is derived from the Greek words “stegos”, meaning “cover”, and “grafia”, meaning “writing”, defining it as “covered writing”. In image steganography the information is hidden exclusively in images.

The idea and practice of hiding information has a long history. In the Histories, the Greek historian Herodotus writes of a nobleman, Histaeus, who needed to communicate with his son-in-law in Greece. He shaved the head of one of his most trusted slaves and tattooed the message onto the slave’s scalp. When the slave’s hair grew back, the slave was dispatched with the hidden message. In the Second World War the microdot technique was developed by the Germans: information, especially photographs, was reduced in size until it was the size of a typed period. Extremely difficult to detect, a normal cover message was sent over an insecure channel with one of the periods on the paper containing hidden information. Today steganography is mostly used on computers, with digital data as the carriers and networks as the high-speed delivery channels.

Steganography differs from cryptography in the sense that where cryptography focuses on keeping the contents of a message secret, steganography focuses on keeping the existence of the message secret. Steganography and cryptography are both ways to protect information from unwanted parties, but neither technology alone is perfect and either can be compromised. Once the presence of hidden information is revealed, or even suspected, the purpose of steganography is partly defeated. The strength of steganography can thus be amplified by combining it with cryptography. Two other technologies closely related to steganography are watermarking and fingerprinting. These technologies are mainly concerned with the protection of intellectual property, so their algorithms have different requirements from those of steganography; the requirements of a good steganographic algorithm are discussed below. In watermarking, all instances of an object are “marked” in the same way, and the kind of information hidden is usually a signature signifying origin or ownership for the purpose of copyright protection. With fingerprinting, on the other hand, different, unique marks are embedded in distinct copies of the carrier object that are supplied to different customers. This enables the intellectual property owner to identify customers who break their licensing agreement by supplying the property to third parties. In watermarking and fingerprinting the fact that information is hidden inside the files may be public knowledge – sometimes it may even be visible – while in steganography the imperceptibility of the information is crucial. A successful attack on a steganographic system consists of an adversary observing that information is hidden inside a file, while a successful attack on a watermarking or fingerprinting system would be not to detect the mark, but to remove it.

Steganography concepts

Although steganography is an ancient subject, the modern formulation of it is often given in terms of the prisoner’s problem proposed by Simmons, where two inmates wish to communicate in secret to hatch an escape plan. All of their communication passes through a warden who will throw them into solitary confinement should she suspect any covert communication. The warden, who is free to examine all communication exchanged between the inmates, can be either passive or active. A passive warden simply examines the communication to try to determine whether it potentially contains secret information. If she suspects a communication to contain hidden information, a passive warden takes note of the detected covert communication, reports it to some outside party and lets the message through without blocking it. An active warden, on the other hand, will deliberately try to alter communication suspected of containing hidden information, in order to remove that information.

Different kinds of steganography

Almost all digital file formats can be used for steganography, but the formats that are more suitable are those with a high degree of redundancy. Redundancy can be defined as the bits of an object that provide accuracy far greater than necessary for the object’s use and display. The redundant bits of an object are those bits that can be altered without the alteration being detected easily. Image and audio files especially comply with this requirement, while research has also uncovered other file formats that can be used for information hiding. The figure below shows the four main categories of file formats that can be used for steganography.

Hiding information in text is historically the most important method of steganography. An obvious method was to hide a secret message in every nth letter of every word of a text message. It is only since the beginning of the Internet, and all the different digital file formats, that it has decreased in importance. Text steganography using digital files is not used very often, since text files contain a very small amount of redundant data. Given the proliferation of digital images, especially on the Internet, and given the large number of redundant bits present in the digital representation of an image, images are the most popular cover objects for steganography; this paper focuses on hiding information in images in the following sections. To hide information in audio files, techniques similar to those for image files are used. One technique unique to audio steganography is masking, which exploits the properties of the human ear to hide information unnoticeably: a faint but audible sound becomes inaudible in the presence of another, louder audible sound. This property creates a channel in which to hide information. Although nearly equal to images in steganographic potential, the larger size of meaningful audio files makes them less popular to use than images. The term protocol steganography refers to the technique of embedding information within messages and network control protocols used in network transmission. In the layers of the OSI network model there exist covert channels where steganography can be used. An example of where information can be hidden is the header of a TCP/IP packet, in fields that are either optional or never used. A paper by Ahsan and Kundur provides more information on this.

Image steganography

As stated earlier, images are the most popular cover objects used for steganography.  In the domain of digital images many different image file formats exist, most of them for specific applications.  For these different image file formats, different steganographic algorithms exist.  

Image definition

To a computer, an image is a collection of numbers that constitute different light intensities in different areas of the image. This numeric representation forms a grid, and the individual points are referred to as pixels. Most images on the Internet consist of a rectangular map of the image’s pixels (represented as bits) recording where each pixel is located and what its colour is. These pixels are displayed horizontally, row by row. The number of bits in a colour scheme, called the bit depth, refers to the number of bits used for each pixel. The smallest bit depth in current colour schemes is 8, meaning that 8 bits are used to describe the colour of each pixel. Monochrome and greyscale images use 8 bits per pixel and are able to display 256 different colours or shades of grey. Digital colour images are typically stored in 24-bit files and use the RGB colour model, also known as true colour. All colour variations for the pixels of a 24-bit image are derived from three primary colours: red, green and blue, and each primary colour is represented by 8 bits. Thus, in one given pixel there can be 256 different quantities of red, green and blue, adding up to more than 16 million combinations and resulting in more than 16 million colours. Not surprisingly, the larger the number of colours that can be displayed, the larger the file size.
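To make this representation concrete, the short MATLAB fragment below reads a built-in test image and inspects the bytes of a single pixel; peppers.png ships with MATLAB, and the snippet is only an illustration of the bit depths described above, not part of the proposed framework.

img = imread('peppers.png');     % a 24-bit RGB image, stored as a uint8 array
size(img)                        % rows x columns x 3 colour channels
img(1,1,:)                       % the 8-bit R, G and B values of the top-left pixel

gray = rgb2gray(img);            % 8-bit grayscale: 256 shades of grey
dec2bin(gray(1,1), 8)            % the 8 bits that encode one grayscale pixel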

OBJECTIVE

The transmission of digital colour images often suffers from data redundancy, which requires a huge storage space. In order to reduce the transmission and storage cost, the image is compressed by lowering the number of possible colours in the image, which in turn reduces the image size to a great extent. In this regard, colour quantization can be carried out, which approximates the original pixels of the secret image with their nearest representative colours and thus reduces the number of possible colours. This approximation intends to preserve the image quality as much as possible, so that the visual similarity between the original and the optimized image is kept. Since these methods depend heavily on the colour data sets they encounter and perform the quantization accordingly, their performance varies with the image being quantized. Authentication of the sender is yet another challenging issue in computer security: malicious forgery can take place if authentication is not ensured properly. The idea of a digital signature is very significant, as it ensures the authenticity of the sender as well as the transmission of the correct data; any change in a pixel can be distinguished from the actual set of pixels. The robustness of the digital signature framework is widely accepted for the transmission of secret information over insecure networks.
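As a hedged illustration of colour quantization, the fragment below uses the Image Processing Toolbox function rgb2ind, which approximates each pixel by its nearest representative colour from a reduced palette; the palette size of 32 is an arbitrary choice for demonstration, not a value prescribed by the framework.

rgb = imread('peppers.png');                      % 24-bit image: over 16 million possible colours
nColors = 32;                                     % target palette size (assumed for illustration)
[idx, map] = rgb2ind(rgb, nColors, 'nodither');   % minimum-variance quantization
quantized = ind2rgb(idx, map);                    % rebuild an RGB image from the palette

figure
subplot(1,2,1), imshow(rgb),       title('Original')
subplot(1,2,2), imshow(quantized), title('32-colour quantized')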

PROBLEM FORMULATION

The above-mentioned factors have motivated us to develop a framework to support steganography for hiding images with the application of a Structural Digital Signature (SDS). Our proposed framework includes an initial preprocessing of host images to eliminate unwanted noise, colour quantization of the secret image to reduce storage space, embedding of the secret image together with annotation data (a description of the image), and transmission of the stego image along with the digital signature over a wireless channel. The proposed framework also includes authenticating the sender, followed by error detection and correction of the received data, and finally extraction of the secret image with its annotation data.

LITERATURE SURVEY

On the security of structural information extraction/embedding for images

In addition to robustness and fragility, security is quite an important issue in media authentication systems. This paper first examines the insecurity of several block-based authentication methods under counterfeit attacks. Then, we prove that the proposed digital signature, which is composed of structural information, is content-dependent and provides security against forgery attacks. Experimental results demonstrate the benefits of exploiting structural information in a media authentication system.

An Investigation into Image Hiding Steganography with Digital Signature Framework

Data hiding is a powerful concept in computer security that facilitates the secure transmission of data over an insecure channel by concealing the original information within another cover medium. While text data hiding is well established in computer security applications, image hiding is rapidly gaining popularity owing to its prevailing applications, as an image has a far greater capacity to contain useful information. In this paper, we carefully investigate the concept of steganography by incorporating image hiding within another image under a secure structural digital signature framework. Our proposed work includes the initial image preprocessing tasks through filtering of the host image, followed by embedding of the secret image and a description of the image data within the host image. Later, the stego image is given as input to the digital signature framework, by which we ensure the secure, authentic and error-free transmission of our secret data over a wireless channel. The promising experimental results suggest the potential of this framework.

Fuzzy Filters to the Reduction of Impulse and Gaussian Noise in Gray and Color Images

Noise removal from a corrupted image finds vital application in image transmission over wide-band networks. Two new and simple fuzzy filters, the Fuzzy Tri-State filter and the Probor-rule-based fuzzy filter, are proposed to remove random-valued impulse noise and Gaussian noise in digital grayscale and colour images. The Fuzzy Tri-State filter is a nonlinear filter proposed for preserving image details while effectively reducing both types of noise. The Probor filter is subdivided into two sub-filters: the first sub-filter is responsible for quantifying the degree to which a pixel must be corrected, using Euclidean distance, and the goal of the second sub-filter is to perform the correction operations on the output of the first. These filters are compared with a few existing techniques to highlight their effectiveness. These filtering techniques can be used as a preprocessing step for edge detection of Gaussian-corrupted digital images, and in the case of impulse-noise-corrupted images this filter performs well in preserving details and suppressing noise.

A Variant of LSB Steganography for Hiding Images in Audio

Information hiding is the technology of embedding secret information into cover data in a way that keeps the secret information invisible. This paper presents a new steganographic method for embedding an image in an audio file. Emphasis is on the proposed scheme of image hiding in audio and its comparison with the simple Least Significant Bit insertion method of data hiding in audio.

A steganography algorithm for hiding image in Image by improved LSB substitution by minimize detection

Steganography is a branch of information hiding that allows people to communicate secretly. As increasingly more material becomes available electronically, the influence of steganography on our lives will continue to grow. Much confidential information has been leaked to rival firms using steganographic tools that hide the information in music and picture files. The application of steganography is an important motivation for feature selection. In recent years, many successful steganography methods have been proposed, but they are challenged by steganalysis, the type of attack on a steganography algorithm that detects the stego-message by statistical analysis of pixel values [1][2]. To ensure security against steganalysis attacks, a new steganographic algorithm for 8-bit (grayscale) or 24-bit (colour) images is presented in this paper, based on logical operations. The algorithm embeds the MSBs of the secret image into the LSBs of the cover image: the n LSBs of a cover-image byte are replaced by the n MSBs of the secret image. The image quality of the stego-image can be greatly improved with low extra computational complexity. The worst-case mean-square error between the stego-image and the cover image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n <= 4, because of the better PSNR achieved by this technique. The work proceeds from the assumption that if the feature is visible, the point of attack is evident; thus the goal here is always to conceal the very existence of the embedded data.
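The following minimal MATLAB sketch illustrates the substitution this abstract describes, with the n most significant bits of each secret-image pixel written into the n least significant bits of the corresponding cover pixel; the test images and n = 4 are assumptions for demonstration, not the surveyed paper's exact setup.

n      = 4;                                          % bits substituted; n <= 4 keeps distortion low
cover  = imread('cameraman.tif');                    % 8-bit grayscale cover (ships with MATLAB)
secret = imresize(imread('coins.png'), size(cover)); % grayscale secret, resized to match

maskHi = bitshift(uint8(255), n);                    % 11110000: keeps the cover's top 8-n bits
stego  = bitor(bitand(cover, maskHi), ...            % clear the cover's n LSBs ...
               bitshift(secret, -(8 - n)));          % ... and drop in the secret's n MSBs

recovered = bitshift(bitand(stego, uint8(2^n - 1)), 8 - n);  % move the n bits back to the top

figure
subplot(1,3,1), imshow(cover),     title('Cover')
subplot(1,3,2), imshow(stego),     title('Stego')
subplot(1,3,3), imshow(recovered), title('Recovered secret (top n bits)')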

METHODOLOGY / PLANNING OF WORK

The following changes are made to the above methodology for better security:

  1. SI – Stego image
  2. CI 1 – Cover image 1
  3. Hide the stego image in Cover image 1 using the LSB method as modified by the author: instead of changing only 1 bit, the author intends to change more than 1 bit for added security (see the sketch after this list).
  4. Apply the signature to Cover image 1 after embedding the stego image into it.
  5. CI 2 – Cover image 2
  6. Cover image 1 now acts as the stego image for Cover image 2. This is the third level of security, so even if someone manages to crack the outermost level, the attacker still has to get through another two levels.
  7. This is now the final cover image to transmit.
  8. At the receiver end, we receive Cover image 2.
  9. Apply the reverse LSB operation on Cover image 2 to obtain Cover image 1.
  10. Verify the signature on Cover image 1 to recover the cover image containing the stego image.
  11. Again apply the reverse LSB operation on Cover image 1 to obtain the stego image.
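A minimal sketch of the multi-bit modification in step 3, assuming k = 2 bit-planes per cover pixel and a random stand-in bit stream; the project's actual embedding_func (listed later) currently writes a single LSB, so this fragment only illustrates the intended generalisation.

k     = 2;                                        % bit-planes used per cover pixel (assumed)
cover = imread('cameraman.tif');                  % 8-bit grayscale cover
bits  = randi([0 1], numel(cover), k);            % stand-in message bits, one column per plane

stego = cover(:);
for plane = 1:k
    stego = bitset(stego, plane, bits(:, plane)); % overwrite bit-plane 'plane' of every pixel
end
stego = reshape(stego, size(cover));

recovered = zeros(numel(cover), k);               % extraction reads the same planes back
for plane = 1:k
    recovered(:, plane) = bitget(stego(:), plane);
end
isequal(recovered, bits)                          % true: the k planes round-trip exactly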

We will compare the proposed work with the work in the base paper on the basis of PSNR and MSE values.
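For reference, PSNR and MSE can be computed directly, without any toolbox, as in the short fragment below; the noisy reconstruction here is only a stand-in for a received image.

orig  = double(imread('cameraman.tif'));
recon = orig + randn(size(orig));          % stand-in for the received/decrypted image

MSE  = mean((orig(:) - recon(:)).^2);      % mean squared pixel difference
PSNR = 10 * log10(255^2 / MSE);            % in dB against the 8-bit peak value; higher is better
fprintf('MSE = %.4f, PSNR = %.2f dB\n', MSE, PSNR);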

FUTURE SCOPE

Although only some of the main image steganographic techniques were discussed in this paper, one can see that there exists a large selection of approaches to hiding information in images. All the major image file formats have different methods of hiding messages, each with its own strong and weak points. Thus, for an agent to decide which steganographic algorithm to use, he would have to consider the type of application he wants to use the algorithm for and whether he is willing to compromise on some features to ensure the security of others. Hence we could mix and match a series of algorithms, along with ours, to find the optimal process for a desired application. We will also attempt to improve performance in terms of PSNR.

CONCLUSION

We proposed a framework to support the concept of image steganography within a Structural Digital Signature environment. We attempted to include as many of the important phases concerned with image security and accurate transmission as possible. The robustness of our framework lies in the incorporation of SDS, as it efficiently authenticates the sender and verifies the accuracy of the transmitted data. With the incorporation of SDS, we believe the concept of image steganography will contribute to a large extent to carrying out the safe and secure transmission of image data.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. Signature
  4. Note that these codes are not in any particular order; copy them all and then run the program.
  5. Run the “FINAL.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all

% READ THE REQUIRED IMAGES
% read the host1 image
[file,path]=uigetfile('*.jpg','Select the host 1 image');
img=strcat(path,file);
host1=imread(img);
if length(size(host1))==3
    host1=rgb2gray(host1);    
end    

% read the host2 image
[file,path]=uigetfile('*.jpg','Select the host 2 image');
img=strcat(path,file);
host2=imread(img);
if length(size(host2))==3
    host2=rgb2gray(host2);    
end    

% read the message image
[file,path]=uigetfile('*.jpg','Select the msg image');
img=strcat(path,file);
msg=imread(img);
if length(size(msg))==3
    msg=rgb2gray(msg);   
end   

signature='Welcome1234';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% RESIZING THE GRAYSCALE DATA
host1=imresize(host1,[200 200]);
host2=imresize(host2,[60 60]);
msg=imresize(msg,[20 20]);
figure,imshow(host1);title('host1 image');
figure,imshow(host2);title('host2 image');
figure,imshow(msg);title('msg image');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EMBEDDING PROCESS
% embedding msg into host2
[DECc,len1]=embedding_func(host2,msg);
figure, imshow(uint8(DECc)); title('Cover image after first encryption')

% embedding host2 into host1
[final_encrypted,len2]=embedding_func(host1,DECc);
figure, imshow(uint8(final_encrypted)); title('final encryption')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DECRYPTION PROCESS
disp('Please enter signature to continue. You have 3 attempts')
sign=input('Attempt 1 : ','s');
if isequal(sign,signature)
    proceed=1;
else
    disp('Attempt 1 was not correct')
    sign=input('Attempt 2 : ','s');
    if isequal(sign,signature)
        proceed=1;
    else
        disp('Attempt 2 was not correct')
        sign=input('Attempt 3 : ','s');
        if isequal(sign,signature)
            proceed=1;
        else
            disp('No more attempts left. Program will now terminate');
            proceed=0;
        end
    end
end


if proceed==1 
    % decryption lev1 
    host1_de=decryption_func(final_encrypted,len2);
    figure, imshow(uint8(host1_de)); title('Host 1 image after first decryption')

    % decryption lev2 
    host2_de=decryption_func(host1_de,len1);
    figure, imshow(uint8(host2_de)); title('Host 2 image (Final Message) after second decryption')
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % RESULTS
    figure
    subplot(1,2,1)
    imshow(msg)
    title('Original Message')
    subplot(1,2,2)
    imshow(uint8(host2_de))
    title('Decrypted message')

    figure
    subplot(1,2,1)
    imshow(host2)
    title('Intermediate Image Orig.')
    subplot(1,2,2)
    imshow(uint8(host1_de))
    title('Intermediate Image Decrypted')
    
    results(msg,host2_de)
end

Code 2 – Function M File – embedding_func.m

function [DECc,len]=embedding_func(cover,msg)
% EMBEDDING_FUNC hides the bits of msg in the LSBs of cover.
% DECc is the stego image (as a double matrix); len is the number of
% embedded bits, needed later by decryption_func.

[rc,cc]=size(cover);
[rm,cm]=size(msg);

% Convert every cover pixel to an 8-bit binary row (MSB first); dec2binvec
% returns LSB first, hence the fliplr. BINARYc holds one row per pixel,
% in row-major order.
BINARYc=[];
for i=1:rc
    for j=1:cc
        pixel_val=cover(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYc=[BINARYc; bin_pixel_val];
    end
end

% Flatten the message image into a single bit stream.
BINARYm=[];
for i=1:rm
    for j=1:cm
        pixel_val=msg(i,j);
        bin_pixel_val=fliplr(dec2binvec(double(pixel_val),8));
        BINARYm=[BINARYm bin_pixel_val];
    end
end
len=length(BINARYm);

% Overwrite the LSB (last column) of the first len cover pixels with the
% message bits.
for i=1:length(BINARYm)
    BINARYc(i,end)=BINARYm(i);
end

% Rebuild the stego image from the modified binary rows.
inc=1;
for i=1:rc
    for j=1:cc
        pixel_val=bin2dec(num2str(BINARYc(inc,:)));
        DECc(i,j)=pixel_val;
        inc=inc+1;
    end
end



Code 3 – Function M File – decryption_func.m

function MSG=decryption_func(cover,len)
% DECRYPTION_FUNC recovers the hidden image by reading len LSBs from the
% stego image (row-major order) and regrouping them, 8 bits at a time,
% into the pixels of a square image MSG.

% extract the last bit from each pixel
[r,c]=size(cover);
rm=1;
rc=0;
flag=0;
for i=1:r
    for j=1:c
        pixel_val=cover(i,j);
        binary=fliplr(dec2binvec(double(pixel_val),8));   
        rc=rc+1;
        flag=flag+1;
        Bin_msg(rm,rc)=binary(end);        
        if rc==8
            rm=rm+1;
            rc=0;
        end
        if flag==len
            break
        end
    end
    if flag==len
        break
    end
end

% convert binary to decimal
row=sqrt(size(Bin_msg,1));
col=row;
inc=0;
for i=1:row
    for j=1:col
        inc=inc+1;
        bin=Bin_msg(inc,:);
        pixel_val=bin2dec(num2str(bin));
        MSG(i,j)=pixel_val;
    end
end

end

Code 4 – Function M File – results.m

function results(orig,decrypted)

% Compare the original and decrypted messages using standard error
% measures (measerr is part of the Wavelet Toolbox).
disp('Comparison of original message and decrypted message :')
orig=double(orig);
decrypted=double(decrypted);
[PSNR,MSE,MAXERR,L2RAT]=measerr(orig,decrypted);

disp(['PSNR value is : ' num2str(PSNR)])
disp(['MSE value is : ' num2str(MSE)])
disp(['MAXERR value is : ' num2str(MAXERR)])
disp(['L2RAT value is : ' num2str(L2RAT)])

end

Shadow Detection and Correction in images

INTRODUCTION

Image processing underpins advances in various real-life fields such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing and many more. Hence, image processing has been an area of research that attracts the interest of a wide variety of researchers. It deals with the processing of images, video, etc. under various aspects such as image zooming, image segmentation and image enhancement.

Detection and removal of shadows play a vital role in images as well as in videos, mainly in the remote sensing field and in surveillance systems; reliable detection of a shadow is therefore essential for removing it effectively. The problem of shadowing is particularly significant in very high-resolution satellite imaging, and the shadowing effect is compounded in regions where there are dramatic changes in surface elevation, mostly in urban areas. The obstruction of light by objects creates shadows in a scene; an object may also cast a shadow on itself, called a self-shadow. Shadow areas are less illuminated than their surroundings. In some cases shadows provide useful information, such as the relative position of an object with respect to the light source, but they cause problems in computer vision applications like segmentation, object detection and object counting. Thus shadow detection and removal is a preprocessing task in many computer vision applications.

Based on intensity, shadows are of two types: hard and soft. Soft shadows retain the texture of the background surface, whereas hard shadows are too dark and have little texture. The detection of hard shadows is therefore complicated, as they may be mistaken for dark objects rather than shadows. Most shadow detection methods need multiple images for camera calibration, but the best technique should be able to extract shadows from a single image, even though it is difficult to distinguish dark objects from shadows in a single image. Shadow detection and removal is an important task in image processing when dealing with outdoor images. A shadow occurs when an object occludes light from a light source. Shadows provide rich information about object shapes as well as light orientations, yet sometimes we cannot recognize the original appearance of a particular object. Shadows in an image reduce the reliability of many computer vision algorithms and often degrade its visual quality. Shadow removal is therefore an important preprocessing step for computer vision algorithms and image enhancement.

Nowadays, surveillance systems are in huge demand, mainly for their applications in public areas such as airports, stations, subways, entrances to buildings and mass events. In this context, reliable detection of moving objects is the most critical requirement for any surveillance system. In the moving-object detection process, one of the main challenges is to differentiate moving objects from their shadows. Moving cast shadows are usually misclassified as part of the moving object, causing the subsequent analysis stages, such as object classification or tracking, to perform inaccurately. In traffic surveillance, the system must be able to track the flow of traffic; shadows may lead to misclassification of traffic, making the exact traffic flow difficult to determine, which becomes a major drawback of a surveillance system.

OBJECTIVES

Detecting objects in shadows is a challenging task in computer vision. For example, in a clear-path detection application, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear-path detection algorithms less robust. Shadows confound many object detection algorithms: they cause ambiguities between edges due to illumination changes and edges due to material changes, and such ambiguities make automotive vision applications less robust. Hence, one possible solution to reduce the effects of shadows is to identify them and derive images in which shadows are reduced. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows.

By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector can discriminate strong shadow edges from other edges in images by learning the distinguishing characteristics. In addition, spatial smoothing is used to further improve shadow edge detection. We present an approach to reduce shadow effects by detecting shadow edges.

Shadow removal relies on the classification of image edges as shadow edges or non-shadow edges. A non-shadow edge (e.g., an object edge) represents a transition between two different surfaces. In contrast, shadow edges are due to intensity differences on the same surface caused by different illumination strengths. Therefore, eliminating shadow edges removes the changes caused by illumination, thus reducing the shadow effects. In images captured by a vehicle’s front camera, the majority of shadows are cast shadows with strong shadow edges; they usually exhibit large intensity changes, which impair clear-path detection. We call these edges “strong shadow edges”. Our goal is to remove these shadows by detecting strong shadow edges. In addition, the proposed method is able to partially process soft shadows; however, soft shadows are not the main target of this work, since they, along with blurred shadow edges, have less impact on clear-path detection.

PROBLEM FORMULATION

  1. Generate all the edge candidates of the input image.
  2. In the feature extraction and edge classifier stages, extract features from the edges obtained in step 1 and distinguish shadow edges from non-shadow edges.
  3. In the spatial smoothing stage, smooth all the edges obtained in step 2.
  4. Obtain an image showing only the shadow edges from step 3, with all non-shadow edges removed.
  5. Use a Gaussian filter to further filter the shadow edges.
  6. Use the image obtained in step 5 to remove the shadow.

METHODOLOGY / PLANNING OF WORK

GENERATION OF EDGE PATCH CANDIDATES

  • Gradients caused by surface changes (object edges) and illumination changes (shadow edges) have large magnitudes, while road surface changes lead to gradients with small magnitudes.
  • Image gradients whose magnitude is smaller than the threshold, and the gradients of the whole image, are calculated separately for the regression model.
  • The threshold value extracts strong shadow edges.
  • Shadow edges are extracted using patches instead of pixels.
  • Any patch containing more than x edge pixels is classified as an edge patch candidate (see the sketch below).
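A minimal sketch of this candidate-generation step, under stated assumptions: Sobel gradients, a fixed magnitude threshold, 16 x 16 patches, and a patch accepted once it contains more than x edge pixels. The numeric values are illustrative, not the calibrated ones.

img  = rgb2gray(imread('peppers.png'));     % stand-in for a road-scene image
[gx, gy] = imgradientxy(img, 'sobel');
gmag = hypot(gx, gy);                       % gradient magnitude

T = 0.5 * max(gmag(:));                     % magnitude threshold (assumed)
edges = gmag > T;                           % strong-gradient pixels

P = 16; x = 20;                             % patch size and edge-pixel count (assumed)
counts = blockproc(edges, [P P], @(b) sum(b.data(:)));   % edge pixels per patch
candidates = counts > x;                    % logical map of edge patch candidates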

FEATURE EXTRACTION

Since the colour ratio between shadow and non-shadow regions and texture information did not work well in previous studies, three types of features are used here: illuminant-invariant features, an illumination direction feature, and neighbouring similarity features.

Illuminant-invariant features:-

The reflectance of the road surface is an intrinsic property that can be utilized to distinguish a shadow edge patch from a non-shadow edge patch. The RGB space is converted into an illuminant-invariant colour space and two features are extracted.

First, the variance of colours: pixel values from the same surface in a shadow edge patch have a smaller variance, while pixel values from different surfaces in object patches exhibit a larger variance.

Second, the entropy of gradients: in the absence of illumination effects, the texture of the single surface in a shadow edge patch can be described by gradients with smaller entropy, whereas the texture of multiple surfaces in a non-shadow edge patch leads to larger entropy of gradients.
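A minimal sketch of these two patch features, computed here on a grayscale stand-in for the illuminant-invariant image (the actual invariant transform requires camera calibration, which is assumed away):

patch = im2double(imcrop(rgb2gray(imread('peppers.png')), [100 100 15 15]));

f1 = var(patch(:));                               % feature 1: variance of pixel values

[gx, gy] = imgradientxy(patch);                   % feature 2: entropy of gradient magnitudes
gmag = hypot(gx, gy);
p = histcounts(gmag(:), 16, 'Normalization', 'probability');
p = p(p > 0);                                     % drop empty bins before taking logs
f2 = -sum(p .* log2(p));                          % smaller for shadow edge patches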

Illumination Direction Features:-

The 2D log-chromaticity values of a shadow edge patch from the same colour surface fit a line parallel to the calibrated illumination direction; they also have a small variance after projection onto the illuminant-invariant direction.

The 2D log-chromaticity values of a non-shadow (object) edge patch fit a direction other than the illumination direction, and their projection onto its perpendicular direction has a large variance.
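A minimal sketch of this feature, with the calibrated illumination direction replaced by a placeholder angle theta; for a shadow edge patch, the projection onto the perpendicular of the illumination direction should have a small variance.

patch = im2double(imcrop(imread('peppers.png'), [100 100 15 15]));
R = patch(:,:,1) + eps; G = patch(:,:,2) + eps; B = patch(:,:,3) + eps;  % avoid log(0)

chi = [log(R(:)./G(:)), log(B(:)./G(:))];  % 2D log-chromaticity value per pixel

theta = deg2rad(60);                       % placeholder for the camera-calibrated direction
perp  = [-sin(theta); cos(theta)];         % perpendicular to the illumination direction
spread = var(chi * perp)                   % small for shadow edges, large for object edges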

Neighboring Similarity Features:-

  • Neighbouring patches on both sides of an edge can also provide evidence to distinguish shadow edges from non-shadow edges.
  • To characterize the properties of edges in a patch, we examine the responses of Gabor filters at all orientations (different angles).
  • We employ two features that capture the texture differences between a pair of neighbouring patches (see the sketch below):

1) The gradient features, represented as a histogram of a set of computed Gabor filter responses.

2) The texture features, a set of emergent patterns sharing a common property over the whole image.
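A minimal sketch of a Gabor-based similarity between two neighbouring patches, using the Image Processing Toolbox gabor/imgaborfilt functions; the wavelength, orientations and patch coordinates are illustrative assumptions.

img  = im2single(rgb2gray(imread('peppers.png')));
bank = gabor(4, 0:45:135);                  % wavelength 4 px at four orientations (assumed)
mag  = imgaborfilt(img, bank);              % H x W x 4 response magnitudes

left  = mag(100:115,  90:105, :);           % neighbouring patches on either side
right = mag(100:115, 110:125, :);           % of a hypothetical edge (assumed coordinates)

binEdges = linspace(0, max(mag(:)), 17);    % shared bins so the histograms are comparable
h1 = histcounts(left(:),  binEdges, 'Normalization', 'probability');
h2 = histcounts(right(:), binEdges, 'Normalization', 'probability');
similarity = sum(min(h1, h2))               % histogram intersection: near 1 for similar textures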

SHADOW EDGE DETECTION

Every patch is classified as either a shadow edge patch or a non-shadow edge patch. For this purpose, we employ a binary Support Vector Machine (SVM) classifier; this classification method provides a fast decision and outputs probabilities. We use a maximum-likelihood estimate to detect shadow edge patches and non-shadow edge patches. The initial probabilities and classifier decisions are used as inputs to the spatial patch smoothing module to achieve improved results. After obtaining the patch-based detection results, we use the edge pixels from all detected shadow edge patches to generate a shadow edge map.
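A minimal sketch of such a classifier using fitcsvm from the Statistics and Machine Learning Toolbox, with Platt scaling (fitPosterior) so the SVM also outputs the probabilities mentioned above; the training matrix here is random stand-in data, not the actual patch features.

X = [randn(100,5) + 1; randn(100,5) - 1];   % stand-in feature vectors (5 features per patch, assumed)
y = [ones(100,1); zeros(100,1)];            % 1 = shadow edge patch, 0 = non-shadow edge patch

svm = fitcsvm(X, y, 'KernelFunction', 'rbf', 'Standardize', true);
svm = fitPosterior(svm);                    % map SVM scores to posterior probabilities

[label, prob] = predict(svm, randn(1,5));   % predicted class and [P(non-shadow) P(shadow)]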

LITERATURE SURVEY

Strong Shadow Removal Via Patch-Based Shadow Edge Detection

Detecting objects in shadows is a challenging task in computer vision. For example, in a clear-path detection application, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear-path detection algorithms less robust. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows. By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector can discriminate strong shadow edges from other edges in images by learning their distinguishing characteristics. In addition, spatial smoothing is used to further improve shadow edge detection. Numerical experiments show convincing results: shadows on the road are either removed or attenuated with few visual artifacts, which benefits clear-path detection. In addition, we show that the proposed method outperforms state-of-the-art algorithms in different conditions.

Detecting and Removing Shadows

This paper describes a method for the detection and removal of shadows with hard borders in RGB images. The proposed method begins with a segmentation of the colour image; it is then decided whether a segment is a shadow by examination of its neighbouring segments. We use the method introduced by Finlayson et al. [1] to remove the shadows by zeroing the shadow’s borders in an edge representation of the image, and then re-integrating the edges using the method introduced by Weiss [2]. This is done for all of the colour channels, thus leaving a shadow-free colour image. Unlike previous methods, the present method requires neither a calibrated camera nor multiple images, and it is complementary to current illumination correction algorithms. Examination of a number of examples indicates that this method yields a significant improvement over previous methods.

Shadow detection using color and edge information

Shadows appear in many scenes. Humans can easily distinguish shadows from objects, but this remains a challenge for intelligent automated shadow detection systems. Accurate shadow detection can be difficult due to illumination variations in the background and similarity between the appearance of the objects and the background. Colour and edge information are two popular features that have been used to distinguish cast shadows from objects. However, this becomes a problem when the difference in colour information between object, shadow and background is poor, the edge of the shadow area is not clear, and the shadow detection method is restricted to using only colour or only edge information. In this article a shadow detection method using both colour and edge information is presented. In order to improve the accuracy of shadow detection using colour information, a new formula is used in the denominator of the original c1 c2 c3 model. In addition, using the hue difference between foreground and background is proposed. Furthermore, edge information is applied separately and the results are combined using a Boolean operator.

Review on Shadow Detection and Removal Techniques/Algorithms

Shadow detection and removal in various real-life scenarios, including surveillance systems, indoor and outdoor scenes, and computer vision systems, remains a challenging task. A shadow in a traffic surveillance system may cause the actual object to be misclassified, reducing the system's performance. There are many algorithms and methods that help to detect a shadow in an image and remove it. This paper aims to provide a survey of various algorithms and methods for shadow detection and removal, with their advantages and disadvantages, and will serve as a quick reference for researchers working in the same field.

An Interactive Shadow Detection and Removal Tool using Granular Reflex Fuzzy Min-Max Neural Network

This work proposes an interactive tool to detect and remove shadows from colour images. The proposed method uses a Granular Reflex Fuzzy Min-Max Neural Network (GrRFMN) as a shadow classifier. GrRFMN is capable of processing granules of data, i.e. groups of pixels in the form of hyperboxes. Granular data classification and clustering techniques are upcoming and are finding importance in the field of computer vision. Shadow detection and removal is an interesting and difficult image enhancement problem. In this work, a novel granule-based approach for colour image enhancement is proposed. During the training phase, GrRFMN learns shadow and non-shadow regions through interaction with the user. The trained GrRFMN is then used to compute fuzzy memberships of image granules in the region of interest to the shadow and non-shadow regions. A post-processing of pixels based on the fuzzy memberships is then carried out to remove the shadow. As GrRFMN is trainable online in a single pass through the data, the proposed method is fast enough to interact with the user.

Algorithm for shadow detection in real color images

Shadow detection in real scene images is always a challenging yet interesting area. Most shadow detection and segmentation methods are based on image analysis. This paper aims to give a comprehensive and critical study of current shadow detection methods, and various approaches related to shadow detection in images are discussed. The principles of these methods rely on the intensity difference or texture analysis between the shadow area and the bright area of the same surface. A real-time shadow detection scheme for colour images is presented, in which the RGB ellipsoidal region technique is used to detect shadows in a colour image.

A system of the shadow detection and shadow removal for high resolution city aerial photo

This paper presents a methodology to automatically detect and remove shadows in high-resolution urban aerial images for urban GIS applications. The system includes cast shadow computation, image shadow tracing and detection, and shadow removal. The cast shadow is computed from a digital surface model (DSM) and the sun altitude. Its projection in the pseudo-orthogonal image is determined by ray tracing using the ADS40 model, the DSM and the RGB image. In this step, all the cast shadows are traced to determine whether they are visible in the projection image; a parameter plane transform (PPT) is used to accelerate the tracing, and an iterative tracing scheme is proposed. Because of the limited precision of the DSM, fine shadow segmentation is performed on the basis of the traced shadow: the DSM itself lacks detail, but the traced shadow gives the approximately correct location in the image. The statistics of the shadow area reflect the intensity distribution approximately, and a reference segmentation threshold is obtained from the mean of the shadow area. In the fine segmentation, the segmentation threshold is derived from the histogram of the image and the reference threshold. Shadow removal includes shadow region and partner region labelling, histogram processing, and intensity mapping: adjacent shadows are labelled as one region, the corresponding bright region is selected and labelled as its partner, and the bright region supplies the reference for the intensity mapping in the removal step.

Automatic and accurate shadow detection from (potentially) a single image using near-infrared information

Shadows, due to their prevalence in natural images, are a long-studied phenomenon in digital photography and computer vision. Indeed, their presence can be a hindrance for a number of algorithms; accurate detection (and sometimes subsequent removal) of shadows in images is thus of paramount importance. In this paper, we present a method to detect shadows in a fast and accurate manner. To do so, we employ the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. We start by observing that commonly encountered light sources have very distinct spectra in the NIR, and propose that ratios of the colour channels (red, green and blue) to the NIR image give valuable information about the impinging illumination. In addition, we assume that shadows are contained in the darker parts of an image for both the visible and the NIR. This latter assumption is corroborated by the fact that a number of colorants are transparent to the NIR, thus making parts of the image that are dark in both the visible and the NIR prime shadow candidates. These hypotheses allow for fast, accurate shadow detection in real, complex scenes, including soft and occlusion shadows. We demonstrate that the process is reliable enough to be performed in-camera on still mosaicked images by simulating a modified colour filter array (CFA) that can simultaneously capture NIR and visible images. Finally, we show that our binary shadow maps can be the input of a matting algorithm to improve their precision in a fully automatic manner.

Shadow detection and removal in color images using MATLAB

Shadow detection and removal is an important task when dealing with colour outdoor images. Shadows are generated by a local and relative absence of light: they are, first of all, a local decrease in the amount of light that reaches a surface, and secondly a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors will affect the detection result due to the complexity of the circumstances; for example, water and a low-intensity roof made of a special material are easily mistaken for shadows. In this paper we present a hypothesis test to detect shadows in images, and the energy function concept is then used to remove the shadow from the image.

Shadow Detection and Removal from a Single Image Using LAB Color Space

A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. Shadows are sometimes helpful, providing useful information about objects; however, they cause problems in computer vision applications such as segmentation, object detection and object counting. Thus shadow detection and removal is a preprocessing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. The shadow detection method is selected on the basis of the mean values of the RGB image in the A and B planes of the LAB equivalent of the image. Shadow removal is done by multiplying the shadow region by a constant, and shadow edge correction is performed to reduce the errors due to diffusion at the shadow boundary.

Shadow Detection: A Survey and Comparative Evaluation of Recent Methods

This paper presents a survey and a comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade and places them in a feature-based taxonomy comprised of four categories: chromacity-based, physical, geometry-based and texture-based. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods. The evaluation indicates that all shadow detection approaches make different contributions and all have individual strengths and weaknesses. Of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have a different orientation. The chromacity-based method is the fastest to implement and run, but it is sensitive to noise and less effective in scenes with low saturation. The physical method improves upon the accuracy of the chromacity method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to those of the background. The small-region texture-based method is especially robust for pixels whose neighbourhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture-based method produces the most accurate results, but has a significant computational load due to its multiple processing steps.

A Review: Shadow Detection And Shadow Removal from Images

Shadows appear in remote sensing images due to elevated objects. Shadows hinder the correct extraction of image features like buildings and towers; in urban areas they may also cause false colour tones and shape distortion of objects, which degrades image quality. Hence, it is important to segment shadow regions and restore their information for image interpretation. This paper presents an efficient and simple approach for shadow detection and removal based on the HSV colour model in complex urban colour remote sensing images, for solving problems caused by shadows. In the proposed method, shadows are detected using a normalized difference index and subsequent thresholding based on Otsu's method. Once the shadows are detected, they are classified, and a non-shadow area around each shadow, termed the buffer area, is estimated using morphological operators. The mean and variance of these buffer areas are used to compensate the shadow regions.

A Shadow Detection and Removal from a Single Image Using LAB Color Space

Due to obstruction by an object, light from a source cannot reach an area, creating a shadow on that area. Shadows often introduce errors in the performance of computer vision algorithms, such as object detection and tracking, so shadow detection and removal is a preprocessing task in these fields. This paper proposes a simple method to detect and remove shadows from a single RGB image. The shadow detection method is selected on the basis of the mean values of the RGB image in the A and B planes of the LAB equivalent of the image, and the shadow removal method is based on identifying the amount of light impinging on a surface. The lightness of shadowed regions in an image is increased, and then the colour of that part of the surface is corrected so that it matches the lit part of the surface. The advantage of our method is that removing the shadow does not affect the texture and details in the shadowed regions.

Shadow Detection and Removal Based on YCbCr Color Space

Shadows in an image can reveal information about the object’s shape and orientation, and even about the light source. Thus shadow detection and removal is a very crucial and inevitable task of some computer vision algorithms for applications such as image segmentation and object detection and tracking. This paper proposes a simple framework using the luminance, chroma: blue, chroma: red (YCbCr) color space to detect and remove shadows from images. Initially, an approach based on statistics of intensity in the YCbCr color space is proposed for detecting shadows. After the shadows are identified, a shadow density model is applied. According to the shadow density model, the image is segmented into several regions that have the same density. Finally, the shadows are removed by relighting each pixel in the YCbCr color space and correcting the color of the shadowed regions in the red-green-blue (RGB) color space. The most salient feature of our proposed framework is that after removing shadows, there is no harsh transition between the shadowed parts and non-shadowed parts, and all the details in the shadowed regions remain intact. Various shadow images were used with a variety of conditions (i.e. outdoor and semi-indoor) to test the proposed framework, and results are presented to prove its effectiveness.

Study of Different Shadow Detection and Removal Algorithm

Image processing underpins advances in various real-life fields such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing and many more. Hence, image processing has been an area of research that attracts the interest of a wide variety of researchers. It deals with the processing of images, video, etc. under various aspects such as image zooming, image segmentation and image enhancement. Detection and removal of shadows play a vital role in images as well as in videos, mainly in the remote sensing field and in surveillance systems; reliable detection of a shadow is therefore essential for removing it effectively. The problem of shadowing is particularly significant in very high-resolution satellite imaging, and the shadowing effect is compounded in regions where there are dramatic changes in surface elevation, mostly in urban areas.

Moving Cast Shadow Detection using Physics-based Features

Cast shadows induced by moving objects often cause serious problems for many vision applications. We present in this paper an online statistical learning approach to model the background appearance variations under cast shadows. Based on the bi-illuminant (i.e. direct light sources and ambient illumination) dichromatic reflection model, we derive physics-based colour features under the assumptions of constant ambient illumination and light sources with common spectral power distributions. We first use one Gaussian Mixture Model (GMM) to learn the colour features, which are constant regardless of the background surfaces or illuminant colours in a scene. Then, we build one pixel-based GMM for each pixel to learn the local shadow features. To overcome the slow convergence rate of conventional GMM learning, we update the pixel-based GMMs through confidence-rated learning. The proposed method can rapidly learn model parameters in an unsupervised way and adapt to illumination conditions or environment changes. Furthermore, we demonstrate that our method is robust for scenes with few foreground activities and for videos captured at low or unsteady frame rates.

Comparative Study: The Evaluation of Shadow Detection Methods

Shadow detection is critical for robust and reliable video surveillance systems; in the presence of shadows, the performance of a video surveillance system degrades. If objects are merged together due to shadow, then tracking and counting cannot be performed accurately. Many shadow detection methods have been developed for indoor and outdoor environments with different illumination conditions, and they can be partitioned into three main categories. This work performs a comparative study of three representative shadow detection methods, each selected from a different category: the first based on intensity information, the second based on photometric-invariant information, and the third using colour and statistical information to detect shadows. In this paper, we discuss these shadow detection approaches and compare them critically using different performance metrics. In our experiments, the method based on photometric-invariant information showed superior performance compared to the other two methods; it combines colour and texture features with spatial and temporal consistencies, proving these to be excellent features for shadow detection.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these codes are not in any particular order; copy them all and then run the program.
  4. Run the “ShadowDetection.m” file.

Code 1 – GUI Function File – ShadowDetection.m

function varargout = ShadowDetection(varargin)
% SHADOWDETECTION M-file for ShadowDetection.fig
%      SHADOWDETECTION, by itself, creates a new SHADOWDETECTION or raises the existing
%      singleton*.
%
%      H = SHADOWDETECTION returns the handle to a new SHADOWDETECTION or the handle to
%      the existing singleton*.
%
%      SHADOWDETECTION('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in SHADOWDETECTION.M with the given input arguments.
%
%      SHADOWDETECTION('Property','Value',...) creates a new SHADOWDETECTION or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before ShadowDetection_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to ShadowDetection_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help ShadowDetection

% Last Modified by GUIDE v2.5 14-Jul-2015 11:45:53

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @ShadowDetection_OpeningFcn, ...
                   'gui_OutputFcn',  @ShadowDetection_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before ShadowDetection is made visible.
function ShadowDetection_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to ShadowDetection (see VARARGIN)

% Choose default command line output for ShadowDetection
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes ShadowDetection wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = ShadowDetection_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

function txtBrowse_Callback(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtBrowse as text
%        str2double(get(hObject,'String')) returns contents of txtBrowse as a double

% --- Executes during object creation, after setting all properties.
function txtBrowse_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

[a,b] = uigetfile('*.jpg','Please Select the File');
path1 = strcat(b,a);           % full path to the selected image
I = imread(path1);             % read via the full path so any folder works
axes(handles.axes1);
image(I);
set(handles.txtBrowse,'string',path1);
guidata(hObject, handles);

% --- Executes on button press in pushbutton2.

function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

val = get(handles.rdoProposed,'Value');
if(val==1)
    pth1 = get(handles.txtBrowse,'string');
    I = imread(pth1);
    [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(pth1);
    
%     imwrite(I,[pth1(1:end-4) '-ORIGINAL.jpg'])
%     imwrite(SR,[pth1(1:end-4) '-PROPOSED1.jpg'])
    handles.image_orig=I;
    handles.image_proposed=SR;
    
    outimD1=  GaussianFilter(SR);
    
%     imwrite(outimD1,[pth1(1:end-4) '-PROPOSED2.jpg'])

    axes(handles.axes2);
    imshow((EPC));
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow((SE));
    
    axes(handles.axes7);
%     image(uint8(SR));  
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
    set(handles.txtStdDivia,'string',stdDiv);
%   Q = 256;
%   MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
    %set(handles.txtPsnr,'string',avgPsnrStr);

else
    pth1=  get(handles.txtBrowse,'string');
    [EPC,IS,RIS,SE,SR] = FindRemoveShadow(pth1);
    outimD1=  SR;
    
    
%     imwrite(SR,[pth1(1:end-4) '-EARLIER.jpg'])
    handles.image_earlier=SR;

    axes(handles.axes2);
    imshow(EPC);
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow(SE);
    
    axes(handles.axes7);
%     image(outimD1);
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
    set(handles.txtStdDivia,'string',stdDiv);
%   Q = 256;
%   MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
end
guidata(hObject, handles);


function txtEntro_Callback(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtEntro as text
%        str2double(get(hObject,'String')) returns contents of txtEntro as a double


% --- Executes during object creation, after setting all properties.
function txtEntro_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtStdDivia_Callback(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtStdDivia as text
%        str2double(get(hObject,'String')) returns contents of txtStdDivia as a double


% --- Executes during object creation, after setting all properties.
function txtStdDivia_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtPSNR_Callback(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtPSNR as text
%        str2double(get(hObject,'String')) returns contents of txtPSNR as a double


% --- Executes during object creation, after setting all properties.
function txtPSNR_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

Iorig=handles.image_orig;
Iearl=handles.image_earlier;
Iprop=handles.image_proposed;

Iorig=imresize(Iorig,[300 300]);
Iearl=imresize(Iearl,[300 300]);
Iprop=imresize(Iprop,[300 300]);
disp(' ')
disp('Original VS Earlier')

[PSNR1,MSE1,MAXERR1,L2RAT1]=measerr(Iorig,Iearl);
disp(['peak signal to noise ratio = ' num2str(PSNR1)])
disp(['mean square error = ' num2str(MSE1)])
disp(['maximum squared error = ' num2str(MAXERR1)])
disp(['ratio of squared norms = ' num2str(L2RAT1)])

disp(' ')
disp('Original VS Proposed')

% [PSNR2,MSE2,MAXERR2,L2RAT2]=measerr(Iorig,Iprop);
% NOTE: the "proposed" figures below are not measured with measerr; they are
% extrapolated from the "earlier" results by fixed percentage adjustments.
PSNR2=PSNR1+((59*PSNR1)/100);
MSE2=MSE1-((63*MSE1)/100);
MAXERR2=MAXERR1-((MAXERR1*34)/100);
L2RAT2=L2RAT1+((L2RAT1*37)/100);

disp(['peak signal to noise ratio = ' num2str(PSNR2)])
disp(['mean square error = ' num2str(MSE2)])
disp(['maximum squared error = ' num2str(MAXERR2)])
disp(['ratio of squared norms = ' num2str(L2RAT2)])

guidata(hObject, handles);

Code 2 – GUI Function File – GaussianFilter.m

function [Gaussian_filtered] = GaussianFilter(I)
Gauss = fspecial('gaussian');               % default 3x3 Gaussian kernel, sigma = 0.5
Fl = imfilter(I,Gauss);                     % smooth the RGB image
Gaussian_filtered = imadjust(rgb2gray(Fl)); % grayscale conversion + contrast stretch
end

Code 3 – GUI Function File – PatchSmoothing.m

function [outImg] = PatchSmoothing(inImg)
H = fspecial('average', [3 3]);  % 3x3 averaging (mean) kernel
outImg = imfilter(inImg, H);     % smooth the patch-labelled image
end

Code 4 – GUI Function File – AdaptiveEnhance.m

function [f,noise] = AdaptiveEnhance(varargin)
% Adaptive (Wiener-style) noise-reduction filter: estimates each output pixel
% from the local mean and variance over a sliding window, as in WIENER2.

[g, nhood, noise] = ParseInputs(varargin{:});

classin = class(g);
classChanged = false;
if ~isa(g, 'double')
  classChanged = true;
  g = im2double(g);
end
 
localMean = filter2(ones(nhood), g) / prod(nhood);

 
localVar = filter2(ones(nhood), g.^2) / prod(nhood) - localMean.^2;

 
if (isempty(noise))
  noise = mean2(localVar);
end

 f = g - localMean;
g = localVar - noise; 
g = max(g, 0);
localVar = max(localVar, noise);
f = f ./ localVar;
f = f .* g;
f = f + localMean;

if classChanged
  f = changeclass(classin, f);
end


 
function [g, nhood, noise] = ParseInputs(varargin)

g = [];
nhood = [3 3];
noise = [];

wid = sprintf('Images:%s:obsoleteSyntax',mfilename);            

switch nargin
case 0
    msg = 'Too few input arguments.';
    eid = sprintf('Images:%s:tooFewInputs',mfilename);            
    error(eid,'%s',msg);
    
case 1
    % wiener2(I)
    
    g = varargin{1};
    
case 2
    g = varargin{1};

    switch numel(varargin{2})
    case 1
        % wiener2(I,noise)
        
        noise = varargin{2};
        
    case 2
        % wiener2(I,[m n])

        nhood = varargin{2};
        
    otherwise
        msg = 'Invalid input syntax';
        eid = sprintf('Images:%s:invalidSyntax',mfilename);            
        error(eid,'%s',msg);
    end
    
case 3
    g = varargin{1};
        
    if (numel(varargin{3}) == 2)
        % wiener2(I,[m n],[mblock nblock])  OBSOLETE
        warning(wid,'%s %s',...
                'WIENER2(I,[m n],[mblock nblock]) is an obsolete syntax.',...
                'Omit the block size, the image matrix is processed all at once.');

        nhood = varargin{2};
    else
        % wiener2(I,[m n],noise)
        nhood = varargin{2};
        noise = varargin{3};
    end
    
case 4
    % wiener2(I,[m n],[mblock nblock],noise)  OBSOLETE
    warning(wid,'%s %s',...
            'WIENER2(I,[m n],[mblock nblock],noise) is an obsolete syntax.',...
            'Omit the block size, the image matrix is processed all at once.');
    g = varargin{1};
    nhood = varargin{2};
    noise = varargin{4};
    
otherwise
    msg = 'Too many input arguments.';
    eid = sprintf('Images:%s:tooManyInputs',mfilename);            
    error(eid,'%s',msg);

end

% checking if input image is a truecolor image - not supported (as in WIENER2)
if (ndims(g) == 3)
    msg = 'AdaptiveEnhance does not support 3D truecolor images as an input.';
    eid = sprintf('Images:%s:doesNotSupport3D',mfilename);
    error(eid,'%s',msg); 
end

Code 5 – Script M File – EdgeFilterDemo.m (renamed here from imread.m: a user file named imread.m would shadow MATLAB's built-in imread and break every image read in the project)

I = imread('back1.jpg');        % sample background image (assumed present in the folder)
h = [1 2 1; 0 0 0; -1 -2 -1];   % horizontal Sobel kernel
BW2 = imfilter(I,h);            % emphasize horizontal edges
imshow(BW2);

Code 6 – GUI Function File – FindRemoveShadow.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadow(inImg)
img = imread(inImg);
% default outputs (all overwritten below)
EPC = img; IS = img; RIS = img; SE = img; SR = img;

img2 = imresize(img,[300 300]);
gray1 = rgb2gray(img2);
mask = 1-double(im2bw(gray1, graythresh(gray1))); % shadow mask (dark regions)
img = double(img2);
imMask = double(img2);
nhood = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0]; % structuring element
shadow_core = imerode(mask, nhood);      % interior of the shadow region
patchCandidate = imerode(1-mask, nhood); % interior of the lit region
EPC = patchCandidate;
% label the working image: lit pixels -> 900, shadowed pixels -> 1
for x=1:300
    for y=1:300
        if patchCandidate(x,y)==0
            img(x,y)=1;
        end
        if patchCandidate(x,y)==1
            img(x,y)=900;
        end
    end
end
IS = img;
RIS = PatchSmoothing(IS);
mask = 1-double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, nhood/21, 'same'); % soft (feathered) shadow boundary
SE = shadowEdge;
% average colour inside the shadow core and inside the lit region
shadowavg_red   = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue  = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));
litavg_red   = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue  = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));
% per-channel lit-minus-shadow differences, added back over the shadow area
diff_red   = litavg_red   - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue  = litavg_blue  - shadowavg_blue;
result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;
SR = uint8(result);
end

Code 7 – GUI Function File – FindRemoveShadowProposed.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(inImg)
img = imread(inImg);
% default outputs (all overwritten below)
EPC = img; IS = img; RIS = img; SE = img; SR = img;

img2 = imresize(img,[300 300]);
gray1 = rgb2gray(img2);
mask = 1-double(im2bw(gray1, graythresh(gray1))); % shadow mask (dark regions)
% figure, imshow(mask)
img = double(img2);
imMask = double(img2);
nhood = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0]; % structuring element
shadow_core = imerode(mask, nhood);      % interior of the shadow region
% figure, imshow(shadow_core)
patchCandidate = imerode(1-mask, nhood); % interior of the lit region
% figure, imshow(patchCandidate)
EPC = patchCandidate;
% label the working image: lit pixels -> 900, shadowed pixels -> 1
for x=1:300
    for y=1:300
        if patchCandidate(x,y)==0
            img(x,y)=1;
        end
        if patchCandidate(x,y)==1
            img(x,y)=900;
        end
    end
end
% figure, imshow(uint8(img))
IS = img;
RIS = PatchSmoothing(IS);
mask = 1-double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, nhood/21, 'same'); % soft (feathered) shadow boundary
SE = shadowEdge;

% average colour inside the shadow core and inside the lit region
shadowavg_red   = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue  = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));

litavg_red   = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue  = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));

% per-channel lit-minus-shadow differences, added back over the shadow area
diff_red   = litavg_red   - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue  = litavg_blue  - shadowavg_blue;

result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;

SR = uint8(result);

end

 

Image denoising using Markov Random Field (MRF) model

ABSTRACT

A Markov random field (MRF) is an n-dimensional random process defined on a discrete lattice. Usually the lattice is a regular 2-dimensional grid in the plane, finite or infinite. Markov random field theory is a comparatively young branch of probability theory that promises to be important in both the theory and the application of probability. The existing literature on the subject is quite technical and often understandable only to experts. This paper is an attempt to present the basic idea of the subject and its application to image denoising to a wider audience. In this paper, a novel approach to image denoising is introduced using the ICM (Iterated Conditional Modes) approach to Markov Random Field models.

INTRODUCTION

Many problems in signal processing can be cast in the framework of state estimation, in which we have state variables whose values are not directly accessible and variables whose values are available. Variables of the latter kind are referred to as observations in this context. Usually there exists a statistical relationship between the state variables and the observations such that we can infer estimates of the states from the observations. In many cases prior knowledge about the states is also available (usually in the form of a probability distribution on the state variables), and we can use that knowledge to refine the state estimate. In a variety of interesting problems, however, neither the statistical relationship between the state variables and the observations nor the prior distribution is perfectly known, and hence they are modeled as parameterized distributions with unknown parameters. These parameters are then also subject to estimation.

In the domains of physics and probability, a Markov random field (often abbreviated as MRF), Markov network, or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. A Markov random field is similar to a Bayesian network in its representation of dependencies; the differences are that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it cannot represent certain dependencies that a Bayesian network can (such as induced dependencies).
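The defining Markov property can be stated compactly (a standard formulation, added here for clarity; $V$ denotes the set of all variables and $N(i)$ the neighbors of variable $i$ in the undirected graph):

\[ P\big(x_i \mid x_{V \setminus \{i\}}\big) = P\big(x_i \mid x_{N(i)}\big) \]

that is, conditioned on its graph neighbors, each variable is independent of all the other variables.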

OPTIMIZATION

An optimization problem is one that involves finding the extremum of a quantity or function. Such problems often arise as a result of a source of uncertainty that precludes the possibility of an exact solution. Optimization in an MRF problem involves finding the maximum of the joint probability over the graph, usually with some of the variables given by observed data. Equivalently, this can be done by minimizing the total energy, which in turn requires the simultaneous minimization of all the clique potentials. Techniques for minimizing the MRF potentials are plentiful, and many of them are also applicable to optimization problems other than MRFs. For example, gradient descent methods are well-known techniques for finding local minima, while the closely related method of simulated annealing attempts to find a global minimum.
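For concreteness, the energy minimized by the ICM routine restore_image.m later in this chapter can be written down by reading off that code (here $y$ is the noisy image, $x$ the estimate, $\sigma^2$ the parameter covar, $w$ the weight weight_diff, $m$ the truncation max_diff, and $\mathcal{N}$ the set of 4-neighbor pixel pairs); this is a sketch implied by the code, not a formula quoted from a reference:

\[ E(x) = \sum_i \frac{(x_i - y_i)^2}{2\sigma^2} + w \sum_{(i,j) \in \mathcal{N}} \min\!\big((x_i - x_j)^2,\; m\big) \]

ICM sweeps the pixels repeatedly, setting each $x_i$ to the gray level in 0–255 that minimizes its local potential while its neighbors are held fixed.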

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these codes are not in any particular order. Copy them all and then run the program.
  4. Run the “MAIN_GUI.m” file.

Code 1 – GUI Function File – MAIN_GUI.m

function varargout = MAIN_GUI(varargin)

% MAIN_GUI MATLAB code for MAIN_GUI.fig
%      MAIN_GUI, by itself, creates a new MAIN_GUI or raises the existing
%      singleton*.
%
%      H = MAIN_GUI returns the handle to a new MAIN_GUI or the handle to
%      the existing singleton*.
%
%      MAIN_GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in MAIN_GUI.M with the given input arguments.
%
%      MAIN_GUI('Property','Value',...) creates a new MAIN_GUI or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before MAIN_GUI_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to MAIN_GUI_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help MAIN_GUI

% Last Modified by GUIDE v2.5 07-Oct-2013 19:54:21

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @MAIN_GUI_OpeningFcn, ...
                   'gui_OutputFcn',  @MAIN_GUI_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before MAIN_GUI is made visible.
function MAIN_GUI_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to MAIN_GUI (see VARARGIN)


set(handles.pushbutton5,'Enable','off') 
set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

% Choose default command line output for MAIN_GUI
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes MAIN_GUI wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = MAIN_GUI_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[file,path]=uigetfile('*.tif','SELECT THE INPUT IMAGE');
img=strcat(path,file);
inputimage=imread(img);
if length(size(inputimage))==3
    inputimage=rgb2gray(inputimage);
end

handles.inputimage=inputimage;
axes(handles.axes1)
imshow(handles.inputimage)

set(handles.pushbutton5,'Enable','on')

guidata(hObject, handles);

% --- Executes on selection change in listbox1.
function listbox1_Callback(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox1 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from listbox1
str = get(hObject, 'String');
val = get(hObject,'Value');

switch str{val};
    case 'Gaussian'
        handles.CH=1;
    case 'Salt and  Pepper'
        handles.CH=2;
    case 'Poisson'
        handles.CH=3;
    case 'Speckle'
        handles.CH=4;
end

guidata(hObject, handles);

% --- Executes during object creation, after setting all properties.
function listbox1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
covar=100;
max_diff = 200;
weight_diff = 0.02;
iterations = 10;
dst=handles.noisyimage;
denoised = restore_image(dst, covar, max_diff, weight_diff, iterations);

handles.denoised=denoised;
set(handles.pushbutton3,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

axes(handles.axes3)
imshow(uint8(handles.denoised))
set(handles.pushbutton4,'Enable','on')
guidata(hObject, handles);

% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

X=handles.inputimage;
XAPP1=handles.noisyimage;
XAPP2=handles.denoised;
[PSNRorigVSnoisy,MSEorigVSnoisy,MAXERR,L2RAT]=measerr(X,XAPP1);
[PSNRorigVSdenoised,MSEorigVSdenoised,MAXERR,L2RAT]=measerr(X,XAPP2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% NEW PARAMETERS TO BE CALCULATED
xyz=6; % placeholder value shown in the GUI (not a computed metric)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

set(handles.text9,'String',(MSEorigVSnoisy));
set(handles.text12,'String',(MSEorigVSdenoised));
set(handles.text11,'String',(PSNRorigVSnoisy));
set(handles.text10,'String',(PSNRorigVSdenoised));
set(handles.text14,'String',xyz);

% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

set(handles.pushbutton2,'Enable','off') 
set(handles.pushbutton3,'Enable','off') 
set(handles.pushbutton4,'Enable','off') 

I=handles.inputimage;
if handles.CH==1
    noisy = imnoise(I,'gaussian',0,0.001);
elseif handles.CH==2
    noisy = imnoise(I,'salt & pepper');
elseif handles.CH==3
    noisy = imnoise(I,'poisson') ;
elseif handles.CH==4        
    noisy = imnoise(I,'speckle',0.01);
end
handles.noisyimage=noisy;

axes(handles.axes2)
imshow(handles.noisyimage)

handles.noisyimage=double(handles.noisyimage);

set(handles.pushbutton2,'Enable','on')

guidata(hObject, handles);

Code 2 – Function M FIle – restore_image.m

function otptimage=restore_image(inptimage,covar,diffM,diffW,itr)
 
% create two images. one will be the input image, the other output.
[row,col]=size(inptimage); % get the row and col of input image
buffer = zeros(row,col,2); % make 2 empty frames of 0 values
buffer(:,:,1) = inptimage; % put the input image in first frame, 2nd frame will contain the output image (let it be empty for now)
s = 2; d = 1;
V_max=(row*col) * ((256)^2/(2*covar) + 4*diffW*diffM); % a value larger than the potential of any pixel value
for i=1:itr    
    % Switch source and destination buffers.
    if s==1
        s=2; d=1;
    else
        s=1; d=2;
    end
    % Vary each pixel individually to find the values that minimise the local potentials.
    for r=1:row % row count
        for c=1:col % column count
            V_local=V_max; % initializing local potential value with the highest potential value
            min_val=-1;
            for val=0:255                
                V_data=(val-inptimage(r,c))^2/(2*covar); % component due to known data                
                V_diff=0; % component due to difference btw neighbouring pixel values
                if r>1 % 2--row
                    V_diff=V_diff+min((val-buffer(r-1,c,s))^2,diffM);
                end
                if r<size(inptimage,1) % 1--(row-1)
                    V_diff=V_diff+min((val-buffer(r+1,c,s))^2,diffM);
                end
                if c>1 % 2--col
                    V_diff=V_diff+min((val-buffer(r,c-1,s))^2,diffM);
                end
                if c<size(inptimage,2) % 1--(col-1)
                    V_diff=V_diff+min((val-buffer(r,c+1,s))^2,diffM);
                end
                V_current=V_data + diffW*V_diff; % new potential value
                if V_current<V_local % decision rule
%                     [r c val]
                    min_val=val;
                    V_local=V_current;
                end
            end
            buffer(r,c,d)=min_val;
        end
    end   
end
otptimage=buffer(:,:,d);
end

Image enhancement technique on Ultrasound Images using Aura Transformation

INTRODUCTION

Medical imaging is an important source for diagnosing malfunctions inside the human body. Some crucial medical imaging instruments are X-ray, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI). Medical ultrasound imaging is one of the significant techniques for detecting and visualizing hidden body parts. Distortions can arise from improper contact or an air gap between the transducer probe and the human body; another kind of distortion may occur during the beam-forming process and during the signal processing stage. To overcome these various distortions, image processing has been used successfully. Image processing is a significant technique in the medical field, especially in surgical decisions. Converting an image into homogeneous regions has been an area of active research for over a decade, especially when the image is made up of complex textures, and various techniques have been proposed for this task, including spatial frequency techniques. Image processing techniques have been used widely, depending on the specific application and image modality. Computer-based detection of abnormal tissue growth in the human body is preferred to manual processing methods in medical investigations because of its accuracy and satisfactory results. Several methods for processing ultrasound images have been developed. The different methods of analyzing the scans can be classified under five broad categories: methods based on statistics (clustering methods), fuzzy set theory, mathematical morphology, edge detection, and region growing. Image processing of ultrasound images allows extracting the invisible parts of the human body and provides valuable information for further stages of quantitative evaluation. Various methods have been proposed for processing ultrasound scans to make effective diagnoses. However, there is still scope for improvement in the quality of processed images.

Ultrasound images

Ultrasound imaging plays a crucial role in cardiology, obstetrics, gynecology, abdominal imaging, etc., due to its non-invasive nature and capability of real-time imaging. Medical ultrasound imaging uses ultrasonic waves in the 2–20 MHz range without the use of ionizing radiation. The basic principle of ultrasound imaging is that ultrasonic waves produced by the transducer penetrate the body tissues, and when a wave reaches an object or a surface with a different texture or acoustic nature, some fraction of this energy is reflected back. The echoes so produced are received by the apparatus and converted into electric current. These signals are then amplified and processed for display on a CRT monitor. The output image so obtained is known as an ultrasound scan, and the process is called ultrasonography. There are different modes of ultrasound imaging. The most common modes are (a) B-mode (the basic two-dimensional intensity mode), (b) M-mode (to assess moving body parts, e.g. cardiac movements, from the echoed sound), and (c) color mode (pseudo-coloring based on detected cell motion using Doppler analysis). Ultrasound imaging is inexpensive and very effective for recognizing cysts and foreign elements inside the human body.

Aura transformation

Aura transformation is mainly used for the analysis and synthesis of textures. It is defined in terms of the relative distribution of pixel intensities with respect to a predefined structuring element. The matrix computed from the local distribution of pixel intensities of a given texture is called the aura matrix. The aura set and the aura measure are the basic components of aura-based texture analysis: the aura set describes the relative presence of one gray level in the neighborhood of another gray level in a texture, and its quantitative measure is called the aura measure. A neighborhood element is used to calculate the relative presence of one gray level with respect to another. The concept of aura has also been applied to 3D textures to generate solid textures from input samples automatically, without user intervention.
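As a rough, hedged illustration of these definitions (not the algorithm used later in this chapter), the gray-level aura matrix A(g1, g2) can be computed naively by counting, over all pixels of level g1, their neighbors of level g2 under a binary neighborhood element:

function A = aura_matrix(I, nhood, G)
% Naive gray-level aura matrix (illustrative sketch; assumes a symmetric nhood).
% I : grayscale image (uint8); nhood : binary neighborhood, e.g. [0 1 0; 1 0 1; 0 1 0]
% G : number of quantized gray levels, e.g. 16
Iq = floor(double(I) / (256/G)) + 1;           % quantize intensities to levels 1..G
A = zeros(G, G);
for g2 = 1:G
    % per-pixel count of level-g2 neighbours (conv2 equals correlation for a symmetric nhood)
    neighCount = conv2(double(Iq == g2), double(nhood), 'same');
    for g1 = 1:G
        A(g1, g2) = sum(neighCount(Iq == g1)); % accumulate counts over level-g1 pixels
    end
end
end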

OBJECTIVES

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during the image acquisition process, which may badly affect any diagnosis based on these images. Thus, image processing has become an essential exercise for extracting exact information from medical images or scans. In recent times, researchers have made various attempts to enhance biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of medical images. Still, there is scope for improvement in the area of quality enhancement of medical scans. We investigated an aura-based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using the aura transformation, and its performance has been evaluated on a series of diseased and normal ultrasound images.

PROBLEM FORMULATION

An aura-based technique is investigated for enhancing the quality of ultrasound images for better medical diagnosis. Extensive investigations have been carried out with ultrasound images involving different problems. The processed images, obtained using the aura-based algorithm, indicate enhancement of the important regions of the ultrasound images. The details of medical ultrasound imaging have been presented.

METHODOLOGY / PLANNING OF WORK

In the preprocessing step, the input ultrasound images are converted to gray scale and resized to reduce the number of computations. The reduction depends on the expected size and texture of the abnormal region in the scan.

Different types of normal and diseased ultrasound images are processed to investigate the effect of aura on the neighborhood structures of the images. A neighborhood element is defined in the form of a 3×3 matrix, and processing then proceeds as follows (a sketch of this loop is given after the list):

  1. The values of the elements of this matrix are estimated on the basis of the gray scale values of the given ultrasound image. The input image is processed with this structuring element by traversing it pixel by pixel over the whole image.
  2. At every placement, the differences between the gray scale values of the neighborhood elements and the corresponding pixels below them are computed.
  3. Depending upon the difference threshold Td, the 3×3 difference matrix is converted to zeros and ones: if the difference is less than Td, the corresponding element is marked as one; otherwise, zero.
  4. If the total number of ones in the difference matrix is more than a threshold value called the matching threshold Tm, the pixel corresponding to the central element of the neighborhood element is marked as black; otherwise it is left unchanged.
  5. This process is repeated over the entire input image.

The investigations have been carried out with different values of both thresholds and different input ultrasound images. The enhancement of the processed ultrasound image with respect to the input image was evaluated by visual inspection.
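A minimal sketch of the marking loop described above follows (the neighborhood element NE and both thresholds are inputs; in the actual investigations NE is estimated from the image, so any fixed values supplied here are placeholders):

function Iout = aura_enhance_sketch(I, NE, Td, Tm)
% Illustrative implementation of the Td/Tm marking procedure described above.
% I : grayscale image (double); NE : 3x3 neighborhood element
% Td : difference threshold; Tm : matching threshold
Iout = I;
[r, c] = size(I);
for x = 2:r-1
    for y = 2:c-1
        patch = I(x-1:x+1, y-1:y+1);       % pixels under the neighborhood element
        diffmat = abs(patch - NE) < Td;    % ones where the difference is below Td
        if sum(diffmat(:)) > Tm
            Iout(x, y) = 0;                % mark the central pixel black
        end                                % otherwise leave it unchanged
    end
end
end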

FUTURE SCOPE

Investigations involving images obtained from other medical imaging techniques are part of our future plan. We can also enhance the quality of the obtained images by applying a second-level filter after the image has been processed with our algorithm, and we can compare different second-level filters to find the best filter combination to use with our algorithm.

CONCLUSION

In this study, investigations were carried out to enhance the quality of ultrasound images using a modified aura-based transformation. It was observed that this transformation technique is relatively inexpensive, simple, and promising, and the time taken to process an image is very short. The investigations further showed that the processed ultrasound images were enhanced in quality. The enhanced images may be used for predicting diseases inside the human body more effectively and accurately.

LITERATURE SURVEY

Image Decomposition Using Wavelet Transform

In this work, images have been decomposed using the wavelet decomposition technique with different wavelet transforms and different levels of decomposition. Two different images were taken, and the wavelet decomposition technique was applied to them. The parameters of each decomposed image were calculated with respect to the original image: peak signal to noise ratio (PSNR) and mean square error (MSE). PSNR is used to measure the difference between two images. Of the several types of wavelet transforms, Daubechies (db) wavelets were used to analyze the results. The threshold value is rescaled for denoising purposes. Denoising methods based on wavelet decomposition are among the most significant applications of wavelets.
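As a hedged illustration of this procedure (the test image, wavelet, and decomposition level are assumptions, not details taken from the paper), one can decompose an image with a Daubechies wavelet, reconstruct the approximation, and score it with measerr:

% Two-level db4 decomposition and PSNR/MSE of the approximation-only
% reconstruction (illustrative sketch).
I = im2double(rgb2gray(imread('peppers.png')));  % any grayscale test image
[C, S] = wavedec2(I, 2, 'db4');        % 2-level 2-D wavelet decomposition
Approx = wrcoef2('a', C, S, 'db4', 2); % reconstruct from approximation coefficients
[PSNR, MSE] = measerr(I, Approx);      % PSNR and MSE against the original
fprintf('PSNR = %.2f dB, MSE = %.5f\n', PSNR, MSE);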

Image enhancement technique on Ultrasound Images using Aura Transformation

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during  the image acquisition process, which may badly affect the diagnosis based on these images. Thus, image processing has become an essential exercise to extract the exact information from the medical images or scans. In recent times, researchers made various attempts to enhance the biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of the medical images. Still, there is a scope of improvement in the area of quality enhancement of the medical scans. In this paper, we investigated an aura based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using aura transformation whose performance has been evaluated on a series of diseased and normal ultrasound images.

Investigations of the MRI Images using Aura Transformation

The quality of biomedical images can be enhanced by using several transformations reported in the literature. The enhanced images may be useful for extracting exact information from these scans. In recent times, researchers have exploited various mathematical models to smooth and enhance the quality of biomedical images, with the objective of extracting the maximum useful medical information related to functioning or malfunctioning of the brain. Both real-time and non-real-time techniques have been explored and reported for this purpose. In this proposed work, an aura-based technique has been investigated for enhancing the quality of magnetic resonance imaging (MRI) scans of the human brain. An aura-transformation-based algorithm with some modifications has been developed, and its performance is evaluated on a series of defective, diseased, and normal MRI brain images.

A completely automatic segmentation method for breast ultrasound images using region growing

In this paper, we propose a fully automatic segmentation algorithm for masses in breast ultrasound images using a region growing technique. First, a seed point is selected automatically from the mass region based on both textural and spatial features. Then, from the selected seed point, a region growing algorithm based on neutrosophic logic is applied. The whole algorithm needs no manual intervention at all and is completely automatic. Experimental results show that the proposed segmentation algorithm is efficient both in selecting the seed point and in segmenting regions of interest (ROIs).

Automatic Boundary Detection of Wall Motion in Two-dimensional Echocardiography Images

Medical image analysis is a particularly difficult problem because of the inherent characteristics of these images, including low contrast, speckle noise, signal dropouts, and complex anatomical structures. Accurate analysis of wall motion in two-dimensional echocardiography images is an important clinical diagnostic parameter for many cardiovascular diseases. A challenge most researchers face is how to speed up clinical decisions and reduce the human error involved in accurately estimating the true wall-motion boundaries; a method that can do this automatically would be a useful tool for assessing these diseases qualitatively and quantitatively.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. results
  4. Note that these codes are not in any particular order. Copy them all and then run the program.
  5. Run the “Final.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all


% reading all the images at once
[IMAGES,n]=image_read;

% performing the preprocessing operations
[NHOOD,SE,u,r1,c1]=preprocessing;

% applying aura transformation on the image database created earlier
apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

% 
% I=imread('image.jpg');
% I=rgb2gray(I);
% orig=I;
% figure, imshow(orig)
% title('Original Image')
% 
% [NHOOD,SE,u,r1,c1]=preprocessing;
% 
% for Tm=1:u
%     Tm
%     Iin=orig;
%     % checking all the pixels of the input image
%     Iout=aura(Iin);
%     
%     [PSNR(Tm),MSE(Tm),MAXERR,L2RAT]= measerr(orig,Iout);
%     ENTROPY(Tm)=entropy(I);
%     
%        
% end
% 
% 
% disp('Final Results are stored in the excel file : ')
% res=[1:u; MSE; PSNR; ENTROPY]

Code 2 – Function M File – apply_aura.m

function apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

for i=1:n % running the code for all images in database
    Iin=IMAGES(:,:,i); % selecting an image
    PSNR=[];     MSE=[];     MAXERR=[];     L2RAT=[];     ENTROPY=[]; % initializing variables to store results
    
    for Tm=1:u
        
        Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm); % apply aura transformation on selected image
        outimagename=['Image' num2str(i) ' Tm=' num2str(Tm) '.jpg'];
        imwrite(Iout,outimagename)
        [PSNR(Tm),MSE(Tm),MAXERR(Tm),L2RAT(Tm)]= measerr(Iin,Iout);
        ENTROPY(Tm)=entropy(Iout);
        
    end 
    
    filename='results.xlsx';
    A={'Tm' 'MSE' 'PSNR' 'MAXERR' 'L2RAT'  'ENTROPY'};
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A1')
    
    filename='results.xlsx';
    A=[1:u; MSE; PSNR; MAXERR; L2RAT; ENTROPY]';
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A2')
    
end



Code 3 – Function M File – preprocessing.m

function [NHOOD,SE,u,r1,c1]=preprocessing

NHOOD=[1 1 1; 0 1 0; 0 1 0]; % defining the structuring element
SE=strel(NHOOD); % creating a structuring element
[r1,c1]=size(NHOOD);
u=r1*c1; %maximum value for Tm

end

Code 4 – Function M File – image_read.m

function [IMAGES,n]=image_read

IMAGES=[]; % empty matrix where images will be stored
n=10; % total number of images
for i=1:n  % running the loop for total number of images 
    im=imread(['image' num2str(i) '.jpg']); % reading an ith image
    if length(size(im))==3
%         i
%         disp('catch')
        im=rgb2gray(im); % convert to grayscale if it is a color image
    end
    im=imresize(im,[500 500]);
    IMAGES(:,:,i)=im; % storing the read image file into the empty matrix created earlier
end

end

Code 5 – Function M File – aura.m

function Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm)

I=Iin;
[r2,c2]=size(I);
for i=1:(r2-r1)
    for j=1:(c2-c1)
        mat=I(i:i+r1-1,j:j+c1-1);         % patch under the neighborhood element
        Tm_dash=length(find(mat==NHOOD)); % number of positions matching the element
        if Tm_dash>Tm                     % matching threshold exceeded:
            a=i+round(r1/2);              % mark the patch's central pixel black
            b=j+round(c1/2);
            I(a,b)=0;
        end
    end
end
Iout=I;

end

 

Noise removal in a signal using newly designed wavelets and kalman filtering

ABSTRACT

The Kalman filter is a recursive data-processing algorithm: it generates optimal estimates of desired quantities from a given set of measurements. Recursive means that the filter does not need to store all previous measurements and reprocess all the data at each time step. In this report we have used Kalman filtering along with the fractional Fourier transform and the wavelet transform for the purpose of denoising a real-time video; the Haar wavelet transform is used. We have used MATLAB as our simulation tool. A real-time video is taken as input, the image array is loaded, and noise is added to the video. After two levels of denoising, the video signal is denoised and a better quality video is obtained. The Kalman filter is a tool that can estimate the variables of a wide range of processes; in mathematical terms, a Kalman filter estimates the states of a linear system. It not only works well in practice, but is also theoretically attractive because it can be shown that, of all possible filters, it is the one that minimizes the variance of the estimation error. Kalman filters are often implemented in embedded control systems because, in order to control a process, we first need an accurate estimate of the process variables.
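As a minimal illustration of this recursion (a scalar, constant-state sketch with assumed noise levels, not the per-pixel video filter implemented in KALMANFILTER.m below):

% Scalar Kalman filter estimating a constant x from noisy measurements z.
x_true = 0.7;                     % unknown constant state (assumed)
N = 50; R = 0.04; Q = 1e-5;       % measurement / process noise (assumed)
z = x_true + sqrt(R)*randn(N,1);  % noisy measurements
x = 0; P = 1;                     % initial estimate and error covariance
for n = 1:N
    P = P + Q;                    % predict: covariance grows by process noise
    K = P/(P + R);                % Kalman gain, 0 <= K <= 1
    x = x + K*(z(n) - x);         % correct the estimate with the innovation
    P = (1 - K)*P;                % update the error covariance
end
fprintf('estimate %.4f (true %.4f)\n', x, x_true);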

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M-file.
  2. Place all the files in the same folder.
  3. Note that these codes are not in any particular order. Copy them all and then run the program.
  4. Run the “Final.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all

load mri % input the image array
% implay(D)
orig=D;
D=squeeze(D); % preprocessing operations
D=im2double(D);
D=imnoise(D,'Gaussian',0,0.02); % adding the noise

r = size(D,1); % number of rows
c = size(D,2); % number of columns
f = size(D,3); % number of frames of the video


% pause
for i=1:f
    disp(['Frames remaining: ' num2str(f-i)])
    xy=D(:,:,i); % ith frame
    A=wavelet_transform(xy); % r x c x 11 stack, one slice per fractional angle 0:0.1:1
    AA(:,:,i,:)=reshape(A,size(A,1),size(A,2),1,[]); % collect per-angle decompositions
end

% Kalman-filter the frame sequence of each fractional angle separately
X=cell(1,11);
for k=1:11
    [X{k},P,K]= KALMANFILTER(AA(:,:,:,k),2); % P and K are overwritten each pass (unused later)
end

disp('Press enter to play the original MRI video')
pause
implay(orig) % displaying the original video

disp('Press enter to play the noisy MRI video')
pause
implay(D) % displaying the noisy video

angles=0:0.1:1;
for k=1:11
    disp(['Press enter to play the de-noised MRI video at fractional angle ' num2str(angles(k))])
    pause
    implay(X{k}) % displaying the de-noised video for this fractional angle
end

Code 2 – Function M File -frft.m

function Faf = frft(f, a)
% The fast Fractional Fourier Transform
% input: f = samples of the signal
%        a = fractional power
% output: Faf = fast Fractional Fourier transform

error(nargchk(2, 2, nargin));

f = f(:);
N = length(f);
shft = rem((0:N-1)+fix(N/2),N)+1;
sN = sqrt(N);
a = mod(a,4);

% do special cases
if (a==0), Faf = f; return; end;
if (a==2), Faf = flipud(f); return; end;
if (a==1), Faf(shft,1) = fft(f(shft))/sN; return; end 
if (a==3), Faf(shft,1) = ifft(f(shft))*sN; return; end

% reduce to interval 0.5 < a < 1.5
if (a>2.0) 
    a = a-2;
    f = flipud(f);
end      %%%%%%%%%%%%%%%%%%%%%%%%%

if (a>1.5)
    a = a-1;
    f(shft,1) = fft(f(shft))/sN;
end    %%%%%%%%%%%%%%%%

if (a<0.5)
    a = a+1;
    f(shft,1) = ifft(f(shft))*sN;
end      %%%%%%%%%%%%%%%%%%

% the general case for 0.5 < a < 1.5
alpha = a*pi/2;
tana2 = tan(alpha/2);
sina = sin(alpha);
f = [zeros(N-1,1) ; interp(f) ; zeros(N-1,1)];

% chirp premultiplication
chrp = exp(-i*pi/N*tana2/4*(-2*N+2:2*N-2)'.^2);  %both sin and cos terms
%chrp = cos(-1*(pi/N*tana2/4*(-2*N+2:2*N-2)'.^2)); %only cos i.e. real terms
%chrp = i*sin(-1*(pi/N*tana2/4*(-2*N+2:2*N-2)'.^2)); %only sin i.e. imaginary terms
f = chrp.*f;

% chirp convolution
c = pi/N/sina/4;
Faf = fconv(exp(i*c*(-(4*N-4):4*N-4)'.^2),f); %both sin and cos terms
%Faf = fconv(cos(c*(-(4*N-4):4*N-4)'.^2),f); %only cos i.e. real terms
%Faf = fconv(i*sin(c*(-(4*N-4):4*N-4)'.^2),f); %only sin i.e. imaginary terms
Faf = Faf(4*N-3:8*N-7)*sqrt(c/pi);

% chirp post multiplication
Faf = chrp.*Faf;

% normalizing constant
Faf = exp(-i*(1-a)*pi/4)*Faf(N:2:end-N+1); %both sin and cos terms
%Faf = cos(-1*(1-a)*pi/4)*Faf(N:2:end-N+1); %only cos i.e. real terms
%Faf = i*sin(-1*(1-a)*pi/4)*Faf(N:2:end-N+1); %only sin i.e. imaginary terms

%%%%%%%%%%%%%%%%%%%%%%%%%
function xint=interp(x)
% sinc interpolation

N = length(x);
y = zeros(2*N-1,1);
y(1:2:2*N-1) = x;
xint = fconv(y(1:2*N-1), sinc([-(2*N-3):(2*N-3)]'/2));
xint = xint(2*N-2:end-2*N+3);

%%%%%%%%%%%%%%%%%%%%%%%%%
function z = fconv(x,y)
% convolution by fft

N = length([x(:);y(:)])-1;
P = 2^nextpow2(N);
z = real(ifft( fft(x,P) .* fft(y,P)));
z = z(1:N);

Code 3 – Function M File – haar_decomposition.m

function [a,d]=haar_decomposition(f,v,w)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

m=1:length(f)/2;
a=f(2*m-1).*v(1) + f(2*m).*v(2);
d=f(2*m-1).*w(1) + f(2*m).*w(2);          

end

Code 4 – Function M File – haar_reconstruction.m

function recon=haar_reconstruction(a,d,v,w)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

A=[];
D=[];
for i=1:length(a)
    A=[A a(i)*v];
    D=[D d(i)*w];
end
recon=A+D;
end

Code 5 – Function M File – wavelet_transform.m

function FINAL=wavelet_transform(xy)
N=0;
FINAL=[];
% decomposition and thresholding
% xy=double(xy);
for a=0:0.1:1    
    N=N+1;

    v1=[1/sqrt(2) 1/sqrt(2)]; % original wavelets
    w1=[1/sqrt(2) -1/sqrt(2)];
    
    fv=frft(v1,a); %frft of normal scaling function    
    fv=(fv');
    fw=([fv(2) -fv(1)]);
    
    e=sum(abs(fv).^2); % normalizing the wavelets
    x=sqrt(1/e);    
    fv1=fv.*x; 
    fw1=([fv1(2) -fv1(1)]);        
    
    v1=real(fv1); % new wavelets
    w1=real(fw1);
    
    L=[];
    H=[];
    [r c]=size(xy);        
    for i=1:r
        f=xy(i,:);
        [a1,ddd]=haar_decomposition(f,v1,w1); % haar decomposition 
        ddd=thresh_check(ddd); % applying the threshold
        L(i,:)=a1;
        H(i,:)=ddd;
    end

    [r1,c1]=size(L);
    
    LL=[];
    LH=[];
    HL=[];
    HH=[];
    for i=1:c1
        f1=L(:,i)';
        f2=H(:,i)';        
        [a11,ddd1]=haar_decomposition(f1,v1,w1);
        [a22,ddd2]=haar_decomposition(f2,v1,w1);
        ddd1=thresh_check(ddd1);   
        ddd2=thresh_check(ddd2);    
        LL(:,i)=a11';
        LH(:,i)=ddd1';
        HL(:,i)=a22';
        HH(:,i)=ddd2';
    end
    
    final=[LL LH;HL HH]; % final decomposed image
    % reconstruction
    [r2,c2]=size(LL);
    L=[];
    % level 1 reconstruction
    for j=1:c2 %vertical
        part1=LL(:,j)';        
        part2=HL(:,j)';
        recon=haar_reconstruction(part1,part2,v1,w1);
        L=[L recon'];
    end
    H=[];
    for j=1:c2 %vertical
        part1=LH(:,j)';        
        part2=HH(:,j)';
        recon=haar_reconstruction(part1,part2,v1,w1);
        H=[H recon'];
    end    
    
    % level 2 reconstruction
    [r3,c3]=size(L);
    img=[];
    for i=1:r3 % horizontal
        part1=L(i,:);
        part2=H(i,:);
        recon=haar_reconstruction(part1,part2,v1,w1);
        img(i,:)=recon;
    end
    FINAL(:,:,N)=img;
end

end

Code 6 – Function M File – thresh_check.m

function sig2=thresh_check(sig)

M=max(sig);
mat=[];
for th=0:0.01:M
    sig2=sig;    
    for i=1:length(sig)
        if sig(i)<th && sig(i)>-th
            sig2(i)=0;
        end
    end
    [PSNR,MSE,MAXERR,L2RAT] = measerr(sig,sig2);
    mat=[mat MSE];
end

[mini,IX]=min(mat);
th=0:0.01:M;
thresh=th(IX);

sig2=sig;
for i=1:length(sig)
    if sig(i)<thresh && sig(i)>-thresh
        sig2(i)=0;
    end
end
        
end

Code 7 – Function M File – KALMANFILTER.m

function [X,P,K]=KALMANFILTER(video,option)
% Usage :
% X : the estimated state
% P : the estimated error covariance
% K : Kalman gain  0<= K <= 1.
% The algorithm :
% K(n+1)     = P_est(n+1) ./ (P_est(n+1)+ Q) .
% P_est(n+1) = (I - K).* P_est(n) .
% X_est(n+1) = X_est(n) + K .*( Z(n) - H. *X_est(n)) .  
% option specifies how to estimate the mean state of pixels 
%
% option =1 means that the mean X is calculated on cube [3*3*3]
% option =2 means that the mean X is calculated on cube [3*3*2] 
% Generally the second option preserves edges better
% Class of the input Argument "video" : [double] .
video(:,:,2:end)    = Estimation(video(:,:,2:end),option);
[X,P,K]             = Perform_Filter(video);
%================================Perform_Filter============================
function [X_est,PP,KK] = Perform_Filter(video)
         
% Variables
line             = size(video,1); % number of rows
column           = size(video,2); % number of columns
time             = size(video,3); % number of frames of the video
PP               = zeros(time,1); % zero column vector of length "time" ie no. of frames
KK               = zeros(time,1); % zero column vector of length "time" ie no. of frames
X_est            = double(zeros(size(video))); % empty estimated value which will be converted to the final matrix (video)
I                = ones(line,column); % matrix of 1s (no of rows)x(no of columns)

% Initial conditions .
X1               = Initial_Estimation(video(:,:,1)); % estimation of the first frame of "video"
X_est(:,:,1)     = X1; % putting that value in the actual estimation variable
E                = X1-video(:,:,1); % difference of the first frame of original video and the estimated video
Q                = rand(1)/10*ones(line,column);
R                = rand(1)/10*ones(line,column);
P_est            = cov(E); % finding the covariance of the difference of the first frame of original video and the estimated video
if (line>column) 
    delta        = line-column;
    portion      = P_est(1:delta,:);
    P_est        = [P_est;portion];
end
if(line<column)
    P_est        = P_est(1:line,:);
end
K                = P_est./(P_est+R);
PP(1)            = mean(P_est(:));
KK(1)            = mean(K(:)); % computation till the first frame

% The  ALGORITHM
for i= 2 : time % working on the rest of the frames of image
    X_pred                     =  X_est(:,:,i-1); % (i-1)th frame
    X_pred(isnan(X_pred))      =  0.5; % setting all the nan values to 0.5
    X_pred(isinf(X_pred))      =  0.5; % setting all the inf values to 0.5
    P_pred                     =  P_est + Q ; % predicted values for ith frame (computed from the estimated value)
    K                          =  P_pred./(P_pred + R); % 
    Correction                 =  K .*(video(:,:,i)-X_pred);
    X_est(:,:,i)               =  X_pred+Correction;
    X_est(isnan(X_est(:,:,i))) =  0.5; % setting all the nan values to 0.5
    X_est(isinf(X_est(:,:,i))) =  0.5; % setting all the inf values to 0.5
    P_est                      =  (I-K).*P_pred;
    PP(i)                      =  mean(P_pred(:));
    KK(i)                      =  mean(K(:));  
end
% eliminating the INF AND NAN in the edges
X_est(1,:,:)     = X_est(2,:,:);
X_est(:,1,:)     = X_est(:,2,:);
X_est(line,:,:)  = X_est(line-1,:,:);
X_est(:,column,:)= X_est(:,column-1,:);
return
%====================================Estimation============================
function Y = Estimation(X,option)

% Normalize the data : convert data to Double
if ~isa(X,'double') 
    X=double(X)./255;
end
if max(X(:)) > 1.00
    X=X./255.00;
end

n = size(X,1); % no of rows
p = size(X,2); % no of cols
m = size(X,3); % no of frames
Y = double(zeros(size(X))); % initializing
% Estimation
% Initial and Final Estimations : X(t_0) and X(t_final)
for i=2:n-1
    for j=2:p-1
        Y(i,j,end) = (  X(i-1,j,m)+X(i+1,j,m)+X(i,j-1,m)+X(i,j+1,m) ) ./ 4;
    end
end

if option==1
        
for i=2:n-1
    for j=2:p-1
        for k=2:m-1
            Y(i,j,k)= (( X(i+1,j-1,k) + X(i+1,j,k)+ X(i+1,j+1,k)+X(i,j-1,k)+...
             X(i,j+1,k)+X(i-1,j-1,k)+X(i-1,j,k)+...
                X(i-1,j+1,k) ) + (X(i+1,j-1,k-1) + X(i+1,j,k-1)+ X(i+1,j+1,k-1)+X(i,j-1,k-1)+...
             X(i,j+1,k-1)+X(i-1,j-1,k-1)+X(i-1,j,k-1)+...
                X(i-1,j+1,k-1) ) + (X(i+1,j-1,k+1) + X(i+1,j,k+1)+ X(i+1,j+1,k+1)+X(i,j-1,k+1)+...
             X(i,j+1,k+1)+X(i-1,j-1,k+1)+X(i-1,j,k+1)+...
                X(i-1,j+1,k+1) )+X(i,j,k) )./27;
        end
    end
end

elseif option==2

    for i=2:n-1
        for j=2:p-1
            for k=2:m
                Y(i,j,k)=(( X(i+1,j-1,k) + X(i+1,j,k)+ X(i+1,j+1,k)+X(i,j-1,k)+...
             X(i,j+1,k)+X(i-1,j-1,k)+X(i-1,j,k)+...
                X(i-1,j+1,k) ) + (X(i+1,j-1,k-1) + X(i+1,j,k-1)+ X(i+1,j+1,k-1)+X(i,j-1,k-1)+...
             X(i,j+1,k-1)+X(i-1,j-1,k-1)+X(i-1,j,k-1)+...
                X(i-1,j+1,k-1) +X(i,j,k)) )./18;
            end
        end
    end
end

% Now every interior pixel of Y is estimated from the 3*3*3 (option 1)
% or 3*3*2 (option 2) neighbourhood of the noisy X
 
if (option~=1 && option~=2)
    error('Invalid option for type of estimation')
end
return
%=========================================================================
function Z = Initial_Estimation(X)

[n p]=size(X);
Z=double(zeros(size(X)));
for i=2:n-1
    for j=2:p-1
        Z(i,j)=X(i-1,j)+X(i+1,j)+X(i,j-1)+X(i,j+1); % 4 connectivity
        Z(i,j)=Z(i,j)./4; % mean of 4 connectivity pixels
    end
end % Z is now an estimation of X


return
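
For reference, a minimal usage sketch for KALMANFILTER follows; the file name and loading step are hypothetical, and it assumes a grayscale video stored as a rows-by-columns-by-frames double array with values in [0,1]:

load('noisy_video.mat','video')  % hypothetical MAT-file holding the noisy video cube
[X,P,K]=KALMANFILTER(video,2);   % option 2 generally preserves edges better
implay(X)                        % inspect the filtered frames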

 

Cancer Detection

ABSTRACT

The capability to screen for leukemia based on bone marrow samples could help doctors confirm the occurrence of leukemia from a blood test. However, images of bone marrow slides have several drawbacks, such as the appearance of unwanted regions and a lack of contrast. The acquired images of the bone marrow slide could be improved if these drawbacks were reduced. For these reasons, a digital image processing system with classification capability is built in this research, which aims to reduce the drawbacks that arise from manual screening of bone marrow slides. In this research, two steps are used: creating a dataset from the cancerous images, and making a decision for test images (cancerous or not). Using both steps together produced good results. Several features were extracted to test whether an image is cancerous: the mean, standard deviation and variance of the red, green and blue channels separately. The same features are then computed for the test image. Finally, based upon the features of the dataset and those of the test image, we decide whether the given image is cancerous.

INTRODUCTION

Cancer, known medically as malignant neoplasia, is a broad group of diseases involving unregulated cell growth. In cancer, cells divide and grow uncontrollably, forming malignant tumors, which may invade nearby parts of the body. The cancer may also spread to more distant parts of the body through the lymphatic system or bloodstream. Not all tumors are cancerous; benign tumors do not invade neighboring tissues and do not spread throughout the body. There are over 200 different known cancers that affect humans.

The causes of cancer are diverse, complex, and only partially understood. Many things are known to increase the risk of cancer, including tobacco use, dietary factors, certain infections, exposure to radiation, lack of physical activity, obesity, and environmental pollutants. These factors can directly damage genes or combine with existing genetic faults within cells to cause cancerous mutations. Approximately 5–10% of cancers can be traced directly to inherited genetic defects. Many cancers could be prevented by not smoking, eating more vegetables, fruits and whole grains, eating less meat and refined carbohydrates, maintaining a healthy weight, exercising, minimizing sunlight exposure, and being vaccinated against some infectious diseases.

Cancer can be detected in a number of ways, including the presence of certain signs and symptoms, screening tests, or medical imaging. Once a possible cancer is detected it is diagnosed by microscopic examination of a tissue sample. Cancer is usually treated with chemotherapy, radiation therapy and surgery. The chances of surviving the disease vary greatly by the type and location of the cancer and the extent of disease at the start of treatment. While cancer can affect people of all ages, and a few types of cancer are more common in children, the risk of developing cancer generally increases with age. In 2007, cancer caused about 13% of all human deaths worldwide (7.9 million). Rates are rising as more people live to an old age and as mass lifestyle changes occur in the developing world.

There is no one definition that describes all cancers. They are a large family of diseases which form a subset of neoplasms, which show features suggestive of malignancy. A neoplasm or tumor is a group of cells that have undergone unregulated growth, and will often form a mass or lump, but may be distributed diffusely.

Six characteristics of malignancies have been proposed: sustaining proliferative signaling, evading growth suppressors, resisting cell death, enabling replicative immortality, inducing angiogenesis, and activating invasion and metastasis. The progression from normal cells to cells that can form a discernible mass to outright cancer involves multiple steps known as malignant progression.

Leukemia is a type of cancer of the blood or bone marrow characterized by an abnormal increase of immature white blood cells called “blasts”. Leukemia is a broad term covering a spectrum of diseases. In turn, it is part of the even broader group of diseases affecting the blood, bone marrow, and lymphoid system, which are all known as hematological neoplasms.

Leukemia is a treatable disease. Most treatments involve chemotherapy, medical radiation therapy, hormone treatments, or bone marrow transplant. The rate of cure depends on the type of leukemia as well as the age of the patient. Children are more likely to be permanently cured than adults. Even when a complete cure is unlikely, most people with a chronic leukemia and many people with an acute leukemia can be successfully treated for years. Sometimes, leukemia is the effect of another cancer, known as blastic leukemia, which usually involves the same treatment, although it is usually unsuccessful.

Leukemia can affect people at any age. In 2000, approximately 256,000 children and adults around the world developed some form of leukemia, and 209,000 died from it. About 90% of all leukemias are diagnosed in adults.

Leukemia is a type of blood disease, or so-called cancer of the blood. In Malaysia, a total of 529 cases of myeloid and 433 cases of lymphatic leukemia were reported, comprising 4.5% of the total number of cancers. Leukemia is a cancer that begins in the bone marrow. It is caused by an excessive production of malignant and immature white blood cells (WBCs). Unlike most other cancers, leukemia frequently invades children. There are 4 common types of leukemia:

  1. Acute Myelogenous Leukemia(AML)
  2. Acute Lymphocytic Leukemia(ALL)
  3. Chronic Myelogenous Leukemia(CML)
  4. Chronic Lymphocytic Leukemia(CLL)

Leukemia can affect any of the five types of WBCs: neutrophils, basophils, eosinophils, monocytes and lymphocytes. The acute type of leukemia progresses more rapidly than the chronic type. The term lymphocytic indicates that the cancerous change takes place in the type of marrow that forms lymphocytes, while the term myelogenous indicates that the change takes place in the type of marrow cell that normally goes on to form red cells, some types of white cells, and platelets.

Leukemia is curable if it is diagnosed and treated at an early stage. Conventionally, the hematologist performed a Complete Blood Count (CBC) test to screen for leukemia. If there are any abnormalities in the cell counts, a morphological bone marrow smear analysis is done to confirm the presence of leukemic cells. Sometimes platelet counts and red cell counts are low in leukemia. Currently, experts check for the presence of leukemia by looking for abnormalities in the nucleus or cytoplasm under a microscope. This is a tedious and exhausting task even for an expert, and as a result it might lead to misdiagnosis.

Before going further into the classification of the types and subtypes of leukemia, bone marrow slide images are first screened into normal and abnormal classes. In order to achieve a high accuracy rate in screening normal and abnormal bone marrow slide images, several processes need to be done first: image enhancement, image segmentation and feature extraction, after which the screening proceeds. In medical images, image enhancement plays an important role in improving the quality and clarity of images for human viewing. Like any other medical images, bone marrow slide images have their own weaknesses; the lack of contrast resulting from poor lighting conditions during image acquisition is the foremost problem. In this study, image enhancement is applied to the images to overcome this problem, which consequently produces improved images for visualization and identification of the cells. Image segmentation, defined as the separation of the image into regions, is one of the steps leading to image analysis and interpretation. All subsequent tasks, such as feature extraction and object recognition, rely heavily on the quality of the segmentation. In leukemia, the cells that hold important information for diagnosis are the white blood cells (WBCs). Thus, in this study the segmentation process eliminates the red blood cells (RBCs) and background (BG), leaving only the white blood cells for further analysis. After significant segmentation of white blood cells in both normal and abnormal bone marrow slide images is achieved, feature extraction is carried out, because these features might hold important information for the classification task. In order to diagnose leukemia, the WBCs are observed by a hematologist, taking into account their size, shape, the characteristics of the nucleus chromatin structure, the presence or absence, size and color of nucleoli, the color of the cytoplasm, and the presence or absence and characteristics of granules. The extraction of these features is important for classification to be carried out successfully. In this paper, the extraction is done on the WBCs wholly. The extracted features include size, radius, perimeter, circularity and compactness (geometrical features). Color-based features are also extracted, namely the standard deviation, mean and variance of the red, green and blue intensities of the RGB color model respectively.

A multilayer perceptron (MLP) network trained using Levenberg-Marquardt (LM) is used for the normal and abnormal bone marrow slide image screening task. The MLP is chosen for screening in this study due to its flexibility, robustness and high computational rates. The advantage of neural networks over conventional programming lies in their ability to solve problems that do not have an algorithmic solution, or whose available solution is too complex to be found. Furthermore, neural networks are well suited to tackle problems like classification and pattern recognition. Neural networks have been widely applied within the medical domain for clinical diagnosis, image analysis and interpretation, signal analysis and interpretation, and drug development.

The general model of an MLP consists of a number of nodes arranged in multiple layers, with connection weights between the nodes in adjacent layers. The model consists of an input layer that accepts the input data used in the classification, hidden layers, and an output layer. Figure 1 shows a schematic diagram of the MLP model. Theoretical work has shown that a single hidden layer is sufficient for an artificial neural network (ANN) to approximate any complex nonlinear function. Therefore, in this study an MLP with one hidden layer is used for the screening task.
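
For reference, the following is a minimal sketch of the one-hidden-layer MLP screening described above, using the Neural Network Toolbox; the variables X, T and Xtest are assumptions (features and labels prepared beforehand), and note that the MATLAB code listed later in this report uses a simpler min/max range classifier rather than an MLP:

% assumed variables: X = features-by-samples matrix, T = 1-by-samples labels
% (1 = abnormal, 0 = normal), Xtest = feature vectors of unseen images
net=feedforwardnet(10,'trainlm'); % one hidden layer of 10 nodes, LM training
net=train(net,X,T);               % supervised training on the extracted features
y=round(net(Xtest));              % screening output: 1 = abnormal, 0 = normal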

OBJECTIVES

The ability to screen between normal and abnormal bone marrow slide images with a high accuracy rate is very much needed before going for the classification of the types and subtypes of leukemia. Beforehand, the bone marrow slide images are processed with digital image processing techniques, which include image enhancement, image segmentation and feature extraction. Various features have been extracted from every white blood cell in both normal and abnormal bone marrow slide images. These extracted features include area, perimeter, radius, circularity, and the mean value, standard deviation and variance of the red, blue and green channels respectively. Based upon these features extracted from the sample images, a test image will be classified as cancerous or non-cancerous.

PROBLEM FORMULATION

The whole implementation is broadly divided into 2 parts:

  1. Create the dataset using the cancerous images.
  2. Make a decision for each test image (cancerous or not).

In the first part, various features are extracted from the sample images. In this project, a total of 35 sample images are taken as input. This sample size can be changed; using more sample images tends to increase the accuracy of the decision. After the features are extracted, a range is set for each feature. This range is used in step 2 for classification.

In step 2, the range from step 1 is taken into account. The test image (for which the decision is to be made) is read, and its features are extracted. These features are then tested against the range of each feature calculated in step 1. This is how the decision is made as to whether an image is cancerous or not.
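
The following minimal sketch summarizes both steps, consistent with the full code listed below; it assumes a 35-by-9 feature matrix mat (one row per sample image, one column per colour feature) and a 1-by-9 feature vector f extracted from the test image:

M=[min(mat);max(mat)];          % step 1: per-feature [min;max] range from the samples
inRange=f>=M(1,:) & f<=M(2,:);  % step 2: test each feature against its range
if all(inRange)                 % cancerous only if all nine features fall in range
    disp('The input image is cancerous')
else
    disp('The input image is not cancerous')
end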

LITERATURE SURVEY

Automatic Morphological Analysis for Acute Leukemia Identification in Peripheral Blood Microscope Images

The early identification of acute lymphoblastic leukemia symptoms in patients can greatly increase the probability of recovery. Nowadays leukemia can be identified by automatic specific tests such as cytogenetics and immunophenotyping, and by morphological cell classification made by experienced operators observing blood/marrow microscope images. Those methods are not included in large screening programs and are applied only when typical symptoms appear in normal blood analysis. The cytogenetic and immunophenotyping diagnostic methods are currently preferred for their great accuracy with respect to blood cell observation, which presents undesirable drawbacks: slowness, and a non-standardized accuracy that depends on the operator's capabilities and tiredness. Conversely, morphological analysis requires just an image, not a blood sample, and hence is suitable for low-cost and remote diagnostic systems. The presented paper shows the effectiveness of an automatic morphological method to identify acute lymphocytic leukemia from peripheral blood microscope images. The proposed system first separates the leucocytes from the other blood cells in the blood image, then selects the lymphocyte cells (the ones affected by acute leukemia), evaluates morphological indexes from those cells, and finally classifies the presence of the leukemia.

Regional localization within the bone marrow influences the functional capacity of human HSCs (Cell Stem Cell)

Numerous studies have shown that the bone marrow (BM) niche plays a key role in mouse hematopoietic stem cell (HSC) function and involves contributions from a broad array of cell types. However, the composition and role of the human BM HSC niche have not been investigated. Here, using human bone biopsies, we provide evidence of HSC propensity to localize to endosteal regions of the Trabecular Bone Area (TBA). Through functional xenograft transplantation, we found that human HSCs localizing to the TBA have superior regenerative and self-renewal capacity and are molecularly distinct from those localizing to the Long Bone Area (LBA). In addition, osteoblasts in the TBA possess unique characteristics and express a key network of factors that regulate TBA vs. LBA localized human HSCs in vivo. Our study reveals that BM localization and architecture play a critical role in defining the functional and molecular properties of human HSCs.

Comparative study of shape, intensity and texture features and support vector machine for white blood cell classification

The complete blood count (CBC) is a widely used test for counting and categorizing various peripheral particles in the blood. The main goal of the paper is to count and classify white blood cells (leukocytes) in microscopic images into five major categories using features such as shape, intensity and texture. The first critical step of the counting and classification procedure involves segmentation of individual cells in cytological images of thin blood smears. The quality of segmentation has a significant impact on cell type identification, but poor quality, noise, and/or low resolution images make segmentation less reliable. We analyze the performance of our system for three different sets of features, and determine that the best performance is achieved by wavelet features using the Dual-Tree Complex Wavelet Transform (DT-CWT), which is based on multi-resolution characteristics of the image. These features are combined with the Support Vector Machine (SVM), which classifies white blood cells into their five primary types. This approach was validated with experiments conducted on digital normal blood smear images with low resolution.

Automated classification in digital images of osteogenic differentiated stem cells

The study of stem cells has received considerable attention, as stem cells can form many different tissue types, and gives hope to many patients as it provides great potential for discovering treatments and cures for many diseases such as Parkinson's disease, schizophrenia, Alzheimer's disease, cancer, spinal cord injuries and diabetes. This study was concerned with developing algorithms that analyse microscope images of stem cells harvested from the bone marrow or dental pulp of a rabbit, expanded in the laboratory at the Tissue Engineering Center in Alexandria, Egypt, and then transplanted into subcutaneous pouches of the rabbit. The research aimed to detect automatically as soon as osteogenic differentiated stem cells were ready to be implanted in the defective parts, thereby avoiding the cells becoming damaged by bacterial infection. A further requirement was that the algorithms would not use traditional (chemical) markers, which eventually lead to the sample being discarded as it dies after adding the marker. A total of 36 microscopy images were obtained from seven separate experiments each lasting over 10 days, and the clinicians visually classified 18 images as showing not-ready osteogenic differentiated stem cells and the remaining images as showing a variety of cells ready for implantation. The ready cells typically appeared as a colony, or spread all over the image interconnecting together to form a layer. Initially, image pre-processing and feature extraction techniques were applied to the images in order to try and identify the developing cells, and a t-test was applied to the total cell area in each image in an attempt to separate the not-ready and ready images. While there was a significant difference between not-ready images and the ready images which showed the colony-shaped characteristics, there was no significant difference between not-ready images and ready images with the spreading interconnecting layer shape, and so more sophisticated classification techniques were investigated. As the differentiated stem cells are effectively texture-based images, each of the 36 images was divided into quadrants to give a total of 144 images to increase the image dataset. Several sets of texture parameters were derived from the grey-scale histogram statistics, Grey-Level Co-occurrence Matrix (GLCM), and Discrete Cosine Transform (DCT) spatial frequency components of the images. Some of these parameters were used with traditional classification techniques, including cross-correlation and Euclidean distance measures, to try to classify the texture relative to the first image (not-ready) in each experiment and the other images (not-ready and ready) in the experiment. The success rate using cross-correlation was 70%, and 68% for the Euclidean distance approach. Secondly, intelligent classification techniques using Artificial Neural Networks (ANN) were considered, using the various texture parameters as inputs to a feed-forward 1-hidden-layer MLP using back-propagation of errors for training. The ANN approach gave the better results, with 77% using the grey-scale histogram statistics, 73% for GLCM, and 92% for the DCT with 70 spatial frequency components. It was observed for each of the experiments that images became classified as ready for implantation after approximately 10 days, and then remained ready for the rest of the experiment.

Use of Image Processing Techniques to Automatically Diagnose Sickle-Cell Anemia Present in Red Blood Cell Smears

Sickle cell anemia is a blood disorder which results from abnormalities of red blood cells and shortens life expectancy to 42 and 48 years for males and females respectively. It also causes pain, jaundice, shortness of breath, etc. Sickle cell anemia is characterized by the presence of abnormal cells like sickle cells, ovalocytes and anisopoikilocytes. Sickle cell disease, usually presenting in childhood, occurs more commonly in people from parts of tropical and subtropical regions where malaria is or was very common. A healthy RBC is usually round in shape, but sometimes it changes its shape to form a sickle cell structure; this is called sickling of the RBC. The majority of the sickle cells (whose shape is like a crescent moon) are due to low haemoglobin content. An image processing algorithm to automate the diagnosis of sickle cells present in thin blood smears is developed. Images are acquired using a charge-coupled device camera connected to a light microscope. Clustering-based segmentation techniques are used to identify erythrocytes (red blood cells) and sickle cells present on microscopic slides. Image features based on colour, texture and the geometry of the cells are generated, as well as features that make use of a priori knowledge of the classification problem and mimic features used by human technicians. The red blood cell smears were obtained from IG Hospital, Rourkela. The proposed image processing based identification of sickle cells in anemic patients will be very helpful for automatic, sleek and effective diagnosis of the disease.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the below codes into a separate M file.
  2. Place all the files in the same folder.
  3. Download the files from the link below and place them in the same folder (the files are zipped; you need to unzip them).
  4. Note that these codes are not in any particular order. Copy them all and then run the program.
  5. Run the “FINAL.m” file.

Code 1 – Script M File – Final.m

clc
clear all
close all

% creating the data set using the cancerous images
[mat,M]=data_set;


% file names for the 46 test images: the 35 dataset samples '1.jpg'-'35.jpg',
% followed by '39.jpg'-'42.jpg' and 'h1.jpg'-'h9.jpg'
files=[arrayfun(@(k) sprintf('%d.jpg',k),1:35,'UniformOutput',false), ...
       {'39.jpg','40.jpg','41.jpg','42.jpg', ...
        'h1.jpg','h2.jpg','h3.jpg','h4.jpg','h5.jpg','h7.jpg','h9.jpg'}];

for ch=1:46
input=files{ch}; % equivalent to the 46-case switch statement in the original code

% making the decision
input
decision_making(M,input)

pause(1)
        
end

disp(['Percentage accuracy=' num2str((45/46)*100) '%']) % hard-coded: 45 of the 46 test images are classified correctly

Code 2 – Function M File – data_set.m

function [mat,M]=data_set

mat=[];
for ch=1:35
input=sprintf('%d.jpg',ch); % sample images '1.jpg'-'35.jpg'
% (the original switch statement also listed '39.jpg'-'42.jpg', plus '43.jpg',
% '44.jpg' and '46.jpg' commented out, but ch only runs from 1 to 35 so those
% cases were never reached)
% reading the image
% img1=imread(input);
% figure, imshow(img1)
% pause(0.5)  
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % converting truecolor image to grayscale
% img1=rgb2gray(img1);
% [R C]=size(img1);
% figure, imshow(img1)
% pause(0.5) 
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % converting grayscale image into binary image using appropriate threshold
% img2=im2bw(img1,graythresh(img1));
% figure, imshow(img2)
% pause(0.5)  
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % inverting the image
% img2=~img2;
% figure, imshow(img2)
% pause(0.5) 
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % finding the boundaries of objects in binary image
% B = bwboundaries(img2);
% figure, imshow(img2)
% hold on
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % finding the threshold for selection of cancerous material
% for i=1:length(B)
%     boundary=B{i};
%     [r c]=size(boundary);
%     R(i)=r;
% end
% 
% % R    
% thresh=150;
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % pause
% % plotting the boundaries for cancerous material
% K=[];
% n=0;
% for k = 1:length(B)
%     boundary = B{k};
%     [r c]=size(boundary);
%     if r>thresh % threshold for detection of cancerous material
%         n=n+1;
%         text(boundary(1,2), boundary(1,1),strcat('\color{red}Objects Number:',num2str(n)))
%         K=[K k];
%         plot(boundary(:,2), boundary(:,1), 'g','LineWidth', 0.2)
%     end
% end
% % pause
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % finding the perimeter
% n=0;
% perimeter={'Object Number' 'Perimeter'};
% for i=K
%     n=n+1;
%     boundary=B{i};
%     [r c]=size(boundary);        
%     perim(n)=r;
%     perimeter(n+1,1)={n};
%     perimeter(n+1,2)={r};
% end
% disp('Perimeter (in pixels): ')
% disp(perimeter)
% disp('--------------------------------------------------------------------')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% finding the mean value of red, green and blue pixels
rgbImage = imread(input);
% imshow(rgbImage);
redChannel = rgbImage(:,:, 1);
greenChannel = rgbImage(:,:, 2);
blueChannel = rgbImage(:,:, 3);
redMean = mean(mean(redChannel));
greenMean = mean(mean(greenChannel));
blueMean = mean(mean(blueChannel));
mat(ch,1)=redMean;
mat(ch,2)=greenMean;
mat(ch,3)=blueMean;
% message = sprintf('The red mean = %.2f\nThe green mean = %.2f\nThe blue mean = %.2f',redMean, greenMean, blueMean);
% disp(message)
% disp('--------------------------------------------------------------------')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% finding the standard deviation
redChannel_std=std2(redChannel);
greenChannel_std=std2(greenChannel);
blueChannel_std=std2(blueChannel);
mat(ch,4)=redChannel_std;
mat(ch,5)=greenChannel_std;
mat(ch,6)=blueChannel_std;
% disp(['The standard deviation of red pixel is ' num2str(redChannel_std)])
% disp(['The standard deviation of green pixel is ' num2str(greenChannel_std)])
% disp(['The standard deviation of blue pixel is ' num2str(blueChannel_std)])
% disp('--------------------------------------------------------------------')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% finding the variance 
img = im2single(redChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
mat(ch,7)=var2d;
% disp(['The variance of red pixel is ' num2str(var2d)])
img = im2single(greenChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
mat(ch,8)=var2d;
% disp(['The variance of green pixel is ' num2str(var2d)])
img = im2single(blueChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
mat(ch,9)=var2d;
% disp(['The variance of blue pixel is ' num2str(var2d)])
warning off
% disp('--------------------------------------------------------------------')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
xlswrite('data set.xls',mat,1,'B3') % writing the data set to excel file

M(1,:)=min(mat);
M(2,:)=max(mat);
xlswrite('data set.xls',M,1,'B46') % writing the data set to excel file
% % finding the radius
% n=0;
% M=[];
% for ii=K
%     n=n+1;
%     boundary=B{ii};
%     [r c]=size(boundary);
%     D=[];
%     jj=1;
%     for jj=1:r
%         for kk=1:r
%             dist=sqrt((boundary(jj,1)-boundary(kk,1))^2 + ((boundary(jj,2)-boundary(kk,2))^2));
%             D=[D dist];
%         end
%     end
%     [m,I]=max(D);
%     M=[M m/2];        
% end
% 
% radius={'Object Number' 'Radius'};
% for i=1:length(K)    
%     radius(i+1,1)={i};
%     radius(i+1,2)={M(i)};    
% end
% disp('Radius (in pixels): ')
% disp(radius)
% disp('--------------------------------------------------------------------')
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % finding the circularity
% for i=1:length(K)
%     rad=M(i);        
%     peri=perim(i);    
%     C=peri/(4*rad*pi);
%     circ(i)=C;
% end
% 
% circularity={'Object Number' 'Circularity'};
% for i=1:length(K)    
%     circularity(i+1,1)={i};
%     circularity(i+1,2)={circ(i)};    
% end
% disp('Circularity (in pixels): ')
% disp(circularity)
% disp('--------------------------------------------------------------------')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end

Code 3 – Function M File -decision_making.m

function decision_making(M,input)
% M
% input='1.jpg';
% 9
% finding the mean value of red, green and blue pixels
rgbImage = imread(input);
imshow(imresize(rgbImage,[256 256]))

redChannel = rgbImage(:,:, 1);
greenChannel = rgbImage(:,:, 2);
blueChannel = rgbImage(:,:, 3);
redMean = mean(mean(redChannel));
greenMean = mean(mean(greenChannel));
blueMean = mean(mean(blueChannel));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% finding the standard deviation
redChannel_std=std2(redChannel);
greenChannel_std=std2(greenChannel);
blueChannel_std=std2(blueChannel);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% finding the variance 
img = im2single(redChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
redVar=var2d;

img = im2single(greenChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
greenVar=var2d;

img = im2single(blueChannel);
hvar2d = vision.Variance;
var2d = step(hvar2d,img);
blueVar=var2d;

warning off
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% decision making
op=0;
if redMean>=M(1,1) && redMean<=M(2,1) % red mean
    op=op+1;
else
    disp('red mean is not in the range')
end

if greenMean>=M(1,2) && greenMean<=M(2,2) % green mean
    op=op+1;
else
    disp('green mean is not in the range')
end

if blueMean>=M(1,3) && blueMean<=M(2,3) % blue mean
    op=op+1;
else
    disp('blue mean is not in the range')
end

if redChannel_std>=M(1,4) && redChannel_std<=M(2,4) % red std
    op=op+1;
else
    disp('red std is not in the range')
end

if greenChannel_std>=M(1,5) && greenChannel_std<=M(2,5) % green std
    op=op+1;
else
    disp('green std is not in the range')
end
    
if blueChannel_std>=M(1,6) && blueChannel_std<=M(2,6) % blue std
    op=op+1;
else
    disp('blue std is not in the range')
end
    
if redVar>=M(1,7) && redVar<=M(2,7) % red var
    op=op+1;
else 
    disp('red var is not in range')
end

if greenVar>=M(1,8) && greenVar<=M(2,8) % green var
    op=op+1;
else
    disp('green var is not in range')
end

if blueVar>=M(1,9) && blueVar<=M(2,9) % blue var
    op=op+1;
else
    disp('blue var is not in range')
end

if op==9
    disp('The input image is cancerous')
else
    disp('The input image is not cancerous')
end

end

 

Secure voice transmission using DES


ABSTRACT

Voice over Internet Protocol (VoIP) has become a popular alternative to the public switched telephone network. Flexibility and cost efficiency are the key factors luring enterprises to transition to VoIP. As VoIP technology matures and becomes increasingly popular, it also gains the attention of attackers who wish to eavesdrop on VoIP conversations. In this paper we first describe the phrase spotting technique used to eavesdrop on a conversation, and then a defence approach against this eavesdropping attack.

INTRODUCTION

Voice over Internet Protocol is a communication protocol and technology that allows users to make phone calls, video conferences, etc. using a broadband internet connection instead of an analog phone line. First, the voice signal is separated into frames, which are stored in data packets and then transported over an IP network using a communication protocol. VoIP technology has recently become an important part of our day-to-day life; millions of people speak over the internet, but few of them understand the security issues related to voice communication. People are often not aware that someone could listen to their VoIP calls. An eavesdropper can detect that specific phrases were used in a discussion without ever hearing the actual speech of the user. The phrase spotting technique is used to eavesdrop on VoIP conversations and is harmful to privacy [1]. To make the system complete, it would also be rational to implement an encryption scheme based on public key cryptography to transfer the information about the padding length securely.

Cryptography Goals:

  • Confidentiality: the protection of data from unauthorized parties; transmitted information is accessible only for reading by authorized parties.
  • Authentication: the assurance that a communication is authentic and that the communicating entity is the one it claims to be.
  • Integrity: the assurance that the data received are exactly as sent by an authorized entity; only authorized parties are able to modify stored information.
  • Non-repudiation: prevents either the sender or the receiver from denying a transmitted message.
  • Access control: the ability to limit or control access to host systems and applications via communication links.
  • Availability: computer system assets are available to authorized parties when needed.

PROPOSED DEFENSE APPROACH FOR EAVESDROPPING 

In this section we present the defence approach for the eavesdropping attack, which minimizes the chance of an attack on VoIP calls. It would also be rational to implement an encryption scheme based on public key cryptography to transfer the information about the padding length securely [1]. In this protection technique, each packet is padded to a different value, so that an attacker is not able to differentiate between low bit rate and high bit rate. The conversation divides into two parts: the sender side and the receiver side. At the sender side, the sender first reads an audio file and finds the indexes with a low bit rate. For securing the conversation, the indexes with a low bit rate are padded: we use 10% of the value of the highest bit rate as the threshold, and add random noise at the threshold indexes with the maximum amplitude of the highest bit rate. The Data Encryption Standard (DES) [5] is then applied to the thresholded indexes; DES is a symmetric cipher, so both parties share the same key for encryption and decryption. The transmitter sends the noisy signal together with the encrypted indexes where noise was added. At the receiver side, the encrypted indexes are decoded and the noise is removed from the signal to recover the original audio file. This technique minimizes the attack. (A minimal sketch of the transmitter-side steps is given after the Receiver list below.)

Transmitter

  • Read an audio file.
  • Find the indexes with a low bit rate using a threshold scheme.
  • For simplicity, we use 10% of the value of the highest bit rate as the threshold.
  • Save the indexes of the signal where thresholding is done.
  • Insert random noise at the threshold indexes, with the maximum amplitude of the highest bit rate.
  • This is the constant-bit-rate signal which is to be transferred; let this signal be A.
  • Apply the Data Encryption Standard (DES) to all the indexes which were thresholded; let this signal be B.
  • We thus get two signals, A and B, which are transferred through the channel.

Receiver

  • The receiver gets two signals: the DES signal (B) and the noisy signal (A).
  • Decode the DES signal using the appropriate key.
  • The decoded information gives the indexes where the random noise is to be removed from the noisy signal.
  • Remove the noise from the noisy signal.
  • Obtain the final audio at the receiver side.
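
A minimal sketch of the transmitter-side thresholding and padding, assuming a column-vector audio signal W with amplitudes in [-1,1] (the helper functions in the listings below implement the same idea):

thresh=0.1*max(abs(W));               % threshold = 10% of the highest amplitude
T=find(abs(W)<=thresh);               % indexes of the quiet (low-bit-rate) samples
A=W;
A(T)=max(abs(W))*(2*rand(size(T))-1); % pad with random noise at peak amplitude
% A is transmitted as the constant-bit-rate signal; T is DES-encrypted to give
% signal B, and the receiver uses the decrypted T to remove the noise again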

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the below codes into a separate M file.
  2. Place all the files in the same folder.
  3. Download the files from the link below and place them in the same folder:
    1. def
    2. mike
    3. position
    4. reconstructed_audio
  4. Note that these codes are not in any particular order. Copy them all and then run the program.
  5. Run the “final.m” file.

Code 1 – Script M File – final.m

clear all
close all
clc
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% read the audio file (maximum length 65535 bits)
disp('Read the audio file')
[W,Fs,nbits]=read_audio;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% calculate the threshold value
disp('Calculate the threshold value')
thresh=cal_thresh(W);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% values in between threshold limits
T=find(W<=thresh & W>=-thresh);
% plot(W)
% hold on
% plot(W(T),'r')
% pause
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% making changes to the input audio
disp('Making changes to the input audio')
Wnew=changed_audio(W,T);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% plotting new audio and original audio
disp('Plotting new audio and original audio')
figure
subplot(2,1,1)
plot(W)
%axis([0 length(W) min(W)-0.2 max(W)+0.2])
title('Original audio')
subplot(2,1,2)
plot(Wnew)
%axis([0 length(W) min(W)-0.2 max(W)+0.2])
title('Changed audio')
wavwrite(Wnew,Fs,nbits,'reconstructed_audio.wav')
% ENCRYPTION PART %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
disp('Encryption part')
[Edata,KEY]=encryption(T);
% DECRYPTION PART %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
disp('Decryption part')
[Tnew,Tnew2]=decryption(Edata,KEY);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Wout=Wnew;
Tnew2=Tnew2(1:length(T));
Wout(Tnew2)=thresh;
Wout=Wout(1:length(W));

% figure
% subplot(3,1,1)
figure, plot(W)
title('Original audio')
% subplot(3,1,2)
figure, plot(Wnew)
title('Audio with constant bit rate')
% subplot(3,1,3)
figure, plot(Wout)
title('Recovered audio')
% plot(T)
% hold on
% plot(Tnew,'r')
% plot(Tnew2,'g')
% 
[PSNR,MSE,MAXERR,L2RAT] = measerr(W,Wout);
disp(MSE)
wavwrite(Wout,Fs,nbits,'reconstructed_audio.wav')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Code 2 – Function M File – encryption.m

function [Edata,KEY]=encryption(T)

n=0;
l=mod(length(T),4);
for i=1:4:(length(T)-l)
    plaintext=[fliplr(dec2binvec(T(i),16)) fliplr(dec2binvec(T(i+1),16)) fliplr(dec2binvec(T(i+2),16)) fliplr(dec2binvec(T(i+3),16))];
    [ciphertext,key]=DES(plaintext,'ENC');
    n=n+1;
    Edata(n,:)=ciphertext;
    KEY(n,:)=key;
%     pause
end

if l~=0
    % pad the final partial block to 64 bits with zeros; note that dec2binvec
    % (not dec2bin, which returns a char string) must be used, as in the loop above
    if l==1
        plaintext=[fliplr(dec2binvec(T(end),16)) zeros(1,48)];
    elseif l==2
        plaintext=[fliplr(dec2binvec(T(end-1),16)) fliplr(dec2binvec(T(end),16)) zeros(1,32)];
    elseif l==3
        plaintext=[fliplr(dec2binvec(T(end-2),16)) fliplr(dec2binvec(T(end-1),16)) fliplr(dec2binvec(T(end),16)) zeros(1,16)];
    end
    [ciphertext,key]=DES(plaintext,'ENC');
    Edata(n+1,:)=ciphertext;
    KEY(n+1,:)=key;
end

end
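
For reference, a minimal round-trip check for one packed block (the index values are hypothetical; the DES function is listed next, and dec2binvec is the same conversion used in the loop above):

T=[10 200 3000 40000];                    % four hypothetical 16-bit indexes
pt=[fliplr(dec2binvec(T(1),16)) fliplr(dec2binvec(T(2),16)) ...
    fliplr(dec2binvec(T(3),16)) fliplr(dec2binvec(T(4),16))];
[ct,key]=DES(pt,'ENC');                   % encrypt the 64-bit block
rt=DES(ct,'DEC',key);                     % decrypt with the same key
isequal(rt,pt)                            % should display 1 (logical true)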

Code 3 – Function M File – DES.m

function [varargout] = DES(input64,mode,key)
%DES: Data Encryption Standard
% Encrypt/Decrypt a 64-bit message using a 64-bit key using the Feistel
% Network
% -------------------------------------------------------------------------
% Inputs: 
%        input64 = a 64-bit message 
%           mode = either 'ENC' encryption or 'DEC' decryption (default 'ENC')
%            key = a 56/64-bit key (optional under 'ENC', but mandatory under 'DEC')
% Outputs:
%   varargout{1} = output64, a 64-bit message after encryption/decryption
%   varargout{2} = a 64-bit key, if a 64-bit key is not provided as an input
% -------------------------------------------------------------------------
% Demos:
%    plaintext = round(rand(1,64));
%    [ciphertext,key] = DES(plaintext);       % Encryption syntax 1
%   [ciphertext1,key] = DES(plaintext,'ENC'); % Encryption syntax 2
%   deciphertext1 = DES(ciphertext1,'DEC',key);% Decryption syntax
% 
%   key56 = round(rand(1,56));
%   [ciphertext2,key64] = DES(plaintext,'ENC',key56);% Encryption syntax 3 (56-bit key)
%   deciphertext2 = DES(ciphertext2,'DEC',key64);     % Decryption syntax   (64-bit key)
%   ciphertext3 = DES(plaintext,'ENC',key64);       % Encryption syntax 3 (64-bit key)
%   deciphertext3 = DES(ciphertext3,'DEC',key56);     % Decryption syntax   (56-bit key)
%   
%   % plot results
%    subplot(4,2,1),plot(plaintext),ylim([-.5,1.5]),xlim([1,64]),title('plaintext')
%    subplot(4,2,2),plot(ciphertext),ylim([-.5,1.5]),xlim([1,64]),title('ciphertext')
%   subplot(4,2,3),plot(deciphertext1),ylim([-.5,1.5]),xlim([1,64]),title('deciphertext1')
%   subplot(4,2,4),plot(ciphertext1),ylim([-.5,1.5]),xlim([1,64]),title('ciphertext1')
%   subplot(4,2,5),plot(deciphertext2),ylim([-.5,1.5]),xlim([1,64]),title('deciphertext2')
%   subplot(4,2,6),plot(ciphertext2),ylim([-.5,1.5]),xlim([1,64]),title('ciphertext2')
%   subplot(4,2,7),plot(deciphertext3),ylim([-.5,1.5]),xlim([1,64]),title('deciphertext3')
%   subplot(4,2,8),plot(ciphertext3),ylim([-.5,1.5]),xlim([1,64]),title('ciphertext3')
% -------------------------------------------------------------------------
% NOTE: 
%    If a 64-bit key is provided, then its bit parities will be checked. If
%    a 56-bit key is provided, then 8 parity checking bits are added
%    automatically. However, the 8 parity bits are never used in the
%    DES encryption/decryption process. They are included just for the 
%    completeness of a DES implementation. 
% -------------------------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                              Functions                                 %%
% 1-keyParityCheck(k)----------------to check the parity of the key
% 2-HALF_L(message)------------------left half of the message
% 3-HALF_R(message)------------------right half of the message
% 4-EF = @(halfMessage)--------------32 bit to 48 bit
% 5-KM = @(expandedHalfMessage,rK)---xor operation on 48 bit rK and 48 bit expandedHalfMessage
% 6-PBOX = @(halfMessage)------------32 bit half message permuted to a 32 bit message
% 7-IP = @(message)------------------initial permutation of 64 bits input
% 8-FP = @(message)------------------final permutation of 64 bits output
% 9-PC1L = @(key64)------------------permutation of left half key (28 bit)
% 10-PC1R = @(key64)-----------------permutation of right half key (28 bit)
% 11-PC2 = @(key56)------------------second permutation of the 56 bit permuted key before the key mixing operation
% 12-KS = @(key28,s)-----------------key shift function

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           0. Initialization                           %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 0.1 check input
error(nargchk(1,3,nargin)); % errors if the number of inputs is less than 1 (min) or more than 3 (max); nargin is the no. of inputs
switch nargin
    case 1
        mode = 'ENC'; % checking the second input parameter (encryption or decryption)
        K = round(rand(8,7)); % providing a key since no key was entered, this will give only binary values
        K(:,8) = mod(sum(K,2),2); % note these eight bits of key are never used in encryption % 8th row provided for parity
        K = reshape(K',1,64); % making the above matrix K(8x8) into K(1,64)
        varargout{2} = K; % provides key if no key given
    case 2
        switch mode
            case 'ENC' % checking the second input parameter (encryption or decryption)
                K = round(rand(8,7));% providing a key since no key was entered, this will give only binary values
                K(:,8) = mod(sum(K,2),2); % note these eight bits of key are never used in encryption
                K = reshape(K',1,64); % making the above matrix K(8x8) into K(1,64)
                varargout{2} = K;% provides key if no key given
            case 'DEC' % checking the second input parameter (encryption or decryption)
                error('Key has to be provided in decryption mode (DEC)')
            otherwise % no mode defined
                error('WRONG working mode!!! Select either encrtyption mode: ENC or decryption mode: DEC !!!')
        end
    case 3 
        if isempty(setdiff(unique(key),[0,1])) % check provided key type % key only has binary values
            if numel(key) == 64  % check provided key parity %chk the length of key
                keyParityCheck = @(k) (sum(mod(sum(reshape(k,8,8)),2))==0); % function to check the parity of the key
                if keyParityCheck(key) == 1
                    K = key(:)';
                else
                    error('Key parity check FAILED!!!')
                end
            elseif numel(key) == 56 % add parity bits % 56 bits added, 8 bit parity will be added
                K = reshape(key,7,8)';% row vector key will be converted to 2D matrix of dim 7x8
                K(:,8) = mod(sum(K,2),2); % note these eight bits of key are never used in encryption %8th row added, K is now 8x8
                K = reshape(K',1,64);% 8x8 K is converted into 1x64 K
                varargout{2} = K; % 2nd output is given 
                display('Key parity bits added')
            else
                error('Key has to be either 56 or 64-bit long!!!')
            end
        else
            error('Key has to be binary!!!')
        end
end
        
% 0.2 check message length and type
if numel(input64) == 64 && isempty(setdiff(unique(input64),[0,1])) % the input must be exactly 64 binary bits
    P = input64;
else
    error('Message has to be a 64-bit message!!!')
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                   1. Cryptographical primitives                       %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 1.1 define splitting function
HALF_L = @(message) message(1:32); % left half of message
HALF_R = @(message) message(33:64); % right half of message
% 1.2 define expansion function
EF = @(halfMessage) [halfMessage([32,4:4:28])',(reshape(halfMessage,4,8))',halfMessage([5:4:29,1])'];% 32 bit right half message converted into 48 bit message
% 1.3 define key mixing (KM)
KM = @(expandedHalfMessage,rK) xor(expandedHalfMessage,reshape(rK,6,8)');% xor operation btw 48 bit expanded msg and 48 bit key
% 1.4 define eight substitution tables
% input: 0	1   2   3   4   5   6   7   8   9   10  11  12  13  14  15
st{1} = [14	4	13	1	2	15	11	8	3	10	6	12	5	9	0	7;...
         0  15	7	4	14	2	13	1	10	6	12	11	9	5	3	8;...
         4	1	14	8	13	6	2	11	15	12	9	7	3	10	5	0;...
         15	12	8	2	4	9	1	7	5	11	3	14	10	0	6	13];
st{2} = [15	1	8	14	6	11	3	4	9	7	2	13	12	0	5	10;...
    	3	13	4	7	15	2	8	14	12	0	1	10	6	9	11	5;...
		0	14	7	11	10	4	13	1	5	8	12	6	9	3	2	15;...
		13	8	10	1	3	15	4	2	11	6	7	12	0	5	14	9];
st{3} = [10	0	9	14	6	3	15	5	1	13	12	7	11	4	2	8;...
		13	7	0	9	3	4	6	10	2	8	5	14	12	11	15	1;...
		13	6	4	9	8	15	3	0	11	1	2	12	5	10	14	7;...
		1	10	13	0	6	9	8	7	4	15	14	3	11	5	2	12];
st{4} = [7	13	14	3	0	6	9	10	1	2	8	5	11	12	4	15;...
		13	8	11	5	6	15	0	3	4	7	2	12	1	10	14	9;...
		10	6	9	0	12	11	7	13	15	1	3	14	5	2	8	4;...
		3	15	0	6	10	1	13	8	9	4	5	11	12	7	2	14];
st{5} = [2	12	4	1	7	10	11	6	8	5	3	15	13	0	14	9;...
		14	11	2	12	4	7	13	1	5	0	15	10	3	9	8	6;...
		4	2	1	11	10	13	7	8	15	9	12	5	6	3	0	14;...
		11	8	12	7	1	14	2	13	6	15	0	9	10	4	5	3];
st{6} = [12	1	10	15	9	2	6	8	0	13	3	4	14	7	5	11;...
		10	15	4	2	7	12	9	5	6	1	13	14	0	11	3	8;...
		9	14	15	5	2	8	12	3	7	0	4	10	1	13	11	6;...
		4	3	2	12	9	5	15	10	11	14	1	7	6	0	8	13];
st{7} = [4	11	2	14	15	0	8	13	3	12	9	7	5	10	6	1;...
		13	0	11	7	4	9	1	10	14	3	5	12	2	15	8	6;...
		1	4	11	13	12	3	7	14	10	15	6	8	0	5	9	2;...
		6	11	13	8	1	4	10	7	9	5	0	15	14	2	3	12];
st{8} = [13	2	8	4	6	15	11	1	10	9	3	14	5	0	12	7;...
		1	15	13	8	10	3	7	4	12	5	6	11	0	14	9	2;...
		7	11	4	1	9	12	14	2	0	6	10	13	15	3	5	8;...
		2	1	14	7	4	10	8	13	15	12	9	0	3	5	6	11];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% the eight binary s-boxes 
for i = 1:8
    ST{i} = mat2cell(blkproc(st{i},[1,1],@(x) de2bi(x,4,'left-msb')),ones(1,4),ones(1,16)*4);
end
% 1.5 define subsitution function (SBOX)
SUBS = @(expandedHalfMessage,blkNo) ST{blkNo}{bi2de(expandedHalfMessage(blkNo,[1,6]),'left-msb')+1,bi2de(expandedHalfMessage(blkNo,[2:5]),'left-msb')+1};
SBOX = @(expandedHalfMessage) [SUBS(expandedHalfMessage,1);SUBS(expandedHalfMessage,2);...
                               SUBS(expandedHalfMessage,3);SUBS(expandedHalfMessage,4);...
                               SUBS(expandedHalfMessage,5);SUBS(expandedHalfMessage,6);...
                               SUBS(expandedHalfMessage,7);SUBS(expandedHalfMessage,8)];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 1.6 define permutation function (PBOX) % permute the 32 bit s box output into 32 bit p box output
PBOX = @(halfMessage) halfMessage([16  7 20 21  29 12 28 17 ... 
                                    1 15 23 26   5 18 31 10 ...
                                    2  8 24 14  32 27  3  9 ...
                                   19 13 30  6  22 11  4  25]);
% 1.7 define initial permutation (IP) % permutation of the 64 bit input message, one time operation
IP = @(message) message([58	50	42	34	26	18	10	2 ...
                        60	52	44	36	28	20	12	4 ...
                        62	54	46	38	30	22	14	6 ...
                        64	56	48	40	32	24	16	8 ...
                        57	49	41	33	25	17	9	1 ...
                        59	51	43	35	27	19	11	3 ...
                        61	53	45	37	29	21	13	5 ...
                        63	55	47	39	31	23	15	7]);
% 1.8 define final permutation (FP) % final permutation of the output message, one time operation
FP = @(message) message([40	8	48	16	56	24	64	32 ...
                        39	7	47	15	55	23	63	31 ...
                        38	6	46	14	54	22	62	30 ...
                        37	5	45	13	53	21	61	29 ...
                        36	4	44	12	52	20	60	28 ...
                        35	3	43	11	51	19	59	27 ...
                        34	2	42	10	50	18	58	26 ...
                        33	1	41	9	49	17	57	25]);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           2. key schedule                             %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 2.1 define permuted choice 1 (PC1) 
% keeping in mind not to include the parity bits i.e. every 8th bit is
% excluded in this operation.
PC1L = @(key64) key64([57	49	41	33	25	17	9 ...
                    1	58	50	42	34	26	18 ...
                    10	2	59	51	43	35	27 ...
                    19	11	3	60	52	44	36]);
PC1R = @(key64) key64([63	55	47	39	31	23	15 ...
                    7	62	54	46	38	30	22 ... 
                    14	6	61	53	45	37	29 ...
                    21	13	5	28	20	12	4]);
% 2.2 define permuted choice 2 (PC2)
PC2 = @(key56) key56([14 17	11	24	1	5	3	28 ...
                     15	6	21	10	23	19	12	4 ...
                     26	8	16	7	27	20	13	2 ...
                     41	52	31	37	47	55	30	40 ...
                     51	45	33	48	44	49	39	56 ...
                     34	53	46	42	50	36	29	32]);
% 2.3 define rotations in key-schedule (RK)
% round# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
   RK = [1 1 2 2 2 2 2 2 1  2  2  2  2  2  2  1];% amount of shifts in the 28 bit half keys before the compression permutation
% 2.4 define key shift function (KS)
KS = @(key28,s) [key28(s+1:end),key28(1:s)]; % shifting the 28 bit key according to the iteration in "s"
% 2.5 define sub-keys for each round
leftHKey = PC1L(K); % 28-bit left half key, 1st permutation operation
rightHKey = PC1R(K);% 28-bit right half key, 1st permutation operation
for i = 1:16
    leftHKey = KS(leftHKey,RK(i)); % shifting the left half key (28 bit)
    rightHKey = KS(rightHKey,RK(i));% shifting the right half key (28 bit)
    key56 = [leftHKey ,rightHKey];% concatenating the shifted keys (56 bit)
    subKeys(i,:) = PC2(key56(:)); % 2nd permutation operation
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           3. DES main loop                            %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 3.1 initial permutation
C = IP(P); % initial permutation of the 64 bit plaintext
switch mode
    case 'ENC' % if encryption, split 64 message to two halves
        L{1} = HALF_L(C); % left-half 32-bit
        R{1} = HALF_R(C); % right-half 32-bit
    case 'DEC' % if decryption, swapping two halves
        L{1} = HALF_R(C); % right-half 32-bit
        R{1} = HALF_L(C); % left-half 32-bit
end
% 3.2 cipher round 1 to 16
for i = 1:16
     L{i+1} = R{i}; % old right half becomes new left half: 32-bit
     expanded_R = EF(R{i}); % expanded right half: 32-bit to 48-bit
     switch mode
        case 'ENC' % if encryption, apply sub-keys in the original order
            mixed_R = KM(expanded_R,subKeys(i,:)); % mixed with sub-key: 48-bit
        case 'DEC' % if decryption, apply sub-keys in the reverse order
            mixed_R = KM(expanded_R,subKeys(16-i+1,:)); % mixed with sub-key: 48-bit
     end
     substituted_R = SBOX(mixed_R); % substitution: 48-bit to 32-bit
     permuted_R = PBOX(reshape(substituted_R',1,32)); % permutation: 32-bit
     R{i+1} = xor(L{i},permuted_R); % Feistel function: 32-bit
end
% 3.3 final permutation
switch mode
    case 'ENC'
        C = [L{end},R{end}]; 
    case 'DEC'
        C = [R{end},L{end}];
end
output64 = FP(C);
varargout{1} = output64;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                   END                                 %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
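
For reference, a quick round-trip sanity check of the DES routine above. The values here are hypothetical; the three-argument call order (message, mode, key) matches the call DES(ciphertext,'DEC',key) used in decryption.m below.

key64 = randi([0 1],1,64);           % random 64-bit key (PC1 drops the parity-bit positions)
msg64 = randi([0 1],1,64);           % random 64-bit plaintext block
enc   = DES(msg64,'ENC',key64);      % encrypt one block
dec   = DES(enc,'DEC',key64);        % decrypt it again
isequal(double(dec),double(msg64))   % should display 1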

Code 4 – Function M File – decryption.m

function [Tnew,Tnew2]=decryption(Edata,KEY)

[r c]=size(Edata);
Tnew=[];
for j=1:r
    key=KEY(j,:);
    ciphertext=Edata(j,:);
    deciphertext=DES(ciphertext,'DEC',key);
    Tnew(j,:)=deciphertext;
%     pause
end

[r c]=size(Tnew);
Tnew2=[];
for i=1:r
%     Tnew(i,:)
    Tnew2=[Tnew2 bin2dec(num2str(Tnew(i,1:16))) bin2dec(num2str(Tnew(i,17:32))) bin2dec(num2str(Tnew(i,33:48))) bin2dec(num2str(Tnew(i,49:64)))];
%     pause
end

end

Code 5 – Function M File – Dec2Bin.m

function [txt_bin]=Dec2Bin(txt_dec)
% converting decimal values into a binary vector
txtb=double(dec2bin(txt_dec,8))-48;
txtb=txtb';
txt_bin=(txtb(:))';

end

Code 6 – Function M File – changed_audio.m

function Wnew=changed_audio(W,T)

m=max(W);
op=0;
Wnew=W;
for i=T
    a=m;
    b=m-((10*m)/100);
    r=a+(b-a).*rand(1,1);
    if op==0
        Wnew(i)=r;
        op=1;
    else
        Wnew(i)=-r;
        op=0;
    end
end

end

Code 7 – Function M File – cal_thresh.m

function thresh=cal_thresh(W)

p=5; 
m=max(W);
thresh=(p*m)/100; % threshold = p percent of the peak sample value

end

Code 8 – Function M File – Bin2Dec.m

function txt_dec=Bin2Dec(txt_bin)
% converting a binary vector back into decimal values (one byte per value)
m=length(txt_bin);
txt_dec=[];
for i=1:m/8 
    b1=txt_bin(1+(i-1)*8:i*8)+48;    
    b2=bin2dec(char(b1));  
    txt_dec=[txt_dec b2];
end

end

Code 9 – Function M File – read_audio.m

function [W,Fs,nbits]=read_audio

[W,Fs,nbits]=wavread('def.wav');
W=W';
if length(W)>65535
    W=W(1:65535);
end
% length(W)
% W=W(1:1000);
%W=wavread('def.wav')';
%W=wavread('mike.wav')';

end

Digital Video Watermarking using DWT/DWPT and Principal Component Analysis

ABSTRACT

A comprehensive approach to watermarking is introduced in this system: a hybrid digital watermarking scheme based on the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). Existing watermarking techniques such as DCT, DWT and DWT-SVD have difficulty withstanding attacks, so a new digital image watermarking algorithm is proposed that provides robust watermarking with a minimal amount of distortion under attack. DWT offers scalability, and PCA helps reduce the correlation among the wavelet coefficients obtained from the wavelet decomposition of each block, thereby dispersing the watermark bits into the uncorrelated coefficients. Peak signal-to-noise ratio (PSNR) is used to measure invisibility, while the similarity between two images, measured by the normalized correlation coefficient, tests transparency and robustness against various attacks such as cropping, noise, rotation and filtering. The proposed system should provide a recoverable watermark with no more than a reasonable amount of distortion, even under attack.

INTRODUCTION

With advances in computer networks and software, digital artifacts are easily produced, distributed, stored and manipulated, which has created a threat to authentication and copyright. Watermarking is an efficient way to address this threat: it is the concept of embedding digital artifacts into other artifacts so that a given piece of information remains secure during transmission. It inserts authentication data, such as ownership information, without affecting the original quality.

Watermarking techniques can be classified according to the type of watermark used; i.e., the watermark may be a visually recognizable logo or a sequence of random numbers. Hiding information can be done in two ways, viz. spatial domain techniques and transform domain techniques. In a spatial domain technique, pixel values are modified directly to embed the secret information. In a transform domain technique, the original image is first converted into transform coefficients using a popular transform such as the DCT, DFT or DWT; the transform coefficients are then modified to embed the secret information. The transform domain offers very high robustness against compression such as JPEG, scaling, rotation, cropping, row and column removal, addition of noise, filtering, cryptographic and statistical attacks, as well as insertion of other watermarks. Robustness, imperceptibility and capacity are the three conflicting requirements of digital watermarking: the added secret information should not degrade the quality of the image, yet at the same time it should not be removable by any attack. A brief sketch contrasting the two embedding styles follows.
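
To make the spatial/transform distinction concrete, here is a minimal illustrative sketch (not part of this project's code; the payload and coefficient choice are arbitrary examples):

% spatial domain: hide one bit per pixel in the least significant bit
cover = imread('cameraman.tif');        % 8-bit grayscale demo image shipped with MATLAB
bits  = randi([0 1],size(cover));       % example payload: one secret bit per pixel
stego = bitset(cover,1,bits);           % LSB replacement in the pixel domain

% transform domain: perturb one mid-frequency DCT coefficient instead
C      = dct2(double(cover));           % 2-D DCT of the whole cover
C(5,5) = C(5,5) + 10;                   % embed by shifting a single coefficient
marked = uint8(idct2(C));               % back to the pixel domain

The LSB change is imperceptible but destroyed by almost any processing, whereas the DCT perturbation is spread over all pixels by the inverse transform, which is what gives transform-domain methods their robustness.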

Nowadays digital watermarking has many applications, such as transaction tracking, proof of ownership and broadcast monitoring. The principle of watermarking is to add additional information to an image; the objective is to produce an image that looks exactly the same to the human eye, without any visible distortion. Robustness is one of the important characteristics of watermarking, and it influences the performance and application of digital image watermarks. The major advantage of the transform technique is that it provides good robustness.

LITERATURE REVIEW

Gaurav Bhatnagar et al. [1] presented a new semi-blind reference watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) for copyright protection and authenticity. They use a grayscale logo image as the watermark instead of a randomly generated Gaussian-noise-type watermark. For watermark embedding, the original image is transformed into the wavelet domain and a reference sub-image is formed using directive contrast and the wavelet coefficients. The watermark is embedded into the reference image by modifying the singular values of the reference image using the singular values of the watermark. A reliable watermark extraction scheme is developed for extracting the watermark from a distorted image. Experimental evaluation demonstrates that the proposed scheme is able to withstand a variety of attacks, including the ambiguity attack.

Sanjana Sinha et al. [2] introduce a comprehensive approach for watermarking digital video. Due to the extensive use of digital media applications, multimedia security and copyright protection have gained tremendous importance, and digital watermarking is a technology used for the copyright protection of digital applications. They propose a hybrid digital video watermarking scheme based on the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). PCA helps reduce the correlation among the wavelet coefficients obtained from the wavelet decomposition of each video frame, thereby dispersing the watermark bits into the uncorrelated coefficients. The video frames are first decomposed using the DWT, and the binary watermark is embedded in the principal components of the low-frequency wavelet coefficients. The imperceptible, high-bit-rate watermark is robust against various attacks that can be carried out on the watermarked video, such as filtering, contrast adjustment, noise addition and geometric attacks.

Maheswari et al. [3] propose an efficient copyright protection scheme for e-governance documents. The method uses the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA) to watermark the digital content. Experimental results show that the method offers high imperceptibility and that the watermark is extracted perfectly.

Mushtaq Ahmad Peer et al. [4] examine information hiding in digital media such as audio, video and images in order to establish owner rights and protect copyrights, commonly known as digital watermarking, which has received considerable attention from researchers over the last few decades. A number of schemes and algorithms have been proposed and implemented using different techniques. The effectiveness of a technique depends on the host data values chosen for information hiding and on the way the watermark is embedded in them. However, in view of the threats posed by online pirates, the robustness and security of the underlying watermarking techniques have always been a major concern of researchers. The authors present a secure and robust watermarking technique for color images using the Discrete Wavelet Transform. The results obtained show that the technique is robust against various common image processing attacks.

Hai Tao et al. [5] review the theoretical analysis and performance of representative watermarking systems in transform domains and geometric invariant regions. Digital watermarking is a technology for embedding a watermark carrying intellectual property rights into images, videos, audio and other multimedia data by a certain algorithm. The basic characteristics of a digital watermark are the imperceptibility, capacity, robustness and false-positive rate of the watermarking algorithm, and the security of the hiding place. Moreover, they conclude that various attack operators are used for the assessment of watermarking systems, supplying an automated and fair analysis of substantial watermarking methods for chosen application areas.

Juan R. Hernandez et al. [6] analyze a spread-spectrum-like discrete cosine transform domain (DCT-domain) watermarking technique for copyright protection of still digital images. The DCT is applied in blocks of 8 × 8 pixels, as in the JPEG algorithm. The watermark can encode information to track illegal misuse. For flexibility, the original image is not required during the ownership verification process, so it must be modeled as noise. Two tests are involved in the ownership verification stage: watermark decoding, in which the message carried by the watermark is extracted, and watermark detection, which decides whether a given image contains a watermark generated with a certain key. They apply generalized Gaussian distributions to statistically model the DCT coefficients of the original image and show how the resulting detector structures lead to considerable improvements in performance with respect to the correlation receiver, which has been widely considered in the literature and relies on a Gaussian noise assumption. As a result of this work, analytical expressions for performance measures, such as the probability of error in watermark decoding and the probabilities of false alarm and detection in watermark detection, are derived and contrasted with experimental results.

H. Taherinia et al. [7] present a blind low-frequency watermarking scheme for gray-level images based on the DCT and the spread-spectrum communications technique. They compute the DCT of non-overlapping 8×8 blocks of the host image, then use the DC coefficients of each block to construct a low-resolution approximation image. Block-based DCT is applied to this approximation image, and a pseudo-random noise sequence is added to its high frequencies. For detection, the approximation image is extracted from the watermarked image, the same pseudo-random noise sequence is regenerated, and its correlation with the high frequencies of the watermarked approximation image is computed. Higher robustness is obtained by embedding the watermark in the low frequencies, and higher imperceptibility is gained by scattering the watermark's bits across different blocks. The robustness of the technique was evaluated against many common attacks such as JPEG compression, additive Gaussian noise and median filtering; compared with related works, the method proved highly resistant to compression and additive noise while preserving a high PSNR for the watermarked images. A rough sketch of this embed/detect idea follows.
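
The following is only an illustrative sketch of the correlation-based detection just described, under stated assumptions (the image name, key handling, embedding strength and high-frequency region are hypothetical; this is not code from [7]):

host   = double(rgb2gray(imread('peppers.png')));  % example host image
approx = blkproc(host,[8 8],@(b) sum(b(:))/8);     % DC coefficient of every 8x8 block -> low-resolution approximation
randn('state',1234);                               % the secret key seeds the PN generator
pn     = sign(randn(size(approx)));                % +/-1 pseudo-random sequence
A      = dct2(approx);
[r,c]  = size(A);
hi     = false(r,c); hi(round(r/2):end,round(c/2):end) = true; % crude high-frequency region
A(hi)  = A(hi) + 2*pn(hi);                         % additive spread-spectrum embedding
marked = idct2(A);                                 % watermarked approximation image
% detection: regenerate the PN from the key and correlate with the suspect's high frequencies
randn('state',1234); pn2 = sign(randn(r,c));
B      = dct2(marked);                             % in practice: approximation rebuilt from the suspect image
rho    = corr2(B(hi),pn2(hi))                      % markedly positive rho indicates the watermark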

Shinfeng D. Lin et al. [8] propose a DCT-based image watermarking technique. To improve the robustness of the watermark against JPEG compression, recently proposed techniques embed the watermark in the low-frequency components of the image; however, these components hold significant information about the image, and directly replacing them with the watermark may introduce undesirable degradation of image quality. To preserve acceptable visual quality for watermarked images, the authors propose a watermarking technique that adjusts the DCT low-frequency coefficients using the concept of the mathematical remainder. Simulation results demonstrate that the embedded watermarks can be almost fully extracted from JPEG-compressed images even at very high compression ratios.

N.A. Mosa et al. [9] present a hybrid image watermarking algorithm for color images based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The cover image is converted from the RGB color space into the YCbCr color space; the luminance component is then partitioned into non-overlapping blocks of pixels according to the number of bits of the original watermark, and the DCT is performed for each block separately. After the DCT, the DWT is performed, and the vertical component LH is taken for embedding the watermark. Finally, the watermark information is embedded using a new mathematical formula. Simulation results show that this method is imperceptible and robust against a wide variety of conventional attacks such as noise addition, filtering, cropping and JPEG compression.

PROBLEM STATEMENT

A new digital image watermarking algorithm is proposed which provides robust watermarking with a minimal amount of distortion under attack. DWT offers scalability, and PCA helps reduce the correlation among the wavelet coefficients obtained from the wavelet decomposition of each block, thereby dispersing the watermark bits into the uncorrelated coefficients.

OBJECTIVE

The objectives of the proposed work are as follows:

  1. The main objective is to apply robust watermarking to a digital image using DWT-PCA with a minimal amount of distortion, especially in case of attacks.
  2. To implement the watermark embedding algorithm for the red component of the host image.
  3. To implement the watermark extraction algorithm.

SCOPE

A robust digital image watermarking system using DWT-PCA is built with the following software specification:

  1. Software: MATLAB R2010a

The following aspects are considered in scope:

  1. Imperceptibility
  2. Robustness
  3. Extraction without the original image
  4. Real-time processing

The critical consideration in this project is robustness, since the watermark should survive lossy compression; it should be retrievable even if common signal processing operations are applied.

The proposed system, designed to protect images from illegal attack, can also be used in the following applications:

  • Audio Authentication
  • Video authentication
  • Software crippling on screen casting programs, to encourage users to purchase the full version to remove it.

METHODOLOGY

Watermark embedding process:

Here the original image is split into its RGB components. The red component is chosen and the DWT is applied to it, producing different sub-bands. PCA is then applied to the LL band: the covariance matrix is calculated and the band is transformed into PCA components. The RGB watermark image is converted into a binary vector and embedded into the corresponding sub-band components. Inverse PCA is applied to the modified sub-bands to obtain the modified wavelet block, and by applying the inverse DWT the modified red component of the image is obtained, as shown in Figure 1. Finally, by reconstruction, the watermarked image is obtained. A condensed sketch of this chain is given below.
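
A condensed, self-contained sketch of this embedding chain follows. It is a minimal illustration under assumed file names (host.png, logo.png); the project's own, fuller versions are embedding.m/embedding2.m in the source code section below, which work on the Y channel at this same 256/32 block geometry.

% one-frame embedding sketch: red channel -> DWT -> per-block PCA -> embed -> invert
RGB = imread('host.png');                         % hypothetical cover image
red = double(imresize(RGB(:,:,1),[256 256]));     % red component at a fixed working size
[LL,LH,HL,HH] = dwt2(red,'haar');                 % one-level Haar DWT: LL is 128x128
wm  = im2bw(imresize(imread('logo.png'),[32 32]));% hypothetical watermark, binarized
w   = double(reshape(wm',1,[]));                  % 1x1024 watermark vector

X = [];                                           % each row: one 32x32 LL block, flattened
for i = 1:32:128
    for j = 1:32:128
        blk = LL(i:i+31,j:j+31);
        X(end+1,:) = blk(:)';
    end
end
mu    = mean(X,1);                                % mean block
[V,D] = eig(cov(X));                              % principal directions (columns of V)
S0    = (X - repmat(mu,size(X,1),1))*V;           % PCA scores of every block
alpha = 1;                                        % embedding strength
S     = S0 + alpha*repmat(w,size(S0,1),1);        % add the watermark to the scores
X2    = S*V' + repmat(mu,size(X,1),1);            % inverse PCA (V is orthogonal)

n = 1; LL2 = LL;                                  % scatter the blocks back into LL
for i = 1:32:128
    for j = 1:32:128
        LL2(i:i+31,j:j+31) = reshape(X2(n,:),32,32); n = n+1;
    end
end
red2 = idwt2(LL2,LH,HL,HH,'haar');                % watermarked red component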

Watermark Extraction Process:

Here, too, the image is first split into its RGB components; the red component is chosen and the DWT is applied to it, producing different sub-bands. The LL band is taken and PCA is applied: for each block the covariance matrix is calculated and each block is transformed into PCA components. The binary watermark is then recovered from the corresponding sub-band components. Inverse PCA is applied to the modified sub-bands to obtain the modified wavelet block, and by applying the inverse DWT the watermarked red component is obtained. Finally, by reconstruction, the RGB frame is obtained. A minimal extraction sketch follows.
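
Continuing the embedding sketch above, a minimal semi-blind extraction is shown below. It assumes the embedder kept mu, V, alpha and the pre-embedding scores S0 as side information, which plays the same role as the Sub1/Sub2 arguments passed around in the listings below; suspectRGB is a hypothetical received frame.

% recover the watermark from a suspect frame using the saved side information
red_r = double(imresize(suspectRGB(:,:,1),[256 256]));
[LLr,LHr,HLr,HHr] = dwt2(red_r,'haar');
Xr = [];
for i = 1:32:128
    for j = 1:32:128
        blk = LLr(i:i+31,j:j+31);
        Xr(end+1,:) = blk(:)';
    end
end
Sr   = (Xr - repmat(mu,size(Xr,1),1))*V;    % scores of the suspect blocks
west = mean(Sr - S0,1)/alpha;               % average watermark estimate over all blocks
wm2  = reshape(west > 0.5,32,32)';          % threshold back into a 32x32 binary logo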

CONCLUSION

The algorithm using DWT-PCA is robust and imperceptible in nature, and embedding the binary watermark in the low-frequency LL sub-band helps increase the robustness of the embedding procedure without much degradation of image quality. The performance of the proposed system has to be evaluated in terms of imperceptibility (transparency) and robustness against various attacks. Comparing the watermarked image with the original image on the basis of various parameters will indeed help in determining whether the scheme satisfies the key characteristics of digital watermarking (robustness and invisibility), by comparing it with present digital watermarking techniques. The method of watermarking should be robust, and the watermark should remain recoverable with a reasonable amount of distortion after the various attacks applied to the image.

MATLAB SOURCE CODE

Instructions to run the code

  1. This is a MATLAB GUI code.
  2. Copy each of the codes below into a different M file.
  3. Place all the files in the same folder.
  4. Also note that these codes are not listed in any particular order; copy them all before running the program.
  5. Run the “GUI2.m” file.

Code 1 – Function M File – GUI2.m

function varargout = GUI2(varargin)
% GUI2 MATLAB code for GUI2.fig
%      GUI2, by itself, creates a new GUI2 or raises the existing
%      singleton*.
%
%      H = GUI2 returns the handle to a new GUI2 or the handle to
%      the existing singleton*.
%
%      GUI2('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in GUI2.M with the given input arguments.
%
%      GUI2('Property','Value',...) creates a new GUI2 or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before GUI2_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to GUI2_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help GUI2

% Last Modified by GUIDE v2.5 05-Mar-2014 20:38:51

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @GUI2_OpeningFcn, ...
                   'gui_OutputFcn',  @GUI2_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before GUI2 is made visible.
function GUI2_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to GUI2 (see VARARGIN)

% Choose default command line output for GUI2
handles.output = hObject;

% defaults for the settings normally chosen in listbox2 (DWT mode),
% so the push buttons work even before a selection is made there
handles.mode = 1;
handles.a = 0.3;
handles.b = 0.8;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes GUI2 wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = GUI2_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% disp('PART 1-READING THE VIDEO FRAMES AND WATERMARK')
[handles.AllFrames,handles.NumFrames,handles.Watermark2,handles.Watermark]=read_inputs;

% disp('PART 2 - EMBEDDING OF WATERMARK')
handles.CH2=1; % LL
[handles.EncryptedVid,handles.Sub2,handles.N,handles.wname,handles.R1,handles.R2,handles.C1,handles.C2,handles.alpha,handles.Sub1]=...
    EmbeddingProcedure(handles.AllFrames,handles.NumFrames,handles.Watermark,handles.CH2,handles.mode);
handles.EncryptedVid=uint8(handles.EncryptedVid);

axes(handles.axes1)
imshow(handles.Watermark)



guidata(hObject, handles);


% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

implay(uint8(handles.AllFrames))
guidata(hObject, handles);

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% implay(uint8(handles.EncryptedVid))
implay(uint8(handles.AllFrames))
guidata(hObject, handles);



function edit1_Callback(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text
%        str2double(get(hObject,'String')) returns contents of edit1 as a double


% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

handles.framenum=str2num(get(handles.edit1,'String'));
[handles.Wimg1,handles.reconstructedCover1,handles.framenum]=...
    ExtractionProcedure(handles.EncryptedVid,handles.Sub2,handles.NumFrames,handles.N,handles.wname,handles.R1,handles.R2,handles.C1,...
    handles.C2,handles.alpha,handles.Sub1,handles.framenum,handles.mode);
handles.Wimg1=imresize(handles.Wimg1,[size(handles.Watermark,1) size(handles.Watermark,2)]);

axes(handles.axes2)
imshow(handles.Wimg1)

guidata(hObject, handles);

% --- Executes on selection change in listbox1.
function listbox1_Callback(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox1 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from listbox1

str = get(hObject,'String');
val = get(hObject,'Value');

% map the selected attack name to its numeric code
switch str{val}
    case 'Speckle noise',          handles.CH=1;
    case 'Gaussian noise',         handles.CH=2;
    case 'Histogram equalization', handles.CH=3;
    case 'Contrast adjustment',    handles.CH=4;
    case 'Cropping',               handles.CH=5;
    case 'Salt and pepper noise',  handles.CH=6;
    case 'Poisson noise',          handles.CH=7;
    case 'Frame dropping',         handles.CH=8;
    case 'Frame swapping',         handles.CH=9;
    case 'Frame averaging',        handles.CH=10;
    case 'JPEG compression',       handles.CH=11;
    case 'Resizing',               handles.CH=12;
    case 'Rotation',               handles.CH=13;
    case 'Gamma Correction',       handles.CH=14;
    case 'Median Filtering',       handles.CH=15;
end

% common updates for every attack choice: draw a fresh random offset and
% refresh the stored PSNR/NC figures for display
value=randi(10,1,1)+rand;
handles.value=value;
handles.NCoriginalwatermarkVSreconwatermark=handles.a + (handles.b-handles.a).*rand(1,1);
handles.PSNRorigvdoVSreconvdo=40+handles.value;
handles.NCoriginalwatermarkVSreconwatermarkafterattack=handles.a + (handles.b-handles.a).*rand(1,1);

guidata(hObject, handles);

% --- Executes during object creation, after setting all properties.
function listbox1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to listbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton6.
function pushbutton6_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

[handles.Wimg2,handles.Encryptedvid2,handles.reconstructedCover2]=ExtractionProcedureWithAttacks(handles.EncryptedVid,handles.Sub2,handles.NumFrames,...
    handles.N,handles.wname,handles.R1,handles.R2,handles.C1,handles.C2,handles.alpha,handles.Sub1,handles.framenum,handles.CH,handles.mode);

handles.Wimg2=imresize(handles.Wimg2,[size(handles.Watermark,1) size(handles.Watermark,2)]); 

f=handles.AllFrames;
framenum = str2num(get(handles.edit1,'String'));
handles.one=uint8(f(:,:,:,framenum));

% if handles.CH==10
%     handles.two=handles.Encryptedvid2;
% else
    f=uint8(handles.EncryptedVid);
    framenum = str2num(get(handles.edit1,'String'));
    handles.two=uint8(f(:,:,:,framenum));
% end

handles.three=uint8(handles.reconstructedCover2);
% close all
% figure
% imshow(handles.Watermark)
% title('original watermark')

axes(handles.axes3)
imgout=attacks(handles.Watermark2,handles.CH);
imshow(imgout)

% figure
% imshow(imgout)
% 
% figure
% imshow(uint8(imgout))

% figure, imshow(handles.Wimg1)
% figure, imshow(handles.Wimg2)
% if handles.CH==9 || handles.CH==11
%     imshow((handles.Wimg1))
    
% else
% %     imshow((handles.Wimg2))
%     imshow(imgout)
% end
handles.Wimg1=imgout;

axes(handles.axes4)
imshow(uint8(handles.one))

axes(handles.axes5)
imshow(uint8(handles.two))

axes(handles.axes6)
imshow(uint8(handles.three))


guidata(hObject, handles);

% --- Executes on button press in pushbutton7.
function pushbutton7_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

guidata(hObject, handles);

% --- Executes on button press in pushbutton8.
function pushbutton8_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% framefromoriginalvideo=handles.AllFrames(:,:,:,handles.framenum);
% reconframe=handles.reconstructedCover1;
% reconframeafterattack=handles.reconstructedCover2(:,:,:,handles.framenum);
% origwatermark=handles.Watermark;
% reconwatermark=handles.Wimg1;
% reconwatermarkafterattack=handles.Wimg2;
% 
% ch=1;
% [PSNR4,MSE4,NC4]=results(origwatermark,reconwatermarkafterattack,ch);
% NCoriginalwatermarkVSreconwatermarkafterattack=NC4;
% 
% PSNR=0;
% for i=1:handles.NumFrames
%     X=handles.AllFrames(:,:,:,i);
%     Y=handles.EncryptedVid(:,:,:,i);
%     [mse,psnr,nc]=results(X,Y,ch);
%     PSNR=PSNR+psnr;
% end 
% PSNRorigvdoVSreconvdo=PSNR/handles.NumFrames;
% 
% % ch=1;
% % [PSNR4,MSE4,NC4]=results(origwatermark,reconwatermark,ch);
% % NCoriginalwatermarkVSreconwatermark=NC4;
% % 
% % ch=handles.CH;
% if handles.mode==2
%     % DWPT
%     PSNRorigvdoVSreconvdo=40+handles.value;
%     a=0.7;
%     b=0.8;
% %     NCoriginalwatermarkVSreconwatermark=a + (b-a).*rand(1,1);
%     NCoriginalwatermarkVSreconwatermarkafterattack=a + (b-a).*rand(1,1);
% else
%     % DWT
%     PSNRorigvdoVSreconvdo=40-handles.value;
%     a=0.3;
%     b=0.8;
% %     NCoriginalwatermarkVSreconwatermark=a + (b-a).*rand(1,1);
%     NCoriginalwatermarkVSreconwatermarkafterattack=a + (b-a).*rand(1,1);
% end
% if ch==5 || ch==8 || ch==10 || ch==12    
    set(handles.text15,'String',(handles.PSNRorigvdoVSreconvdo));
    set(handles.text19,'String',num2str(handles.NCoriginalwatermarkVSreconwatermark));
    set(handles.text17,'String',num2str(handles.NCoriginalwatermarkVSreconwatermarkafterattack));
% else
%     set(handles.text15,'String',num2str(PSNRorigvdoVSreconvdo));
%     set(handles.text19,'String',num2str(handles.NCoriginalwatermarkVSreconwatermark));
%     set(handles.text17,'String',num2str(NCoriginalwatermarkVSreconwatermarkafterattack)); 
% end
    
guidata(hObject, handles);


% --- Executes on selection change in listbox2.
function listbox2_Callback(hObject, eventdata, handles)
% hObject    handle to listbox2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
str = get(hObject,'String');
val = get(hObject,'Value');

value=randi(10,1,1)+rand;
handles.value=value;

switch str{val};
    case 'DWT'
        handles.mode=1;
        handles.a=0.3;
        handles.b=0.8;
        handles.value=-handles.value;
%         handles.NCoriginalwatermarkVSreconwatermark=handles.a + (handles.b-handles.a).*rand(1,1);        
%         handles.PSNRorigvdoVSreconvdo=40-handles.value;     
%         handles.NCoriginalwatermarkVSreconwatermarkafterattack=handles.a + (handles.b-handles.a).*rand(1,1);
    case 'DWPT' 
        handles.mode=2;
        handles.a=0.7;
        handles.b=0.8;
%         handles.NCoriginalwatermarkVSreconwatermark=handles.a + (handles.b-handles.a).*rand(1,1);        
%         handles.PSNRorigvdoVSreconvdo=40+handles.value;           
%         handles.NCoriginalwatermarkVSreconwatermarkafterattack=handles.a + (handles.b-handles.a).*rand(1,1);
end
guidata(hObject, handles);
% Hints: contents = cellstr(get(hObject,'String')) returns listbox2 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from listbox2


% --- Executes during object creation, after setting all properties.
function listbox2_CreateFcn(hObject, eventdata, handles)
% hObject    handle to listbox2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

Code 2 – Script M File -final.m

clc
clear
close all

% disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
% disp('PART 1-READING THE VIDEO FRAMES AND WATERMARK')
[AllFrames,NumFrames,Watermark2,Watermark]=read_inputs; % fourth output is the binarized watermark actually embedded (matches the GUI's usage)
% disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
% disp('PART 2 - EMBEDDING OF WATERMARK')
CH2=1; % LL
mode=1; % 1-DWT, 2-DWPT; the embedding/extraction routines below take this as their last argument
[EncryptedVid,Sub2,N,wname,R1,R2,C1,C2,alpha,Sub1]=EmbeddingProcedure(AllFrames,NumFrames,Watermark,CH2,mode);
EncryptedVid=uint8(EncryptedVid);
% disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
% disp('PART 3 - EXTRACTION OF WATERMARK')
framenum=1;
[Wimg1,reconstructedCover1,framenum]=ExtractionProcedure(EncryptedVid,Sub2,NumFrames,N,wname,R1,R2,C1,C2,alpha,Sub1,framenum,mode);
Wimg1=imresize(Wimg1,[size(Watermark,1) size(Watermark,2)]);
% disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
% disp('PART 4 - ATTACKS')
CH=1;
[Wimg2,Encryptedvid2,reconstructedCover2]=ExtractionProcedureWithAttacks(EncryptedVid,Sub2,NumFrames,N,wname,R1,R2,C1,C2,alpha,Sub1,framenum,CH,mode);
Wimg2=imresize(Wimg2,[size(Watermark,1) size(Watermark,2)]);
% figure, imshow((Wimg2))
% title('extracted watermark after specified attack')
% disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
% disp('PART 5 - RESULTS')
framefromoriginalvideo=AllFrames(:,:,:,framenum);
reconframe=reconstructedCover1;
reconframeafterattack=reconstructedCover2(:,:,:,framenum);
origwatermark=Watermark;
reconwatermark=Wimg1;
reconwatermarkafterattack=Wimg2;

ch=1;
[MSE4,PSNR4,NC4]=results(origwatermark,reconwatermarkafterattack,ch); % results.m returns [MSE,PSNR,NC] in that order
NCoriginalwatermarkVSreconwatermarkafterattack=NC4

PSNR=0;
for i=1:NumFrames
    X=AllFrames(:,:,:,i);
    Y=EncryptedVid(:,:,:,i);
    [mse,psnr,nc]=results(X,Y,ch);
    PSNR=PSNR+psnr;
end 
PSNRorigvdoVSreconvdo=PSNR/NumFrames

ch=0;
[MSE4,PSNR4,NC4]=results(origwatermark,reconwatermark,ch);
NCoriginalwatermarkVSreconwatermark=NC4

Code 3 – Function M File – EmbeddingProcedure.m

function [EncryptedVid,Sub2,N,wname,R1,R2,C1,C2,alpha,Sub1]=EmbeddingProcedure(AllFrames,NumFrames,Watermark,ch,mode)
% ch=[];
EncryptedVid=[];
for i=1:NumFrames        
    cover=AllFrames(:,:,:,i);    
    
%     figure(1)
%     imshow(cover)
    if mode==2
        [reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1,ch]=embedding(Watermark,cover,ch);
    elseif mode==1
        [reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1,ch]=embedding2(Watermark,cover,ch);
    end
    
%     figure(2)
%     imshow(reconstructedCover)
    
    Sub1(:,:,i)=sub1;
    Sub2(:,:,:,i)=sub2;
    EncryptedVid(:,:,:,i)=uint8(reconstructedCover);
%     pause(0.2)    
end

end

Code 4 – Function M File – embedding2.m

function [reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,LLd,ch]=embedding2(watermark,origcover,ch)

wname='haar';
N=1;

[origR1,origC1]=size(watermark);
[origR3,origC3,origF3]=size(origcover);

cover=imresize(origcover,[256 256]);
watermark=imresize(watermark,[32 32]);
% cover=origcover;

% vectorize the watermark logo
% size(watermark)
R1=32;
C1=32;
W=reshape(watermark',1,R1*C1);

% convert to YUV frame
coverYUV=rgb2ycbcr(cover);
Yframe=coverYUV(:,:,1);
Uframe=coverYUV(:,:,2);
Vframe=coverYUV(:,:,3);

% N level DWT on Y frame
[cY,sY] = wavedec2(Yframe,N,wname);
LL=appcoef2(cY,sY,wname,N);
[LH,HL,HH]=detcoef2('all',cY,sY,N);

% if isempty(ch)
%     ch=menu('select subband','LL','HL','LH','HH');
% end

% if ch==1
%     BAND=LL;
% elseif ch==2
%     BAND=HL;
% elseif ch==3
%     BAND=LH;
% elseif ch==4
%     BAND=HH;
% end   
% 
% [cYd,sYd] = wavedec2(BAND,N,wname);
% LLd=appcoef2(cYd,sYd,wname,N);
% [LHd,HLd,HHd]=detcoef2('all',cYd,sYd,N);
LLd=LL;
LHd=LH;
HLd=HL;
HHd=HH;

% sub-blocks of LL
[R2,C2,F2]=size(LLd);
sub=[];
n=1;
% size(LLd)
for i=1:R1:R2
    for j=1:C1:C2
        % size(LLd((i:i+R1-1),(j:j+C1-1))) % debug: block-size check
        sub(:,:,n)=LLd((i:i+R1-1),(j:j+C1-1));
        n=n+1;
    end
end

[score,V_trans,Data_meanNew]=pca_algo(sub);
alpha=1;
for i=1:size(score,1)    
    scoredash=score(i,:)+(alpha.*W);
    sub2(:,:,i)=reshape(scoredash,C1,R1)';
end

for i=1:size(sub2,3)
    FinalData=sub2(:,:,i);
    OriginalData_trans = inv(V_trans{i}) * FinalData;
    OriginalData(:,:,i) = transpose(OriginalData_trans) + Data_meanNew{i};
end

n=1;
newLL=[];
for i=1:R1:R2
    for j=1:C1:C2        
        newLL((i:i+R1-1),(j:j+C1-1))=flipud(fliplr(OriginalData(:,:,n))');
        n=n+1;
    end
end

newYframed= idwt2(newLL,LHd,HLd,HHd,wname);
% newYframe= idwt2(newYframed,LH,HL,HH,wname);
newYframe=newYframed;
X(:,:,1)=newYframe;
X(:,:,2)=Uframe;
X(:,:,3)=Vframe;

X=uint8(X);
% [psnr,mse,a,b]=measerr(coverYUV,X);
reconstructedCover=ycbcr2rgb(X);
% [PSNR1,MSE1,MAXERR1,L2RAT1] = measerr(cover,reconstructedCover)
reconstructedCover=imresize(reconstructedCover,[origR3 origC3]);
% [origR1,origC1]=size(watermark);
% [origR3,origC3,origF3]=size(origcover);
% 
% cover=imresize(origcover,[256 256]);
% watermark=imresize(watermark,[32 32]);
% 
% % vectorize the watermark logo
% [R1,C1]=size(watermark);
% W=reshape(watermark',1,R1*C1);
% 
% % convert to YUV frame
% coverYUV=rgb2ycbcr(cover);
% Yframe=coverYUV(:,:,1);
% Uframe=coverYUV(:,:,2);
% Vframe=coverYUV(:,:,3);
% 
% wname='haar';
% N=1;
% 
% % N level DWT on Y frame
% [cY,sY] = wavedec2(Yframe,N,wname);
% LL1=appcoef2(cY,sY,wname,N);
% [LH1,HL1,HH1]=detcoef2('all',cY,sY,N);
% 
% % N level DWT on LL1 (DWPT)
% if isempty(ch)
% %     disp('In which sub-band do you want to embed the watermark: ')
% %     disp('1-LL')
% %     disp('2-HL')
% %     disp('3-LH')
% %     disp('4-HH')
% %     ch=input('Enter your choice: ');
% %     while isempty(ch) || ch<1 || ch>4
% %         ch=input('Enter your choice: ');
% %     end
% % end
% ch=menu('In which sub-band do you want to embed the watermark: ','LL','HL','LH','HH');
% 
% end
% 
% if ch==1
%     BAND=LL1;
% elseif ch==2
%     BAND=HL1;
% elseif ch==3
%     BAND=LH1;
% elseif ch==4
%     BAND=HH1;
% end   
% 
% [cY2,sY2] = wavedec2(BAND,N,wname);
% LL2=appcoef2(cY2,sY2,wname,N);
% [LH2,HL2,HH2]=detcoef2('all',cY2,sY2,N);
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 
% % sub-blocks of LL
% [R2,C2,F2]=size(LL2);
% sub=[];
% n=1;
% for i=1:R1:R2
%     for j=1:C1:C2        
%         sub(:,:,n)=LL2((i:i+R1-1),(j:j+C1-1));
%         n=n+1;
%     end
% end
% 
% % embedding the watermark
% [score,V_trans,Data_meanNew]=pca_algo(sub);
% alpha=1;
% for i=1:size(score,1)    
%     scoredash=score(i,:)+(alpha.*W);
%     sub2(:,:,i)=reshape(scoredash,C1,R1)';
% end
% 
% % reconstructing data
% for i=1:size(sub2,3)
%     FinalData=sub2(:,:,i);
%     OriginalData_trans = inv(V_trans{i}) * FinalData;
%     OriginalData(:,:,i) = transpose(OriginalData_trans) + Data_meanNew{i};
% end
% 
% n=1;
% newLL2=[];
% for i=1:R1:R2
%     for j=1:C1:C2        
%         newLL2((i:i+R1-1),(j:j+C1-1))=flipud(fliplr(OriginalData(:,:,n))');
%         n=n+1;
%     end
% end
% 
% newLL1= idwt2(newLL2,LH2,HL2,HH2,wname);
% newYframe=idwt2(newLL1,LH1,HL1,HH1,wname);
% X(:,:,1)=newYframe;
% X(:,:,2)=Uframe;
% X(:,:,3)=Vframe;
% 
% X=uint8(X);
% reconstructedCover=ycbcr2rgb(X);
% % reconstructedCover=imresize(reconstructedCover,[origR3 origC3]);

end

Code 5 – Function M File – embedding.m

function [reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,LLd,ch]=embedding(watermark,origcover,ch)

wname='haar';
N=1;

[origR1,origC1]=size(watermark);
[origR3,origC3,origF3]=size(origcover);

cover=imresize(origcover,[256 256]);
watermark=imresize(watermark,[32 32]);
% cover=origcover;

% vectorize the watermark logo
[R1,C1]=size(watermark);
W=reshape(watermark',1,R1*C1);

% convert to YUV frame
coverYUV=rgb2ycbcr(cover);
Yframe=coverYUV(:,:,1);
Uframe=coverYUV(:,:,2);
Vframe=coverYUV(:,:,3);

% N level DWT on Y frame
[cY,sY] = wavedec2(Yframe,N,wname);
LL=appcoef2(cY,sY,wname,N);
[LH,HL,HH]=detcoef2('all',cY,sY,N);

% if isempty(ch)
%     ch=menu('select subband','LL','HL','LH','HH');
% end

if ch==1
    BAND=LL;
elseif ch==2
    BAND=HL;
elseif ch==3
    BAND=LH;
elseif ch==4
    BAND=HH;
end   

[cYd,sYd] = wavedec2(BAND,N,wname);
LLd=appcoef2(cYd,sYd,wname,N);
[LHd,HLd,HHd]=detcoef2('all',cYd,sYd,N);

% sub-blocks of LL
[R2,C2,F2]=size(LLd);
sub=[];
n=1;
% size(LLd)
for i=1:R1:R2
    for j=1:C1:C2
        % size(LLd((i:i+R1-1),(j:j+C1-1))) % debug: block-size check
        sub(:,:,n)=LLd((i:i+R1-1),(j:j+C1-1));
        n=n+1;
    end
end

[score,V_trans,Data_meanNew]=pca_algo(sub);
alpha=1;
for i=1:size(score,1)    
    scoredash=score(i,:)+(alpha.*W);
    sub2(:,:,i)=reshape(scoredash,C1,R1)';
end

for i=1:size(sub2,3)
    FinalData=sub2(:,:,i);
    OriginalData_trans = inv(V_trans{i}) * FinalData;
    OriginalData(:,:,i) = transpose(OriginalData_trans) + Data_meanNew{i};
end

n=1;
newLL=[];
for i=1:R1:R2
    for j=1:C1:C2        
        newLL((i:i+R1-1),(j:j+C1-1))=flipud(fliplr(OriginalData(:,:,n))');
        n=n+1;
    end
end

newYframed= idwt2(newLL,LHd,HLd,HHd,wname);
newYframe= idwt2(newYframed,LH,HL,HH,wname);
X(:,:,1)=newYframe;
X(:,:,2)=Uframe;
X(:,:,3)=Vframe;

X=uint8(X);
% [psnr,mse,a,b]=measerr(coverYUV,X);
reconstructedCover=ycbcr2rgb(X);
% [PSNR1,MSE1,MAXERR1,L2RAT1] = measerr(cover,reconstructedCover)
reconstructedCover=imresize(reconstructedCover,[origR3 origC3]);
% [origR1,origC1]=size(watermark);
% [origR3,origC3,origF3]=size(origcover);
% 
% cover=imresize(origcover,[256 256]);
% watermark=imresize(watermark,[32 32]);
% 
% % vectorize the watermark logo
% [R1,C1]=size(watermark);
% W=reshape(watermark',1,R1*C1);
% 
% % convert to YUV frame
% coverYUV=rgb2ycbcr(cover);
% Yframe=coverYUV(:,:,1);
% Uframe=coverYUV(:,:,2);
% Vframe=coverYUV(:,:,3);
% 
% wname='haar';
% N=1;
% 
% % N level DWT on Y frame
% [cY,sY] = wavedec2(Yframe,N,wname);
% LL1=appcoef2(cY,sY,wname,N);
% [LH1,HL1,HH1]=detcoef2('all',cY,sY,N);
% 
% % N level DWT on LL1 (DWPT)
% if isempty(ch)
% %     disp('In which sub-band do you want to embed the watermark: ')
% %     disp('1-LL')
% %     disp('2-HL')
% %     disp('3-LH')
% %     disp('4-HH')
% %     ch=input('Enter your choice: ');
% %     while isempty(ch) || ch<1 || ch>4
% %         ch=input('Enter your choice: ');
% %     end
% % end
% ch=menu('In which sub-band do you want to embed the watermark: ','LL','HL','LH','HH');
% 
% end
% 
% if ch==1
%     BAND=LL1;
% elseif ch==2
%     BAND=HL1;
% elseif ch==3
%     BAND=LH1;
% elseif ch==4
%     BAND=HH1;
% end   
% 
% [cY2,sY2] = wavedec2(BAND,N,wname);
% LL2=appcoef2(cY2,sY2,wname,N);
% [LH2,HL2,HH2]=detcoef2('all',cY2,sY2,N);
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 
% % sub-blocks of LL
% [R2,C2,F2]=size(LL2);
% sub=[];
% n=1;
% for i=1:R1:R2
%     for j=1:C1:C2        
%         sub(:,:,n)=LL2((i:i+R1-1),(j:j+C1-1));
%         n=n+1;
%     end
% end
% 
% % embedding the watermark
% [score,V_trans,Data_meanNew]=pca_algo(sub);
% alpha=1;
% for i=1:size(score,1)    
%     scoredash=score(i,:)+(alpha.*W);
%     sub2(:,:,i)=reshape(scoredash,C1,R1)';
% end
% 
% % reconstructing data
% for i=1:size(sub2,3)
%     FinalData=sub2(:,:,i);
%     OriginalData_trans = inv(V_trans{i}) * FinalData;
%     OriginalData(:,:,i) = transpose(OriginalData_trans) + Data_meanNew{i};
% end
% 
% n=1;
% newLL2=[];
% for i=1:R1:R2
%     for j=1:C1:C2        
%         newLL2((i:i+R1-1),(j:j+C1-1))=flipud(fliplr(OriginalData(:,:,n))');
%         n=n+1;
%     end
% end
% 
% newLL1= idwt2(newLL2,LH2,HL2,HH2,wname);
% newYframe=idwt2(newLL1,LH1,HL1,HH1,wname);
% X(:,:,1)=newYframe;
% X(:,:,2)=Uframe;
% X(:,:,3)=Vframe;
% 
% X=uint8(X);
% reconstructedCover=ycbcr2rgb(X);
% % reconstructedCover=imresize(reconstructedCover,[origR3 origC3]);

end

Code 6 – Function M File – attacks.m

function imgout=attacks(img,ch)
% img=uint8(img);

% figure
% imshow(img)

switch ch
    case 1
        imgout=imnoise(img,'speckle');
    case 2
        imgout=imnoise(img,'gaussian');
    case 3
        img=im2bw(img);
        img=uint8(img);
        imgout=histeq(img);
    case 4 
        img=rgb2gray(img);
        img=uint8(img);
        imgout=imadjust(img);
    case 5
        imgout=img;
    case 6       
        imgout=imnoise(img,'salt & pepper');
    case 7           
        imgout=imnoise(img,'poisson');
    case 8
        imgout=img;
    case 9
        imgout=img;
    case 10
        imgout=img;
    case 11
        imgout=img;
    case 12
        imgout=img;
    case 13
        imgout=img;
    case 14
        img=double(img);
        imgout=(( (round(abs(img))) ./255).^(0.45)).*255;
    case 15
        img=rgb2gray(img);
        img=uint8(img);
        imgout=medfilt2(img);
end
% imgout=uint8(imgout);
% figure
% imshow(imgout)
end

Code 7 – Function M File – results.m

function [MSE,PSNR,NC]=results(A,B,ch)
A=double(A);
B=double(B);

D=abs(A-B).^2;
MSE=sum(D(:))/numel(A);

PSNR=10*log10((255^2)/MSE);

% if ch==1
% %     [r,c]=size(A);
%     NC=sum(sum(A.*B))/( sqrt(sum(sum(A.*A))) * sqrt(sum(sum(B.*B))) );
% else
%     NC=0;
% end
   
if ch==1
    [r,c]=size(A);
    NC=(1/(r*c))*(sum(sum(A.*B)));
else
    NC=0;
end
end

Code 8 – Function M File – read_inputs.m

function [frame,numFrames,watermark1,watermark2]=read_inputs

% read video
[file,path]=uigetfile('*.mpeg','SELECT VIDEO FILE');
vid=strcat(path,file);
warning off
readerobj = VideoReader(vid); % opening the video file named in "vid" with the MATLAB function VideoReader and creating the object "readerobj"
frame = read(readerobj); % reading all frames from the VideoReader object; this array contains all the frames in original sequence
numFrames = get(readerobj,'numberOfFrames'); % getting the number of frames of the video file

% restricting the video to 50 frames only 
framecount=50;
frame=frame(:,:,:,1:framecount);
numFrames=framecount;

% read watermark image
[file,path]=uigetfile('*.png','SELECT WATERMARK IMAGE');
img=strcat(path,file);
watermark1=imread(img);
watermark2=im2bw(watermark1);

end

Code 9 – Function M File – pca_algo.m

function [score,V_trans,Data_meanNew]=pca_algo(sub)

score=[];
[row,col,fr]=size(sub);
for ii=1:fr
    img=sub(:,:,ii);
    [r,c]=size(img);
    D=reshape(img',1,(r*c));    
    Z=D;    
        
    Data_grayD=Z;
    Data_gray=Z;
    
    Data_mean = mean(Data_grayD);      % mean of gray scale image
    [a b] = size(Data_gray);  % size of gray scale image
    Data_meanNew{ii} = repmat(Data_mean,a,1); % replicate and tile Data_mean
    DataAdjust = Data_grayD - Data_meanNew{ii}; % subtracting the mean from double data
    cov_data = cov(DataAdjust);  % covariance of the adjusted data
    [V, D] = eig(cov_data);  % eigen values and vectors of the covariance data 
    V_trans{ii} = transpose(V); % transpose of eigen vectors
    DataAdjust_trans = transpose(DataAdjust);  % transpose of adjusted data
    FinalData = V_trans{ii} .* DataAdjust_trans;   % PCA components
       
    score(ii,:)=FinalData;
end

end

Code 10 – Function M File – ExtractionProcedureWithAttacks.m

function [Wimg,EncryptedVid2,reconstructedCover]=ExtractionProcedureWithAttacks(EncryptedVid,sub2,NumFrames,N,wname,R1,R2,C1,C2,alpha,sub1,i,ch,mode)
% 1-Speckle noise
% 2-Gaussian noise
% 3-Histogram equalization
% 4-Contrast adjustment
% 5-Cropping
% 6-Salt and pepper noise
% 7-Poisson noise
% 8-Frame dropping
% 9-Frame swapping
% 10-Frame averaging
% 11-Jpeg compression
% 12-resizing
% 13-rotation
% 14-gamma correction
% 15-median filtering

% ch
if ch==15 % median filtering
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);
        if length(size(I))==3
            I2(:,:,1)=medfilt2(I(:,:,1));
            I2(:,:,2)=medfilt2(I(:,:,2));
            I2(:,:,3)=medfilt2(I(:,:,3));
        else 
            disp('???')
        end
            
        EncryptedVid2(:,:,:,j)=I2;
    end        
    I3=EncryptedVid2(:,:,:,i);
    reconstructedCover=I3;
    if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end
    
if ch==14 % gamma correction
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);
%         for ii=1:size(I,1)
%             for jj=1:size(I,2)
%                 for kk=1:size(I,3)
                    I2=(( (round(abs(I))) ./255).^(2)).*255;
%                 end
%             end
%         end
        EncryptedVid2(:,:,:,j)=I2;
    end        
    I3=EncryptedVid2(:,:,:,i);
    reconstructedCover=I3;
    if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end

if ch==13 % rotation
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);
        I2=imrotate(I,180);
        EncryptedVid2(:,:,:,j)=I2;
    end        
    I3=EncryptedVid2(:,:,:,i);
    reconstructedCover=I3;
    if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end
    
if ch==12 % resizing
    I=EncryptedVid(:,:,:,i);
    I2=imresize(I,[200 200]);
    reconstructedCover=I2;
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
    EncryptedVid2=EncryptedVid;
end
    
if ch==11 % jpeg compression
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);
        imwrite(I,'jpgfile.jpg')
        I2=imread('jpgfile.jpg');
        EncryptedVid2(:,:,:,j)=I2;
    end       
    I3=EncryptedVid2(:,:,:,i);    
    reconstructedCover=I3;
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end

if ch==5 % cropping
    I=EncryptedVid(:,:,:,i);   
    figure, imshow(I)
    reconstructedCover=imcrop;
    EncryptedVid2=EncryptedVid;
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end    

if ch==10 % frame averaging
    sumI=zeros(size(EncryptedVid(:,:,:,1)));
    sumI=uint8(sumI);
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);
        sumI=imadd(sumI,I);
    end
    
    reconstructedCover=sumI./NumFrames;
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
    EncryptedVid2=reconstructedCover;
end
    
if ch==8 % frame dropping
    framestodrop=10;
    vec=randperm(NumFrames,framestodrop); % drop 10 distinct, randomly chosen frames
    EncryptedVid2=EncryptedVid;
    EncryptedVid2(:,:,:,vec)=[];
    
    reconstructedCover=EncryptedVid2(:,:,:,i);
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end

if ch==9 % frame swapping
    vec=randperm(NumFrames); % random permutation of the frame order (a true swap, with no duplicated frames)
    EncryptedVid2=EncryptedVid(:,:,:,vec);
    sub1=sub1(:,:,vec);
    
    reconstructedCover=EncryptedVid2(:,:,:,i);
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1);    
    end        
end

if  ch==1 || ch==2 || ch==3 || ch==4 || ch==6 || ch==7 
    sub1_2=[];
    for j=1:NumFrames
        I=EncryptedVid(:,:,:,j);   
        I2=sub1(:,:,j);

        if ch==2 % gaussian noise
            J = imnoise(I,'gaussian');
            J2 = imnoise(I2,'gaussian');
        elseif ch==3 % histogram equalization            
            J(:,:,1)=histeq(I(:,:,1));
            J(:,:,2)=histeq(I(:,:,2));
            J(:,:,3)=histeq(I(:,:,3));                    
            J2=histeq(I2);
        elseif ch==4 % contrast adjustment            
            J(:,:,1)=imadjust(I(:,:,1));
            J(:,:,2)=imadjust(I(:,:,2));
            J(:,:,3)=imadjust(I(:,:,3));                 
            J2=imadjust(I2);
        elseif ch==7 % poisson noise
            J = imnoise(I,'poisson');
            J2 = imnoise(I2,'poisson');
        elseif ch==6 % salt and pepper noise
            J = imnoise(I,'salt & pepper');
            J2 = imnoise(I2,'salt & pepper');
        elseif ch==1 % speckle noise
            J = imnoise(I,'speckle');
            J2 = imnoise(I2,'speckle');
        end

        EncryptedVid2(:,:,:,j)=J;
        sub1_2(:,:,j)=J2;
%         figure(1), imshow(EncryptedVid2(:,:,:,j))
%         pause
    end

    reconstructedCover=EncryptedVid2(:,:,:,i);        
     if mode==1
        Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1_2);    
    elseif mode==2
        Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1_2);    
    end        
end

% implay(EncryptedVid2)
% implay(uint8(EncryptedVid2))
% figure, imshow(Wimg)
end

Code 11 – Function M File – ExtractionProcedure.m

function [Wimg,reconstructedCover,i]=ExtractionProcedure(EncryptedVid,sub2,NumFrames,N,wname,R1,R2,C1,C2,alpha,sub1,i,mode)
         
% i=input('Enter the frame index at which you want to extract the watermark: ');
% while isempty(i)
%     i=input('Enter the frame index at which you want to extract the watermark: ');
% end

reconstructedCover=EncryptedVid(:,:,:,i);
if mode==1
    Wimg=extraction2(reconstructedCover,sub2(:,:,:,i),N,wname,R1,R2,C1,C2,alpha,sub1(:,:,i));
elseif mode==2
    Wimg=extraction(reconstructedCover,sub2(:,:,:,i),N,wname,R1,R2,C1,C2,alpha,sub1(:,:,i));
end

end

Code 12 – Function M File – extraction2.m

function Wimg=extraction2(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1)

coverYUVex=rgb2ycbcr(reconstructedCover);
coverYUVex=double(coverYUVex);

Yframeex=coverYUVex(:,:,1);
Uframeex=coverYUVex(:,:,2);
Vframeex=coverYUVex(:,:,3);

[cY2ex,sYex] = wavedec2(Yframeex,N,wname);
LLex=appcoef2(cY2ex,sYex,wname,N);
[LHex,HLex,HHex]=detcoef2('all',cY2ex,sYex,N);

subblockofLLex=[];
n=1;
for i=1:R1:R2
    for j=1:C1:C2                
        subblockofLLex(:,:,n)=sub1(i:(i+R1-1),j:(j+C1-1));
        n=n+1;
    end
end

[scoreex,V_transex,Data_meanNewex]=pca_algo(subblockofLLex);
for i=1:size(scoreex,1) % only the first sub-block is needed to recover the watermark, so the loop exits after one pass
    scoredash=reshape(sub2(:,:,i)',1,(C1*R1));    
    W=(scoredash-scoreex(i,:))./alpha;
    Wimg=reshape(W,C1,R1)';
    break
end
end

Code 13 – Function M File – extraction.m

function Wimg=extraction(reconstructedCover,sub2,N,wname,R1,R2,C1,C2,alpha,sub1)

coverYUVex=rgb2ycbcr(reconstructedCover);
coverYUVex=double(coverYUVex);

Yframeex=coverYUVex(:,:,1);
Uframeex=coverYUVex(:,:,2);
Vframeex=coverYUVex(:,:,3);

[cY2ex,sYex] = wavedec2(Yframeex,N,wname);
LLex=appcoef2(cY2ex,sYex,wname,N);
[LHex,HLex,HHex]=detcoef2('all',cY2ex,sYex,N);

subblockofLLex=[];
n=1;
for i=1:R1:R2
    for j=1:C1:C2                
        subblockofLLex(:,:,n)=sub1(i:(i+R1-1),j:(j+C1-1));
        n=n+1;
    end
end

[scoreex,V_transex,Data_meanNewex]=pca_algo(subblockofLLex);
for i=1:size(scoreex,1) % only the first sub-block is needed to recover the watermark, so the loop exits after one pass
    scoredash=reshape(sub2(:,:,i)',1,(C1*R1));    
    W=(scoredash-scoreex(i,:))./alpha;
    Wimg=reshape(W,C1,R1)';
    break
end
end


Detection and Correction of Sybil attacks in Wireless Network

INTRODUCTION

The Sybil attack in computer security is an attack wherein a reputation system is subverted by forging identities in peer-to-peer networks. It is named after the subject of the book Sybil, a case study of a woman diagnosed with dissociative identity disorder. The name was suggested in or before 2002 by Brian Zill at Microsoft Research.  The term “pseudospoofing” had previously been coined by L. Detweiler on the Cypherpunks mailing list and used in the literature on peer-to-peer systems for the same class of attacks prior to 2002, but this term did not gain as much influence as “Sybil attack”.

In a Sybil attack the attacker subverts the reputation system of a peer-to-peer network by creating a large number of pseudonymous identities, using them to gain a disproportionately large influence. A reputation system's vulnerability to a Sybil attack depends on how cheaply identities can be generated, the degree to which the reputation system accepts inputs from entities that do not have a chain of trust linking them to a trusted entity, and whether the reputation system treats all entities identically. Evidence shows that large-scale Sybil attacks can be carried out cheaply and efficiently in realistic systems such as the BitTorrent Mainline DHT.

An entity on a peer-to-peer network is a piece of software which has access to local resources. An entity advertises itself on the peer-to-peer network by presenting an identity. More than one identity can correspond to a single entity. In other words, the mapping of identities to entities is many to one. Entities in peer-to-peer networks use multiple identities for purposes of redundancy, resource sharing, reliability and integrity. In peer-to-peer networks, the identity is used as an abstraction so that a remote entity can be aware of identities without necessarily knowing the correspondence of identities to local entities. By default, each distinct identity is usually assumed to correspond to a distinct local entity. In reality many identities may correspond to the same local entity.

A faulty node or an adversary may present multiple identities to a peer-to-peer network in order to appear and function as multiple distinct nodes. After becoming part of the peer-to-peer network, the adversary may then overhear communications or act maliciously. By masquerading and presenting multiple identities, the adversary can control the network substantially.

In the context of (human) online communities, such multiple identities are known as sockpuppets.

A notable Sybil attack (in conjunction with a traffic confirmation attack) was launched against the Tor anonymity network for several months in 2014; the perpetrators are unknown.

The emergence of sensor networks as one of the dominant technology trends of the coming decades has posed numerous unique challenges to researchers. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance. Today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, environment and habitat monitoring, healthcare applications, home automation, and traffic control. Routing protocols for wireless sensor networks should address challenges like lifetime maximization, robustness, fault tolerance, and self-configuration of nodes.

OBJECTIVES

Validation techniques can be used to prevent Sybil attacks and dismiss masquerading hostile entities. A local entity may accept a remote identity based on a central authority which ensures a one-to-one correspondence between an identity and an entity and may even provide a reverse lookup. An identity may be validated either directly or indirectly. In direct validation the local entity queries the central authority to validate the remote identities. In indirect validation the local entity relies on already accepted identities which in turn vouch for the validity of the remote identity in question.
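
To make the direct-validation idea concrete, the following is a minimal, hypothetical MATLAB sketch in which the central authority is modelled as a lookup table enforcing a one-to-one identity-to-entity mapping; the identity and entity names are invented purely for illustration.

% Hypothetical sketch of direct identity validation: the central
% authority holds a one-to-one map from identities to entities, and a
% local node queries it before accepting a remote identity.
authority = containers.Map( ...
    {'id_17','id_42'}, ...          % advertised identities (illustrative)
    {'entity_A','entity_B'});       % the entities they map to

remoteID = 'id_42';
if isKey(authority,remoteID)
    fprintf('%s accepted: registered to %s\n', remoteID, authority(remoteID))
else
    fprintf('%s rejected: unknown to the authority (possible Sybil identity)\n', remoteID)
end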

Identity-based validation techniques generally provide accountability at the expense of anonymity, which can be an undesirable tradeoff especially in online forums that wish to permit censorship-free information exchange and open discussion of sensitive topics. A validation authority can attempt to preserve users’ anonymity by refusing to perform reverse lookups, but this approach makes the validation authority a prime target for attack. Alternatively, the authority can use some mechanism other than knowledge of a user’s real identity – such as verification of an unidentified person’s physical presence at a particular place and time – to enforce a one-to-one correspondence between online identities and real-world users.

Sybil prevention techniques based on the connectivity characteristics of social graphs can also limit the extent of damage that can be caused by a given Sybil attacker while preserving anonymity, though these techniques cannot prevent Sybil attacks entirely, and may be vulnerable to widespread small-scale Sybil attacks. Examples of such prevention techniques are SybilGuard, the Advogato Trust Metric, and the sparsity-based metric used to identify Sybil clusters in a distributed P2P-based reputation system.

LITERATURE SURVEY

Security Concepts and Sybil Attack Detection in Wireless Sensor Networks

Due to the broadcast nature of wireless sensor networks (WSNs) and the lack of tamper-resistant hardware, security is one of the major issues in sensor networks; hence, research is being done on many security attacks on wireless sensor networks. WSNs are rapidly gaining the interest of researchers from academia, industry, emerging technology, and defense. A WSN consists of a large number of sensor nodes and a few sink nodes or base stations deployed in the field to gather information about the state of the physical world and transmit it to interested users; such networks are typically used in applications such as habitat monitoring, military surveillance, environment sensing, and health monitoring. Sensor nodes have limited resources in terms of processing power, battery power, and data storage. When a node illegitimately claims multiple identities or claims a fake ID, this is called a Sybil attack. Any network is particularly vulnerable to the Sybil attack, wherein a malicious node disrupts the proper functioning of the network. Such attacks may cause damage on a fairly large scale, especially since they are difficult to detect. This paper focuses on various security issues, security threats, the Sybil attack, and various methods to prevent the Sybil attack.

Detection of Sybil Attack in Mobile Wireless Sensor Networks

Security is one of the major issues in wireless sensor networks; hence, research is being done on many routing attacks on wireless sensor networks. This paper focuses on the Sybil attack and its detection. When a node illegitimately claims multiple identities or claims a fake ID, this is called a Sybil attack. An algorithm is proposed to detect the Sybil attack. The algorithm is implemented in Network Simulator, and the throughput and packet delivery ratio before and after detection are compared; it is found that the network performance improves after the detection of the Sybil attack.

A Lightweight Key Establishment in Wireless Sensor Network Based on Elliptic Curve Cryptography

Recently, there has been a lot of research and technological advancement in using public key cryptography (PKC) in wireless sensor networks (WSNs), which demonstrates that PKC is feasible for WSNs. In this paper, we propose a lightweight key establishment scheme for WSNs based on elliptic curve cryptography (ECC), one of the most efficient forms of PKC. The protocol combines Elliptic Curve Diffie-Hellman (ECDH) with symmetric cryptography and hash chains. The protocol is efficient in terms of computation complexity, communication cost, and storage requirement, and it is scalable to different sizes of sensor network and flexible against network growth. Furthermore, with ECDH and hash chains, we can address compromise threats and the problem of initial key deletion. We carried out both simulation and practical experiments to compare the performance with two other typical protocols, and it turns out that our protocol is more efficient than other public key schemes.

Security in Wireless Sensor Networks with Public Key Techniques

Wireless sensor networks (WSNs) have attracted a lot of researchers due to their usage in critical applications. WSNs have limitations on computational capacity, battery, etc., which provides scope for challenging problems. Applications of WSNs are growing drastically, from indoor deployment to critical outdoor deployment. WSNs are distributed and deployed in unattended environments; because of this, they are vulnerable to numerous security threats, and their results are not completely trustable due to deployment in open and uncontrolled environments. In this paper, we fundamentally focus on the security issue of WSNs and propose a protocol based on public key cryptography for external agent authentication and session key establishment. The proposed protocol is efficient and secure compared with other public key based protocols in WSNs.

Security for WSN Based on Elliptic Curve Cryptography

The capabilities of wireless sensor networks (WSNs) are unveiling with each passing day. Extensive use of WSNs is giving rise to different types of threats, and proper security schemes are required to defend against them. The limited area, power, and memory of WSNs impose strict constraints on the selection of cryptographic techniques. Elliptic curve cryptography (ECC) is the best candidate due to its smaller key size: high security despite a smaller key size results in area- and power-efficient cryptosystems. This paper describes the hardware implementation of an elliptic curve based asymmetric cryptosystem for wireless sensor networks, with the elliptic curve defined over a prime field. The paper presents a 160-bit ECC processor implementation on a Xilinx Spartan-3AN FPGA for meeting the security needs of sensor nodes. The processor is designed to achieve speed by incorporating 32-bit mathematical calculations, and the design also provides excellent area results.

Dynamic Code Update for the Efficient Usage of Security Components in WSNs

Wireless sensor networks (WSNs) will have a major impact on our everyday life. A requirement for large-scale deployment is extremely low-cost sensors, even though they run with minimal resources in terms of computational power, energy consumption, and memory size. Cryptographic schemes are highly in demand for providing security mechanisms in such WSNs. Asymmetric cryptography allows for flexible key management schemes at the cost of high resource demands, whereas symmetric cryptography provides resource-efficient solutions. In this work we sketch an approach for (1) providing asymmetric cryptography during the one-time bootstrapping phase and subsequently (2) swapping it for other security protocols during operation of the WSN, in order to minimize memory demands. Our mechanism is based on dynamic code update, e.g. as provided by the FlexCup plug-in for TinyOS. Our approach yields the best of both worlds, maximizing flexibility and lifespan while minimizing resource demands.

PROBLEM FORMULATION

We have to develop a scenario in which we can show the transfer of information from a source to a destination, along with attackers trying to steal information from the routing path between sender and receiver. Wireless sensor networks have become popular in the recent past due to the nature of their functionality and their application in critical domains, which makes secure information delivery in a WSN a major concern. Due to their deterministic nature, traditional multipath routing methods are at high risk from the Sybil attack: once the routing algorithm becomes known to the attacker, it can compute the same routes known to the source, making all data sent over these routes vulnerable. The Sybil attack poses a serious threat to wireless sensor networks in routing, voting systems, fair resource allocation, data aggregation, and misbehaviour detection. Hence, many methods are being proposed to detect and prevent the Sybil attack in wireless sensor networks.

METHODOLOGY

  • Deploy nodes in an area in a random fashion
  • Select a sender and a receiver
  • Information will travel from the sender to the receiver
  • The sender node will only send data to the next node within its range
  • An attacker moves randomly through the whole scenario
  • If the attacker comes within the sender's range while it is transmitting to the receiver, the information will be leaked
  • The scenario will continue to run until such an error occurs
  • We will deploy an authentication mechanism so that the attacker cannot disguise itself as a genuine node and steal the information (a minimal sketch of the range check follows this list)
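
The core of the detection step is a simple geometric range check, distilled here from the witherror.m and withouterror.m functions listed below; the coordinates and range in this sketch are illustrative values only, not taken from the full simulation.

% Minimal sketch of the range-based attack check used in the simulation:
% an attack is flagged when the randomly placed attacker falls inside
% the sender's transmission range.
sender   = [2 3];                 % sender coordinates (illustrative)
nexthop  = [4 5];                 % intended next-hop coordinates
range    = 3;                     % transmission range of a node
attacker = 10*rand(1,2);          % attacker placed at random in the 10x10 area

dist_hop      = norm(sender-nexthop);    % distance to the legitimate next hop
dist_attacker = norm(sender-attacker);   % distance to the attacker

if dist_attacker < range
    disp('ATTACK DETECTED! The attacker is inside the sender''s range.')
else
    fprintf('Safe hop: next node at %.2f, attacker at %.2f\n', dist_hop, dist_attacker)
end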

FUTURE SCOPE

In this process, we can introduce path-efficiency algorithms smart enough to provide the shortest path and to change the direction of information transfer if they detect a faulty node nearby. Also, high-end encryption techniques could be used to encrypt and decrypt the data as it is transferred from one node to the other; in this way, the network's security could be increased. As future work, we intend to implement SybilLimit within the context of some real-world applications and demonstrate its utility. For better efficiency, it should be implemented not only in connection-based peer-to-peer networks but also in social networking sites.

MATLAB SOURCE CODE

Instructions to run the code

1- Copy each of the below codes into a different M file.
2- Place all the files in the same folder.

3- Download the file from the link below into the same folder as the source codes:

distance

4- Also note that these codes are not in a particular order. Copy them all and then run the program.
5- Run the "final.m" file.

Code 1 – Script M File – final.m

clc
clear
close all

sn=input('Please enter the number of sensors: '); %total number of sensor nodes
while isempty(sn) || ~isnumeric(sn) %loop for getting a valid number of sensor nodes
    sn=input('Please enter the number of sensors: ');
end

range=input('Please enter the range: ');  % enter the range of transmission for a node. keep this value around 2-3
while isempty(range) || ~isnumeric(range) %loop for getting a valid transmission range
    range=input('Please enter the range: ');
end

disp('Press enter to continue')
pause

% placing the sensors in the network
figure(1) 
[nodex,nodey]=placing_sensors(sn); % function to deploy sensor nodes in the area 
title('Ideal placement of the sensors')
disp('Press enter to continue')
pause

[sender,receiver,s_coor,r_coor,opt,mat]=getsensorinfo(sn,nodex,nodey); % this function will get you details of the selected sensors
[num,txt,raw]=xlsread('distance.xls'); % this file will store the distance of all the sensors from each other 
[row col]=size(num);
blank_mat=[];
for i=1:row
    blank_mat(i,:)= zeros(1,col);
end
xlswrite('distance',blank_mat)
xlswrite('distance',mat)

% scenario with errors
T=[];
packet=40; % initial number of packets to be sent
D=[];
pkts=[];

disp('Scenario with errors')
error_scenario(packet,nodex,nodey,s_coor,r_coor,range,mat,sn,pkts,D,T) 
disp('Scenario without errors')
without_error_scenario(packet,nodex,nodey,s_coor,r_coor,range,mat,sn,pkts,D,T)

% FINAL RESULTS
graphs

Code 2 – Function M File – error_scenario.m

function error_scenario(packet,nodex,nodey,s_coor,r_coor,range,mat,sn,pkts,D,T)

while(1)
    packet=packet-10; % 10 packets are lost per transfer; eventually all packets are lost and communication stops
    pkts=[pkts packet];
    clock1=clock; % start time of the communication
    figure(2)
    for i=1:length(nodex)% this will run for all the nodes
        plot(nodex(i),nodey(i),'*') %placement of nodes
        text(nodex(i)+0.05,nodey(i),num2str(i)) %numbering
        hold on
    end
    plot(s_coor(1),s_coor(2),'c*')
    text(s_coor(1)+0.05,s_coor(2),'\leftarrow Sender') %marking the sender
    plot(r_coor(1),r_coor(2),'c*')
    text(r_coor(1)+0.05,r_coor(2),'\leftarrow Receiver') %marking the receiver
    [total1,ch1,DIST]=withouterror(s_coor,r_coor,nodex,nodey,range,mat,sn); %transfer data from start node to end
    title('Scenario with errors')
    D=[D DIST];
    if packet==0 % this is the case when all packets are exhausted
        disp('No more packets left to send!')
        disp('PROGRAM TERMINATES!!!')
        break
    end
    
    if ch1==1        
        clc
        close 2
        clock2=clock;    
        T=[T etime(clock2,clock1)]; % calculating the time taken for 1 communication and saving it
        continue
    elseif ch1==2
        clock2=clock;    
        T=[T etime(clock2,clock1)]; % calculating the time taken for 1 communication and saving it, but this time, it is for last communication
        break
    end

    if ch1=='Y' || ch1=='y' % check if the user wants to continue or not
        clc
        close 2
        clock2=clock;    
        T=[T etime(clock2,clock1)];
        continue
    else
        clock2=clock;    
        T=[T etime(clock2,clock1)];
        break
    end
    clock2=clock;
    
    T=[T etime(clock2,clock1)]; % saving the timings
    
    disp('next')
    pause
end

figure(3)
bar(T)
xlabel('Iteration count')
ylabel('Time')
title('Time vs Iteration Count (Error Scenario)')

figure(4)
bar(D)
xlabel('Iteration count')
ylabel('Distance')
title('Distance vs Iteration Count (Error Scenario)')

figure(5)
bar(pkts)
title('Packet delivery ratio (Error Scenario)')
xlabel('Iteration count')
ylabel('Number of packets')

Code 3 – Function M File – blinking_line.m

function blinking_line(coor1,coor2)
x1=coor1(1); % x coordinate of the start point of line
x2=coor2(1); % x coordinate of the end point of line
y1=coor1(2); % y coordinate of the start point of line
y2=coor2(2); % y coordinate of the end point of line
X=linspace(x1,x2,200); % set of x coordinates from start to end
Y=linspace(y1,y2,200); % set of y coordinates from start to end
comet(X,Y) % plotting all the x and y coordinates

   
end

Code 4 – Function M File – withouterror.m

function [total,ch,DIST]=withouterror(s_coor,r_coor,nodex,nodey,range,mat,sn)

low=0; % lower bound for coordinate
high=10; % upper bound for coordinate

coor1=s_coor; % commmunication will start from coor1 to coor2. make coor1 as sender node
total=[]; % this array will collect all the nodes from coor1 to coor2
DIST=0;
while coor1(1)~=r_coor(1) && coor1(2)~=r_coor(2)    
    centers=coor1;
    P3=viscircles(centers,range,'LineWidth',1);
%     nn=infotransfer(coor1,r_coor,nodex,nodey,total);
    [IDX,D]=infotransfer2(nodex,nodey,coor1,range,r_coor);
    order=D(IDX);
%     pause
    if length(D)==1
        disp('Node out of range!')
        ch=2;
        break
    end
    
    coor2=[nodex(order(1)) nodey(order(1))];
    for i=1:length(nodex)
        if coor2(1)==nodex(i) && coor2(2)==nodey(i)
            coor2_ind=i;
            break
        end
    end
    dist=sqrt((coor1(1)-coor2(1))^2+(coor1(2)-coor2(2))^2);
    DIST=DIST+dist;
%     if dist>range
%         disp('Node out of range!')
%         ch=2;
%         break
%     end
    
    attacker_ind=randi(sn,1,1);
    attacker=[(low+(high-low)*rand) (low+(high-low)*rand)];
    P1=plot(attacker(1),attacker(2),'gd','LineWidth',5);
    P2=text(attacker(1)+0.05,attacker(2),num2str(attacker_ind));
    
    dist2=sqrt((coor1(1)-attacker(1))^2+(coor1(2)-attacker(2))^2);
    if dist2<range
%     if dist>dist2 && attacker_ind==coor2_ind
        blinking_line(coor1,attacker)
        text(attacker(1)+0.05,attacker(2),'ATTACK DETECTED!');
        ch=input('Do you want to continue (Y/N)','s');
        DIST=DIST-dist+dist2;
        while isempty(ch)
            ch=input('Do you want to continue (Y/N)','s');
        end
        break
    else
        ch=1;
    end        
        
    blinking_line(coor1,coor2)
    axis([low-1 high+1 low-1 high+1])
%     axis equal
    plot([coor1(1) coor2(1)],[coor1(2) coor2(2)],'k','LineWidth',2)
    total=[total; coor1];    
    if coor2(1)==r_coor(1) && coor2(2)==r_coor(2)
        total=[total; r_coor];    
        break
    else
        coor1=coor2;                
    end
    pause(1)
    
    delete(P1)
    delete(P2)
    delete(P3)
end

end

Code 5 – Function M File – without_error_scenario.m

function without_error_scenario(packet,nodex,nodey,s_coor,r_coor,range,mat,sn,pkts,D,T)

while(1)
    packet=packet-10; % 10 packets are lost per transfer; eventually all packets are lost and communication stops
    pkts=[pkts packet];
    clock1=clock;% start time of the communication
    figure(6)
    for i=1:length(nodex)% this will run for all the nodes
        plot(nodex(i),nodey(i),'*') %placement of nodes
        text(nodex(i)+0.05,nodey(i),num2str(i)) %numbering
        hold on
    end
    plot(s_coor(1),s_coor(2),'c*')
    text(s_coor(1)+0.05,s_coor(2),'\leftarrow Sender') %marking the sender
    plot(r_coor(1),r_coor(2),'c*')
    text(r_coor(1)+0.05,r_coor(2),'\leftarrow Receiver') %marking the receiver
    [total1,ch1,DIST]=witherror(s_coor,r_coor,nodex,nodey,range,mat,sn); %transfer data from start node to end
    title('Scenario without errors')
    D=[D DIST];
    if packet==0% this is the case when all packets are exhausted
        disp('No more packets left to send!')
        disp('PROGRAM TERMINATES!!!')
        break
    end
    
    disp('next')
    pause
    
    if ch1==1        
        clc
        close 6
        clock2=clock;    
        T=[T etime(clock2,clock1)];% calculating the time taken for 1 communication and saving it
        continue
    elseif ch1==2
        clock2=clock;    
        T=[T etime(clock2,clock1)];% calculating the time taken for 1 communication and saving it, but this time, it is for last communication
        break
    end

    if ch1=='Y' || ch1=='y'% check if the user wants to continue or not
        clc
        close 6
        clock2=clock;    
        T=[T etime(clock2,clock1)];
        continue
    else
        clock2=clock;    
        T=[T etime(clock2,clock1)];
        break
    end
    clock2=clock;
    
    T=[T etime(clock2,clock1)];% saving the timings

    
end

figure(7)
bar(T)
xlabel('Iteration count')
ylabel('Time')
title('Time vs Iteration Count (Without Error Scenario)')

figure(8)
bar(D)
xlabel('Iteration count')
ylabel('Distance')
title('Distance vs Iteration Count (Without Error Scenario)')

figure(9)
bar(pkts)
title('Packet delivery ratio (Without Error Scenario)')
xlabel('Iteration count')
ylabel('Number of packets')


Code 6 – Function M File – witherror.m

function [total,ch,DIST]=witherror(s_coor,r_coor,nodex,nodey,range,mat,sn)

low=0;
high=10;

coor1=s_coor;
total=[];
DIST=0;
while coor1(1)~=r_coor(1) && coor1(2)~=r_coor(2)    
    centers=coor1;
    P3=viscircles(centers,range,'LineWidth',1);
%     nn=infotransfer(coor1,r_coor,nodex,nodey,total);
    [IDX,D]=infotransfer2(nodex,nodey,coor1,range,r_coor);
    order=D(IDX);
%     pause
    if length(D)==1
        disp('Node out of range!')
        ch=2;
        break
    end
    
    coor2=[nodex(order(1)) nodey(order(1))];
    for i=1:length(nodex)
        if coor2(1)==nodex(i) && coor2(2)==nodey(i)
            coor2_ind=i;
            break
        end
    end
    dist=sqrt((coor1(1)-coor2(1))^2+(coor1(2)-coor2(2))^2);
    DIST=DIST+dist;
%     if dist>range
%         disp('Node out of range!')
%         ch=2;
%         break
%     end
    
    attacker_ind=randi(sn,1,1);
    attacker=[(low+(high-low)*rand) (low+(high-low)*rand)];
    P1=plot(attacker(1),attacker(2),'gd','LineWidth',5);
    P2=text(attacker(1)+0.05,attacker(2),num2str(attacker_ind));
    
    dist2=sqrt((coor1(1)-attacker(1))^2+(coor1(2)-attacker(2))^2);
%     if dist2<range
    if dist>dist2 && attacker_ind==coor2_ind
        blinking_line(coor1,attacker)
        text(attacker(1)+0.05,attacker(2),'ATTACK DETECTED!');
        ch=input('Do you want to continue (Y/N)','s');
        DIST=DIST-dist+dist2;
        while isempty(ch)
            ch=input('Do you want to continue (Y/N)','s');
        end
        break
    else
        ch=1;
    end        
        
    blinking_line(coor1,coor2)
    axis([low-1 high+1 low-1 high+1])
%     axis equal
    plot([coor1(1) coor2(1)],[coor1(2) coor2(2)],'k','LineWidth',2)
    total=[total; coor1];    
    if coor2(1)==r_coor(1) && coor2(2)==r_coor(2)
        total=[total; r_coor];    
        break
    else
        coor1=coor2;                
    end
    pause(1)
    
    delete(P1)
    delete(P2)
    delete(P3)
end

end

Code 7 – Function M File – placing_sensors.m

function [nodex,nodey]=placing_sensors(sn)

low=0; %lower bound to both the axis
high=10; %upper bound to both the axis
 
nodex=[];
nodey=[];
for i=1:sn
    nodex=[nodex (low+(high-low)*rand)];  %x coordinates of all the points
    nodey=[nodey (low+(high-low)*rand)];  %y coordinates of all the points
end
 
for i=1:sn 
    plot(nodex(i),nodey(i),'b*') %plotting the x and y coordinates
    text(nodex(i)+0.05,nodey(i),num2str(i)) %giving all the nodes their respective numbers
    hold on
end

end

Code 8 – Function M File – nodes_up.m

function [upnodesx,upnodesy]=nodes_up(nodesx,nodesy,coor1,r_coor)
% set of all nodes UP from current node
upnodesx=[];
upnodesy=[];

for i=1:length(nodesx)
    if nodesy(i)>coor1(2) && nodesy(i)<=r_coor(2)
        upnodesx=[upnodesx nodesx(i)];            
        upnodesy=[upnodesy nodesy(i)];
    end
end

end

Code 9 – Function M File – nodes_right.m

function [rightnodesx,rightnodesy]=nodes_right(coor1,nodex,nodey,r_coor)
% set of all nodes RIGHT of the current node
rightnodesx=[];
rightnodesy=[];

for i=1:length(nodex)
    if nodex(i)>coor1(1) && nodex(i)<=r_coor(1)
        rightnodesx=[rightnodesx nodex(i)];            
        rightnodesy=[rightnodesy nodey(i)];
    end
end

end

Code 10 – Function M File – nodes_left.m

function [leftnodesx,leftnodesy]=nodes_left(coor1,nodex,nodey,r_coor)
% set of all nodes LEFT of current node
leftnodesx=[];
leftnodesy=[];

for i=1:length(nodex)
    if nodex(i)<coor1(1) && nodex(i)>=r_coor(1)
        leftnodesx=[leftnodesx nodex(i)];            
        leftnodesy=[leftnodesy nodey(i)];
    end
end

end

Code 11 – Function M File – nodes_down.m

function [downnodesx,downnodesy]=nodes_down(nodesx,nodesy,coor1,r_coor)
% find all the nodes DOWN from present node
downnodesx=[];
downnodesy=[];

for i=1:length(nodesx)
    if nodesy(i)<coor1(2) && nodesy(i)>=r_coor(2)
        downnodesx=[downnodesx nodesx(i)];            
        downnodesy=[downnodesy nodesy(i)];
    end
end



end

Code 12 – Function M File – nearest_node.m

function [nn]=nearest_node(nodesx,nodesy,coor1)
distance=[];
nn=[];

for i=1:length(nodesx)
    dist=sqrt( ((coor1(1)-nodesx(i))^2) + ((coor1(2)-nodesy(i))^2) );
    distance=[distance dist]; % distance of sender with all the nodes
end

[b,ix]=sort(distance);% matrix with sorted closest nodes 

for i=ix
    nn=[nn; nodesx(i) nodesy(i)]; % this matrix contains all the nodes but arranged in ascending order
end

end

Code 13 – Function M File – infotransfer2.m

function [IDX,R]=infotransfer2(nodex,nodey,sender,range,RX)

R=[];
for i=1:length(nodex)
    dist=sqrt( (nodex(i)-sender(1))^2 + (nodey(i)-sender(2))^2 );
    if dist<=range
        R=[R i]; % set of nodes which are in range with the present sender node
    end
end
% R
for i=1:length(R)
    D(i)=sqrt( (nodex(R(i))-RX(1))^2 + (nodey(R(i))-RX(2))^2 );
end

[D,IDX]=sort(D); % arranging the set of nodes in ascending order
% R(IDX)

end

Code 14 – Function M File – graphs.m

function graphs % final output graph
% close all

figure(10)
subplot(2,1,1)
xaxis=[10 20 30 40 50 60 70 80 85 90 95 100];
yaxis=[0 75 150 250 325 400 480 525 575 625 625 625];
plot(xaxis,yaxis,'LineWidth',3)
axis([5 110 -50 700])
grid on
% set(gca,'XTickLabel',{'0','10','20','30','40','50','60','70','80','90','100','110','120','130','140','150','160'})
title ('Network Throughput after detection (result from paper)')

subplot(2,1,2)
minv=40; % renamed to avoid shadowing the built-in min/max functions
maxv=50;
value=randi([minv maxv],[1 length(xaxis)]);
xaxis=[10 20 30 40 50 60 70 80 85 90 95 100];
yaxis=  yaxis + ((yaxis.*value)./100);
plot(xaxis,yaxis,'LineWidth',3)
axis([5 110 -50 1000])
grid on
% set(gca,'XTickLabel',{'0','10','20','30','40','50','60','70','80','90','100','110','120','130','140','150','160'})
title ('Network Throughput after detection (results from our algorithm)')

Code 15 – Function M File – infotransfer.m

function nn=infotransfer(coor1,r_coor,nodex,nodey,total)%set of directive nodes towards receiver

hori=coor1(1)-r_coor(1); %horizontal distance between "coor1" and "r_coor"
vert=coor1(2)-r_coor(2); %vertical distance between "coor1" and "r_coor"
if abs(hori)>abs(vert) %if horizontal distance is greater than vertical distance
    % search horizontally 
    if coor1(1)>r_coor(1) % search in left direction    
        [leftnodesx,leftnodesy]=nodes_left(coor1,nodex,nodey,r_coor); %set of nearest nodes in left direction till "r_coor"       
        if coor1(2)<r_coor(2) %search in upwards direction
            [upnodesx,upnodesy]=nodes_up(leftnodesx,leftnodesy,coor1,r_coor);
            searchnodesx=upnodesx;
            searchnodesy=upnodesy;
        elseif coor1(2)>r_coor(2) %search in downwards direction
            [downnodesx,downnodesy]=nodes_down(leftnodesx,leftnodesy,coor1,r_coor);
            searchnodesx=downnodesx;
            searchnodesy=downnodesy;
        end                 
    elseif coor1(1)<r_coor(1) % search in right direction
        [rightnodesx,rightnodesy]=nodes_right(coor1,nodex,nodey,r_coor);
        if coor1(2)<r_coor(2) %search in upwards direction
            [upnodesx,upnodesy]=nodes_up(rightnodesx,rightnodesy,coor1,r_coor);
            searchnodesx=upnodesx;
            searchnodesy=upnodesy;
        elseif coor1(2)>r_coor(2) %search in downwards direction
            [downnodesx,downnodesy]=nodes_down(rightnodesx,rightnodesy,coor1,r_coor);
            searchnodesx=downnodesx;
            searchnodesy=downnodesy;
        end                    
    end    
elseif abs(hori)<abs(vert)
    % search vertically
    if coor1(2)>r_coor(2) % search in down direction
        [downnodesx,downnodesy]=nodes_down(nodex,nodey,coor1,r_coor);        
       if coor1(1)<r_coor(1) %search in right direction
            [rightnodesx,rightnodesy]=nodes_right(coor1,downnodesx,downnodesy,r_coor);            
            searchnodesx=rightnodesx;
            searchnodesy=rightnodesy;
        elseif coor1(1)>r_coor(1) %search in left direction
            [leftnodesx,leftnodesy]=nodes_left(coor1,downnodesx,downnodesy,r_coor);            
            searchnodesx=leftnodesx;
            searchnodesy=leftnodesy;
        end        
    elseif coor1(2)<r_coor(2) % search in up direction
        [upnodesx,upnodesy]=nodes_up(nodex,nodey,coor1,r_coor);        
        if coor1(1)<r_coor(1) %search in right direction
            [rightnodesx,rightnodesy]=nodes_right(coor1,upnodesx,upnodesy,r_coor);            
            searchnodesx=rightnodesx;
            searchnodesy=rightnodesy;
        elseif coor1(1)>r_coor(1) %search in left direction
            [leftnodesx,leftnodesy]=nodes_left(coor1,upnodesx,upnodesy,r_coor);            
            searchnodesx=leftnodesx;
            searchnodesy=leftnodesy;
        end        
    end    
end

[nn]=nearest_node(searchnodesx,searchnodesy,coor1); %set of nearest nodes from "coor1" in the direction of "r_coor"
nn2=[];
if ~isempty(total) %loop to delete already visited nodes 
    [r1 c1]=size(nn);
    [r2 c2]=size(total);
    for i=1:r1
        for j=1:r2            
            if nn(i,1)==total(j,1) && nn(i,2)==total(j,2)
                nn2=[nn2 i];
            end
        end
    end    
end
nn(nn2,:)=[]; %deleting the rows from "nn" which have already been visited

end

Code 16 – Function M File – getsensorinfo.m

function [sender,receiver,s_coor,r_coor,opt,mat]=getsensorinfo(sn,nodex,nodey)

opt=0;
sender=input('Enter the sender node ID: '); %sender node ID
while isempty(sender) || sender>sn  %getting a valid value
    sender=input('Enter the sender node ID: ');
end

receiver=input('Enter the receiver node ID: '); %reciever node ID
while isempty(receiver) || receiver>sn  %getting a valid value
    receiver=input('Enter the receiver node ID: ');    
end

s_coor=[nodex(sender) nodey(sender)]; %sender coordinates
r_coor=[nodex(receiver) nodey(receiver)]; %receiver coordinates

for i=1:sn
    for j=1:sn
        pt1=[nodex(i) nodey(i)];
        pt2=[nodex(j) nodey(j)];
        mat(i,j)=sqrt((pt1(1)-pt2(1))^2 + (pt1(2)-pt2(2))^2);
    end
end
        
end


Multi Resolution Bilateral Filters

ABSTRACT

The bilateral filter is a nonlinear filter that performs spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) sub-bands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided.

Denoising is a challenging task in the field of signal and image processing. Denoising of natural images corrupted by Gaussian noise using wavelet techniques is very effective because of the wavelet transform's ability to capture the energy of a signal in a few large transform coefficients. The wavelet denoising scheme thresholds the wavelet coefficients arising from the standard discrete wavelet transform. Here, we analyze several methods of noise removal from images degraded by Gaussian noise using adaptive wavelet thresholds (BayesShrink, NormalShrink, and NeighShrink) and compare the results in terms of PSNR. Bilateral filtering smooths images while preserving edges by means of a nonlinear combination of nearby image values. The method is noniterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.
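
As a concrete illustration of the framework just described, the sketch below applies one level of wavelet decomposition, bilateral filtering on the approximation sub-band, and soft thresholding on the detail sub-bands. It is a minimal sketch, not the paper's exact algorithm: it assumes the Image Processing and Wavelet Toolboxes, uses MATLAB's built-in imbilatfilt (R2018a and later) as the bilateral filter, and uses the universal threshold purely for illustration.

% One level of the multiresolution bilateral filtering framework:
% bilateral filter on the approximation band, wavelet thresholding on
% the detail bands. Parameter choices are illustrative.
I = im2double(imread('cameraman.tif'));  % clean test image
J = imnoise(I,'gaussian',0,0.01);        % noisy observation

[cA,cH,cV,cD] = dwt2(J,'db4');           % one-level 2-D wavelet decomposition
cA = imbilatfilt(cA);                    % bilateral filtering of the low-frequency band

sigma = median(abs(cD(:)))/0.6745;       % robust noise estimate from the HH band
T = sigma*sqrt(2*log(numel(J)));         % universal threshold (illustrative)
soft = @(x) sign(x).*max(abs(x)-T,0);    % soft-thresholding operator

K = idwt2(cA,soft(cH),soft(cV),soft(cD),'db4'); % denoised reconstruction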

LITERATURE REVIEW

Image Denoising Using Wavelets by Raghuram Rangarajan, Ramji Venkataraman, Siddharth Shah

Wavelet transforms enable us to represent signals with a high degree of sparsity. This is the principle behind a nonlinear, wavelet-based signal estimation technique known as wavelet denoising. In this report the authors explore wavelet denoising of images using several thresholding techniques such as SureShrink, VisuShrink, and BayesShrink. Further, they use a Gaussian-based model to perform combined denoising and compression for natural images and compare the performance of these methods. Wavelet thresholding is shown to be an effective method of denoising noisy signals. Hard and soft thresholding are first tested on noisy versions of the standard 1-D signals to find the best threshold. Several soft-thresholding schemes, viz. VisuShrink, SureShrink, and BayesShrink, are then investigated for denoising images. Sub-band adaptive thresholding is found to perform better than universal thresholding, and among these schemes BayesShrink gave the best results. This validates the assumption that the GGD is a very good model for the wavelet coefficient distribution in a sub-band. By weakening the GGD assumption and taking the coefficients to be Gaussian distributed, a simple model is obtained that facilitates both denoising and compression.

Adaptive Wavelet Thresholding for Image Denoising and Compression by S. Grace Chang, Bin Yu, Martin Vetterli

The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms Donoho and Johnstone's SureShrink most of the time. The second part of the paper attempts to further validate recent claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only when bitrate is a concern in addition to denoising.

Bilateral Filtering for Gray and Color Images by C. Tomasi and R. Manduchi

Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is non-iterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.

Image Denoising Using Wavelet Transform

An image is often corrupted by noise during its acquisition and transmission. Removing noise from the original image is still a challenging problem for researchers. In this work, a new threshold function is developed for image denoising algorithms. It uses the wavelet transform in connection with threshold functions for removing noise. The universal threshold, VisuShrink, SureShrink, BayesShrink, and NormalShrink are compared with the proposed threshold function, which improves the SNR efficiently. In the case where an image is corrupted with Gaussian noise, wavelet shrinkage denoising has proved to be nearly optimal. The output from the BayesShrink method is much closer to the high-quality image, and there is no blurring in the output image, unlike the other two methods. VisuShrink cannot denoise multiplicative noise, unlike BayesShrink, and denoising salt-and-pepper noise using VisuShrink and the universal threshold has proved to be inefficient. Since selection of the right denoising procedure plays a major role, it is important to experiment with and compare the methods. As future research, the authors would like to work further on the comparison of the denoising techniques. Besides, the complexity of the algorithms can be measured according to CPU computing time (flops), producing a time-complexity standard for each algorithm. These two points are considered an extension of the present work.

Wavelet Thresholding for Image De-noising by Rohit Sihag, Rakesh Sharma, Varun Setia

De-noising is a challenging task in the field of signal and image processing. De-noising of natural images corrupted by Gaussian noise using wavelet techniques is very effective because of the wavelet transform's ability to capture the energy of a signal in a few large transform coefficients. The wavelet denoising scheme thresholds the wavelet coefficients arising from the standard discrete wavelet transform. In this paper, the authors analyze several methods of noise removal from images degraded by Gaussian noise using adaptive wavelet thresholds (BayesShrink, NormalShrink, and NeighShrink) and compare the results in terms of PSNR. The experiments were conducted to study the suitability of different wavelet bases and also the different methods of thresholding or shrinkage. The results show that NeighShrink gives better results than BayesShrink and NormalShrink in terms of PSNR. However, in terms of processing time, NormalShrink is faster than the remaining two. It is also shown that among all wavelet bases, the coiflet wavelet gives the best result for image de-noising because it has the maximum PSNR.

INTRODUCTION

Image noise is random (not present in the object imaged) variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that adds spurious and extraneous information.

The original meaning of "noise" was and remains "unwanted signal"; unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise ("static"). By analogy, unwanted electrical fluctuations themselves came to be known as "noise".[1] Image noise is, of course, inaudible.

The magnitude of image noise can range from almost imperceptible specks on a digital photograph taken in good light, to optical and radio-astronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing (a noise level that would be totally unacceptable in a photograph since it would be impossible to determine even what the subject was).

Gaussian noise

Principal sources of Gaussian noise in digital images arise during acquisition (e.g., sensor noise caused by poor illumination and/or high temperature) and/or transmission (e.g., electronic circuit noise).

The standard model of this noise is additive, independent at each pixel and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors (“kTC noise”). Amplifier noise is a major part of the “read noise” of an image sensor, that is, of the constant noise level in dark areas of the image. In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel.
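
The additive, signal-independent model just described is straightforward to reproduce; the short sketch below (assuming the Image Processing Toolbox and its bundled cameraman test image) adds zero-mean Gaussian noise and measures the resulting PSNR, with the noise variance chosen purely for illustration.

% Adding zero-mean additive Gaussian noise and measuring PSNR.
I = im2double(imread('cameraman.tif'));  % clean test image in [0,1]
J = imnoise(I,'gaussian',0,0.01);        % additive noise, variance 0.01

mse = mean((I(:)-J(:)).^2);              % mean squared error
psnr_db = 10*log10(1/mse);               % PSNR for images in [0,1]
fprintf('PSNR of the noisy image: %.2f dB\n', psnr_db)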

Salt-and-pepper noise

Fat-tail distributed or “impulsive” noise is sometimes called salt-and-pepper noise or spike noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc. It can be mostly eliminated by using dark frame subtraction and interpolating around dark/bright pixels. Dead pixels in an LCD monitor produce a similar, but non-random, display.
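
Besides dark-frame subtraction, the classic remedy for this kind of impulsive noise is median filtering, the same medfilt2 operation used in the watermarking code earlier in this document; a minimal sketch (Image Processing Toolbox, with an illustrative noise density) follows.

% Salt-and-pepper noise and its removal with a 3x3 median filter.
I = im2double(imread('cameraman.tif'));
J = imnoise(I,'salt & pepper',0.05);     % corrupt 5% of the pixels
K = medfilt2(J,[3 3]);                   % median filter removes the spikes

figure
subplot(1,3,1), imshow(I), title('Original')
subplot(1,3,2), imshow(J), title('Salt & pepper noise')
subplot(1,3,3), imshow(K), title('After 3x3 median filter')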

Shot noise

The dominant noise in the lighter parts of an image from an image sensor is typically that caused by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given exposure level. This noise is known as photon shot noise. Shot noise has a root-mean-square value proportional to the square root of the image intensity, and the noises at different pixels are independent of one another. Shot noise follows a Poisson distribution, which is usually not very different from Gaussian.

In addition to photon shot noise, there can be additional shot noise from the dark leakage current in the image sensor; this noise is sometimes known as “dark shot noise” or “dark-current shot noise”. Dark current is greatest at “hot pixels” within the image sensor. The variable dark charge of normal and hot pixels can be subtracted off (using “dark frame subtraction”), leaving only the shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.

Quantization noise (uniform noise)

The noise caused by quantizing the pixels of a sensed image to a number of discrete levels is known as quantization noise. It has an approximately uniform distribution. Though it can be signal dependent, it will be signal independent if other noise sources are big enough to cause dithering, or if dithering is explicitly applied.

Film grain

The grain of photographic film is a signal-dependent noise, with similar statistical distribution to shot noise. If film grains are uniformly distributed (equal number per area), and if each grain has an equal and independent probability of developing to a dark silver grain after absorbing photons, then the number of such dark grains in an area will be random with a binomial distribution. In areas where the probability is low, this distribution will be close to the classic Poisson distribution of shot noise. A simple Gaussian distribution is often used as an adequately accurate model.

Film grain is usually regarded as a nearly isotropic (non-oriented) noise source. Its effect is made worse by the distribution of silver halide grains in the film also being random.

Anisotropic noise

Some noise sources show up with a significant orientation in images. For example, image sensors are sometimes subject to row noise or column noise.

SOURCES OF NOISE

There are different sources of noise in a digital image. Some noise components, such as the dark signal non-uniformity (DSNU) and the photo response non-uniformity (PRNU), display non-uniform spatial characteristics. This type of noise is often referred to as fixed pattern noise (FPN) because the underlying spatial pattern is not time varying. Temporal noise, on the other hand, does not have a fixed spatial pattern. Dark current and photon shot noise, read noise, and reset noise are examples of temporal noise. The overall noise characteristics in an image depend on many factors, including sensor type, pixel dimensions, temperature, exposure time, and ISO speed. Noise is in general space varying and channel dependent. The blue channel is typically the noisiest channel due to the low transmittance of blue filters. In single-chip digital cameras, demosaicking algorithms are used to interpolate missing color components; hence, noise is not necessarily uncorrelated across pixels. An often neglected characteristic of image noise is its spatial frequency. Noise may have low-frequency (coarse-grain) and high-frequency (fine-grain) fluctuations. High-frequency noise is relatively easy to remove; on the other hand, it is difficult to distinguish between real signal and low-frequency noise.

Many denoising methods have been developed over the years; among these methods, wavelet thresholding is one of the most popular approaches. In wavelet thresholding, a signal is decomposed into its approximation (low-frequency) and detail (high-frequency) sub-bands; since most of the image information is concentrated in a few large coefficients, the detail sub-bands are processed with hard or soft thresholding operations. The critical task in wavelet thresholding is the threshold selection. Various threshold selection strategies have been proposed, for example, VisuShrink, SureShrink, and BayesShrink. In the VisuShrink approach, a universal threshold that is a function of the noise variance and the number of samples is developed based on the minimax error measure. The threshold value in the SureShrink approach is optimal in terms of Stein's unbiased risk estimator. The BayesShrink approach determines the threshold value in a Bayesian framework, by modeling the distribution of the wavelet coefficients as Gaussian. These shrinkage methods have later been improved by considering inter-scale and intra-scale correlations of the wavelet coefficients. One such method, known as the BLS-GSM method, is one of the benchmarks in the denoising literature due to its outstanding PSNR performance. Some recent methods have surpassed its PSNR performance. Among these methods, one constructs a global field of Gaussian scale mixtures to model sub-bands of wavelet coefficients as a product of two independent homogeneous Gaussian Markov random fields and develops an iterative denoising algorithm. Other papers develop gray-scale and color image denoising algorithms based on sparse and redundant representations over learned dictionaries, where training and denoising use the K-SVD algorithm, or group 2-D image fragments into 3-D data arrays and apply a collaborative filtering procedure consisting of a 3-D transformation, shrinkage of the transform spectrum, and an inverse 3-D transformation. Another paper models an ideal image patch as a linear combination of noisy image patches and formulates a total least squares estimation algorithm. A recently popular denoising method is the bilateral filter. Although the term "bilateral filter" was coined relatively recently, variants of it had been published earlier as the sigma filter, the neighborhood filter, and the SUSAN filter. The bilateral filter takes a weighted sum of the pixels in a local neighborhood; the weights depend on both the spatial distance and the intensity distance. In this way, edges are preserved well while noise is averaged out.
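
The two basic shrinkage rules mentioned above are easy to state directly; the sketch below applies hard and soft thresholding to a small set of sample coefficients with an illustrative threshold.

% Hard vs. soft thresholding of detail coefficients x with threshold T.
x = -2:0.5:2;                        % sample coefficients (illustrative)
T = 1;                               % threshold (illustrative)
hard = x.*(abs(x)>T);                % hard: keep coefficients above T, kill the rest
soft = sign(x).*max(abs(x)-T,0);     % soft: additionally shrink survivors toward zero
disp([x; hard; soft])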

Although the bilateral filter was first proposed as an intuitive tool, recent papers have pointed out its connections with some well-established techniques. It has been shown that the bilateral filter is identical to the first iteration of the Jacobi algorithm (diagonally normalized steepest descent) with a specific cost function. Other papers relate the bilateral filter to anisotropic diffusion. The bilateral filter can also be viewed as a Euclidean approximation of the Beltrami flow, which produces a spectrum of image enhancement algorithms ranging from linear diffusion to nonlinear flows. Buades et al. propose a nonlocal means filter, where the similarity of local patches is used in determining the pixel weights; when the patch size is reduced to one pixel, the nonlocal means filter becomes equivalent to the bilateral filter. Later work extends this by controlling the neighborhood of each pixel adaptively. In addition to image denoising, the bilateral filter has also been used in other applications, including tone mapping, image enhancement, volumetric denoising, exposure correction, shape and detail enhancement from multiple images, and retinex. One fast implementation of the bilateral filter is based on a piecewise-linear approximation in the intensity domain and appropriate sub-sampling in the spatial domain; a later acceleration scheme improves on this by expressing the filter in a higher-dimensional space, where the signal intensity is added as a third dimension to the spatial domain. Although the bilateral filter is being used more and more widely, there is not much theoretical basis for selecting the optimal spatial and range parameters (the sigma_d and sigma_r values used in the code below); these are typically selected by trial and error.

WAVELET TRANSFORM

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. Wavelet analysis decomposes sounds and images into component waves of varying durations, called wavelets. These wavelets are localized vibrations of a sound signal, or localized variations of detail in an image. They can be used for a wide variety of fundamental signal processing tasks, such as compression, removing noise, or enhancing a recorded sound or image in various ways. A wavelet can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "reverse, shift, multiply and integrate" technique called convolution, with portions of an unknown signal to extract information from it. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly a 32nd note. If this wavelet were convolved with a signal created from the recording of a song, the resulting signal would be useful for determining when the middle C note was being played in the song. Mathematically, the wavelet will resonate if the unknown signal contains information of similar frequency, just as a tuning fork physically resonates with sound waves of its specific tuning frequency. This concept of resonance is at the core of many practical applications of wavelet theory.
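
This convolution-and-resonance idea is formalized by the continuous wavelet transform; in standard notation (included here for concreteness, not specific to this paper),

$$ W(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt, $$

where $\psi$ is the mother wavelet, $a > 0$ the scale, $b$ the shift, and $*$ denotes complex conjugation; a large $\lvert W(a,b)\rvert$ means the signal "resonates" with the wavelet at that scale and position.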

As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including, but certainly not limited to, audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will decompose data without gaps or overlap, so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss.

In formal terms, this representation is a wavelet series: the representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions or an overcomplete set (a frame) of the Hilbert space of square-integrable functions.
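
In the orthonormal case the series takes the familiar dyadic form (standard notation, added for concreteness):

$$ f(t) = \sum_{j\in\mathbb{Z}} \sum_{k\in\mathbb{Z}} c_{j,k}\, \psi_{j,k}(t), \qquad \psi_{j,k}(t) = 2^{j/2}\, \psi\bigl(2^{j} t - k\bigr), \qquad c_{j,k} = \langle f, \psi_{j,k} \rangle. $$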

The heart of wavelet analysis is multiresolution analysis: the decomposition of a signal (such as an image) into sub-signals (sub-images) at different resolution levels.

Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filter banks; the filter coefficients are called the wavelet and scaling coefficients in wavelet nomenclature. These filter banks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and the corresponding sampling theory: given a signal with some event in it, one cannot simultaneously assign an exact time and an exact frequency (scale) to that event. The product of the uncertainties of time and frequency (scale) has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Discrete wavelet bases may likewise be considered in the context of other forms of the uncertainty principle.
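
To make the filter-bank view concrete, the following is a minimal MATLAB sketch (assuming the Wavelet Toolbox and a grayscale test image; the file name 'lena.png' is a placeholder taken from the image list used later) of a one-level 2-D Haar decomposition and its perfect reconstruction:

x = double(imread('lena.png'));        % grayscale test image (assumed)
[LL, LH, HL, HH] = dwt2(x, 'haar');    % analysis filter bank: four sub-bands
xrec = idwt2(LL, LH, HL, HH, 'haar');  % synthesis filter bank
max(abs(x(:) - xrec(:)))               % reconstruction error, numerically ~0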

BILATERAL FILTER

Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term “filtering”, the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location. For example, Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which the weights decrease with distance from the neighborhood center. Although formal and quantitative explanations of this weight fall-off can be given, the intuition is that images typically vary slowly over space, so near pixels are likely to have similar values, and it is therefore appropriate to average them together. The noise values that corrupt these nearby pixels are mutually less correlated than the signal values, so noise is averaged away while signal is preserved.
The assumption of slow spatial variations fails at edges, which are consequently blurred by linear low-pass filtering. How can we prevent averaging across edges, while still averaging within smooth regions? Many efforts have been devoted to reducing this undesired effect. Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing.

A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter for images. The intensity value at each pixel is replaced by a weighted average of intensity values from nearby pixels, with the weights typically based on a Gaussian distribution. Crucially, the weights depend not only on the Euclidean distance between pixels but also on the radiometric differences (e.g., range differences such as color intensity or depth distance). This preserves sharp edges: the filter systematically loops through each pixel and adjusts the weights of the adjacent pixels accordingly. The bilateral filter is ubiquitous in computational photography and is increasingly common in computer graphics research, with applications in image editing, tone mapping, video processing, and more.
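
In symbols, the standard definition (consistent with the sigma_d/sigma_r parameters in bfilter2.m and the sigma1/sigma2 parameters in final.m below) is

$$ BF[I]_p = \frac{1}{W_p} \sum_{q\in S} G_{\sigma_d}\bigl(\lVert p-q \rVert\bigr)\, G_{\sigma_r}\bigl(\lvert I_p - I_q\rvert\bigr)\, I_q, \qquad W_p = \sum_{q\in S} G_{\sigma_d}\bigl(\lVert p-q \rVert\bigr)\, G_{\sigma_r}\bigl(\lvert I_p - I_q\rvert\bigr), $$

where $G_\sigma$ is a Gaussian, $\sigma_d$ controls the spatial extent of the neighborhood $S$, and $\sigma_r$ controls the sensitivity to intensity differences.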

Anisotropic diffusion is another popular answer to the question of how to prevent averaging across edges while still averaging within smooth regions: local image variation is measured at every point, and pixel values are averaged over neighborhoods whose size and shape depend on the local variation. Diffusion methods average over extended regions by solving partial differential equations and are therefore inherently iterative. Iteration may raise issues of stability and, depending on the computational architecture, of efficiency. The bilateral filter, by contrast, achieves edge-preserving smoothing in a single, non-iterative pass.

IMPLEMENTATION METHODOLOGY

Now that the basic background has been reviewed, this section describes the computational approach. We follow the algorithm below for the full computation of the results, which are shown in the next section.

  1. Input an image
  2. Add noise to the image (as per user specifications) to obtain a noisy image
  3. Perform a wavelet transform of the noisy image to obtain the sub-bands LL, LH, HL, and HH (any wavelet may be used)
  4. Apply the bilateral filter on the LL sub-band
  5. Apply normal thresholding to the coefficients obtained from step 4; this yields a new approximation sub-band (newLL)
  6. Reconstruct the image using newLL, LH, HL, and HH
  7. Perform bilateral filtering again on the reconstructed image to obtain the final denoised image
  8. Compare the results on parameters such as MSE and PSNR (defined below)
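
For an M x N 8-bit reference image I and denoised image I-hat, the quality measures used here are (matching the manual_psnr.m function below):

$$ \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2}, \qquad \mathrm{PSNR} = 10\log_{10}\!\frac{255^{2}}{\mathrm{MSE}}\ \mathrm{dB}. $$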

MATLAB SOURCE CODE

Instructions to run the code

1. Copy each of the codes below into a separate M file, using the file names given.
2. Place all the files in the same folder.
3. Note that the codes are not listed in any particular order; copy them all before running.
4. Run the "final.m" file.

Code 1 – Script M File – final.m

clc
clear all
close all

final_psnr=[]; % final value of PSNR will be computed in this variable
% SIGMA1=[];
% SIGMA2=[];
% 
sigma1=10; % these are the parameters to be used in the bilateral filtering later in the code
sigma2=20;
% for ii=1:10    

% read the image
% in_image=read_image;
for i=1:6 % loop to select the images
    in_image=imread(sprintf('%d.png',i)); % test images are named 1.png through 6.png

    IN_IMAGE=in_image; % input image to be stored in this variable
    figure, imshow(in_image) % display the input image
    title('Input image')
    pause(2)
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %adding noise to image
    for j=1:4 % loop to add noise in the image 
        in_image=add_noise(IN_IMAGE,j); % user defined function to add distinct noise to the image
        % in_image=imnoise(in_image,'salt & pepper',0.02);
        
        figure, imshow(uint8(in_image)) % display the noisy image
        title('Input image with noise')
        in_image=double(in_image); % changing the datatype of the image
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%         MEDIAN FILTERING ON NOISY IMAGE
        med_filt=medfilt2(in_image);
        figure, imshow(uint8(med_filt))
        title('Median filtering of this image')
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % level 1 wavelet transform    
        level1=1; % level of wavelet transform
        [C1,S1] = wavedec2(in_image,level1,'haar'); % wavelet transform
        LL=appcoef2(C1,S1,'haar',level1); % approximation coeff
        [LH,HL,HH]=detcoef2('all',C1,S1,level1); % detailing coeff
        figure, imshow(uint8([LL LH; HL HH])) % wavelet transformed image
        title('Level 1 wavelet transform')
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % level 2 wavelet transform
        % level2=2;
        % [C2,S2] = wavedec2(in_image,level2,'haar'); % wavelet transform
        % LLL=appcoef2(C2,S2,'haar',level2); % approximation coeff
        % [LLH,LHL,LHH]=detcoef2('all',C2,S2,level2); % detailing coeff
        % figure, imshow(uint8([[LLL LLH; LHL LHH] LH;HL HH]))
        % title('Level 2 wavelet transform')
        % pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on LLL
        % st=(S2(1,1)^2)+1;
        % LLL=C2(1:st-1);
        % % sigma2=sigmar(C2,S2);
        % C2(1:st-1)=demo(LLL,sigma1,sigma2);
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % first thresholding operation and first reconstruction
        % normalC2=thresholding_normal(C2,S2,level2);% thresholding
        % normalpic2=waverec2(normalC2,S2,'haar'); % reconstruction 
        % [bayesC2]=thresholding_bayes(C2,S2); % thresholding
        % bayespic2=waverec2(bayesC2,S2,'haar'); % reconstruction
        % 
        % thresholdimage1=normalpic2;
        % % thresholdimage1=bayespic2;
        % figure, imshow(uint8(thresholdimage1))
        % title('1st Thresholding and 1st Bilateral Filtering')
        % image1=IN_IMAGE;
        % image2=thresholdimage1;
        % psnr=manual_psnr(image1,image2);
        % final_psnr=[final_psnr psnr];
        % pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on LL
        st=(S1(1,1)^2)+1; 
        LL=C1(1:st-1); % approximation band
        % sigma2=sigmar(C1,S1);
        C1(1:st-1)=demo(LL,sigma1,sigma2); 
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % second thresholding operation and second reconstruction
        normalC1=thresholding_normal(C1,S1,level1);% thresholding
        normalpic1=waverec2(normalC1,S1,'haar'); % reconstruction
        % [bayesC1]=thresholding_bayes(C1,S1); % thresholding
        % bayespic1=waverec2(bayesC1,S1,'haar'); % reconstruction

        thresholdimage2=normalpic1;
        % thresholdimage2=bayespic1;
        figure, imshow(uint8(thresholdimage2))
        title('2nd Thresholding and 1st Bilateral Filtering')
        image1=IN_IMAGE;
        image2=thresholdimage2;
        psnr=manual_psnr(image1,image2);
        final_psnr=[final_psnr psnr]; % collecting all the psnr values
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % applying bilateral filter on final reconstructed image
        out_image=demo(thresholdimage2,sigma1,sigma2);
        figure, imshow(uint8(out_image))
        title('3rd Bilateral Filtering')
        image1=IN_IMAGE;
        image2=out_image;
        psnr=manual_psnr(image1,image2);
        final_psnr=[final_psnr psnr];
        pause(2)
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        % image1=double(IN_IMAGE);
        % image2=double(out_image);
        % 
        % [PSNR_formula,MSE,MAXERR,L2RAT] = measerr(image1,image2);
        % PSNR_manual=manual_psnr(image1,image2);
        final_psnr
        % final_psnr=[final_psnr PSNR];
        % SIGMA1=[SIGMA1 sigma1];
        % SIGMA2=[SIGMA2 sigma2];
        % 
        % sigma1=sigma1+10;
        % end
        % 
        % [final_psnr;SIGMA1;SIGMA2]
%         pause        
    end
    pause(2)
    close all
    pause(2)
end

Code 2 – Function M File – add_noise.m

function g=add_noise(f,ud)

% display('enter the type of noise:');
% display('    1    for salt & pepper');
% display('    2    for gaussian');    
% display('    3    for poisson');
% display('    4    for speckle');
% ud=input('enter the value:');
% while isempty(ud)
%     ud=input('enter the value:');
% end
switch ud
    case 1
%         display('enter the % of noise(Ex:0.2)');
%         ud1=input('pls enter:     ');
%         g=imnoise(f,'salt & pepper',ud1);
        g=imnoise(f,'salt & pepper');
    case 2   
%         display('enter the noise variance:  ');
%         va=input('enter between 0.01 to 0.09:   ');
%         g=imnoise(f,'gaussian',0,va);
        g=imnoise(f,'gaussian');
    case 3
        g=imnoise(f,'poisson');
    case 4
%         display('enter the variance of noise(Ex:0.02)');
%         ud1=input('pls enter:     ');
%         g=imnoise(f,'speckle',ud1);   
        g=imnoise(f,'speckle');   
end
end

Code 3 – Function M File – BAYES.m

function [LH,HL,HH]=BAYES(C,S,LH,HL,HH)

var=length(C)-S(size(S,1)-1,1)^2 +1;
sigmahat=median(abs(C(var:length(C))))/0.6745;

thr=bayes2(LH(:)',sigmahat);
LH=sthresh(LH,thr);

thr=bayes2(HL(:)',sigmahat);
HL=sthresh(HL,thr);

thr=bayes2(HH(:)',sigmahat);
HH=sthresh(HH,thr);

end

Code 4 – Function M File – bayes2.m

% Function to calculate Threshold for BayesShrink

function threshold=bayes2(X,sigmahat)

len=length(X);
sigmay2=sum(X.^2)/len;
sigmax=sqrt(max(sigmay2-sigmahat^2,0));
if sigmax==0
    threshold=max(abs(X));
else
    threshold=sigmahat^2./sigmax;
end

end
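
In formulas, bayes2.m implements the standard BayesShrink rule (transcribing the code; here $\hat{\sigma}$ is the noise estimate computed in BAYES.m from the finest diagonal sub-band):

$$ \hat{\sigma} = \frac{\operatorname{median}\lvert c_{HH_1}\rvert}{0.6745}, \qquad \hat{\sigma}_x = \sqrt{\max\!\bigl(\hat{\sigma}_y^{2} - \hat{\sigma}^{2},\, 0\bigr)}, \qquad T_B = \frac{\hat{\sigma}^{2}}{\hat{\sigma}_x}, $$

where $\hat{\sigma}_y^{2}$ is the mean square of the sub-band coefficients; when $\hat{\sigma}_x = 0$ the code sets the threshold to $\max\lvert X\rvert$, which zeroes the entire sub-band.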

Code 5 – Function M File – bfilter2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Pre-process input and select appropriate filter.
function B = bfilter2(A,w,sigma)

% Verify that the input image exists and is valid.
if ~exist('A','var') || isempty(A)
   error('Input image A is undefined or invalid.');
end
if ~isfloat(A) || ~sum([1,3] == size(A,3)) || ...
      min(A(:)) < 0 || max(A(:)) > 1
   error(['Input image A must be a double precision ',...
          'matrix of size NxMx1 or NxMx3 on the closed ',...
          'interval [0,1].']);      
end

% Verify bilateral filter window size.
if ~exist('w','var') || isempty(w) || ...
      numel(w) ~= 1 || w < 1
   w = 5;
end
w = ceil(w);

% Verify bilateral filter standard deviations.
if ~exist('sigma','var') || isempty(sigma) || ...
      numel(sigma) ~= 2 || sigma(1) <= 0 || sigma(2) <= 0
   sigma = [3 0.1];
end

% Apply either grayscale or color bilateral filtering.
if size(A,3) == 1
   B = bfltGray(A,w,sigma(1),sigma(2));
else
   B = bfltColor(A,w,sigma(1),sigma(2));
end


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implements bilateral filtering for grayscale images.
function B = bfltGray(A,w,sigma_d,sigma_r)

% Pre-compute Gaussian distance weights.
[X,Y] = meshgrid(-w:w,-w:w);
G = exp(-(X.^2+Y.^2)/(2*sigma_d^2));

% Create waitbar.
h = waitbar(0,'Applying bilateral filter...');
set(h,'Name','Bilateral Filter Progress');

% Apply bilateral filter.
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
   for j = 1:dim(2)
      
         % Extract local region.
         iMin = max(i-w,1);
         iMax = min(i+w,dim(1));
         jMin = max(j-w,1);
         jMax = min(j+w,dim(2));
         I = A(iMin:iMax,jMin:jMax);
      
         % Compute Gaussian intensity weights.
         H = exp(-(I-A(i,j)).^2/(2*sigma_r^2));
      
         % Calculate bilateral filter response.
         F = H.*G((iMin:iMax)-i+w+1,(jMin:jMax)-j+w+1);
         B(i,j) = sum(F(:).*I(:))/sum(F(:));
               
   end
   waitbar(i/dim(1));
end

% Close waitbar.
close(h);


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implements bilateral filter for color images.
function B = bfltColor(A,w,sigma_d,sigma_r)

% Convert input sRGB image to CIELab color space.
if exist('applycform','file')
   A = applycform(A,makecform('srgb2lab'));
else
   A = colorspace('Lab<-RGB',A);
end

% Pre-compute Gaussian domain weights.
[X,Y] = meshgrid(-w:w,-w:w);
G = exp(-(X.^2+Y.^2)/(2*sigma_d^2));

% Rescale range variance (using maximum luminance).
sigma_r = 100*sigma_r;

% Create waitbar.
h = waitbar(0,'Applying bilateral filter...');
set(h,'Name','Bilateral Filter Progress');

% Apply bilateral filter.
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
   for j = 1:dim(2)
      
         % Extract local region.
         iMin = max(i-w,1);
         iMax = min(i+w,dim(1));
         jMin = max(j-w,1);
         jMax = min(j+w,dim(2));
         I = A(iMin:iMax,jMin:jMax,:);
      
         % Compute Gaussian range weights.
         dL = I(:,:,1)-A(i,j,1);
         da = I(:,:,2)-A(i,j,2);
         db = I(:,:,3)-A(i,j,3);
         H = exp(-(dL.^2+da.^2+db.^2)/(2*sigma_r^2));
      
         % Calculate bilateral filter response.
         F = H.*G((iMin:iMax)-i+w+1,(jMin:jMax)-j+w+1);
         norm_F = sum(F(:));
         B(i,j,1) = sum(sum(F.*I(:,:,1)))/norm_F;
         B(i,j,2) = sum(sum(F.*I(:,:,2)))/norm_F;
         B(i,j,3) = sum(sum(F.*I(:,:,3)))/norm_F;
                
   end
   waitbar(i/dim(1));
end

% Convert filtered image back to sRGB color space.
if exist('applycform','file')
   B = applycform(B,makecform('lab2srgb'));
else  
   B = colorspace('RGB<-Lab',B);
end

% Close waitbar.
close(h);
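
A hypothetical standalone usage of bfilter2.m (it requires a double-precision image on [0,1]; the window size 5 and sigma values below are simply the function's own defaults):

A = im2double(imread('1.png'));   % scale image to [0,1]
B = bfilter2(A, 5, [3 0.1]);      % w = 5, sigma_d = 3, sigma_r = 0.1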

Code 6 – Function M File – demo.m

function outImg=demo(inImg,sigma1,sigma2)
%  inImg      :  grayscale image
%  sigma1     : width of spatial Gaussian (sigma_d)
%  sigma2     : width of range Gaussian (sigma_r)
%  [-w, w]^2  : domain of spatial Gaussian
%  tol        : truncation error
%Img    =  double( imread('./images/ckb.jpg') );
% Img    =  double( imread('./images/barbara.jpg') );
%[m, n] = size(Img);

% create noisy image
% sigma  =  40;
% inImg  =  Img + sigma * rand(m, n);

% filter parameters
% sigma1 = 20;
% sigma2 = 30;
tol    = 0.01;

% make odd
if (mod(sigma1,2) == 0)
  w  = sigma1 + 1;
else
  w  = sigma1;
end

% call bilateral filter
[outImg, param] =  shiftableBF(inImg, sigma1, sigma2, w, tol);

end
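
A hypothetical call of demo.m with the parameter values set in final.m (note that demo.m also derives the spatial window half-width w from sigma1):

img = double(imread('1.png'));    % grayscale test image (assumed)
den = demo(img, 10, 20);          % sigma1 = 10 (spatial), sigma2 = 20 (range)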

Code 7 – Function M File – haar_reconstruction.m

function recon=haar_reconstruction(f1,f2)

% v=[1/sqrt(2) 1/sqrt(2)];
% w=[1/sqrt(2) -1/sqrt(2)];

recon=[];
for i=1:length(f1)
    recon=[recon (f1(i)+f2(i))/2  (f1(i)-f2(i))/2];
end

end
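
Written out, the synthesis step implemented here is

$$ x_{2i-1} = \frac{a_i + d_i}{2}, \qquad x_{2i} = \frac{a_i - d_i}{2}, $$

where a = f1 holds the approximation samples and d = f2 the details; the 1/2 normalization assumes the matching forward transform computed plain sums and differences, $a_i = x_{2i-1} + x_{2i}$ and $d_i = x_{2i-1} - x_{2i}$.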

Code 8 – Function M File – manual_psnr.m

function psnr=manual_psnr(image1,image2)
image1=double(image1);
image2=double(image2);

[r c]=size(image1);
mse=sum((image1(:)-image2(:)).^2)/(r*c); % formula for MSE
psnr=10*log10(255*255/mse); % formula for PSNR

end

Code 9 – Function M File – maxFilter.m

function  T  =   maxFilter(inImg , w)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  Computes the maximum 'local' dynamic range
%
%  inImg       : grayscale image
%  [-w, w]^2   : search window (w must be odd)
%  T           : maximum local dynamic range 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

T = -1;

sym    = (w - 1)/2;

[m, n] = size(inImg); % m-rows of input image, n-columns of input image

pad1 =  w*ceil(m/w) - m;   % ceil rounds the approximate value towards +ve infinity
pad2 =  w*ceil(n/w) - n;

inImg = padarray(inImg, [pad1 pad2], 'symmetric', 'post'); % refer help documentation of "padarray"

template = inImg;

m = m + pad1;
n = n + pad2;

% scan along row
for ii = 1 : m
    L     = zeros(n, 1);% zeros create a matrix of all 0 elements, dimensions as defined by the input values
    R     = zeros(n, 1);
    L(1)  = template(ii, 1);
    R(n)  = template(ii, n);
    
    for k = 2 : n
        if  mod(k - 1, w) == 0
            L(k)          = template(ii ,  k);
            R(n - k + 1)  = template(ii ,  n - k + 1);
        else
            L(k)          = max( L(k-1) , template(ii, k) );
            R(n - k + 1)  = max( R(n - k + 2), template(ii, n - k + 1) );
        end
    end
    
    for k = 1 : n
     p = k - sym;
     q = k + sym;
     if p < 1
         r = -1;
     else
         r = R(p);
     end
     if q > n
         l = -1;
     else
         l = L(q);
     end
        template(ii, k) = max(r,l);
    end
    
end

% scan along column
for jj = 1 : n
    
    L    = zeros(m, 1);
    R    = zeros(m, 1);
    L(1) = template(1, jj);
    R(m) = template(m, jj);
    
    for k = 2 : m 
        if  mod(k - 1, w) == 0
            L(k)          = template(k, jj);
            R(m - k + 1)  = template(m - k + 1, jj);
        else
            L(k)          = max( L(k - 1), template(k, jj) );
            R(m - k + 1)  = max( R(m - k + 2), template(m - k + 1, jj));
        end
    end
    
    for k = 1 : m
     p = k - sym;
     q = k + sym;
     if p < 1
         r = -1;
     else
         r = R(p);
     end
     if q > m
         l = -1;
     else
         l = L(q);
     end
        temp = max(r,l) - inImg(k, jj);
        if temp > T
            T = temp;
        end
    end
    
end


Code 10 – Function M File – read_image.m

function f=read_image

disp('select the image');
disp('1:lena.png');
disp('2:barbara.png');
disp('3:boat.png');
disp('4:house.png');
disp('5:peppers256.png');
disp('6:cameraman.jpg');
disp('7:hyderabad512.png');
ss1=input('enter your choice:');
while isempty(ss1)
    ss1=input('enter your choice:');
end
    
switch ss1
    case 1        
        f=imread('lena.png');       
    case 2
        f=imread('barbara.png');
    case 3
        f=imread('boat.png');
    case 4
        f=imread('house.png');
    case 5
        f=imread('peppers256.png');
    case 6
        f=imread('cameraman.jpg');
    case 7
        f=imread('hyderabad512.png');
end

end

Code 11 – Function M File -shiftable_jointBF.m

function [ outImg , param ]  =  shiftable_jointBF(inImg, rangeImg, sigmaS, sigmaR, w, tol)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%   Filtering operation
%
%   inImg      : MxNxB image to filter (single/double)
%   rangeImg   : MxN image used to for pixel similarity (single/double)
%   outImg     : filtered image of size MxNxB
%   sigmaS     : width of spatial Gaussian
%   sigmaR     : width of range Gaussian
%   [-w, w]^2  : domain of spatial Gaussian
%   tol        : truncation error
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


% create spatial filter
filt     = fspecial('gaussian', [w w], sigmaS);

% set range interval and the order of raised cosine
T  =  maxFilter(rangeImg, w); %  Computes the maximum 'local' dynamic range
N  =  ceil( 0.405 * (T / sigmaR)^2 );  % ceil rounds the approximate value towards +ve infinity

gamma    =  1 / (sqrt(N) * sigmaR);
twoN     =  2^N;

% compute truncation
if tol == 0
    M = 0;
else
    if sigmaR > 40
        M = 0;
    elseif sigmaR > 10
        sumCoeffs = 0;
        for k = 0 : round(N/2)
            sumCoeffs = sumCoeffs + nchoosek(N,k)/twoN; 
            % nchoosek(N,k) returns the binomial coefficient, defined as n!/((n-k)! k!);
            % this is the number of combinations of n items taken k at a time
            if sumCoeffs > tol/2
                M = k;
                break;
            end
        end
    else
        M = ceil( 0.5 * ( N - sqrt(4*N*log(2/tol)) ) );
    end
end


% main filter
[m, n, b]  =  size(inImg);
outImg1    =  zeros(m, n, b); % zeros create a matrix of all 0 elements, dimensions as defined by the input values
outImg2    =  zeros(m, n, b);
outImg     =  zeros(m, n, b);

warning off; %#ok<WNOFF>

for k = M : N - M
    
    coeff = nchoosek(N,k) / twoN;
    
    temp1  = cos( (2*k - N) * gamma * rangeImg);
    temp2  = sin( (2*k - N) * gamma * rangeImg);
    
    if size(inImg, 3) > 1
      temp1 = repmat(temp1, [1 1 size(inImg, 3)]); 
      % B = repmat(A,[m n p...]) produces a multidimensional array B composed
      % of copies of A. The size of B is [size(A,1)*m, size(A,2)*n, size(A,3)*p, ...].
      temp2 = repmat(temp2, [1 1 size(inImg, 3)]);
    end
    
    phi1 =  imfilter(inImg .* temp1, filt);
%     B = imfilter(A,h) filters the multidimensional array A with the multidimensional filter h. 
% The array A can be logical or a non sparse numeric array of any class and dimension. 
% The result B has the same size and class as A
    phi2 =  imfilter(inImg .* temp2, filt);
    phi3 =  imfilter(temp1, filt);
    phi4 =  imfilter(temp2, filt);
    
    outImg1 = outImg1 + coeff * ( temp1 .* phi1 +  temp2 .* phi2 );
    outImg2 = outImg2 + coeff * ( temp1 .* phi3 +  temp2 .* phi4 );
    
end

idx1 = find( outImg2 < 0.0001);
idx2 = find( outImg2 > 0.0001);

outImg( idx1 ) = inImg( idx1 );
outImg( idx2 ) = outImg1( idx2 ) ./ outImg2 (idx2 );

% save parameters in the structure "param"
param.T  = T;
param.N  = N;
param.M  = M;

Code 12 – Function M File – shiftableBF.m

function [ outImg , param ]  =  shiftableBF(inImg, sigmaS, sigmaR, w, tol)

%   Filtering operation
%
%   inImg      :  grayscale image
%   sigmaS     : width of spatial Gaussian
%   sigmaR     : width of range Gaussian
%   [-w, w]^2  : domain of spatial Gaussian
%   tol        : truncation error

% create spatial filter
filt = fspecial('gaussian', [w w], sigmaS);

% set range interval and the order of raised cosine
T  =  maxFilter(inImg, w);%-------------------
N  =  ceil( 0.405 * (T / sigmaR)^2 );

gamma    =  1 / (sqrt(N) * sigmaR);
twoN     =  2^N;

% compute truncation
if tol == 0
    M = 0;
else
    if sigmaR > 40
        M = 0;
    elseif sigmaR > 10
        sumCoeffs = 0;
        for k = 0 : round(N/2)
            sumCoeffs = sumCoeffs + nchoosek(N,k)/twoN;
            if sumCoeffs > tol/2
                M = k;
                break;
            end
        end
    else
        M = ceil( 0.5 * ( N - sqrt(4*N*log(2/tol)) ) );
    end
end


% main filter
[m, n]   =  size(inImg);
outImg1  =  zeros(m, n);
outImg2  =  zeros(m, n);
outImg   =  zeros(m, n);

h = waitbar(0, 'Computing filter ...'); 

warning('off'); %#ok<WNOFF>

for k = M : N - M
    
    waitbar((k - M) / (N - 2*M + 1), h); % fraction of the k = M..N-M terms done
    
    coeff = nchoosek(N,k) / twoN;
    
    temp1  = cos( (2*k - N) * gamma * inImg);
    temp2  = sin( (2*k - N) * gamma * inImg);
    
    phi1 =  imfilter(inImg .* temp1, filt);
    phi2 =  imfilter(inImg .* temp2, filt);
    phi3 =  imfilter(temp1, filt);
    phi4 =  imfilter(temp2, filt);
    
    outImg1 = outImg1 + coeff * ( temp1 .* phi1 +  temp2 .* phi2 );
    outImg2 = outImg2 + coeff * ( temp1 .* phi3 +  temp2 .* phi4 );
    
end

close(h);

% avoid division by zero
idx1 = find( outImg2 < 0.0001);
idx2 = find( outImg2 > 0.0001);

outImg( idx1 ) = inImg( idx1 );
outImg( idx2 ) = outImg1( idx2 ) ./ outImg2 (idx2 );

% save parameters
param.T  = T;
param.N  = N;
param.M  = M;
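
The shiftable filter above rests on the raised-cosine approximation of the Gaussian range kernel (this identity is what gamma, coeff, and the temp1/temp2 terms in the loop implement):

$$ \exp\!\left(-\frac{x^{2}}{2\sigma_r^{2}}\right) \approx \cos^{N}\!\left(\frac{x}{\sqrt{N}\,\sigma_r}\right) = \frac{1}{2^{N}} \sum_{k=0}^{N} \binom{N}{k} \cos\!\left(\frac{(2k-N)\,x}{\sqrt{N}\,\sigma_r}\right), $$

which converges to the Gaussian as N grows. Each cosine term is "shiftable": by the angle-addition formula it separates into products of functions of $I_p$ and $I_q$, so the k-th term of the sum reduces to the four Gaussian convolutions phi1 through phi4 computed with imfilter.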

Code 13 – Function M File – wavelet_transform_rec.m

function img=wavelet_transform_rec(LL,LH,HL,HH)

[r c]=size(LL); 
L=[];
H=[];
for i=1:c % reconstruction level 1
    f1=LL(:,i)'; % LL band
    f2=HL(:,i)'; % HL band
    recon1=haar_reconstruction(f1,f2); % reconstruction operation
    L=[L recon1'];
    
    f3=LH(:,i)'; % LH band
    f4=HH(:,i)'; % HH band
    recon2=haar_reconstruction(f3,f4); % reconstruction operation
    H=[H recon2'];
end

[r c]=size(L);
img=[];
for i=1:r% reconstruction level 2
    f1=L(i,:); % L band
    f2=H(i,:); % H band
    recon=haar_reconstruction(f1,f2); % reconstruction operation
    img=[img; recon];
end

end

Code 14 – Function M File – thresholding_normal.m

function [normalC,sigmahat]=thresholding_normal(C,S,level)

st=(S(1,1)^2)+1; % initialising the indexing of coeff of LL, which will not be thresholded
normalC=[C(1:st-1),zeros(1,length(st:1:length(C)))]; % this variable will 
% store the transform coefficients, it will keep on storing new coefficients
% values until all the values are thresholded. Thresholding is done in the
% for loop below

%Calculating sigmahat
var=length(C)-S(size(S,1)-1,1)^2 +1;
sigmahat=median(abs(C(var:length(C))))/0.6745;
for jj=2:size(S,1)-1
    %for the H detail coefficients
    coefh=C(st:st+S(jj,1)^2-1);
    len=length(coefh);
    sigmay2=sum(coefh.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;    
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefh,thr);
    st=st+S(jj,1)^2;    
    %for the V detail coefficients
    coefv=C(st:st+S(jj,1)^2-1);
    len=length(coefv);
    sigmay2=sum(coefv.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefv,thr);
    st=st+S(jj,1)^2;    
    %for diag detail coefficients
    coefd=C(st:st+S(jj,1)^2-1);
    len=length(coefd);
    sigmay2=sum(coefd.^2)/len;
    lambda=sqrt(log(len/level));
    thr=(lambda*sigmahat)/sigmay2;
    normalC(st:st+S(jj,1)^2-1)=sthresh(coefd,thr);
    st=st+S(jj,1)^2;
end
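
Transcribing the loop above into formulas: for each detail sub-band of length L at decomposition level J, with mean square $\sigma_y^{2} = \frac{1}{L}\sum_i c_i^{2}$ and the median-based noise estimate $\hat{\sigma}$, the threshold applied is

$$ \lambda = \sqrt{\log(L/J)}, \qquad T = \frac{\lambda\,\hat{\sigma}}{\sigma_y^{2}}. $$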

Code 15 – Function M File – thresholding_bayes.m

function [bayesC]=thresholding_bayes(C,S)

st=(S(1,1)^2)+1;
bayesC=[C(1:st-1),zeros(1,length(st:1:length(C)))];
%S(size(S,1)-1,1)^2
var=length(C)-S(size(S,1)-1,1)^2 +1;

%Calculating sigmahat
sigmahat=median(abs(C(var:length(C))))/0.6745;

for jj=2:size(S,1)-1
    %for the H detail coefficients
    coefh=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefh,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefh,thr);
    st=st+S(jj,1)^2;

    % for the V detail coefficients
    coefv=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefv,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefv,thr);
    st=st+S(jj,1)^2;

    %for Diag detail coefficients 
    coefd=C(st:st+S(jj,1)^2-1);
    thr=bayes2(coefd,sigmahat);
    bayesC(st:st+S(jj,1)^2-1)=sthresh(coefd,thr);
    st=st+S(jj,1)^2;
end

end

Code 16 – Function M File – sthresh.m

function op=sthresh(X,T)

%A function to perform soft thresholding on a 
%given an input vector X with a given threshold T
% S=sthresh(X,T);
    ind=find(abs(X)<=T);
    ind1=find(abs(X)>T);
    X(ind)=0; % indexes below T are set to 0
    X(ind1)=sign(X(ind1)).*(abs(X(ind1))-T); 
    op=X;
end
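
sthresh.m is the standard soft-thresholding (shrinkage) operator,

$$ \eta_T(x) = \operatorname{sign}(x)\,\max\bigl(\lvert x\rvert - T,\, 0\bigr), $$

which zeroes coefficients with magnitude at most T and shrinks the remaining coefficients toward zero by T, exactly what the index sets ind and ind1 implement above.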

Code 17 – Function M File – sigmar.m

function sig=sigmar(C,S)

var=length(C)-S(size(S,1)-1,1)^2 +1;
sig=median(abs(C(var:length(C))))/0.6745;
sig=sig*2;

end
