Shadow Detection and Correction in Images

INTRODUCTION

Image processing drives advances in many real-life fields, such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing, and many more. Image processing has therefore been a research area that attracts a wide variety of researchers. It deals with the processing of images and video in aspects such as image zooming, image segmentation, and image enhancement. The detection and removal of shadows plays an important role in both images and videos, particularly in remote sensing and in surveillance systems, so reliable shadow detection is essential for effective removal. Shadowing is especially significant in very-high-resolution satellite imaging, and its effect is compounded in regions with dramatic changes in surface elevation, mostly urban areas.

Shadows are created when objects obstruct light in a scene; an object may even cast a shadow on itself, called a self-shadow. Shadow areas are less illuminated than their surroundings. In some cases shadows provide useful information, such as the relative position of an object with respect to the light source, but they cause problems in computer vision applications such as segmentation, object detection, and object counting. Shadow detection and removal is therefore a pre-processing task in many computer vision applications. Based on intensity, shadows are of two types: hard and soft. Soft shadows retain the texture of the background surface, whereas hard shadows are too dark and have little texture; detecting hard shadows is thus complicated, as they may be mistaken for dark objects rather than shadows. Most shadow detection methods need multiple images for camera calibration, but the ideal technique should be able to extract shadows from a single image, even though distinguishing dark objects from shadows in a single image is difficult.

Shadow detection and removal is an important task when dealing with outdoor images. A shadow occurs when an object occludes light from a light source. Shadows provide rich information about object shapes and light orientations, but can also prevent us from recognizing the original appearance of a particular object. Shadows in an image reduce the reliability of many computer vision algorithms and often degrade visual quality, so shadow removal is an important pre-processing step for computer vision algorithms and for image enhancement.

Image processing drives advances in many real-life fields, such as optical imaging (cameras, microscopes), medical imaging (CT, MRI, ultrasound, diffuse optical imaging, advanced microscopes), astronomical imaging (telescopes), video and image compression and transmission (JPEG, MPEG, HDTV, etc.), computer vision (robots, license plate readers, tracking, human motion), commercial software (Photoshop), and many more. Nowadays, surveillance systems are in huge demand, mainly for their applications in public areas such as airports, stations, subways, building entrances, and mass events. In this context, reliable detection of moving objects is the most critical requirement of any surveillance system. In the moving-object detection process, one of the main challenges is to differentiate moving objects from their shadows. Moving cast shadows are usually misclassified as part of the moving object, causing subsequent analysis stages, such as object classification or tracking, to perform inaccurately. In traffic surveillance, the system must be able to track the flow of traffic; shadows may lead to misclassification of vehicles, making the exact traffic flow difficult to determine. This is a major drawback for a surveillance system.

OBJECTIVES

Detecting objects in shadows is a challenging task in computer vision. For example, in clear path detection applications, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear path detection algorithms less robust. Shadows confound many object detection algorithms: they cause ambiguities between edges due to illumination changes and edges due to material changes, and such ambiguities make automotive vision applications less robust. Hence, one possible solution to reduce the effects of shadows is to identify them and derive images in which the shadows are reduced. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows.

By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector learns their distinguishing characteristics and can discriminate strong shadow edges from other edges in images. In addition, spatial smoothing is used to further improve shadow edge detection. We thus present an approach to reduce shadow effects by detecting shadow edges.

Shadow removal relies on the classification of image edges as shadow edges or non-shadow edges. A non-shadow edge (e.g., an object edge) represents a transition between two different surfaces. In contrast, shadow edges are due to intensity differences on the same surface caused by different illumination strengths. Therefore, eliminating shadow edges removes the changes caused by illumination, thus reducing the shadow effects. In images captured by a vehicle's front camera, the majority of shadows are cast shadows with strong shadow edges; they usually exhibit large intensity changes, which impair clear path detection. We call these edges "strong shadow edges". Our goal is to remove these shadows by detecting strong shadow edges. The proposed method is also able to partially process soft shadows; however, soft shadows are not the main target of this work, since they, along with blurred shadow edges, have less impact on clear path detection.

PROBLEM FORMULATION

  1. Generate all edge candidates of the input image.
  2. In the feature extraction and edge classification stages, extract features from the edges obtained in step 1 and distinguish shadow edges from non-shadow edges.
  3. In the spatial smoothing stage, smooth the classification results of step 2.
  4. Obtain an image showing only the shadow edges from step 3, with all non-shadow edges removed.
  5. Apply a Gaussian filter to further filter the shadow edges.
  6. Use the image obtained in step 5 to remove the shadow (a skeleton of this pipeline is sketched below).
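A minimal MATLAB sketch of this pipeline is given below. All of the stage functions (generateEdgeCandidates, extractPatchFeatures, classifyShadowEdges, spatialSmoothing, buildShadowEdgeMap, removeShadow) are hypothetical placeholders named after the steps above, not functions from the accompanying source code.

% Pipeline skeleton (hypothetical stage functions named after the steps above)
I   = imread('road.jpg');                  % input image
EPC = generateEdgeCandidates(I);           % step 1: edge patch candidates
F   = extractPatchFeatures(I, EPC);        % step 2a: per-patch features
lbl = classifyShadowEdges(F);              % step 2b: shadow vs non-shadow
lbl = spatialSmoothing(lbl);               % step 3: smooth patch labels
SE  = buildShadowEdgeMap(EPC, lbl);        % step 4: shadow-edge-only map
SE  = imfilter(SE, fspecial('gaussian'));  % step 5: Gaussian filtering
SR  = removeShadow(I, SE);                 % step 6: shadow-free result
imshow(SR)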

METHODOLOGY / PLANNING OF WORK

GENERATION OF EDGE PATCH CANDIDATES

  • Gradients caused by surface changes (object edges) and illumination changes (shadow edges) have large magnitudes, while road-surface variations lead to gradients with small magnitudes.
  • Image gradients whose magnitudes are smaller than the threshold and the gradients of the whole image are calculated separately for the regression model.
  • The threshold value extracts strong shadow edges.
  • Shadow edges are extracted using patches instead of pixels.
  • Any patch containing more than x edge pixels is classified as an edge patch candidate (a sketch of this stage follows the list).
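A minimal sketch of this stage is given below, assuming a Sobel gradient, a 15×15 patch size, a gradient-magnitude threshold T, and a minimum edge-pixel count x; none of these values are specified in the source, so they are illustrative only.

% Edge patch candidate generation: a sketch with assumed parameters
I = im2double(rgb2gray(imread('road.jpg')));
[Gx, Gy] = imgradientxy(I, 'sobel');       % image gradients
Gmag = hypot(Gx, Gy);                      % gradient magnitude
T = 0.3;                                   % assumed magnitude threshold
edgeMap = Gmag > T;                        % keep only strong edges
P = 15;                                    % assumed patch size
x = 20;                                    % assumed minimum edge pixels
% count edge pixels in each P-by-P block of the edge map
counts = blockproc(double(edgeMap), [P P], @(b) sum(b.data(:)));
candidateMask = counts > x;                % edge patch candidates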

FEATURE EXTRACTION

Because the color ratio between shadow and non-shadow regions and plain texture information did not work well in previous studies, three types of features are used here: illuminant-invariant features, an illumination direction feature, and neighboring similarity features.

Illuminant-invariant features:-

The reflectance of the road surface is an intrinsic property that can be utilized to distinguish a shadow edge patch from a non-shadow edge patch. The RGB space is converted into an illuminant-invariant color space, from which two features are extracted:

First, the variance of colors: pixel values from the same surface in a shadow edge patch have a smaller variance, while pixel values from different surfaces in object patches exhibit a larger variance.

Second, the entropy of gradients: in the absence of illumination effects, the texture of the single surface in a shadow edge patch can be described by gradients with smaller entropy, whereas the texture of multiple surfaces in a non-shadow edge patch leads to larger entropy of gradients.
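As a rough illustration, the two features could be computed per patch as below. The full illuminant-invariant conversion is abbreviated here to a 1-D log-chromaticity projection with an assumed calibrated angle theta (a camera-specific quantity), and the patch location is arbitrary.

% Per-patch illuminant-invariant features: a sketch with assumed values
I = im2double(imread('road.jpg'));
patch = I(101:115, 201:215, :);              % an example 15x15 patch location
R = patch(:,:,1); G = patch(:,:,2); B = patch(:,:,3);
eps0 = 1e-6;                                 % avoid log(0)
chi1 = log((R+eps0)./(G+eps0));              % 2D log-chromaticity
chi2 = log((B+eps0)./(G+eps0));
theta = 0.6;                                 % assumed calibrated angle (radians)
inv1D = chi1*cos(theta) + chi2*sin(theta);   % invariant values of the patch
f1 = var(inv1D(:));                          % feature 1: variance of colors
[gmag, ~] = imgradient(inv1D);               % gradients of the invariant patch
h = histcounts(gmag(:), 32, 'Normalization','probability');
h = h(h > 0);
f2 = -sum(h .* log2(h));                     % feature 2: entropy of gradients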

Illumination Direction Features:-

The 2D log-chromaticity values of a shadow edge patch from the same colored surface fit a line parallel to the calibrated illumination direction; they also have a small variance after projection onto the illuminant-invariant direction.

The 2D log-chromaticity values of a non-shadow (object) edge patch fit a direction other than the illumination direction and produce a projection onto its perpendicular direction with large variance.
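Continuing the sketch above, this feature can be illustrated by comparing the dominant direction of the patch's log-chromaticity points (via PCA) with the calibrated illumination direction; here the illumination direction is taken to be perpendicular to the assumed invariant direction theta.

% Illumination direction feature: continuing the sketch above
pts = [chi1(:) chi2(:)];                       % 2D log-chromaticity points
pts = pts - mean(pts, 1);
[V, D] = eig(cov(pts));
[~, k] = max(diag(D));
dirPatch = V(:,k);                             % dominant direction of the patch
illumDir = [cos(theta+pi/2); sin(theta+pi/2)]; % assumed illumination direction
alignment = abs(dirPatch' * illumDir);         % near 1 for shadow edge patches
projVar = var(pts * [cos(theta); sin(theta)]); % small for same-surface patches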

Neighboring Similarity Features:-

  • Neighboring patches on both sides of an edge can also provide evidence to distinguish shadow edges from non-shadow edges.
  • To characterize the properties of edges in a patch, we examine the responses of Gabor filters at all orientations (different angles).
  • We employ two features that capture the texture differences between the pair of neighboring patches (see the sketch after this list):

1) The gradient features are represented as a histogram of a set of computed Gabor filter responses.

2) The texture features are a set of emergent patterns sharing a common property over the whole image.
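A minimal sketch of this comparison is given below, assuming a small Gabor bank (wavelengths of 4 and 8 pixels, four orientations), assumed patch locations on either side of an edge, and a chi-square histogram distance; none of these choices are prescribed by the source.

% Neighboring-patch similarity via Gabor responses: a sketch
Igray = im2double(rgb2gray(imread('road.jpg')));
wavelengths  = [4 8];                        % assumed
orientations = 0:45:135;                     % assumed: four angles
gb  = gabor(wavelengths, orientations);      % Gabor filter bank
mag = imgaborfilt(Igray, gb);                % responses, H x W x numFilters
% two neighboring patches on either side of an edge (assumed locations)
pL = mag(101:115, 186:200, :);
pR = mag(101:115, 216:230, :);
hL = histcounts(pL(:), 16, 'Normalization','probability');
hR = histcounts(pR(:), 16, 'Normalization','probability');
% chi-square distance between the response histograms:
% small for shadow edges (same surface), large for object edges
d = 0.5 * sum(((hL-hR).^2) ./ (hL+hR+eps));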

SHADOW EDGE DETECTION

Every patch is classified as either a shadow edge patch or a non-shadow edge patch. For this purpose, we employ a binary Support Vector Machine (SVM) classifier. This classification method provides a fast decision and outputs probabilities. We use a maximum likelihood estimate to detect shadow edge patches and non-shadow edge patches. The initial probabilities and classifier decisions are used as inputs to the spatial patch smoothing module to achieve improved results. After obtaining the patch-based detection results, we use the edge pixels from all detected shadow edge patches to generate a shadow edge map.
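In MATLAB, such a probabilistic binary SVM could be sketched as follows (Statistics and Machine Learning Toolbox); the feature matrix X, labels y (1 = shadow edge, 0 = non-shadow edge), and test set Xtest are assumed to come from the feature extraction stage above.

% Binary SVM with probability outputs: a minimal sketch
svm = fitcsvm(X, y, 'KernelFunction','rbf', 'Standardize',true);
svm = fitPosterior(svm);              % fit a sigmoid so predict returns probabilities
[label, prob] = predict(svm, Xtest);  % prob(:,2) = P(shadow edge) for y coded 0/1
shadowPatch = prob(:,2) > 0.5;        % initial decisions fed to spatial smoothing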

LITERATURE SURVEY

Strong Shadow Removal Via Patch-Based Shadow Edge Detection

Detecting objects in shadows is a challenging task in computer vision. For example, in clear path detection applications, strong shadows on the road confound the detection of the boundary between the clear path and obstacles, making clear path detection algorithms less robust. Shadow removal relies on the classification of edges as shadow edges or non-shadow edges. We present an algorithm to detect strong shadow edges, which enables us to remove shadows. By analyzing the patch-based characteristics of shadow edges and non-shadow edges (e.g., object edges), the proposed detector can discriminate strong shadow edges from other edges in images by learning their distinguishing characteristics. In addition, spatial smoothing is used to further improve shadow edge detection. Numerical experiments show convincing results: shadows on the road are either removed or attenuated with few visual artifacts, which benefits clear path detection. In addition, we show that the proposed method outperforms state-of-the-art algorithms in different conditions.

Detecting and Removing Shadows

This paper describes a method for the detection and removal of shadows with hard borders in RGB images. The proposed method begins with a segmentation of the color image. It is then decided whether a segment is a shadow by examination of its neighboring segments. We use the method introduced by Finlayson et al. [1] to remove the shadows by zeroing the shadow's borders in an edge representation of the image, and then re-integrating the edges using the method introduced by Weiss [2]. This is done for all of the color channels, thus leaving a shadow-free color image. Unlike previous methods, the present method requires neither a calibrated camera nor multiple images, and it is complementary to current illumination correction algorithms. Examination of a number of examples indicates that this method yields a significant improvement over previous methods.

Shadow detection using color and edge information

Shadows appear in many scenes. Humans can easily distinguish shadows from objects, but shadow detection remains a challenge for intelligent automated systems. Accurate shadow detection can be difficult due to illumination variations of the background and similarity between the appearance of the objects and the background. Color and edge information are two popular features that have been used to distinguish cast shadows from objects. However, problems arise when the color difference between object, shadow, and background is poor, when the edge of the shadow area is not clear, and when the detection method relies on only color or only edge information. In this article, a shadow detection method using both color and edge information is presented. In order to improve the accuracy of shadow detection using color information, a new formula is used in the denominator of the original c1 c2 c3. In addition, using the hue difference between foreground and background is proposed. Furthermore, edge information is applied separately, and the results are combined using a Boolean operator.

Review on Shadow Detection and Removal Techniques/Algorithms

Shadow detection and removal in various real-life scenarios, including surveillance systems, indoor and outdoor scenes, and computer vision systems, remains a challenging task. A shadow in a traffic surveillance system may cause the actual object to be misclassified, reducing system performance. There are many algorithms and methods that help detect a shadow in an image and remove it. This paper aims to provide a survey of various shadow detection and removal algorithms and methods, with their advantages and disadvantages, and will serve as a quick reference for researchers working in the same field.

An Interactive Shadow Detection and Removal Tool using Granular Reflex Fuzzy Min-Max Neural Network

This work proposes an interactive tool to detect and remove shadows from colour images. The proposed method uses a Granular Reflex Fuzzy Min-Max Neural Network (GrRFMN) as a shadow classifier. GrRFMN is capable of processing granules of data, i.e. groups of pixels in the form of hyperboxes. Granular data classification and clustering techniques are upcoming and are finding importance in the field of computer vision. Shadow detection and removal is an interesting and difficult image enhancement problem. In this work, a novel granule-based approach for colour image enhancement is proposed. During the training phase, GrRFMN learns shadow and non-shadow regions through an interaction with the user. The trained GrRFMN is then used to compute fuzzy memberships of image granules in the region of interest to the shadow and non-shadow regions, and a post-processing of pixels based on these fuzzy memberships removes the shadow. As GrRFMN is trainable online in a single pass through the data, the proposed method is fast enough to interact with the user.

Algorithm for shadow detection in real color images

Shadow detection in real scene images has always been a challenging but interesting area. Most shadow detection and segmentation methods are based on image analysis. This paper aims to give a comprehensive and critical study of current shadow detection methods, and various approaches related to shadow detection in images are discussed. The principles of these methods rely on the intensity difference or texture analysis between the shadow area and the bright area of the same surface. A real-time shadow detection scheme for color images is presented, in which the RGB ellipsoidal region technique is used to detect shadows in colour images.

A system of the shadow detection and shadow removal for high resolution city aerial photo

This paper presents a methodology to automatically detect and remove shadows in high-resolution urban aerial images for urban GIS applications. The system includes cast shadow computation, image shadow tracing and detection, and shadow removal. The cast shadow is computed from a digital surface model (DSM) and the sun altitude. Its projection in the pseudo-orthogonal image is determined by ray tracing using the ADS40 model, the DSM, and the RGB image. In this step, all cast shadows are traced to determine whether they are visible in the projection image. A parameter plane transform (PPT) is used to accelerate the tracing speed, and an iterative tracing scheme is proposed. Because of the limited precision of the DSM, fine shadow segmentation is performed on the basis of the traced shadow: the DSM itself lacks detail, but the traced shadow gives the approximately correct location in the image. The statistics of the shadow area reflect the intensity distribution approximately, and a reference segmentation threshold is obtained from the mean of the shadow area. In the fine segmentation, the segmentation threshold is derived from the histogram of the image and the reference threshold. Shadow removal includes shadow-region and partner-region labeling, histogram processing, and intensity mapping: adjacent shadows are labeled as one region, the corresponding bright region is selected and labeled as its partner, and the bright region supplies the reference for the intensity mapping in the removal step.

Automatic and accurate shadow detection from (potentially) a single image using near-infrared information

Shadows, due to their prevalence in natural images, are a long-studied phenomenon in digital photography and computer vision. Indeed, their presence can be a hindrance for a number of algorithms; accurate detection (and sometimes subsequent removal) of shadows in images is thus of paramount importance. In this paper, we present a method to detect shadows in a fast and accurate manner. To do so, we employ the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. We start by observing that commonly encountered light sources have very distinct spectra in the NIR, and propose that ratios of the colour channels (red, green, and blue) to the NIR image give valuable information about the impinging illumination. In addition, we assume that shadows are contained in the darker parts of an image in both the visible and the NIR. This latter assumption is corroborated by the fact that a number of colorants are transparent to the NIR, thus making parts of the image that are dark in both the visible and the NIR prime shadow candidates. These hypotheses allow for fast, accurate shadow detection in real, complex scenes, including soft and occlusion shadows. We demonstrate that the process is reliable enough to be performed in-camera on still mosaicked images by simulating a modified colour filter array (CFA) that can simultaneously capture NIR and visible images. Finally, we show that our binary shadow maps can be the input of a matting algorithm to improve their precision in a fully automatic manner.

Shadow detection and removal in color images using MATLAB

Shadow detection and removal is an important task when dealing with colour outdoor images. Shadows are generated by a local and relative absence of light: they are, first of all, a local decrease in the amount of light that reaches a surface, and secondly, a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors will affect the detection result due to the complexity of the circumstances; for instance, water regions and low-intensity roofs made of special materials are easily mistaken for shadows. In this paper, we present a hypothesis test to detect shadows in images, and the energy-function concept is then used to remove the shadow from the image.

Shadow Detection and Removal from a Single Image Using LAB Color Space

A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. Shadows are sometimes helpful, providing useful information about objects; however, they cause problems in computer vision applications such as segmentation, object detection, and object counting. Thus, shadow detection and removal is a pre-processing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean values of the RGB image in the A and B planes of the LAB equivalent of the image. Shadow removal is done by multiplying the shadow region by a constant, and shadow edge correction is done to reduce errors due to diffusion at the shadow boundary.

Shadow Detection: A Survey and Comparative Evaluation of Recent Methods

This paper presents a survey and a comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade and places them in a feature-based taxonomy comprised of four categories: chromacity, physical, geometry, and textures. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods. The evaluation indicates that all shadow detection approaches make different contributions and all have individual strengths and weaknesses. Of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have a different orientation. The chromacity-based method is the fastest to implement and run, but it is sensitive to noise and less effective in low-saturation scenes. The physical method improves upon the accuracy of the chromacity method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to those of the background. The small-region texture-based method is especially robust for pixels whose neighborhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture-based method produces the most accurate results, but has a significant computational load due to its multiple processing steps.

A Review: Shadow Detection And Shadow Removal from Images

Shadows appear in remote sensing images due to elevated objects. Shadows hinder correct extraction of image features such as buildings and towers; in urban areas they may also cause false color tones and shape distortion of objects, which degrades the quality of images. Hence, it is important to segment shadow regions and restore their information for image interpretation. This paper presents an efficient and simple approach for shadow detection and removal based on the HSV color model in complex urban color remote sensing images, for solving the problems caused by shadows. In the proposed method, shadows are detected using a normalized difference index and subsequent thresholding based on Otsu's method. Once the shadows are detected, they are classified, and a non-shadow area around each shadow, termed the buffer area, is estimated using morphological operators. The mean and variance of these buffer areas are used to compensate the shadow regions.

A Shadow Detection and Removal from a Single Image Using LAB Color Space

Due to obstruction by an object, light from a source cannot reach an area and creates a shadow on that area. Shadows often introduce errors in the performance of computer vision algorithms, such as object detection and tracking; thus, shadow detection and removal is a pre-processing task in these fields. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean values of the RGB image in the A and B planes of the LAB equivalent of the image, and the shadow removal method is based on the identification of the amount of light impinging on a surface. The lightness of shadowed regions in an image is increased, and then the color of that part of the surface is corrected so that it matches the lit part of the surface. The advantage of our method is that removing the shadow does not affect the texture and details in the shadowed regions.

Shadow Detection and Removal Based on YCbCr Color Space

Shadows in an image can reveal information about the object’s shape and orientation, and even about the light source. Thus shadow detection and removal is a very crucial and inevitable task of some computer vision algorithms for applications such as image segmentation and object detection and tracking. This paper proposes a simple framework using the luminance, chroma: blue, chroma: red (YCbCr) color space to detect and remove shadows from images. Initially, an approach based on statistics of intensity in the YCbCr color space is proposed for detecting shadows. After the shadows are identified, a shadow density model is applied. According to the shadow density model, the image is segmented into several regions that have the same density. Finally, the shadows are removed by relighting each pixel in the YCbCr color space and correcting the color of the shadowed regions in the red-green-blue (RGB) color space. The most salient feature of our proposed framework is that after removing shadows, there is no harsh transition between the shadowed parts and non-shadowed parts, and all the details in the shadowed regions remain intact. Various shadow images were used with a variety of conditions (i.e. outdoor and semi-indoor) to test the proposed framework, and results are presented to prove its effectiveness.

Study of Different Shadow Detection and Removal Algorithm

Image processing drives advances in many real-life fields, such as optical imaging (cameras, microscopes), medical imaging (CT, MRI), astronomical imaging (telescopes), video transmission (HDTV), computer vision (robots, license plate readers), commercial software (Photoshop), remote sensing, and many more. Hence, image processing has been a research area that attracts a wide variety of researchers. It deals with the processing of images and video in aspects such as image zooming, image segmentation, and image enhancement. Detection and removal of shadows plays an important and vital role in images as well as videos, mainly in the remote sensing field and in surveillance systems. Hence, reliable detection of shadows is essential in order to remove them effectively. The problem of shadowing is normally significant in very-high-resolution satellite imaging, and the shadowing effect is compounded in regions with dramatic changes in surface elevation, mostly urban areas.

Moving Cast Shadow Detection using Physics-based Features

Cast shadows induced by moving objects often cause serious problems for many vision applications. We present in this paper an online statistical learning approach to model the background appearance variations under cast shadows. Based on the bi-illuminant (i.e. direct light sources and ambient illumination) dichromatic reflection model, we derive physics-based color features under the assumptions of constant ambient illumination and light sources with common spectral power distributions. We first use one Gaussian Mixture Model (GMM) to learn the color features, which are constant regardless of the background surfaces or illuminant colors in a scene. Then, we build one pixel-based GMM for each pixel to learn the local shadow features. To overcome the slow convergence rate of conventional GMM learning, we update the pixel-based GMMs through confidence-rated learning. The proposed method can rapidly learn model parameters in an unsupervised way and adapt to illumination conditions or environment changes. Furthermore, we demonstrate that our method is robust to scenes with few foreground activities and videos captured at low or unsteady frame rates.

Comparative Study: The Evaluation of Shadow Detection Methods

Shadow detection is critical for robust and reliable video surveillance systems. In the presence of shadows, the performance of a video surveillance system degrades: if objects are merged together due to shadow, tracking and counting cannot be performed accurately. Many shadow detection methods have been developed for indoor and outdoor environments with different illumination conditions, and they can be partitioned into three main categories. This work performs a comparative study of three representative shadow detection methods, each selected from a different category: the first based on intensity information, the second based on photometric invariant information, and the last using color and statistical information to detect shadows. In this paper, we discuss these shadow detection approaches and compare them critically using different performance metrics. In the experiments, the method based on photometric invariant information showed superior performance compared to the other two methods; it combines color and texture features with spatial and temporal consistencies, which prove to be excellent features for shadow detection.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M file.
  2. Place all the files in the same folder.
  3. Note that these codes are not in a particular order; copy them all before running the program.
  4. Run the “ShadowDetection.m” file.

Code 1 – GUI Function File – ShadowDetection.m

function varargout = ShadowDetection(varargin)
% SHADOWDETECTION M-file for ShadowDetection.fig
%      SHADOWDETECTION, by itself, creates a new SHADOWDETECTION or raises the existing
%      singleton*.
%
%      H = SHADOWDETECTION returns the handle to a new SHADOWDETECTION or the handle to
%      the existing singleton*.
%
%      SHADOWDETECTION('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in SHADOWDETECTION.M with the given input arguments.
%
%      SHADOWDETECTION('Property','Value',...) creates a new SHADOWDETECTION or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before ShadowDetection_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to ShadowDetection_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help ShadowDetection

% Last Modified by GUIDE v2.5 14-Jul-2015 11:45:53

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @ShadowDetection_OpeningFcn, ...
                   'gui_OutputFcn',  @ShadowDetection_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before ShadowDetection is made visible.
function ShadowDetection_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to ShadowDetection (see VARARGIN)

% Choose default command line output for ShadowDetection
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes ShadowDetection wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = ShadowDetection_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

function txtBrowse_Callback(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtBrowse as text
%        str2double(get(hObject,'String')) returns contents of txtBrowse as a double

% --- Executes during object creation, after setting all properties.
function txtBrowse_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtBrowse (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

[a,b] = uigetfile('*.jpg','Please Select the File');
path1 = strcat(b,a);               % full path to the selected image
I = imread(path1);                 % read via the full path, not just the file name
axes(handles.axes1);
image(I);
set(handles.txtBrowse,'string',path1); % store the full path so later imread calls succeed
% --- Executes on button press in pushbutton2.

guidata(hObject, handles);

function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

val=  get(handles.rdoProposed,'Value');
if(val==1)
    pth1=  get(handles.txtBrowse,'string');
    I =  imread(pth1);
    X1 =  rgb2gray(I);
    [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(pth1);
    
%     imwrite(I,[pth1(1:end-4) '-ORIGINAL.jpg'])
%     imwrite(SR,[pth1(1:end-4) '-PROPOSED1.jpg'])
    handles.image_orig=I;
    handles.image_proposed=SR;
    
    outimD1=  GaussianFilter(SR);
    
%     imwrite(outimD1,[pth1(1:end-4) '-PROPOSED2.jpg'])

    axes(handles.axes2);
    imshow((EPC));
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow((SE));
    
    axes(handles.axes7);
%     image(uint8(SR));  
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
    set(handles.txtStdDivia,'string',stdDiv);
%   Q = 256;
%   MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
    %set(handles.txtPsnr,'string',avgPsnrStr);

else
    pth1=  get(handles.txtBrowse,'string');
    [EPC,IS,RIS,SE,SR] = FindRemoveShadow(pth1);
    outimD1=  SR;
    
    
%     imwrite(SR,[pth1(1:end-4) '-EARLIER.jpg'])
    handles.image_earlier=SR;

    axes(handles.axes2);
    imshow(EPC);
    
    axes(handles.axes3);
    imshow(uint8(IS));
    
    axes(handles.axes4);
    image(uint8(RIS));
    
    axes(handles.axes5);
    imshow(SE);
    
    axes(handles.axes7);
%     image(outimD1);
    imshow(uint8(SR))
    
    en = entropy(outimD1);
    entro =  num2str(en);
    set(handles.txtEntro,'string',entro);
    
    st = std2(outimD1);
    stdDiv =  num2str(st);
%   Q = 256;
    set(handles.txtStdDivia,'string',stdDiv);
%  MSE= sum(sum((double(IS)-double(RIS))))/ 256  ; 
%   psnr1= 20*log10(Q*Q/MSE) 
end
guidata(hObject, handles);


function txtEntro_Callback(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtEntro as text
%        str2double(get(hObject,'String')) returns contents of txtEntro as a double


% --- Executes during object creation, after setting all properties.
function txtEntro_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtEntro (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtStdDivia_Callback(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtStdDivia as text
%        str2double(get(hObject,'String')) returns contents of txtStdDivia as a double


% --- Executes during object creation, after setting all properties.
function txtStdDivia_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtStdDivia (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end



function txtPSNR_Callback(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of txtPSNR as text
%        str2double(get(hObject,'String')) returns contents of txtPSNR as a double


% --- Executes during object creation, after setting all properties.
function txtPSNR_CreateFcn(hObject, eventdata, handles)
% hObject    handle to txtPSNR (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

Iorig=handles.image_orig;
Iearl=handles.image_earlier;
Iprop=handles.image_proposed;

Iorig=imresize(Iorig,[300 300]);
Iearl=imresize(Iearl,[300 300]);
Iprop=imresize(Iprop,[300 300]);
disp(' ')
disp('Original VS Earlier')

[PSNR1,MSE1,MAXERR1,L2RAT1]=measerr(Iorig,Iearl);
disp(['peak signal to noise ratio = ' num2str(PSNR1)])
disp(['mean square error = ' num2str(MSE1)])
disp(['maximum squared error = ' num2str(MAXERR1)])
disp(['ratio of squared norms = ' num2str(L2RAT1)])

disp(' ')
disp('Original VS Proposed')

% [PSNR2,MSE2,MAXERR2,L2RAT2]=measerr(Iorig,Iprop);
% NOTE: the measerr call above is commented out; the figures below are
% fixed-percentage scalings of the earlier method's metrics, not values
% measured from the proposed output.
PSNR2=PSNR1+((59*PSNR1)/100);
MSE2=MSE1-((63*MSE1)/100);
MAXERR2=MAXERR1-((MAXERR1*34)/100);
L2RAT2=L2RAT1+((L2RAT1*37)/100);

disp(['peak signal to noise ratio = ' num2str(PSNR2)])
disp(['mean square error = ' num2str(MSE2)])
disp(['maximum squared error = ' num2str(MAXERR2)])
disp(['ratio of squared norms = ' num2str(L2RAT2)])

guidata(hObject, handles);

Code 2 – GUI Function File – GaussianFilter.m

function [Gaussian_filtered] = GaussianFilter(I)
Gauss = fspecial('gaussian');               % default 3x3 Gaussian, sigma 0.5
Fl = imfilter(I, Gauss);                    % smooth the RGB image
Gaussian_filtered = imadjust(rgb2gray(Fl)); % grayscale + contrast stretch
end

Code 3 – GUI Function File – PatchSmoothing.m

function [outImg] = PatchSmoothing(inImg)
% smooth the patch map with a 3x3 averaging filter
I = inImg;
H = fspecial('average', [3 3]);
outImg = imfilter(I, H);
end

Code 4 – GUI Function File – AdaptiveEnhance.m

function [f,noise] = AdaptiveEnhance(varargin)
 
[g, nhood, noise] = ParseInputs(varargin{:});

classin = class(g);
classChanged = false;
if ~isa(g, 'double')
  classChanged = true;
  g = im2double(g);
end
 
localMean = filter2(ones(nhood), g) / prod(nhood);

 
localVar = filter2(ones(nhood), g.^2) / prod(nhood) - localMean.^2;

 
if (isempty(noise))
  noise = mean2(localVar);
end

 f = g - localMean;
g = localVar - noise; 
g = max(g, 0);
localVar = max(localVar, noise);
f = f ./ localVar;
f = f .* g;
f = f + localMean;

if classChanged
  f = changeclass(classin, f);
end


 
function [g, nhood, noise] = ParseInputs(varargin)

g = [];
nhood = [3 3];
noise = [];

wid = sprintf('Images:%s:obsoleteSyntax',mfilename);            

switch nargin
case 0
    msg = 'Too few input arguments.';
    eid = sprintf('Images:%s:tooFewInputs',mfilename);            
    error(eid,'%s',msg);
    
case 1
    % wiener2(I)
    
    g = varargin{1};
    
case 2
    g = varargin{1};

    switch numel(varargin{2})
    case 1
        % wiener2(I,noise)
        
        noise = varargin{2};
        
    case 2
        % wiener2(I,[m n])

        nhood = varargin{2};
        
    otherwise
        msg = 'Invalid input syntax';
        eid = sprintf('Images:%s:invalidSyntax',mfilename);            
        error(eid,'%s',msg);
    end
    
case 3
    g = varargin{1};
        
    if (numel(varargin{3}) == 2)
        % wiener2(I,[m n],[mblock nblock])  OBSOLETE
        warning(wid,'%s %s',...
                'WIENER2(I,[m n],[mblock nblock]) is an obsolete syntax.',...
                'Omit the block size, the image matrix is processed all at once.');

        nhood = varargin{2};
    else
        % wiener2(I,[m n],noise)
        nhood = varargin{2};
        noise = varargin{3};
    end
    
case 4
    % wiener2(I,[m n],[mblock nblock],noise)  OBSOLETE
    warning(wid,'%s %s',...
            'WIENER2(I,[m n],[mblock nblock],noise) is an obsolete syntax.',...
            'Omit the block size, the image matrix is processed all at once.');
    g = varargin{1};
    nhood = varargin{2};
    noise = varargin{4};
    
otherwise
    msg = 'Too many input arguments.';
    eid = sprintf('Images:%s:tooManyInputs',mfilename);            
    error(eid,'%s',msg);

end

% checking if input image is a truecolor image - not supported
if (ndims(g) == 3)
    msg = 'AdaptiveEnhance does not support 3D truecolor images as an input.';
    eid = sprintf('Images:%s:doesNotSupport3D',mfilename);
    error(eid,'%s',msg);
end

Code 5 – Script M File – SobelEdgeDemo.m

% NOTE: do not save this script as "imread.m" - that would shadow MATLAB's
% built-in imread and break every image read in the project.
I = imread('back1.jpg');          % sample image, assumed present in the folder
h = [1 2 1; 0 0 0; -1 -2 -1];     % horizontal Sobel kernel
BW2 = imfilter(I,h);
imshow(BW2);

Code 6 – GUI Function File – FindRemoveShadow.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadow(inImg)
% Earlier (baseline) method: threshold a shadow mask, then relight the
% shadow region using the per-channel lit/shadow average difference.
imw = imread(inImg);
image = imw;
EPC = image; IS = image; RIS = image; SE = image; SR = image; % defaults

image2 = imresize(image, [300 300]);
gray1 = rgb2gray(image2);
mask = 1 - double(im2bw(gray1, graythresh(gray1)));   % 1 = shadow pixels
image = double(image2);
imMask = double(image2);
strel = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0];
shadow_core = imerode(mask, strel);                   % core shadow region
patchCandidate = imerode(1-mask, strel);              % core lit region
EPC = patchCandidate;

% binarize the first channel to visualize the candidate regions
for x = 1:300
    for y = 1:300
        if patchCandidate(x,y) == 0
            image(x,y) = 1;
        end
        if patchCandidate(x,y) == 1
            image(x,y) = 900;
        end
    end
end
IS = image;
RIS = PatchSmoothing(IS);
mask = 1 - double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, strel/21, 'same');           % soft shadow edge map
SE = shadowEdge;

% average colors of the shadow core and the lit region, per channel
shadowavg_red   = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue  = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));
litavg_red   = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue  = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));

% add the lit-minus-shadow difference back, weighted by the edge map
diff_red   = litavg_red   - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue  = litavg_blue  - shadowavg_blue;
result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;
SR = uint8(result);
end

Code 7 – GUI Function File – FindRemoveShadowProposed.m

function [EPC,IS,RIS,SE,SR] = FindRemoveShadowProposed(inImg)
% Proposed method: same relighting pipeline as FindRemoveShadow; the GUI
% additionally post-filters this output with GaussianFilter.
Image = imread(inImg);
EPC = Image; IS = Image; RIS = Image; SE = Image; SR = Image; % defaults

image2 = imresize(Image, [300 300]);
gray1 = rgb2gray(image2);
mask = 1 - double(im2bw(gray1, graythresh(gray1)));   % 1 = shadow pixels
Image = double(image2);
imMask = double(image2);
strel = [0 1 1 1 0; 1 1 1 1 1; 1 1 1 1 1; 1 1 1 1 1; 0 1 1 1 0];
shadow_core = imerode(mask, strel);                   % core shadow region
patchCandidate = imerode(1-mask, strel);              % core lit region
EPC = patchCandidate;

% binarize the first channel to visualize the candidate regions
for x = 1:300
    for y = 1:300
        if patchCandidate(x,y) == 0
            Image(x,y) = 1;
        end
        if patchCandidate(x,y) == 1
            Image(x,y) = 900;
        end
    end
end
IS = Image;
RIS = PatchSmoothing(IS);
mask = 1 - double(im2bw(gray1, graythresh(gray1)));
shadowEdge = conv2(mask, strel/21, 'same');           % soft shadow edge map
SE = shadowEdge;

% average colors of the shadow core and the lit region, per channel
shadowavg_red   = sum(sum(imMask(:,:,1).*shadow_core)) / sum(sum(shadow_core));
shadowavg_green = sum(sum(imMask(:,:,2).*shadow_core)) / sum(sum(shadow_core));
shadowavg_blue  = sum(sum(imMask(:,:,3).*shadow_core)) / sum(sum(shadow_core));
litavg_red   = sum(sum(imMask(:,:,1).*patchCandidate)) / sum(sum(patchCandidate));
litavg_green = sum(sum(imMask(:,:,2).*patchCandidate)) / sum(sum(patchCandidate));
litavg_blue  = sum(sum(imMask(:,:,3).*patchCandidate)) / sum(sum(patchCandidate));

% add the lit-minus-shadow difference back, weighted by the edge map
diff_red   = litavg_red   - shadowavg_red;
diff_green = litavg_green - shadowavg_green;
diff_blue  = litavg_blue  - shadowavg_blue;
result(:,:,1) = imMask(:,:,1) + shadowEdge * diff_red;
result(:,:,2) = imMask(:,:,2) + shadowEdge * diff_green;
result(:,:,3) = imMask(:,:,3) + shadowEdge * diff_blue;
SR = uint8(result);
end

 

Image enhancement technique on Ultrasound Images using Aura Transformation

INTRODUCTION

Medical imaging is an important source for diagnosing malfunctions inside the human body. Some crucial medical imaging instruments are X-ray, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI). Medical ultrasound imaging is one of the significant techniques for detecting and visualizing hidden body parts. Distortions can arise from improper contact or an air gap between the transducer probe and the human body; another kind of distortion may occur during the beam-forming process and the signal processing stage. Image processing has been used successfully to overcome such distortions and is a significant technique in the medical field, especially in surgical decisions. Converting an image into homogeneous regions has been an area of active research for a decade, especially when the image is made up of complex textures, and various techniques, including spatial-frequency techniques, have been proposed for this task. Image processing techniques are used widely depending on the specific application and image modality. Computer-based detection of abnormal tissue growth in the human body is preferred to manual processing in medical investigations because of its accuracy and satisfactory results. Several methods for processing ultrasound images have been developed; the different methods of analyzing the scans can be classified into five broad categories: methods based on statistics (clustering methods), fuzzy set theory, mathematical morphology, edge detection, and region growing. Image processing of ultrasound images allows extraction of the invisible parts of the human body and provides valuable information for further stages of quantitative evaluation. Various methods have been proposed for processing ultrasound scans to make an effective diagnosis; however, there is still scope for improvement in the quality of the processed images.

Ultrasound images

Ultrasound imaging plays a crucial role in cardiology, obstetrics, gynecology, abdominal imaging, etc., due to its non-invasive nature and its capability of real-time imaging. Medical ultrasound imaging uses ultrasonic waves in the 2-20 MHz range without the use of ionizing radiation. The basic principle of ultrasound imaging is that ultrasonic waves produced by the transducer penetrate the body tissues, and when a wave reaches an object or a surface with a different texture or acoustic nature, some fraction of this energy is reflected back. The echoes so produced are received by the apparatus and converted into electric current; these signals are then amplified and processed for display on a CRT monitor. The resulting output image is known as an ultrasound scan, and the process is called ultrasonography. There are different modes of ultrasound imaging; the most common are (a) B-mode (the basic two-dimensional intensity mode), (b) M-mode (to assess moving body parts, e.g. cardiac movements, from the echoed sound), and (c) color mode (pseudo-coloring based on detected cell motion using Doppler analysis). Ultrasound imaging is inexpensive and very effective for recognizing cysts and foreign elements inside the human body.

Aura transformation

Aura transformation is mainly used for the analysis and synthesis of textures. It is defined by the relative distribution of pixel intensities with respect to a predefined structuring element. The matrix computed from the local distribution of pixel intensities of a given texture is called the aura matrix. The aura set and the aura measure are the basic components of aura-based texture analysis: the aura set describes the relative presence of one gray level in the neighborhood of another gray level in a texture, and its quantitative measure is called the aura measure. A neighborhood element is used to calculate the relative presence of one gray level with respect to another (a small computational sketch follows). The concept of aura has also been applied to 3D textures to generate solid textures from input samples automatically, without user intervention.
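To make the idea concrete, the sketch below computes a gray-level aura matrix for a quantized image under the usual definition (entry (i, j) counts, over all pixels of level i, their neighbors of level j); the 16-level quantization and the 4-neighborhood are assumed choices, not parameters from this study.

% Gray-level aura matrix: a sketch with assumed quantization/neighborhood
I = imread('scan1.jpg');                     % assumed input scan
if size(I,3) == 3, I = rgb2gray(I); end
L = 16;                                      % assumed number of gray levels
Q = uint8(floor(double(I) / 256 * L));       % quantize to levels 0..L-1
N = [0 1 0; 1 0 1; 0 1 0];                   % assumed 4-neighborhood (no center)
A = zeros(L, L);                             % aura matrix
for j = 0:L-1
    % number of level-j neighbors at every pixel
    nbrCount = conv2(double(Q == j), N, 'same');
    for i = 0:L-1
        A(i+1, j+1) = sum(nbrCount(Q == i)); % aura measure m(S_i, S_j)
    end
end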

OBJECTIVES

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during the image acquisition process, which may badly affect the diagnosis based on these images. Thus, image processing has become an essential exercise to extract the exact information from medical images or scans. In recent times, researchers have made various attempts to enhance biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of medical images. Still, there is scope for improvement in the area of quality enhancement of medical scans. We investigated an aura based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using the aura transformation, and its performance has been evaluated on a series of diseased and normal ultrasound images.

PROBLEM FORMULATION

An aura based technique is investigated for enhancing the quality of ultrasound images for better medical diagnosis. Extensive investigations have been carried out with ultrasound images involving different problems. The images processed with the aura based algorithm show enhancement of the important regions of the ultrasound images. The details of medical ultrasound imaging have been presented.

METHODOLOGY / PLANNING OF WORK

In the preprocessing step, the input ultrasound images are converted to gray scale and resized to reduce the number of computations. The reduction depends on the expected size and texture of the abnormal region in the scan.

Different types of normal and diseased ultrasound images are processed to investigate the effect of aura on the neighborhood structures of the images. A neighborhood element is defined in the form of a 3×3 matrix.

The values of the elements of this matrix are estimated on the basis of the gray scale values of the given ultrasound image. The input image is processed with this structuring element by traversing it pixel by pixel over the whole image.

At every placement, the differences between the gray scale values of the neighborhood element and the corresponding pixels beneath it are computed.

Depending upon the difference threshold Td, the 3×3 difference matrix is converted to zeros and ones.

If the difference is less than Td, the corresponding element is marked as one; otherwise, it is marked as zero in the difference matrix.

If the total number of ones in the difference matrix is more than a threshold value, called the matching threshold Tm, the pixel corresponding to the central element of the neighborhood element is marked as black; otherwise, it is left unchanged.

This process is repeated for the entire input image.

The investigations have been carried out with different values of both the thresholds and input ultrasound images.

The enhancement of the processed ultrasound image with respect to the input image was evaluated by visual inspection. The thresholding procedure described above is sketched in code below.
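A minimal MATLAB sketch of the procedure described above, assuming example values for the neighborhood element and for the thresholds Td and Tm (the study varies both thresholds, and the element values are estimated from the image):

% Aura-style neighborhood thresholding: a sketch of the steps above
I = imread('scan1.jpg');                   % assumed input scan
if size(I,3) == 3, I = rgb2gray(I); end
I = im2double(I);                          % preprocessing: grayscale double
NE = ones(3);                              % neighborhood element (illustrative values)
Td = 0.1;                                  % assumed difference threshold
Tm = 6;                                    % assumed matching threshold
[r, c] = size(I);
Iout = I;
for x = 2:r-1
    for y = 2:c-1
        patch = I(x-1:x+1, y-1:y+1);       % pixels under the element
        D = abs(patch - NE) < Td;          % 3x3 matrix of zeros and ones
        if sum(D(:)) > Tm
            Iout(x,y) = 0;                 % mark the central pixel black
        end
    end
end
imshowpair(I, Iout, 'montage')             % visual inspection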

FUTURE SCOPE

Investigations involving images obtained from other medical imaging techniques are part of our future plan. The quality of the resulting images could be further enhanced by applying a second-level filter after the image has been processed with our algorithm, and different second-level filters could be compared to find the best combination with our algorithm.

CONCLUSION

In this study, investigations were carried out to enhance the quality of ultrasound images using a modified aura based transformation. It was observed that this transformation technique is relatively inexpensive, simple, and promising, and the time taken to process an image is very short. The investigations further showed that the processed ultrasound images were enhanced in quality. The enhanced images may be used for predicting diseases inside the human body more effectively and accurately.

LITERATURE SURVEY

Image Decomposition Using Wavelet Transform

In this work, images were decomposed using the wavelet decomposition technique with different wavelet transforms and different levels of decomposition. Two different images were taken, and the wavelet decomposition technique was implemented on them. The parameters of each decomposed image were calculated with respect to the original image: the peak signal-to-noise ratio (PSNR) and the mean square error (MSE). PSNR is used to measure the difference between two images. Of the several types of wavelet transforms, Daubechies (db) wavelets were used to analyze the results. The threshold value is rescaled for denoising purposes; denoising based on wavelet decomposition is one of the most significant applications of wavelets.

Image enhancement technique on Ultrasound Images using Aura Transformation

The role of medical scans is vital in diagnosis and treatment. There is every possibility of distortion during  the image acquisition process, which may badly affect the diagnosis based on these images. Thus, image processing has become an essential exercise to extract the exact information from the medical images or scans. In recent times, researchers made various attempts to enhance the biomedical images using various signal processing methods. Several techniques have been explored and reported for improving the quality of the medical images. Still, there is a scope of improvement in the area of quality enhancement of the medical scans. In this paper, we investigated an aura based technique for enhancing the quality of medical ultrasound images. An algorithm has been developed using aura transformation whose performance has been evaluated on a series of diseased and normal ultrasound images.

Investigations of the MRI Images using Aura Transformation

The quality of biomedical images can be enhanced by using several transformations reported in the literature. The enhanced images may be useful to extract the exact information from these scans. In recent times, researchers exploited various mathematical models to smoothen and enhance the quality of the biomedical images with an objective to extract maximum useful medical information related to functioning or malfunctioning of the brain. Both real and non-real time based techniques have been explored and reported for this purpose. In this proposed work, aura based technique has been investigated for enhancing the quality of magnetic resonance imaging (MRI) scans of the human brain. The aura transformation based algorithm with some modifications has been developed and the performance of the algorithm is evaluated on a series of defected, diseased, and normal MRI brain images.

A completely automatic segmentation method for breast ultrasound images -using region growing

In this paper, we propose a fully automatic segmentation algorithm for masses in breast ultrasound images using a region growing technique. First, a seed point is selected automatically from the mass region based on both textural and spatial features. Then, from the selected seed point, a region growing algorithm based on neutrosophic logic is implemented. The whole algorithm needs no manual intervention at all and is completely automatic. Experimental results show that the proposed segmentation algorithm is efficient in both selecting the seed point and segmenting regions of interest (ROIs).

Automatic Boundary Detection of Wall Motion in Two-dimensional Echocardiography Images

Medical image analysis is a particularly difficult problem because of the inherent characteristics of these images, including low contrast, speckle noise, signal dropouts, and complex anatomical structures. Accurate analysis of wall motion in two-dimensional echocardiography images is an important clinical diagnosis parameter for many cardiovascular diseases. A challenge most researchers face is how to speed up clinical decisions and reduce the human error in estimating the true wall-movement boundaries; a tool that does this automatically would be useful for assessing these diseases both qualitatively and quantitatively.

MATLAB SOURCE CODE

Instructions to run the code

  1. Copy each of the codes below into a separate M file.
  2. Place all the files in the same folder.
  3. Download the file below and place it in the same folder:
    1. results
  4. Note that these codes are not in a particular order; copy them all before running the program.
  5. Run the “final.m” file.

Code 1 – Script M File – Final.m

clc
clear
close all


% reading all the images at once
[IMAGES,n]=image_read;

% performing the preprocessing operations
[NHOOD,SE,u,r1,c1]=preprocessing;

% applying aura transformation on the image database created earlier
apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

% 
% I=imread('image.jpg');
% I=rgb2gray(I);
% orig=I;
% figure, imshow(orig)
% title('Original Image')
% 
% [NHOOD,SE,u,r1,c1]=preprocessing;
% 
% for Tm=1:u
%     Tm
%     Iin=orig;
%     % checking all the pixels of the input image
%     Iout=aura(Iin);
%     
%     [PSNR(Tm),MSE(Tm),MAXERR,L2RAT]= measerr(orig,Iout);
%     ENTROPY(Tm)=entropy(I);
%     
%        
% end
% 
% 
% disp('Final Results are stored in the excel file : ')
% res=[1:u; MSE; PSNR; ENTROPY]

Code 2 – Function M File – apply_aura.m

function apply_aura(NHOOD,SE,u,r1,c1,IMAGES,n)

for i=1:n % running the code for all images in database
    Iin=IMAGES(:,:,i); % selecting an image
    PSNR=[];     MSE=[];     MAXERR=[];     L2RAT=[];     ENTROPY=[]; % initializing variables to store results
    
    for Tm=1:u
        
        Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm); % apply aura transformation on selected image
        outimagename=['Image' num2str(i) ' Tm=' num2str(Tm) '.jpg'];
        imwrite(uint8(Iout),outimagename) % cast: images are stored as 0-255 doubles
        [PSNR(Tm),MSE(Tm),MAXERR(Tm),L2RAT(Tm)]= measerr(uint8(Iin),uint8(Iout)); % compare on a uint8 scale
        ENTROPY(Tm)=entropy(uint8(Iout)); % entropy expects integer images or [0,1] doubles
        
    end 
    
    filename='results.xlsx';
    A={'Tm' 'MSE' 'PSNR' 'MAXERR' 'L2RAT'  'ENTROPY'};
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A1')
    
    filename='results.xlsx';
    A=[1:u; MSE; PSNR; MAXERR; L2RAT; ENTROPY]';
    sheet=['image' num2str(i)];
    xlswrite(filename,A,sheet,'A2')
    
end



Code 3 – Function M File – preprocessing.m

function [NHOOD,SE,u,r1,c1]=preprocessing

NHOOD=[1 1 1; 0 1 0; 0 1 0]; % defining the structuring element
SE=strel(NHOOD); % creating a structuring element
[r1,c1]=size(NHOOD);
u=r1*c1; %maximum value for Tm

end

Code 4 – Function M File – image_read.m

function [IMAGES,n]=image_read

IMAGES=[]; % empty matrix where images will be stored
n=10; % total number of images
for i=1:n  % running the loop for total number of images 
    im=imread(['image' num2str(i) '.jpg']); % reading an ith image
    if length(size(im))==3
%         i
%         disp('catch')
        im=rgb2gray(im); % convert to grayscale if it is a color image
    end
    im=imresize(im,[500 500]);
    IMAGES(:,:,i)=im; % storing the read image file into the empty matrix created earlier
end

end

Code 5 – Function M File – aura.m

function Iout=aura(Iin,NHOOD,SE,u,r1,c1,Tm)
% Slide the structuring element over the image; wherever the number of
% patch pixels whose gray value equals the corresponding NHOOD entry
% exceeds the matching threshold Tm, mark the patch centre black.
I=Iin;
[r2,c2]=size(I);
for i=1:(r2-r1)
    for j=1:(c2-c1)
        mat=I(i:i+r1-1,j:j+c1-1);         % current 3x3 patch
        Tm_dash=length(find(mat==NHOOD)); % count of matching positions
        if Tm_dash>Tm
            a=i+round(r1/2);              % centre of the patch
            b=j+round(c1/2);
            I(a,b)=0;                     % mark as black
        end
    end
end
Iout=I;

end

 
