Friday, December 27, 2013

Data Mining Topics & Applications 2014

The European Conference on Data Mining (ECDM’14) aims to gather researchers and application developers from a wide range of data mining related areas such as statistics, computational intelligence, pattern recognition, databases and visualization. ECDM’14 aims to advance the state of the art in the data mining field and its various real-world applications, and will provide opportunities for technical collaboration among data mining and machine learning researchers around the globe.

Acceptance will be based primarily on originality, significance and quality of contribution.

Topics for this conference include, but are not limited to:

Core Data Mining Topics
- Parallel and distributed data mining algorithms
- Data streams mining
- Graph mining
- Spatial data mining
- Text, video, and multimedia data mining
- Web mining
- Pre-processing techniques
- Visualization
- Security and information hiding in data mining
Data Mining Applications
- Databases
- Bioinformatics
- Biometrics
- Image analysis
- Financial modeling
- Forecasting
- Classification
- Clustering
- Social Networks
- Educational data mining


For further details please contact the publicity chair - secretariat@datamining-conf.org

This is a blind peer-reviewed conference.


Important Dates

- Submission Deadline: 31 January 2014
- Notification to Authors: 3 March 2014
- Final Camera-Ready Submission and Early Registration: until 31 March 2014
- Late Registration: after 31 March 2014
- Conference: 15 - 17 July, 2014

Areas and Resources in Software Engineering

Areas of interest include the following:
  • Software engineering in and for the cloud
  • Data analytics for software development and engineering
  • Programming paradigms and software engineering tools for the Internet of Things
  • Tools for testing and verification
  • Tools and technologies challenging computer science education in general, and teaching of programming in particular
  • Empowering end users with program synthesis
  • Programming in the presence of uncertainty and approximation
  • Infrastructure for cloud-scale software
    • Server, storage, interconnect, and data center architecture and design
    • Intra- and inter-data center networking
    • Resource models and resource-aware programming models
    • Resource provisioning, monitoring, and control for cloud computing

Resources

Microsoft offers a wide range of platforms and services of direct relevance to the call. PIs are encouraged to use these, as well as Microsoft Research’s free tools and frameworks. A selection of these follows.

Devices and “Internet of Things”

  • TouchDevelop – programming on multi-platform devices
  • Lab of Things – a research platform for deploying interconnected devices at scale
  • Windows Phone Dev Center: development tools for Windows Phone
  • .NET Gadgeteer: rapid prototyping platform for small electronic gadgets and embedded hardware devices
  • Kinect for Windows
Software engineering in the cloud
Sharing of tools and teaching
  • Rise4Fun allows visitors to interact directly with new tools and languages developed by Microsoft and others (including IDEs, compilers, and analysis tools) in the cloud
  • TouchDevelop: programming on touch devices, including tablets and phones 
Design, programming, and services
  • Microsoft Visual Studio Ultimate: for lifecycle management of project development
  • Debugger Canvas: a new user experience for the debugger in Microsoft Visual Studio Ultimate
  • DKAL: a distributed authorization policy language
  • FORMULA: a modern formal specification language targeting model-based development
  • F*: dependently typed language for secure distributed programming
  • P: a language for asynchronous event-driven programming
  • Visual Studio F# 3.0: functional programming with information-rich programming and Try F#
  • Koka: a function-oriented language with effect inference
Dynamic analysis tools
  • Code Contracts: language-agnostic contracts for Microsoft .NET
  • CHESS: concurrency testing tools
  • Detours: for detouring Win32 and application APIs
  • Pex: automatic unit testing tool for Microsoft .NET
  • Stubs: lightweight test stubs and detouring for Microsoft .NET
Static analysis and program verification
  • Bek: fast and precise sanitizer analysis using regular transducers
  • Boogie: intermediate language for targeting program verifiers
  • VCC: program verifier for C with contract language
  • Z3: automated theorem prover
  • Common Compiler Infrastructure: CIL-metadata reader/writer, compiler/decompiler framework
Infrastructure for cloud-scale software
Additional resources
All of this software is free for everyone to download, except for Windows Azure and Microsoft Visual Studio, which are free to universities enrolled in DreamSpark.

Research Areas in Software Engineering

The Software Engineering group works on the development and maintenance of software, with the overall goal of creating high-quality software. Research is done in the following areas:
  • Software Testing
  • Program Analysis
  • Program understanding
  • Modeling and Design
  • Failure Analysis
  • Fault Localization 
  • Debugging 
  • Remote Monitoring
  • Human and social aspects of software engineering
  • Software Engineering Education


ICSOFT-EA 2014, the 9th International Conference on Software Engineering and Applications

Papers may address one or more of the listed sub-topics, although authors should not feel limited by them. Unlisted but related sub-topics are also acceptable, provided they fit in one of the following main topic areas:
1. ENTERPRISE SOFTWARE TECHNOLOGIES
2. SOFTWARE ENGINEERING METHODS AND TECHNIQUES
3. DISTRIBUTED AND MOBILE SOFTWARE SYSTEMS
4. SOFTWARE PROJECT MANAGEMENT


AREA 1: ENTERPRISE SOFTWARE TECHNOLOGIES
  • Business Process Modelling
  • Client/Server Computing
  • IT Service Management
  • Customer Relationship Management
  • Enterprise Resource Planning
  • Interoperability
  • Middleware
  • Business Intelligence
  • Decision Support Systems
  • Intelligent Problem Solving
  • User Modelling and HCI
  • Virtual Organizations
AREA 2: SOFTWARE ENGINEERING METHODS AND TECHNIQUES
  • Requirements Elicitation and Specification
  • Software Integration
  • Software Testing and Maintenance
  • Model-driven Engineering
  • Software Quality
  • Software and Information Security
  • User Identification and Intrusion Detection
  • Web Services
  • Formal Methods
  • Programming Languages
AREA 3: DISTRIBUTED AND MOBILE SOFTWARE SYSTEMS
  • Distributed Architectures
  • Cloud Applications
  • Web-based Software Development
  • Mobile Technologies and Applications
  • Pervasive Computing and Communications
  • Ambient Intelligence
  • Agents and Multi-agent Systems
  • Communication Networks and Protocols
  • Parallel and High Performance Computing
  • Process Coordination and Synchronization
  • Distributed Systems Privacy
AREA 4: SOFTWARE PROJECT MANAGEMENT
  • Project Management Software
  • Scheduling and Estimating
  • Software Metrics
  • Project Planning, Monitoring and Control
  • Agile Methodologies
  • Performance Evaluation
  • Risk Management
  • Quality Assurance
  • Certification
  • Case Studies of Project Management
Conference Date: 29 - 31 August, 2014

Regular Papers
- Paper Submission: March 18, 2014
- Authors Notification: June 12, 2014
- Camera Ready and Registration: June 26, 2014

Position Papers
- Paper Submission: May 21, 2014
- Authors Notification: June 24, 2014
- Camera Ready and Registration: July 7, 2014

The best papers will be selected to appear either in an international journal or in a book to be published by Springer-Verlag. Additional information can be found at http://www.icsoft-ea.org.

Monday, October 28, 2013

Face Detection and Tracking in Real Time video using Matlab

MATLAB includes excellent examples for face detection and tracking in the Computer Vision toolbox. Naturally, the first thing I did was run them to test their capabilities and performance, and they were great. The only limitation I found was that the input video couldn't be real time, for example taken with my own webcam. So here is a solution to that problem, based on the very same example MathWorks provides; I hope it will be useful. I also added a small text label in the top-left corner to show the bbox location.

Here is the code:


function [  ] = face_tracker(  )
%Create a cascadeObjectDetector, by default it detects the face
faceDetector = vision.CascadeObjectDetector();

%Get the input device using image acquisition toolbox,resolution = 640x480 to improve performance
vidDevice = imaq.VideoDevice('winvideo', 1, 'MJPG_640x480','ROI',[1 1 640 480]);

%Get a video frame and run the detector.
videoFrame = step(vidDevice);

%Get a bounding box around the face
bbox            = step(faceDetector, videoFrame);

%Check if something was detected, otherwise exit
if isempty(bbox)
    error('Nothing was detected, try again');
end

%Show coordinates of tracked face
textColor = [255, 0, 0];
textLocation = [1 1];
text =  ['x: ',num2str(bbox(1)),' y: ',num2str(bbox(2)),' Width: ',num2str(bbox(3)), ' Height: ',num2str(bbox(4))];
textInserter = vision.TextInserter(text,'Color', textColor, 'FontSize', 12, 'Location', textLocation);
videoOut = step(textInserter, videoFrame);

% Draw the returned bounding box around the detected face.
videoOut = insertObjectAnnotation(videoOut,'rectangle',bbox,'Face');
figure, imshow(videoOut), title('Detected face');

% Get the skin tone information by extracting the Hue from the video frame
% converted to the HSV color space.
[hueChannel,~,~] = rgb2hsv(videoFrame);

% Display the Hue Channel data and draw the bounding box around the face.
figure, imshow(hueChannel), title('Hue channel data');
rectangle('Position',bbox(1,:),'LineWidth',2,'EdgeColor',[1 1 0])
% Detect the nose within the face region. The nose provides a more accurate
% measure of the skin tone because it does not contain any background
% pixels.
noseDetector = vision.CascadeObjectDetector('Nose');
faceImage    = imcrop(videoFrame,bbox(1,:));
noseBBox     = step(noseDetector,faceImage);

% The nose bounding box is defined relative to the cropped face image.
% Adjust the nose bounding box so that it is relative to the original video
% frame.
noseBBox(1,1:2) = noseBBox(1,1:2) + bbox(1,1:2);

% Create a tracker object.
tracker = vision.HistogramBasedTracker;

% Initialize the tracker histogram using the Hue channel pixels from the
% nose.
initializeObject(tracker, hueChannel, noseBBox(1,:));

% Create a video player object for displaying video frames.
ROI = get(vidDevice,'ROI');
videoSize = [ROI(3) ROI(4)];
videoPlayer  = vision.VideoPlayer('Position',[300 300 videoSize(1:2)+30]);


% Track the face over successive video frames until the video is finished.
%You could set here a finite number of frames to capture
while 1
    % Extract the next video frame
    videoFrame = step(vidDevice);

    % RGB -> HSV
    [hueChannel,~,~] = rgb2hsv(videoFrame);

    % Track using the Hue channel data
    bbox = step(tracker, hueChannel);

    % Insert a bounding box around the object being tracked
    videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
 
    %Insert text coordinates
    text =  ['x: ',num2str(bbox(1)),' y: ',num2str(bbox(2)),' Width: ',num2str(bbox(3)), ' Height: ',num2str(bbox(4))];
    textInserter = vision.TextInserter(text,'Color', textColor, 'FontSize', 12, 'Location', textLocation);
    videoOut = step(textInserter,videoOut);
 
    % Display the annotated video frame using the video player object
    step(videoPlayer, videoOut);
end

% Release resources
release(vidDevice);
release(videoPlayer);

end
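If you want to experiment with the same idea outside MATLAB, the hue extraction that drives the tracker (rgb2hsv, keeping only the first channel) is easy to reproduce. The following Python/NumPy snippet is my own illustrative translation of that conversion; it is not part of the MathWorks example:

```python
import numpy as np

def hue_channel(rgb):
    """Return the hue channel (range [0, 1]) of an RGB image,
    matching the hue output of MATLAB's rgb2hsv."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    delta = mx - mn
    hue = np.zeros_like(mx)
    nz = delta > 0  # gray pixels (delta == 0) keep hue 0
    # Piecewise formula: whichever channel is the maximum decides the sector.
    rmax = nz & (mx == r)
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4
    return hue / 6.0  # scale sectors (0..6) into [0, 1]

# Pure red has hue 0, pure green 1/3, pure blue 2/3.
frame = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
print(hue_channel(frame))  # hues: 0, 1/3, 2/3
```

The tracker then builds a histogram of these hue values inside the nose box and back-projects it onto each new frame.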

FACE DETECTION USING MATLAB

Face Detection:



In this tutorial, I present a face detection system that finds faces using a skin segmentation technique. It is intended to provide insight into developing face detection based on skin color cues, and hopefully gives a good starting point for those interested in building such a system.


There are other methods of face detection in an image, but I structured my own approach and it works very well.


My Method:


1. Histogram Equalization
2. Skin Detection and Segmentation
3. Filling The Holes
4. Eliminating Pixels Below a Threshold
5. Putting Bounding Boxes Around Detected Faces And Counting
Step 1:


Brightness Preserving Dynamic Histogram Equalization:


Code:
close all;
clear all;
clc;
rgbInputImage = imread('Your File Here');
%rgbInputImage=getsnapshot(rgbInputImage);
labInputImage = applycform(rgbInputImage,makecform('srgb2lab'));
Lbpdfhe = fcnBPDFHE(labInputImage(:,:,1));
labOutputImage = cat(3,Lbpdfhe,labInputImage(:,:,2),labInputImage(:,:,3));
rgbOutputImage = applycform(labOutputImage,makecform('lab2srgb'));
figure, imshow(rgbInputImage);
figure, imshow(rgbOutputImage);
img=rgbOutputImage;
final_image = zeros(size(img,1), size(img,2));
This routine calls the function fcnBPDFHE (Brightness Preserving Dynamic Histogram Equalization). You need to have that function saved in your MATLAB directory.
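fcnBPDFHE is not shipped with MATLAB; you need to obtain it separately (e.g. from the MATLAB File Exchange). If you just want to see what equalizing the lightness channel does, here is a plain global histogram equalization sketch in Python; note this is ordinary equalization, not the brightness-preserving BPDFHE variant used above:

```python
import numpy as np

def equalize_hist(gray):
    """Ordinary global histogram equalization for a uint8 image:
    map intensities through the normalized cumulative histogram."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied bin of the CDF
    n = gray.size
    # Stretch the CDF so the darkest used level maps to 0 and the brightest to 255.
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

# A low-contrast ramp (levels 100..131) gets stretched to span the full range.
img = np.tile(np.arange(100, 132, dtype=np.uint8), (32, 1))
out = equalize_hist(img)
print(out.min(), out.max())  # 0 255
```

BPDFHE refines this idea by partitioning the histogram so the mean brightness of the image is preserved.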


Step 2:

Detect Skin Regions:

The next step is to detect the skin regions in the image. Photos in which people are fully clothed give the best results, since the only exposed skin is then the face, the complexity of the code is reduced, and complex procedures such as neural networks or template matching are not required. But this code does a good job of identifying faces even when some other skin is visible in the image.
Code For Skin Detection & Segmentation:
if(size(img, 3) > 1)
    for i = 1:size(img,1)
        for j = 1:size(img,2)
            R = img(i,j,1);
            G = img(i,j,2);
            B = img(i,j,3);
            if(R > 92 && G > 40 && B > 20)
                v = [R,G,B];
                if((max(v) - min(v)) > 15)
                    if(abs(R-G) > 15 && R > G && R > B)
                        %it is skin
                        final_image(i,j) = 1;
                    end
                end
            end
        end
    end
end
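To make the rule itself easy to sanity-check, here is the same per-pixel condition as a standalone Python function. It is only an illustration of the thresholds used above, not a replacement for the MATLAB loop:

```python
def is_skin(r, g, b):
    """RGB skin heuristic from the segmentation loop above:
    bright-enough red, sufficient channel spread, and red dominance."""
    return (r > 92 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

print(is_skin(220, 170, 140))  # True  (a typical light skin tone)
print(is_skin(40, 60, 200))    # False (a blue background pixel)
```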

Step 3:
Fill In The Holes:

So far so good. The idea is to let MATLAB identify white blobs in the image, but broken blobs get identified as separate pieces, and this makes it difficult to detect faces accurately. In the above picture, the Chinese woman's face has a big black cut running between the two eyes, and the same with the Chinese man.
What we do is fill the gaps with white so that each face becomes one solid white blob.
MATLAB has a built-in function for this called 'imfill'. Type help imfill in MATLAB to find out more about the function.
Before performing 'imfill' we need to convert our grayscale image into a binary image.
'BW = im2bw(I, level);' converts the grayscale image I to a binary image. The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). You specify level in the range [0,1], regardless of the class of the input image. The function 'graythresh' can be used to compute the level argument automatically. If you do not specify level, im2bw uses the value 0.5.

%Grayscale To Binary.
binaryImage=im2bw(final_image,0.6);
figure, imshow(binaryImage);

%Filling The Holes.
binaryImage = imfill(binaryImage, 'holes');
figure, imshow(binaryImage);
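Conceptually, imfill(BW,'holes') marks every background pixel that cannot be reached from the image border as foreground. A small pure-Python sketch of that idea follows (4-connected flood fill from the border; illustrative only, not MATLAB's actual implementation):

```python
from collections import deque

def fill_holes(mask):
    """Fill holes in a binary mask (list of lists of 0/1):
    background pixels unreachable from the border become 1."""
    h, w = len(mask), len(mask[0])
    reach = [[False] * w for _ in range(h)]
    # Seed the flood fill with every background pixel on the border.
    q = deque((i, j) for i in range(h) for j in range(w)
              if (i in (0, h - 1) or j in (0, w - 1)) and mask[i][j] == 0)
    for i, j in q:
        reach[i][j] = True
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] == 0 and not reach[ni][nj]:
                reach[ni][nj] = True
                q.append((ni, nj))
    # Everything that is not border-reachable background is foreground.
    return [[0 if reach[i][j] else 1 for j in range(w)] for i in range(h)]

ring = [[1, 1, 1],
        [1, 0, 1],   # the center pixel is a hole
        [1, 1, 1]]
print(fill_holes(ring))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```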


Steps 4 & 5:
Eliminating Small Blobs, Then Putting Bounding Boxes Around the Detected Faces and Counting Them

In this routine we first eliminate white blobs whose pixel count falls below a threshold. For example, if a face occupies, say, 2000 pixels, then anything below that is not a face; such pixels are discarded, leaving a new binary image with only the face blobs.
The basic steps are as follows:
1. Determine the connected components.
2. Compute the area of each component.
3. Remove small objects.
Check out MATLAB Help for more information on these functions.






FINAL IMAGE WITH BOUNDING BOXES AROUND DETECTED FACES:


Code:
binaryImage = bwareaopen(binaryImage,1890);   
figure,imshow(binaryImage);
labeledImage = bwlabel(binaryImage, 8);
blobMeasurements = regionprops(labeledImage, final_image, 'all');
numberOfPeople = size(blobMeasurements, 1)
imagesc(rgbInputImage); title('Outlines, from bwboundaries()'); 
%axis square;
hold on;
boundaries = bwboundaries(binaryImage);
for k = 1 : numberOfPeople
    thisBoundary = boundaries{k};
    plot(thisBoundary(:,2), thisBoundary(:,1), 'g', 'LineWidth', 2);
end
% hold off;


imagesc(rgbInputImage);
hold on;
title('Original with bounding boxes');
%fprintf(1,'Blob # x1 x2 y1 y2\n');
for k = 1 : numberOfPeople % Loop through all blobs.
    % Get the bounding box of each blob. (R2008a and later let you pass the
    % original image directly into regionprops; the way below works for all
    % versions, including earlier ones.)
    thisBlobsBox = blobMeasurements(k).BoundingBox;
    x1 = thisBlobsBox(1);
    y1 = thisBlobsBox(2);
    x2 = x1 + thisBlobsBox(3);
    y2 = y1 + thisBlobsBox(4);

    % fprintf(1,'#%d %.1f %.1f %.1f %.1f\n', k, x1, x2, y1, y2);
    x = [x1 x2 x2 x1 x1];
    y = [y1 y1 y2 y2 y1];
    %subplot(3,4,2);
    plot(x, y, 'LineWidth', 2);
end


%figure, imshow(labeledImage);
%B = bwboundaries(binaryImage);
%imshow(B);
%text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
%hold on
%for k = 1:length(B)
%boundary = B{k};
%plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 0.2)
%end
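The labelling-and-area-filter portion of the pipeline (what bwlabel plus bwareaopen do above) can be sketched in plain Python as well. This is an illustrative 8-connected version of the idea, not the toolbox code:

```python
from collections import deque

def remove_small_blobs(mask, min_area):
    """8-connected component labelling with an area filter,
    mimicking bwlabel + bwareaopen on a 0/1 grid."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] and not seen[si][sj]:
                comp, q = [], deque([(si, sj)])
                seen[si][sj] = True
                while q:  # flood-fill one component
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not seen[ni][nj]:
                                seen[ni][nj] = True
                                q.append((ni, nj))
                if len(comp) >= min_area:  # keep only large blobs
                    for i, j in comp:
                        out[i][j] = 1
    return out

grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 1]]          # a 4-pixel blob and a lone pixel
print(remove_small_blobs(grid, 3))  # the lone pixel is removed
```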

BALL DETECTION using MATLAB

TABLE TENNIS BALL DETECTION - MATLAB CODE:


Using the concept of roundness to detect a circular object, a table tennis ball is detected either in mid air or in the palm of a hand.

FLOW CHART:


1.     Read input image
·        Read an RGB image with the ball in mid air or in the palm.
                   MATLAB CODE:
                       %Read the input image
         Img=imread(Filename);
         axes('Position',[0 .1 .74 .8],'xtick',[],'ytick',[]);
         imshow(Img);title('Original Image');


2.     Image Pre-processing
·        Convert the RGB image to a grayscale image.
·        Apply a median filter.
·        Adjust the brightness and contrast of the image using the ‘imadjust’ function.
                      MATLAB CODE:
           I=rgb2gray(Img);     % Converting RGB Image to
                                % Gray Scale Image
           I=im2double(I);      % Converting Gray scale Image
                                % to Double type
           J = medfilt2(I,[3 3]); % Median Filter , 
                                  % 3x3 Convolution
                                  % on Image
           I2 = imadjust(J);     % Improve image quality by adjusting
                                 % contrast and brightness values


3.     Threshold the image
·        Pixel values greater than the threshold are set to one, the rest to zero.
                     MATLAB CODE:
           Ib = I2> 0.9627;

4.     Image Labeling
·        Label the connected components using ‘bwlabel’ function
·        Remove components that are smaller in size.
                     MATLAB CODE:
           %Labelling
           [Label,total] = bwlabel(Ib,4); % Indexing segments by
                                          % binary label function
            %Remove components that are small
            for i=1:total
                if(sum(sum(Label==i)) < 500 )

                    Label(Label==i)=0;
  
                end
            end
5.     Find the image properties: Area, Perimeter and Centroid
·        Using the ‘regionprops’ function, find the Area, Perimeter, Bounding Box and Centroid.
                     MATLAB CODE:
            %Find the properties of the image
             Sdata = regionprops(Label,'all'); 
6.     Calculate the Roundness
·        Roundness = 4*PI*A/P^2
           MATLAB CODE:
                %Find the components number
          Un=unique(Label);
          my_max=0.0;
            %Check the Roundness metrics
            %Roundness=4*PI*Area/Perimeter.^2
            for i=2:numel(Un)
               Roundness=(4*pi*Sdata(Un(i)).Area)/Sdata(Un(i)).Perimeter.^2;
               my_max=max(my_max,Roundness);
                if(Roundness==my_max)
                   ele=Un(i);
                end
            end
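As a sanity check on the metric: Roundness = 4*PI*A/P^2 equals 1 for an ideal circle (A = pi*r^2, P = 2*pi*r) and drops for other shapes; for example, a square of side s gives 4*pi*s^2/(4s)^2 = pi/4, about 0.785. A quick illustration in Python:

```python
import math

def roundness(area, perimeter):
    """Roundness metric used above: 1.0 for an ideal circle."""
    return 4 * math.pi * area / perimeter ** 2

r, s = 5.0, 5.0
print(roundness(math.pi * r**2, 2 * math.pi * r))  # approx. 1.0 (circle)
print(roundness(s * s, 4 * s))                     # approx. pi/4 (square)
```

In practice the perimeter measured from pixel boundaries is noisy, which is why the code above simply keeps the component with the maximum roundness rather than comparing against 1 exactly.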
7.     Find the component with the maximum roundness value
·        Find the max of the Roundness value for all the labeled components
8.     Show the detected table tennis ball
·        Use the ‘BoundingBox’ values to plot rectangle around the ball
·        Mark the centroid of the ball
                     MATLAB CODE:
          %Draw the box around the ball
           box=Sdata(ele).BoundingBox;
           box(1,1:2)=box(1,1:2)-15;
           box(1,3:4)=box(1,3:4)+25;
          %Crop the image
           C=imcrop(Img,box);
          %Find the centroid
          cen=Sdata(ele).Centroid;
          %Display the image
           axes('Position',[0 .1 .74 .8],'xtick',[],'ytick',[])
           imshow(Img);
           hold on
           plot(cen(1,1),cen(1,2),'rx');%Mark the centroid



9.     Generate report
·        Find the radius using the EquivDiameter obtained from the ‘regionprops’ function.
·        Display the radius, Area, Perimeter and Centroid of the ball.
·        Show the binary and original images of the cropped ball.
MATLAB CODE:
        Rad=(Sdata(ele).EquivDiameter)/2;
        Rad=strcat('Radius of the Ball :',num2str(Rad));

        Area=Sdata(ele).Area;
        Area=strcat('Area of the ball:',num2str(Area));

        Pmt=Sdata(ele).Perimeter;
        Pmt=strcat('Perimeter of the ball:',num2str(Pmt));

        Cen=Sdata(ele).Centroid;
        Cent=strcat('Centroid:',num2str(Cen(1,1)),',',num2str(Cen(1,2)));






BALL IN MID AIR:
 

MATLAB Function ‘regionprops’

Find Area, Perimeter, Centroid, EquivDiameter, Roundness and Bounding Box without Using the MATLAB Function ‘regionprops’

MATLAB CODE: 

%Measure basic image properties without using the 'regionprops' function
%Measure Area, Perimeter, Centroid, EquivDiameter, Roundness and Bounding Box
clc
%Read Original Image
I=imread('coins.png');
%Convert to Binary
B=im2bw(I);
                                                 
%Fill the holes
C=imfill(B,'holes');

%Label the image
[Label,Total]=bwlabel(C,8);
%Object Number
num=4;
[row, col] = find(Label==num);



%To find Bounding Box
sx=min(col)-0.5;
sy=min(row)-0.5;
breadth=max(col)-min(col)+1;
len=max(row)-min(row)+1;
BBox=[sx sy breadth len];
display(BBox);
figure,imshow(I);
hold on;
x=zeros([1 5]);
y=zeros([1 5]);
x(:)=BBox(1);
y(:)=BBox(2);
x(2:3)=BBox(1)+BBox(3);
y(3:4)=BBox(2)+BBox(4);
plot(x,y);



%Find Area
Obj_area=numel(row);
display(Obj_area);
%Find Centroid
X=mean(col);
Y=mean(row);
Centroid=[X Y];
display(Centroid);
plot(X,Y,'ro','color','r');
hold off;


%Find Perimeter
BW=bwboundaries(Label==num);
c=cell2mat(BW(1));
Perimeter=0;
for i=1:size(c,1)-1
    Perimeter=Perimeter+sqrt((c(i,1)-c(i+1,1)).^2+(c(i,2)-c(i+1,2)).^2);
end
display(Perimeter);
                                

%Find Equivdiameter
EquivD=sqrt(4*(Obj_area)/pi);
display(EquivD);


%Find Roundness
Roundness=(4*Obj_area*pi)/Perimeter.^2;
display(Roundness);
                          


%Calculation with 'regionprops'(For verification Purpose);
%Sdata=regionprops(Label,'all');
%Sdata(num).BoundingBox
%Sdata(num).Area
%Sdata(num).Centroid
%Sdata(num).Perimeter
%Sdata(num).EquivDiameter
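For verification outside MATLAB, the same arithmetic can be reproduced from a bare list of pixel coordinates. Here is an illustrative Python sketch of the area, centroid, bounding box and EquivDiameter computations above (EquivDiameter being the diameter of a circle with the same area):

```python
import math

def blob_props(pixels):
    """Area, centroid, bounding box and EquivDiameter from a list of
    (row, col) pixel coordinates, following the MATLAB code above."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    area = len(pixels)
    centroid = (sum(cols) / area, sum(rows) / area)     # (X, Y)
    # MATLAB-style box: half-pixel offset, width/height in whole pixels.
    bbox = (min(cols) - 0.5, min(rows) - 0.5,
            max(cols) - min(cols) + 1, max(rows) - min(rows) + 1)
    equiv_d = math.sqrt(4 * area / math.pi)
    return area, centroid, bbox, equiv_d

# A solid block of pixels spanning rows 1..3 and columns 1..4:
pix = [(r, c) for r in range(1, 4) for c in range(1, 5)]
area, cen, bbox, eqd = blob_props(pix)
print(area, cen, bbox)  # 12 (2.5, 2.0) (0.5, 0.5, 4, 3)
```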

FACE DETECTION - MATLAB CODE

Let's see how to detect the face, nose, mouth and eyes using MATLAB's built-in classes and functions. The Computer Vision System Toolbox provides the vision.CascadeObjectDetector System object, which detects objects using the Viola-Jones face detection algorithm.

   Prerequisite: Computer Vision System Toolbox

FACE DETECTION:

clear all
clc
%Detect objects using Viola-Jones Algorithm
%To detect Face
FDetect = vision.CascadeObjectDetector;
%Read the input image
I = imread('HarryPotter.jpg');
%Returns Bounding Box values based on number of objects
BB = step(FDetect,I);
figure,
imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-','EdgeColor','r');
end
title('Face Detection');
hold off;
 
step(FDetect,I) returns the bounding boxes of the objects of interest, one row of [x, y, width, height] per detection:
BB =
    52    38    73    73
   379    84    71    71
   198    57    72    72

NOSE DETECTION:

%To detect Nose
NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);
BB=step(NoseDetect,I);
figure,
imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');
end
title('Nose Detection');
hold off;
 

EXPLANATION:


To denote the object of interest as a nose, the argument 'Nose' is passed.

vision.CascadeObjectDetector('Nose','MergeThreshold',16);

The default syntax for Nose detection :
vision.CascadeObjectDetector('Nose');

Based on the input image, we can modify the default values of the parameters passed to vision.CascadeObjectDetector. Here the default value of 'MergeThreshold' is 4.

When the default value of 'MergeThreshold' is used, the result shows multiple overlapping detections around the same object.

To avoid multiple detections around an object, the 'MergeThreshold' value can be raised, here to 16.

MOUTH DETECTION:

%To detect Mouth
MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',16);
BB=step(MouthDetect,I);
figure,
imshow(I); hold on
for i = 1:size(BB,1)
 rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
end
title('Mouth Detection');
hold off;
 

EYE DETECTION:

%To detect Eyes
EyeDetect = vision.CascadeObjectDetector('EyePairBig');
%Read the input Image
I = imread('harry_potter.jpg');
BB=step(EyeDetect,I);
figure,imshow(I);
rectangle('Position',BB,'LineWidth',4,'LineStyle','-','EdgeColor','b');
title('Eyes Detection');
Eyes=imcrop(I,BB);
figure,imshow(Eyes);
 
 
I will discuss more about object detection, and how to train detectors to identify objects of our interest, in upcoming posts. Keep reading for updates.