# MediaPipe Face Detection

## Overview

MediaPipe Face Detection is an ultrafast face detection solution that comes with 6 landmarks and multi-face support. It is based on BlazeFace, a lightweight and well-performing face detector tailored for mobile GPU inference. The detector's super-realtime performance enables it to be applied to any live viewfinder experience that requires an accurate facial region of interest as an input for other task-specific models, such as 3D facial keypoint estimation (e.g., MediaPipe Face Mesh), facial features or expression classification, and face region segmentation. BlazeFace uses a lightweight feature extraction network inspired by, but distinct from, MobileNetV1/V2, a GPU-friendly anchor scheme modified from Single Shot MultiBox Detector (SSD), and an improved tie resolution strategy alternative to non-maximum suppression. For more information about BlazeFace, please see the Resources section.

## Solution APIs

### Configuration Options

Naming style and availability may differ slightly across platforms/languages.

#### model_selection

An integer index, 0 or 1. Use 0 to select a short-range model that works best for faces within 2 meters from the camera, and 1 for a full-range model best for faces within 5 meters. For the full-range option, a sparse model is used for its improved inference speed. Please refer to the model cards for details. Defaults to 0 if not specified.

#### min_detection_confidence

Minimum confidence value ([0.0, 1.0]) from the face detection model for the detection to be considered successful. Defaults to 0.5.

### Output

Naming style may differ slightly across platforms/languages.

#### detections

Collection of detected faces, where each face is represented as a detection proto message that contains a bounding box and 6 key points (right eye, left eye, nose tip, mouth center, right ear tragion, and left ear tragion). The bounding box is composed of xmin and width (both normalized to [0.0, 1.0] by the image width) and ymin and height (both normalized to [0.0, 1.0] by the image height). Each key point is composed of x and y, which are normalized to [0.0, 1.0] by the image width and height respectively. (A short sketch of how these fields can be read in Python appears after the usage example below.)

### Python Solution API

Please first follow the general instructions to install the MediaPipe Python package, then learn more in the companion Python Colab and the usage example below.

Supported configuration options:

* model_selection
* min_detection_confidence

```python
import cv2
import mediapipe as mp
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

# For static images:
IMAGE_FILES = []
with mp_face_detection.FaceDetection(
    model_selection=1, min_detection_confidence=0.5) as face_detection:
  for idx, file in enumerate(IMAGE_FILES):
    image = cv2.imread(file)
    # Convert the BGR image to RGB and process it with MediaPipe Face Detection.
    results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    # Draw face detections of each face.
    if not results.detections:
      continue
    annotated_image = image.copy()
    for detection in results.detections:
      print('Nose tip:')
      print(mp_face_detection.get_key_point(
          detection, mp_face_detection.FaceKeyPoint.NOSE_TIP))
      mp_drawing.draw_detection(annotated_image, detection)
    cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image)

# For webcam input:
cap = cv2.VideoCapture(0)
with mp_face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5) as face_detection:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = face_detection.process(image)

    # Draw the face detection annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.detections:
      for detection in results.detections:
        mp_drawing.draw_detection(image, detection)
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
```
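Each entry in `results.detections` is a detection proto message carrying the fields described under Output above. Below is a minimal sketch of reading those fields directly, assuming a hypothetical `/tmp/portrait.jpg` input image; the nested `location_data` and `score` field names follow the MediaPipe detection proto, and the normalized values are converted to pixels using the image dimensions.

```python
import cv2
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection

# Hypothetical input file; any BGR image readable by OpenCV works.
image = cv2.imread('/tmp/portrait.jpg')
image_height, image_width, _ = image.shape

with mp_face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5) as face_detection:
  results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# results.detections is None when no face is found.
for detection in results.detections or []:
  # Bounding box: xmin/width are normalized by the image width,
  # ymin/height by the image height.
  box = detection.location_data.relative_bounding_box
  print('score:', detection.score[0])
  print('box (px):',
        int(box.xmin * image_width), int(box.ymin * image_height),
        int(box.width * image_width), int(box.height * image_height))
  # The 6 key points, each with x and y normalized to [0.0, 1.0].
  for key_point in mp_face_detection.FaceKeyPoint:
    kp = mp_face_detection.get_key_point(detection, key_point)
    print(key_point.name, int(kp.x * image_width), int(kp.y * image_height))
```

The same width/height scaling applies to the values returned by `mp_face_detection.get_key_point`, since all key point coordinates are normalized by the image dimensions.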
### JavaScript Solution API

Please first see the general introduction on MediaPipe in JavaScript, then learn more in the companion web demo and the following usage example.

Supported configuration options:

* modelSelection
* minDetectionConfidence

```javascript
const videoElement = document.getElementsByClassName('input_video')[0];
const canvasElement = document.getElementsByClassName('output_canvas')[0];
const canvasCtx = canvasElement.getContext('2d');

function onResults(results) {
  // Draw the overlays.
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(
      results.image, 0, 0, canvasElement.width, canvasElement.height);
  if (results.detections.length > 0) {
    drawingUtils.drawRectangle(
        canvasCtx, results.detections[0].boundingBox,
        {color: 'blue', lineWidth: 4, fillColor: '#00000000'});
    drawingUtils.drawLandmarks(canvasCtx, results.detections[0].landmarks, {
      color: 'red',
      radius: 5,
    });
  }
  canvasCtx.restore();
}

const faceDetection = new FaceDetection({locateFile: (file) => {
  return `https://cdn.jsdelivr.net/npm/@mediapipe/face_detection/${file}`;
}});
faceDetection.setOptions({
  modelSelection: 0,
  minDetectionConfidence: 0.5
});
faceDetection.onResults(onResults);

const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceDetection.send({image: videoElement});
  },
  width: 1280,
  height: 720
});
camera.start();
```
### Android Solution API

Please first follow the general instructions to add MediaPipe Gradle dependencies and try the Android Solution API in the companion example Android Studio project, and learn more in the usage example below.

Supported configuration options:

* staticImageMode
* modelSelection

#### Camera Input

```java
// For camera input and result rendering with OpenGL.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(false)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// Initializes a new CameraInput instance and connects it to MediaPipe Face Detection Solution.
CameraInput cameraInput = new CameraInput(this);
cameraInput.setNewFrameListener(
    textureFrame -> faceDetection.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer<FaceDetectionResult> instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceDetectionResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseTip.getX(), noseTip.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceDetectionResult);
      glSurfaceView.requestRender();
    });

// The runnable to start camera after the GLSurfaceView is attached.
glSurfaceView.post(
    () ->
        cameraInput.start(
            this,
            faceDetection.getGlContext(),
            CameraInput.CameraFacing.FRONT,
            glSurfaceView.getWidth(),
            glSurfaceView.getHeight()));
```
#### Image Input

```java
// For reading images from gallery and drawing the output in an ImageView.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(true)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);

// Connects MediaPipe Face Detection Solution to the user-defined ImageView
// instance that allows users to have the custom drawing of the output landmarks
// on it. See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultImageView.java
// as an example.
FaceDetectionResultImageView imageView = new FaceDetectionResultImageView(this);
faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      int width = faceDetectionResult.inputBitmap().getWidth();
      int height = faceDetectionResult.inputBitmap().getHeight();
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip coordinates (pixel values): x=%f, y=%f",
              noseTip.getX() * width, noseTip.getY() * height));
      // Request canvas drawing.
      imageView.setFaceDetectionResult(faceDetectionResult);
      runOnUiThread(() -> imageView.update());
    });
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// ActivityResultLauncher to get an image from the gallery as Bitmap.
ActivityResultLauncher<Intent> imageGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null && result.getResultCode() == RESULT_OK) {
            Bitmap bitmap = null;
            try {
              bitmap =
                  MediaStore.Images.Media.getBitmap(
                      this.getContentResolver(), resultIntent.getData());
              // Please also rotate the Bitmap based on its orientation.
            } catch (IOException e) {
              Log.e(TAG, "Bitmap reading error:" + e);
            }
            if (bitmap != null) {
              faceDetection.send(bitmap);
            }
          }
        });
Intent pickImageIntent = new Intent(Intent.ACTION_PICK);
pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*");
imageGetter.launch(pickImageIntent);
```
#### Video Input

```java
// For video input and result rendering with OpenGL.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(false)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// Initializes a new VideoInput instance and connects it to MediaPipe Face Detection Solution.
VideoInput videoInput = new VideoInput(this);
videoInput.setNewFrameListener(
    textureFrame -> faceDetection.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer<FaceDetectionResult> instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceDetectionResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseTip.getX(), noseTip.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceDetectionResult);
      glSurfaceView.requestRender();
    });

ActivityResultLauncher<Intent> videoGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null) {
            if (result.getResultCode() == RESULT_OK) {
              glSurfaceView.post(
                  () ->
                      videoInput.start(
                          this,
                          resultIntent.getData(),
                          faceDetection.getGlContext(),
                          glSurfaceView.getWidth(),
                          glSurfaceView.getHeight()));
            }
          }
        });
Intent pickVideoIntent = new Intent(Intent.ACTION_PICK);
pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*");
videoGetter.launch(pickVideoIntent);
```
## Example Apps

Please first see general instructions for Android, iOS and desktop on how to build MediaPipe examples.

Note: To visualize a graph, copy the graph and paste it into MediaPipe Visualizer. For more information on how to visualize its associated subgraphs, please see the visualizer documentation.

### Mobile

#### GPU Pipeline

* Graph: mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt
* Android target: (or download prebuilt ARM64 APK) mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu:facedetectiongpu
* iOS target: mediapipe/examples/ios/facedetectiongpu:FaceDetectionGpuApp

#### CPU Pipeline

This is very similar to the GPU pipeline except that at the beginning and the end of the pipeline it performs GPU-to-CPU and CPU-to-GPU image transfer respectively. As a result, the rest of the graph, which shares the same configuration as the GPU pipeline, runs entirely on CPU.

* Graph: mediapipe/graphs/face_detection/face_detection_mobile_cpu.pbtxt
* Android target: (or download prebuilt ARM64 APK) mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectioncpu:facedetectioncpu
* iOS target: mediapipe/examples/ios/facedetectioncpu:FaceDetectionCpuApp

### Desktop

Running on CPU:

* Graph: mediapipe/graphs/face_detection/face_detection_desktop_live.pbtxt
* Target: mediapipe/examples/desktop/face_detection:face_detection_cpu

Running on GPU:

* Graph: mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt
* Target: mediapipe/examples/desktop/face_detection:face_detection_gpu

### Coral

Please refer to these instructions to cross-compile and run MediaPipe examples on the Coral Dev Board.

## Resources

* Paper: BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs (presentation) (poster)
* Models and model cards
* Web demo
* Python Colab