
Cloud Vision API + Camera

Hi again! Eduardo Rodriguez here, with a cool way to use Google’s Cloud Vision API with the camera. Before we start, remember to check out my previous post about custom fonts.

Some background…

A few weeks ago I was discussing a camera feature with a co-worker. We didn’t want to use the traditional way of getting the user’s photo, for two reasons: we wanted to make it a bit more fun, and, since we were trying to make our application as secure as possible, we wanted to make sure the user is actually the person in the picture. Our idea was to ask the user to smile at the camera and only then take the picture.

After some research, I found Google’s face detection API. It ships with Google Play services as the Mobile Vision library (the on-device sibling of Cloud Vision) and, among other things, it can detect faces and report a smiling probability, which is exactly what I needed.

Some things before coding…

So, what we are going to build is an Activity and a couple of Fragments. The way this Activity works is simple: I call my CameraSmileActivity with startActivityForResult, and the Activity returns a String, which is the path to the image the camera took.
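To make that contract concrete, here is a minimal sketch of the calling side, as I would wire it up from another Activity. The request code and the "EXTRA_PHOTO_PATH" extra key are assumptions for illustration, not part of the real CameraSmileActivity:

private val REQUEST_SMILE_PHOTO = 1001 // arbitrary request code for this sketch

fun launchSmileCamera() {
    // CameraSmileActivity takes the photo and returns the saved image's path.
    startActivityForResult(Intent(this, CameraSmileActivity::class.java), REQUEST_SMILE_PHOTO)
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == REQUEST_SMILE_PHOTO && resultCode == Activity.RESULT_OK) {
        // "EXTRA_PHOTO_PATH" is a hypothetical key used only in this sketch.
        val photoPath = data?.getStringExtra("EXTRA_PHOTO_PATH")
        // Load the image from photoPath, upload it, etc.
    }
}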

Let’s get to it, OK?

First of all, we need to add the Cloud Vision dependency to our app-level Gradle file:

implementation 'com.google.android.gms:play-services-vision:19.0.0'

Now we need to add a metadata entry to our manifest so our app can download the necessary face detection dependencies, plus the permissions for camera and storage. That gives us something like this:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.inmediatum.camerasmiledemo">

    <uses-permission android:name="android.permission.CAMERA"/>
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".CameraSmileActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <meta-data
            android:name="com.google.android.gms.vision.DEPENDENCIES"
            android:value="face" />
    </application>

</manifest>
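One thing the manifest alone doesn’t cover: since Android 6.0, CAMERA is a dangerous permission, so it also has to be requested at runtime (you’ll notice the @SuppressLint("MissingPermission") annotation in the fragment later). Here is a minimal sketch of that check, assuming it runs inside the Activity before showing the camera fragment:

private val REQUEST_CAMERA_PERMISSION = 42 // arbitrary request code for this sketch

fun ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
    ) {
        // Ask the user; the result arrives in onRequestPermissionsResult.
        ActivityCompat.requestPermissions(
            this,
            arrayOf(Manifest.permission.CAMERA),
            REQUEST_CAMERA_PERMISSION
        )
    }
}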

Let’s code!

We are ready to start coding, and it’s really simple. All we need to do is create a FaceDetector object (the one that is going to detect the faces) and add it to our CameraSource. Pretty simple, right? Let’s see how it’s done.

This piece of code is the one we use to build a FaceDetector object:

faceDetector = FaceDetector.Builder(context)
    .setTrackingEnabled(false)
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
    .setMode(FaceDetector.FAST_MODE)
    .build()
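Two details matter here: setClassificationType(FaceDetector.ALL_CLASSIFICATIONS) is what enables the smiling probability (without it, isSmilingProbability just returns Face.UNCOMPUTED_PROBABILITY, which is -1), and FAST_MODE trades some accuracy for speed, which is a fair deal for a live camera preview.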

After that, we create a CameraSource object like this:

mCameraSource = CameraSource.Builder(context, faceDetector)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setAutoFocusEnabled(true)
            .build()
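Keep in mind that setRequestedPreviewSize is only a hint: the camera picks the closest supported preview size, so don’t assume you’ll get exactly 1280×1024 back.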

As you can see, the CameraSource builder takes our FaceDetector object. Now all we need to do is set a processor on our FaceDetector: the callback where we check whether a face was detected and whether that face is smiling:

faceDetector.setProcessor(object : Detector.Processor<Face> {
    override fun release() {}

    override fun receiveDetections(detections: Detector.Detections<Face>?) {
        val faces = detections?.detectedItems ?: return

        if (faces.size() > 0) {
            val face = faces.valueAt(0)
            // Fire once when the first detected face is clearly smiling;
            // the takingImage flag keeps us from triggering on every frame.
            if (face.isSmilingProbability >= 0.75f && !takingImage) {
                takingImage = true
                takePicture()
            }
        }
    }
})
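Note that receiveDetections runs for every processed preview frame, which is why the takingImage flag matters: without it, a single sustained smile would call takePicture over and over. The 0.75 threshold is a judgment call; isSmilingProbability ranges from 0.0 to 1.0, so you can make it stricter or more forgiving.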

So, basically, these three pieces are the core of our smiling camera. From there, we can use it however we want.

This is the code of my back camera Fragment, so you can see the full implementation:

class PhotoBackSmileFragment : Fragment(), View.OnClickListener{

    private var listener: OnFragmentInteractionListener? = null

    private var mCameraSource: CameraSource? = null

    lateinit var faceDetector: FaceDetector

    var takingImage = false

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        return inflater.inflate(R.layout.fragment_smile_photo, container, false)
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        changeCameraBtn.setOnClickListener(this)
        startCameraSource()
    }


    fun startCameraSource() {

        faceDetector = FaceDetector.Builder(context)
            .setTrackingEnabled(false)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build()

        // The detector downloads its native dependencies on first use; until that
        // finishes, isOperational is false and no faces can be detected.
        if (!faceDetector.isOperational) {
            return
        }

        mCameraSource = CameraSource.Builder(context, faceDetector)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setAutoFocusEnabled(true)
            .build()

        surface_camera_preview.holder.addCallback(object : SurfaceHolder.Callback {
            override fun surfaceChanged(p0: SurfaceHolder?, p1: Int, p2: Int, p3: Int) {
            }

            override fun surfaceDestroyed(p0: SurfaceHolder?) {
                mCameraSource?.stop()
            }

            @SuppressLint("MissingPermission")
            override fun surfaceCreated(p0: SurfaceHolder?) {
                try {
                    mCameraSource?.start(surface_camera_preview.holder)
                } catch (e: Exception) {
                    Log.e("PhotoBackSmileFragment", "Error starting camera source", e)
                }
            }
        })


        faceDetector.setProcessor(object : Detector.Processor<Face> {
            override fun release() {}

            override fun receiveDetections(detections: Detector.Detections<Face>?) {
                val faces = detections?.detectedItems ?: return

                if (faces.size() > 0) {
                    val face = faces.valueAt(0)
                    // Fire once when the first detected face is clearly smiling;
                    // the takingImage flag keeps us from triggering on every frame.
                    if (face.isSmilingProbability >= 0.75f && !takingImage) {
                        takingImage = true
                        takePicture()
                    }
                }
            }
        })

    }


    override fun onAttach(context: Context) {
        super.onAttach(context)
        if (context is OnFragmentInteractionListener) {
            listener = context
        } else {
            throw RuntimeException("$context must implement OnFragmentInteractionListener")
        }
    }

    override fun onDetach() {
        super.onDetach()
        listener = null
    }

    companion object {
        @JvmStatic
        fun newInstance() =
            PhotoBackSmileFragment().apply {
                arguments = Bundle().apply {

                }
            }
    }

    interface OnFragmentInteractionListener {
        fun onPhotoTaken(photoPath : String)
        fun onLoading()
        fun onDone()
        fun onChangeCamera()

    }


    override fun onClick(v: View?) {
        listener?.onChangeCamera()
    }

    // Method to save a bitmap to a file and return the path of a resized copy
    private fun bitmapToFile(bitmap: Bitmap): String {
        // Get the context wrapper
        val wrapper = ContextWrapper(context)

        // Initialize a new file instance to save the bitmap object
        var file = wrapper.getDir("Images", Context.MODE_PRIVATE)
        file = File(file, "${UUID.randomUUID()}.png")

        try {
            // Compress the bitmap and save it in PNG format
            val stream: OutputStream = FileOutputStream(file)
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
            stream.flush()
            stream.close()
        } catch (e: IOException) {
            e.printStackTrace()
        }

        var resizedFile = wrapper.getDir("Images", Context.MODE_PRIVATE)
        resizedFile = File(resizedFile, "${UUID.randomUUID()}.png")
        saveResizedFile(file, resizedFile)
        return resizedFile.path
    }

    fun saveResizedFile(image: File, newImage: File?) {
        try {
            val b = BitmapFactory.decodeFile(image.path)
            val out: Bitmap = scaleBitmapAndKeepRatio(b, 800, 800)
            val fOut = FileOutputStream(newImage)
            out.compress(Bitmap.CompressFormat.PNG, 100, fOut)
            fOut.flush()
            fOut.close()
            b.recycle()
            out.recycle()
        } catch (e: Exception) {
            // Don't swallow the error silently; at least log it.
            Log.e("PhotoBackSmileFragment", "Error resizing image", e)
        }
    }

    fun scaleBitmapAndKeepRatio(
        targetBmp: Bitmap,
        reqHeightInPixels: Int,
        reqWidthInPixels: Int
    ): Bitmap {
        // Map the source rectangle onto the requested one, preserving aspect ratio.
        val m = Matrix()
        m.setRectToRect(
            RectF(0f, 0f, targetBmp.width.toFloat(), targetBmp.height.toFloat()),
            RectF(0f, 0f, reqWidthInPixels.toFloat(), reqHeightInPixels.toFloat()),
            Matrix.ScaleToFit.CENTER
        )
        return Bitmap.createBitmap(targetBmp, 0, 0, targetBmp.width, targetBmp.height, m, true)
    }

    fun takePicture() {
        activity?.runOnUiThread {
            listener?.onLoading()
        }
        Thread {
            mCameraSource?.takePicture(null, CameraSource.PictureCallback { data ->
                // Decode the JPEG bytes from the camera and save them to a file.
                val bitmap = BitmapFactory.decodeByteArray(data, 0, data.size)
                val imagePath = bitmapToFile(bitmap)

                takingImage = false

                activity?.runOnUiThread {
                    listener?.onDone()
                    listener?.onPhotoTaken(imagePath)
                }
            })
        }.start()
    }
}
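The post doesn’t show CameraSmileActivity itself, but for reference, here is a minimal sketch of how the hosting Activity could hand the path back to the caller. The "EXTRA_PHOTO_PATH" key is the same hypothetical one from the launcher sketch earlier, not something from the original project:

class CameraSmileActivity : AppCompatActivity(),
    PhotoBackSmileFragment.OnFragmentInteractionListener {

    // Called by the fragment once the photo is saved; return the path and close.
    override fun onPhotoTaken(photoPath: String) {
        // "EXTRA_PHOTO_PATH" is a hypothetical key used only in this sketch.
        setResult(Activity.RESULT_OK, Intent().putExtra("EXTRA_PHOTO_PATH", photoPath))
        finish()
    }

    override fun onLoading() { /* show a progress indicator */ }
    override fun onDone() { /* hide the progress indicator */ }
    override fun onChangeCamera() { /* swap to the front camera fragment */ }
}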

I ended up using two Fragments, one for the front camera and one for the back camera, because I was having issues refreshing the SurfaceView when switching cameras; splitting them turned out to be the way to go.

So that’s it. Pretty simple, isn’t it? I hope this helps you as much as it helped me.

Eduardo Rodriguez
