Anybody Can Do DeepFakes with Colab, Even Non-Coders! (2020)

Madesh Selvarani
5 min read · Apr 27, 2020

--

You don't need any knowledge of deepfakes or the maths behind the model. All you need is a basic understanding of the command line. There is even a GUI version available: just load the files you need and play with it!

A group of developers has made deepfakes very easy to play with. Below is the link to their GitHub.

https://github.com/deepfakes/faceswap

Before going through the method, please read the ethical guidelines below.

FaceSwap ethical uses:

When faceswapping was first developed and published, the technology was groundbreaking, it was a huge step in AI development. It was also completely ignored outside of academia because the code was confusing and fragmentary. It required a thorough understanding of complicated AI techniques and took a lot of effort to figure it out. Until one individual brought it together into a single, cohesive collection. It ran, it worked, and as is so often the way with new technology emerging on the internet, it was immediately used to create inappropriate content. Despite the inappropriate uses the software was given originally, it was the first AI code that anyone could download, run and learn by experimentation without having a Ph.D. in math, computer theory, psychology, and more. Before “deepfakes” these techniques were like black magic, only practiced by those who could understand all of the inner workings as described in esoteric and endlessly complicated books and papers.

“Deepfakes” changed all that and anyone could participate in AI development. To us, developers, the release of this code opened up a fantastic learning opportunity. It allowed us to build on ideas developed by others, collaborate with a variety of skilled coders, experiment with AI whilst learning new skills and ultimately contribute towards an emerging technology which will only see more mainstream use as it progresses.

Are there some out there doing horrible things with similar software? Yes. And because of this, the developers have been following strict ethical standards. Many of us don’t even use it to create videos, we just tinker with the code to see what it does. Sadly, the media concentrates only on the unethical uses of this software. That is, unfortunately, the nature of how it was first exposed to the public, but it is not representative of why it was created, how we use it now, or what we see in its future. Like any technology, it can be used for good or it can be abused. It is our intention to develop FaceSwap in a way that its potential for abuse is minimized whilst maximizing its potential as a tool for learning, experimenting and, yes, for legitimate faceswapping.

We are not trying to denigrate celebrities or to demean anyone. We are programmers, we are engineers, we are Hollywood VFX artists, we are activists, we are hobbyists, we are human beings. To this end, we feel that it’s time to come out with a standard statement of what this software is and isn’t as far as us developers are concerned.

  • FaceSwap is not for creating inappropriate content.
  • FaceSwap is not for changing faces without consent or with the intent of hiding its use.
  • FaceSwap is not for any illicit, unethical, or questionable purposes.
  • FaceSwap exists to experiment and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.

Don't knowingly use this to hurt anyone. KARMA IS A BITCH!

Open a Colab notebook and follow the code below.
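Here is a minimal sketch of that first cell, assuming a fresh Colab runtime; the exact name of the requirements file can differ between versions of the repo, so check what the repo actually contains.

```
# Clone the faceswap repository and move into it
!git clone https://github.com/deepfakes/faceswap.git
%cd faceswap

# Install the Python dependencies (the requirements file name may vary by version)
!pip install -r requirements.txt
```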

The cell above downloads the git repo and installs it in your notebook. The GUI is not supported in Colab; try the same steps in a conda environment or an Ubuntu terminal if you want to use it.

The extract function detects the faces in your input video file and saves them into the extract_video directory.
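A rough example of that extract cell, assuming you have uploaded your first video as /content/video_a.mp4 (the paths are placeholders, adjust them to wherever your files live):

```
# Detect and extract aligned faces from the first video into extract_video
!python faceswap.py extract -i /content/video_a.mp4 -o /content/extract_video
```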

We need to execute the same extract step again for the second video, the one whose face we want to swap into the first.
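For example, something like this for the second video (again, /content/video_b.mp4 and the output folder name are just placeholders):

```
# Repeat the extraction for the second video, the face we want to swap in
!python faceswap.py extract -i /content/video_b.mp4 -o /content/extract_video_b
```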

After the information has been extracted from both videos, we are going to train the model on these features.
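A sketch of the training cell, assuming the two face folders from the extract steps above and the model directory name used later in this post:

```
# Train the swap model on both face sets; weights are saved to model_data_new_videos
!python faceswap.py train -A /content/extract_video -B /content/extract_video_b -m /content/model_data_new_videos
```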

Let the model train and monitor the loss values for both faces. When you feel the loss is low enough, the model has probably learned enough features. There will be an empty text box in the cell output: click it and press 'enter'. Training will stop and the model will be saved in 'model_data_new_videos'.

To play with every step, execute the extract, train, or convert command with '-h'; you will find more information and all the parameters you can work with.
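For example:

```
# Print every available option for each step
!python faceswap.py extract -h
!python faceswap.py train -h
!python faceswap.py convert -h
```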

The final step is to execute the convert function, which loads the trained model and writes the output. If you don't follow the exact code, your output will be a huge number of pictures, one per frame, instead of a video.
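Something along these lines should work; the -w ffmpeg writer tells faceswap to stitch the swapped frames back into a video instead of dumping individual images (if convert cannot find the alignments file from the extract step automatically, point to it with -al). Paths here are the same placeholders as above.

```
# Swap the trained face onto the first video and write the result as a video file
!python faceswap.py convert -i /content/video_a.mp4 -o /content/output -m /content/model_data_new_videos -w ffmpeg
```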

Check out the video for the full implementation with the code:

link!
