Play2Live Development Updates: Machine Learning
Play2Live is building algorithms that use computer vision to recognize the actions occurring during a player’s stream. With the help of computer vision, the system aims to detect and process everything that occurs in a given game.
To build this feature, Play2Live is using the latest versions of open source libraries such as OpenCV and Dlib. A well-known obstacle in early machine learning development is the lack of accessible, usable data from which to derive the reference points needed for training. Thankfully, video streaming allows Play2Live to collect raw data from game livestreams or recorded footage and feed it into a machine learning algorithm for training.
We have conducted some experiments with classical computer vision approaches, such as thresholding, edge detectors, and template matching. These classical methods let us compute some basic game states cheaply, saving the CPU time that resource-intensive machine learning would otherwise consume. When a task is more complex, we turn to machine learning approaches such as SVM and k-NN.
However, the linear models used in our initial experimental designs are not applicable to every task. In those cases, we use modern techniques based on deep learning. To test our hypotheses, we use the Keras framework with a TensorFlow backend to train convolutional neural network models for our tasks. This lets us train and validate models very quickly.
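A minimal sketch of such a model in Keras follows. The layer sizes, input resolution, and the four-way "game state" classification are assumptions for illustration, not the production architecture:

```python
# Illustrative sketch: a small CNN for classifying frame crops into game states.
# NUM_CLASSES and the layer sizes are assumptions, not Play2Live's real model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4  # e.g. lobby / in-match / scoreboard / replay (hypothetical)

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data just to exercise the training call; real inputs
# would be labeled frame crops sampled from livestreams.
x = np.random.rand(8, 64, 64, 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=8)
model.fit(x, y, epochs=1, verbose=0)
```

A compact network like this trains in seconds on small labeled datasets, which supports the quick train-and-validate loop described above.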
We are continuing to build, test, and update the underlying systems necessary to bring the full Play2Live experience to our users.