Make Your Animal Crossing Character React to Your Gestures With Machine Learning
By training a model in Google’s Teachable Machine to recognize our gestures, we can wave our hands or make a face to send the corresponding reaction command to Animal Crossing’s API.
Reverse Engineer Animal Crossing’s API
The NSO app on the phone lets us send reaction commands to the game. Using a tool called mitmproxy, we can inspect the requests our phone sends and replay the reaction command ourselves.
brew install mitmproxy
pip install mitmproxy
Install the mitmproxy certificate on your phone
In your phone’s internet settings, add a manual proxy that points to your computer’s IP address (mitmproxy listens on port 8080 by default). Then, with your phone on the same network as your computer, visit http://mitm.it/ and install the certificate.
Setting-up mitmproxy on macOS to intercept https requests
Sending Requests through Nintendo Switch App
Now launch the NSO app on your phone and play around with the Animal Crossing app. You should see your phone’s requests coming in through the mitmproxy terminal. We can work out the request format for reactions by sending them from the phone.
The request endpoint for messaging and reactions is api/sd/v1/messages. Click on it and you should see the cookies and form data of this POST request.
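Instead of scrolling through the mitmproxy UI, you can also run a small addon script that prints only the reaction requests. Here is a minimal sketch: the `request` hook is mitmproxy’s standard event API, and the filter simply matches the endpoint above.

```python
# reaction_logger.py -- run with: mitmdump -s reaction_logger.py
def is_reaction_request(url: str) -> bool:
    """True if the URL is the NSO messaging/reaction endpoint."""
    return "api/sd/v1/messages" in url

def request(flow) -> None:
    # mitmproxy calls this hook once for every intercepted request
    if is_reaction_request(flow.request.pretty_url):
        print("URL: ", flow.request.pretty_url)
        print("Body:", flow.request.get_text())
```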
The post data is as follows.
Tip: Press q in the mitmproxy terminal to return to the request list.
These are some of the reaction types I’ve collected: Hello, Greeting, HappyFlower, Negative, Apologize, Aha, QuestionMark…
Note: I don’t have all the reactions in my game right now. It would be great if anyone could provide the other reaction values.
Accessing Nintendo Switch API
Access to the Nintendo Switch API requires making multiple requests to Nintendo’s servers to obtain an authentication token.
Intro to Nintendo Switch REST API
Successful authentication will give us three values:
- _g_token cookie
- _park_session cookie
- authentication bearer token
Test and see if it works :)
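With those three values, sending a reaction is a single authenticated POST to the endpoint we found with mitmproxy. Here is a minimal sketch using only the standard library; the host and the JSON field names (`type`, `body`) are assumptions taken from a capture, so verify them against your own session before relying on them.

```python
import json
import urllib.request

# Assumption: this host and the JSON field names below are what a
# mitmproxy capture shows -- double-check against your own session.
API_URL = "https://web.sd.lp1.acbaa.srv.nintendo.net/api/sd/v1/messages"

def build_reaction_request(reaction, g_token, park_session, bearer):
    """Assemble the POST request for one reaction (e.g. 'Greeting')."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"type": "emoticon", "body": reaction}).encode(),
        headers={
            "Authorization": f"Bearer {bearer}",
            "Content-Type": "application/json",
            "Cookie": f"_g_token={g_token}; _park_session={park_session}",
        },
        method="POST",
    )

def send_reaction(reaction, g_token, park_session, bearer):
    """Fire the request and return the HTTP status code."""
    req = build_reaction_request(reaction, g_token, park_session, bearer)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Call `send_reaction("Greeting", ...)` with the three values from the authentication step to test it.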
Train Gestures With Teachable Machine
Google’s Teachable Machine is an easy-to-use online tool for training models to recognize images, sounds, and poses. If you’re new to machine learning, I highly recommend watching Google’s five-minute tutorial.
First, create a Pose Project.
Choose Webcam for the pose samples. Name your first class neutral and record yourself without any gesture, then add extra classes such as clapping or waving. You can be as creative as you want.
When you’re done, press Train. Once training is complete, you can test the model in the preview panel. When you’re satisfied, press Export Model above the preview panel and download the TensorFlow.js model.
We can use the provided TensorFlow.js sample script for a simple user interface. Copy the sample script into an empty HTML file and serve it locally, for example with npm’s http-server package.
npm install http-server -g
Insert our API call inside the predict() function. The API endpoint should point to our Python server, which sends the reaction.
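One way to wire this up is a tiny local server, sketched below with only the standard library. The /react route, the label query parameter, and the gesture-to-reaction mapping are all illustrative assumptions: the browser-side predict() would fetch something like http://localhost:8000/react?label=clapping whenever a class crosses a confidence threshold.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Assumption: map your trained Teachable Machine class names to
# reaction values you have unlocked in the game -- adjust freely.
GESTURE_TO_REACTION = {
    "clapping": "HappyFlower",
    "waving": "Greeting",
}

def reaction_for(label):
    """Return the reaction for a gesture label, or None (e.g. 'neutral')."""
    return GESTURE_TO_REACTION.get(label)

class ReactionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        label = query.get("label", [""])[0]
        reaction = reaction_for(label)
        if reaction:
            # forward `reaction` to the api/sd/v1/messages endpoint here
            pass
        self.send_response(200)
        # the browser's fetch() from another origin needs this CORS header
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(json.dumps({"reaction": reaction}).encode())

# To run: HTTPServer(("localhost", 8000), ReactionHandler).serve_forever()
```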
Be creative and have fun! To recap, in this project we learned how to:
- Reverse engineer private APIs with mitmproxy
- Send API requests with Python
- Use Google’s Teachable Machine for ML prototyping