Make Your Animal Crossing Character React to Your Gestures With Machine Learning

Mathew Chan
Nov 24, 2020 · 4 min read
I made my Animal Crossing character react to my gestures!


By training models in Google’s Teachable Machine to learn our gestures, we can swing our hands or make a face to send the corresponding reaction command through Animal Crossing’s API.

Reverse Engineer Animal Crossing’s API

The NSO (Nintendo Switch Online) app on the phone allows us to send reaction commands to the game. Using a tool called mitmproxy, we can inspect the requests our phone sends and replay the reaction command ourselves.

brew install mitmproxy

Or use pip install mitmproxy.

mitmproxy -h

With your phone connected to the same network as your computer, visit mitm.it in your phone’s browser and install the mitmproxy certificate. Then, in your phone’s Wi-Fi settings, add a manual proxy that points to your computer’s IP address (mitmproxy listens on port 8080 by default).

Now launch the NSO app on your phone and play around with the Animal Crossing app. You should see your phone’s requests coming in through the mitmproxy terminal. Send a few reactions from the phone to find out their request format.

Reactions on the Nintendo Switch App
request list in mitmproxy terminal

The request endpoint for messaging and reactions is api/sd/v1/messages. Click on it and you should see the cookies and form data of this POST request.

request details in mitmproxy terminal

The POST data is as follows.

{
  "body": "Smiling",
  "type": "emoticon"
}

Tip: Press q in the mitmproxy terminal to return to the request list.

These are some of the reaction types I’ve collected: Hello, Greeting, HappyFlower, Negative, Apologize, Aha, QuestionMark…

List of Reaction Values

Note: I don’t have all the reactions in my game right now. It would be great if anyone could provide the other reaction values.

Accessing the Nintendo Switch API

Accessing the Nintendo Switch API requires making multiple requests to Nintendo’s servers with an authentication token.

Successful authentication will give us three values:

  • _g_token cookie
  • _park_session cookie
  • authentication bearer token

Test and see if it works :)
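As a sketch, the reaction request built from these three values could look like the following. The host, token values, and JSON body format are placeholders and assumptions — substitute whatever you actually capture in mitmproxy.

```python
# Sketch: build the reaction POST captured in mitmproxy.
# BASE_URL and all credential values below are placeholders.
import json
import urllib.request

BASE_URL = "https://example-park-host"  # replace with the host seen in mitmproxy


def build_reaction_request(reaction, g_token, park_session, bearer_token):
    """Build (but do not send) the POST request for a reaction."""
    # The body format shown in mitmproxy was {"body": ..., "type": "emoticon"};
    # whether it is JSON- or form-encoded depends on your own capture.
    payload = json.dumps({"body": reaction, "type": "emoticon"}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/sd/v1/messages", data=payload, method="POST"
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Bearer {bearer_token}")
    req.add_header("Cookie", f"_g_token={g_token}; _park_session={park_session}")
    return req


# Sending it would then be: urllib.request.urlopen(build_reaction_request(...))
```

Building the request separately from sending it makes it easy to inspect the headers and compare them against the mitmproxy capture before firing anything at Nintendo’s servers.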


Teachable Machine

Google’s Teachable Machine is an easy-to-use online tool for training models to recognize images, sounds, and poses. If you’re new to machine learning, I highly recommend watching Google’s five-minute tutorial.

First create a Pose Project.

Choose Webcam for Pose Samples. Name your first class neutral and record yourself without any gestures. Then add extra classes such as clapping or waving. You can be as creative as you want.
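The class names you pick here can later be mapped to the reaction values collected earlier. A minimal sketch of such a mapping (the class names and pairings are just examples):

```python
# Hypothetical mapping from Teachable Machine class names to the
# reaction values collected from mitmproxy; extend with your own classes.
GESTURE_TO_REACTION = {
    "neutral": None,  # idle pose: send nothing
    "clapping": "HappyFlower",
    "waving": "Greeting",
}


def reaction_for(pose_class):
    """Look up which reaction (if any) a detected pose should trigger."""
    return GESTURE_TO_REACTION.get(pose_class)
```

Keeping a `None` entry for the neutral class avoids spamming the API while you sit still in front of the webcam.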

When you’re done, press Train. When training is complete, you can test the model in the preview panel. Once you’re satisfied, press Export Model above the preview panel and download the TensorFlow.js model.

We can use the provided TensorFlow.js sample script for a simple user interface. Copy the sample script into an empty HTML file and serve it through Node.js.

npm install http-server -g
cd my-pose-model
http-server

Insert our API call inside the predict() function. The API endpoint should point to our Python server, which relays the reaction to Nintendo.
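The Python server side could be sketched with the standard library alone. Everything here — the port, the /reaction path, and the name query parameter the browser’s predict() would fetch — is an illustrative assumption, not the article’s exact implementation.

```python
# Sketch of a local relay server, assuming predict() in the browser fetches
# e.g. http://localhost:5000/reaction?name=Greeting when a pose is detected.
# Port, path, and parameter names are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class ReactionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        reaction = query.get("name", ["Smiling"])[0]
        # Here you would forward `reaction` to the api/sd/v1/messages
        # endpoint using your _g_token, _park_session, and bearer token.
        self.send_response(200)
        # Allow the Teachable Machine page (a different origin) to call us.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"sent": reaction}).encode())


if __name__ == "__main__":
    HTTPServer(("localhost", 5000), ReactionHandler).serve_forever()
```

The browser page and this server run on different origins, so the Access-Control-Allow-Origin header is needed for the fetch from predict() to succeed.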

Be creative and have fun!


In this project, we learned how to:

  • Reverse engineer private APIs with mitmproxy
  • Send API requests with Python
  • Use Google’s Teachable Machine for ML prototyping
