Unity Camera Movement by Touch, Similar to Clash of Clans

Motivation

Whenever you want to create a 2D mobile management game similar to Clash of Clans or SimCity, in which the player interacts with different buildings and interfaces, you need a module or system that manages the player's touch input. On screens so small compared with a TV or a desktop monitor, the only way to determine the player's intent is through gestures that are quite simple for a human to perform, but quite tough for a developer to interpret.

After some research on the topic, and after trying different paid and free solutions, I concluded that there is no module out there that meets my expectations. Clash of Clans is made by a team of professionals with a ton of experience in the field, capable of creating excellent products. So that was the reason to take inspiration from their camera movement system and achieve the same functionality.

Description

Below, I will describe the essential parts of the code to keep everything tight, and comment on what I believe is better than the current solutions on the market.

The first script controls the camera based on user gestures.

In the Update method, we check whether the touch was made on a UI element so we can be sure to ignore it. Inside the first condition, we initialize a bool variable so we know whether the user is touching the screen of the phone with at least one finger or not.

The second "if" is the main part. If _initTouch is false, that means we have at least one finger on the mobile device, so we can do the gesture computation. If _initTouch is true, we apply inertia to the camera, together with the feedback animation if the orthographic size of the camera has reached its minimum value.
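To make this flow concrete, here is a minimal sketch of how such an Update loop can be organized. It is not the exact code from the repository: the field names are placeholders, and the methods it calls are sketched one by one further below.

```csharp
using UnityEngine;

// All code blocks in this article are sketches of parts of one hypothetical
// controller attached to the orthographic camera; they are not the exact
// code from the repository.
public partial class TouchCameraController : MonoBehaviour
{
    private Camera _cam;
    private bool _fingerOnScreen;   // true while a valid (non-UI) gesture is active

    private void Awake()
    {
        _cam = GetComponent<Camera>();
    }

    private void Update()
    {
        // Ignore the whole gesture if it started on a UI element.
        if (Input.touchCount > 0 && !CheckIfUiHasBeenTouched())
        {
            _fingerOnScreen = true;

            if (Input.touchCount == 1)
                Panning();                   // one finger: move the camera
            else
                Pinching();                  // two or more fingers: zoom and pan
        }
        else
        {
            _fingerOnScreen = false;
            ApplyInertia();                  // keep drifting after the fingers lift
            MinOrthoAchievedAnimation();     // spring back if we zoomed past the minimum
        }

        LimitCameraMovement();               // always keep the view inside the boundaries
    }
}
```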

The majority of solutions out there forget the importance of user experience and have plain camera movement without any real-life feedback. I consider inertia, or what some would call drag (not talking about their meaning in physics), an important element of a camera movement system.
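To illustrate what I mean by inertia, here is one possible sketch: the last pan velocity keeps moving the camera after the fingers lift and decays over a fraction of a second. The damping value and the exact decay curve are my own assumptions, not the ones used in the repository.

```csharp
using UnityEngine;

public partial class TouchCameraController : MonoBehaviour
{
    [SerializeField] private float inertiaDamping = 5f;   // how fast the drift dies out

    private Vector2 _inertiaVelocity;   // world units per second, set while panning

    private void ApplyInertia()
    {
        if (_inertiaVelocity.sqrMagnitude < 0.0001f)
            return;

        // Keep moving in the last pan direction, slowing down smoothly.
        transform.position += (Vector3)(_inertiaVelocity * Time.deltaTime);
        _inertiaVelocity = Vector2.Lerp(_inertiaVelocity, Vector2.zero,
                                        inertiaDamping * Time.deltaTime);
    }
}
```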

Here are the main methods used inside the script.

As we discussed earlier, CheckIfUiHasBeenTouched checks whether the touches have been made on UI elements. The verification is a basic one using EventSystem.current.IsPointerOverGameObject(i), where 'i' is the fingerId of the touch. Unfortunately, this approach works only with the old input system and will easily break with the new one. Moreover, there is a small user-experience detail that is always forgotten by other solutions. If you start the touch on a UI element, the camera will not start to move, because this means you want to press the button; as long as you keep the finger down and move it across the screen, nothing will happen. Conversely, if you start the touch on something that is not UI, nothing will happen either if you move the finger over UI elements during the touch, because that means you want to control the camera.
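A sketch of this check could look like the code below. The "decide on the first frame of the gesture and remember it" part is my own illustration of the behavior described above; and, as noted, IsPointerOverGameObject with a fingerId only works reliably with the old input system.

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public partial class TouchCameraController : MonoBehaviour
{
    private bool _gestureStartedOnUi;   // decided on the first frame of the gesture

    private bool CheckIfUiHasBeenTouched()
    {
        Touch first = Input.GetTouch(0);

        // Decide only when the finger first lands; moving over (or off) UI
        // elements later in the same touch does not change the decision.
        if (first.phase == TouchPhase.Began)
        {
            _gestureStartedOnUi = EventSystem.current != null &&
                                  EventSystem.current.IsPointerOverGameObject(first.fingerId);
        }

        return _gestureStartedOnUi;
    }
}
```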

Panning() is responsible for moving the camera with one finger. It calls PanningFunction(touchDeltaPosition), where touchDeltaPosition is the touch.deltaPosition of the moving finger.
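As a hedged sketch, a one-finger pan can convert that screen-space delta into a world-space offset like this; the panSpeed factor and the scaling by the orthographic size are assumptions of mine, not the original values.

```csharp
using UnityEngine;

public partial class TouchCameraController : MonoBehaviour
{
    [SerializeField] private float panSpeed = 1f;

    private void Panning()
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Moved)
            PanningFunction(touch.deltaPosition);
    }

    private void PanningFunction(Vector2 touchDeltaPosition)
    {
        // One screen pixel covers more world distance when the camera is zoomed
        // out, so scale the delta by the current orthographic size.
        float worldUnitsPerPixel = (_cam.orthographicSize * 2f) / Screen.height;
        Vector2 move = -touchDeltaPosition * worldUnitsPerPixel * panSpeed;

        transform.position += (Vector3)move;

        // Remember the pan speed so the inertia step can continue it after release.
        _inertiaVelocity = move / Mathf.Max(Time.deltaTime, 0.0001f);
    }
}
```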

Pinching() is used for zooming the camera and, at the same time, can pan it. This is why it also calls PanningFunction(touchDeltaPosition), where touchDeltaPosition is now a value derived from the deltaPosition of at least two fingers. To determine the zooming rate, a linear equation a*x + b is used in combination with the lerp function. Here the existing solutions are quite unintuitive: they have basic scripts that zoom into the middle of the camera with no panning during the gesture, and everything feels like a mess. Not the experience a player should have.
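Here is a sketch of that idea, with zoomFactorA and zoomFactorB standing in for the a and b of the linear equation; the concrete constants and the overshoot handling are assumptions, not the values from the repository.

```csharp
using UnityEngine;

public partial class TouchCameraController : MonoBehaviour
{
    [SerializeField] private float minOrthoSize = 3f;
    [SerializeField] private float maxOrthoSize = 15f;
    [SerializeField] private float minOrthoOvershoot = 0.5f; // how far past the limit we allow
    [SerializeField] private float zoomFactorA = 0.02f;      // the 'a' of a*x + b
    [SerializeField] private float zoomFactorB = 0f;         // the 'b' of a*x + b

    private void Pinching()
    {
        Touch t0 = Input.GetTouch(0);
        Touch t1 = Input.GetTouch(1);

        // How much the distance between the two fingers changed this frame;
        // positive when the fingers move closer together (zoom out).
        float prevDistance = ((t0.position - t0.deltaPosition) -
                              (t1.position - t1.deltaPosition)).magnitude;
        float currDistance = (t0.position - t1.position).magnitude;
        float pinchDelta = prevDistance - currDistance;

        // Linear mapping of the pinch delta, eased towards the target with a lerp.
        float target = _cam.orthographicSize + (zoomFactorA * pinchDelta + zoomFactorB);
        target = Mathf.Clamp(target, minOrthoSize - minOrthoOvershoot, maxOrthoSize);
        _cam.orthographicSize = Mathf.Lerp(_cam.orthographicSize, target, 0.5f);

        // Pan at the same time with the average movement of the two fingers,
        // so the view follows the gesture instead of zooming into a fixed center.
        PanningFunction((t0.deltaPosition + t1.deltaPosition) * 0.5f);
    }
}
```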

MinOrthoAchievedAnimation() creates the same feedback as in Clash of Clans, so the player can feel where the lower limit of the orthographic size is. It improves the user experience with a small zoom out that feels natural.
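One hedged way to get that effect, building on the overshoot allowed in the pinch sketch above: while no finger is down, the orthographic size eases back up to the real minimum, which reads as a small, natural zoom out.

```csharp
using UnityEngine;

public partial class TouchCameraController : MonoBehaviour
{
    [SerializeField] private float bounceBackSpeed = 6f;

    private void MinOrthoAchievedAnimation()
    {
        // While no finger is down, ease the orthographic size back above the
        // minimum if the pinch overshot it; the small zoom out reads as feedback.
        if (_cam.orthographicSize < minOrthoSize)
        {
            _cam.orthographicSize = Mathf.Lerp(_cam.orthographicSize, minOrthoSize,
                                               bounceBackSpeed * Time.deltaTime);
        }
    }
}
```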

LimitCameraMovement() creates boundaries for the camera's visible area, which are conveniently drawn with Gizmos as green lines. When I say the visible area, I mean that the boundaries are not applied to the center of the camera; instead, a calculation using the orthographic size and the camera aspect ratio keeps the whole view locked inside the green rectangle. Zooming out at the edges will re-center the camera inside the rectangle, so there is no way to escape it.
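A minimal sketch of such a boundary check, assuming hypothetical boundsCenter and boundsSize fields: half of the visible height is the orthographic size, half of the visible width follows from the aspect ratio, and the clamp keeps that whole rectangle inside the green Gizmo rectangle.

```csharp
using UnityEngine;

public partial class TouchCameraController : MonoBehaviour
{
    [SerializeField] private Vector2 boundsCenter = Vector2.zero;
    [SerializeField] private Vector2 boundsSize = new Vector2(40f, 40f);

    private void LimitCameraMovement()
    {
        // Half extents of what the camera can currently see.
        float halfHeight = _cam.orthographicSize;
        float halfWidth = halfHeight * _cam.aspect;

        // Clamp the camera center so the visible rectangle never leaves the bounds.
        float minX = boundsCenter.x - boundsSize.x * 0.5f + halfWidth;
        float maxX = boundsCenter.x + boundsSize.x * 0.5f - halfWidth;
        float minY = boundsCenter.y - boundsSize.y * 0.5f + halfHeight;
        float maxY = boundsCenter.y + boundsSize.y * 0.5f - halfHeight;

        Vector3 p = transform.position;
        // If the view is wider or taller than the bounds, center it instead of clamping.
        p.x = minX > maxX ? boundsCenter.x : Mathf.Clamp(p.x, minX, maxX);
        p.y = minY > maxY ? boundsCenter.y : Mathf.Clamp(p.y, minY, maxY);
        transform.position = p;
    }

    private void OnDrawGizmos()
    {
        // The boundaries show up as green lines in the Scene view.
        Gizmos.color = Color.green;
        Gizmos.DrawWireCube(boundsCenter, boundsSize);
    }
}
```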

Ending

The full technical solution, which is free to use for any non-commercial purpose, can be found here on GitHub:

Thanks for reading! I hope to hear from you on GitHub with further improvements to this solution.
