Building an Interactive Login Screen with Flare & Flutter

Our team at 2Dimensions recently came across the RememBear login form interaction: we thought this was a perfect example to build in Flare and share with the community!

The source code is available on GitHub, and the Flare file can be found on 2Dimensions.


First, we need to import the flare_flutter library in pubspec.yaml (N.B. we use a relative path since we’re in the library’s repo, but the package is also available on pub). We also add the assets folder to pubspec.yaml so that its contents are accessible to Flutter.
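For reference, the relevant pubspec.yaml entries look roughly like this (the dependency form is a sketch: use the relative path when working inside the repo, or a regular pub dependency otherwise):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Relative path, as used from within the library's repo;
  # outside the repo, depend on the published flare_flutter package instead.
  flare_flutter:
    path: ../

flutter:
  # Expose the assets folder so Teddy.flr can be loaded at runtime.
  assets:
    - assets/
```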

The relevant files are all in the /lib folder, while the Flare file is in the assets folder:

- input_helper.dart
- main.dart
- signin_button.dart
- teddy_controller.dart
- tracking_text_input.dart
- Teddy.flr

How This Works

Let’s first take a look at Teddy in Flare: this character has a node named ctrl_face which is the Target for the Translation Constraint of the face elements. This means that moving the node will cause all of its dependents to move as well.

By grabbing the reference to the ctrl_face node, we can move Teddy’s face and adjust the direction of his gaze. We’ll just need to find the position of the Text Field below Teddy and adjust the ctrl_face node’s position accordingly.

Into The Code

In main.dart, MyHomePage builds the layout for the app. 
We use the FlareActor widget from the flare_flutter library to place the animation in the view:

// Bind a FlareController
controller: _teddyController

Since we want to manipulate the position of the ctrl_face node, we bind _teddyController to our FlareActor. A controller is a concrete implementation of FlareController, an interface provided by flare_flutter, and it gives us the ability to query and manipulate the Flare hierarchy.
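As a sketch, the wiring inside MyHomePage’s build() can look like this (the alignment, fit, and animation name are assumptions for illustration; only the asset path and controller binding come from the article):

```dart
// Place the Flare animation in the view and bind our custom controller.
FlareActor(
  "assets/Teddy.flr",
  // Layout hints; values here are illustrative assumptions.
  alignment: Alignment.bottomCenter,
  fit: BoxFit.contain,
  // The controller gives us access to the Flare hierarchy every frame.
  controller: _teddyController,
)
```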

Custom Controls

Let’s take a look at the TeddyController class: you’ll notice that TeddyController extends FlareControls, not FlareController.
FlareControls is a concrete implementation of FlareController that flare_flutter already provides, and it has some basic play/mix functionality.
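A minimal skeleton of such a subclass, sketched here to show the shape of the three overrides discussed below (method bodies are placeholders, not the article’s implementation):

```dart
import 'package:flare_dart/math/mat2d.dart';
import 'package:flare_flutter/flare.dart';
import 'package:flare_flutter/flare_controls.dart';

class TeddyController extends FlareControls {
  @override
  void initialize(FlutterActorArtboard artboard) {
    super.initialize(artboard);
    // Fetch node references and start animations here.
  }

  @override
  void setViewTransform(Mat2D viewTransform) {
    // Cache whatever is needed to map screen coordinates
    // into Flare world coordinates.
  }

  @override
  bool advance(FlutterActorArtboard artboard, double elapsed) {
    super.advance(artboard, elapsed);
    // Per-frame logic; return true to keep advancing.
    return true;
  }
}
```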

TeddyController has a few fields:

// Matrix to transform Flutter global coordinates
// into Flare world coordinates.
Mat2D _globalToFlareWorld = Mat2D();
// A reference to the `ctrl_face` node.
ActorNode _faceControl;
// Store the node's origin in world and local transform spaces.
Vec2D _faceOrigin = Vec2D();
Vec2D _faceOriginLocal = Vec2D();
// Caret in global Flutter coordinates, and in Flare world coordinates.
Vec2D _caretGlobal = Vec2D();
Vec2D _caretWorld = Vec2D();

This class then needs to override three methods: initialize(), advance(), and setViewTransform().
initialize() is called — you guessed it! — at initialization time, when the FlareActor widget is built. This is where our node reference is first fetched, again with a library call:

_faceControl = artboard.getNode("ctrl_face");
if (_faceControl != null) {
  Vec2D.copy(_faceOriginLocal, _faceControl.translation);
}

Artboards in Flare are the top-level containers for nodes, shapes and animations. artboard.getNode(String name) returns the ActorNode reference with the given name.

After having stored the node’s reference, we also save its original translation so we can restore it when the text field loses focus, and we start playing the idle animation.

The other two overrides are called every frame: setViewTransform() is used here to build _globalToFlareWorld, the matrix that transforms global Flutter screen coordinates into Flare world coordinates.
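A sketch of what this override can look like, assuming the view transform maps Flare world space onto the screen, so that inverting it gives us the mapping we need:

```dart
@override
void setViewTransform(Mat2D viewTransform) {
  // viewTransform maps Flare world coordinates onto the screen;
  // inverting it lets us go from screen space back into Flare world space.
  Mat2D.invert(_globalToFlareWorld, viewTransform);
}
```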

The advance() method is where all of the above comes together!
When the user starts typing, TrackingTextInput will relay the screen position of the caret into _caretGlobal. With this coordinate, the controller can compute the new position of the ctrl_face, thus shifting its gaze.

// Project gaze forward by this many pixels.
static const double _projectGaze = 60.0;

// Get the caret in Flare world space.
Vec2D.transformMat2D(_caretWorld, _caretGlobal, _globalToFlareWorld);
// Compute the direction vector.
Vec2D toCaret = Vec2D.subtract(Vec2D(), _caretWorld, _faceOrigin);
Vec2D.normalize(toCaret, toCaret);
// Scale the direction by a constant value.
Vec2D.scale(toCaret, toCaret, _projectGaze);
// Compute the transform that gets us into ctrl_face space.
Mat2D toFaceTransform = Mat2D();
if (Mat2D.invert(toFaceTransform, _faceControl.parent.worldTransform)) {
  // Put toCaret in local space.
  // N.B. we're using a direction vector, not a translation,
  // so we use transformMat2() to transform without translation.
  Vec2D.transformMat2(toCaret, toCaret, toFaceTransform);
  // The final ctrl_face position is the original face translation
  // plus this direction vector.
  Vec2D targetTranslation = Vec2D.add(Vec2D(), toCaret, _faceOriginLocal);
}

Since a picture is worth a thousand words — or in this case, lines of code — below we can see how the direction is computed: the difference vector is stored in toCaret.

Since this is a direction, it is normalized, and then scaled up by the number of pixels the gaze should project from its original position.

Lastly, we transform toCaret into the node’s own space so that we can add it to the node’s original translation.

Caret Position

The last piece of the puzzle is how to compute the screen position of the caret.

This is done in the TrackingTextInput widget. This widget stores a GlobalKey that it uses to build its TextFormField. Through this key, Flutter allows us to get the RenderObject that encompasses the TextFormField:

RenderObject fieldBox = _fieldKey.currentContext.findRenderObject();

With the three helper functions available in lib/input_helper.dart, we can use the RenderBox to compute the actual caret position in screen coordinates: we traverse the render hierarchy from that RenderBox, looking for a RenderEditable. This Flutter class provides the getEndpointsForSelection() method, which computes local coordinates that can then be transformed into global coordinates by the original RenderBox.
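As a sketch, the lookup could be shaped like this (findRenderEditable stands in for the traversal helper in lib/input_helper.dart; the exact names and signatures here are assumptions):

```dart
// Sketch of computing the caret's global position.
Offset getCaretPosition(RenderBox box) {
  // Walk the render tree below the TextFormField's RenderBox
  // until a RenderEditable is found (hypothetical helper).
  final RenderEditable renderEditable = findRenderEditable(box);
  if (renderEditable == null || renderEditable.selection == null) {
    return Offset.zero;
  }
  // For a collapsed selection, the endpoint is the caret,
  // in the RenderEditable's local coordinates.
  final TextSelectionPoint caret = renderEditable
      .getEndpointsForSelection(renderEditable.selection)
      .first;
  // Convert to global (screen) coordinates.
  return renderEditable.localToGlobal(caret.point);
}
```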

And that’s it!

Once again, be sure to check out the sources on GitHub and the Flare file on 2Dimensions, and come join us!