I Am Rick (Episode 9): Walkie

How to build a walkie-talkie app using Flutter + Firebase.

Alexandros Baramilis
Flutter Community
37 min read · May 26, 2020


Intro

If you haven’t been following the series so far, you can check out the previous episodes here:

or have a look at the Github repo for the series.

If you’re having trouble installing Flutter on macOS Catalina, check out Setting up Flutter on macOS Catalina.

ALERT: The final episode is out!

As to why I’m writing this series:

The best way to learn something is to put it into practice and then write about it, so I’m making this series based on the notes that I take as I go through App Brewery’s Flutter Bootcamp. I usually go above and beyond the course to make something cool and learn more stuff. Publishing my notes and code also forces me to maintain a high standard and allows me to easily go back for a quick review.

As for the Rick Grimes theme, it all started from a typo when I was making the first app in the series (I Am Rich). I accidentally typed ‘I Am Rick’ when I was setting up the project in Android Studio. The rest is history.

Walkie

Rick was sick of everyone in his new group being hooked to their smartphones all the time.

He was sorely missing his old walkie…

So he decided to develop a walkie-talkie app to talk to his friends the old school way. 🤠

In this module, App Brewery is building a chat app, so I made a walkie-talkie app instead. 😄

Here’s a short preview:

You hold down the mic button to record your message and when you release it the recording is sent to Firebase’s servers and pushed to all other listening devices so they can play it back.

What's even more amazing is how little code it took to get such a complex app up and running on both Android and iOS. Welcome to 2020!

The topics we’ll be covering today are:

  • addPostFrameCallback: How to use this method to add a callback after the end of the frame. I use this to calculate the widget heights after they have been rendered, in order to animate the walkie icon so that it takes up all the remaining screen height.
  • Animations: Hero Animations, AnimationController, CurvedAnimation, ColorTween & Animated Text.
  • Setting up Firebase for Flutter: For both Android and iOS apps.
  • Authentication with Firebase: Register, Sign In & Log out, as well as detecting if there is a user signed in so we can jump straight to the walkie screen, even after a hot restart. Creating an easy form with validation, using the Form and TextFormField widgets.
  • Making a consistent bottom button across devices: A bottom button that will look nice and behave well in both rectangular screen devices, as well as more irregular screen devices like the new iPhones.
  • Recording and playing sound: Using the audio_recorder and audioplayers packages, as well as the permission_handler package to deal with permissions on both Android and iOS.
  • Solving the `SWIFT_VERSION` attribute error: When trying to integrate a Flutter package that uses Swift.
  • Using Firebase’s Cloud Storage and Cloud Firestore: We save the recording in Cloud Storage and store a reference to the filename in Cloud Firestore.
  • Streams, StreamBuilder & ListView: What are Streams and using a StreamBuilder widget to “listen” for new recordings as they are added to the server and pushed to the phone. Populating a ListView with the new items.
  • Player mechanics: Play and stop a recording. Update the UI when a recording has finished playing. When the user taps on another recording while the current one hasn’t finished playing, stop the current one and start the next one.

Let’s do it!

You can grab the final code if you want to have it open next to the article.

main.dart

Before we jump into the animations, let's quickly go through main.dart first.

Routing with a static const id

Here, we set up our routes. The only difference from what we've been doing so far is that we use a static const id, defined inside each screen, to avoid explicitly typing the route string at multiple points in the code, thus avoiding typos. (although typos can sometimes take you in unexpected directions, like me writing the I Am Rick series 😂)

Making our constant static means that it belongs to the class itself, as a blueprint, not to a particular instance of the class. So to use a static const (or method), we call it directly on the class name, like WelcomeScreen.id, and not on an instance of the class, like WelcomeScreen().id, as we would with a non-static constant.

This way we don’t need to instantiate a new object every time we need to use that id, saving computing resources.
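As a sketch, the route setup in main.dart looks roughly like this. The route name strings and the exact list of screens are my assumptions; the real values are in the repo:

```dart
// Sketch of the route table in main.dart.
// Each screen defines its own `static const id`, so the route
// name string only ever has to be typed once.
MaterialApp(
  initialRoute: WelcomeScreen.id,
  routes: {
    WelcomeScreen.id: (context) => WelcomeScreen(),
    SignInScreen.id: (context) => SignInScreen(),
    WalkieScreen.id: (context) => WalkieScreen(),
  },
);

// And inside each screen:
class WelcomeScreen extends StatelessWidget {
  static const String id = 'welcome_screen'; // exact string is an assumption

  @override
  Widget build(BuildContext context) => Container();
}
```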

Setting the cursor colour for Android and iOS

The other new thing we're doing here is setting the cursor colour.

As you may have noticed in the nice GIF that I made above, when we're typing text in the form textfields, the cursor colour is Rick's kColourPrimary red, instead of the default iOS blue.

ThemeData(
  cursorColor: kColourPrimary,
  cupertinoOverrideTheme: CupertinoThemeData(
    primaryColor: kColourPrimary,
  ),
),

Just setting the cursorColor property works for Android, but for iOS we need to override the Cupertino theme with a CupertinoThemeData object and set its primaryColor property.

Animations

Back in main.dart, we set our initialRoute to WelcomeScreen.id, which means that when the app opens, the first screen will be WelcomeScreen.

There are quite a few things happening here, but we'll break them down bit by bit.

Let’s start from initState().

Determining if there is a user signed in

The first method called after super is determineAuthStatus().

This basically checks if there is a user signed in, and if there is, pushes the WalkieScreen onto the Navigator, so we don’t have to sign in every time the app is restarted.
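As a minimal sketch, using the firebase_auth API from around the time of this article (the exact method body is in the repo):

```dart
// Sketch: if a user is already signed in, skip straight to the walkie screen.
void determineAuthStatus() async {
  final user = await FirebaseAuth.instance.currentUser();
  if (user != null) {
    Navigator.pushNamed(context, WalkieScreen.id);
  }
}
```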

Using addPostFrameCallback to add a callback after the end of the frame

The next line of code in initState() is:

WidgetsBinding.instance.addPostFrameCallback(_afterLayout);

This basically means that after the build method is executed and the widgets are laid out, an _afterLayout() callback method will be called.

Here, we calculate the final height of the walkie.

By walkie, I mean this awesome digital replica that I made of the original walkie-talkie from The Walking Dead. 😁

The purpose is to create this animation:

We want the walkie to take up all the remaining screen height left after the label and buttons are rendered.

The thing is, we’re developing for different screen heights and we don’t know what the final remaining height will be.

We could easily solve this problem by wrapping our walkie with an Expanded widget if we didn’t need any animations.

But to perform the animations here we need to know the final height of the walkie.

The easiest way I could come up with for doing this, is to create this layout:

Column(
  key: _keyParentColumn,
  children: <Widget>[
    Container(
      child: // the walkie
    ),
    Column(
      key: _keyChildColumn,
      children: // the rest of the widgets
    ),
  ],
),

We have a Column, called the “Parent Column”, and inside it we have a Container that will hold the walkie, and another Column, called the “Child Column”, that will hold all the other widgets.

We give each of the Columns a GlobalKey: _keyParentColumn and _keyChildColumn.

We define these keys at the top of the state class, as well as the walkie height, initially setting it to zero.

A GlobalKey is a key that is unique across the entire app, so we can use it to identify any specific instance of a widget.

We use these keys when we call the _afterLayout() method to calculate the height of the Columns.

The walkie height will simply be the height of the Parent Column, minus the height of the Child Column.

The heightOfWidget() method is this:

It takes a GlobalKey as a parameter, finds the current context associated with this key and then finds the RenderBox associated with the widget represented by the key. Using the RenderBox, we can get the widget height.
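Based on that description, heightOfWidget() is presumably something like this sketch:

```dart
// Sketch: measure a rendered widget's height via its GlobalKey.
// Only valid after layout, which is why it runs in the
// addPostFrameCallback.
double heightOfWidget(GlobalKey key) {
  final RenderBox renderBox = key.currentContext.findRenderObject();
  return renderBox.size.height;
}
```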

The whole point of using addPostFrameCallback is that before the build method is called and the widgets are rendered, we don't know their height!

They need to be rendered first, then we get the callback _afterLayout() and then we can calculate their height.

Now, normally we would need to wrap the setting of the _walkieHeight inside a setState(), in order to update the UI after the height is calculated.
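That setState() wrapper would look something like this sketch, using the keys defined earlier (the callback receives a Duration timestamp from the framework, which we ignore):

```dart
// Sketch: wrap the height calculation in setState() to trigger a rebuild.
void _afterLayout(Duration timeStamp) {
  setState(() {
    _walkieHeight = heightOfWidget(_keyParentColumn) -
        heightOfWidget(_keyChildColumn);
  });
}
```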

In this case it’s not necessary because the animations that we’re going to use in a bit call setState() many times.

But if you wanted more robust code (say you wanted to remove the animations in the future), you could leave that setState() in there, as it doesn't harm anyone.

This was a complex topic so I hope it made sense! 😅

Animation with AnimationController

By now you’ll be like: Dude, can we get to the animations already!?

Ok, ok!

The final method we have in initState() is setupAnimations().

It looks like this:
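Piecing together the snippets discussed in the rest of this section, setupAnimations() presumably looks roughly like this sketch:

```dart
// Sketch: one controller drives both animations over one second.
void setupAnimations() {
  controller = AnimationController(
    duration: Duration(seconds: 1),
    vsync: this, // the State mixes in SingleTickerProviderStateMixin
  );
  animationBounce = CurvedAnimation(
    parent: controller,
    curve: Curves.bounceOut,
  );
  animationColour = ColorTween(
    begin: Colors.black,
    end: kColourBackground,
  ).animate(controller);
  controller.forward();
  controller.addListener(() {
    setState(() {}); // rebuild on every tick
  });
}
```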

First, we setup the AnimationController. This is like the conductor for our animations.

We need to specify the total duration of the animation, in this case one second.

We also need to set the vsync property. This property determines who is going to be the TickerProvider.

Think of the TickerProvider as the drummer in the band. He’s gonna give us the beat for the animation.

So who’s going to be the drummer?

In our case it will be the _WelcomeScreenState class itself.

To teach it the drums, we just need to mix in the SingleTickerProviderStateMixin using the with keyword.

What are mixins you might ask? Here’s a great article by Romain Rastel on Flutter Community, or if you’re more of an official docs kind of person, read here.

According to the docs, mixins are a way of reusing a class’s code in multiple class hierarchies. So you don’t have to inherit from a class to use its code. Since you can only inherit from one class, that would be quite limited. So using mixins offers a lot of flexibility in reusing code.

CurvedAnimation

So for the first animation we have a CurvedAnimation:

animationBounce = CurvedAnimation(
  parent: controller,
  curve: Curves.bounceOut,
);

We use a CurvedAnimation when we want to apply a Curve to an animation object.

There is a collection of Curves that we can apply to an animation. Here, I chose the .bounceOut, which gives the animation this bounce at the end.

To apply this animation I use one Container that will have the final height of the walkie: _walkieHeight. This is to keep all other widgets in place as the walkie grows in size.

Inside this Container, I use another Container that will have a height of _walkieHeight * animationBounce.value. I align this Container to the centre of its parent Container by setting the parent's alignment property to Alignment.center.

Container(
  height: _walkieHeight,
  alignment: Alignment.center,
  child: Container(
    height: _walkieHeight * animationBounce.value,
    child: Image.asset('images/walkie.png'),
  ),
),

This animationBounce.value will go from 0.0 to 1.0 in 1 second, as we set the total duration of the animation to 1 second. This way the height of the walkie will go from 0 to its final height in 1 second as well.

ColorTween

The next animation is called a ColorTween. It animates from a starting colour to an ending colour.

animationColour = ColorTween(
  begin: Colors.black,
  end: kColourBackground,
).animate(controller);

Here, I start from black and fade into the background colour.

To apply this animation, we set the backgroundColor property of the Scaffold to animationColour.value.

Scaffold(
  backgroundColor: animationColour.value,

This value will contain all the different colours that we need to go from the starting colour to the final colour in one second.

Starting and listening to animations

To start all these animations, we need to call:

controller.forward();

You can also do other funky stuff here, like reverse, loops, etc.

And finally, you need to add a listener:

controller.addListener(() {
  setState(() {});
});

This listener will wait for every beat from our drummer and will call setState() every time to update the UI.

This is why I said we didn’t necessarily need to call setState() before when we were setting the _walkieHeight.

Animated text with the animated_text_kit package

To animate the ‘Welcome to Walkie’ label, we use a nice Flutter package called animated_text_kit. It has all sorts of different text animations, so it’s worth having a look!

I separated the code needed to build the animated text into a buildAnimatedWelcomeText() method.

Initially, I set the total duration of the animation to 1 second.

Then I calculate the millisecond interval needed to animate each character. I take the duration in seconds, multiply it by 1000 to convert it to milliseconds, divide by the length of the welcome text (i.e. the number of characters) and then round it.

I use this millisecond interval to set the speed property of the TyperAnimatedTextKit, which specifies how long it takes to animate each character.
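In code, the calculation works out to something like this sketch (the variable names are mine; the TyperAnimatedTextKit parameters are as I recall them from the animated_text_kit 2.x package, so check its docs):

```dart
// Sketch: spread a 1-second animation evenly across the characters.
const welcomeText = 'Welcome to Walkie';
const durationInSeconds = 1;
final interval = (durationInSeconds * 1000 / welcomeText.length).round();

TyperAnimatedTextKit(
  text: [welcomeText],
  speed: Duration(milliseconds: interval), // time per character
);
```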

Before I return a TyperAnimatedTextKit widget, I first check if the animations that we set up before have completed. I do this by checking:

if (controller.status == AnimationStatus.completed)

If they haven’t completed yet, I just return a Text widget with an empty string to occupy the space that the Text would take.

This way, we get the animated walkie first and as soon as it stops bouncing, the welcome label animates.

Hero animations

The final animation that we’re going to cover today is the Hero animation.

A Hero is a widget that ‘flies’ between two screens.

Here you can see that the walkie stays on screen and resizes as we transition from the WelcomeScreen to the SignInScreen.

To create a Hero animation, we need two Hero widgets that are in different screens but share the same tag.

Here’s the first Hero widget in the WelcomeScreen that contains our walkie Container.

Hero(
  tag: 'logo',
  child: Container(
    height: _walkieHeight * animationBounce.value,
    child: Image.asset('images/walkie.png'),
  ),
),

And here’s our second Hero widget in the AuthForm component.

Hero(
  tag: 'logo',
  child: Image.asset('images/walkie.png'),
),

And we’re done with the WelcomeScreen and with all the animations!

In the next part we’ll cover how to setup Firebase in your Flutter project.

Setting up Firebase

Creating a Firebase project is very easy. You just need to sign in to firebase.google.com with your Google account. Then click on Go to console and select Add project. Give your project a name and select any of the other optional settings that you want. Done!

Setting it up for iOS and Android is a bit more involved, but follow these steps and you should be fine :)

Setting up Firebase for iOS apps (only on Mac)

  • After you have created a Firebase project, go to Add app → iOS.
  • First you need to register the app. If you remember your Package name from when you created the Flutter project, paste it in the iOS bundle ID field. It should be something like ‘com.alexbaramilis.iamrick’. If you don't remember it, follow the instructions below to find it.
- Open the ios folder in the Project navigator in Android Studio, right-click on the Runner folder and select Reveal in Finder.
- From Finder, open the Runner.xcodeproj file. This will open the project in Xcode.
- In Xcode, from the Project navigator on the left, select the top-level Runner (not the Runner folder inside it). Make sure you have Runner selected under TARGETS and you are in the General tab. Under the Identity section you should be able to see your Bundle Identifier. This is the same as your Package name. Paste it in the iOS bundle ID field.
  • Next you need to download the GoogleService-Info.plist config file. Just save it to your Downloads folder for now. Then you need to drag and drop it from Finder into the Runner folder in Xcode. That is the Runner folder inside the Runner project in the Project navigator, not the top-level Runner project. You need to do this drag and drop manually in order to get this popup:

Make sure ‘Copy items if needed’ and ‘Add to targets: Runner’ are selected. This will do all the linking behind the scenes. If you just copy and paste the file in Finder it won’t work.

  • Stop and run the app again on the iOS simulator to build the project and make sure everything is fine.
  • The next step is Add Firebase SDK. You can skip the instructions on the Firebase website because we do things a bit differently in Flutter. Flutter has already done the pod init for us by default. We just need to add the Firebase packages that we want. Instead of adding them to the Podfile directly, like we would when developing natively for iOS, we just add them like regular Flutter packages in our pubspec.yaml file. You can check out all the Flutter packages by Firebase on pub.dev or on Github. For this app we need firebase_core, which is the base for all other Firebase packages, firebase_auth, firebase_storage and cloud_firestore. Add them to pubspec.yaml and click on Pub get.
  • Before proceeding to the next step, it's good to make sure you have an up-to-date version of CocoaPods. Just run sudo gem install cocoapods in Terminal. This command both installs and updates CocoaPods. To check which version you have, you can run pod --version. You can see all the releases on their Github page.
  • Now, if you’re on Terminal, navigate to your project directory and run pod setup. You can also do this from Android Studio by going to the Terminal tab at the bottom. This is already pointing to the project directory so you don’t need to navigate there.
  • Finally, stop and run the app again. It will now show: Running pod install… Wait for it to finish (it might take a while) and if the app runs fine with no errors shown you’re good to go!

Setting up Firebase for Android apps

  • After you have created a Firebase project, go to Add app → Android.
  • First you need to register the app. If you remember your Package name from when you created the Flutter project, paste it in the Android package name field. It should be something like ‘com.alexbaramilis.iamrick’. If you don't remember it, follow the instructions below to find it.
- From the project navigator in Android Studio, go to android/app and open build.gradle. Under android → defaultConfig you should be able to find your applicationId.
- Paste it in the Android package name field.
  • Next you need to download the google-services.json file. Save it to your Downloads folder and then drag and drop it inside the android/app folder in Android Studio.
  • Then you need to add the Firebase SDK: In the android folder, open the build.gradle file. This is different from the build.gradle inside android/app. Copy the classpath line from the Firebase console (something like classpath 'com.google.gms:google-services:4.3.3') and add it under dependencies. It also tells you to check that you have google() under the repositories of buildscript and the repositories of allprojects. This should normally already be there.
  • Now go back to our previous build.gradle inside android/app. Add the following line under the other apply statements.
apply plugin: 'com.google.gms.google-services'
  • Now you need to add gradle dependencies for the SDKs of your desired Firebase products. You can check out all the available Firebase libraries here. For this app we just need Firebase Auth, Cloud Storage and Cloud Firestore. You don’t need to add Firebase Core here. Just under the apply statements add a dependencies section, or if you already have one, add the implementations that you want in there.
  • Next you need to add the Firebase packages as Flutter packages. If you've already done this during the iOS setup, you don't need to do it again. If not, you can check out all the Flutter packages by Firebase on pub.dev or on Github. For this app we need firebase_core, which is the base for all other Firebase packages, firebase_auth, firebase_storage and cloud_firestore. Add them to pubspec.yaml and click on Pub get.
  • Finally, you can stop the app and run it again on Android emulator to make sure everything is running smoothly.
  • If you get an error saying something like “CloudFirestorePlugin.java uses unchecked or unsafe operations.” add multiDexEnabled true under defaultConfig under android in the build.gradle file in the android/app folder.

Authentication with Firebase

After all the pain of setting up Firebase for Flutter, authentication is really easy, so it’s definitely worth it!

First we need to setup a sign-in method from Firebase.

Go to the Firebase console and select Authentication from the menu on the left (under the Develop section), then press Set up sign-in method. Choose Email/Password and enable it.

You get a whole load of other sign-in options there such as Google, Facebook, etc.

What’s really cool is that you get out of the box: email address verification, password recovery, email address change, as well as the option to sign in by email link without a password!

Before we jump into the sign in and register screens, let’s have a look at the AuthForm widget.

The AuthForm widget

This is a custom widget that I created that handles a form for both sign in and registration.

It takes as parameters an isSignIn boolean that will determine whether we’re doing sign in or registration, and an onFormSubmitted callback that will perform the sign in or registration logic when the user submits the form.

Inside the state class, we just declare two variables to hold the email and password values.

Then, inside our main Column, we have the Hero widget that I mentioned before, wrapped inside an Expanded widget to take up all the remaining screen height that is left by the form. Here, because we’re not doing any animations, we don’t need all the complexity with addPostFrameCallback etc.

By using the Expanded widget, we also get the walkie to resize when the keyboard goes up.

After the Expanded widget I have the main label which I set to ‘SIGN IN’ or ‘REGISTER’ based on the isSignIn parameter.

Text(
  widget.isSignIn ? 'SIGN IN' : 'REGISTER',
  ...
)

For the form, I use a Form widget, which makes handling forms easier. For each field of the form, I use a TextFormField widget, which is basically a FormField widget that contains a TextField widget.

The TextFormField widget for the email field looks like this:

  • We can specify a style for the textfield text.
  • Set the keyboardType to TextInputType.emailAddress so we get the keyboard that has the @ symbol, as well as the domain extensions when you tap and hold the . button.
  • Set the decoration property to get the email icon, as well as the hint text and style.
  • Set the validator property with a validator function. Here I just do some very basic validation, checking whether the user has typed an @ symbol or not. Firebase does its own validation on the server side, of course, but it's nice to have some validation on the client side too.
  • We can also set certain callback methods such as onChanged, onSaved, etc. Here I just update the email variable every time the textfield value is changed.
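Putting those bullet points together, the email field is presumably along these lines (a sketch; the text style, hint text and error message are my assumptions):

```dart
TextFormField(
  style: TextStyle(color: Colors.white), // style is an assumption
  keyboardType: TextInputType.emailAddress,
  decoration: InputDecoration(
    icon: Icon(Icons.email),
    hintText: 'Email',
  ),
  // Very basic client-side check; Firebase validates server-side too.
  validator: (value) =>
      value.contains('@') ? null : 'Please enter a valid email',
  onChanged: (value) {
    email = value;
  },
),
```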

The TextFormField for the password is rather similar:

The main differences are:

  • I set the obscureText property to true to hide the password as the user is typing it.
  • I set the validator to check if the password is at least 6 characters long.

For the RaisedButton which is the form’s submit button, I set the text again based on the isSignIn bool.

In the onPressed callback, I get the FormState by calling Form.of(primaryFocus.context).

Then I check if it's null (e.g. if the user hasn't interacted with the form yet) and if it's valid by calling the validate() method, which runs the validators and returns a boolean value.

If the form is valid, we call the onFormSubmitted function that we passed to the AuthForm as a parameter, passing it in turn the email and password values.

Sign In

Now that we know how the AuthForm widget works, we can look at the SignInScreen.

We just return an AuthForm widget inside its build method, properly setup for the sign in process!

All the work here is in the onFormSubmitted callback that we pass as a parameter to the AuthForm widget.

First, we have this showDialog function. We pass it the current context and it will overlay it with what we pass it as a builder.

Here, we give it a CircularProgressIndicator wrapped inside a Center widget. This is the easiest way to show a spinner in Flutter.

To set the colour of the spinner, we need to set the valueColor property of the CircularProgressIndicator. We can’t set it directly to a Color because it takes an Animation<Color>, but if you only want it to be a single colour you can set it to AlwaysStoppedAnimation<Color>(kColourPrimary).

Then, we have the actual log in logic, which is surprisingly simple after all we’ve done so far:

We just call the signInWithEmailAndPassword(email: , password: ) method on a FirebaseAuth.instance object, which asynchronously returns a FirebaseUser object.

If the signedInUser is not null, it means the sign in was successful.

We pop the current context (which is the spinner) and use the pushReplacementNamed method on the Navigator to replace the current screen with the screen that we specify.

I do this to replace the SignInScreen with the WalkieScreen, so that when we pop the context from WalkieScreen, which happens when the user taps the log out button, we return back to the WelcomeScreen, instead of the SignInScreen, which I think is nicer.

If we have an error, we have to pop the current context again for the spinner to disappear. Here I just print the error in the console. We’ve done a lot of error handling in previous episodes so I wanted to keep it simpler here.
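The whole callback described above can be sketched like this, using the firebase_auth API as it was around the time of this article (error handling kept minimal, as in the original):

```dart
onFormSubmitted: (email, password) async {
  // Show a spinner while Firebase does its thing.
  showDialog(
    context: context,
    builder: (context) => Center(
      child: CircularProgressIndicator(
        valueColor: AlwaysStoppedAnimation<Color>(kColourPrimary),
      ),
    ),
  );
  try {
    final result = await FirebaseAuth.instance.signInWithEmailAndPassword(
      email: email,
      password: password,
    );
    if (result != null) {
      Navigator.pop(context); // dismiss the spinner
      Navigator.pushReplacementNamed(context, WalkieScreen.id);
    }
  } catch (error) {
    Navigator.pop(context); // dismiss the spinner
    print(error);
  }
},
```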

Register

Now that you’ve seen the Sign In flow, you’ll see that the Register flow is almost identical.

We have abstracted all the repetitive code inside AuthForm, so the only differences here are:

  • Setting the isSignIn value to false.
  • Calling createUserWithEmailAndPassword(email: , password: ) instead.

Sign Out

After we sign in or register successfully, we jump to the WalkieScreen.

The first thing I want to do is to hide the back button that comes by default when we use the Navigator. To do this, we set the automaticallyImplyLeading property to false.

Then we set the actions property, which takes a List of Widgets with a FlatButton that has the ‘Sign out’ Text as a child.

In the onPressed callback we just call FirebaseAuth.instance.signOut(); to sign out and then pop the context to get back to the WelcomeScreen.

And that’s it for the Authentication!

The Microphone button, permissions and recording sound

Making a consistent bottom button across devices

Before I get to the actual recording, I want to write about how I built the microphone button to look and behave consistently across different devices, especially between the new generation of iPhones (after the X) and regular rectangular-screen devices.

On the newest iPhones, instead of the home button, we have this bottom bar that you drag upwards to go to the home screen.

You see here that the button looks like it extends all the way to the bottom, but only the area above the home bar is the actual button, i.e. is tappable.

I built it this way so that the button doesn't mess with the home bar. You wouldn't want to try to go to the home screen and have the app start recording you instead!

On the other hand, it wouldn’t be nice if only the button rectangle became grey when you start recording, so I made the whole area become grey.

So it looks like it’s one button, but behind the scenes you have the rectangle above the home bar that is the actual microphone button, and the area below belongs to the home bar.

On regular rectangular-screen devices, the story is simpler. The screen ends in a straight line and then we have some digital or physical buttons to perform basic OS functions.

You don’t have to differentiate between the home bar area and the microphone button.

And what’s even cooler about this solution here, is that you don’t even have to detect which device you’re on and adjust accordingly. This solution works for all kinds of devices.

Here it is.

We’re still in WalkieScreen btw.

Create an isRecording bool.

Make the Scaffold backgroundColor adjust based on isRecording, between the grey kColourIsRecording and red kColourPrimary.

For the body of the Scaffold, wrap your main Column inside a Container with your background colour and inside a SafeArea widget.

The SafeArea will take care of all the different device types, making sure you’re always in… a safe area.

The Container with the background colour will give the SafeArea a background colour, exposing only the areas that are not in the safe area to the adjustable Scaffold colour. We don’t care about the non-safe area of the status bar because it’s hidden behind the AppBar.
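The layering described above boils down to roughly this structure (a sketch; kColourBackground as the fixed inner colour is my assumption):

```dart
Scaffold(
  // Only the non-safe areas (e.g. around the home bar) show this colour,
  // so they flash grey while recording, matching the mic button.
  backgroundColor: isRecording ? kColourIsRecording : kColourPrimary,
  body: Container(
    color: kColourBackground, // fixed background for the safe area
    child: SafeArea(
      child: Column(
        children: <Widget>[
          // RecordingsStream, Microphone, etc.
        ],
      ),
    ),
  ),
),
```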

There’s one thing left to do: to create the actual button.

Inside the main Column, after the RecordingsStream widget (more on this later), I create a Microphone widget. I pass it the isRecording variable, as well as two functions for its onStartRecording and onStopRecording callbacks that simply call setState() and set the isRecording accordingly.

So whenever the recording starts or stops, these callbacks are going to set the isRecording variable, and because we wrap them in setState() they will update the UI with the correct Scaffold colour.

This will also in turn re-render the Microphone widget with the updated isRecording, that will set the correct microphone button colour.

So what does this microphone button look like in code?

I wrap an Icon widget of fixed size and white colour, inside a Container of fixed size and adjustable colour. Then I wrap the Container inside a GestureDetector widget.

The GestureDetector will give us access to the onTapDown and onTapUp callbacks, where we can call startRecording() and stopRecording().

And we’re ready to start recording!

Getting permissions easily with the permission_handler package

Ok I lied.

There’s one more thing we need before we can start recording. 😭

We need to ask the user for permission!

Thankfully, as with all things Flutter, there is a nice package to make our lives easier.

It’s called permission_handler. It handles all kinds of system permissions for both iOS and Android, but here we just use it for microphone permission.

You can have a look at the package page for more detailed setup instructions, but basically it involves this:

For iOS:

Add this key to your Info.plist in: ./ios/Runner/Info.plist:

<key>NSMicrophoneUsageDescription</key>
<string>Walkie needs to use the microphone to record your voice!</string>

This is the message that will appear here:

For Android:

In ./android/app/src/main/AndroidManifest.xml, add inside the <manifest> tag, but outside the <application> tag:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Then do a flutter clean from the Terminal tab and rebuild the app.

I wrote a handy method to verify the permission status and request for permission in one go.

It checks the Permission.microphone.status with a switch statement.

  • If the status is .granted, it returns true; in all other cases it breaks and returns false.
  • If the status is .undetermined (the first time the user runs the app) or .denied (the user has denied before), it requests permission again with Permission.microphone.request();
  • If the status is .restricted, it “shows” an error message.
  • If the status is .permanentlyDenied, it “tells” the user to go to Settings to enable it again.

On Android, if you have denied access before and the app requests permission again, you get the option to permanently deny by tapping Deny & don't ask again. If access is permanently denied, requesting permission will show nothing, so you have to tell the user to go to Settings and enable it manually. That's ok to do here, since microphone permission is essential for the microphone button to function.
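Based on that description, the method is presumably along these lines (a sketch against the permission_handler 5.x API; the printed messages are placeholders):

```dart
// Sketch: check the microphone permission, requesting it when possible.
Future<bool> hasPermissions() async {
  final status = await Permission.microphone.status;
  switch (status) {
    case PermissionStatus.granted:
      return true;
    case PermissionStatus.undetermined:
    case PermissionStatus.denied:
      // First run, or denied before: ask again.
      await Permission.microphone.request();
      break;
    case PermissionStatus.restricted:
      print('Microphone access is restricted on this device.');
      break;
    default: // permanentlyDenied
      print('Please enable microphone access in Settings.');
      break;
  }
  return false;
}
```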

Recording sound

We’re finally ready to record some audio! For real this time 😂

I chose the audio_recorder package for this purpose.

This is the startRecording() method that we called onTapDown:

  • First, we check if permissions are granted by calling hasPermissions().
  • If they are, then we call the onStartRecording callback function that we passed from WalkieScreen, which sets the isRecording bool to true.
  • Then we get the file path that the recording will be saved to by calling getFilePath().

This method is based on the ‘dart:io’ library and the path_provider package.

The getApplicationDocumentsDirectory() from path_provider gets us the file path where we can save the recording for both iOS and Android.

Then, I append the ‘recording_’ string, a timestamp from the current date and time in the ISO8601 format, as well as the file extension (.m4a).

This results in a file name like:

recording_2020-05-23T18:10:35.748149.m4a
  • Finally, we start the recording by calling the start() method on AudioRecorder, passing it the path and the audio output format.
await AudioRecorder.start(
  path: path,
  audioOutputFormat: AudioOutputFormat.AAC,
);
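Putting the pieces together, getFilePath() and startRecording() might look roughly like this (a sketch assuming onStartRecording is the callback field passed from WalkieScreen; error handling omitted):

```dart
import 'package:audio_recorder/audio_recorder.dart';
import 'package:path_provider/path_provider.dart';

/// Builds a unique path in the app's documents directory, e.g.
/// <documents>/recording_2020-05-23T18:10:35.748149.m4a
Future<String> getFilePath() async {
  final directory = await getApplicationDocumentsDirectory();
  final timestamp = DateTime.now().toIso8601String();
  return '${directory.path}/recording_$timestamp.m4a';
}

Future<void> startRecording() async {
  if (!await hasPermissions()) return; // bail out without permission
  onStartRecording(); // callback from WalkieScreen: sets isRecording to true
  final path = await getFilePath();
  await AudioRecorder.start(
    path: path,
    audioOutputFormat: AudioOutputFormat.AAC,
  );
}
```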

To stop the recording we call the stopRecording() method onTapUp:

  • Here, I first check if the isRecording variable is true. If the user taps the microphone button without having granted permission first, they will get the permission popup and the button will be released without an actual recording taking place. Checking isRecording makes sure we have an actual recording before we stop and send it.
  • Then we call the onStopRecording() callback, which sets isRecording to false.
  • Then we stop the recording by calling AudioRecorder.stop().
  • Finally, we send the recording to Firebase by calling sendRecording(recording.path). I’ll explain this method in the next section.
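A minimal sketch of stopRecording(), assuming the same callback naming:

```dart
import 'package:audio_recorder/audio_recorder.dart';

Future<void> stopRecording() async {
  // If permission was missing, the tap never started a recording.
  if (!isRecording) return;
  onStopRecording(); // callback from WalkieScreen: sets isRecording to false
  final recording = await AudioRecorder.stop();
  sendRecording(recording.path); // upload to Firebase (next section)
}
```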

That’s it for recording. Pretty simple once you’ve done all the setup properly!

Solving the `SWIFT_VERSION` attribute error

Speaking of painful setup, when you try to integrate audio_recorder you might encounter this error where it says: ‘Please contact the author or set the `SWIFT_VERSION` attribute in at least one of the targets that integrate this pod.’

I think this happens when you try to integrate Flutter packages that were written in Swift.

This is what worked for me:

  • In ios/Podfile, add use_frameworks! under target 'Runner' do
  • Also in ios/Podfile, add config.build_settings['SWIFT_VERSION'] = '4.1' under target.build_configurations.each do |config|. You might have to play around with the Swift version number until you find one that is compatible with your package (or all your packages?).
  • Right-click on the ios/Runner folder and select Reveal in Finder.
  • From Finder, open Runner.xcodeproj with Xcode.
  • In Xcode, go to File → New → File… and choose Swift File. Give your file any name; it doesn’t matter, since you’re going to delete it after.
  • Xcode will then ask if you want it to create a Bridging Header file. Select yes. You can then delete the Swift file that you just created.
  • Go to the Terminal tab and type flutter clean
  • Run the app to trigger a pod install.

It should hopefully work now!

Using Firebase’s Cloud Storage and Cloud Firestore

Cloud Storage is more appropriate for uploading and downloading files such as images, audio and video, while Cloud Firestore is more like a scalable database that keeps your data in sync across client apps.

In this app we’re going to be using both.

We need Cloud Storage to upload and download our recordings and we need Cloud Firestore to store a reference to each recording filename and keep all the filenames in sync across all devices.

This combination works well because:

  • The List API of Cloud Storage has not been implemented yet in Flutter. So you can’t call listAll() to get the filenames of all the recordings in your storage. You could work around this by storing a list of filenames in your Cloud Storage, but it’s not a great solution anyway.
  • By using Cloud Firestore instead to store the filenames, we get powerful syncing capabilities across all devices and we can use listeners to listen for when a new recording is available and display it instantly in our chat.
  • Also, by storing the filename only, we don’t have to download all the recordings in the app, making our app much more lightweight. We only download a recording if the user has tapped on it to play it.

So let’s get to coding it!

Add Cloud Storage

  • In Firebase console, go to the Storage section under Develop on the menu on the left and click Get started.
  • You can start with the default rules that allow all reads and writes from authenticated users, if you don’t really care about the security of your data at this point. If you’re interested in setting stricter rules, have a look here.
  • You can also set the geographical location of the server that will serve your project.

Create a Cloud Firestore database

  • In Firebase console, go to the Database section under Develop on the menu on the left and click Create database.
  • You can start in Test mode if you don’t mind anyone reading your database data for now. In test mode, anyone with your database reference will be able to read or write to your database for 30 days. Again, if you want to set stricter rules, read here.
  • Click on ‘+ Start collection’
  • Give your collection a name, e.g. ‘walkie’
  • Start it off with a field, e.g. ‘filename’ for Field and ‘string’ for Type. You can delete this entry after creating the collection.

Send the recording!

Now we can finish off the Microphone widget by checking out the sendRecording() method that we called from stopRecording().

From the stopRecording() method, we passed the path where our recording is saved on the device.

This is something like:

/Users/You/Library/Developer/CoreSimulator/Devices/75BA8766-DAFE-4351-993F-5469CF5E2468/data/Containers/Data/Application/620A4F41-7C9F-451D-A756-0DB7EDD1F423/Documents/recording_2020-05-26T14:17:44.022341.m4a

By calling .split('/').last on path, we split the string at each / character and keep only the last element of the resulting list, which is our filename:

recording_2020-05-26T14:17:44.022341.m4a

Then we create a File object from the path by calling File(path) and we upload it to Cloud Storage by calling:

FirebaseStorage.instance.ref().child(fileName).putFile(File(path));

We get the reference to our storage with FirebaseStorage.instance.ref(), create a child path for our filename with .child(fileName) and upload the file with .putFile().

There is much more logic that we could write here to make sure the upload was completed and to handle errors etc, but I wanted to keep it simple.

After uploading the file, we store the reference to its filename in Cloud Firestore by calling:

Firestore.instance.collection('walkie').add({'filename': fileName});

Here, we get a reference to the collection we created in the console by calling Firestore.instance.collection('walkie') and add the reference by storing the fileName variable at the 'filename' field.

Again, we could do more error handling here.
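The whole sendRecording() flow, sketched with the pre-1.0 cloud_firestore and firebase_storage APIs the article uses:

```dart
import 'dart:io';

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:firebase_storage/firebase_storage.dart';

Future<void> sendRecording(String path) async {
  // Keep only the last path component,
  // e.g. recording_2020-05-26T14:17:44.022341.m4a
  final fileName = path.split('/').last;
  // Upload the audio file itself to Cloud Storage...
  FirebaseStorage.instance.ref().child(fileName).putFile(File(path));
  // ...and store its filename in Cloud Firestore, so listening devices
  // are notified that a new recording is available.
  await Firestore.instance.collection('walkie').add({'filename': fileName});
}
```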

And we’re done with the Microphone widget!

“Listening” for new recordings as a Stream and displaying them using StreamBuilder and ListView

If you remember back in WalkieScreen’s main Column we had a RecordingsStream widget, just above the Microphone widget.

Time to dive into it.

Let’s start by looking at the build method:

It returns a StreamBuilder widget. According to the docs, a StreamBuilder is a:

Widget that builds itself based on the latest snapshot of interaction with a Stream.

What is a Stream?

Angela, the course instructor from App Brewery, makes a great analogy between a Stream and a sushi conveyor belt:

The sushi chef makes the sushi, and you, sitting next to the belt, are “listening” for new sushi: when a plate reaches you, you grab it and eat it.

She also made a handy table that helps you put Streams in context:

  • If you have a single String available now, it’s just a String variable.
  • If you have a single String that’s not available yet and you’re waiting for it, it’s a Future<String>.
  • If you have many Strings that are available now, it’s a List<String>.
  • If you have many Strings that are not available yet, but you’re waiting for them, and they may not necessarily all come in one go, it’s a Stream<String>.

You can replace String in the above with any other type of course.
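Angela’s table maps directly onto Dart types; a tiny illustration:

```dart
// One value, available now:
String greeting = 'hello';

// One value, arriving later:
Future<String> fetchGreeting() async => 'hello';

// Many values, available now:
List<String> greetings = ['hello', 'hi'];

// Many values, arriving over time (the conveyor belt):
Stream<String> greetingStream() async* {
  yield 'hello';
  yield 'hi';
}
```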

And if all this wasn’t enough for you to get the picture, Angela also included a video of Nagashi Somen, which are noodles that flow down in water inside a bamboo branch and people pick them up with their chopsticks as they reach them.

She must have been really hungry.

And I’m getting hungry too.

This article is getting really long again.

When the Medium editor starts to lag, you know you’ve written too much 😂

Anyway, just a bit more left.

Back to StreamBuilder.

So in a few words, the StreamBuilder listens to a Stream and rebuilds the UI every time it receives a new event.

That’s why it takes as properties a stream and a builder.

These events can be either data events, where we get an element of the stream, error events, if something has failed, or a single “done” event when the end of the stream has been reached. Normally we should handle all of these, but again, for simplicity, we’ll handle only the data events.

For the stream property we give it:

stream: Firestore.instance
    .collection('walkie')
    .orderBy(
      'filename',
      descending: true,
    )
    .snapshots(),

We tap into the ‘walkie’ collection again, ordered by the ‘filename’ field in descending order (more on this later), and by calling snapshots() we turn it into a Stream<QuerySnapshot>.

A QuerySnapshot is a Firebase class that contains the results of a query. That’s why our StreamBuilder is parameterised as StreamBuilder<QuerySnapshot>.

A QuerySnapshot can contain zero or more DocumentSnapshot objects. Each document is an entry in the Cloud Firestore database.

For its builder property, we pass it our build code. Every time it receives a new QuerySnapshot from the Stream, it will try to build this:

For the outer if, we just check snapshot.hasData. If there’s no data yet, we return a CircularProgressIndicator.

If it does have data, we build a list of RecordingRow widgets. Each RecordingRow will contain one recording. More on this widget later.

By iterating through snapshot.data.documents, which is a List<DocumentSnapshot>, i.e. our list of documents, we can tap into document.data['filename']. This way we access the filename field of each document.

Then for each document (i.e. each filename), we create a RecordingRow widget and add it to our list.

We then create a ListView widget and pass it our list of RecordingRow as children.

By setting the reverse property to true, whenever we receive a new recording, it gets added to the bottom of the list and the list moves up automatically to show it. This is also the reason we set descending to true in our query, so that the recordings appear in the order of newest to oldest (starting from the bottom).

By wrapping our ListView inside an Expanded widget we make sure it takes up all the available screen space.
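Pulling the description together, the build method of RecordingsStream might look roughly like this (a sketch; styling omitted, and it assumes RecordingsStream sits directly inside WalkieScreen’s Column, which is what makes Expanded legal):

```dart
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';

@override
Widget build(BuildContext context) {
  return StreamBuilder<QuerySnapshot>(
    stream: Firestore.instance
        .collection('walkie')
        .orderBy('filename', descending: true)
        .snapshots(),
    builder: (context, snapshot) {
      if (!snapshot.hasData) {
        return CircularProgressIndicator();
      }
      final rows = <RecordingRow>[];
      for (final document in snapshot.data.documents) {
        rows.add(RecordingRow(
          filename: document.data['filename'],
          // plus currentlyPlayingFilename and onTap,
          // covered later in the article
        ));
      }
      return Expanded(
        child: ListView(
          reverse: true, // new recordings appear at the bottom
          children: rows,
        ),
      );
    },
  );
}
```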

The final mile: Player mechanics

We’ve run a brain marathon today and this is the final mile. I don’t know about you, but my brain is fried 😂

The final part is to go from our list of filenames to this nice player:

  1. We want to play a recording when we tap on it.
  2. We want to stop playing when we tap on it again.
  3. When we tap on a recording while another one is still playing, we want the playing recording to stop and the tapped one to play.
  4. Finally, we want the row to return to its non-playing state when the track finishes (without us tapping stop).

RecordingRow

Let’s start with the RecordingRow widget and then we’ll go back to finish the RecordingsStream widget.

Each RecordingRow needs:

  • A filename: its own filename
  • A currentlyPlayingFilename: the filename of the track currently playing
  • An onTap callback: some code to execute when the user taps on a row

And this is the build method:

Let’s start from the inside out:

  • We have an Icon widget. If the filename matches the currentlyPlayingFilename, it means we’re currently playing this recording, and we set the icon to Icons.pause_circle_filled. It should really be a stop icon, since we’re stopping, not pausing, but Material Icons didn’t have a stop_circle_filled icon and I didn’t like how the plain stop icon looked. I hope you will forgive me. We also set the color based on currentlyPlayingFilename.
  • Next to it, we have a Text widget. We format the filename string like this:

filename:

recording_2020-05-26T14:17:44.022341.m4a

.split('_').last:

2020-05-26T14:17:44.022341.m4a

.substring(0, 25):

2020-05-26T14:17:44.022341

This is the timestamp String that we created earlier in the ISO8601 format.

We parse it back into a DateTime object by using DateTime.parse().

Then we format it using the timeago Flutter package to get this ‘5 minutes ago’ format.

Again, we toggle the style based on currentlyPlayingFilename, to set the text to black or white depending on the background.

We wrap our elements inside a Row and the Row inside a Container where we set the colour based on currentlyPlayingFilename.

We wrap the Container inside a GestureDetector so the whole Container is tappable, and set its onTap property to call our onTap(filename) callback, passing it the filename. We will use this filename in RecordingsStream to detect which row was tapped.
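Assembling the bullet points, RecordingRow might look like this. It’s a sketch in pre-null-safety Dart to match the article’s era; the specific colors and the play_circle_filled icon for the non-playing state are my guesses from the screenshots, and GestureDetector is the Flutter widget that exposes onTap:

```dart
import 'package:flutter/material.dart';
import 'package:timeago/timeago.dart' as timeago;

class RecordingRow extends StatelessWidget {
  final String filename;
  final String currentlyPlayingFilename;
  final Function(String) onTap;

  RecordingRow({this.filename, this.currentlyPlayingFilename, this.onTap});

  @override
  Widget build(BuildContext context) {
    final isPlayingThis = filename == currentlyPlayingFilename;
    // recording_2020-05-26T14:17:44.022341.m4a -> 2020-05-26T14:17:44.022341
    final timestamp = filename.split('_').last.substring(0, 25);
    return GestureDetector(
      onTap: () => onTap(filename),
      child: Container(
        color: isPlayingThis ? Colors.green : Colors.transparent,
        child: Row(
          children: <Widget>[
            Icon(
              isPlayingThis
                  ? Icons.pause_circle_filled
                  : Icons.play_circle_filled,
              color: isPlayingThis ? Colors.white : Colors.green,
            ),
            Text(
              timeago.format(DateTime.parse(timestamp)), // '5 minutes ago'
              style: TextStyle(
                color: isPlayingThis ? Colors.white : Colors.black,
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```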

RecordingsStream

Back to RecordingsStream, we create an AudioPlayer object from the audioplayers package, an isPlaying bool (initially set to false since nothing is playing) and the currentlyPlayingFilename variable (initially null since nothing is playing).

If you remember earlier, inside the builder of StreamBuilder we were creating the RecordingRow widget and I said I will come back to it later.

Here it is:

For the filename, we pass it the value from the ‘filename’ field of our document.

For the currentlyPlayingFilename we pass it currentlyPlayingFilename. Obviously.

For the onTap callback, we get the filename parameter from the row that was tapped and update currentlyPlayingFilename. If the tapped row’s filename matches the currently playing filename, we need to stop playing, so we set currentlyPlayingFilename to null. If it doesn’t match, we need to start playing another track, so we set it to that track’s filename.

We do the above inside a setState() to update the UI.

Then, based on the isPlaying bool, we call stopPlaying(filename) or startPlaying(filename).

Since isPlaying is initially false, the first call will go to startPlaying(filename).
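As a sketch, that callback logic reads like this (written here as a named method, onRowTapped, purely for illustration):

```dart
void onRowTapped(String filename) {
  setState(() {
    // Tapping the playing row stops it (null); tapping another row
    // switches the currently playing filename to that row's track.
    currentlyPlayingFilename =
        filename == currentlyPlayingFilename ? null : filename;
  });
  if (isPlaying) {
    stopPlaying(filename);
  } else {
    startPlaying(filename);
  }
}
```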

The first thing to do is to grab the download URL of our recording from Cloud Storage.

We do this by getting the reference to our storage with FirebaseStorage.instance.ref(), getting the child reference to our filename with .child(filename) and then calling .getDownloadURL().

We then pass this download URL into audioPlayer’s play method. Yes, AudioPlayer can play from URLs as well as from local files!

Then, if the whole operation was successful, we set isPlaying to true. If not, we “notify” the user.

Requirement no. 1 check! We’re playing recordings!
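A sketch of startPlaying() using the old audioplayers API, where play() resolves to 1 on success (the print is a placeholder for real error UI):

```dart
import 'package:firebase_storage/firebase_storage.dart';

Future<void> startPlaying(String filename) async {
  // Get a download URL for the recording from Cloud Storage.
  final url = await FirebaseStorage.instance
      .ref()
      .child(filename)
      .getDownloadURL();
  // AudioPlayer can play remote URLs as well as local files.
  final result = await audioPlayer.play(url);
  if (result == 1) {
    isPlaying = true;
  } else {
    print('Could not play $filename'); // placeholder for real error UI
  }
}
```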

To stop them, we call audioPlayer.stop().

If the stopping was successful, we set isPlaying to false.

Requirement no. 2 check!

But then I included:

if (currentlyPlayingFilename != null) {
  startPlaying(filename);
}

There are two possible paths that can lead to this if statement.

Both paths start while a track is playing, since stopPlaying() is only called when isPlaying is true.

  1. The user tapped on the row that was currently playing.

A reminder of what we had in the onTap callback that we passed to RecordingRow:

In this case, filename will equal currentlyPlayingFilename, so currentlyPlayingFilename will be set to null.

So when we check if (currentlyPlayingFilename != null) it will be false, and we won’t call startPlaying(filename), which is what we want, since we tapped on the currently playing track to stop it.

2. The user tapped on another row, while a track was still playing.

In this case, filename (the parameter that was passed by the onTap callback of the different row) will not equal currentlyPlayingFilename, so currentlyPlayingFilename will be set to the filename of the new track.

Now, when we check if (currentlyPlayingFilename != null) it will be true, so startPlaying(filename) will be called with the filename of the new track and it will start playing.

Requirement no. 3 check! Now if we tap on another track while there is a track still playing, the current track stops and the new track starts playing.
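So stopPlaying() as a whole looks roughly like this:

```dart
Future<void> stopPlaying(String filename) async {
  final result = await audioPlayer.stop();
  if (result == 1) {
    isPlaying = false;
  }
  // Non-null means the user tapped a *different* row while a track
  // was playing, so start the newly tapped track right away.
  if (currentlyPlayingFilename != null) {
    startPlaying(filename);
  }
}
```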

For the final requirement, in initState() we set a listener on onPlayerCompletion. Whenever a track finishes playing on its own (as opposed to being stopped manually), this code will be executed.

This is easy, we set isPlaying to false and currentlyPlayingFilename back to null, inside setState() to update the UI accordingly.

So when a track stops playing the UI will revert back to normal.

Requirement no. 4 check!
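The completion listener, sketched inside RecordingsStream’s State:

```dart
@override
void initState() {
  super.initState();
  // Fires when a track finishes playing on its own,
  // not when we stop it manually with audioPlayer.stop().
  audioPlayer.onPlayerCompletion.listen((event) {
    setState(() {
      isPlaying = false;
      currentlyPlayingFilename = null; // revert the row UI to normal
    });
  });
}
```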

And that brings us to the end!

Seriously, if you made it this far, you’re a hero.

I broke my article length record again.

ALERT: The final episode is out!
