Getting Started with dotnet Core and AWS Aurora

Lee Harding
circuitpeople
Jun 8, 2018 · 10 min read

It’s *extremely* interesting to me that AWS is working on a serverless implementation of Aurora for on-demand (no reserved capacity) workloads. SQL hasn’t had a place in my serverless architectures in recent years, as I’m allergic to running idle DB instances and to the operational complexity of managing cluster scaling (and on/off cycles) myself. I’m very much looking forward to getting back to relational storage when that product is available.

It’s been a while since I’ve worked in this space, so I recently made a quick run-through of building and running a dotnet core application, using EntityFrameworkCore for modeling and an AWS Aurora cluster as the storage engine. Why? Well, this stack requires code from three vendors (primarily; more, actually) to work together, and call me skeptical, but I tend to test that kind of thing. Here is what I found.

Setting Up an Aurora Cluster

Currently AWS Aurora is set up like any other RDS database, so the process involves choosing an engine, instance type, roles and other options. Since I’m just running through a quick test here, I basically just clicked through the options for a new cluster:

Choose Aurora as the engine. I’m not sure why I have to choose the kind of compatibility (wouldn’t “all of the above” be nice?), but MySQL 5.6 works for me this time.
Yep, pick your rate of wallet-drain, err, I mean select your instance type. Smaller is better unless it isn’t.

As with any other (current) RDS option, the instances supporting your database will run 24/7 unless you take responsibility for shutting them down, and restoring from backups when you need them again. The cost may be small-ish, but since giving each microservice its own storage is a thing, those small costs add up to large costs.

Aurora Serverless will, hopefully, mean this step in the process goes away and AWS takes the responsibility of scaling-up and down (to zero) my DB when I’m not using it.

More settings: yes, you should run RDS in a VPC, with a well-crafted security group and ACLs. And use a strong master password that you don’t check into your source code control system (or use anywhere, except maybe in disaster recovery). Sorry, that’s a sore point.
Encryption is awesome, so say GDPR and NIST. Use it, please. And retain backups for perhaps longer than a day (is that a reasonable default, AWS?). Sorry, again, I digress.

Once through the wizard, you’ll find yourself with a cluster and a primary endpoint (DNS name) for access. Since the DB instances are in a VPC, to keep life simple we’ll use a bastion host for building the test app.

Creating a Bastion Instance

To keep this simple, let’s build the app on an Amazon Linux 2 with .NET Core instance in the same VPC as the cluster, with the cluster’s security group modified to allow traffic from the instance’s private IP. Using Windows bash (a.k.a. Ubuntu on Windows), start by grabbing the current list of Amazon AMIs and piping it into jq to find the one for .NET Core:

$ export AMI_ID=$(aws ec2 describe-images --owners amazon | jq -r ".Images[] | { id: .ImageId, desc: .Description } | select(.desc?) | select(.desc | contains(\"Amazon Linux 2\")) | select(.desc | contains(\".NET Core\")) | .id")
$ echo $AMI_ID
ami-950c6ced
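If you want to see exactly what that jq filter is doing before running it against the full AMI list, here’s the same expression applied to a trimmed, made-up sample of `describe-images` output (the image IDs and descriptions below are illustrative, not real AMIs):

```shell
# A trimmed sample of `aws ec2 describe-images` output. Only the fields
# the filter uses are included; IDs and descriptions are made up.
cat > /tmp/images-sample.json <<'EOF'
{
  "Images": [
    { "ImageId": "ami-00000001", "Description": "Amazon Linux 2 AMI" },
    { "ImageId": "ami-00000002", "Description": "Amazon Linux 2 with .NET Core" },
    { "ImageId": "ami-00000003" }
  ]
}
EOF

# Same filter as above: keep only images that have a description
# mentioning both "Amazon Linux 2" and ".NET Core", then print the ID.
jq -r '.Images[] | { id: .ImageId, desc: .Description }
       | select(.desc?)
       | select(.desc | contains("Amazon Linux 2"))
       | select(.desc | contains(".NET Core"))
       | .id' /tmp/images-sample.json
```

Note the `select(.desc?)` step: it drops images with no `Description` field at all (like the third sample entry), which would otherwise make the `contains` checks fail with an error.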

Given the AMI, we’ll spin up a micro instance. For that I need a key pair, and since this is a throw-away project I’m just going to create one here:

$ aws ec2 create-key-pair --key-name aurora-test-keypair --query 'KeyMaterial' --output text > aurora-test-keypair.pem
$ chmod 400 aurora-test-keypair.pem

When creating instances, I like to test the command first using the --dry-run option. Note that you’ll need to grab the subnet ID from the cluster created above:

$ aws ec2 run-instances --instance-type t2.micro --image-id $AMI_ID --subnet-id <your_subnet_id> --key-name aurora-test-keypair --count 1 --dry-run

It looks like the command is well-formed and does what I want, so I create the instance for real and pipe the output to a file:

$ aws ec2 run-instances --instance-type t2.micro --image-id $AMI_ID --region us-east-1 --subnet-id <your_subnet_id> --key-name aurora-test-keypair --count 1 > instance.json

To SSH into the instance I’ll need the public IP address (the DNS name would work, too):

$ export INSTANCE_ID=$(jq -r .Instances[].InstanceId instance.json)
...~wait a bit~...
$ export INSTANCE_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --output text --query 'Reservations[*].Instances[*].PublicIpAddress')
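If you’d like to check those jq extractions without touching EC2, they can be exercised against a trimmed, made-up sample of the `run-instances` output (the instance ID below is fabricated for illustration):

```shell
# A trimmed sample of `aws ec2 run-instances` output; the ID is made up.
cat > /tmp/instance-sample.json <<'EOF'
{
  "Instances": [
    { "InstanceId": "i-0123456789abcdef0", "State": { "Name": "pending" } }
  ]
}
EOF

# Same extraction as above, against the sample file instead of instance.json.
INSTANCE_ID=$(jq -r '.Instances[].InstanceId' /tmp/instance-sample.json)
echo "$INSTANCE_ID"
```

The `-r` flag matters here: without it jq would print the ID wrapped in quotes, which would then break the `--instance-ids` argument passed to `describe-instances`.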

And, finally, let’s get onto the instance:

$ ssh -i aurora-test-keypair.pem ec2-user@$INSTANCE_IP

Creating the dotnet Core Application

Now on to some coding. Step one for any dotnet core app is to initialize the project:

[ec2-user@my-instance]$ dotnet new console -o aurora-demo
The template "Console Application" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on aurora-demo/aurora-demo.csproj...
Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
Generating MSBuild file /home/ec2-user/aurora-demo/obj/aurora-demo.csproj.nuget.g.props.
Generating MSBuild file /home/ec2-user/aurora-demo/obj/aurora-demo.csproj.nuget.g.targets.
Restore completed in 199.49 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
Restore succeeded.
[ec2-user@my-instance]$ cd aurora-demo/

Next, add the needed references for EntityFrameworkCore and MySql. Even though we’re not going to be doing any “design time” modeling, the project still needs a reference to Microsoft.EntityFrameworkCore.Design to allow us to use the CLI extensions (more on that below):

[ec2-user@my-instance aurora-demo]$ dotnet add package Microsoft.EntityFrameworkCore --version=2.0.3
Writing /tmp/tmpOirB8v.tmp
info : Adding PackageReference for package 'Microsoft.EntityFrameworkCore' into project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
log : Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
info : Package 'Microsoft.EntityFrameworkCore' is compatible with all the specified frameworks in project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
info : PackageReference for package 'Microsoft.EntityFrameworkCore' version '2.0.3' added to file '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
[ec2-user@my-instance aurora-demo]$ dotnet add package Microsoft.EntityFrameworkCore.Design --version=2.0.3
Writing /tmp/tmpLwRNxf.tmp
info : Adding PackageReference for package 'Microsoft.EntityFrameworkCore.Design' into project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
log : Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
info : Package 'Microsoft.EntityFrameworkCore.Design' is compatible with all the specified frameworks in project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
info : PackageReference for package 'Microsoft.EntityFrameworkCore.Design' version '2.0.3' added to file '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
[ec2-user@my-instance aurora-demo]$ dotnet add package MySql.Data --version 8.0.11
Writing /tmp/tmpj1hhEQ.tmp
info : Adding PackageReference for package 'MySql.Data' into project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
log : Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
info : Package 'MySql.Data' is compatible with all the specified frameworks in project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
info : PackageReference for package 'MySql.Data' version '8.0.11' added to file '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
[ec2-user@my-instance aurora-demo]$ dotnet add package MySql.Data.EntityFrameworkCore --version 8.0.11
Writing /tmp/tmpJpOeCZ.tmp
info : Adding PackageReference for package 'MySql.Data.EntityFrameworkCore' into project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
log : Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
info : Package 'MySql.Data.EntityFrameworkCore' is compatible with all the specified frameworks in project '/home/ec2-user/aurora-demo/aurora-demo.csproj'.
info : PackageReference for package 'MySql.Data.EntityFrameworkCore' version '8.0.11' added to file '/home/ec2-user/aurora-demo/aurora-demo.csproj'.

At this point the project file should look like this:

[ec2-user@my-instance aurora-demo]$ more aurora-demo.csproj 
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.0.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.0.3" />
    <PackageReference Include="MySql.Data" Version="8.0.11" />
    <PackageReference Include="MySql.Data.EntityFrameworkCore" Version="8.0.11" />
  </ItemGroup>
</Project>

EntityFrameworkCore provides some nice extensions to the dotnet command that may be enabled by hand-editing the project file. I used nano to make those changes:

[ec2-user@my-instance aurora-demo]$ nano aurora-demo.csproj
[ec2-user@my-instance aurora-demo]$ more aurora-demo.csproj 
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.0.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.0.3" />
    <PackageReference Include="MySql.Data" Version="8.0.11" />
    <PackageReference Include="MySql.Data.EntityFrameworkCore" Version="8.0.11" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.3" />
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
  </ItemGroup>
</Project>

The bit I added to enable the EF tooling is the second ItemGroup section, containing the two DotNetCliToolReference entries. This tells dotnet that I want to use extra tooling inside this project, and that the tooling will be managed like a package reference (i.e. via dotnet restore). Once dotnet Core 2.1 is supported this step will no longer be necessary, since it bundles these tools into the distribution.

The dotnet new command, above, also created a basic source code file for a console application. I just added a little code to test that the database is actually being used. Again, a little bit more time in nano:

[ec2-user@my-instance aurora-demo]$ nano Program.cs 
[ec2-user@my-instance aurora-demo]$ more Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

namespace aurora_demo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
            using (var db = new FooContext())
            {
                var foo = db.Foos.FirstOrDefault() ?? db.Foos.Add(new Foo()).Entity;
                Console.WriteLine(foo?.Bar);
                db.SaveChanges();
            }
        }
    }

    public class Foo
    {
        public int Id { get; set; }
        public string Bar { get; set; } = "Baz";
    }

    internal class FooContext : DbContext
    {
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseMySQL(System.Environment.GetEnvironmentVariable("FOO_CONNECTION_STRING"));
            base.OnConfiguring(optionsBuilder);
        }

        public DbSet<Foo> Foos { get; set; }
    }
}

Note that I’m pulling the connection string from an environment variable, which I’ll set later (a better way is to use Parameter Store). The code simply checks whether a Foo is in the database, creates one if not, and then prints the value of the Foo's Bar property.

Source code changes complete, let’s see how we do pulling down the referenced packages:

[ec2-user@my-instance aurora-demo]$ dotnet restore
Restoring packages for /home/ec2-user/aurora-demo/aurora-demo.csproj...
Restore completed in 367.65 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
Restore completed in 12.26 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
Restore completed in 5.03 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.

So far, so good. But does it build?

[ec2-user@my-instance aurora-demo]$ dotnet build
Microsoft (R) Build Engine version 15.7.177.53362 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
Restore completed in 50.87 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
Restore completed in 8.9 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
Restore completed in 6.07 ms for /home/ec2-user/aurora-demo/aurora-demo.csproj.
aurora-demo -> /home/ec2-user/aurora-demo/bin/Debug/netcoreapp2.0/aurora-demo.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:04.09

Indeed! Now, on to the interesting bit. As well as testing basic run-time SELECT and INSERT commands, I want to see whether the tooling works to set up and update the database schema. EF calls this support “migrations”. My first basic test is to have the tooling create the DB for me. But first I need to set up the connection string environment variable:

[ec2-user@my-instance aurora-demo]$ export FOO_CONNECTION_STRING="server=<your_aurora_cluster_dns_name>;port=3306;database=foo;uid=<your_user_name>;password=<your_password>;SSLMode=None" 

Specifying the connection must be done before attempting to update the database, obviously. Note the use of SSLMode=None; that's unfortunate, but current versions of Oracle's MySql entity framework library default to requiring SSL, and that isn't supported on my cluster.
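Connection strings are easy to typo, so one option is to assemble the string from its parts and sanity-check it before running anything against the database. The host, user, and password values below are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your cluster endpoint and credentials.
DB_HOST="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com"
DB_USER="admin"
DB_PASS="example-password"

export FOO_CONNECTION_STRING="server=${DB_HOST};port=3306;database=foo;uid=${DB_USER};password=${DB_PASS};SSLMode=None"

# Quick sanity check: every expected key=value pair should be present.
for key in server port database uid password SSLMode; do
  case "$FOO_CONNECTION_STRING" in
    *"${key}="*) ;;
    *) echo "missing ${key}" ; exit 1 ;;
  esac
done
echo "connection string looks well-formed"
```

This only checks that each key appears once with an `=` after it; it won’t catch a wrong hostname or password, but it does catch the common copy-paste failures (a dropped semicolon or a missing field).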

As I mentioned before, it would be safer and better to use the AWS Systems Manager Parameter Store (or Secrets Manager) for the connection string, but I’m just keeping things simple for this demo.

The EF tooling adds the ef sub-command to dotnet. Two of the sub-sub-commands it adds are migrations and database, which are used, respectively, to generate differential change sets (i.e. code) based on snapshots of the DbContext instances in the code, and to apply them to the database.

To create the initial version of the database we need a schema, and that’s what migrations handle. Creating a migration creates a snapshot of the current state of the code for the data context, and compares that to the previous state (which is empty for the initial migration). To take this first snapshot, I used the add sub-sub-sub-command.

[ec2-user@my-instance aurora-demo]$ dotnet ef migrations add Initial
Done. To undo this action, use 'ef migrations remove'

Note that last line of the message: if you want to make more changes to the information model (DbContext), you can simply remove the previous migration and start again. But we're good here, so let's just look at the project directory structure:

[ec2-user@my-instance aurora-demo]$ ls -R
.:
aurora-demo.csproj bin Migrations obj Program.cs
./bin:
Debug
./bin/Debug:
netcoreapp2.0
./bin/Debug/netcoreapp2.0:
aurora-demo.deps.json aurora-demo.dll aurora-demo.pdb aurora-demo.runtimeconfig.dev.json aurora-demo.runtimeconfig.json
./Migrations:
20180527012338_Initial.cs 20180527012338_Initial.Designer.cs FooContextModelSnapshot.cs
./obj:
aurora-demo.csproj.EntityFrameworkCore.targets aurora-demo.csproj.nuget.cache aurora-demo.csproj.nuget.g.props aurora-demo.csproj.nuget.g.targets Debug project.assets.json
./obj/Debug:
netcoreapp2.0
./obj/Debug/netcoreapp2.0:
aurora-demo.AssemblyInfo.cs aurora-demo.csprojAssemblyReference.cache aurora-demo.csproj.FileListAbsolute.txt aurora-demo.pdb
aurora-demo.AssemblyInfoInputs.cache aurora-demo.csproj.CoreCompileInputs.cache aurora-demo.dll

The Migrations folder contains the code-generated migration code -- code generated from code, is that a good thing? Yes, yes it is. Now that we have the migration ready, can we use Microsoft's code extended with Oracle's code to talk to Amazon's code?

Let’s try:

[ec2-user@my-instance aurora-demo]$ dotnet ef database update
Applying migration '20180527012338_Initial'.
Done.

Nice. But does it run?

[ec2-user@my-instance aurora-demo]$ dotnet run
Hello World!
Unhandled Exception: MySql.Data.MySqlClient.MySqlException: Reading from the stream has failed. ---> System.IO.IOException: Unable to read data from the transport connection: Connection timed out. ---> System.Net.Sockets.SocketException: Connection timed out
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at MySql.Data.MySqlClient.TimedStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at MySql.Data.MySqlClient.MySqlStream.ReadFully(Stream stream, Byte[] buffer, Int32 offset, Int32 count)
at MySql.Data.MySqlClient.MySqlStream.LoadPacket()
--- End of inner exception stack trace ---
at MySql.Data.MySqlClient.MySqlStream.LoadPacket()
at MySql.Data.MySqlClient.MySqlStream.ReadPacket()
at MySql.Data.MySqlClient.NativeDriver.Open()
at MySql.Data.MySqlClient.Driver.Open()
at MySql.Data.MySqlClient.Driver.Create(MySqlConnectionStringBuilder settings)
at MySql.Data.MySqlClient.MySqlPool.CreateNewPooledConnection()
at MySql.Data.MySqlClient.MySqlPool.GetPooledConnection()
at MySql.Data.MySqlClient.MySqlPool.TryToGetDriver()
at MySql.Data.MySqlClient.MySqlPool.GetConnection()
at MySql.Data.MySqlClient.MySqlConnection.Open()
at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.BufferlessMoveNext(Boolean buffer)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.MoveNext()
at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Boolean& found)
at lambda_method(Closure , QueryContext )
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.<>c__DisplayClass17_1`1.<CompileQueryCore>b__0(QueryContext qc)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.Execute[TResult](Expression query)
at Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider.Execute[TResult](Expression expression)
at System.Linq.Queryable.FirstOrDefault[TSource](IQueryable`1 source)
at aurora_demo.Program.Main(String[] args) in /home/ec2-user/aurora-demo/Program.cs:line 16

Oops. Not to worry, my cluster is a little special and can take some time to respond to the initial request. So, let’s just try again now that it’s had a chance to warm up:

[ec2-user@my-instance aurora-demo]$ dotnet run
Hello World!
Baz

Bingo! But did that response actually come from the database? Let’s see what MySql says:

[ec2-user@my-instance aurora-demo]$ mysql -h <your_aurora_cluster_dns_name> -u <your_user_name> -p -e "SELECT * FROM foo.Foos;"
Enter password:
+----+------+
| Id | Bar  |
+----+------+
|  1 | Baz  |
+----+------+

Yep, that’s the right data in the table. Let’s use MySql to change the information directly and make sure the code responds and reads the new value:

[ec2-user@my-instance aurora-demo]$ mysql -h <your_aurora_cluster_dns_name> -u <your_user_name> -p -e "UPDATE foo.Foos SET Bar='Bingo' WHERE Id = 1;"
Enter password:
[ec2-user@my-instance aurora-demo]$ mysql -h <your_aurora_cluster_dns_name> -u <your_user_name> -p -e "SELECT * FROM foo.Foos;"
Enter password:
+----+-------+
| Id | Bar   |
+----+-------+
|  1 | Bingo |
+----+-------+
[ec2-user@my-instance aurora-demo]$ dotnet run
Hello World!
Bingo

Fun. It’s pretty amazing that this works, and a testament to the strength and maturity of these technologies. While there are almost certainly issues I’ve yet to discover, taken with the high performance of dotnet core Lambdas, it’s looking like a production stack to me.

I’m more excited for Aurora Serverless than ever…
