How I handled possible null values from database rows in Go

A few weeks ago, I was working on one of the web services of my project, whose back-end is developed in Go with MySQL. For this web service, we had to display some data in the browser in a w2ui grid layout. The data comes from a SQL query that joins multiple tables.

We prepared a data struct for the web service, containing fields with built-in data types (int64, time.Time, string, etc.). But we found that the SQL query result could contain null values (NULL), and we had to deal with that.

Let’s start with an example, and let me explain how I dealt with the problem. We’ll use the basic information of an article. The rough structure, with data types, is as below:

Article
- Id           = int
- Title = string
- PubDate = datetime
- Body = text
- User = int

Let’s create the schema for it in MySQL:

use test;
CREATE TABLE Article(
`id` int NOT NULL AUTO_INCREMENT,
`title` varchar(100) NOT NULL,
`pubdate` datetime DEFAULT NULL,
`body` text,
`userid` int DEFAULT NULL,
PRIMARY KEY(`id`)
);
SELECT * FROM Article; -- Empty set

Now, let’s insert our very first article with only the title filled in:

INSERT INTO Article(`title`) VALUES("first article");

Let’s prepare a basic struct in Go to receive those values by scanning the SQL rows:

type Article struct {
	Id      int       `json:"id"`
	Title   string    `json:"title"`
	PubDate time.Time `json:"pub_date"`
	Body    string    `json:"body"`
	User    int       `json:"user"`
}

The complete Go program would be:

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

type Article struct {
	Id      int       `json:"id"`
	Title   string    `json:"title"`
	PubDate time.Time `json:"pub_date"`
	Body    string    `json:"body"`
	User    int       `json:"user"`
}

func main() {
	// parseTime=true lets the driver scan DATETIME columns into time.Time
	db, err := sql.Open("mysql", "user:pwd@/test?charset=utf8&parseTime=true")
	checkErr(err)
	defer db.Close()

	// query all articles
	rows, err := db.Query("SELECT * FROM Article")
	checkErr(err)
	for rows.Next() {
		var a Article
		err = rows.Scan(&a.Id, &a.Title, &a.PubDate, &a.Body, &a.User)
		checkErr(err)
		fmt.Printf("%#v", a)
	}
}

func checkErr(err error) {
	if err != nil {
		panic(err)
	}
}

Save it, compile, and run it.
You will see something like this:

panic: sql: Scan error on column index 2: unsupported Scan, storing driver.Value type <nil> into type *time.Time

Why?
Because we defined PubDate (at index 2 in our Scan call) with the time.Time data type, and the error says that the scanner is unable to store a NULL value into a *time.Time.

That was surprising! (I was expecting the zero value of time.Time, but NULL is a different story in Go.) After searching the issue, I realized that nil is its own thing in Go: because of Go’s static typing, you can’t assign nil to variables of the other built-in data types (int, string, time.Time, etc.).

Well, later I found that the database/sql package does provide structs such as NullInt64, NullString, and NullFloat64 to handle null values. Each of these structs carries one additional boolean field, Valid, which indicates whether the value is NULL or not.

Look at the implementation of one of these structs (NullInt64) here.

Now let’s change Article struct to handle null values properly.

package main
import (
"database/sql"
"fmt"
    "github.com/go-sql-driver/mysql"
)
type Article struct {
Id int `json:"id"`
Title string `json:"title"`
PubDate mysql.NullTime `json:"pub_date"`
Body sql.NullString `json:"body"`
User sql.NullInt64 `json:"user"`
}

The Article struct’s field types have now been modified. We also removed the import of the "time" package and changed the blank import _ "github.com/go-sql-driver/mysql" to a regular import, "github.com/go-sql-driver/mysql", because we now refer to mysql.NullTime directly for datetime fields that may be null.

You may also find alternative NullTime implementations depending on the driver you work with (for example, lib/pq has one here).

Build and run it!
You should see a result like this:

main.Article{Id:2, Title:"first article", PubDate:mysql.NullTime{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}, Valid:false}, Body:sql.NullString{String:"", Valid:false}, User:sql.NullInt64{Int64:0, Valid:false}}

So, we can do something like this:

if a.PubDate.Valid {
	// handle a.PubDate.Time
} else {
	// handle NULL
}

Hurray!

But hey, how can we render the data properly with JSON marshalling, when a field has two possible values, i.e. either null or an actual value?

For JSON marshalling, I added type aliases in my package, because, as the compiler will tell you, you can’t define methods on types from another package; i.e. you can’t write MarshalJSON or UnmarshalJSON directly on sql.NullString, sql.NullInt64, etc. in your own package.

If you want a user-defined data type to support JSON marshalling and unmarshalling, you should implement the MarshalJSON and UnmarshalJSON methods (the json.Marshaler and json.Unmarshaler interfaces) on that type.

In my code, aliases look as below:

// NullInt64 is an alias for sql.NullInt64 data type
type NullInt64 sql.NullInt64
// NullBool is an alias for sql.NullBool data type
type NullBool sql.NullBool
// NullFloat64 is an alias for sql.NullFloat64 data type
type NullFloat64 sql.NullFloat64
// NullString is an alias for sql.NullString data type
type NullString sql.NullString
// NullTime is an alias for mysql.NullTime data type
type NullTime mysql.NullTime

Since we now have these user-defined data types, we need a way to read database values into them; the mechanism Go provides for that is the Scanner interface from database/sql.

So, we shall define a Scan method on these user-defined types (our aliases) to implement the Scanner interface.

Below is the example for NullInt64.

// Scan implements the Scanner interface for NullInt64
func (ni *NullInt64) Scan(value interface{}) error {
	var i sql.NullInt64
	if err := i.Scan(value); err != nil {
		return err
	}
	// if nil then make Valid false
	if reflect.TypeOf(value) == nil {
		*ni = NullInt64{i.Int64, false}
	} else {
		*ni = NullInt64{i.Int64, true}
	}
	return nil
}

Inside the Scan implementation, we first scan the record, then check whether the value from the database is NULL; if it is, we mark the Valid flag as false. The same style can be applied to the other user-defined data types (NullFloat64, NullString, etc.) as well.

Now, we are going to provide implementation for json marshalling, which is as below:

// MarshalJSON for NullInt64
func (ni *NullInt64) MarshalJSON() ([]byte, error) {
	if !ni.Valid {
		return []byte("null"), nil
	}
	return json.Marshal(ni.Int64)
}

Inside the MarshalJSON implementation, we check whether the NullInt64 value’s Valid flag is true or false. If it’s true, we JSON-marshal the int64 data; otherwise we return the bytes of the string “null” (one of the primitive values in JSON).

You’ll see a similar style of code in the gist.

Here is the full code of gist:

So, that’s how I solved my problem. Hope sharing this would help!


UPDATE

As Alexandre Bodin suggested in the comments, we can embed the sql.NullXXX types into our own structs, so that we can totally avoid the hassle of re-implementing the Scan method. It was a good suggestion and I would totally recommend it! I have also switched to this style of code in my application.

So, now you can do something like this.

type NullInt64 struct {
	sql.NullInt64
}
// Now you don't have to take care of the Scan method;
// it is promoted from the embedded sql.NullInt64.
// Just focus on the JSON marshalling logic.

You should follow this fashion unless you want to do some customization inside the Scan and Value methods, but I don’t see many use cases for that. And it removes one more use of reflect, which I had in the customized Scan method in my initial version of the code.

FYI, the Scan method in the database/sql package uses the convertAssign function, which also uses reflect internally.

Anyway, I’ve made another gist to replicate this style in my program; you can find it here. So the updated code now looks like this:

db rows null handling by extending types in go

You might also want to have a look at this library, which takes care of nullable values in Go, as suggested by Stephen Wood and Steve Heyns! I wasn’t aware of it when I solved the problem, but it’s another option you might be interested in.

It’s great to see how helpful it is when you crack a problem and people then suggest further corrections to your solution. First we solve the problem and make it functional; sharing it afterwards makes it more correct and more polished.


I hope you enjoyed reading the article! Please check out my new post about working with the MySQL JSON data type and prepared statements in Go.