A series on Phantom — Part 1 : Getting started with Phantom

Flavian Alexandru
Outworkers
Jan 15, 2019

This article is the first in a series about one of our crown jewels: phantom, the official Scala driver for Apache Cassandra and Datastax Enterprise. From its humble beginnings as a bare-bones afternoon project in 2013, phantom has slowly but surely grown into an established Scala framework. Through its strong focus on ease of use, quality code and, perhaps most importantly, quality documentation, phantom is now the leading tool for integrating Cassandra into the Scala ecosystem.

Without further ado, it’s safe to say that if you are planning to use Cassandra with Scala, phantom is the weapon of choice. And if you’re reading this, you’re probably wondering how to integrate phantom and Cassandra into your ecosystem as fast as possible, to quickly prove the concept and get down to business.

At this point the assumption is you know your way around Cassandra. If you don’t, an excellent place to start is our series on Cassandra, available here.

First things first.

Integrating phantom in your project

All you need is a dependency on phantom in your favourite build tool. Phantom offers a variety of modules, including more esoteric options like Thrift and ZooKeeper support out of the box, but for most projects all you need is phantom-dsl. It contains the DSL itself and the connectors, letting you quickly create a type-safe database service, connect to Cassandra and get things done. At the time of writing, the current release is phantom 2.31.0, which we will use throughout. Just make sure the dependencies below are added to the right module in your SBT build definition and you’re ready to go.

Phantom is published both on Maven Central and on the publicly available Outworkers Maven repository. It supports Scala 2.10, 2.11 and 2.12, with 2.13 support coming soon.

val PhantomVersion = "2.31.0"

libraryDependencies ++= Seq(
  "com.outworkers" %% "phantom-dsl" % PhantomVersion
)
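
If you need any of the extra modules mentioned above, they follow the same dependency pattern. A hedged sketch (module names as of phantom 2.x; check the project README for the current list and availability):

// Optional phantom modules, added the same way as phantom-dsl.
// For most projects, phantom-dsl alone is enough.
libraryDependencies ++= Seq(
  "com.outworkers" %% "phantom-dsl"     % PhantomVersion,
  "com.outworkers" %% "phantom-streams" % PhantomVersion, // reactive streams support
  "com.outworkers" %% "phantom-thrift"  % PhantomVersion  // Thrift column support
)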

Now the following imports should work:

import com.outworkers.phantom.dsl._

Connecting to a Cassandra cluster

The connectors framework, included by default in the DSL, controls the way phantom connects to Cassandra. The underlying functionality is still based on the Datastax Java driver. Phantom adds a very thin layer of functionality to make things “just work” and provides sensible defaults so you don’t actually have to do any hard work.

Connectors work with both single-node deployments and multi-DC, service-discovery-based Cassandra installations, available via phantom-zookeeper support. For most single-node deployments, the ContactPoint and ContactPoints connectors shown below will get you started in seconds. All you need to do is pick a keyspace and you’re ready to go.

object Defaults {
  val hosts = Seq("10.10.5.20", "1.1.1.1")
  val Connector = ContactPoints(hosts).keySpace("whatever")
}

How connectors work

Why bother with connectors at all? Behind the scenes, phantom does significant plumbing work for you, including creating the keyspace with lightweight transactions, guaranteeing a just-in-time, thread-safe global initialisation of your session, and automatically injecting it into all your tables.

Phantom queries are turned into futures via methods like one(), future() and fetch(), and the signatures of these methods hide the very core of the functionality. Let’s look at the signature of the fetch method:

MyTable.select.where(_.id eqs someId).and(_.age > 5).fetch()

def fetch()(implicit session: Session, ctx: ExecutionContext): Future[Seq[Record]]

The implicit session

The implicit session is the actual cluster connection, injected by any Connector implementation, and it guarantees the just-in-time, thread-safe, single global initialisation of your session. Phantom does virtually all the heavy lifting for you, so you don’t need to worry about when to block, when you can start running queries, thread safety of session access, ensuring the keyspace exists, and so on. Everything should “just work”.
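
As a mental model only (this is a simplification, not phantom’s actual implementation), a connector is essentially a trait that lazily builds one session and exposes it implicitly:

import com.datastax.driver.core.{Cluster, Session}

// Simplified mental model of a connector. Phantom's real implementation
// also handles keyspace creation, pooling and configuration.
trait SimplifiedConnector {
  def cluster: Cluster

  // A lazy val gives thread-safe, just-in-time, exactly-once initialisation.
  lazy val session: Session = cluster.connect()

  // Any query method requiring an implicit Session picks this up.
  implicit def implicitSession: Session = session
}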

When you mix a connector trait into a table, it may appear that each table has its own session, or at least that every separate keyspace uses a different session. Before 1.10.0, the latter was true: a separate session was created for every keyspace in use. Since 1.10.0, phantom unifies database access through a single global session that is capable of using several keyspaces simultaneously. Most applications only use a single keyspace to begin with, though, so you needn’t worry about multiple sessions co-existing.

Creating your first table

You are now ready to create your very first table. Let’s assume we are modelling the CQL schema for a case class that looks like this:

import java.util.UUID
import org.joda.time.DateTime

case class User(
  id: UUID,
  email: String,
  name: String,
  registrationDate: DateTime
)

Here’s what the phantom DSL equivalent looks like:

import com.outworkers.phantom.dsl._

abstract class Users extends Table[Users, User] {
  object id extends UUIDColumn with PartitionKey
  object email extends StringColumn
  object name extends StringColumn
  object registrationDate extends DateTimeColumn
}

abstract class UserExamples extends Users {

  def insertRecord(user: User): Future[ResultSet] = {
    store(user)
      .consistencyLevel_=(ConsistencyLevel.ALL)
      .future()
  }

  def getById(id: UUID): Future[Option[User]] = {
    select.where(_.id eqs id).one()
  }
}

And there you have it: your very first phantom table, with two query methods defined on it to store and retrieve a user. Now, to make all of that work, we need to define our first connector.

import com.outworkers.phantom.dsl._

object Defaults {
  val connector = ContactPoint.local.keySpace("my_keyspace")
}

class MyDatabase(override val connector: CassandraConnection) extends Database[MyDatabase](connector) {
  object users extends UserExamples with connector.Connector
}

object MyDatabase extends MyDatabase(Defaults.connector)

This may seem a little convoluted, but you gain a great deal of functionality from this simple pattern. First of all, because the tables are objects inside the database implementation itself, you can instantiate any number of database instances, each with a different connector. This is especially useful if you later want to do something like the following, to run tests against an embedded Cassandra with phantom-sbt or phantom-docker.

object TestDatabase extends MyDatabase(ContactPoint.embedded.keySpace("my_keyspace"))

You can also apply any logic you like inside the definition of a connector. Say you want to use a local Cassandra in test mode, and otherwise provide a sequence of known IPs as seed nodes for the connector. To achieve that:

object CustomConnector {

  // testMode would be a Boolean flag derived from your application config.
  val hosts = if (testMode) {
    Seq("localhost")
  } else {
    Seq("10.124.123.2", "10.124.125.123") // or whatever your seed nodes are
  }

  val connector = ContactPoints(hosts).keySpace("my_keyspace")
}

object MySmartSwitchingDatabase extends MyDatabase(CustomConnector.connector)

When using the database class, you also get automated initialisation methods for the entire database, as well as automated truncation methods.

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object Demo {
  Await.result(MyDatabase.create.future(), 5.seconds)
}
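
With the schema in place, the query methods defined earlier work as you would expect. Here is a minimal usage sketch; the for-comprehension and the sample values are illustrative, not part of the original example:

import java.util.UUID
import org.joda.time.DateTime
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object UsageExample {

  val user = User(UUID.randomUUID(), "jane@example.com", "Jane", DateTime.now())

  // Insert a user, then read it back by its partition key.
  val roundTrip: Future[Option[User]] = for {
    _     <- MyDatabase.users.insertRecord(user)
    found <- MyDatabase.users.getById(user.id)
  } yield found
}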

The same approach is also very useful when running tests in, say, ScalaTest, using BeforeAndAfterAll semantics.

We can create a custom fixture for our tests which automatically creates and truncates the entire database before and after the tests, respectively.

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import org.scalatest.{BeforeAndAfterAll, Suite}

trait MySuite extends Suite with BeforeAndAfterAll with Defaults.connector.Connector {

  override def beforeAll(): Unit = {
    super.beforeAll()
    Await.result(TestDatabase.autocreate.future(), 5.seconds)
  }

  override def afterAll(): Unit = {
    super.afterAll()
    Await.result(TestDatabase.autotruncate.future(), 5.seconds)
  }
}

This marks the end of our introduction to phantom. We hope you’ve enjoyed it and will stay tuned for more in this series, where we will go into greater depth on phantom data modelling, structuring a database layer and an application, and using the integrated testkit and its advanced data-mocking gear to achieve a very high degree of test automation, among many other tricks of the trade.

If you enjoyed this article, follow us on Twitter and stay tuned for more: @outworkers. Outworkers is an elite marketplace for engineers with a unique out-staffing model. If you’re looking for high-level Scala expertise to transform your business and applications, give us a call and we will show you an incredible definition of engineering!

Want to learn more?

As official Datastax partners, Outworkers offers a comprehensive range of professional training services for Apache Cassandra and Datastax Enterprise, taking your engineering team from Cassandra newbies to full-blown productivity in record time. Our example-driven courses are the weapon of choice for companies of any size, and if you happen to be a Scala user, we will also throw in a professional training session on using phantom in your company. All of our face-to-face training courses come with free ongoing access to our online training material.

For enquiries and bookings, please contact us by email at office@outworkers.com.
