Easy Setup of a Fresh Local Development Environment with Docker

Faza Zulfika Permana Putra
3 min read · Aug 19, 2021


After running e2e and integration tests locally several times, I found myself needing a fresh environment, or finding a broken tool that had to be reconfigured or reinstalled. For some tools, it’s easy to clear, download, install, and run them again. For others, it’s a little tricky and takes time. Too lazy to repeat all those steps every time, I had an idea: why not run those tools in containers?
So here are my guidelines for setting up some containerized tools using Docker (you can read about it from this page).

Installing Docker

Set up a Docker network so our running containers can communicate

  • By default, each container we run in Docker is isolated, so it cannot communicate with the others.
  • To let our containers communicate with each other, we need to put them on the same configured network.
  • To create a network named application-tier, run → docker network create application-tier --driver bridge
  • To list all created networks → docker network ls
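Putting the steps above together, a quick sanity check after creating the network might look like this (assuming Docker is installed and the daemon is running):

```shell
# Create a bridge network for our containers to share
docker network create application-tier --driver bridge

# Confirm it shows up in the list of networks
docker network ls --filter name=application-tier

# Inspect it to see which containers are attached (none yet)
docker network inspect application-tier
```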

Showing a running container’s logs

  • docker container logs {container-name}
  • Example: docker container logs redis

Setting up and running Redis

  • Create a Redis container (with an empty password) → docker container create --name redis --network application-tier -p 6379:6379 -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis
  • Start the Redis container → docker container start redis
  • Run redis-cli inside the running Redis container → docker exec -it redis redis-cli
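As a quick smoke test, the whole Redis sequence can be run in one go (using the same container and network names as above); a PONG reply confirms the server is up:

```shell
docker container create --name redis --network application-tier \
  -p 6379:6379 -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis
docker container start redis

# PING should answer PONG once the server is ready
docker exec -it redis redis-cli PING
```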

Setting up and running Redis Sentinel

  • Create a Redis Sentinel container → docker container create --name redis-sentinel --network application-tier -p 26379:26379 -e REDIS_MASTER_HOST=redis bitnami/redis-sentinel (the master host is redis, the container name on our network, not localhost)
  • Start the Redis Sentinel container → docker container start redis-sentinel
  • Run redis-cli inside the running Sentinel container → docker exec -it redis-sentinel redis-cli -p 26379
  • Run redis-cli from the Sentinel container and connect to the Redis master → docker exec -it redis-sentinel redis-cli -h redis
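To check that Sentinel actually sees the master, we can ask it directly. This assumes the redis container from the previous section is running on the same network; mymaster is the bitnami image’s default master-set name, so adjust it if you configured a different one:

```shell
# Ask Sentinel which masters it is monitoring
docker exec -it redis-sentinel redis-cli -p 26379 SENTINEL masters

# Or just the address of the master named "mymaster"
docker exec -it redis-sentinel redis-cli -p 26379 \
  SENTINEL get-master-addr-by-name mymaster
```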

Setting up and running MongoDB

  • Create a MongoDB container without auth → docker container create --name mongo --network application-tier -p 27017:27017 mongo
  • Create a MongoDB container with auth → docker container create --name mongo --network application-tier -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongodb -e MONGO_INITDB_ROOT_PASSWORD=mongodb mongo
  • Start the MongoDB container → docker container start mongo
  • To verify the running MongoDB, we can use a Mongo client tool; personally, I’m using Robo 3T (download link)
  • You can connect to the running MongoDB at localhost:27017
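If you prefer the command line over Robo 3T, a quick check from inside the container works too. Newer mongo images ship the mongosh shell, while older ones have the legacy mongo shell, so use whichever your image provides:

```shell
docker container start mongo

# Ping the server from inside the container (use `mongo` on older images)
docker exec -it mongo mongosh --eval 'db.runCommand({ ping: 1 })'
```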

Setting up and running RabbitMQ

  • Create a RabbitMQ container with the default credentials (guest/guest) → docker container create --name rabbitmq --network application-tier -p 5672:5672 -p 15672:15672 rabbitmq:management-alpine
  • Create a RabbitMQ container with custom credentials → docker container create --name rabbitmq --network application-tier -p 5672:5672 -p 15672:15672 -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:management-alpine
  • Start the RabbitMQ container → docker container start rabbitmq
  • To verify the running RabbitMQ, we can open the RabbitMQ management page at localhost:15672
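Besides the management page, rabbitmqctl inside the container gives a quick health check from the terminal:

```shell
docker container start rabbitmq

# Reports node status, listeners, memory usage, etc.
docker exec -it rabbitmq rabbitmqctl status

# Listing virtual hosts is a simple connectivity check
docker exec -it rabbitmq rabbitmqctl list_vhosts
```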

Setting up and running ZooKeeper

  • To run Apache Kafka, we must run ZooKeeper first.
  • Create a ZooKeeper container → docker container create --name zookeeper --network application-tier -e ALLOW_ANONYMOUS_LOGIN=yes -p 2181:2181 bitnami/zookeeper
  • Start the ZooKeeper container → docker container start zookeeper
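To verify that ZooKeeper is healthy, the zkServer.sh script bundled in the bitnami image can report the server’s state; for a single node like ours it should report standalone mode:

```shell
docker container start zookeeper

# Should print the server mode (standalone for a single node)
docker exec -it zookeeper zkServer.sh status
```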

Setting up and running Apache Kafka

  • Start ZooKeeper first!
  • Create a Kafka container → docker container create --name kafka --network application-tier -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -p 9092:9092 bitnami/kafka
  • Start the Kafka container → docker container start kafka
  • To verify the running Kafka, we can use a Kafka client tool (download link)
  • To publish Kafka messages, we can use the running Kafka container → docker exec -it kafka kafka-console-producer.sh --broker-list localhost:9092 --topic {topic-name}
  • To consume Kafka messages, we can use the running Kafka container → docker exec -it kafka kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic {topic-name} --from-beginning
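Before producing and consuming, you may want to create the topic explicitly. The kafka-topics.sh script ships inside the bitnami/kafka image alongside the producer and consumer scripts; my-topic below is just a placeholder name:

```shell
# Create a topic with a single partition and replica
docker exec -it kafka kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1 --topic my-topic

# List topics to confirm it exists
docker exec -it kafka kafka-topics.sh --list --bootstrap-server localhost:9092
```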

Setting up and running Postgres

  • Create a Postgres container → docker container create --name postgres --network application-tier -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:alpine
  • Start the Postgres container → docker container start postgres
  • To verify the running Postgres, we can use a Postgres client tool; personally, I’m using pgAdmin
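If you don’t have pgAdmin handy yet, psql inside the container is the quickest check:

```shell
docker container start postgres

# Connect as the postgres user and run a trivial query
docker exec -it postgres psql -U postgres -c 'SELECT version();'
```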

Setting up and running pgAdmin

  • pgAdmin is a web-based Postgres client for accessing Postgres.
  • Create a pgAdmin container → docker container create --name pgadmin --network application-tier -p 5050:80 -e PGADMIN_DEFAULT_EMAIL={your awesome-email} -e PGADMIN_DEFAULT_PASSWORD=postgres dpage/pgadmin4
  • Start the pgAdmin container → docker container start pgadmin
  • Open localhost:5050 and log in with your email and password. Create a new server; the Postgres host is postgres (not localhost, since pgAdmin reaches it over the Docker network), with port 5432.
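When you want that fresh environment again, everything above can be torn down and recreated in seconds, which is the whole point of this setup. A cleanup sketch, using the container names from this article:

```shell
# Stop and remove all the containers from this guide
for c in redis redis-sentinel mongo rabbitmq zookeeper kafka postgres pgadmin; do
  docker container stop "$c" 2>/dev/null
  docker container rm "$c" 2>/dev/null
done

# Finally remove the shared network
docker network rm application-tier
```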
