#MERN web stack #AWS EC2
This is a quick demo of how to build a full-stack web app, from front end to back end, and make it interact with MongoDB Atlas.
Whenever my server is running on EC2, this link will work; otherwise you will see a blank list :D
#Kafka #Zookeeper #Kafka Manager #Logstash #Elasticsearch #Kibana #Python Faker library
Nowadays, large amounts of data are generated from numerous sources, such as web services, digital media, and sensor logs, yet only a small portion of it has been well managed or utilized to create value. This article may be a bit old but is still worth a quick read — The Data Made Me Do It.
It has become more challenging than ever to read large amounts of data, process it, and act on it.
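As a taste of what such a pipeline ingests, here is a minimal sketch of generating fake log events as JSON lines, the kind of stream you could pipe into a Kafka producer or a Logstash input. The article's demo uses the Python Faker library; this sketch sticks to the standard library, and the field names (`service`, `level`, `latency_ms`) are illustrative, not from the article.

```python
import json
import random
from datetime import datetime, timezone

# Illustrative values only; the real demo generates richer records
# with the Python Faker library.
SERVICES = ["web", "media", "sensor"]
LEVELS = ["INFO", "WARN", "ERROR"]

def fake_event(rng=random):
    """Build one fake log event as a plain dict."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": rng.choice(SERVICES),
        "level": rng.choice(LEVELS),
        "latency_ms": round(rng.uniform(1, 500), 1),
    }

def event_stream(n):
    """Yield n events as JSON lines, ready to feed a Kafka producer
    or a Logstash stdin/file input."""
    for _ in range(n):
        yield json.dumps(fake_event())
```

Emitting one JSON document per line keeps the generator decoupled from the transport: the same stream works whether Logstash tails a file or a Kafka producer sends each line as a message.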
In this article, I’m trying to demonstrate:
Recently I’ve been exploring how to deploy my app to different platforms. Here’s how I built and deployed a Selfie App (built with Node.js) on Heroku. It’s great to see my app working on a cloud platform, and it makes sharing how it works with other people so much easier.
My app is quite simple, so the choice of platform may not make much difference for deployment. This article showcases how easily we can deploy an app to a cloud service platform like AWS.
This is a showcase of how I package my previous Selfie App in a Docker container. Come check it out: https://hub.docker.com/r/ting11222001/selfieapp
I’d heard about Docker for a while, but today I’m going to package my Selfie App in a Docker container (you can read more about it here) and push it to Docker Hub (a place where we can share our app’s container images publicly, or privately with a subscription plan), so other people can easily download it (i.e. pull the container image) and run it on their machines. …
This is a data selfie app built with the Express application framework in Node.js.
Node allows us to write server-side code within a runtime environment, so we can run our code directly in the terminal. It also has plenty of great features: for example, it supports asynchronous interactions between client and server, and it comes with a package manager called npm (Node Package Manager) that makes managing dependencies much easier.
It’s neat and somehow reminds me a lot of Flask in Python, feeling out the logic behind Express in Node.js …
#Web Scraping (Python) #Database (MariaDB) #API (Django) #Visualization (ECharts)
Recently my friends and I created an online dashboard to showcase the latest online and offline classes offered in Taiwan.
Here’s how it looks:
It was a six-week, on-and-off group project, and before it we barely knew how to code. It is our milestone of learning to code together, and I’m so grateful that we finally made it!
Each of us needed to scrape 2–3 websites with Python’s BeautifulSoup and Selenium, clean the data, and wrap all the code into a testable Python script. Some of us also helped with setting up a VM server, designing the frontend of the application, building the entire Django project and its API, and finally deploying the web app with the Apache web server. …
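To give a flavour of the scraping step, here is a minimal BeautifulSoup sketch that parses a course listing out of an HTML snippet. The HTML structure, class names, and fields are invented for illustration; the real sites we scraped looked different, and the Selenium-driven pages needed a live browser as well.

```python
from bs4 import BeautifulSoup

# A made-up course listing standing in for a real page's HTML.
HTML = """
<ul class="course-list">
  <li><a href="/course/1">Intro to Python</a><span class="price">NT$1200</span></li>
  <li><a href="/course/2">Web Scraping 101</a><span class="price">NT$900</span></li>
</ul>
"""

def parse_courses(html):
    """Extract title, link, and price from each listing item."""
    soup = BeautifulSoup(html, "html.parser")
    courses = []
    for item in soup.select("ul.course-list li"):
        courses.append({
            "title": item.a.get_text(strip=True),
            "url": item.a["href"],
            "price": item.select_one("span.price").get_text(strip=True),
        })
    return courses
```

Keeping the parsing in a pure function of the HTML string is what made our scripts testable: fetching (with requests or Selenium) and parsing can then be exercised separately.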
Using Python to connect to the Taiwan Government PM2.5 open data API, with scheduled real-time updates to MongoDB — Part 2
This time I’m using the same PM2.5 open data API (used in Part 1) to showcase how to refresh real-time data into MongoDB every 2 minutes (the interval at which the government’s portal refreshes its API). The strength of MongoDB is that it’s simple to use, especially with JSON document data, which makes connecting to open data much easier. …
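A minimal sketch of that refresh loop, assuming the API returns records with fields like `SiteName`, `County`, and `PM2.5` (my guesses at the schema, not confirmed here). The actual fetch (e.g. with requests) and the MongoDB write (e.g. `collection.insert_many`) are passed in as callables, so the scheduling logic stays testable without a network or a database.

```python
import time
from datetime import datetime, timezone

def to_documents(records):
    """Convert raw API records into MongoDB-ready documents, stamping
    each batch so repeated refreshes can be told apart."""
    fetched_at = datetime.now(timezone.utc).isoformat()
    docs = []
    for r in records:
        docs.append({
            "site": r.get("SiteName"),
            "county": r.get("County"),
            "pm25": float(r["PM2.5"]) if r.get("PM2.5") else None,
            "fetched_at": fetched_at,
        })
    return docs

def refresh(fetch, store, interval_seconds=120, rounds=None):
    """Run fetch -> transform -> store on a fixed schedule. The
    2-minute default matches the portal's own refresh rate; rounds=None
    loops forever."""
    done = 0
    while rounds is None or done < rounds:
        store(to_documents(fetch()))
        done += 1
        if rounds is None or done < rounds:
            time.sleep(interval_seconds)
```

In production the `store` callable would wrap a pymongo collection; a cron job or a scheduler library could replace the sleep loop.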
Using Python to connect to the Taiwan Government PM2.5 open data API and upload batch data to MongoDB — Part 1
MongoDB is currently the most popular NoSQL database in the world, and it is quite simple to use.
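For the batch upload itself, a simple sketch: split the documents into fixed-size chunks so each pymongo `insert_many` call is one reasonable round trip. The database and collection names in the comment are placeholders, not the ones from the article.

```python
def batches(docs, size=500):
    """Split a list of documents into fixed-size chunks, one
    insert_many call per chunk."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

# Usage with pymongo (not executed here; needs a running MongoDB, and
# the database/collection names are placeholders):
#   from pymongo import MongoClient
#   coll = MongoClient()["air_quality"]["pm25"]
#   for chunk in batches(docs):
#       coll.insert_many(chunk)
```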
Taiwan River Pollution Status Data Visualization with Elasticsearch and Kibana
I’m always fascinated by data analytics tools like the Elastic Stack, Splunk, etc. This time I will use two components of the well-known Elastic Stack, Elasticsearch and Kibana, to showcase how we can utilize the speedy store-and-search features of Elasticsearch and then use Kibana to give shape to our data.
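Before Kibana can chart anything, the documents have to land in Elasticsearch. Here is a minimal sketch of shaping records for Elasticsearch's `_bulk` API; the index name and the record fields are my own placeholders, not taken from the article's dataset.

```python
import json

def to_bulk_ndjson(records, index="river-pollution"):
    """Shape records into the NDJSON body expected by Elasticsearch's
    _bulk API: an action line, then the document, for each record."""
    lines = []
    for record in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(record))
    return "\n".join(lines) + "\n"

# The resulting string is POSTed to <es-host>/_bulk with the header
# Content-Type: application/x-ndjson.
```

Bulk indexing is what makes the "speedy store" part practical: one request can carry thousands of documents instead of one HTTP round trip each.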
I will create a Kibana dashboard to present river pollution status in Taiwan. The dashboard will cover: