<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Luka Stosic on Medium]]></title>
        <description><![CDATA[Stories by Luka Stosic on Medium]]></description>
        <link>https://medium.com/@lukastosic?source=rss-60ae27ed05ab------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*F6h3inRfhhbaKcZXEdyHjg.png</url>
            <title>Stories by Luka Stosic on Medium</title>
            <link>https://medium.com/@lukastosic?source=rss-60ae27ed05ab------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 24 Jul 2017 15:20:48 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/@lukastosic" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Traefik & Docker — reverse proxy and much much more]]></title>
            <link>https://medium.com/@lukastosic/traefik-docker-reverse-proxy-and-much-much-more-a39b24b9d959?source=rss-60ae27ed05ab------2</link>
            <guid isPermaLink="false">https://medium.com/p/a39b24b9d959</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[traefik]]></category>
            <dc:creator><![CDATA[Luka Stosic]]></dc:creator>
            <pubDate>Tue, 21 Mar 2017 23:25:32 GMT</pubDate>
            <atom:updated>2017-03-21T23:25:32.116Z</atom:updated>
            <content:encoded><![CDATA[<h4>OK, so you have your beautiful web application and you have packaged and deployed it as Docker containers, but how do you expose it to the world?</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DZcLoOo0Jpu9jSaBw9aF6A.jpeg" /></figure><p>For some time now we have been delivering projects to our clients built on an architecture very similar to microservices. We used a separation primarily between the presentation layer (web site), the backend system (REST API) and the database.</p><p>We treat every part as an individual unit: the website contains only the presentation layer, while the REST API (sometimes several REST APIs) handles all business logic. The database: sometimes one, sometimes many (depending on how we executed the separation of concerns).</p><p>During development and testing we use Docker. Docker gives us confidence that we are testing on the same “environment” as production.</p><p>For deployments to our central TEST environment we use <strong>docker-compose</strong> to quickly start up the whole environment with all its services. This method is pretty straightforward: we specify which services to run (images, volume mappings, etc.) and define which ports to expose from the docker containers to the host system. Then we map it in our internal DNS and we have nice, working access to our TEST environment.</p><h3>Multiple sub-domains with different deployed versions</h3><p>On our latest project, the requirement (after the first phase) was to have:</p><p>- multiple environments running different versions;<br>- every environment running on its own sub-domain;<br>- all of those environments covered by a proper SSL/TLS certificate.</p><p>For example: env1.coolapp.com would contain the release from sprint 10, while env2.coolapp.com would contain the release from sprint 11. 
At one point env1 would be updated to sprint 11 (or even 12, depending on the need).</p><h4>Step 1: docker-compose with .env file</h4><p>When using docker-compose you can create a .env file (that is just the default name, of course; you can use a different one, but then you have to specify it when executing the docker-compose command).</p><p>The .env file can contain “variables” that will be loaded into docker-compose (very similar to “environment” variables).</p><p>This gives us the option of keeping the same docker-compose file: just by using a different .env file we can point to a different image version and give the environment a different name.</p><h4>Step 2: Reverse proxy</h4><p>We needed some kind of reverse proxy that accepts traffic coming to a specific sub-domain and routes that traffic to the appropriate docker environment.</p><p>A reverse proxy can be implemented in many ways: we could build a custom service, or we could use Nginx. But it would be really cool if there were an already existing tool with easy configuration, <strong>dynamic </strong>discovery of new sub-domains, and automatic SSL/TLS certificates for every new environment… oh, and while I am writing my wish list, I would also like load balancing. Yeah, that would be nice…</p><h3>Traefik</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*_fw6B_jhy0t9I_e1db-KOg.png" /><figcaption>Super cute logo :)</figcaption></figure><p>Traefik (<a href="https://traefik.io">traefik.io</a>) is a wonderful piece of software written in Go that gives us everything we need, and it can do much, much more.</p><p>In essence it is a <strong>dynamic </strong>reverse proxy. It can connect to many popular deployment platforms (Docker, Swarm, Mesos, Kubernetes, etc.) and obtain information about services (containers). 
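</p><p>Before diving into Traefik itself, here is a minimal sketch of the Step 1 mechanism (the file contents and names are illustrative, not our actual project files):</p><pre># .env<br>ENV_NAME=env1<br>RELEASE_TAG=sprint10<br><br># docker-compose.yml (fragment)<br>version: &quot;2&quot;<br>services:<br>  restapi:<br>    image: coolapp/restapi:${RELEASE_TAG}<br>    container_name: ${ENV_NAME}-restapi</pre><p>Moving env1 from sprint 10 to sprint 11 is then just a one-line change of RELEASE_TAG in the .env file, followed by a docker-compose up -d.</p><p>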
Traefik uses a <strong>.toml</strong> file (a simple text config file) for configuration.</p><p>Traefik is built around rules that connect a “Frontend” with a “Backend”. In Traefik terms, a “Frontend” is an internet domain like api.myapp.com, while a “Backend” is our deployed web service. In this case we can set a rule in Traefik that traffic for Host:api.myapp.com should be routed to our api service container.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rb3X-B9O0b8w_i7nPOBy7g.png" /><figcaption>Overview of Traefik usage.</figcaption></figure><p>All those rules can be set in the .toml file. But here comes a very interesting part: they don’t have to be! Rules can be defined in <strong>labels </strong>on docker containers, and Traefik will pick them up dynamically.</p><h4>Set docker service labels to “push” rules into Traefik</h4><p>In our case we have a service defined in our docker-compose file, and that service carries docker labels with content like this (there are a lot more labels; this is just a small sample):</p><pre>labels:<br>      - &quot;traefik.backend=restapi&quot;<br>      - &quot;traefik.frontend.rule=Host:api.coolapp.com&quot;<br>      - &quot;traefik.enable=true&quot;<br>      - &quot;traefik.port=8080&quot;</pre><p>This marks the docker service as a backend named “restapi” and creates a rule in Traefik that all traffic coming to api.coolapp.com should be routed to port 8080 of this docker service.</p><p>In some cases you don’t need to expose a docker service to Traefik (for example, a backend api that shouldn’t be reachable from outside); for those you simply omit the “enable” label.</p><h4>Start up Traefik</h4><p>We start up Traefik as a docker container separate from our environments. 
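</p><p>As an illustration, the Traefik container itself can be described in a small docker-compose file of its own, roughly like this (the image tag and paths are a sketch, not our exact setup):</p><pre>version: &quot;2&quot;<br>services:<br>  traefik:<br>    image: traefik:1.2<br>    ports:<br>      - &quot;80:80&quot;      # http entry point<br>      - &quot;443:443&quot;    # https entry point<br>      - &quot;8080:8080&quot;  # Traefik web UI<br>    volumes:<br>      # let Traefik talk to the Docker daemon<br>      - /var/run/docker.sock:/var/run/docker.sock<br>      # configuration and certificate storage<br>      - ./traefik.toml:/etc/traefik/traefik.toml<br>      - ./acme.json:/etc/traefik/acme.json</pre><p>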
When we start it up we map docker.sock so Traefik can communicate with Docker, and we also give it a very simple .toml file:</p><pre>logLevel = &quot;DEBUG&quot;<br>defaultEntryPoints = [&quot;http&quot;, &quot;https&quot;]<br><br># WEB interface of Traefik - it will show web page with overview of frontend and backend configurations <br>[web]<br>address = &quot;:8080&quot;<br><br># Connection to docker host system (docker.sock)<br>[docker]<br>domain = &quot;mycoolapp.com&quot;<br>watch = true<br># This will hide all docker containers that don&#39;t have explicitly  <br># set label to &quot;enable&quot;<br>exposedbydefault = false<br><br># Force HTTPS<br>[entryPoints]<br>  [entryPoints.http]<br>  address = &quot;:80&quot;<br>    [entryPoints.http.redirect]<br>    entryPoint = &quot;https&quot;<br>  [entryPoints.https]<br>  address = &quot;:443&quot;<br>    [entryPoints.https.tls]<br>  <br># Let&#39;s encrypt configuration<br>[acme]<br>  email=&quot;email@mycoolapp.com&quot;<br>  storage=&quot;/etc/traefik/acme.json&quot;<br>  entryPoint=&quot;https&quot;<br>  acmeLogging=true<br>  onDemand=true<br>  OnHostRule=true</pre><p>The beginning of the file is mostly self-explanatory: it sets the log level, enables both http and https, contains a couple of lines that connect it to the docker host, and finally forces HTTPS (all traffic that comes to port 80 is redirected to 443).</p><p>But the last part, <strong>acme</strong>, is really neat. This is the connection to the Let’s Encrypt service, and the best part of all: it is completely dynamic. For every Host rule (domain/sub-domain) that appears in Traefik, it will go to Let’s Encrypt and obtain a key and certificate for that host configuration (and store them in the acme.json file).</p><p>As you can see, the .toml configuration doesn’t contain anything about our sub-domains and docker-compose environments. It simply “waits” for any incoming rule that is “pushed” when docker-compose starts up a new environment. 
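</p><p>Combined with the .env approach from Step 1, every environment can push its own sub-domain rule. A hypothetical label set for that would look like:</p><pre>labels:<br>  - &quot;traefik.enable=true&quot;<br>  - &quot;traefik.backend=${ENV_NAME}-restapi&quot;<br>  - &quot;traefik.frontend.rule=Host:${ENV_NAME}.coolapp.com&quot;<br>  - &quot;traefik.port=8080&quot;</pre><p>Starting the compose file with ENV_NAME=env1 registers env1.coolapp.com in Traefik; starting another copy with ENV_NAME=env2 registers env2.coolapp.com, with no change to the Traefik configuration itself.</p><p>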
Traefik then generates all the rules “on-the-fly” and (if necessary) obtains an SSL/TLS certificate for every environment.</p><h4>Load balancing</h4><p>This one is simple: load balancing is just there, “out-of-the-box”. You can specify which load-balancing algorithm should be used, but even if you don’t specify anything and simply scale up your docker-compose service, load balancing is up and running.</p><h3>Instead of conclusion</h3><p>Traefik solves our problems really well. It is built with dynamic environments in mind, so lots of things just work out of the box without much configuration. Other features like Let’s Encrypt integration, load balancing, circuit breakers, etc. make it even more appealing.</p><p>We still have a long way to go: exploring more features, trying it out in a Docker Swarm environment, and so on. But for now we are very satisfied.</p><p>Give it a try, if nothing else, at least because of the cute logo :)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a39b24b9d959" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Continuous integration workflow]]></title>
            <link>https://medium.com/@lukastosic/continuous-integration-workflow-91fd83b8d69a?source=rss-60ae27ed05ab------2</link>
            <guid isPermaLink="false">https://medium.com/p/91fd83b8d69a</guid>
            <category><![CDATA[git]]></category>
            <category><![CDATA[jenkins]]></category>
            <category><![CDATA[bitbucket]]></category>
            <category><![CDATA[jira]]></category>
            <category><![CDATA[github]]></category>
            <dc:creator><![CDATA[Luka Stosic]]></dc:creator>
            <pubDate>Thu, 11 Aug 2016 15:41:31 GMT</pubDate>
            <atom:updated>2016-08-11T15:51:58.596Z</atom:updated>
            <content:encoded><![CDATA[<p><em>How we do it @ INFOdation</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*D2FhG7IKXjCtDwIjk3Jvcg.jpeg" /></figure><p>Having a proper continuous integration flow is the key to reducing regressions, finding bugs as soon as possible, keeping the main code base clean, and so on.</p><p>But before we start, let’s try to answer a question:</p><blockquote>What is continuous integration?</blockquote><p>Continuous integration as a term was coined in the early 1990s to capture the idea that developers should frequently merge their working code into the main code base.</p><p>It was introduced as a way to avoid long-running development jobs that are never checked against the main code base, the kind that lead to <em>hair-pulling</em> merges, lots of build failures, and long fixing times for something that is already <em>done</em>.</p><h3>Simple idea, but it has big implications</h3><p>If you want to perform continuous integration successfully, you need to cover several areas.</p><h4><strong>Fast development time</strong></h4><p>Development tasks (features) should be defined and split so that developing one feature takes the least possible time. This ensures short development times and frequent merges into the main code base. Potential problems and regressions are reduced because new items are just small chunks of work.</p><h4><strong>Automated tests</strong></h4><p>It is difficult to cover 100% of any application with completely automated tests for all possible and impossible combinations. But you should always aim to cover as much as possible. There are lots of testing frameworks that can help you cover the back-end and front-end parts of the application.</p><h4><strong>Test environment should be as similar as possible to production</strong></h4><p>Test environments should be configured to mimic production environments as closely as possible: same OS version, hardware platform, pre-installed software, configurations, etc. 
This step should eliminate a lot of cases of “It works on my machine…”</p><h4><strong>Automated builds</strong></h4><p>This step ensures the <em>continuous</em> part of the term Continuous Integration. There are several software platforms that perform automated builds; one of the most popular open-source platforms is <a href="https://jenkins.io/">Jenkins</a>. The CI platform’s job is to detect changes in the code base and automatically perform build and test actions.</p><h3>INFOdation CI flow</h3><h4><strong>Everything starts with job planning</strong></h4><p>At INFOdation we aim to split development jobs into tickets that shouldn’t take more than 2 days to complete. All automated tests are defined at the beginning, so the developer knows what kind of tests he needs to create to cover the business-logic parts.</p><p>For job planning we use <a href="https://www.atlassian.com/software/jira">JIRA </a>and for development documentation we use <a href="https://www.atlassian.com/software/confluence">Confluence</a>.</p><h4><strong>Git for successful code management</strong></h4><p>For code management we use Git, specifically <a href="https://www.atlassian.com/software/bitbucket">Bitbucket</a>, where we keep our central code repositories.</p><p>In our everyday development flow we tend to use the popular <a href="http://nvie.com/posts/a-successful-git-branching-model/">GitFlow</a> method.</p><h4>Quick word about GitFlow</h4><p>We use 2 main branches: <strong>master </strong>and <strong>develop</strong>.</p><p>The <strong>develop</strong> branch is used in everyday work. The <strong>master </strong>branch contains only finished versions that will be published/deployed to the end customer.</p><ul><li>When a developer picks up a ticket to work on, he starts from the latest develop branch and creates a feature branch with a specific naming pattern: <em>feature/&lt;ticket_name_from_jira&gt;</em>. 
This makes it easy to track what is in progress when we look at the repository.</li><li>When work on the feature is finished, the code from the feature branch is merged into the develop branch. Any merge conflicts are resolved on the feature branch before merging.</li><li>When the sprint is close to its end, we create a <strong>release </strong>branch where we perform final tuning, stress testing, any remaining bug fixing, etc.</li><li>After the release is successfully tested, it is tagged and merged into both the <strong>master </strong>and <strong>develop</strong> branches.</li></ul><h4>Jenkins for automated builds</h4><p>We use the <em>webhook </em>mechanism of Bitbucket, which notifies our Jenkins CI system that new code is available in the central repository.</p><p>Jenkins is set up to take the latest code from Bitbucket and perform a build and tests. Specific configurations differ from project to project, depending on requirements and environments.</p><p>The usual configuration is that Jenkins builds and deploys from the <strong>develop </strong>and <strong>release </strong>branches for additional manual user testing, while the jobs that build <strong>feature </strong>branches only perform the build and automated tests (no deployment).</p><h4>Code reviews on pull request</h4><p>The pull request is a very important part of keeping the main code base (the develop branch) clean.</p><p>A developer does his development on a <strong>feature </strong>branch and performs testing on his development platform, while Jenkins also executes the build and automated tests.</p><p>But the developer cannot merge directly into the <strong>develop </strong>branch. Instead he must open a <em>Pull request</em>: a notification that the development is tested and ready to be merged. The pull request is sent as a notification to a set of users who are defined as <em>reviewers</em>. 
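</p><p>As a sketch, the GitFlow cycle described above maps onto plain git commands like these (the ticket and release names are illustrative):</p><pre># start a feature from the latest develop<br>git checkout develop<br>git pull<br>git checkout -b feature/COOL-123<br><br># ... commit work, push, and open a pull request<br># targeting develop; conflicts are resolved on the<br># feature branch before the merge ...<br><br># near the end of the sprint: cut a release branch<br>git checkout -b release/sprint-11 develop<br><br># after final testing: tag and merge into master and develop<br>git checkout master<br>git merge --no-ff release/sprint-11<br>git tag -a v11 -m &quot;Sprint 11 release&quot;<br>git checkout develop<br>git merge --no-ff release/sprint-11</pre><p>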
The reviewers’ job is to do a last-minute check of the code to make sure there are no remaining issues.</p><h3>Getting everything together</h3><p>Merging (approving) pull requests can be a tiresome job if you don’t have good help with valid insights. That is why several useful plugins in the Jenkins platform can tell the reviewer whether the build is good and how many tests are passing before the reviewer even has to “dive into” the code.</p><p>There are also useful integrations among the different platforms that make progress tracking much easier and give confidence that all code is reviewed.</p><h4>Jenkins plugins for test results and code coverage</h4><p>There are multiple plugins that can collect test execution results and code coverage reports.</p><p>In this picture you can see a sample Java application that uses the JUnit plugin to show test execution and the JaCoCo plugin to show code coverage:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JXJKihm3jH-QM7bRg_6cAA.png" /><figcaption>Jenkins build job overview with test cases and code coverage. The design is not really “modern” ;) but it serves the purpose</figcaption></figure><h4>Jenkins Bitbucket build notification plugin</h4><p>With this plugin, Jenkins results are pushed to Bitbucket and indicate whether a specific git commit is good or bad.</p><p>Here you can see an example of a pull request page where the build status from Jenkins is shown:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UA4fUKWrtd9l57oCIyOIdQ.png" /><figcaption>Pull request page on Bitbucket showing build status from CI system</figcaption></figure><p>If you click on that link it will show you the details of the build. Here you can see which job reported the status, with the total number of passed tests. 
You can follow the link to open the Jenkins page of the build job directly:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/615/1*ToL9w0kc_7kRnoW8c2qchA.png" /><figcaption>Build status details</figcaption></figure><p>Besides appearing on the pull request, this status is also shown on the list of active branches and commits, so with just a quick glance at the repository you can easily see whether development is on track:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UeunC9wB-5VKitZDxIORCQ.png" /><figcaption>Build status on all commits</figcaption></figure><h4>Bitbucket and JIRA integration</h4><p>Because both Bitbucket and JIRA are products of the same company (Atlassian), some form of integration is to be expected.</p><p>On Bitbucket it takes the form of linked JIRA tickets, as long as you properly reference them in the commit message (seen in the previous picture as links in commit messages). When you click on such a link you are taken to the details of that JIRA ticket.</p><p>On the JIRA side, every ticket’s details include a <strong>Development </strong>section that shows the references to that specific ticket (number of commits, branches and pull requests).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_F7mrsP81YtYEPM6ED2zHA.png" /><figcaption>Development section showing on JIRA ticket</figcaption></figure><p>This also carries over to the <strong>Releases </strong>section of JIRA, where tickets are grouped by version, giving a good overview of progress per version. There you can see the very important <strong>Warnings </strong>section, which shows tickets that are mentioned in Bitbucket but are not part of any Pull Request. 
Those tickets can pose major stability issues: if they were not part of a Pull Request, then they were probably not properly tested and/or reviewed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*exkeszxoBfyU9mf7rqtXVw.png" /><figcaption>Jira releases section showing version overview and progress</figcaption></figure><h4>Potential issues</h4><p>CI is not of much use if the code is not covered by automated tests. Tests can be very simple, but they can also be very complicated and take a lot of development time.</p><p>Also, CI must be configured to perform tests on a platform that is as close to production as possible. Those configurations can be very difficult to create and maintain.</p><h4>Conclusion</h4><p>CI systems are vital for ensuring that the code base stays healthy as new development tasks land.</p><p>A good CI system must integrate properly into the development workflow. At INFOdation we found a very good combination in JIRA, Bitbucket and Jenkins. There are lots of useful Jenkins plugins that report build results on every code change and show the status back on Bitbucket.</p><p>Be aware that you will need to spend some time devising proper testing strategies, and time will also need to be invested in proper CI configuration and maintenance.</p><p>If you overcome those obstacles, your everyday development process (and tracking) will become significantly easier and you will be able to focus on actual problem solving.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=91fd83b8d69a" width="1" height="1">]]></content:encoded>
        </item>
    </channel>
</rss>