<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Julija Alieckaja on Medium]]></title>
        <description><![CDATA[Stories by Julija Alieckaja on Medium]]></description>
        <link>https://medium.com/@alieckaja?source=rss-ea9bcf695a7d------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*fRBwd9osavAO7WJP.</url>
            <title>Stories by Julija Alieckaja on Medium</title>
            <link>https://medium.com/@alieckaja?source=rss-ea9bcf695a7d------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 11 May 2026 09:42:57 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@alieckaja/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Unleashing the Power of Fibers for Background Jobs]]></title>
            <link>https://medium.com/@alieckaja/unleashing-the-power-of-fibers-for-background-jobs-8a22e3a38cd1?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/8a22e3a38cd1</guid>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[async]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Mon, 23 Jan 2023 10:23:41 GMT</pubDate>
            <atom:updated>2023-01-24T23:54:53.102Z</atom:updated>
            <content:encoded><![CDATA[<h3>How to use Ruby Fibers for Background Jobs</h3><p>Ruby 3 has introduced a game-changing feature for concurrent programming with the release of Fiber::SchedulerInterface. This powerful tool allows developers to manage fibers, making it easier to handle context switching in I/O-bound tasks. In this article, we will dive into the world of fibers and the socketry/async stack, exploring the capabilities they offer, using a background job processor as the working example.</p><p>It’s going to be a rather big piece, so here is a quick plan of what’s going to be covered:</p><ul><li>What’s Fiber?</li><li>What’s so special about Fiber::SchedulerInterface?</li><li>Different kinds of event selectors</li><li>Socketry overview</li><li>Jiggler. Fiber-based background job processor</li><li>Jiggler core components</li><li>Pre-emptive fiber scheduling</li><li>Benchmarks</li><li>Limitations</li><li>Future</li></ul><h3>What is Fiber?</h3><p>Fibers are primitives for implementing lightweight cooperative concurrency. Fibers exist within a thread, and only one fiber per thread can run at a time. They use very little memory, so it is possible to create thousands of fibers without a huge memory footprint. Another important aspect is that threads are managed by the operating system’s scheduler, while fibers are scheduled and managed by the developer.</p><p>First introduced in Ruby 1.9 (December 2007), fibers are not a new concept. Despite being lightweight and providing more control, they didn’t gain wide popularity in the community, remaining a rather rarely used feature.
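</p><p>To make the manual approach concrete, here is a minimal fiber example (a standalone sketch, not taken from any of the libraries mentioned): the caller drives the fiber with #resume, and the fiber hands control back with Fiber.yield.</p>

```ruby
# Each #resume runs the fiber until its next Fiber.yield;
# the final #resume returns the block's return value.
fiber = Fiber.new do
  Fiber.yield "step 1"
  Fiber.yield "step 2"
  "done"
end

results = []
results << fiber.resume
results << fiber.resume
results << fiber.resume
results # => ["step 1", "step 2", "done"]
```

<p>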
Perhaps this is because Ruby didn’t provide a simple out-of-the-box scheduling interface, or maybe because the community is heavily Rails-centric, so it’s just easier for people to stick to threads as a more commonly used construct.</p><p>However, there are several fiber-based libraries which were pioneers in this area and gained some popularity:</p><ul><li>Celluloid <a href="https://github.com/celluloid/celluloid">https://github.com/celluloid/celluloid</a> — not maintained since 2016, but back in the day it was one of the main Ruby gems providing a handy API for writing concurrent Ruby code.</li><li>Em-synchrony <a href="https://github.com/igrigorik/em-synchrony">https://github.com/igrigorik/em-synchrony</a> — a fibered implementation of EventMachine; the latest commits happened about 5 years ago.</li></ul><p>When using fibers in older Ruby versions (from 1.9 until 3), developers had to keep track of all the fibers and manually manage them by calling the Fiber.yield, Fiber#resume and Fiber#transfer methods. This process can be complex, verbose and error-prone.</p><p>Nowadays, with Ruby 3, fiber management has become much simpler.</p><h3>What’s so special about Fiber Scheduler?</h3><p>Fiber::SchedulerInterface provides a set of hooks invoked when a blocking operation begins/ends.</p><p>Basically, standard Ruby I/O methods such as IO#wait_readable, IO#wait_writable, IO#read, IO#write, Kernel.sleep, etc., have been patched to yield to the scheduler if it’s defined in the context of the current thread. The scheduler, in turn, passes execution control to other ready fibers.</p><p>This automatically makes all standard Ruby calls scheduler-friendly. However, there are still a lot of Ruby gems using C-extensions, which may perform I/O on their own, bypassing Ruby’s native methods. For example, different kinds of DB adapters.
Fortunately, it’s relatively easy to implement a call to the scheduler from a C-extension, so fiber support across such gems is gradually growing. It’s up to developers to check whether a given gem with a built-in C-extension supports the scheduler, and thus whether it makes sense to use it with fibers.</p><h3>Different kinds of event selectors</h3><p>An event selector is an operating system mechanism that allows a program to monitor multiple file descriptor sources for events, such as input or output operations. In other words, it is a mechanism that, among other things, can notify a program whenever a specific non-blocking operation is complete.</p><p>For example, if a fiber was blocked because no data was available on the socket it reads from, it should be resumed once the data arrives. The scheduler is notified that the non-blocking operation is complete using one of the following strategies underneath:</p><ul><li>poll()/epoll() is the default mechanism for watching sockets to see if new data is available on almost all Linux systems.</li><li>io_uring is a Linux kernel library that provides a high-performance interface for asynchronous I/O operations. It was introduced in Linux kernel version 5.1 and aims to address some of the limitations and scalability issues of the existing asynchronous I/O interface.</li><li>kqueue() is used on OSX/FreeBSD.</li><li>IO Completion Ports are used on Windows.</li></ul><p>Most production systems will most likely use epoll(), as it’s the most common approach on Linux systems; however, it might be worth giving io_uring a try if it’s supported by the specific Fiber.scheduler and available in the OS.</p><p>It is worth mentioning that before the arrival of the Fiber scheduler, the main library which took care of detecting the start/end of I/O events was Nio4r <a href="https://github.com/socketry/nio4r">https://github.com/socketry/nio4r</a>.
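</p><p>The Ruby-level counterpart of these selector mechanisms is IO.select, which blocks until one of the watched descriptors is ready. A simplified illustration (a standalone sketch; real schedulers rely on epoll/kqueue/io_uring for scalability):</p>

```ruby
# Watch a pipe's read end until data becomes available.
r, w = IO.pipe
w.write("ready")
w.close

readable, _writable, _errored = IO.select([r], nil, nil, 1)
data = readable.first.read
data # => "ready"
```

<p>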
Nio4r is still used in Action Cable, Puma, Async v1 and a lot of other libraries dealing with I/O.</p><h3>Socketry overview</h3><p>As of now, Socketry <a href="https://github.com/socketry">https://github.com/socketry</a> is the most popular collection of asynchronous libraries in the Ruby world.</p><p>Async <a href="https://github.com/socketry/async">https://github.com/socketry/async</a> is the framework providing handy interfaces to make fiber-driven development even simpler. It has a built-in Fiber.scheduler with epoll/kqueue and io_uring support. It encapsulates all scheduler-related calls, so users can easily spin up asynchronous tasks; see the example below:</p><pre>Async do<br>  resources.each do |resource|<br>    Async do<br>      result = api_client.get(resource)<br>      DB.connection.save(result)<br>    rescue =&gt; err<br>      logger.error(err)<br>    end<br>  end<br>end</pre><p>That’s it, as simple as that.</p><h3>Jiggler. Background job processor implementation</h3><p>Let’s re-evaluate where fibers can shine the most.</p><p>Usually, systems manage all I/O waiting by allocating a separate OS thread for every request or worker. This approach is effective to some extent, but an OS thread is relatively expensive to create, and the context switches performed by the OS also don’t come for free. This leads to a lot of overhead. With fibers, it is possible to utilize non-blocking I/O to minimize this overhead.</p><p>When dealing with CPU-heavy tasks, breaking down the workload into smaller parts through the use of fibers will not bring benefits if the system lacks the resources to handle the workload. However, many Ruby processes spend a significant amount of time waiting for I/O operations, such as awaiting responses from API or database calls, or writing data to a file.
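</p><p>As a small illustration of how lightweight fibers are compared to OS threads, spawning tens of thousands of them in a single thread is unremarkable (a standalone sketch, not from the article):</p>

```ruby
# Create 10,000 fibers and run each one; fibers start with a small
# stack, so this is cheap in both time and memory.
fibers = Array.new(10_000) { |i| Fiber.new { i * 2 } }
total = fibers.sum(&:resume)
total # => 99990000
```

<p>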
Such I/O-heavy workloads are the perfect use case for fibers.</p><p>Given that socketry/async provides all the APIs required to build a job processor, such projects simply had to start appearing.</p><blockquote>It was very, very hard, but also it was easy © davie504</blockquote><p>Jiggler <a href="https://github.com/tuwukee/jiggler">https://github.com/tuwukee/jiggler</a></p><p>Jiggler is inspired by Sidekiq, and although it doesn’t support as many features, it implements a very similar paradigm, so it’s fair to compare the two in terms of performance, to clearly see the benefits fibers and socketry/async can provide compared with a purely thread-based approach.</p><h3>Jiggler core components</h3><p>Conceptually, Jiggler consists of two parts: the client and the server.<br>The client is responsible for pushing jobs into Redis and allows reading statistics, while the server reads jobs from Redis, processes them, and writes statistics.</p><p>The server consists of 3 parts: Manager, Poller, Monitor.</p><ul><li>Manager spins up and handles workers.</li><li>Poller periodically fetches data for retries and scheduled jobs.</li><li>Monitor periodically loads stats data into Redis.</li></ul><h3>Pre-emptive fiber scheduling</h3><p>One important catch here is that we want the Poller and Monitor to be guaranteed to run at their scheduled time, even if the workers perform some CPU-heavy tasks. We want up-to-date stats and stable polling. With threads this isn’t a concern, as the OS periodically switches threads without the developer’s intervention, while the Fiber.scheduler waits for a command to switch context.</p><p>That’s called co-operative scheduling. Once started, a task within a co-operative scheduling system will continue to run until it relinquishes control.
This is usually at its synchronisation point.</p><p>In a pre-emptive model, tasks can be forcibly suspended.</p><p>There’s a great article by Wander Hillen explaining the problem in the context of Ruby; I highly recommend reading it:<br><a href="https://www.wjwh.eu/posts/2021-02-07-ruby-preemptive-fiber.html">https://www.wjwh.eu/posts/2021-02-07-ruby-preemptive-fiber.html</a></p><p>Jiggler solves this problem by forcing the scheduler to switch between fibers in case their execution takes more time than a given threshold value. It introduces a dedicated thread which encapsulates the scheduler management. The dedicated thread adds some overhead, yet it’s compensated by the gained control over execution time.</p><pre>CONTEXT_SWITCHER_THRESHOLD = 0.5<br><br>def patch_scheduler<br>  @switcher = Thread.new(Fiber.scheduler) do |scheduler|<br>    loop do<br>      sleep(CONTEXT_SWITCHER_THRESHOLD)<br>      switch = scheduler.context_switch<br>      next if switch.nil?<br>      next if Process.clock_gettime(Process::CLOCK_MONOTONIC) - switch &lt; CONTEXT_SWITCHER_THRESHOLD<br><br>      Process.kill(&#39;URG&#39;, Process.pid)<br>    end<br>  end<br><br>  Signal.trap(&#39;URG&#39;) do<br>    next Fiber.scheduler.context_switch!(nil) unless Async::Task.current?<br>    Async::Task.current.yield<br>  end<br><br>  Fiber.scheduler.instance_eval do<br>    def context_switch<br>      @context_switch<br>    end<br><br>    def context_switch!(value = Process.clock_gettime(Process::CLOCK_MONOTONIC))<br>      @context_switch = value<br>    end<br><br>    def block(...)<br>      context_switch!(nil)<br>      super<br>    end<br><br>    def kernel_sleep(...)<br>      context_switch!(nil)<br>      super<br>    end<br><br>    def resume(fiber, *args)<br>      context_switch!<br>      super<br>    end<br>  end<br>end</pre><h3>Benchmarks</h3><p>The latest benchmarks are available on the Jiggler <a href="https://github.com/tuwukee/jiggler/blob/main/README.md">README page</a>.<br>As of now,
it beats Sidekiq in all benchmarks when it comes to executing code that is aware of Fiber scheduler hooks.</p><p>It depends on the payload, but on the given samples it usually saves 10–20% of memory with the same concurrency settings. As for speed, it shows rather good results with file I/O, but the difference is not so impressive with net/http or PG requests.</p><p>But don’t forget that when using fibers, concurrency can be set to higher values. Technically, the overhead of spawning workers is expected to be low, so it’s possible to test it with 50, 100, or even more workers against a specific payload, in case there is indeed a lot of I/O going on. The limitation is rather in the connection pool (when the workers are doing DB queries) or in the external services accepting the requests (in case of network requests).</p><h3>Limitations</h3><p>Developers should be careful when using C-extensions within fibers. socketry/async doesn’t support JRuby as of now, so MRI is the only way. There’s no support for Windows either.</p><p>When running the benchmarks on OSX (M1 processor), the results weren’t as good, either natively or in Docker. Why? I don’t know for sure, and my investigation is still ongoing (more like it stands still, but I hope to find out one day anyway).</p><p>I tried testing socketry/async directly against native Ruby threads, and against Polyphony (<a href="https://github.com/digital-fabric/polyphony">https://github.com/digital-fabric/polyphony</a> — another interesting framework for fibers), and Polyphony actually works great on OSX M1. So the problem might be in the kqueue() support implementation within the socketry/async Fiber scheduler, but I didn’t get anywhere further than that, and it might be a false lead.</p><h3>Future</h3><p>Building Jiggler is fun, and I’m learning a lot. I have a lot of plans and ideas for this lib, but not so much free time to actually work on it.
As of now, the gem is already available as a release candidate.</p><p>One idea would be to implement brpoplpush per queue in dedicated reader classes (instead of brpop for all queues in each worker), thus granting at-least-once worker execution (currently it’s at-most-once, same as the free version of Sidekiq). Another idea might be to test PG as a backend instead of Redis. Wait, what? Yes! It will add A LOT of overhead, but this approach will also grant out-of-the-box reliability. Great opportunities lie ahead of us!</p><p>Thanks for reading!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8a22e3a38cd1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mastering Rails dates operations]]></title>
            <link>https://medium.com/@alieckaja/mastering-rails-dates-operations-2033de8bd4cb?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/2033de8bd4cb</guid>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Wed, 04 Aug 2021 19:27:15 GMT</pubDate>
            <atom:updated>2021-08-06T09:54:25.160Z</atom:updated>
            <content:encoded><![CDATA[<h3>Mastering Rails time operations</h3><p>Rails ActiveSupport time methods are slow. Let’s take beginning_of_month as an example and look at the benchmarks comparing it with a pure Ruby implementation (Rails 6.1.4, Ruby 2.6.6):</p><pre>time = Time.now.utc<br>n = 1_000_000</pre><pre>Benchmark.bm do |x|<br>  x.report do<br>    n.times do<br>      time.beginning_of_month<br>    end<br>  end<br>  x.report do<br>    n.times do <br>      Time.utc(time.year, time.month)<br>    end<br>  end<br>end</pre><pre>user       system     total    real<br>5.256222   0.012179   5.268401 (  5.280565)<br>0.614967   0.004348   0.619315 (  0.620292)</pre><p>The difference in performance is quite noticeable: ActiveSupport is almost 9 times slower. The same situation occurs with other ActiveSupport methods, such as beginning_of_day, beginning_of_quarter, end_of_month, etc., which are not so easily replaceable with quick pure Ruby versions.</p><p>So, why does ActiveSupport take so much time?<br>First of all, the problem exists for both the <a href="https://github.com/rails/rails/blob/main/activesupport/lib/active_support/core_ext/date_time/calculations.rb">DateTime::Calculations</a> and <a href="https://api.rubyonrails.org/files/activesupport/lib/active_support/core_ext/date/calculations_rb.html">Date::Calculations</a> modules, though they are affected to varying degrees.<br>Internally, ActiveSupport <a href="https://github.com/rails/rails/blob/main/activesupport/lib/active_support/core_ext/date_and_time/calculations.rb">adds a lot of complexity</a>, checks multiple options, handles timezones and conversions, and thus causes performance degradation. Yet this can be avoided by relying on native Ruby solutions where it makes sense.</p><p>I’ve run into the ActiveSupport date methods performance issue while trying to speed up code iterating through a huge collection of entities with dates in their attributes.
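</p><p>For the simpler helpers, the native replacements mentioned above can look like this (a hedged sketch for UTC times only; the ActiveSupport versions additionally handle time zones and other edge cases):</p>

```ruby
require "date"

# Pure-Ruby stand-ins for two ActiveSupport helpers (UTC only).
def beginning_of_day(time)
  Time.utc(time.year, time.month, time.day)
end

def end_of_month(time)
  # Date.new(y, m, -1) gives the last day of the month.
  last_day = Date.new(time.year, time.month, -1).day
  Time.utc(time.year, time.month, last_day, 23, 59, 59)
end

t = Time.utc(2021, 8, 4, 19, 27)
beginning_of_day(t) # => 2021-08-04 00:00:00 UTC
end_of_month(t)     # => 2021-08-31 23:59:59 UTC
```

<p>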
The entities were aggregated and modified depending on the result of the beginning_of_quarter and end_of_quarter methods. And it turned out that a significant improvement could be achieved by simply using a cache. Classic.</p><p>Here’s a class wrapping the dates cache implementation. Be careful: it’ll help only when the date range is limited and the dates repeat within the given loop.</p><pre>class MemoizedDates<br>  attr_accessor :cache<br>  MDate = Struct.new(:beginning_of_quarter, :end_of_quarter)<br><br>  def initialize<br>    @cache = {}<br>  end<br><br>  # handles cache misses<br>  def find(date)<br>    cache[date] || memoize(date)<br>  end<br><br>  private<br><br>  def memoize(date)<br>    cache[date] = MDate.new(<br>      date.beginning_of_quarter,<br>      date.end_of_quarter<br>    )<br>  end<br>end</pre><pre>### sample usage</pre><pre>memoized_dates = MemoizedDates.new<br>collection.each do |entry|<br>  date = memoized_dates.find(entry.date)<br>  if date.beginning_of_quarter &gt; n<br>    # do stuff<br>  end<br>end</pre><pre>### benchmark</pre><pre>time = Time.now.utc<br>memoized_cache = MemoizedDates.new<br>n = 1_000_000</pre><pre>Benchmark.bm do |x|<br>  x.report do<br>    n.times do<br>      rand_date = time - rand(1000).days<br><br>      rand_date.beginning_of_quarter<br>      rand_date.end_of_quarter<br>    end<br>  end<br>  x.report do<br>    n.times do<br>      rand_date = time - rand(1000).days<br>      mem_date = memoized_cache.find(rand_date)<br><br>      mem_date.beginning_of_quarter<br>      mem_date.end_of_quarter<br>    end<br>  end<br>end</pre><pre>user        system    total     real<br>39.541243   0.210272  39.751515 ( 39.920949)<br>10.769937   0.065638  10.835575 ( 10.891969)</pre><p>Happy coding!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2033de8bd4cb" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How lucky I am to have a chance to work with a well written Rails app]]></title>
            <link>https://medium.com/@alieckaja/how-lucky-i-am-to-have-a-chance-to-work-with-a-well-written-rails-app-5639e5eee042?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/5639e5eee042</guid>
            <category><![CDATA[architecture]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[rails]]></category>
            <category><![CDATA[refactoring]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Sun, 08 Mar 2020 18:38:26 GMT</pubDate>
            <atom:updated>2020-03-09T20:21:06.530Z</atom:updated>
            <content:encoded><![CDATA[<p>For quite some time I was working with a small team on a Rails application. Our team wrote the app from scratch; we used a reasonable amount of patterns and best practices, but in a way that it wasn’t over-engineered. I enjoyed the structure of the app, and we had an agreement in the team on where and how we store certain service classes and structures, so it was quite easy to decide where to place new files or where to look for the code you need.</p><p>Since the team was quite small, we could easily agree on new practices, apply creative approaches, and devote time to refactoring whenever we felt the tech debt growing. It was so handy and convenient for me that I got used to the idea that this is it. This is the way people do things. This is the life of an ordinary Ruby developer.</p><p>But the good cannot last forever, and I was transferred to another project for a few months. And I must admit, it had quite a sobering effect on me.</p><p>The new application was a pretty old monolith with a huge amount of legacy code. The team consisted of more than 20 developers, and this does not include QAs, PMs, POs, BIs, etc. They are all good people with solid tech knowledge. They had a fine idea of the system as a whole. Yet when it came down to the details — nobody knew anything. I mean, nobody really knew why this or that specific decision was made. Why this service is stored in this directory, while a similar one is stored elsewhere. Why the same problem is solved by 3 different classes in different places. No naming conventions. A chaotic folder structure: some services are stored in the /lib dir, others are defined as nested classes, some are placed together with models, and others are in the /services dir. Crazy after_save callbacks with external API calls right in the AR models. The Gemfile included dozens of gems, some of which were not even used in the app.
Business logic smeared evenly across models, controllers, views, services, and helpers, with a significant share interspersed in jQuery-based scripts. Lots of heavy DB requests on each page with extra joins and unnecessary data load. Multiple package managers and significant frontend build times. And much more. Or, in other words, everyone knew the answer to any of these, and the answer to any question was: it’s like this because it’s legacy.</p><p>But that’s not so bad; after all, these are classic problems of many legacy apps. Everyone agreed that the technical debt was huge and needed to be addressed at some point. Yet the main C-level technical guys were mostly focused on new contracts and implementing new parts of the system. The usual day-to-day tech debt was placed under the responsibility of the whole team. The team had a few team leads, so they were supposed to deal with it somehow. Yet it didn’t help, and there was nearly no work in this direction at all (well, except some minor cosmetic improvements, but that doesn’t count). One of the main reasons we didn’t do anything: first, we’d have to agree on what to do, where to start, and which approach we’d like to stick to in order to standardize our practices. And in such a big team it’s nearly impossible to agree on anything. There was always some kind of pushback to any idea, and even when a suggestion was minor and nobody minded, there was no active support (only ActiveSupport was there, hehe) and all proposals inevitably drowned at the bottom of the JIRA backlog.</p><p>In any case, it is a very rewarding experience to plunge into for a short while. I made a lot of conclusions for myself. And one of them is that it’s much easier to prevent the system from chaotically crawling into random and sometimes mind-blowing code constructions than to try to reorganize it at a later stage.
And the other is that some developers could work their whole lives in such fusion-style chaotic systems, and never even see that there is another way to work on a production application. I’ve seen the “another way”, so I’m a lucky one (yay).</p><p>With this being said, now I’d like to share how I personally like to organize code in a Rails app.</p><p>Let’s start with the classic confrontation of the /app vs /lib folder.</p><p>The /app folder contains all the stuff that belongs to the business logic. We’ll get back to it later.</p><p>The /lib folder, in theory, should contain the code that could later be packaged as gems. But in the real world I usually do not know in advance which part of the app’s code is going to be good enough to be decoupled into a gem. So this problem is solved by the fact that I store nothing in the /lib folder. Okay, okay, <em>almost</em> nothing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/420/1*ma2gvcAs60HzUNs24nBCDg.png" /></figure><p>So, in my case, the /lib folder contains rake tasks and another subfolder called developer. This one is rather specific, and not all apps need it. It stores certain media and text files which I use in the dev env to conveniently test different kinds of processing.</p><p>All the other folders contain configuration, database schema, migrations, logs, tests, basically all the stuff that allows the application to run but not the application itself.</p><p>Now let’s take a closer look at the /app folder itself.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/516/1*ytFCLGW-iiGk5CJ_CWwN2Q.png" /></figure><p>Lately, a lot of developers tend to call any class which is not an AR model a service and simply throw it into the /services dir, no matter what the class actually does. I’d rather not do that. In my case, services are only those classes that make API calls.
For example, S3Service, IBMWatsonService, or OpenWeatherService; you get the idea.</p><p>Most of the other helper classes or modules can usually be classified as some kind of design pattern. The most common patterns I use are decorators, forms, query objects, commands, and strategies. They are stored in dedicated folders and can be re-used between, for example, controllers and Sidekiq jobs.</p><p>Btw, speaking of Sidekiq jobs: in my case they are rather sophisticated, and the most complex processing logic is extracted into the /processes folder and stored in classes like CaptionExtractionProcess or SyncRecordsProcess, so it can be easily tested and re-used between different jobs.</p><p>All that implies that there’s almost no “fat” in models and controllers.</p><p>And yeah, it’s not completely fair to compare these 2 applications and their processes. The first one is part of a service-oriented system, originally designed so that small dedicated teams can work independently. And the other one is a monolith with a huge codebase, combined with a big team without well-established processes (on their way to applying agile practices). With that being said, I personally came to the conclusion that huge monoliths are evil. Moderate-size monoliths are ok though, but that’s a topic for another story.</p><p>A few samples to make it a bit less abstract.
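</p><p>Since embedded gists don’t always render in the RSS feed, here is an illustrative query object of the sort described (class and attribute names are hypothetical; with ActiveRecord the body would use where-scopes rather than select):</p>

```ruby
# A query object with a single public #call that narrows a relation.
# Written framework-free so it works on any Enumerable of posts.
PublishedSinceQuery = Struct.new(:relation) do
  def call(since:)
    relation.select { |post| post.published && post.created_at >= since }
  end
end

Post = Struct.new(:published, :created_at)
posts = [Post.new(true, 10), Post.new(false, 10), Post.new(true, 1)]
PublishedSinceQuery.new(posts).call(since: 5) # => [Post.new(true, 10)]
```

<p>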
<br>Here’s a query object:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a54f166d93418a9a58e3149797bdb73d/href">https://medium.com/media/a54f166d93418a9a58e3149797bdb73d/href</a></iframe><p>This is a simplified strategy code sample together with the InputProcessor class, which selects the strategy depending on the type of input.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ac4b3f7e168bbe68a0fbf31b8483bebe/href">https://medium.com/media/ac4b3f7e168bbe68a0fbf31b8483bebe/href</a></iframe><p>All of the above is my personal opinion, and it’s not the only “right” way to organize the code. If you’re happy with your style and it works for you, then cool ;) Feel free to share your opinion in the comments or reach me on Twitter. <br>Cheers!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5639e5eee042" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to start your Rails app in a Docker container]]></title>
            <link>https://medium.com/@alieckaja/how-to-start-your-rails-app-in-a-docker-container-9f9ce29ff6d6?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/9f9ce29ff6d6</guid>
            <category><![CDATA[rails]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Wed, 30 Oct 2019 21:14:02 GMT</pubDate>
            <atom:updated>2019-10-30T21:36:42.693Z</atom:updated>
            <content:encoded><![CDATA[<p>This is a short guide on how to set up and run your Rails app in a Docker container.</p><p>First of all, the official Docker documentation is great <a href="https://docs.docker.com/">https://docs.docker.com/</a> and even provides a step-by-step manual for Rails <a href="https://docs.docker.com/compose/rails/">https://docs.docker.com/compose/rails/</a>. You do not actually need this article in case you’ve read it all, but modern engineers are short on time, so here goes a quick and slightly simplified Rails instruction for a Puma + PG + Webpack combination.</p><p>Let’s say your Gemfile looks more or less like this.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/04c22f0079079ddaaea1219bc5362024/href">https://medium.com/media/04c22f0079079ddaaea1219bc5362024/href</a></iframe><p>After you’ve installed Docker you’ll need to add only 2 files to your app’s root dir: Dockerfile and docker-compose.yml.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a7565f79ee62d19cc488a43c5665e543/href">https://medium.com/media/a7565f79ee62d19cc488a43c5665e543/href</a></iframe><p>A quick overview of what’s going on in this file.<br>Firstly, we’re using a prebuilt Docker image with the desired Ruby version.</p><pre>FROM ruby:2.6.5-slim-stretch</pre><p>Then all packages required for PostgreSQL, NodeJS, Yarn, and for a few gems with C-extensions (like nokogiri) are installed.</p><pre>RUN apt-get update &amp;&amp; apt-get install -y \<br>  curl \<br>  build-essential \<br>  libpq-dev &amp;&amp;\<br>  curl -sL <a href="https://deb.nodesource.com/setup_10.x">https://deb.nodesource.com/setup_10.x</a> | bash - &amp;&amp; \<br>  curl -sS <a href="https://dl.yarnpkg.com/debian/pubkey.gpg">https://dl.yarnpkg.com/debian/pubkey.gpg</a> | apt-key add - &amp;&amp; \<br>  echo &quot;deb <a href="https://dl.yarnpkg.com/debian/">https://dl.yarnpkg.com/debian/</a> stable main&quot; | tee /etc/apt/sources.list.d/yarn.list &amp;&amp; \<br>  apt-get update &amp;&amp; apt-get install -y nodejs yarn</pre><p>The rest of the file is pretty obvious, so we can move directly to docker-compose.yml.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0242dc4d856b033b467646d85a2e088b/href">https://medium.com/media/0242dc4d856b033b467646d85a2e088b/href</a></iframe><p>We’re all set now. Here are a few useful commands to run in the console.</p><pre>docker-compose build<br>docker-compose run web bundle install<br>docker-compose run web yarn install<br>docker-compose run web rake db:create<br>docker-compose run web rake db:migrate<br>docker-compose up<br>docker-compose down</pre><p>Yep, it’s that easy.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9f9ce29ff6d6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Web Performance Optimizations]]></title>
            <link>https://medium.com/@alieckaja/web-performance-optimizations-316789b453c1?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/316789b453c1</guid>
            <category><![CDATA[distributed-systems]]></category>
            <category><![CDATA[scaling]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Wed, 10 Apr 2019 19:35:13 GMT</pubDate>
            <atom:updated>2019-04-10T19:54:36.777Z</atom:updated>
<content:encoded><![CDATA[<p>A lot of programmers are familiar with the famous quote of Donald Knuth:</p><blockquote>“Premature optimization is the root of all evil.” — Donald Knuth</blockquote><p>However, as usually happens with famous quotes, in the full context the meaning slightly changes.</p><blockquote>“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” — Donald Knuth</blockquote><p>The real question is how to detect in advance which optimizations will be critical for a specific program. Depending on endless circumstances and conditions it can be absolutely anything.</p><p>Still, we can try to distinguish the most common pitfalls, at least for web applications. There are already a lot of fine sources with recommendations for frontend performance (e.g. <a href="https://github.com/thedaviddias/Front-End-Performance-Checklist">this checklist</a>) including such bits of advice as minified HTML, CSS, and JS, lazy loading, image optimizations, non-blocking JS calls, etc, which are easy to implement and guarantee a performance boost. In this article, I’d like to try to put together a similar checklist, but for backend performance. The backend is trickier in terms of general recommendations, so each piece of advice should be carefully checked in the context of your specific application before blindly applying all the practices.</p><p>I’ll group the advice into 4 categories — Data, CPU &amp; IO, Metrics, and Scaling. 
Entire books have been written solely about database performance tuning, so in this article we’ll be focusing on how the app interacts with data, which, let’s assume, is already stored in a perfectly tuned database.</p><h3>Data</h3><h4>Store the required data in memory</h4><p>IO operations increase latency, so storing the required and/or commonly used data in memory boosts performance. To minimize the toll of crashes, use persistent in-memory stores so the data can be at least partially restored on process restart.<br>But keep in mind that sometimes cache invalidation may cost a lot, so use it wisely.</p><h4>Construct a colocated data model</h4><p>Ideally, the related data should be located on the same host (and be stored in memory). All the data necessary to service a specific request should be available locally without extra lookups.</p><h4>Use data types which can be stored sequentially</h4><p>With massive cache usage, memory access can become a bottleneck. Prefer flat arrays instead of linked lists when possible.</p><h4>Fewer DB requests</h4><p>When you must retrieve data from the database, it’s better if it can be done within a single query.</p><h4>Fewer SQL JOINs</h4><p>Multiple JOINs can dramatically degrade query performance. When a query requires 3 or more JOINs, it’s a good reason to think about denormalizing the tables.</p><h4>Don’t store more data than you really need</h4><p>Unnecessary data slows down performance. That doesn’t mean blindly truncating all the data; just be smarter about deciding what should be stored and where. For instance, session data can be stored in cookies instead of a table. 
Btw, cookies shouldn’t be too heavy either.</p><h4>RAM</h4><p>Have as much RAM as you need (within reasonable limits of hardware, OS, and funds).</p><h3>CPU &amp; IO</h3><h4>The number of threads should be close to the number of cores</h4><p>In a perfect case, threads should be running in parallel, each pinned to its own core, with a minimal number of context switches.</p><h4>Batch writes</h4><p>Instead of doing single writes, group your data and do batch writes wherever possible.</p><h4>Use async and non-blocking operations</h4><p>Locks are overhead. Each time a lock is used, the app has to go down through the OS stack. Prefer async operations wherever it makes sense and does not complicate the codebase too much.</p><h4>Parallelize operations</h4><p>In order to reduce the overall processing time.</p><h4>Use thread pools with a fixed number of workers</h4><p>This implies that there’s a queue to pull work from, which substantially increases throughput. A thread per connection/worker usually leads to having more threads than cores, meaning your system tries to do too much at once.</p><h4>Compress the data on disk</h4><p>IO operations are usually time-consuming. Compressing data can cost some CPU cycles but will help to effectively increase IO throughput.</p><h4>Compress the data being sent over the network</h4><p>It decreases transfer time and increases throughput. The CPU time cost of the compression and decompression is usually trivial. The overall efficiency of a system using compressed network transmissions is almost always higher than sending data uncompressed.</p><h4>Keepalive connections</h4><p>Minimizes the costs of connection open/close operations. Especially valuable when requests are frequent.</p><h4>Data streaming</h4><p>You can save some CPU and memory if you stream the data to external services directly instead of combining a full file in advance (e.g. 
first writing data to disk and then reading it back and posting it to the network).</p><h4>Operating system limitations</h4><p>On Linux, if you have more than approximately 1000 files per directory then performance will start to degrade. You can split them up, and store the files in nested directories. Popular databases (MySQL, PostgreSQL, etc) store tables in files, so too many tables can affect performance from an OS perspective as well.</p><h3>Metrics</h3><h4>Little’s law</h4><p>Using this law we can determine how many app instances we need to cover the application load. <a href="https://en.wikipedia.org/wiki/Little%27s_law">A theorem by John Little</a> states that:</p><blockquote>L = <em>λ</em> * W</blockquote><p>In the context of a web application, L is the number of app instances (e.g. threads/processes with an app copy), <em>λ</em> is the average request rate (e.g. 10 requests per second), W is the average response time (e.g. 0.3 seconds).</p><p>10 * 0.3 = 3 (3 is the number of app instances we need)</p><p>It’s not the most accurate prediction, but at least it’s an easy way to approximately estimate the required number of app instances for given load requirements. Using the same formula we can calculate the theoretical maximum throughput. It also helps to realize that the page load time is quite important when it comes to surviving traffic peaks. <br>Keep in mind that in real life these are not isolated units, and with increasing load on the database, cache, and network, the numbers won’t change strictly proportionally. In other words, if the DB is the bottleneck, then increasing the number of app instances won’t increase throughput.</p><h4>Know your requirements</h4><p>You should know in advance the expected traffic on your website and plan the app infrastructure accordingly.</p><h4>Use monitoring tools</h4><p>Automatically detect spikes in CPU and RAM. During the application’s lifespan pages tend to grow in size and their load time tends to degrade. 
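</p><p>As an aside, Little’s law from the section above can be wrapped in a tiny helper (a sketch; the method name is mine, not from the article):</p>

```ruby
# Little's law: L = lambda * W
# rate:          average request rate (requests per second)
# response_time: average response time (seconds)
# Returns the approximate number of app instances needed.
def required_instances(rate:, response_time:)
  (rate * response_time).ceil
end

required_instances(rate: 10, response_time: 0.3)  # => 3
required_instances(rate: 50, response_time: 0.25) # => 13
```

<p>As noted above, this ignores shared bottlenecks such as the database, so treat the result as a rough lower bound rather than a capacity plan.</p><p>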
It’s a good practice to measure this, set boundary values (e.g. the max page load time is 200ms and the max page size is 100KB, except for the data from CDN), and take action when it starts to get out of control.</p><h4>Automated testing is essential</h4><p>Use integration and unit tests for daily agile development. Use performance testing to ensure your application will perform well under the expected workload.</p><h4>Aggregate your logs</h4><p>Aggregate and store logs in a central location for easy access. Keeping the logs explicit but not extremely verbose is a separate art form.</p><h4>Timeouts</h4><p>Put timeouts on all out-of-process calls and pick a default timeout for everything.</p><h3>Scaling</h3><h4>Use CDNs</h4><p>Guarantees faster load times for users, can be quickly scaled in case of traffic spikes.</p><h4>Prefer eventual consistency if possible</h4><p>Eventually consistent data storage systems use asynchronous processes to update remote replicas. If BASE (Basically Available Soft-State Eventually Consistent) is sufficient for your data, then you can easily achieve scalability and availability from a distributed data storage system.</p><h4>Autoscaling</h4><p>Set up auto-scaling based on the data from your monitoring tools. DDoS attacks may cause extra scale-up operations, so set smart rules to verify whether a scale-up is really necessary.</p><h4>Service discovery</h4><p>A central server (or servers) maintains a global view of addresses, and clients connect to it to update and retrieve addresses. It’s a must-have for a dynamic system with autoscaling in place anyway.</p><h4>Containers</h4><p>Containers make it easier to manage deployments and service discovery.</p><h4>Decentralize services</h4><p>Embrace self-service wherever possible, allowing services to be independently deployable and backward compatible. 
Prefer choreography over orchestration with smart endpoints to ensure that you’re keeping things cohesive with associated logic and data within service boundaries.</p><h4>Instead of a conclusion</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/924/1*r0rJu1Z0vXyYgf8FU67vDA.png" /></figure><h3>Resources</h3><p><a href="https://twitter.com/realdonaldknuth">Donald Knuth</a> book “Computer Programming as an Art”<br><a href="https://github.com/futurice/backend-best-practices">https://github.com/futurice/backend-best-practices</a><br><a href="http://khaidoan.wikidot.com/performance-tuning-backend">http://khaidoan.wikidot.com/performance-tuning-backend</a><br><a href="https://github.com/binhnguyennus/awesome-scalability">https://github.com/binhnguyennus/awesome-scalability</a><br><a href="https://twitter.com/piotr_murach">Piotr Murach</a> talk “It is correct, but is it fast?”</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=316789b453c1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SOLID Rails: A Phantom Pain]]></title>
            <link>https://medium.com/@alieckaja/solid-rails-a-phantom-pain-7ba087f724f6?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/7ba087f724f6</guid>
            <category><![CDATA[object-oriented]]></category>
            <category><![CDATA[object-oriented-design]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Fri, 30 Nov 2018 07:07:22 GMT</pubDate>
            <atom:updated>2018-12-03T20:55:39.677Z</atom:updated>
<content:encoded><![CDATA[<p>You have probably heard of object-oriented design principles. Yes, the ones defined by the SOLID acronym created by Bob Martin and Michael Feathers. SOLID aims to help engineers write easily maintainable code. Although those principles have long proven their effectiveness, sometimes it’s hard to follow them. Or even impossible. Especially if you’re developing a Rails application.</p><p>It’s complicated by the fact that engineers usually interpret SOLID principles in a slightly different way, and moreover, each of us (me included) has their own ideas of the cases where and when the best object-oriented design practices do not make sense to be applied. After all, the principles aren’t strict rules, they are simply guidelines.</p><p>Not so long ago I faced this complexity myself, trying to decide which design solution is the most convenient. I was attending a job interview and I was asked to refactor a piece of code similar to this one:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/845e98b431e97c923805fa6741fa13db/href">https://medium.com/media/845e98b431e97c923805fa6741fa13db/href</a></iframe><p>My very first idea was to implement a decorator class and to move all the logic related to the credit there. In this way, we could decorate the class only in those places where this business logic is required, achieving loose coupling, skinny models, etc, etc.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ad84bb6ee85a9dbf28dbada6ddb2371b/href">https://medium.com/media/ad84bb6ee85a9dbf28dbada6ddb2371b/href</a></iframe><p>But although this was accepted as a possible option, the interviewer pointed out that the usage of the decorated class requires too much code on the “client-side”. 
And suggested the following solution.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/81d3fe67738c7fd8c643a47911fbbabd/href">https://medium.com/media/81d3fe67738c7fd8c643a47911fbbabd/href</a></iframe><p>The “client-side” usage appears to be more elegant. And this led us to quite an interesting conversation.</p><p>Int — <em>With the CreditHandler implemented as above we can</em> <em>treat the credit attribute as an object and benefit from the OOP perspective.<br></em>Me — <em>This solution does not benefit from the OOP perspective. It breaks SOLID principles, namely Dependency Inversion. Depend on abstractions, not on a specific implementation. Dependencies need to be passed either through the constructor or through a property; we shouldn’t hard-code classes into each other.<br></em>Int<em> — Well, you can go for dependency injection. And implement it as</em></p><pre>Agency.new(credit_handler: CreditHandler) <br>def credit; @credit ||= credit_handler.new(self); end</pre><p>Int — <em>It does not change the essence. Looks like overengineering and premature optimization to me. The originally proposed implementation indeed breaks SOLID, but those 5 are design principles; it does not break the OOP rules themselves (4 of them). In my opinion, following SOLID means not applying the practices immediately, but applying them when necessary; that is, as soon as it becomes necessary to use different flows for the credit attribute in the example.</em></p><p>And it made me think. How does one decide what is overengineering and what is a reasonable solution? Should engineers follow the best practices from the beginning? Or is that an overcomplication, and is it best to apply only a limited set of recommendations? How do I make those decisions, and how often do I follow the best practices? How often don’t I? 
And, gradually, it led me to the realization that I, as a Ruby on Rails developer, break the SOLID recommendations every single day! Well, at least some of them. And that’s while trying hard to follow the best practices whenever I see an opportunity. If you’re a Ruby developer then most likely you break the guidelines on a daily basis too.</p><p>Let’s list the SOLID principles to take a closer look at them before we try to understand when and how we violate them:</p><ul><li>S — Single responsibility principle (SRP). <em>A class should have only one reason to change.</em></li><li>O — Open/closed principle (OCP). <em>Software entities should be open for extension, but closed for modification.</em></li><li>L — Liskov substitution principle (LSP). <em>If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program.</em></li><li>I — Interface segregation principle (ISP). <em>No client should be forced to depend on methods it does not use.</em></li><li>D — Dependency inversion principle (DIP). <em>High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions.</em></li></ul><h3>SRP</h3><p>That principle is often misinterpreted.<br>Almost every Ruby on Rails developer at least once said something like: “Ah, single responsibility principle? Yeah, ActiveRecord breaks this one, it has way too many responsibilities”. Active Record indeed is an example of the God-object anti-pattern, it knows too much and does too much. But it has nothing to do with SRP from SOLID.<br>The problem lies in the interpretation of the word “<em>reason</em>” in “<em>A class should have only one reason to change.</em>”. Developers tend to think that each meaningful function of an object is some kind of a “<em>reason</em>”, while SRP itself refers to the business layer of things. 
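</p><p>To make this concrete, here is a small hypothetical sketch (the names are mine, not from the article’s gists): when each business rule lives in its own class, each class keeps exactly one business reason to change.</p>

```ruby
# Hypothetical sketch: one class per business "reason to change".
class LawyerBenefits
  def calculate(base_salary)
    base_salary * 30 / 100 # lawyer-specific rule changes touch only this class
  end
end

class ManagerBenefits
  def calculate(base_salary)
    base_salary * 20 / 100 # manager-specific rule changes touch only this class
  end
end

BENEFITS = { lawyer: LawyerBenefits.new, manager: ManagerBenefits.new }.freeze

def benefits_for(role, base_salary)
  BENEFITS.fetch(role).calculate(base_salary)
end

benefits_for(:lawyer, 1000) # => 300
```

<p>A change to the lawyer rule now cannot accidentally break the manager calculation.</p><p>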
<br>ActiveRecord is responsible for validations, database adapters, caching, combining SQL queries, and many other things, but from a business logic perspective, ActiveRecord’s User model is responsible only for storing users in a database correctly.<br>Let’s take a look at the next example to make it a bit clearer.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d30a9fdf5461d6433872019014ec803d/href">https://medium.com/media/d30a9fdf5461d6433872019014ec803d/href</a></iframe><p>There are several types of employees and several types of payments. From the business point of view, the private method calculate has to handle too much — it calculates salaries and benefits for lawyers, managers, and accountants. <br>Let’s say at some point the benefit calculation logic for lawyers specifically has to change, while the rest of the calculations should stay the same. Can you imagine how much confusion and how many errors this kind of business requirement can potentially bring? Ouch.<br>In short, in most cases the business domain is what stands behind an SRP violation.</p><h3>OCP</h3><p>Ruby itself has a very vague concept of “closed”. And while everything is open for extension, everything is open for modification as well. <br>I used to think it’s not my concern as long as I do not directly monkey-patch other classes. But let’s take a closer look at the next example.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8a8590ebbdf98708ec4b652e3a2e09b9/href">https://medium.com/media/8a8590ebbdf98708ec4b652e3a2e09b9/href</a></iframe><p>In the example, the actual result is that the classes in the namespace are being replaced by Rspec’s mocks and by this — modified. I’m a happy user of Rspec and it simplifies my life dramatically. 
But the fact is each time I’m using its mocks I’m modifying the upper class in the hierarchy by the lower one, which is supposed to depend on the upper one and is supposed only to extend it without any modifications.<br>I’m not saying we must stop using Rspec, the gem is absolutely great and perfectly fits testing needs, I just need to admit that it violates the OCP in SOLID.</p><h3>LSP</h3><p>LSP aims to ensure that inheritance is used correctly. This principle stands out from the others within this article, as there is nothing special about it in the Ruby world. No common misuses, no misconceptions; it seems at least LSP is usually respected among Rubyists. You can read more about LSP in this article, I have nothing to add.</p><h3>ISP</h3><p>The ISP states that a client should not be forced to depend on methods that it does not use. Interface segregation is considered to be a problem of a programming language, not of an architecture. Ruby, together with all other dynamically typed languages, cannot violate ISP in any way.</p><h3>DIP</h3><p>This is the one which started this article. High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions. That’s the principle which is violated the most in the Ruby on Rails community. And also the Sinatra community. And well, I think in many other Ruby communities independently of the framework they use, web-centered or not.<br>Let’s take a look at the next example.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b93a8719b034d86db3a170120925fe1d/href">https://medium.com/media/b93a8719b034d86db3a170120925fe1d/href</a></iframe><p>UsersController, a high-level class, depends on multiple low-level classes — User, UserCustomersPortalsQuery, AuthService.</p><p>The bond between UsersController and the User model is not such a bad thing though. 
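</p><p>As a hedged sketch of the alternative (all names here are hypothetical, not taken from the gists), the AuthService kind of dependency can be injected through the constructor:</p>

```ruby
# Hypothetical sketch of constructor-based dependency injection.
class AuthService
  def self.authenticate!(token)
    token == "secret" # stand-in for real authentication logic
  end
end

class UsersController
  # The default keeps everyday usage convenient; tests or an IoC container
  # can pass a different collaborator without touching this class.
  def initialize(auth_service: AuthService)
    @auth_service = auth_service
  end

  def create(params)
    @auth_service.authenticate!(params[:token])
  end
end

UsersController.new.create(token: "secret") # => true
```

<p>With this shape, substituting a stub in tests is a one-keyword change rather than class-level mocking.</p><p>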
The abuse of Dependency Injection is considered to be an anti-pattern itself. Some classes are meant to be coupled, and in most cases UsersController does not make sense without the User model. But we can all agree that the coupling of the controller with the AuthService does not feel so right.</p><p>In the enterprise world of .NET, this problem is usually solved by applying the Inversion of Control and Dependency Injection patterns. <br>IoC implies that within your application there is an explicit request entry point, where you can explicitly call a constructor for a specific controller and explicitly pass all the classes it needs through Dependency Injection.</p><p>As a benefit, developers can easily substitute dependencies with other classes in the IoC. And by this, for example, easily replace the real classes with mocks in the test env.</p><p>It’s quite fun, as in the Ruby world we usually do not substitute real classes with mocks via DI. We’re already using Rspec and mocking the actual classes by the violation of OCP.</p><p>Just to help get a very basic idea of what IoC looks like, here’s a bit of ASP.NET MVC code. The example is quick and simple, and hopefully can be understood by people with zero experience in .NET (like me).</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/277ee13c111270c2c93662343b5be1d5/href">https://medium.com/media/277ee13c111270c2c93662343b5be1d5/href</a></iframe><p>Controllers are not the root of all evil. I’m sure more or less the same violation of DIP can be found in, for example, ActiveJob classes.</p><h3>Conclusion</h3><p>It should be mentioned that Bob Martin himself noted that the recommendations are not so strict in dynamic languages.</p><p>Violation of DIP helps to write code fast. Violation of OCP allows the code to be tested reliably. Though the implementation of mocks is tricky and, as I can imagine, quite hard to maintain. 
Anyway, from some point of view, it proves that it’s possible to write maintainable software without strictly following SOLID.</p><p>Yet, I believe that Ruby developers should stick more to object-oriented design practices. Use Dependency Injection more often and write maintainable, easily extendable, loosely coupled code. It simplifies life a lot when properly handled. Cheers.</p><figure><a href="https://blog.usejournal.com/meet-journal-d222fce8db1d"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*f2IVAl0TbsfES9cFGYr40g.png" /></a></figure><p><strong>This story is published in Noteworthy, where 10,000+ readers come every day to learn about the people &amp; ideas shaping the products we love.</strong></p><p><strong>Follow our publication to see more product &amp; design stories featured by the </strong><a href="https://usejournal.com/?/utm_source=usejournal.com&amp;utm_medium=blog&amp;utm_campaign=guest_post"><strong>Journal</strong></a><strong> team.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7ba087f724f6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Standardize Rails log output]]></title>
            <link>https://medium.com/@alieckaja/standardize-rails-log-output-d6ad0827a172?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/d6ad0827a172</guid>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[logging]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Tue, 04 Sep 2018 07:50:16 GMT</pubDate>
            <atom:updated>2018-09-04T07:50:16.354Z</atom:updated>
<content:encoded><![CDATA[<p>Currently my team is working on an application which intensively communicates with external APIs and writes a lot of information to log files about received/processed/failed/sent requests. Log messages in the app are popping up from nested subscribers, publishers, and service classes; they are inconsistent and don’t have any common structure.</p><p>The log itself was pretty noisy and painful to read and analyze with the naked eye.</p><p>At some point it was decided to switch to a centralized log management tool (Splunk in our case) in order to allow our support team to search/filter log entries on their own and in a more convenient way than less. Therefore we had to standardize all the log entries. The desired log entry format looks like</p><pre>timestamp={time} level={severity} source_class={class_name} user_id={email} message={log_message} tag=#{tag}</pre><p>Moreover there’s also the standard Rails log output which needs to be cast to this uniform format as well. There are plenty of gems which do log formatting, but after a quick investigation it turned out that it’s easy to adjust Rails logging without external libraries. And none of the gems was offering the required log format out-of-the-box anyway.</p><p>First of all, we’ll need a dedicated log formatter class, which has to respond to the call(severity, time, progname, msg) method with this exact signature. severity stands for the level of message ‘importance’. 
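</p><p>For reference, a minimal hypothetical version of such a formatter (the full version from this article lives in the embedded gists) might look like this:</p>

```ruby
require "logger"

# Hypothetical minimal sketch of a key=value formatter with the
# call(severity, time, progname, msg) signature described above.
class KeyValueFormatter
  def call(severity, time, progname, msg)
    entry = "timestamp='#{time}' level=#{severity}"
    entry += " progname='#{progname}'" if progname
    entry + " message='#{msg}'\n"
  end
end

logger = Logger.new($stdout)
logger.formatter = KeyValueFormatter.new
logger.info("Received a new package")
```

<p>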
time and msg arguments are more or less obvious, and progname is the name of the program which uses the logger class, it’s usually used by gems with built-in logging (<a href="https://ruby-doc.org/stdlib-2.2.3/libdoc/logger/rdoc/Logger/Formatter.html">Ruby docs</a>).</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4c3a8f3d67bdd724134f321e824149c9/href">https://medium.com/media/4c3a8f3d67bdd724134f321e824149c9/href</a></iframe><p>Now let’s “plug” it into Rails. It was decided to keep default Rails TaggedLogging wrapper.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d12f112b8735bfbd7b88735b3167b581/href">https://medium.com/media/d12f112b8735bfbd7b88735b3167b581/href</a></iframe><p>Yeah, that’s easy! Now our brand new log formatter is in place. Except it still misses some of desired attributes, i.e. source_class and user_id, they’re a bit trickier to implement, but let’s do it.</p><p>Our app has multiple event processing classes and it’d be neat to show within the log entry which of them produce this exact output. There’re a few possible options: manually add a class name to the logger message wherever we need it, or detect the caller class automatically on the fly. The second one sounds fun, so I decided to give it a shot first.</p><p>Ruby’s Kernel includes caller method which returns the current execution stack represented as an array of strings with files/methods names (<a href="https://apidock.com/ruby/Kernel/caller">Ruby Docs</a>). It’s troublesome to build source_class detection based on this method solely. Still, we can achieve the same in an easier and a more performant way — by using <a href="https://github.com/banister/binding_of_caller">binding_of_caller</a> gem. 
It’s written in C and does exactly what we need — detects the class name flawlessly.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b8ac6259bb1bfb874254409c08cf87de/href">https://medium.com/media/b8ac6259bb1bfb874254409c08cf87de/href</a></iframe><p>STACK_LEVEL is 7 as it goes through 6 methods in the logger stack before it gets into the log formatter.</p><p>Sadly, the gem’s author asks not to use it in production applications. Well ok, since we’re mostly interested in analyzing the output of our internal classes, we can go for explicit argument passing.</p><p>I didn’t want to monkey-patch the Logger class to make it accept additional arguments or to create a separate logger. Hooray, thanks to dynamic typing it’s not necessary. We can use the following logger call notation:</p><pre>logger.info(&#39;Info message&#39;)</pre><pre># or</pre><pre>logger.info(message: &#39;Info message&#39;, source_class: class_name, user_id: user.email)</pre><p>It’s still the same single first method argument, but in the first example it’s a string, and in the second one — a hash. All we need to do is to allow the log formatter to work with hashes and to manually add source_class and/or user_id values to logger calls in the app wherever we need them to be in place.<br>The next log formatter version:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4bc9828296a1ab1344ea0c481267ce4f/href">https://medium.com/media/4bc9828296a1ab1344ea0c481267ce4f/href</a></iframe><p>That would be enough if we didn’t use ActiveSupport::TaggedLogging. But we do, and this module casts all messages to strings, concatenating them with the tags. 
We have to add support for hash messages as well.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/626018e01ef13a275ba5caff6045acbf/href">https://medium.com/media/626018e01ef13a275ba5caff6045acbf/href</a></iframe><p>And that’s it, now let’s check it on a few examples.<br>Progname:</p><pre>logger.info(<strong>&#39;</strong>MyApp<strong>&#39;</strong>) { &#39;Received a new package&#39; }</pre><pre>=&gt; timestamp=&#39;2018-09-03 19:50:39 +0000&#39; level=INFO progname=&#39;MyApp&#39; message=&#39;Received a new package&#39;</pre><p>Message as a hash:</p><pre>logger.info(message: <strong>&#39;</strong>Message<strong>&#39;</strong>, user_id: <strong>&#39;</strong>admin@example.com<strong>&#39;</strong>, foo: <strong>&#39;</strong>Bar<strong>&#39;</strong>)</pre><pre>=&gt; timestamp=&#39;2018-09-03 19:54:56 +0000&#39; level=INFO message=&#39;Message&#39; user_id=&#39;admin@example.com&#39; foo=&#39;Bar&#39;</pre><p>Tagged logs:</p><pre>logger.tagged(&#39;Records&#39;) do<br>  logger.debug message: &#39;Finding records...&#39;, <br>               source_class: &#39;ClassName&#39;<br>end</pre><pre>=&gt; timestamp=&#39;2018-09-03 20:04:49 +0000&#39; level=DEBUG message=&#39;Finding records...&#39; source_class=&#39;ClassName&#39; tag=&#39;[Records]&#39;</pre><p>Regular log message:</p><pre>logger.warn(<strong>&#39;</strong>Warning<strong>&#39;</strong>)</pre><pre>=&gt; timestamp=&#39;2018-09-03 20:09:19 +0000&#39; level=WARN message=&#39;Warning&#39;</pre><p>Thanks for reading! <br>If you have any questions or feedback please feel free to reach me on <a href="https://twitter.com/yoletskaya">twitter</a>. Cheers!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d6ad0827a172" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Rails API + JWT auth + VueJS SPA: Part 3, Passwords and Tokens management]]></title>
            <link>https://medium.com/@alieckaja/rails-api-jwt-auth-vuejs-spa-part-3-passwords-and-tokens-management-c1eddc6a49d1?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/c1eddc6a49d1</guid>
            <category><![CDATA[vuejs]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Thu, 12 Jul 2018 09:02:53 GMT</pubDate>
            <atom:updated>2018-07-13T21:22:22.698Z</atom:updated>
            <content:encoded><![CDATA[<p>In two previous articles (<a href="https://blog.usejournal.com/rails-api-jwt-auth-vuejs-spa-eb4cf740a3ae">Part 1</a>, <a href="https://medium.com/@yuliaoletskaya/rails-api-jwt-auth-vuejs-spa-part-2-roles-601e4372a7e7">Part 2</a>) we’ve built a secure todos application with the ability to manage todos, a basic admin panel and support for 3 different types of user roles.</p><p>In this part we’ll add a forgot-my-password feature and the ability to edit user roles via the admin panel.</p><p>Acceptance criteria:<br> — User should be able to restore their password via the forgot-my-password feature. Once User submits their email, they should receive a secure link via mail with reset password instructions.<br> — On the password reset User should be logged out from all devices. User will have to enter their new password to log back in.<br> — Admin should have the ability to edit all user roles except for their own (poka-yoke).<br> — As User role is stored in the access token’s payload, once user’s role is edited — access token must be flushed, so the user will have to make a new refresh request in order to receive the access token with an updated payload.</p><h3><strong>The Backend</strong></h3><p>As we’re building an API-first application let’s start with the backend as usual. I propose using the classic approach for password resets: unique password reset tokens which are used for generating reset password links.<br>1. Generate a rails migration.</p><pre>$ rails g migration add_reset_password_fields</pre><p>The migration itself:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a084ffd709375fd4f7f86fc88926e967/href">https://medium.com/media/a084ffd709375fd4f7f86fc88926e967/href</a></iframe><p>2. Now we can add reset password token generation methods to the User model. 
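The actual model code is in the embedded gist; conceptually it boils down to something like this hedged sketch (method names are illustrative, and instance variables stand in for the real ActiveRecord columns): persist only a digest of the token, mail out the raw value, and expire it after a while.

```ruby
require 'securerandom'
require 'digest'

# Sketch of the classic reset-token scheme: only a digest of the token is
# stored, the raw token goes into the emailed link, and tokens expire.
# Names here are illustrative, not the article's exact code.
module PasswordResettable
  RESET_TOKEN_TTL = 2 * 60 * 60 # 2 hours, an arbitrary choice

  attr_reader :reset_password_digest, :reset_password_sent_at

  def generate_reset_password_token!
    raw = SecureRandom.urlsafe_base64(32)
    @reset_password_digest = Digest::SHA256.hexdigest(raw)
    @reset_password_sent_at = Time.now
    raw # only the raw value is mailed; only the digest is persisted
  end

  def reset_password_token_valid?(raw)
    return false unless reset_password_digest && reset_password_sent_at
    return false if Time.now - reset_password_sent_at > RESET_TOKEN_TTL
    Digest::SHA256.hexdigest(raw) == reset_password_digest
  end

  def clear_reset_password_token!
    @reset_password_digest = @reset_password_sent_at = nil
  end
end
```

Storing a digest instead of the raw token means a leaked database dump cannot be turned into working reset links.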
After the password is reset we should clean up the tokens, so the same token cannot be used twice.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0d5ed99cc310a0cc6ae5fb81eb932e10/href">https://medium.com/media/0d5ed99cc310a0cc6ae5fb81eb932e10/href</a></iframe><p>Now we’re ready to build the password resets controller. The desired flow is as follows: User submits their email (first endpoint), then gets a secure link which leads them to the enter-new-password page (second endpoint), and then User enters the data and hits submit (third endpoint). So we need to implement 3 actions within the controller.</p><p>3. Implement the POST /password_resets endpoint. This endpoint sends mail, so we’ll need to build a mailer class as well.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/71f674f09eae1aee4404670917c94883/href">https://medium.com/media/71f674f09eae1aee4404670917c94883/href</a></iframe><p>The password reset link goes within the mailer template.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2d3bf1fb1a9c13bdcd74ef07139e1bef/href">https://medium.com/media/2d3bf1fb1a9c13bdcd74ef07139e1bef/href</a></iframe><p>The action itself.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/666e7131132faee05ab9d38792d97c0a/href">https://medium.com/media/666e7131132faee05ab9d38792d97c0a/href</a></iframe><p>Note that even if a user is not found we should return a successful response anyway, so an attacker will not be able to check whether certain email addresses are registered in the system.</p><p>4. Implement GET /reset_passwords/:token/edit. 
The action itself verifies whether a specific reset password token is valid.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cb019cf72cb23e5e0e450d51b1baf552/href">https://medium.com/media/cb019cf72cb23e5e0e450d51b1baf552/href</a></iframe><p>A custom exception is added as well. Let’s add a handler for this type of exception to the application controller.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b04c6c209ebef6974e6b586edd6e25b8/href">https://medium.com/media/b04c6c209ebef6974e6b586edd6e25b8/href</a></iframe><p>5. Implement PATCH /reset_passwords/:token</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/308597fc901f17ce62f9bb4aec54aae2/href">https://medium.com/media/308597fc901f17ce62f9bb4aec54aae2/href</a></iframe><p>Right after the password reset we clean up the tokens in the update action.<br>The routes.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/36df2e858010b0fe5bd18a6bf846d87a/href">https://medium.com/media/36df2e858010b0fe5bd18a6bf846d87a/href</a></iframe><p>And specs to ensure it works.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a241015bba5f0a679f38a377e2be836e/href">https://medium.com/media/a241015bba5f0a679f38a377e2be836e/href</a></iframe><p>All of it implements the first AC on the backend. Now we can implement the second one.<br>6. An attentive reader might remember that the session in the application is represented with 2 tokens — access and refresh. Both are stored in Redis (partially: for the access token only its UID is persisted), so to flush the session we must delete the tokens from Redis. But first, we should link a user to their sessions to know exactly which sessions to flush. 
To do that we can use namespaces, which group the sessions by their user (or any other common attribute).<br>First, let’s add a namespace with the user ID everywhere in the app where we operate on sessions, specifically the signin, signup and refresh controllers.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0ac89ad2065915772775f446a7a863f5/href">https://medium.com/media/0ac89ad2065915772775f446a7a863f5/href</a></iframe><p>The namespace: &quot;user_#{user.id}&quot; attribute is added to the session declaration.</p><p>7. With this done we can flush all sessions which share a common namespace.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c079317023679bc774f3d666e75c0229/href">https://medium.com/media/c079317023679bc774f3d666e75c0229/href</a></iframe><p>8. Let’s move on to the next AC and allow Admins to edit user roles. We should keep in mind that while both Admins and Managers are allowed to view users, only Admins should have permission to edit them (that’s controlled by the allowed_aud method). 
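The idea behind such an audience check is simply that the token is accepted when its aud claim shares at least one value with the roles the endpoint allows. A rough, illustrative sketch (not the gem’s internals):

```ruby
# Illustrative sketch of aud-based authorization: a token passes for an
# endpoint when its aud claim intersects the endpoint's allowed roles.
def aud_allowed?(token_aud, allowed_aud)
  (Array(token_aud) & Array(allowed_aud)).any?
end

aud_allowed?(['admin'], %w[admin manager])   # Admin may edit users
aud_allowed?(['manager'], %w[admin])         # Managers may only view
```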
Also there’s a check in the update action ensuring that Admins cannot modify their own role.<br>To fulfill the fourth AC we’re using flush_namespaced_access_tokens, which keeps the refresh token and removes only the access token, making the user’s locally stored access tokens invalid so the user has to perform a new refresh request.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dded4238ed830e3729b694396b78a7b2/href">https://medium.com/media/dded4238ed830e3729b694396b78a7b2/href</a></iframe><p>Here are the specs.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/24e8874def1936a5cc6b35387493a0b6/href">https://medium.com/media/24e8874def1936a5cc6b35387493a0b6/href</a></iframe><p>The API is ready and we can start working on the frontend.</p><h3>The Frontend</h3><p>Foreword: the JS code in this article is simple and straightforward; there are endless possibilities for refactoring (removing code duplication, using the store instead of making API requests on each page load, etc.), but all those improvements would be material for a separate article and I’m too lazy for this, so here goes the most naive JS implementation possible.</p><p>1. 
First, let’s build the ForgotPassword component and add navigation links.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0152f7693881b96b16c3fb27ef376785/href">https://medium.com/media/0152f7693881b96b16c3fb27ef376785/href</a></iframe><p>Update routes.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d07a6694f9ea7ab2e76470587e6a7e15/href">https://medium.com/media/d07a6694f9ea7ab2e76470587e6a7e15/href</a></iframe><p>And add router links to Signin/Signup components.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e26059e32d0d50625b0a365037ef93d8/href">https://medium.com/media/e26059e32d0d50625b0a365037ef93d8/href</a></iframe><p>Here’s the view.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_HlKYwuox5KdjKdPbLm0ag.png" /><figcaption>(bottom sign up is centered, it only SEEMS crooked BECAUSE OF THE FONT)</figcaption></figure><p>2. The ResetPassword.vue component — the one the link from the reset password email leads to. On load, the component sends a GET request to the reset passwords endpoint to verify that the token from the URL is correct.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/28594fc32bc8ef1da85e5dc0edd3765a/href">https://medium.com/media/28594fc32bc8ef1da85e5dc0edd3765a/href</a></iframe><p>Routes.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b19efe8100dadfa95fed26f92307de43/href">https://medium.com/media/b19efe8100dadfa95fed26f92307de43/href</a></iframe><p>Visualisation (all views are pretty similar, but still)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BR5u0eNLH4Cd46StIKk86Q.png" /></figure><p>3. Now that JS client users are able to reset their passwords, let’s add the ability to edit roles. 
We’ll create a separate Edit component for users in the admin space. It also prevents non-admins from accessing the users edit page and prohibits admins from modifying their own roles.<br>To implement this check let’s add currentUserId as a getter to the Vue store.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/64ee40140f1b33dece91a2292189611c/href">https://medium.com/media/64ee40140f1b33dece91a2292189611c/href</a></iframe><p>The edit component contains a label with the selected user’s email and a select box with the list of available roles.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4b5383fe11f2d293f548c0187d67e5d6/href">https://medium.com/media/4b5383fe11f2d293f548c0187d67e5d6/href</a></iframe><p>Routes.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3a0190062254fb1ea5ec1847329d0f64/href">https://medium.com/media/3a0190062254fb1ea5ec1847329d0f64/href</a></iframe><p>The view.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MqQmbbEmJ911njS99REzJg.png" /></figure><p>4. We have the component, but there’s no way to navigate to it through the app yet. Let’s add links to the users edit view, visible only to Admins.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0dd33af81017317f3888e18bcb88a694/href">https://medium.com/media/0dd33af81017317f3888e18bcb88a694/href</a></iframe><p>The view (all links are available for Admin, except for the link to their own profile)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*w3xnlBAw0EHf8Tt8MoHzDQ.png" /></figure><p>5. And last, but not least. In this JS client we’re storing the current user’s info in local storage. 
So even if the access token is reset on the server and the app automatically requests a new one after a refresh, the user’s info within the store still isn’t updated. Let’s fix that and request the user info after a refresh, to be sure we’re up to date with the user’s new roles.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e7ae1fff9de110efa9dbd87c02c57d7e/href">https://medium.com/media/e7ae1fff9de110efa9dbd87c02c57d7e/href</a></iframe><p>And that’s pretty much it: the AC are met and we are set!</p><p>The application code can be found on <a href="https://github.com/tuwukee/silver-octo-invention">GitHub</a>.<br>Thanks for reading! It was fun making this series of articles. <br>If you have any questions or feedback please feel free to reach me on <a href="https://twitter.com/yoletskaya">twitter</a>. Cheers!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Rails API + JWT auth + VueJS SPA: Part 2, Roles]]></title>
            <link>https://medium.com/@alieckaja/rails-api-jwt-auth-vuejs-spa-part-2-roles-601e4372a7e7?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/601e4372a7e7</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[rails]]></category>
            <category><![CDATA[vuejs]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Mon, 25 Jun 2018 12:53:56 GMT</pubDate>
            <atom:updated>2018-07-12T09:05:24.713Z</atom:updated>
            <content:encoded><![CDATA[<p>In the <a href="https://blog.usejournal.com/rails-api-jwt-auth-vuejs-spa-eb4cf740a3ae">previous article</a> we’ve built a simple Todos application. Users of this application are able to sign up, sign in, sign out and manage their own lists of todos.</p><p>This time we’re going to extend the functionality and add a few more features.</p><p>Acceptance criteria:<br> — User model should allow 3 types of roles: Admin, Manager and User (default).<br>— Admins and Managers should have access to the admin panel.<br> — Within the admin panel Admins and Managers should be able to view the list of all app users.<br> — Within the admin panel only Admin users should be able to view a todo list of a selected user.</p><p>The application itself can be found on <a href="https://github.com/tuwukee/silver-octo-invention">GitHub</a>.</p><h3>The Backend</h3><ol><li>First, we’ll specify the allowed roles list within the User model.</li></ol><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3a66fbdf33e7754ff90204973730401a/href">https://medium.com/media/3a66fbdf33e7754ff90204973730401a/href</a></iframe><p>2. Add a role attribute to the User model</p><pre>$ rails generate migration add_role_to_users role:string</pre><p>3. Set the user role as the default value within the db migration.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/df2f32b3788e3f27c915e8e1f61968c0/href">https://medium.com/media/df2f32b3788e3f27c915e8e1f61968c0/href</a></iframe><p>4. Run rake db:migrate</p><p>5. Now we can add roles to the token’s payload. The general downside of using HTTP-only cookies as a token store — we cannot read the token’s payload on the frontend. We can still read the payload on the backend though, when receiving the auth token from web clients, and thus prevent some db hits, as we can fetch and validate the authorization data from the token itself. 
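To make this concrete, here is what a token and its payload actually look like — a hand-rolled HS256 sketch using only the standard library, for illustration; a real app should rely on a maintained JWT library instead:

```ruby
require 'json'
require 'base64'
require 'openssl'

# A JWT is header.payload.signature: two Base64url-encoded JSON parts
# plus an HMAC-SHA256 signature over them, keyed with a shared secret.
def b64url(data)
  Base64.urlsafe_encode64(data, padding: false)
end

def jwt_encode(payload, secret)
  signing_input = "#{b64url({ alg: 'HS256', typ: 'JWT' }.to_json)}.#{b64url(payload.to_json)}"
  "#{signing_input}.#{b64url(OpenSSL::HMAC.digest('SHA256', secret, signing_input))}"
end

def jwt_decode(token, secret)
  signing_input, _, signature = token.rpartition('.')
  expected = b64url(OpenSSL::HMAC.digest('SHA256', secret, signing_input))
  # use a constant-time comparison in real code
  raise 'bad signature' unless expected == signature
  JSON.parse(Base64.urlsafe_decode64(signing_input.split('.').last))
end

token = jwt_encode({ user_id: 1, aud: ['admin'] }, 'secret')
jwt_decode(token, 'secret') # => {"user_id"=>1, "aud"=>["admin"]}
```

The payload is only encoded, not encrypted — anyone holding the token can read it, but nobody without the secret can forge or alter it, which is exactly why the backend can trust the claims without a db hit.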
<br>The JWT standard provides a set of standard claims that can be used for token verification:</p><p>iss — Issuer. Identifies the principal that issued the JWT.<br>sub — Subject. Identifies the subject of the JWT.<br>aud — Audience. Identifies the recipients that the JWT is intended for. This one can be used to specify the set of accepted roles.<br>exp — Expiration time.<br>nbf — Not before. Identifies the time at which the JWT will start to be accepted for processing.<br>iat — Issued at. Identifies the time at which the JWT was issued. <br>jti — JWT ID. A case-sensitive unique identifier of the token, even among different issuers.</p><p>As for role-based permission verification: although we could use totally custom keys in the payload, e.g. a role key, and verify them via Pundit, ActionPolicy or any other authorization solution, in this tutorial I’d like to show how to work with the standard JWT claims. But just for the sake of interest, later in the article I’ll quickly show the Pundit way as well.<br>Let’s update the signin and signup controllers and extend the payload to include the aud claim with the user’s role — payload = { user_id: user.id, aud: [user.role] }.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/eb2be275a91e0c4b008fdb9dcaaca8d8/href">https://medium.com/media/eb2be275a91e0c4b008fdb9dcaaca8d8/href</a></iframe><p>4. As the frontend cannot read the data from the payload, we must pass user data explicitly from the backend. 
Let’s build a separate endpoint for this matter (in real life I’d use a serializer in the controller, but I’m too lazy to build it within this example).</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/216926ea7e9d85c4d6f4e63236eb506e/href">https://medium.com/media/216926ea7e9d85c4d6f4e63236eb506e/href</a></iframe><p>And a simple spec to ensure it works.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/41f92bf8d75d53bb877ee7d870d67537/href">https://medium.com/media/41f92bf8d75d53bb877ee7d870d67537/href</a></iframe><p>5. Now we can add a users controller to the admin space of the API. We must specify the token_claims method in order to let JWT know which claims we’d like to verify. By default it checks only the expiration claim. We’re going to add a list of allowed roles — Admin and Manager — and verify them within the aud claim. <br>The extremely cool part of it: the whole authorization flow is performed without a single database query.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/64d036c5b5d960b576ce2ee51929d5bd/href">https://medium.com/media/64d036c5b5d960b576ce2ee51929d5bd/href</a></iframe><p>With roles support we should handle 403 errors within the application controller.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7e1275da2354afb6ca83af9dfc4b713a/href">https://medium.com/media/7e1275da2354afb6ca83af9dfc4b713a/href</a></iframe><p>Specs. Btw, those are demonstration specs; no need to test auth on each controller within the application.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/23025a4294508f8dd1fc09445e1a18dd/href">https://medium.com/media/23025a4294508f8dd1fc09445e1a18dd/href</a></iframe><p>6. 
And the last one — the todos controller; only users with the Admin role are allowed to view todos.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a59e6a68cad5a95361f3fa31f3345246/href">https://medium.com/media/a59e6a68cad5a95361f3fa31f3345246/href</a></iframe><p>Specs.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f65b99500a304ac7efaa02e4827c120a/href">https://medium.com/media/f65b99500a304ac7efaa02e4827c120a/href</a></iframe><h3><strong>The Frontend</strong></h3><p>1. Let’s add Vuex and Vuex-Persistedstate in order to manage the SPA state in a more convenient way.</p><pre>$ npm install vuex --save<br>$ npm install vuex-persistedstate</pre><p>2. Now we should fetch and store the current user within the Vue store. Once we have the store, we can go ahead and move all the auth data to the store as well for the sake of consistency. But first, let’s create the store.</p><pre>$ tree src<br>src<br>├── App.vue<br>├── backend<br>│   └── axios<br>│       └── index.js<br>├── components<br>│   ├── Signin.vue<br>│   ├── Signup.vue<br>│   └── todos<br>│       └── List.vue<br>├── main.js<br>├── router<br>│   └── index.js<br>└── store.js</pre><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6982e0bcb95bd1509095960c3b956dd9/href">https://medium.com/media/6982e0bcb95bd1509095960c3b956dd9/href</a></iframe><p>3. Update the router to use the data from the store.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/62667c6dbf2eedc3d289b4d4bd204d3d/href">https://medium.com/media/62667c6dbf2eedc3d289b4d4bd204d3d/href</a></iframe><p>4. 
Now let’s add current user fetching to the Signin/Signup components and adjust the rest of the code to use the data from the store.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/05f88bd321ad8e0ffc7a3bf7d9bc9b98/href">https://medium.com/media/05f88bd321ad8e0ffc7a3bf7d9bc9b98/href</a></iframe><p>5. Extract the sign out link into a header component and add a link to the admin space, visible to Admins and Managers only.<br>To achieve this we’ll add isAdmin and isManager getters to the store. Vue allows for more flexible solutions for managing roles and permissions, but as it’s not the main goal of the article we’ll not focus on it too much.</p><pre>$ tree src/components</pre><pre>src/components<br>├── AppHeader.vue<br>├── Signin.vue<br>├── Signup.vue<br>└── todos<br>   └── List.vue</pre><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d13f5a4a3d9f9532d18c3c4e45bb5534/href">https://medium.com/media/d13f5a4a3d9f9532d18c3c4e45bb5534/href</a></iframe><p>Phew, a lot of work done, and all of it just to show a tiny Admin link in the header for specific users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Q78pqzTsOSSq5xpnr6lIMg.png" /></figure><p>6. 
The link itself isn’t even pointing anywhere, so let’s fix that and add an admin space component with a users list.</p><pre>$ tree src/components</pre><pre>src/components<br>├── AppHeader.vue<br>├── Signin.vue<br>├── Signup.vue<br>├── admin<br>│   └── users<br>│       └── List.vue<br>└── todos<br>   └── List.vue</pre><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cf54fb67f4efdcd18cb85d22a71ac253/href">https://medium.com/media/cf54fb67f4efdcd18cb85d22a71ac253/href</a></iframe><p>And while we’re here let’s also add a Home link to the header.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/bc4a21aacf7416058449b15b16a79c7c/href">https://medium.com/media/bc4a21aacf7416058449b15b16a79c7c/href</a></iframe><p>Yay, a fresh admin component is here. Note that the right column with the todos icon is visible to Admin users only.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TJD9Kw-AchYhs2miX21c0Q.png" /></figure><p>7. And finally, the last component — todos per user.</p><pre>$ tree src/components/admin</pre><pre>src/components/admin<br>└── users<br>   ├── List.vue<br>   └── todos<br>      └── List.vue</pre><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d10af5812b19c248504a5ada1322f323/href">https://medium.com/media/d10af5812b19c248504a5ada1322f323/href</a></iframe><p>Visualisation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*II02uwpaQ9kUUruleSbw8Q.png" /></figure><p>And done! 
All acceptance criteria are fulfilled.<br>Here are a few fancy gifs showing interactions with the app on behalf of users with different roles.</p><h4>Admin</h4><p>Visiting todos page, creating a new todo and deleting it, then visiting admin space, viewing the list of users and their todos.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gwTiPYscBzAOvg65YOfpdw.gif" /></figure><h4>Manager</h4><p>Visiting todos page, creating a new todo and deleting it, then visiting admin space, viewing the list of users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wn9zGYdbC1VwLe8UTomjyQ.gif" /></figure><h4>User</h4><p>Visiting todos page, creating a new todo, then editing this todo and deleting it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HOvAhvqGwMvIxvzII7IDnA.gif" /></figure><p>And as promised, here’s a quick sample of a Pundit policy using the JWT payload for authorization. In the example we’re replacing pundit_user with the payload hash. <br>The payload is customizable, so it can contain specific permissions like { can_view_secret_resource: true }, which otherwise would require heavy db queries.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/09202063d454ce273e7c60d9478aa74b/href">https://medium.com/media/09202063d454ce273e7c60d9478aa74b/href</a></iframe><p>In the next, hopefully final, part I’ll show how to edit user roles and reset forgotten passwords, and how to properly reset tokens and sessions in case of a role change or user deletion. <br>Thanks for reading!</p><p>UPD: <a href="https://medium.com/@yuliaoletskaya/rails-api-jwt-auth-vuejs-spa-part-3-passwords-and-tokens-management-c1eddc6a49d1">3rd Part of the series.</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Rails API + JWT auth + VueJS SPA]]></title>
            <link>https://medium.com/@alieckaja/rails-api-jwt-auth-vuejs-spa-eb4cf740a3ae?source=rss-ea9bcf695a7d------2</link>
            <guid isPermaLink="false">https://medium.com/p/eb4cf740a3ae</guid>
            <category><![CDATA[rails]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[vuejs]]></category>
            <dc:creator><![CDATA[Julija Alieckaja]]></dc:creator>
            <pubDate>Sun, 17 Jun 2018 13:57:05 GMT</pubDate>
            <atom:updated>2019-01-28T20:38:23.791Z</atom:updated>
            <content:encoded><![CDATA[<p>This is a quick guide designed to help newbie developers build a VueJS SPA with authenticated API calls.</p><p>We’ll start with building an API-first REST-ful Rails backend. API-first means that the same API endpoints can be used by different Web/JS clients, mobile applications and 3rd party APIs, and ideally all of them should use a unified auth flow — JWT is a good fit for this goal. While in this article we’ll consider the creation of a single VueJS client, the API itself can be easily adapted to be reused by other API-consumers.</p><p>Tools for the backend: <br>Rails 5.2.0, <br>Ruby 2.4.4,<br><a href="https://github.com/codahale/bcrypt-ruby">gem </a><a href="https://github.com/codahale/bcrypt-ruby">bcrypt </a>1.3.12,<br><a href="https://github.com/tuwukee/jwt_sessions">gem </a><a href="https://github.com/tuwukee/jwt_sessions">jwt_sessions</a> 2.1.0,<br><a href="https://github.com/redis/redis-rb">gem </a><a href="https://github.com/redis/redis-rb">redis</a> 4.0.1,<br><a href="https://github.com/cyu/rack-cors">gem </a><a href="https://github.com/cyu/rack-cors">rack-cors</a> 1.0.2</p><p>We are not tied to these exact versions; I’m simply listing the ones I have installed on my local environment. Now, let’s build the classic todo app.</p><h3>The Backend</h3><ol><li>Create a rails app: rails new silver-octo-invention --api -T. The fancy project name is auto-generated by GitHub. The -T option excludes Minitest, the default testing framework. We’ll be using RSpec.</li><li>Adjust the Gemfile to look somewhat like this</li></ol><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4726bfd7f6c76ec1efd47f952946f1ed/href">https://medium.com/media/4726bfd7f6c76ec1efd47f952946f1ed/href</a></iframe><p>3. Run bundle install</p><p>4. Run rails generate rspec:install. This command generates</p><p>.rspec<br>spec/spec_helper.rb<br>spec/rails_helper.rb</p><p>5. Now, let’s create the User model. 
We’ll start with the minimum required model fields: rails g model user email:string password_digest:string</p><p>6. Add null: false settings to the migration strings.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/45e02f4e496a6abd4871755a7573e4aa/href">https://medium.com/media/45e02f4e496a6abd4871755a7573e4aa/href</a></iframe><p>7. Run rails db:migrate</p><p>8. Add a has_secure_password method call to the User model.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d8965223adbc56b7d0b879f68726c77b/href">https://medium.com/media/d8965223adbc56b7d0b879f68726c77b/href</a></iframe><p>9. Now, let’s create todos. Run rails g model todo title:string user:references &amp;&amp; rails db:migrate</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/27a22e96bed5af17e5d3c8857b46a069/href">https://medium.com/media/27a22e96bed5af17e5d3c8857b46a069/href</a></iframe><p>10. Let’s build the controllers. First, we’ll need to include JWTSessions::RailsAuthorization into ApplicationController. The module provides authorization actions which will protect secure endpoints.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6239d675391f360d0a9a36c201afe79c/href">https://medium.com/media/6239d675391f360d0a9a36c201afe79c/href</a></iframe><p>11. We’ll need exception handling for unauthorized requests, so let’s add it right away.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9acb7b1ea6dab60cadfb5ce47656f68a/href">https://medium.com/media/9acb7b1ea6dab60cadfb5ce47656f68a/href</a></iframe><p>12. By the way, in order to use JWT we need to specify the initial configuration. 
By default the JWTSessions gem uses the HS256 algorithm, which needs an encryption key provided.<br>Also, by default the gem uses Redis as a token store, so you’ll need a working redis-server instance. However, it’s possible to select memory as a token store (can be useful in the test env). More info about specific settings can be found in the README.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2a70897b4414e7ef3b12065b3147d127/href">https://medium.com/media/2a70897b4414e7ef3b12065b3147d127/href</a></iframe><p>13. Now, let’s create a signup endpoint. With token-based sessions and an SPA, the 2 most common options for storing the tokens on the client are cookies and localStorage. It’s up to the developer to decide where they are going to store the tokens. While making the decision, keep in mind — cookies are vulnerable to CSRF and localStorage is vulnerable to XSS attacks. The CSRF vulnerability is solvable - I usually prefer http-only cookies as the most secure token store.<br>The jwt_sessions gem itself provides a set of tokens — access, refresh, and CSRF for the cases when cookies are chosen as the token store. With this being said, let’s use cookies together with the CSRF token provided by the gem (the gem automatically manages the CSRF validations when the JWT is passed by request cookies). <br>The session within the gem is represented as a pair of tokens — access and refresh. The access token has a short life span (default 1 hour), and the refresh token has a relatively long life span (2 weeks). Expiration times are configurable. The refresh token is used to renew the access token once it’s expired.<br>While it makes sense to pass the refresh token to external API services or mobile applications — JS clients are usually not secure enough to store the precious refresh token. It’s up to the developer to decide which info to pass or not to pass to JS. 
The jwt_sessions gem provides the possibility to issue a new access token by passing the old expired one, so we can avoid passing the refresh token to the JS client. As the refresh and access tokens are linked to each other, it’s easy to detect when the access token has been stolen from the JS client and to flush the leaked session (the original user and the attacker will eventually hold two different access tokens pointing to the same refresh token).<br>Now for real, let’s create the signup endpoint. The endpoint must create users, assemble the JWT payload, and pass it via cookies with the response, as well as the CSRF token via the response body.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a2ebd4d5ec9c2d428b6cc601122aaac3/href">https://medium.com/media/a2ebd4d5ec9c2d428b6cc601122aaac3/href</a></iframe><p>Specs to ensure the signup works.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/08d540283c260a0f4869774131ec3c51/href">https://medium.com/media/08d540283c260a0f4869774131ec3c51/href</a></iframe><p>14. Now we can build a sign in controller.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f0340ec319f04883b246813eb8cc3e5/href">https://medium.com/media/3f0340ec319f04883b246813eb8cc3e5/href</a></iframe><p>And specs for signin.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/79c3a029f6e7c79d83ed101b8fcf9e17/href">https://medium.com/media/79c3a029f6e7c79d83ed101b8fcf9e17/href</a></iframe><p>15. Here goes the refresh endpoint. As we’re building an endpoint for a web client, we’ll be issuing a new access token based on the old expired one. 
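</p><p>The leaked-session detection described above can be modeled in a few lines. This is a toy illustration of the idea, not how jwt_sessions implements it, and all the names below are made up:</p>

```ruby
require 'securerandom'

# Toy model: a refresh "session" remembers the most recently issued access
# token, so anyone presenting a stale access token is a suspect.
class RefreshSession
  attr_reader :current_access

  def initialize
    rotate_access
  end

  # issue a fresh access token, superseding the previous one
  def rotate_access
    @current_access = SecureRandom.hex(16)
  end

  def suspicious?(presented_access)
    presented_access != current_access
  end
end

session = RefreshSession.new
stolen = session.current_access # the attacker copies the current access token
session.rotate_access           # the legitimate user refreshes
session.suspicious?(stolen)     # two different access tokens now point to the
                                # same refresh session, which is detectable
```

<p>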
Later we can create a separate set of endpoints to be used by other API consumers (mobile, etc.) which will operate via refresh tokens, but in this case we’re not going to risk exposing the refresh token to the cruel outer world.<br>We expect only expired access tokens to be used for refresh, so a block raising an unauthorized exception is passed to the refresh_by_access_payload method. Optionally, within the block we can notify the support team, flush the session, or just skip the block and ignore this kind of activity.<br>The JWT library automatically checks expiration claims, so to avoid the exception for an expired access token we’ll be using the claimless_payload method.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d85b2c0b0624dcef7fb61d637d8bd927/href">https://medium.com/media/d85b2c0b0624dcef7fb61d637d8bd927/href</a></iframe><p>Refresh specs:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2ade62ddf3b9a5d0e67a2d7e139704cf/href">https://medium.com/media/2ade62ddf3b9a5d0e67a2d7e139704cf/href</a></iframe><p>16. Almost there. This is the time to build the todos controller.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5d9a89b8538b28e476c8e29899d933db/href">https://medium.com/media/5d9a89b8538b28e476c8e29899d933db/href</a></iframe><p>To make it work we’ll also need current_user, so let’s add it.<br>After the token is authorized we can dig into the payload and fetch whatever we decided to store within. 
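</p><p>For intuition on what “digging into the payload” means: the payload segment of a JWT is just base64url-encoded JSON (signed, but not encrypted), which is also why it stays readable after the exp claim has passed. A rough stdlib sketch of the concept — the gem handles this for you:</p>

```ruby
require 'base64'
require 'json'

# Decode the middle (payload) segment of a JWT without verifying any claims.
def decode_payload(jwt)
  body = jwt.split('.')[1]
  body += '=' * ((4 - body.length % 4) % 4) # restore stripped base64 padding
  JSON.parse(Base64.urlsafe_decode64(body))
end

# A handcrafted token with a fake header and signature and an expired payload:
payload = Base64.urlsafe_encode64(JSON.generate({ user_id: 42, exp: 0 })).delete('=')
token = "header.#{payload}.signature"
decode_payload(token)['user_id'] # => 42, despite exp being long past
```

<p>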
In our case it’s user_id.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9fefd367642244bdbebc2aa562c5bf24/href">https://medium.com/media/9fefd367642244bdbebc2aa562c5bf24/href</a></iframe><p>Generic todos specs:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6335db5cea0b110d23a95332d989b0bc/href">https://medium.com/media/6335db5cea0b110d23a95332d989b0bc/href</a></iframe><p>17. We’re almost set with the API: it’s possible to sign up, sign in, refresh an access_token, and manage todos. But while we’re here, let’s also add the ability to log out.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f00c6c416d54cffa8c85a7a4807780b/href">https://medium.com/media/3f00c6c416d54cffa8c85a7a4807780b/href</a></iframe><p>18. To allow the JS client to send requests to the API, we’ll need to set up CORS.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5b6e2ceba0cc2275aa310e9873583d16/href">https://medium.com/media/5b6e2ceba0cc2275aa310e9873583d16/href</a></iframe><p>Routes configuration:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8af39b025f89f9ce271b9d74431b3c84/href">https://medium.com/media/8af39b025f89f9ce271b9d74431b3c84/href</a></iframe><h3>The Frontend</h3><p>In this guide we’ll develop a standalone frontend application.</p><p>Tools for the frontend:</p><p>Node.js<br>Node.js Version Manager — <a href="https://github.com/creationix/nvm">NVM</a><br>Node.js Package Manager — NPM<br><a href="https://github.com/vuejs/vue-cli">VueJS CLI</a><br><a href="https://github.com/axios/axios">Axios</a></p><pre>$ node -v<br>v10.4.1<br>$ npm -v<br>6.1.0</pre><ol><li>Install the Vue CLI.</li></ol><pre>$ npm install --global vue-cli</pre><p>2. 
Initialize the application.</p><pre>$ vue init webpack todos-vue</pre><p>This command is going to ask us a couple of questions.</p><pre>? <strong>Project name</strong> todos-vue<br>? <strong>Project description</strong> Todos Vue.js application<br>? <strong>Author</strong> Yulia Oletskaya &lt;yulia.oletskaya@gmail.com&gt;<br>? <strong>Vue build</strong> standalone<br>? <strong>Install vue-router?</strong> Yes<br>? <strong>Use ESLint to lint your code?</strong> Yes<br>? <strong>Pick an ESLint preset</strong> Standard<br>? <strong>Set up unit tests</strong> Yes<br>? <strong>Pick a test runner</strong> karma<br>? <strong>Setup e2e tests with Nightwatch?</strong> No<br>? <strong>Should we run `npm install` for you after the project has been created? (recommended)</strong> npm</pre><p>3. Start the application</p><pre>$ cd todos-vue<br>$ npm run dev</pre><p>And now it’s live on http://localhost:8080</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DPSoS719QyoT4IDYX6TTew.png" /><figcaption>VueJS HelloWorld component</figcaption></figure><p>4. Delete auto-generated HelloWorld component and create a new Signin.vue . The app directory should look similar to this one.</p><pre>$ tree . 
-I &quot;node_modules&quot;<br>.<br>├── README.md<br>├── build<br>│   ├── build.js<br>│   ├── check-versions.js<br>│   ├── logo.png<br>│   ├── utils.js<br>│   ├── vue-loader.conf.js<br>│   ├── webpack.base.conf.js<br>│   ├── webpack.dev.conf.js<br>│   ├── webpack.prod.conf.js<br>│   └── webpack.test.conf.js<br>├── config<br>│   ├── dev.env.js<br>│   ├── index.js<br>│   ├── prod.env.js<br>│   └── test.env.js<br>├── index.html<br>├── package-lock.json<br>├── package.json<br>├── src<br>│   ├── App.vue<br>│   ├── assets<br>│   │   └── logo.png<br>│   ├── components<br>│   │   └── Signin.vue<br>│   ├── main.js<br>│   └── router<br>│       └── index.js<br>├── static<br>└── test<br>   └── unit<br>      ├── index.js<br>      ├── karma.conf.js<br>      └── specs<br>         └── Signin.spec.js</pre><p>5. Let’s add stylesheet links to /index.html</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/87ad42fc70b52506b3dd4904317adfaa/href">https://medium.com/media/87ad42fc70b52506b3dd4904317adfaa/href</a></iframe><p>6. Install axios within todos-vue dir.</p><pre>$ npm install --save axios vue-axios</pre><p>Prepare configuration files.</p><pre>$ tree src/backend</pre><pre>src/backend<br>└── axios<br>   └── index.js</pre><p>Define configuration and make Vue use the axios config.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5becaa4b554665d632e7b410768b7456/href">https://medium.com/media/5becaa4b554665d632e7b410768b7456/href</a></iframe><p>7. Build Signin component. We are receiving cookies with an access token from the server response and they will be handled by the browser. In the response body we’ll receive CSRF. 
The CSRF token can be stored right in localStorage, as we don’t really care even if it gets stolen by a sudden XSS attack.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f4d5f7a7fc1f4b62d019eddef2809f3/href">https://medium.com/media/3f4d5f7a7fc1f4b62d019eddef2809f3/href</a></iframe><p>Visualisation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8k8zS9w2jFsxVYyclQQWyA.png" /></figure><p>8. To manage all the access/refresh auth we’ll need to build a custom wrapper for axios. The desired flow is the following:</p><ul><li>The JWT is stored within the HTTP-only cookie;</li><li>For all requests except OPTIONS and GET we must add the CSRF token to the request headers;</li><li>Once the JWT is expired, the next API request with this cookie returns 401;</li><li>At this point we should handle the Unauthorized response code and make a refresh request with the expired access token to get a new cookie;</li><li>If that request is successful we must retry the original request, otherwise throw up the 401.</li></ul><p>Actually, we’ll need to build two axios wrappers. The first one will use interceptors to handle 401s (the secure axios instance), and the second one won’t have any special retry listeners (the plain axios instance) but will perform the refresh and retry requests — this split is needed to avoid endless loops.<br>As you might have noticed, we’re already using the plain one in Signin, as we don’t need to retry requests on failed signins/signups:</p><pre>this.$http.plain.post('/signin', { email: this.email, password: this.password })</pre><p>The router configuration:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/71754dac6656b2f1f6f5419e62e3113f/href">https://medium.com/media/71754dac6656b2f1f6f5419e62e3113f/href</a></iframe><p>As the cookie is HTTP-only, JS can’t easily detect whether it’s present. Instead, we’ll be using a simple signedIn flag. 
Some people might say that the flag can be easily changed in the browser console, and so a logged-out user would gain access to “secure” URLs. Indeed, the flag can be modified manually. But since it’s only used to prevent redirects to the login page, this doesn’t weaken security: on any attempt to retrieve or update a secure resource without a valid cookie, the user will be redirected to the login page anyway.</p><p>9. Now, let’s build the Signup component. It’s very similar to Signin.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4d07e1894f9de347bb23d426d48e4d92/href">https://medium.com/media/4d07e1894f9de347bb23d426d48e4d92/href</a></iframe><p>A bit of visual representation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LdyAJ7_1mMON-ULOP5nUPg.png" /></figure><p>10. Finally, we can build a todos component. The implementation is pretty naive; the main goal here is to demonstrate the auth flow with basic CRUD.</p><pre>$ tree src/components</pre><pre>src/components<br>├── Signin.vue<br>├── Signup.vue<br>└── todos<br>   └── List.vue</pre><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f3ae1d92906860ac0160340782b34535/href">https://medium.com/media/f3ae1d92906860ac0160340782b34535/href</a></iframe><p>Routes.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/885641b742517f68b78c435ffaa1fda3/href">https://medium.com/media/885641b742517f68b78c435ffaa1fda3/href</a></iframe><p>Visualisation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EDsUjuT18xp_aGdlILNMCA.png" /></figure><p>A fancy trash icon appears on hover; it’s important and deserves an additional screenshot.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tdrkABuAl7CqmpvQ9fSvIQ.png" /></figure><p>11. And the last thing — the sign out button. 
Let’s do this.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3779b13d4cf5ed87e9eeb8b91fbaaf6c/href">https://medium.com/media/3779b13d4cf5ed87e9eeb8b91fbaaf6c/href</a></iframe><p>Visualisation</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4RAjngNL3CzNuQFabWt3Sw.png" /></figure><p>Woohoo, that’s it: now we have a fully functional, secure CRUD VueJS SPA application.</p><p>Thanks for reading!<br>The application itself is available on <a href="https://github.com/tuwukee/silver-octo-invention">GitHub</a>.<br>I plan to continue the series: add an admin panel, user roles, extended usage of JWT claims, and permissions in the payload - stay tuned.<br>UPD: <br><a href="https://medium.com/@yuliaoletskaya/rails-api-jwt-auth-vuejs-spa-part-2-roles-601e4372a7e7">The 2nd part of the series.</a><br><a href="https://medium.com/@yuliaoletskaya/rails-api-jwt-auth-vuejs-spa-part-3-passwords-and-tokens-management-c1eddc6a49d1">The 3rd part of the series.</a></p>]]></content:encoded>
        </item>
    </channel>
</rss>