4Developers 2017 — summary
The 2017 4Developers Festival has come to an end, and it was awesome! For me, this is now the obligatory conference that I plan to attend every year. I sat through a total of 7 hours of lectures and 2 hours of discussion, and all of that time was worth it. Only one of the lectures was a bit weaker, but even that one was really informative!
Oh, and you can get cool cups there too ;)
Here’s a quick summary of the most interesting things for me.
Consumer-Driven Contracts is an interesting approach to designing and maintaining REST services. It was presented by Jędrzej Andrykowski and focused on the Spring Cloud Contract implementation. The idea is that when designing a REST service, you first define a specification for the endpoints: a contract. For example, you define that for a particular GET request, with specified parameters, you will receive a response with status code 200 and a specific JSON body. Of course, you define exactly what JSON will be returned. You do this for each possible request and response combination, and you also define how errors will be reported, etc.
Now, from a specification like this, stubs and unit tests are generated for both the client and the server. Thanks to this, both parties can easily verify that the contract is not broken during development on either side. I plan to take a closer look at this approach in future posts.
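To make the idea concrete, here is a minimal, hand-rolled sketch of what a contract captures and how both sides check against it. This is not Spring Cloud Contract's actual DSL (which generates stubs and tests from contract files); all class names here are my own illustration.

```java
import java.util.Objects;

public class ContractSketch {

    // A contract fixes the request shape and the expected response.
    record Contract(String method, String path, int expectedStatus, String expectedBody) {}

    // A response as the server (or a generated stub) would produce it.
    record Response(int status, String body) {}

    // The same check serves both parties: the server's test verifies its
    // real response against the contract; the client tests against a stub
    // that serves exactly the contracted response.
    static boolean satisfies(Contract c, Response r) {
        return r.status() == c.expectedStatus()
            && Objects.equals(r.body(), c.expectedBody());
    }

    public static void main(String[] args) {
        Contract getUser = new Contract(
            "GET", "/users/42", 200, "{\"id\":42,\"name\":\"Jan\"}");

        // A stub generated from the contract always answers like this:
        Response stubbed = new Response(200, "{\"id\":42,\"name\":\"Jan\"}");
        System.out.println(satisfies(getUser, stubbed)); // true

        // If the server starts returning a different body, its own
        // contract-derived test breaks before any client notices.
        Response drifted = new Response(200, "{\"id\":42}");
        System.out.println(satisfies(getUser, drifted)); // false
    }
}
```

The point is that the contract is the single source of truth, and both the stub and the server-side test are derived from it mechanically.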
This one was a really interesting talk about something that everybody ignores: Java's package-private scope and why we should use it. It was presented by Jakub Nabrdalik and focused on this forgotten visibility modifier. As Jakub pointed out, everybody now uses either the public or the private modifier. If something must be hidden, like fields and internal methods, we use private. But if we want something to be accessible from outside the class, we use public, and that is wrong! Why? Well, it's not wrong if this public thing is our API to the outside world, but if it is part of our library's internal communication, it should not be exposed so broadly.
The same goes for the public modifier on methods, classes and interfaces, on everything. If we make everything public, someone can use our library in a way that was not intended. Well, if he does that, it's his problem, right? ;) But there is another aspect. Imagine that you have created an application and all of its classes are public. Now a new developer has to modify a bit of your app. It's completely new to him; looking at it, he won't know where to start, where the entry point is, what the app offers, etc. If only a few classes are public, he will know where to look first, and it will be easier for him to get into the app.
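In code, the difference is just the absence of a modifier. A quick sketch (class names are made up for illustration): one public class forms the package's API, and the helpers, written with no modifier at all, stay invisible outside the package.

```java
// The only public class -- this is the package's entry point and API.
public class ReportFacade {
    public String buildReport(int[] sales) {
        int total = new SalesSummer().sum(sales);  // internal helper
        return ReportFormatter.format(total);      // internal helper
    }

    public static void main(String[] args) {
        System.out.println(new ReportFacade().buildReport(new int[]{10, 20, 5}));
    }
}

// No modifier = package-private: invisible outside this package,
// so external callers cannot couple themselves to these internals.
class SalesSummer {
    int sum(int[] values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }
}

class ReportFormatter {
    static String format(int total) {
        return "Total sales: " + total;
    }
}
```

A newcomer scanning this package sees exactly one public class, so there is no doubt where to start reading.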
This presentation really got me thinking about the package-level design of the software I write.
Another interesting topic, about really hot stuff nowadays. Maybe not as hot as a year ago, but it is still something that everybody likes to talk and hear about. It was presented by Jakub Kubryński and focused on the pitfalls of microservice architecture. He pointed out that this is not the holy grail of software development, and I agree: it looks simple and wonderful on the surface, but when you dig into it a bit, it gets much more complicated.
The thing I remember most from this presentation is that when you migrate from a monolith to microservices, you have to keep in mind that your performance will go down; it is unavoidable. Method calls inside one JVM will always be faster than any kind of communication between different applications. Whatever type of communication you choose, it will be slower, because instead of invoke -> processing -> return you will always have something like serialization -> communication -> deserialization -> processing -> serialization -> communication -> deserialization. More steps will always mean more time, especially steps like serialization and deserialization, which are heavy.
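The extra steps can be made visible with a small sketch. Here the "network" is just a byte array and Java's built-in serialization stands in for whatever wire format a real service would use; the names are my own illustration. Both paths produce the same value, but the remote one pays for a serialize/deserialize round trip on every single call.

```java
import java.io.*;

public class CallOverhead {

    record Order(long id, String item) implements Serializable {}

    // In-process: invoke -> processing -> return.
    static Order localCall() {
        return new Order(1L, "book");
    }

    // Simulated remote call: serialize -> "communication" -> deserialize.
    static Order remoteCall() {
        try {
            ByteArrayOutputStream wire = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(wire)) {
                out.writeObject(localCall());              // serialize
            }
            byte[] bytes = wire.toByteArray();             // "network"
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return (Order) in.readObject();            // deserialize
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Same result either way -- the remote path just does more work.
        System.out.println(localCall().equals(remoteCall())); // true
    }
}
```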
Another important thing is to design your app with failure in mind. You have to assume that everything that can fail will fail. Every REST call will fail eventually, and you have to resolve it somehow. You can't always just throw an exception and return HTTP 500 Internal Server Error; sometimes you have to try to complete the request successfully anyway. You can retry a few times, perhaps with a backoff delay that grows after each attempt. You can also fall back to some other mechanism, an alternative implementation, or maybe store the request for later processing, etc. It all depends on the type of operation. Some can wait a bit, some can wait even a day or more, and for some you can even return an older, cached value if that seems acceptable. There are many possibilities, and you have to think about them before they occur in the production environment.
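The retry-with-growing-backoff-then-fallback pattern can be sketched in a few lines. This is a minimal illustration with made-up names, not a production circuit breaker (a real system would likely use something like Resilience4j or Hystrix instead):

```java
import java.util.function.Supplier;

public class RetryWithFallback {

    static <T> T callWithRetry(Supplier<T> call, T fallback,
                               int maxAttempts, long initialDelayMs) {
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();                 // success: return result
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) break; // out of attempts
                try {
                    Thread.sleep(delay);           // wait a bit longer...
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
                delay *= 2;                        // ...after each failure
            }
        }
        return fallback;  // e.g. a cached value, or a "queued for later" marker
    }

    public static void main(String[] args) {
        // A call that always fails: after 3 attempts we get the fallback.
        String result = callWithRetry(
            () -> { throw new RuntimeException("service down"); },
            "cached-price", 3, 10);
        System.out.println(result); // cached-price
    }
}
```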
This one was great. Presented by Łukasz Szydło, it was two hours of great antipatterns that you can find in almost any codebase, with tips on how to fix them. He pointed out a few things that have a big impact on application architecture. The first was using a single model for every context. For example, imagine that you have a User class, and this class has all kinds of methods like login(), logout(), addToCart(), getRoles(), changePassword(), changeProfilePicture(), postComment(), addArticle(), etc. Having all this functionality in one bag really complicates things; it is hard to maintain and test. You should split this class into a few: for example, Subject for the authentication and authorization context, Author for posting, Account for profile editing, etc. Of course, all these classes can be mapped to the same database entity, to the same table row.
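The split can be sketched like this: one shared record of persistent state, wrapped by small per-context models that each expose only what their context needs. These class bodies are my own illustration, not code from the talk:

```java
public class ContextModels {

    // Shared persistent state (maps to one table row).
    record UserRecord(long id, String email, String passwordHash) {}

    // Authentication context: only what logging in needs.
    static class Subject {
        private final UserRecord data;
        Subject(UserRecord data) { this.data = data; }
        boolean authenticate(String passwordHash) {
            return data.passwordHash().equals(passwordHash);
        }
    }

    // Publishing context: only what posting needs.
    static class Author {
        private final UserRecord data;
        Author(UserRecord data) { this.data = data; }
        String sign(String article) {
            return article + " -- by user #" + data.id();
        }
    }

    public static void main(String[] args) {
        UserRecord row = new UserRecord(7L, "jan@example.com", "h4sh");
        System.out.println(new Subject(row).authenticate("h4sh")); // true
        System.out.println(new Author(row).sign("My article"));
    }
}
```

Each context can now evolve and be tested on its own, even though everything still lands in the same table.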
Another problem was too-broad transactions. Often we just add the @Transactional annotation to some service method that does all kinds of stuff, and we are happy. As an example, he presented a user placing an order in a shop. All kinds of things happened in that service method: fetching user data, creating the order, some logging, and sending an email notification to the user. The problem with this approach is that if any of these operations fails, the whole transaction is rolled back. But if you think about it, placing an order is a really simple task, and failing to send a notification should not cancel the order. The order must be placed; the notification can be sent later, or can even fail, and nothing serious will happen. It would be worse if the user could not place the order at all. The transaction should cover only placing the order and publishing some kind of event, so that other system components can take notice and act on their own.
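A stripped-down sketch of that boundary, with plain collections standing in for the database and the event bus rather than Spring's actual @Transactional machinery (all names are made up): the "transaction" covers only saving the order and publishing the event, and a notification failure no longer cancels anything.

```java
import java.util.ArrayList;
import java.util.List;

public class OrderService {

    final List<String> orders = new ArrayList<>();   // stand-in "database"
    final List<String> events = new ArrayList<>();   // stand-in "event bus"

    void placeOrder(String orderId) {
        // -- transactional part: these two succeed or fail together --
        orders.add(orderId);
        events.add("ORDER_PLACED:" + orderId);
        // -- end of transaction --

        // Listeners react on their own; a failure here is logged and
        // retried later, but the order stays placed.
        try {
            sendEmail(orderId);
        } catch (RuntimeException e) {
            System.out.println("email failed, will retry: " + e.getMessage());
        }
    }

    void sendEmail(String orderId) {
        throw new RuntimeException("SMTP down");     // simulate a failure
    }

    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.placeOrder("A-1");
        System.out.println("order placed: " + service.orders.contains("A-1")); // true
    }
}
```

In a real Spring application the same shape falls out of putting @Transactional only on the order-saving method and handling the ORDER_PLACED event in a separate listener.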
There were a few more examples, but for me the most interesting concerned layered architecture. Layers are not bad in themselves, but an application built entirely of layers that depend on each other can be. We often define a few layers, for example RestControllers -> Service -> Repository, and stick to them throughout the whole application, with each layer requiring the layers below it to function. In the worst case, we end up with an architecture in which our RestControllers are heavily coupled with our database layer, and that is really bad: we won't be able to change the database in any way with an approach like that.

Sometimes we should have a different set of layers for different functionality. For example, for creating an object we may have 4 layers, as this is usually a somewhat complicated operation, but for querying we can have only 2, as it is much simpler. This follows CQRS, Command Query Responsibility Segregation, which advises separating querying and modifying operations into completely different parts of an application, because these operations are completely different.

As an alternative to layered architecture, Łukasz also talked a bit about Hexagonal architecture, also known as Ports and Adapters. Here we do not have layers; instead, we have a core domain with all the business logic and interfaces that specify ports to the outside world, like REST endpoints or the database. Other components connect to these ports and provide data to, or receive data from, the domain. It is a really interesting topic that I will get into in future posts.
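The Ports and Adapters idea can be sketched in a few lines. The names here are my own illustration: the domain declares a port as an interface, and the infrastructure supplies an adapter that the domain never knows the concrete type of.

```java
import java.util.HashMap;
import java.util.Map;

public class HexagonSketch {

    // Port: the domain states what it needs, not how it is provided.
    interface OrderRepository {
        void save(String id, int amount);
        Integer find(String id);
    }

    // Domain core: depends only on the port.
    static class OrderDomain {
        private final OrderRepository repository;
        OrderDomain(OrderRepository repository) { this.repository = repository; }
        void place(String id, int amount) {
            if (amount <= 0) throw new IllegalArgumentException("bad amount");
            repository.save(id, amount);
        }
    }

    // Adapter: one possible implementation, swappable without touching
    // the domain (could just as well be JPA, a file, a remote service...).
    static class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Integer> store = new HashMap<>();
        public void save(String id, int amount) { store.put(id, amount); }
        public Integer find(String id) { return store.get(id); }
    }

    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        new OrderDomain(repo).place("A-1", 3);
        System.out.println(repo.find("A-1")); // 3
    }
}
```

The dependency arrow points inward: adapters depend on the domain's interfaces, never the other way around, which is exactly what makes the database (or REST layer) replaceable.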
Discussion with trainers
This part was a bit different, as it was not a lecture or presentation but an open discussion with professional IT trainers from the Bottega IT company, who gave various presentations during the day. It's hard to describe, as it was pretty long and jumped from topic to topic, but it revolved around the general condition of IT in Poland and around work as a trainer and its challenges. It made me want to become a trainer someday. It's a long way to that, but I still think it is the direction I should follow, as it motivates me to broaden my knowledge and gain an in-depth understanding of the technology I use in my work :)
Originally published at DEVelopments.