Hepsiburada Market Software Development Manifesto

Okan YÜKSEL
Published in hepsiburadatech · Mar 12, 2024

As the Hepsiburada Market technology team, we are proud of our ever-increasing number of active users and order volume. This success is a clear result of our dedicated efforts to deliver reliable, uninterrupted, and user-focused experiences.

We serve our customers through an ever-expanding set of business partners and product categories.

With this manifesto, we aim to explain how we achieve this by outlining the strategies we apply while growing our teams and developing our software products.

Software Development Culture

Image source: https://www.ualberta.ca/china-institute/research/other/shaolin-kungfu.html

We adhere to development practices aligned with ‘Clean Code’ and ‘Domain Driven Design.’ As a team, we consider the guidance provided by Robert C. Martin, the author of ‘Clean Code: A Handbook of Agile Software Craftsmanship,’ and Eric Evans, the author of ‘Domain-Driven Design: Tackling Complexity in the Heart of Software.’

We document our architecture to correctly position new or existing services within the architecture.

Pair programming and code review are integrated into our software development processes.

We ensure the reliability of our developed code by subjecting it to unit tests, integration tests, and load tests.

To quickly identify and intervene in case of issues, we implement logging (never excessive!) using appropriate methods and adhere to best practices.

Rather than focusing on documentation, we emphasize naming services, class names, functions, and variables correctly to ensure that the code can be read without the need for extensive documentation. When necessary, we provide an adequate (never excessive!) number of comments and/or documents to explain the code.

To achieve short-term project delivery, we may incur technical or business debt. However, we promptly plan for technical debts (opening a backlog task and including it in the current or next sprint items) and address them at the right times.

We standardize our branch management strategy across all projects. Before a feature branch is merged into the master branch, we ensure that its developments are secure, performant, accurate, and complete, and that its tests pass; after QA Engineer review, it is promptly deployed to production. Feature branches should be deployable to test environments without being affected by other developments.

New team members joining us have a first peer in the team, preferably someone in a similar position, to facilitate a smoother onboarding process. If they prefer, we include them in pair programming sessions as much as they desire.

We believe in strong communication within the team, enabling quick organization.

After deploying developments to production, we monitor our systems for any negative impact using monitoring tools.

We prioritize completed or nearly completed tasks, especially those in the QA test, sign-off, or dependent stages, so that we produce the maximum output by the end of the sprint.

The product team clearly defines the requirements. After the technical requirements are determined, the development team comes together for discussion. At the end of this meeting, we answer the following questions as a team, defining the target the story aims to achieve, the necessary technical developments, and the metrics:

  • What are we doing?
  • Why are we doing it?
  • How will we do it?
  • How do we collect metrics?
  • How do we monitor it?
  • Do we need toggles for the development?

We gather for tech storming, proposing points of improvement in our systems or learning about new technologies we can incorporate into our tech stack. We identify systems where we can apply these suggestions and plan when the work will be done.

At the end of the first week of the sprint, we come together for a sprint review and try to find solutions for stories with risks.

Every member of the Agile team participates in story estimation, in line with the Agile Manifesto.

We plan retrospective meetings regularly, and we also schedule one upon the request of any team member.

Story Leadership

We define ourselves as product engineers, actively participating in product development and design stages, aiming for the highest level of success for the product.

During our planning meetings, we select a ‘Story Leader’ for each story. The Story Leader is typically one of the developers working on the story.

The ‘Story Leader’ is responsible for the complete and timely delivery of the story, in line with the commitment made by the team. The ‘Story Leader’ organizes developers working on different parts of the story and takes preventive measures by anticipating blocking situations.

We strive to deliver tasks in the smallest possible units. By leaving the part of development that is lagging or dependent on other teams as technical debt, we quickly provide solutions to business needs. Once the blocking situation is resolved, we prioritize and eliminate technical debts.

Technical Guide For Developers

Image source: https://wallpapercosmos.com/shaolin-kung-fu-wallpapers

Resources we find beneficial for enhancing domain-driven design competency and recommend include the book ‘Domain-Driven Design: Tackling Complexity in the Heart of Software’ by Eric Evans and the ddd-crew GitHub account.

Domain-Driven Design Practices That We Often Apply

Bounded contexts are smaller logical boundaries within a domain that are internally consistent and as independent as possible. When developing, we respect these boundaries: a context should contain all the concepts its business capability requires, but none that are unrelated to it, so that nothing leaks between contexts.

We avoid complexity by keeping business logic on aggregates, which reduces code repetition and requires minimal dependencies when developing tests. Writing unit tests becomes easier.

Value objects and aggregates should have a load method, and load methods should contain domain guards. This way, each object is created through a single method and always passes the checks of the domain guards.

Domain guards exist only within the domain.

Value objects should be immutable. They should not have setter methods. If an update is needed, we should recreate them using the load method.
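
To make the load-method convention concrete, here is a minimal Kotlin sketch; the Price value object and its guard rules are hypothetical examples, not our production code:

```kotlin
import java.math.BigDecimal

// Hypothetical value object: immutable, no setters, created only through load().
class Price private constructor(val amount: BigDecimal, val currency: String) {

    companion object {
        // The single creation point; domain guards run here before the object exists.
        fun load(amount: BigDecimal, currency: String): Price {
            require(amount >= BigDecimal.ZERO) { "Price amount cannot be negative" }
            require(currency.length == 3) { "Currency must be a three-letter code" }
            return Price(amount, currency)
        }
    }

    // "Updating" a value object means recreating it through load().
    fun withAmount(newAmount: BigDecimal): Price = load(newAmount, currency)
}
```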

For calculations that involve logic across more than one property of our structures (aggregates, entities, value objects), we develop getter methods or single-line functions, so that these operations are not performed in domain services and code repetition is eliminated.
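
A similarly hedged sketch of this idea, reusing the hypothetical Price value object from the previous example: the multi-property calculation lives on the aggregate itself rather than in a domain service.

```kotlin
import java.math.BigDecimal

// Hypothetical aggregate; all names are illustrative.
class OrderLine private constructor(
    val productId: String,
    val unitPrice: Price,
    val quantity: Int
) {
    companion object {
        fun load(productId: String, unitPrice: Price, quantity: Int): OrderLine {
            require(productId.isNotBlank()) { "Product id cannot be blank" }
            require(quantity > 0) { "Quantity must be positive" }
            return OrderLine(productId, unitPrice, quantity)
        }
    }

    // Logic that combines more than one property stays on the aggregate,
    // so domain services never repeat this calculation.
    val lineTotal: BigDecimal
        get() = unitPrice.amount.multiply(BigDecimal(quantity))
}
```

Because the calculation has no external dependencies, a unit test only needs to load an OrderLine and assert on lineTotal, which is what makes testing aggregates so cheap.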

Our Clean Code Practices

We detailed this topic in our article ‘Motivation For Clean Code’, which we strongly recommend reading.

In addition to what we mentioned there:

We use ‘FluentValidation’. Complex if-else blocks or attributes are not very compatible with SOLID principles, so we collect our validations in a single method.
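
FluentValidation itself is a .NET library, so the Kotlin sketch below only mirrors the idea of collecting all the rules for a request in one place instead of scattering if-else blocks; the CreateOrderRequest model and its rules are hypothetical:

```kotlin
// Hypothetical request model and a single class that owns all of its rules.
data class CreateOrderRequest(val productId: String, val quantity: Int)

class CreateOrderRequestValidator {
    // Each rule is a predicate plus the message returned when it fails;
    // adding a rule never touches the calling code.
    private val rules: List<Pair<(CreateOrderRequest) -> Boolean, String>> = listOf(
        Pair({ it.productId.isNotBlank() }, "productId must not be blank"),
        Pair({ it.quantity in 1..100 }, "quantity must be between 1 and 100")
    )

    // Returns the messages of every failed rule; an empty list means the request is valid.
    fun validate(request: CreateOrderRequest): List<String> =
        rules.filterNot { (rule, _) -> rule(request) }.map { (_, message) -> message }
}
```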

We avoid overengineering and prefer simple solutions over complex ones. If we cannot get rid of complexity and have to come up with work-around solutions, we discuss and brainstorm together as a team.

When logging, we follow best practices such as determining log levels and using structured logging.
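
As an illustration of what we mean by log levels and structured fields, here is a small sketch assuming SLF4J as the logging facade; the logger name and fields are illustrative:

```kotlin
import org.slf4j.LoggerFactory

private val logger = LoggerFactory.getLogger("checkout-service")

fun logOrderCreated(orderId: String, durationMs: Long) {
    // Parameterized message: the fields stay separate from the template,
    // so the log pipeline can index orderId and durationMs instead of parsing free text.
    logger.info("Order created orderId={} durationMs={}", orderId, durationMs)
}

fun logPaymentFailure(orderId: String, cause: Exception) {
    // Failures go to the error level, with the exception attached for the stack trace.
    logger.error("Payment failed orderId={}", orderId, cause)
}
```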

General Conventions

RestClients, HttpClients etc. are encapsulated in all our systems. Our domain services are unaware of the technology used for network communication. This precaution reduces technology dependencies.
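
A minimal Kotlin sketch of this encapsulation, assuming the JDK's built-in java.net.http client; the StockClient port and its adapter are illustrative names, not our actual interfaces:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// The domain service depends only on this port and never sees HTTP.
interface StockClient {
    fun availableQuantity(productId: String): Int
}

// The adapter hides the transport; swapping the HTTP library touches only this class.
class HttpStockClient(private val baseUrl: String) : StockClient {
    private val client: HttpClient = HttpClient.newHttpClient()

    override fun availableQuantity(productId: String): Int {
        val request = HttpRequest.newBuilder(URI.create("$baseUrl/stocks/$productId"))
            .GET()
            .build()
        val response = client.send(request, HttpResponse.BodyHandlers.ofString())
        return response.body().trim().toInt() // illustrative: real responses would be JSON
    }
}
```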

Our projects are ready to run, as far as possible, as soon as the repository is cloned. We create config files for the IDEs commonly used by the team and README.md files for the requirements.

We apply the ‘CQRS’ pattern, choosing to separate our ‘Read’ operations from ‘Write’ operations in interfaces.
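
What we mean by separating the interfaces, sketched in Kotlin; the repository and model names are hypothetical:

```kotlin
// Hypothetical model shared by both sides of the interface split.
data class Product(val id: String, val name: String, val category: String)

// Write side: commands that change state.
interface ProductWriteRepository {
    fun save(product: Product)
    fun delete(productId: String)
}

// Read side: queries that only return data and never mutate it.
interface ProductReadRepository {
    fun findById(productId: String): Product?
    fun findByCategory(category: String): List<Product>
}
```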

For our needs, we develop feature toggles, compare toggles, and internal toggles.

  • If we need to activate/deactivate a developed feature based on business needs, we manage this with feature toggles, gaining this capability without deployment (see the sketch after this list).
  • If we need to replace an existing development or integration with a different alternative that produces the same data, we compare application outputs, log differences, observe them for a while, and activate our new developments later.
  • If a developed feature needs to be activated for Hepsiburada personnel only, we can achieve this by developing an internal toggle.
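
The sketch below covers only the feature-toggle case; the toggle names are hypothetical, and in practice the flag values come from configuration rather than an in-memory map, which is what lets us flip them without a deployment:

```kotlin
// Hypothetical toggle provider; real values would be read from configuration.
class FeatureToggles(private val flags: Map<String, Boolean>) {
    fun isEnabled(name: String): Boolean = flags[name] ?: false
}

class ShippingFeeCalculator(private val toggles: FeatureToggles) {
    fun fee(cartTotal: Double): Double =
        if (toggles.isEnabled("free-shipping-campaign") && cartTotal >= 500.0) {
            0.0   // new behaviour, active only while the toggle is on
        } else {
            29.9  // existing behaviour remains the safe default
        }
}
```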

Our Hunting Spots In Code Review Sessions

(I am grateful to Mert TALAYOĞLU for his support in this section.)

During our development process, in order to ensure the quality and maintainability of our codebase, we, as a team, seek answers to the following questions during code reviews:

  • Do Models, Entities, Value Objects (immutable), and DTOs encapsulate their own business logic in accordance with DDD?
  • Has attention been given to data structure alignment/memory management?
  • Is it in compliance with RESTful principles? (Defensive validation, correct usage of method types, and response status codes)
  • Was the development conducted in accordance with clean code practices? (Naming conventions, readable code, low complexity, DRY — Don’t Repeat Yourself etc.)
  • To what extent have issues such as response time increase, performance loss, and information security been evaluated? Can optimizations like caching and projection be considered for resource-intensive operations?
  • Does the development fully meet the business requirements? Are there any technical debts left behind?
  • Are there any hard-coded variable definitions? Can these values be read from sidecars or configuration files? Can they be defined as constant variables?
  • Does it adhere to OOP principles (encapsulation, dependency injection, etc.) and SOLID principles?
  • Has the distance between where variables are defined and where they are used been kept short?
  • Are the parameter counts of methods low? Has a DTO been used for more than three parameters?
  • Has building logic on nulls or exceptions been avoided? Has exception handling been done correctly? Can it be monitored?
  • Do flag parameters complicate the logic within methods?
  • Have design patterns been used in a solution-oriented and requirement-driven manner?
  • Is business logic located in the domain layer?
  • Has the use of reflection been avoided?
  • Has attention been given to Command & Query Separation? Is CQRS applicable?
  • Is there sufficient monitoring? Have necessary logs been written with correct practices?
  • Is there any overengineering or forced code?
  • Have project dependencies been structured to meet requirements, without adding excessive dependencies/libraries?
  • Have Git practices been followed? (Small commits, commit messages, clean history, ignored files etc.)

Security

The security of our systems is of utmost importance to us. It is our responsibility towards our users.

First of all, we constantly educate ourselves by following guidelines such as “OWASP API Security”.

We ensure the security of all our internal and external services before taking them to the production stage.

Personal data must be processed and stored in a lawful, fair, and transparent manner.

We conduct penetration tests on all systems at regular intervals.

To ensure our security, we host internal and public services in different layers.

Our microservices communicate only through APIs and message queues. In accordance with microservice architecture, we prevent them from sharing a common data source.

Entity models are never directly included in API responses. Communication is established through contract models.
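
A small Kotlin sketch of the contract-model rule; ProductEntity and ProductResponse are hypothetical, and the point is only that the persistence model never crosses the API boundary:

```kotlin
// Persistence model: internal fields stay inside the service boundary.
data class ProductEntity(
    val id: String,
    val name: String,
    val warehouseCode: String,  // internal detail, must not leak
    val purchasePrice: Double   // internal detail, must not leak
)

// Contract model: only what the API consumer is supposed to see.
data class ProductResponse(val id: String, val name: String)

// Mapping happens at the boundary; the entity itself is never serialized.
fun ProductEntity.toResponse(): ProductResponse = ProductResponse(id = id, name = name)
```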

Some wishes at the end of the story

I hope this manifesto helps the software teams who read it shape their own software development processes and build successful software products.

Thanks for your time.
