Java Spring, or Spring is coming to DateCoin development

Nikita Anufriev
DateCoin
Published in
4 min read · Apr 15, 2019


Our users often ask through our official channels about the current stage of product development and the technical issues the team faces. Here we present a technical report intended to answer frequently asked questions in detail and keep our subscribers informed about the current state of the project.

Denim. Total upgrade

We started developing the Denim dating service in 2015, initially planning nothing more than a website of the same name. We assumed that relatively simple PHP code and a site layout would be quite enough for the whole system to operate efficiently. In just a few years, however, the project grew, and an entire infrastructure emerged: an API, analytics and statistics services, and the client side (frontend, mobile apps, marketing landing pages). Over time these factors were multiplied by a growing audience, which naturally increased the load. With all of this in mind, we decided to switch to enterprise development, and what could serve this goal better than Java? Moreover, the fourth generation of the Spring framework, which we are going to use, has been released recently.

One can argue that enterprise development in PHP is also possible. However, the mere absence of class reloading mechanisms in the language is already troubling, and weak typing makes interfaces far less useful. Beyond that, even a simple task such as delayed data processing or spawning a worker hits a wall, because PHP lacks adequate tools for building multi-threaded applications. It is also worth noting that its methods of establishing connection pools to databases and other storage have many issues.
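In Java, by contrast, delayed processing and worker pools come out of the box with `java.util.concurrent`. A minimal sketch (class and method names here are illustrative, not our service code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: a small worker pool that runs a task after a delay, using only
// the standard library. Names are illustrative.
public class DelayedWorkers {
    public static String processLater(String payload, long delayMillis) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        try {
            // Schedule the task to run on a worker thread after the delay.
            ScheduledFuture<String> future = pool.schedule(
                () -> "processed:" + payload, delayMillis, TimeUnit.MILLISECONDS);
            return future.get(); // block until the delayed task completes
        } finally {
            pool.shutdown();
        }
    }
}
```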

The development of new solutions for our service dates back to the time of our ICO. To bring neural networks into the search engine, we had to change our approach to uploading and storing user photos. To analyze user behavior, we had to record and store all activity logs (sign-ups, profile views, messages). We started redesigning the entire system architecture around microservices so that this data could be collected and processed on separate servers.

Moreover, entering new markets requires more than one data center, since database access speed varies by location. That is how data decentralization and replication became a paramount goal. To use blockchain for instant user verification, both online and in person, we have to implement a completely new way of collecting, storing, and processing geolocation data. This in turn raises several other problems, including synchronization with servers and databases, as well as subsequent background processing of the data by specialized microservices.

Finally, we decided to start with cryptocurrency and token support for payment processing and service purchases, owing to certain difficulties arising from third-party policies. We also had to postpone the adoption of neural networks for image processing in favor of tools for analyzing user behavioral patterns. To achieve this we chose Spring Integration coupled with RabbitMQ.
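As a rough illustration of this pairing (queue name and payload type are hypothetical assumptions, not our production code), a Spring AMQP consumer that feeds activity events into an analyzer looks roughly like this:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Hypothetical consumer: user activity events published to RabbitMQ are
// handled asynchronously by a dedicated microservice.
@Component
public class ActivityEventListener {

    // The queue name "user.activity" is an illustrative assumption.
    @RabbitListener(queues = "user.activity")
    public void onActivityEvent(String eventJson) {
        // Deserialize the event and feed the behavioral-pattern analyzer here.
        System.out.println("received: " + eventJson);
    }
}
```

This is wiring configuration for a running Spring context with a RabbitMQ connection; it is shown only to make the architecture concrete.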

Multilevel analytics implementation

We initially used Google BigQuery to store analytics data and promptly developed a solution for collecting it. Next, we implemented initial analytics data collection in the API and started monitoring the incoming data. More specifically, we wanted to understand what types of data we received and in what volumes, and to analyze the load in terms of data collection and processing for building metrics. Shortly after adopting this stack, however, it became obvious that the Google solution did not meet our needs: an ingestion speed of 30 events per second was too slow. We had to look for another solution and finally picked ClickHouse. As a result, we now have a system capable of processing 1,000 events per second and forming analytics metrics in about one second.
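A large part of such throughput typically comes from batching: ClickHouse ingests large batches far more efficiently than single-row inserts, so events are buffered and flushed in groups. A minimal in-memory batcher sketch, with the flush target left abstract (all names here are illustrative, not our actual pipeline):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Buffers analytics events and flushes them in batches, the way a
// ClickHouse writer typically works. The batch size is illustrative.
public class EventBatcher {
    private final int batchSize;
    private final Consumer<List<String>> flusher; // e.g. performs a batched INSERT
    private final List<String> buffer = new ArrayList<>();

    public EventBatcher(int batchSize, Consumer<List<String>> flusher) {
        this.batchSize = batchSize;
        this.flusher = flusher;
    }

    public synchronized void add(String event) {
        buffer.add(event);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    public synchronized void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer)); // hand off a copy
            buffer.clear();
        }
    }
}
```

In production such a batcher would also flush on a timer so that a slow trickle of events does not sit in the buffer indefinitely.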

Another task, somewhat similar to collecting analytics data, is sending notifications (SMS, push, email) to users. Our ICO experience clearly demonstrated that delivering such notifications to different countries is not a simple task. To tackle this, we built a service for processing and sending notifications, with the ability to integrate gateways for SMS, email, and push messages.
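Gateway integration of this kind usually boils down to one abstraction per delivery channel, with concrete providers plugged in behind it. A hedged sketch of that shape (interface and class names are our illustration, not the actual service code):

```java
import java.util.Map;

// One interface per delivery channel; concrete gateways (SMS, email, push)
// are registered behind it. All names here are illustrative.
public class NotificationDispatcher {

    public interface Gateway {
        String send(String recipient, String message);
    }

    private final Map<String, Gateway> gateways;

    public NotificationDispatcher(Map<String, Gateway> gateways) {
        this.gateways = gateways;
    }

    // Routes a message to the gateway registered for the given channel.
    public String dispatch(String channel, String recipient, String message) {
        Gateway gw = gateways.get(channel);
        if (gw == null) {
            throw new IllegalArgumentException("no gateway for channel: " + channel);
        }
        return gw.send(recipient, message);
    }
}
```

Swapping a provider for a given country then means registering a different `Gateway` implementation, without touching the dispatch logic.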

Dialog module update

We still had the integration of neural networks and blockchain unsolved, while the growing service load started to dictate the rules. The problems with the dialog module became obvious: by then it was two years old and urgently needed modernization. In particular, message delivery had to become faster, and a user behavioral pattern analyzer had to be implemented. To solve this complex problem we chose Spring WebSocket for the server side and RabbitMQ as the message broker. In the course of development we also had to move message storage to Apache Cassandra, which our technical team is currently adopting. Despite all the technical issues outlined above, we consider the recent release of the mobile and web apps a success.
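For reference, wiring Spring WebSocket to an external RabbitMQ broker is largely a matter of configuration: clients connect over STOMP-over-WebSocket and messages are relayed through RabbitMQ's STOMP plugin. A configuration sketch (the endpoint path, destination prefixes, and host are illustrative assumptions, not our deployment):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

// Configuration sketch: dialog clients connect over STOMP-over-WebSocket,
// and subscriptions are relayed to RabbitMQ's STOMP plugin.
@Configuration
@EnableWebSocketMessageBroker
public class DialogWebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Illustrative WebSocket handshake endpoint.
        registry.addEndpoint("/ws/dialogs");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("rabbitmq.internal") // hypothetical host
                .setRelayPort(61613);              // RabbitMQ STOMP default port
    }
}
```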

Right now our technical team is actively working on upgrading and optimizing the service and releasing new web apps, clearing the path for intelligent algorithms and blockchain-based user verification. For our custom blockchain solution we have chosen Spring Integration (with Spring MVC), RabbitMQ for messaging, and RocksDB for data storage. Blockchain implementation will be discussed in more detail in future posts.
