Low-cost service bridge

Some time ago a customer of mine requested the development of a low-cost bridge application. The idea was to ‘attach’ a local shopping website to a world-known find & buy website and process international orders. As far as I know, the expected profitability was not clear, which is why it was meant to be a pilot (economy) project. Still, the software had to be reliable, as orders had to be processed properly. In the end, we agreed to develop an application with the following key features. It must be:

  • Stable (able to run for many days without problems).
  • Cheap (a minimal level of functionality).
  • Easy (the simplest design solutions).

It was a rather interesting project, and there were some nice solutions worth sharing.


Overview

In brief, this is a C# console application. It handles requests concurrently using asynchronous methods. Operations (such as ‘search’ and ‘order’) are distinguished by the first segment of the URL path (i.e. ‘/search’, ‘/order’). Data is exchanged as JSON. There is no permanent storage at all. A full shopping cycle (from checkout to delivery) takes no more than 30 minutes.
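The dispatch idea can be sketched as follows. All names, delays, and JSON payloads here are illustrative assumptions, and the HTTP layer the real application uses is omitted for brevity:

```csharp
using System;
using System.Threading.Tasks;

static class Bridge
{
    // Extract the operation name from the first URL path segment,
    // e.g. "/search?q=tv" -> "search", "/order/123" -> "order".
    public static string Operation(string path)
    {
        var trimmed = path.TrimStart('/');
        int end = trimmed.IndexOfAny(new[] { '/', '?' });
        return end >= 0 ? trimmed.Substring(0, end) : trimmed;
    }

    // Dispatch a request asynchronously; every operation answers in JSON.
    public static async Task<string> HandleAsync(string path)
    {
        switch (Operation(path))
        {
            case "search":
                await Task.Delay(10); // stands in for a call to the backend
                return "{\"result\":\"search ok\"}";
            case "order":
                await Task.Delay(10);
                return "{\"result\":\"order accepted\"}";
            default:
                return "{\"error\":\"unknown operation\"}";
        }
    }
}

class Demo
{
    static async Task Main()
    {
        // Requests are handled concurrently, not one after another.
        var tasks = new[]
        {
            Bridge.HandleAsync("/search?q=tv"),
            Bridge.HandleAsync("/order/123")
        };
        foreach (var reply in await Task.WhenAll(tasks))
            Console.WriteLine(reply);
    }
}
```

Because each request runs as its own task, a slow backend call for one order does not block search requests arriving at the same time.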

Load balancing

The project does not use any load balancing techniques. Nonetheless, we faced an enormous number of incoming search requests, far more than the backend could handle. To mitigate the issue, two instances of the application are run. The first is located on a fast server with a good connection to the backend and processes search requests only. The second (the so-called ‘main’ instance) does everything else.
This split also allows us to update the software without terminating orders that are currently being processed. More about this in the ‘Maintenance’ section below.


To resolve the overload issue completely, the application analyses every search request and ignores all products with low profitability. In other words, only the few percent of requests with the maximum profit are processed.
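The filtering step might look like this minimal sketch, assuming a hypothetical per-request estimated margin and an arbitrary threshold value (both are assumptions, not figures from the real system):

```csharp
using System;

static class ProfitFilter
{
    // Hypothetical threshold: requests whose estimated margin falls
    // below it are ignored, so only the most profitable few percent
    // of searches ever reach the backend.
    public const decimal MinMargin = 15.0m;

    public static bool ShouldProcess(decimal estimatedMargin) =>
        estimatedMargin >= MinMargin;
}
```

Any request below the threshold is simply dropped, which keeps the search instance within the backend's capacity without any queueing machinery.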


Alas, some stages of order processing have to share data (information about orders, in particular). Instead of increasing the complexity of the software and its setup by introducing a database, we used an in-memory cache. Given the limited number of orders and the maintenance procedure (below), this works really well.
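A minimal sketch of such a cache, assuming orders are keyed by a string id and that entries older than the 30-minute shopping cycle can be discarded (the key and value types are illustrative assumptions):

```csharp
using System;
using System.Collections.Concurrent;

class OrderCache
{
    // Thread-safe map of order id -> (time stored, serialized order data).
    private readonly ConcurrentDictionary<string, (DateTime Stored, string Json)> _items = new();
    private readonly TimeSpan _ttl;

    public OrderCache(TimeSpan ttl) => _ttl = ttl;

    public void Put(string orderId, string json) =>
        _items[orderId] = (DateTime.UtcNow, json);

    public bool TryGet(string orderId, out string json)
    {
        json = null;
        if (_items.TryGetValue(orderId, out var entry))
        {
            if (DateTime.UtcNow - entry.Stored < _ttl)
            {
                json = entry.Json;
                return true;
            }
            _items.TryRemove(orderId, out _); // entry outlived the shopping cycle
        }
        return false;
    }
}
```

Since the full shopping cycle never exceeds 30 minutes, expired entries can be dropped safely, and the cache never grows beyond the handful of orders in flight.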


Maintenance

From time to time the software must be updated or reconfigured. To do that properly, the system administrator follows this routine:
1. Stop the search instance so that new requests are ignored.
2. Check the log for any orders in progress. If there are any, wait for completion (no longer than 30 minutes).
3. Stop the ‘main’ instance.
4. Update the files.
5. Start the ‘main’ instance.
6. Start the search instance.
As a result, the system never terminates running orders.
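The check in step 2 could be automated along these lines. The log format here is purely a hypothetical assumption (orders logged as ‘order <id> started’ and ‘order <id> done’); the real application's log layout may differ:

```csharp
using System;
using System.Linq;

static class Maintenance
{
    // Returns true if the log mentions an order that was started
    // but never marked done, i.e. an order still in progress.
    public static bool HasOrdersInProgress(string[] logLines)
    {
        var started = logLines
            .Where(l => l.StartsWith("order ") && l.EndsWith(" started"))
            .Select(l => l.Split(' ')[1]);
        var done = logLines
            .Where(l => l.StartsWith("order ") && l.EndsWith(" done"))
            .Select(l => l.Split(' ')[1])
            .ToHashSet();
        return started.Any(id => !done.Contains(id));
    }
}
```

The administrator (or a wrapper script) would poll this check once a minute and proceed to step 3 as soon as it returns false, or after the 30-minute cap expires.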

So far, the project has proven its practicality, and it will continue to be used and improved.