Streamlining Logistics With Automated Dispatching Transporter Ranking (Part II)

Arga Ghulam Ahmad
9 min read · May 25, 2023


Integrating Elasticsearch into Elixir Services

Module dependencies between different components of the matching engine system.

To work efficiently with Elasticsearch, this application is organized into a small set of purpose-built modules. The core ElasticSearch module establishes the connection to Elasticsearch and carries out basic operations such as searching, querying, and posting data. The DynamicQuery module generates dynamic queries from user-supplied parameters and is particularly useful alongside the ElasticSearch module for building more complex queries that execute efficiently.

For even greater flexibility and control over the queries, the ElasticSearchQueryBuilder module has been developed. This module enables the creation of custom queries that utilize user-supplied parameters. It works seamlessly with the ElasticSearch module to execute intricate queries that are tailored to meet specific requirements.

To keep data synchronized between the database and Elasticsearch, the Double Write module has been created. It tracks changes made in the database and updates the corresponding documents in Elasticsearch, so search results stay up to date and accurate.

Finally, the runtime.exs file sets up the configuration for connecting to Elasticsearch. By customizing the parameters in this file, developers can fine-tune the connection settings for performance and security. Together, these modules and configurations let developers work effectively with Elasticsearch and build robust, reliable applications.

ElasticSearch Module

This module handles the connection to Elasticsearch and the basic operations of search, query, and post.

defmodule ElasticSearch do
  @moduledoc """
  This module provides a wrapper around the ElasticSearch API to handle basic operations.
  It assumes HTTPoison and Jason are available as dependencies.
  """

  @es_url "http://localhost:9200"
  @index "my_index"
  @headers [{"Content-Type", "application/json"}]

  # Runs a full query body against the index's _search endpoint.
  def search(query) do
    url = "#{@es_url}/#{@index}/_search"
    HTTPoison.post(url, Jason.encode!(query), @headers)
  end

  # Runs a lightweight query-string (URI) search; Elasticsearch has no
  # generic _query search endpoint, so the q= parameter is used instead.
  def query(query_string) do
    url = "#{@es_url}/#{@index}/_search?q=#{URI.encode_www_form(query_string)}"
    HTTPoison.get(url)
  end

  # Indexes a document under the given id.
  def post(doc, id) do
    url = "#{@es_url}/#{@index}/_doc/#{id}"
    HTTPoison.post(url, Jason.encode!(doc), @headers)
  end
end
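
For example, a match-all search through this wrapper (assuming a running local cluster) could look like:

# Hypothetical usage: run a match-all search and pull the hits out of the response.
{:ok, %HTTPoison.Response{body: body}} =
  ElasticSearch.search(%{query: %{match_all: %{}}})

hits = body |> Jason.decode!() |> get_in(["hits", "hits"])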

Dynamic Query Module

This module generates dynamic queries from parameters passed to it and can be used in conjunction with the ElasticSearch module to build and execute more complex queries.

defmodule DynamicQuery do
  @moduledoc """
  This module provides functions to generate dynamic queries based on parameters.
  """

  # An integer value becomes a range filter pinned to that exact quantity.
  def generate_filter(field, value) when is_integer(value) do
    %{range: %{field => %{gte: value, lte: value}}}
  end

  # A string value becomes an exact-match term filter (added so that lists
  # of strings also work).
  def generate_filter(field, value) when is_binary(value) do
    %{term: %{field => value}}
  end

  # A list becomes a bool/should that matches any of its elements.
  def generate_filter(field, value) when is_list(value) do
    %{bool: %{should: Enum.map(value, &generate_filter(field, &1))}}
  end

  # Filters are passed in as a list of {field, value} tuples.
  def generate_query(filters) do
    %{query: %{bool: %{filter: Enum.map(filters, fn {field, value} -> generate_filter(field, value) end)}}}
  end
end
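
As a quick illustration with hypothetical field names, a list of {field, value} tuples turns into a bool filter:

# Hypothetical usage: exact capacity plus a multi-value truck-type filter.
DynamicQuery.generate_query([{"capacity", 10}, {"truck_type", ["CDD", "CDE"]}])
# => %{query: %{bool: %{filter: [
#      %{range: %{"capacity" => %{gte: 10, lte: 10}}},
#      %{bool: %{should: [%{term: %{"truck_type" => "CDD"}},
#                         %{term: %{"truck_type" => "CDE"}}]}}]}}}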

Elastic Search Query Builder Module

The ElasticSearchQueryBuilder module builds custom queries from user-supplied parameters. Used in tandem with the ElasticSearch module, it supports the creation and execution of intricate queries.

defmodule ElasticSearchQueryBuilder do
  def build_query(params) do
    %{
      query: %{
        bool: %{
          filter: build_filter(params),
          should: build_should(params)
        }
      },
      script_fields: build_script_fields(params),
      _source: build_source(params)
    }
  end

  defp build_filter(params) do
    case Map.get(params, :requirements) do
      nil -> []
      requirements -> build_requirements_filters(requirements)
    end
  end

  defp build_requirements_filters(requirements) do
    Enum.map(requirements, fn {requirement_id, quantity} ->
      %{
        range: %{
          "requirements.#{requirement_id}" => %{
            gte: quantity,
            lte: quantity
          }
        }
      }
    end)
  end

  defp build_should(params) do
    case Map.get(params, :pricings) do
      nil -> []
      pricings -> build_pricings_should(pricings)
    end
  end

  defp build_pricings_should(pricings) do
    pricings_map = Map.new(pricings)

    [
      %{
        bool: %{
          boost: 0.8,
          should: build_pricings_should_clause_1(pricings_map)
        }
      },
      %{
        bool: %{
          boost: 0.2,
          should: build_pricings_should_clause_2(pricings_map)
        }
      }
    ]
  end

  # The bodies of the pricing clauses, the script fields, and the _source
  # selection are omitted in this article; minimal placeholders keep the
  # module compilable.
  defp build_pricings_should_clause_1(_pricings_map), do: []
  defp build_pricings_should_clause_2(_pricings_map), do: []
  defp build_script_fields(_params), do: %{}
  defp build_source(_params), do: true
end
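
A hypothetical invocation, with :requirements mapping requirement ids to quantities and :pricings mapping lanes to prices:

# Hypothetical usage; with the placeholder clause builders above, the
# should branches stay empty until they are filled in.
ElasticSearchQueryBuilder.build_query(%{
  requirements: %{"req_1" => 2},
  pricings: %{"lane_jkt_bdg" => 1_500_000}
})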

Double Write Module

The Double Write module ensures that data stays synchronized between the database and Elasticsearch. Each write goes to the database and to the corresponding Elasticsearch document, keeping search results consistent with the source of truth.

defmodule DoubleWrite do
  # Assumes MyApp.Repo is the application's Ecto repo and reuses the
  # ElasticSearch wrapper module defined earlier in this article.
  alias MyApp.Repo

  @doc """
  Writes a single record to both the database and Elastic Search.
  """
  def write(record) do
    # Write to the database
    Repo.insert(record)

    # Write to Elastic Search through the wrapper module defined above
    ElasticSearch.post(
      %{title: record.title, description: record.description},
      record.id
    )
  end

  @doc """
  Writes multiple records to both the database and Elastic Search.
  """
  def write_batch(records) do
    # Write to the database (MyApp.Record stands in for the relevant schema;
    # insert_all expects plain maps as entries)
    Repo.insert_all(MyApp.Record, records)

    # Write to Elastic Search one document at a time; a production
    # implementation would use the _bulk endpoint instead.
    Enum.each(records, fn record ->
      ElasticSearch.post(
        %{title: record.title, description: record.description},
        record.id
      )
    end)
  end
end
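
Because a failure between the two writes can leave the stores out of sync, one option is to wrap them in a transaction. The sketch below is hypothetical, assuming an Ecto changeset and the modules above; it rolls back the database insert if the Elasticsearch write fails:

defmodule DoubleWrite.Transactional do
  # Hypothetical sketch: roll back the database insert when the
  # Elasticsearch write fails, so the two stores stay consistent.
  alias Ecto.Multi

  def write(changeset) do
    Multi.new()
    |> Multi.insert(:record, changeset)
    |> Multi.run(:search_doc, fn _repo, %{record: record} ->
      doc = %{title: record.title, description: record.description}

      case ElasticSearch.post(doc, record.id) do
        {:ok, %HTTPoison.Response{status_code: code}} when code in 200..299 ->
          {:ok, record}

        other ->
          {:error, other}
      end
    end)
    |> MyApp.Repo.transaction()
  end
end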

Modify Configuration

The runtime.exs file in Elixir can be used to set up the configuration for connecting to Elasticsearch. Here's an example of how it can be done:

# config/runtime.exs
import Config

config :my_app, MyApp.ElasticSearch,
  hosts: ["http://localhost:9200"],
  pool_size: 10,
  log: true

In this example, we define a configuration for the MyApp.ElasticSearch module, which is responsible for connecting to Elasticsearch. We specify the hosts to connect to, the size of the connection pool, and whether to enable logging.

Because config/runtime.exs is evaluated every time the application boots, nothing needs to be imported into config/config.exs; the values are simply available at runtime. We can read them with Application.get_env/2, like this:

# somewhere in our application code
config = Application.get_env(:my_app, MyApp.ElasticSearch)
# use the config to connect to Elastic Search and perform operations

With this configuration in place, we can use the MyApp.ElasticSearch module to connect to Elasticsearch and perform operations such as indexing and querying data.
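
As a sketch of how that might look in practice (hypothetical, building on the wrapper module from earlier), the hard-coded URL can be replaced with a lookup of this runtime configuration:

defmodule MyApp.ElasticSearch do
  # Hypothetical sketch: read connection settings from the runtime
  # configuration instead of hard-coding the cluster URL.
  defp config, do: Application.get_env(:my_app, __MODULE__, [])

  defp base_url do
    config()
    |> Keyword.get(:hosts, ["http://localhost:9200"])
    |> hd()
  end

  def search(index, query) do
    url = "#{base_url()}/#{index}/_search"
    HTTPoison.post(url, Jason.encode!(query), [{"Content-Type", "application/json"}])
  end
end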

Challenges

Creating a matching engine using Elasticsearch can present difficulties, but these obstacles can be addressed. Below are some of the challenges that may arise and possible solutions to overcome them.

1. Translate Product Requirements into The Working System

  • Ensure a clear understanding of the product requirements.
  • Define the relevant features, functionality, and performance goals.
  • Design the system architecture and components to meet the requirements.
  • Identify the appropriate Elasticsearch features and functions to implement the matching engine.

2. Learn about existing data and how to utilize it

  • Understand the structure and format of the existing data.
  • Determine the relevance of the existing data to the matching engine.
  • Design a data mapping strategy to convert the existing data to Elasticsearch format.
  • Clean and preprocess the existing data to improve search performance.

3. Proof of Concept using Kibana

  • Understand the basics of Kibana and how it can be used to create a proof of concept for a matching engine.
  • Use Kibana’s visualization tools to explore and analyze data.
  • Leverage Kibana’s dashboard and reporting features to create a user-friendly interface.
  • Experiment with different Kibana plugins to extend functionality.

4. Data Modeling

  • Start with a clear understanding of the data structure and relationships.
  • Use a flexible data modeling approach to enable easy scaling.
  • Design an effective mapping strategy to optimize search performance.
  • Consider using nested objects to model complex relationships, as in the mapping sketch after this list.
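
As an illustration, a transporter document with its fleet modeled as nested objects (field names hypothetical) could be mapped like this:

# Hypothetical mapping sketch: each truck's fields are matched together as a
# nested object rather than being flattened across the document.
mapping = %{
  mappings: %{
    properties: %{
      name: %{type: "keyword"},
      fleet: %{
        type: "nested",
        properties: %{
          truck_type: %{type: "keyword"},
          capacity_kg: %{type: "integer"}
        }
      }
    }
  }
}

HTTPoison.put("http://localhost:9200/my_index", Jason.encode!(mapping),
  [{"Content-Type", "application/json"}])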

5. Painless Language

  • Understand the basics of the Painless language, which is used in Elasticsearch.
  • Use the Painless language to create custom scripts for complex queries (see the sketch after this list).
  • Leverage the Painless language to implement custom scoring and relevance ranking strategies.
  • Take advantage of Painless’s debugging features to identify and fix script errors.
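
For example, a script_score query can use a short Painless script to rank transporters by how closely their capacity fits the requested quantity (field and parameter names hypothetical):

# Hypothetical example: score transporters higher the closer their capacity
# is to the requested quantity.
query = %{
  query: %{
    script_score: %{
      query: %{match_all: %{}},
      script: %{
        source: "1.0 / (1.0 + Math.abs(doc['capacity_kg'].value - params.requested))",
        params: %{requested: 1_000}
      }
    }
  }
}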

6. Usage of should, filter, bool, etc

  • Understand the differences between should, filter, and bool clauses.
  • Use the appropriate query clauses to construct complex queries.
  • Leverage the power of Elasticsearch’s query DSL to create efficient queries.
  • Combine multiple clauses to create more complex query logic, as in the sketch below.
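
The key distinction is that filter clauses only narrow the candidate set (they do not score and can be cached), while should clauses contribute to the relevance score. A hypothetical combination, with invented field names:

# Hypothetical example: filter narrows candidates without scoring;
# should boosts documents that also match the preferred lane.
query = %{
  query: %{
    bool: %{
      filter: [
        %{term: %{"truck_type" => "CDD"}},
        %{range: %{"capacity_kg" => %{gte: 1_000}}}
      ],
      should: [
        %{term: %{"preferred_lane" => %{value: "JKT-BDG", boost: 2.0}}}
      ]
    }
  }
}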

7. Query Design

  • Understand the different types of queries available in Elasticsearch.
  • Select the most appropriate query type for your use case.
  • Use query optimization techniques to improve search performance.
  • Experiment with different query parameters to find the optimal settings for your use case.

8. Relevance Ranking

  • Understand Elasticsearch’s relevance scoring system.
  • Design a custom scoring system that accurately reflects user intent.
  • Use Elasticsearch’s boosting and filtering features to improve relevance ranking.
  • Experiment with different scoring functions to optimize relevance ranking.

9. Integration into the Current System

  • Ensure that the Elasticsearch matching engine can be integrated into the existing system.
  • Design the integration strategy to minimize disruption to existing processes.
  • Test the integration thoroughly to ensure compatibility with the existing system.
  • Implement any necessary changes to the existing system to support the Elasticsearch integration.

10. Double Write between the Database and Elastic Search

  • Understand the need for double-writing data between the database and Elasticsearch.
  • Implement a reliable and efficient method for double-writing data.
  • Ensure that the data in Elasticsearch is consistent with the data in the database.
  • Monitor the double writing process to identify and resolve any issues.

11. Data Consistency

  • Understand Elasticsearch’s distributed architecture and replication model.
  • Use versioning and optimistic concurrency control to ensure data consistency (see the sketch after this list).
  • Monitor Elasticsearch’s cluster health to identify and resolve consistency issues.
  • Consider using Elasticsearch’s cross-cluster replication feature to improve data consistency across multiple clusters.
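
Optimistic concurrency control in Elasticsearch relies on the if_seq_no and if_primary_term parameters returned by a previous read: a write that races with another update is rejected with 409 Conflict instead of silently overwriting it. A hypothetical helper:

defmodule OptimisticWrite do
  # Hypothetical sketch: update a document only if it has not changed since
  # it was last read (seq_no and primary_term come from that earlier read).
  def update_if_unchanged(doc, id, seq_no, primary_term) do
    url =
      "http://localhost:9200/my_index/_doc/#{id}" <>
        "?if_seq_no=#{seq_no}&if_primary_term=#{primary_term}"

    HTTPoison.post(url, Jason.encode!(doc), [{"Content-Type", "application/json"}])
  end
end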

12. Testing (Unit Test, Integration, and Manual)

  • Design and implement unit tests for individual components of the matching engine (a sketch follows this list).
  • Implement integration tests to ensure the various components of the matching engine work together correctly.
  • Perform manual testing to identify and fix any issues missed by automated testing.
  • Use a continuous integration and delivery (CI/CD) pipeline to automate testing and deployment processes.
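
A minimal ExUnit sketch for the DynamicQuery module from earlier might look like:

defmodule DynamicQueryTest do
  use ExUnit.Case, async: true

  test "an integer value becomes an exact range filter" do
    assert DynamicQuery.generate_filter("capacity", 10) ==
             %{range: %{"capacity" => %{gte: 10, lte: 10}}}
  end

  test "a list of values becomes a bool/should of filters" do
    assert %{bool: %{should: [_, _]}} =
             DynamicQuery.generate_filter("truck_type", ["CDD", "CDE"])
  end
end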

13. Documentation of the System

  • Create detailed documentation of the Elasticsearch matching engine, including its features, functionality, and performance.
  • Provide clear instructions on how to install, configure, and use the matching engine.
  • Include examples of common use cases and query scenarios.
  • Update documentation regularly to reflect changes in the matching engine.

14. Scaling

  • Plan for scaling from the outset.
  • Use horizontal scaling to improve performance.
  • Use Elasticsearch’s shard allocation awareness feature to improve data locality.
  • Consider using Elasticsearch’s hot-warm architecture to optimize resource usage.

15. Security

  • Understand Elasticsearch’s security features and options.
  • Implement a comprehensive security strategy, including authentication and authorization.
  • Use role-based access control to control access to Elasticsearch data.
  • Monitor Elasticsearch’s security logs to identify and respond to potential security threats.

Results and Outcomes

After much development and planning, Kargo has implemented the first part of its long-term plan for automatic transporter dispatch. The plan consists of three steps: profiling shipper and transporter data, matching and ranking transporters, and dispatching based on recommendations.

The matching engine's output as displayed in the user interface.

The initial implementation includes requirement profiling and requirement matching. However, the order-matching step is still a complicated aspect of the automatic transporter dispatch process and requires further development.

Overall, the effectiveness of the system has yet to be fully evaluated, as the automatic transporter dispatch feature is still in its early stages. However, the potential impact on shippers and transporters could be significant, as the system has the potential to streamline the delivery process and reduce costs.

Conclusion

In conclusion, this article aimed to address the problem of efficient and effective transporter dispatch in the logistics industry through the proposed solution of automatic transporter dispatch. The article discussed the system requirements and constraints, product requirements and constraints, and technical requirements and constraints necessary for the successful development of the system.

The system development process involved defining criteria, defining the schema, providing example data, defining the query, and executing the query to obtain the desired results. Additionally, the article detailed the integration of Elasticsearch into Elixir services through the ElasticSearch, DynamicQuery, ElasticSearchQueryBuilder, and DoubleWrite modules.

However, the development process also presented challenges, such as the complexity of the automatic transporter dispatch system, particularly in the order-matching stage. Despite these challenges, the results and outcomes of the system development were promising, with the successful execution of the query and the integration of Elasticsearch into the Elixir services.

Overall, this article serves as a guide for developers and logistics industry professionals interested in implementing automatic transporter dispatch as a means of improving transporter dispatch efficiency and effectiveness. The intended audience includes logistics companies, transportation management software providers, and software developers in general. By following the steps outlined in this article, developers can successfully create a system that streamlines the transporter dispatch process, resulting in increased productivity and reduced costs.

Credits

We would like to take this opportunity to express our heartfelt gratitude to everyone involved in the success of this project. We extend our thanks to Yogi, Adriana, Irvanda, Calvin, Gibran, Khoirul, Ade, Dhito, Surya, and Fachrul for their dedication, hard work, and commitment. Their exceptional skills and expertise played a crucial role in bringing this project to fruition.

The unwavering support and cooperation from the team helped us to overcome numerous challenges and obstacles along the way. We are truly grateful for the opportunity to work with such a talented and driven team, and we look forward to continuing our collaboration in the future. Once again, we thank everyone for their invaluable contributions to making this project a resounding success.
