A never-ending Journey

Week 1 — Week 2

Parameter Scan: We have created a new module, distributed_parameter_scaning, which lets users provide multiple models together with the simulations to run for each model; all of these models are run in parallel in a distributed environment, and the results are collected into an array/graph. Interested in knowing a little more? Read my previous blog. You can also check my pull request related to Parameter Scan.
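To give a flavour of the idea, here is a minimal sketch of a distributed scan, assuming a running SparkContext and tellurium installed on every worker; the Antimony model, the scanned values and the simulate helper are hypothetical stand-ins, not the module's actual API.

```python
import tellurium as te
from pyspark import SparkContext

sc = SparkContext(appName="parameter-scan-sketch")

antimony_model = """
model simple
    S1 -> S2; k1*S1
    S1 = 10; S2 = 0; k1 = 0.1
end
"""

k1_values = [0.05, 0.1, 0.2, 0.4]  # hypothetical parameter values to scan

def simulate(k1):
    # Each worker loads the model, sets the parameter and simulates.
    r = te.loada(antimony_model)
    r.k1 = k1
    result = r.simulate(0, 50, 100)
    return (k1, result[-1, :].tolist())  # final state for this k1

# Run the scan in parallel across the cluster and collect the results.
results = sc.parallelize(k1_values).map(simulate).collect()
```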

Week 3 — Week 4

Parameter Estimation: To estimate a particular parameter, we now have a new module that also runs in a distributed environment. To run it, a user provides the model (SBML/Antimony) and the bounds of the parameter(s) to estimate. The module internally uses differential evolution, with the Sum of Squared Errors as the objective. We have tested this on the Immigration-Death model and the Lotka-Volterra model and presented a poster at Beacon 2017.
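Here is a rough single-machine sketch of that estimation loop, assuming scipy and tellurium are available; the model, the synthetic "measurements" and the bounds are hypothetical examples, not the values from the actual module.

```python
import numpy as np
import tellurium as te
from scipy.optimize import differential_evolution

antimony_model = """
model decay
    S1 -> ; k1*S1
    S1 = 10; k1 = 0.3   # k1 is the parameter we pretend not to know
end
"""

r = te.loada(antimony_model)
# Pretend the model's own output is experimental data.
observed = np.array(r.simulate(0, 10, 50)[:, 1])

def sse(params):
    # Objective: Sum of Squared Errors between simulation and data.
    r.resetToOrigin()
    r.k1 = params[0]
    predicted = r.simulate(0, 10, 50)[:, 1]
    return float(np.sum((predicted - observed) ** 2))

bounds = [(0.0, 1.0)]  # user-supplied bounds for k1
fit = differential_evolution(sse, bounds)
print(fit.x)  # estimated k1, should land close to 0.3
```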

Week 5 — Week 6

Sensitivity Analysis: This describes how sensitive the model output is to a small change in a parameter's value. Like the previous two modules, sensitivity analysis is a new module where users provide SBML/Antimony models together with a custom simulator, giving them the freedom to define their own pre-simulation steps and simulations. In addition, users provide the parameters (with bounds) to vary; these are changed across simulations, which run in a distributed environment. The results of the sensitivity analysis are then grouped into categories.
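As a rough illustration of what such a computation can look like (a finite-difference sketch, not necessarily the module's actual method), assuming tellurium is available; the model and the 1% perturbation are hypothetical:

```python
import tellurium as te

antimony_model = """
model decay
    S1 -> ; k1*S1
    S1 = 10; k1 = 0.3
end
"""

def final_S1(k1):
    # Simulate the model for a given k1 and return the final S1 value.
    r = te.loada(antimony_model)
    r.k1 = k1
    return r.simulate(0, 10, 50)[-1, 1]

k1 = 0.3
delta = 0.01 * k1  # small (1%) perturbation of the parameter
# Central-difference estimate of dS1/dk1, scaled to a relative sensitivity.
dS1_dk1 = (final_S1(k1 + delta) - final_S1(k1 - delta)) / (2 * delta)
relative_sensitivity = dS1_dk1 * k1 / final_S1(k1)
print(relative_sensitivity)
```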

Week 7 — Week 9

Experimenting with Apache Livy

With Livy, we are trying to decouple client interaction from the Spark cluster, so that users can run their jobs from any system. The planned workflow looks like this:

  1. Every consumer needs to register with us.
  2. For every registered consumer, we shall create a user account.
  3. Every user needs to send us their public key, or we can share a password with them for authentication.
  4. They can then use the wrapper to connect to the server and transfer scripts from their local system, as sketched below.
  5. There can be many types of files that they may send.
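To make the idea concrete, here is a minimal sketch of talking to a Livy server over its REST API with Python's requests library; the host/port (8998 is Livy's default) and the submitted code are hypothetical examples, not our wrapper's actual interface.

```python
import json
import time
import requests

LIVY = "http://localhost:8998"  # assumed Livy server address
headers = {"Content-Type": "application/json"}

# 1. Open an interactive PySpark session on the cluster.
resp = requests.post(LIVY + "/sessions",
                     data=json.dumps({"kind": "pyspark"}),
                     headers=headers)
session_id = resp.json()["id"]

# 2. Wait until the session is idle (ready to accept statements).
while requests.get(f"{LIVY}/sessions/{session_id}").json()["state"] != "idle":
    time.sleep(1)

# 3. Submit a statement; Livy runs it on the cluster and we poll for the result.
code = "sc.parallelize(range(100)).sum()"  # hypothetical job
resp = requests.post(f"{LIVY}/sessions/{session_id}/statements",
                     data=json.dumps({"code": code}),
                     headers=headers)
statement_id = resp.json()["id"]

while True:
    result = requests.get(
        f"{LIVY}/sessions/{session_id}/statements/{statement_id}").json()
    if result["state"] == "available":
        print(result["output"])
        break
    time.sleep(1)
```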

Week 10

Apache Zeppelin was integrated with Apache Livy so that users can run their Spark jobs through Zeppelin, which is connected to the Livy server running on the cluster.
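On the Zeppelin side, this mostly amounts to pointing the Livy interpreter's zeppelin.livy.url property at the Livy server's address (e.g. http://localhost:8998 for a default setup).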

Week 11 — Week 12

Dockerization

A Docker image containing Apache Spark, Apache Zeppelin (connected to the Spark cluster) and the latest tellurium build is on its way. With this, we can scale to a cluster of any size. Here is the link to the Docker repo.
