Add a complete test suite to your smart contract in a few lines of code

View of earth from outer space
Photo by NASA on Unsplash.

Welcome to another installment of the Blockchain Everywhere series. In case you missed it, check out the previous part. Last time, we talked a bit about integrating a large DeFi protocol. One of the main conclusions from that piece is that such a big chunk of work should be backed by complete unit test coverage to verify the formal logic.

That is the subject of today’s article. Let’s talk a bit about unit testing for smart contracts.

Unit tests are one of the most important parts of any smart contract-based protocol. They are checked by the community, they should run on every commit, and auditors treat unit test coverage as mandatory. The common techniques for Solidity unit testing are well known and well described: there are plenty of articles about the Chai framework, Truffle test helpers, and even the modern OpenZeppelin test helpers. …
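
Those helpers are JavaScript tools; purely as an illustration of what a minimal smart contract unit test can look like, here is a pytest-style sketch using the Python Brownie framework. This is an assumption on my part, not the article's setup: the `Token` contract and its constructor arguments are hypothetical placeholders.

```python
# Minimal Brownie unit test sketch (pytest-based).
# Assumes a hypothetical Token contract with an ERC-20-style transfer().
import pytest
from brownie import Token, accounts

@pytest.fixture
def token():
    # Deploy a fresh contract instance before every test
    return Token.deploy("Test Token", "TST", 18, 1_000_000, {"from": accounts[0]})

def test_transfer_moves_balance(token):
    token.transfer(accounts[1], 100, {"from": accounts[0]})
    assert token.balanceOf(accounts[1]) == 100

def test_transfer_reverts_without_funds(token):
    # The transaction raises on revert, similar to Chai's expectRevert
    with pytest.raises(Exception):
        token.transfer(accounts[1], 1, {"from": accounts[2]})
```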


Power your DeFi application with an adapter to Curve.Fi liquidity pools

highway curving out of sight in a forest of fir trees
Photo by Nathan Anderson on Unsplash

Editor’s note: This article is provided for education and entertainment purposes only and is not in any way financial advice.

Hello, reader. There is a high probability that you opened this piece because of some modern, hyped blockchain terms in the title. If so, you are in the right place. There are plenty of standard tutorials and articles about blockchain and especially smart contract development. That’s why I decided to focus on the modern approach and pass along the experience I have gathered as a blockchain engineer. Welcome to the first piece of my Blockchain Everywhere series.

First of all, this piece is not about DeFi in general or the Curve.Fi protocol in particular. There are far more informative sources of general information about decentralized finance. And, of course, you can read all you want about the Curve.Fi protocol in their official docs or on Twitter. Okay then, what is this all about? The piece you are currently reading centers on modern approaches to smart contract development and the crucial points of integrating such a huge protocol into your own application. …


DATA SCIENCE TIPS

Make the data clean

Photo by Juan Gomez on Unsplash

We all know that data cleaning is one of the most time-consuming stages of the data analysis process. We need to locate the missing values, check their distribution, figure out the patterns, and decide how to fill the gaps. At this point you should realize that identifying the missing-data patterns and choosing a correct imputation process will influence all further analysis. So, let me introduce a few techniques for the common analysis languages: R and Python.

Why is the data missing?

Before we start the imputation process, we should examine the data first and find the patterns of missing values. …
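
For a quick taste of what this stage looks like in Python (the article covers R as well), here is a minimal sketch with pandas; the file name is a placeholder:

```python
import pandas as pd

# Load the dataset (the file name is a placeholder)
df = pd.read_csv("dataset.csv")

# Count the missing values per column
print(df.isnull().sum())

# Share of missing values per column: a quick pattern overview
print(df.isnull().mean().sort_values(ascending=False))

# Rows with several gaps at once often hint at a missingness pattern
print(df[df.isnull().sum(axis=1) > 1].head())
```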


DATA SCIENCE TIPS

Course 101 for data digging

Photo by Daiga Ellaby on Unsplash

Every new data scientist wants to create spectacular visualizations, build progressive prediction models, and pull sensational insights from the data. Okay, these things are very attractive and beautiful. But people sometimes forget about the “dirty” work at the beginning of every analytical process. And I don’t even mean the data cleaning stage.

Before you even start to work with the data, you need … the data!

Yes, it is a common mistake to believe that data analysis starts with dataset cleaning, because that assumes you already have the data. Some people seem to believe the data will always appear magically out of thin air. But data scraping, gathering, digging, and collecting are skills you need to improve constantly. You have to be able to work with different sources, formats, and sometimes even languages, and that requires real skill. …
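
To make “getting the data” concrete, here is a minimal Python sketch of two common acquisition routes; the URLs are placeholders:

```python
import pandas as pd
import requests

# Route 1: download a raw CSV file over HTTP (URL is a placeholder)
resp = requests.get("https://example.com/data.csv", timeout=30)
resp.raise_for_status()
with open("data.csv", "wb") as f:
    f.write(resp.content)
df = pd.read_csv("data.csv")

# Route 2: scrape every <table> from an HTML page in one call
tables = pd.read_html("https://example.com/stats.html")
print(len(tables), "tables found")
```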


EPIC CHARTS

Create your project schedule visualization with basic DataViz tools

Some time ago it struck me that there is a lot of information about bar charts, histograms, scatter plots, line charts, and other effective but very simple and ordinary data visualization instruments. Moreover, I would argue that knowing how to build these common charts should be treated as basic and mandatory for every data scientist or DataViz specialist. But a whole variety of other plots and visualizations exists. They are less common and more specific, but they surely belong in your inventory. What’s more, you also have to be familiar with some modern instruments for creating such charts. …
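
As a preview of the kind of chart the article builds, here is a minimal Gantt-style schedule sketch with Plotly Express, one possible choice of “basic DataViz tool”; the tasks and dates are made up:

```python
import pandas as pd
import plotly.express as px

# Toy project schedule (tasks and dates are made up)
schedule = pd.DataFrame([
    {"Task": "Research",  "Start": "2021-01-04", "Finish": "2021-01-15"},
    {"Task": "Prototype", "Start": "2021-01-11", "Finish": "2021-01-29"},
    {"Task": "Testing",   "Start": "2021-02-01", "Finish": "2021-02-12"},
])

# px.timeline draws a Gantt-style chart out of the box
fig = px.timeline(schedule, x_start="Start", x_end="Finish", y="Task")
fig.update_yaxes(autorange="reversed")  # first task on top
fig.show()
```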


Software Architecture

The symbiosis of a Windows Forms interface and an AI pipeline

Photo by Marius Masalar on Unsplash

No long intro here:
a) we can always run our ML models or neural network pipelines right in the console;
b) there is no reason why we can’t run them on Windows and add some buttons for clicking.

Here, though, we face a problem. We can easily develop a Python model or a C# Windows Forms application on its own, but connecting the two can be a real pain when building an AI-powered application. That’s why this piece focuses on a solution for the communication between the C# and Python parts of the project.

For the project, we will use the Lending Club loan dataset from Kaggle. …
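
One simple bridge (an assumption on my part; the article may settle on a different mechanism) is to launch the Python model as a child process from the C# side via System.Diagnostics.Process and exchange JSON over the standard streams. Here is a sketch of the Python half, with the model logic stubbed out:

```python
# predict.py: reads a JSON request from stdin, writes a JSON response to stdout.
# A C# Windows Forms app can start this script as a child process and
# exchange data through the standard streams.
import json
import sys

def predict(features):
    # Stub for the real ML model; replace with actual inference
    return {"approved": features.get("income", 0) > 50_000}

if __name__ == "__main__":
    request = json.load(sys.stdin)    # e.g. {"income": 62000}
    response = predict(request)
    json.dump(response, sys.stdout)   # e.g. {"approved": true}
```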


R tips

Commands that should become second nature

Photo by Matt Artz on Unsplash

Data cleaning is one of the most time-consuming stages of the data analysis process. Many of its steps involve getting acquainted with the dataset, searching for missing values, imputing or removing them, and often repeating the same lines of code for different variables or combinations of them. So, we should take every opportunity to speed the process up.

Some time ago I presented an article about short Python idioms for handling missing values in datasets. Today I have prepared a compilation of similar scripts in the R language. …
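
For reference, idioms in that spirit look roughly like this in Python (a minimal sketch, not a quote from the earlier article; the file and column names are placeholders):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")  # file name is a placeholder

# Drop rows where a key column is missing
df = df.dropna(subset=["price"])

# Impute numeric gaps with the column median
df["income"] = df["income"].fillna(df["income"].median())

# Impute categorical gaps with the most frequent value
df["region"] = df["region"].fillna(df["region"].mode()[0])
```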


C++ PATTERNS

Work with text logging and formatting in the most efficient way

Photo by Patrick Fore on Unsplash

Welcome to the fourth issue of practical C++ solutions. If you have read the previous three articles, you have surely noticed that I have been focusing on container design. Let’s step aside for a moment and talk about one of the most important parts of big software projects: logging the software’s state. Why is it necessary? It allows you to track all of a user’s actions for security reasons or for scenario recreation. And it is really helpful for bug tracking if an appropriate trace level is set up. …
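
The article itself works in C++; just to illustrate the trace-level idea in a compact, language-neutral way, here is the same concept sketched with Python’s standard logging module:

```python
import logging

# Configure the trace level once: DEBUG shows everything, WARNING hides the noise
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("app")

log.debug("user clicked 'Save'")               # scenario recreation, bug tracking
log.info("document saved to disk")
log.warning("autosave took longer than expected")
log.error("could not open settings file")      # always visible for security review
```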


Data visualization tips

My first impressions of Tableau and how it compares with Python code

Well, the short answer is yes. Still, there are some cool things I noticed during my Tableau exploration.

Photo by Roman Bozhko on Unsplash

Tableau was one of the mandatory stops on my journey into data science and data visualization. It is a very cool and convenient instrument for quickly getting acquainted with a dataset and building initial exploratory visualizations. It delivers a lot of the tools necessary for primary data analysis and further exploration. So, it is no surprise that I am interested in it as a beginner data scientist. First of all, though, I am a software developer, so I don’t just want to pick up a new fancy tool; I want to understand how it works. …
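
On the Python side of that comparison, the first exploratory pass that Tableau automates looks roughly like this with pandas (a sketch; the file name is a placeholder):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")  # file name is a placeholder

print(df.dtypes)          # Tableau infers these data types automatically
print(df.describe())      # summary statistics for the numeric columns
df.hist(figsize=(10, 8))  # quick distribution plots, one per numeric column
```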


Different approaches for different purposes

Photo by Markus Spiske on Unsplash.

What is the first step in the data analysis process? I believe the first thing you do is open the dataset to get to know the data. If you work with Tableau, it does most of the work for you: detecting file types, data types, delimiters, and encoding. But how should you proceed if you work directly in Jupyter?

Here are a few ways to open a dataset depending on the purpose of the analysis and the type of the document.

1. Custom File for Custom Analysis

Working with raw or unprepared data is a common situation. After all, preparing a dataset for further analysis or modeling is one of the stages of a data scientist’s job. No friendly CSV format, no structure, custom delimiters, etc. …
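
A hedged sketch of that custom-file case with pandas; the delimiter, encoding, file name, and column names below are all assumptions:

```python
import pandas as pd

# Raw export with a non-standard delimiter and encoding (all values assumed)
df = pd.read_csv(
    "raw_export.txt",
    sep="|",              # custom delimiter instead of the friendly comma
    encoding="cp1251",    # pandas will not guess this for you, unlike Tableau
    header=None,          # no header row in the raw export
    names=["id", "date", "amount"],
)
print(df.head())
```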

About

Pavel Horbonos (Midvel Corp)

Stochastic programmer | Art & Code | https://github.com/Midvel 💻| https://www.instagram.com/midvel.corp 🎨⠀| Blockchain developer in https://blaize.tech/
