How To Write Smart Contracts for Blockchain Using Python — Part One

A step-by-step guide to getting started

Luiz Milfont
Jul 26
Photo by Hitesh Choudhary on Unsplash

In 2019, smart contracts are the new paradigm shift in computer programming. This guide is intended as an introductory path to creating computer programs that are deployed and run on a decentralized blockchain.

A bit of history… Back in the 1970s, in the early days of personal computing, if you wanted to write a piece of code to perform, let’s say, a simple sum operation (considering the MOS Technology 6502 8-bit CPU), you would end up with something like this:

18 A9 01 69 02 8D F6 31

The above hexadecimal numbers represented the machine language that the CPU could understand to perform an action.

The CPU had an “instruction set”, which means that each number was a command that resulted in an operation made by the processor: addition, subtraction, division, multiplication, load, store, jump, etc.

A programmer needed to know the operation codes by heart, memorizing which number corresponded to which command. Not very productive.

Soon, it was clear that a more human approach was required. It was the beginning of a movement towards the creation of higher-level languages, which looked more like spoken language.

So, at first came what became known as mnemonics:

CLC
LDA #$01
ADC #$02
STA $31F6

For each computer operation code, there was now an associated word or symbol that facilitated understanding. So, CLC (clear the carry) was equivalent to 18. LDA (load the accumulator with an immediate value) was A9. ADC (add with carry) was 69. And STA (store the accumulator at an absolute address) was 8D.

This approach to programming was known as assembly language and it was a first step to making programming easier, relieving programmers from tedious tasks, such as remembering numeric codes.

The program above clears the carry, loads the value 01 into the accumulator, adds 02 to it, then stores the resulting number in the memory address 31F6. Now in a far easier way for humans to understand.

As the years passed, new tools were created to make programming more productive, so development environments evolved a lot. The term high-level language appeared.

This means that the higher the level of a programming language, the more it resembles spoken, human language. Conversely, low-level languages are the ones closer to the computer’s instruction set itself.

In parallel with this evolution of computer languages, there were some paradigm shifts along the way.

The very first computer programs were injected directly into a memory address and then the computer needed to be told the point where the program would start its execution. This was raw machine language computer code, like the one shown at the beginning of this article.

With the advent of mnemonics came the assembler: a piece of software responsible for decoding the human-readable mnemonics, converting them to machine language code, injecting it into the correct memory address, and telling the CPU to start execution. Way better!
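To make the idea concrete, here is a toy sketch in Python (the language this series builds on) of what an assembler essentially does: it maps each mnemonic to its opcode and lays the resulting bytes out in order. The opcode table matches the 6502 example above; everything else in the snippet is invented purely for illustration and is nothing like a real assembler’s internals.

# A toy "assembler" sketch: translate the mnemonic program above
# into the raw bytes a 6502-style CPU would execute.
# This is only an illustration of the idea, not a real assembler.

OPCODES = {
    "CLC": 0x18,      # clear the carry flag
    "LDA #": 0xA9,    # load accumulator, immediate operand
    "ADC #": 0x69,    # add with carry, immediate operand
    "STA abs": 0x8D,  # store accumulator, absolute address
}

def assemble_sum_program():
    """Hand-assemble: CLC / LDA #$01 / ADC #$02 / STA $31F6."""
    program = [
        OPCODES["CLC"],
        OPCODES["LDA #"], 0x01,
        OPCODES["ADC #"], 0x02,
        OPCODES["STA abs"], 0xF6, 0x31,  # 6502 addresses are stored little-endian
    ]
    return bytes(program)

print(" ".join(f"{b:02X}" for b in assemble_sum_program()))
# 18 A9 01 69 02 8D F6 31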

Although that helped a lot to write and debug software, it was still counter-productive. We needed an easier way to program.


The “BASIC” Language

Beginner’s All-purpose Symbolic Instruction Code (or BASIC), designed in 1964 with ease of use as its main goal, was one of the first high-level, human-friendly computer languages.

A common BASIC program would look like this:

10 A = 1
20 B = 2
30 SUM = A + B
40 PRINT(SUM)

In the BASIC language, each line, identified by a sequential number (10, 20, 30, …), stored a command. Those commands were run in sequence, one at a time.

The program would execute when the user typed the command RUN at the prompt. Here, we had the first programming paradigm shift: interpretation.

There was something called an interpreter, whose function was to convert each line of code, in real time, to its equivalent in machine language and then execute it.

Also, note that the commands were now represented by English words (such as PRINT). CPU registers (such as the accumulator) and memory addresses were replaced by variables. It became a lot easier to program like this!
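To give a rough idea of what an interpreter does, the Python sketch below walks through the tiny BASIC-like program above one numbered line at a time, translating and executing each statement on the fly. It is purely illustrative and only understands the two kinds of statements used in the example: assignments and PRINT.

# A toy interpreter sketch for the tiny BASIC-like program above.
# It executes one numbered line at a time, keeping variables in a dict.

program = {
    10: "A = 1",
    20: "B = 2",
    30: "SUM = A + B",
    40: "PRINT(SUM)",
}

def run(program):
    variables = {}
    for line_number in sorted(program):   # lines run in numeric order
        statement = program[line_number]
        if statement.startswith("PRINT(") and statement.endswith(")"):
            name = statement[len("PRINT("):-1]
            print(variables[name])        # PRINT looks up a variable by name
        else:
            name, expression = statement.split("=", 1)
            # Evaluate the right-hand side using the variables seen so far.
            variables[name.strip()] = eval(expression, {}, variables)

run(program)  # prints 3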

BASIC, although a powerful high-level language, was slow to execute because the interpreter needed to convert everything to machine language in real time.

This needed to be solved.


Compiled Languages

Once more, a paradigm shift occurred, bringing us what we know today as compiled languages.

Compilation meant that we now had an extra step before executing computer code. The compiler was a piece of software that converted a program written in a high-level language entirely into machine language, but not in real time as BASIC’s interpreter did.

Instead, the user had to wait for the process to complete. When the program was finally converted (compiled) and available to run, the user asked for the executable to start.

The difference was that it ran a lot faster than old interpreted programs — more productive and time-saving. Another implicit benefit was that the executable could be shared without the source code, avoiding copyright problems.

This was the dawn of a new era and many compiled languages flourished in the ecosystem.

Some examples: Ada, ALGOL, SMALL, Visual Basic, PureBasic, C, C++, Objective-C, Swift, D, C# (to bytecode), Java (to bytecode), CLEO, COBOL, Cobra, Crystal, eC, Eiffel, Sather, Ubercode, Erlang (to bytecode), F# (to bytecode), Factor (later versions), Forth, Fortran, Go, Haskell, Haxe (to bytecode or C++), JOVIAL, Julia, LabVIEW, G, Lisp, Common Lisp, Lush, Mercury, ML, Alice, OCaml, Nim (to C, C++, or Objective-C), Open-URQ, Pascal, Object Pascal, Delphi, Modula-2, Modula-3, Oberon, PL/I, RPG, Rust, Seed7, SPITBOL, Visual Foxpro, Visual Prolog, W, Zig, and many, many others…

Software continued to evolve. And, as suggested as early as the 1960s by computer scientists such as Alan Kay and Ivan Sutherland, a new approach to systems development was implemented, so that computer programs could better represent our real world.

Object-oriented programming (OOP) was born. Another paradigm shift.

And now, we have the concept of classes and methods:
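Here is a minimal Python illustration (the class and its method names are invented just for this example): a class bundles data together with the operations allowed on that data.

# A minimal illustration of classes and methods in Python.
# The Account class models a real-world concept (a bank account),
# and its methods define the operations allowed on it.

class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner      # data (attributes) live inside the object
        self.balance = balance

    def deposit(self, amount):  # behavior (methods) operates on that data
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = Account("Alice")
account.deposit(100)
account.withdraw(30)
print(account.balance)  # 70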

Although we’ve seen all these new, rich, different approaches to delivering software and designing its architecture, one thing remained the same: after compiling the code to machine language, the result was tied to a specific CPU.

In other words, software created on an IBM PC would not run on an Apple computer, as each one had a different processor with different instruction sets.


Interoperability

Yet another paradigm shift brought us what are known as interoperable languages and platforms, such as Java and .NET.

The idea behind this was simple: create an intermediary (virtual) set of instructions and compile a program’s source code to this intermediate set.

Then, for each family of computers, you would have a specific compiler or interpreter that translated these intermediary instructions into that machine’s own instruction set: a form of “two-step compilation” built around a shared, common set of instructions.

This intermediary set of instructions is known as bytecode. The bytecode runs on a virtual machine such as the JVM (Java Virtual Machine) or the CLR (Common Language Runtime). This made it possible to “write once, run anywhere” (a slogan coined by Sun Microsystems for Java).
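Python itself happens to work in the same two-step way, which makes for a convenient illustration: source code is first compiled to Python bytecode, and that bytecode is then executed by the Python virtual machine. The snippet below uses the standard dis module to display the intermediate instructions of a small function; the exact instruction names vary between Python versions.

# Python also compiles source code to an intermediate bytecode,
# which is then executed by the Python virtual machine.
# The dis module from the standard library shows those instructions.

import dis

def add(a, b):
    return a + b

dis.dis(add)
# Typical output (instruction names vary by Python version):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_ADD   (or BINARY_OP on newer versions)
#   RETURN_VALUE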

Although this evolution in computer languages and software architecture made sense in a world where each computer was an isolated island, we soon began to connect our devices through networks.

Network protocols were born, to allow communication between machines through an electronic communication channel.

In 1989, the World Wide Web was invented by Tim Berners-Lee. Now, the software needed to be distributed via a network and we didn’t know what type of computers were connected to it.

New standards were created to solve this problem, and client-server architecture was brought to programming languages. This approach considered that the computer software would now reside on a server, which would deliver information to a client upon certain requests.
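As a minimal sketch of that request/response model, the snippet below uses Python’s built-in http.server module: the server process waits for requests and answers each one with a small piece of data. The port number and the response text are arbitrary choices made for the example.

# A minimal client-server sketch using only the Python standard library.
# The server waits for HTTP requests and answers each one with some data.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from the server!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any client (a browser, curl, urllib, ...) can now send a request
    # to http://localhost:8000 and receive the response above.
    HTTPServer(("", 8000), Handler).serve_forever()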

This new paradigm shift completely changed the way we work with software and programming. We had to sign up for online hosting services to publish our software, then upload it to a server that responded to users’ requests 24 hours a day.

The paragraph above describes a very recent era of the internet and client-server software. It entirely changed the world as we know it and with it, the way we produce, distribute and consume computer programs.

Although the benefits are unquestionable, it is still an environment based on centralization, which is susceptible to attacks, censorship, and failures. The software depends on the server to always be online.

Even with the performance and scalability problems addressed by cloud computing, this model still has the problem of the intermediary: the man in the middle.

The intermediary usually puts hurdles in the way and causes difficulties for the users of a common, shared solution. This can take the form of high fees, licenses, regional rules, or government censorship, always hurting the consumer in some way.

For the sake of a free world, the more decentralized, the better. And although blockchains themselves are still far more expensive and less scalable than their centralized counterparts, when a disruptive technology appears, the cost for consumers to access solutions usually falls sharply.

As an example, consider how much cheaper and easier it is to send money to a relative in another country with Bitcoin today.


The New Era

We are at the beginning of a new era — the era of decentralized blockchains. Some may call this the internet 3.0.

This is our most recent paradigm shift and, with the advent of smart contracts, brings us a new way of creating computer software.

Smart contracts are programs that run in a decentralized environment. They are a disruptive technology, in that they remove the need for an intermediary (a middleman) in certain real-world processes, making those processes cheaper, more accessible, and more efficient.

Common applications well suited to smart contracts include: insurance, testaments, regular payment schedules, healthcare plans, the autonomous vehicle economy, games, property exchange, tokenization of assets, mortgages, voting, and many more.

When a smart contract is published, a copy of it will reside on each blockchain server around the world. And, exactly as programmers once had to change how they created software when object-oriented architecture was introduced, we now have to, once again, adapt to this new approach.
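Conceptually, a smart contract looks a lot like a class: it has storage (its data, persisted on the blockchain) and entry points (methods anyone can call by sending a transaction). The plain-Python sketch below is only a conceptual model of that idea, not the Python-based smart contract language we will use in part two, and every name in it is invented for illustration.

# A conceptual model of a smart contract, in plain Python.
# Real smart contract languages (covered in part two) differ in syntax,
# but share the same shape: persistent storage plus callable entry points.

class CrowdfundingContract:
    def __init__(self, goal):
        # "Storage": state that would live on the blockchain.
        self.goal = goal
        self.contributions = {}

    def contribute(self, sender, amount):
        # "Entry point": called through a transaction by any participant.
        self.contributions[sender] = self.contributions.get(sender, 0) + amount

    def goal_reached(self):
        return sum(self.contributions.values()) >= self.goal

contract = CrowdfundingContract(goal=1000)
contract.contribute("alice", 600)
contract.contribute("bob", 500)
print(contract.goal_reached())  # True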

You won’t be able to do a stock option Black-Scholes fair-price calculation in a smart contract; that’s not what they are for.


Conclusion

This article is the first part of a series intended to teach the basics of smart contract programming, using a language based on Python, through a fully online IDE that allows editing, testing, debugging, and running smart contracts on a blockchain.

In part two of the series, we will take a pragmatic approach, targeted at those who already have programming experience, with simple, step-by-step examples.

In the end, you will be able to start coding smart contracts effectively.

Read part two.

Thanks for reading!

Thanks to Solomon Lederer, PhD
