Haskell vs Perl 6: First Impressions

LLFOURN
Unraveling the Ouroboros
Mar 26, 2018

So far, I’ve been using Rakudo Perl 6 to demonstrate simple cryptographic schemes. It’s kept the code short and sweet and full of emoji. Since Ouroboros is written in Haskell and investigating its consensus algorithm is the point of this publication, I thought I should at least dip my toes in it.

I rewrote my previous coin-flipping-with-discrete-logarithms code in Haskell. The Perl 6 version is here, and the Haskell version is here. They should be comparable side by side. If you’re a Haskeller I would love for you to check my code and welcome even the most nitpicky of nitpicks.

It’s worth noting that the first implementation of Perl 6 was written in Haskell. You can spot some functional programming influence in Perl 6 if you squint hard enough.

What follows are a few things that surprised or confounded me while trying to translate between the two. The Perl 6 code is marked with 🦋 and the Haskell with the half-life symbol λ⃝.

Types don’t exist at runtime

In the Perl 6 version, I leverage types at runtime. I used a subset declaration to derive subtypes of Int with runtime constraints.

# 🦋
# The ranges
constant $ℤ𝑝 = ^𝑝; # Perl 6 for 0..(𝑝-1);
constant $ℤ𝑞 = ^𝑞;
# The types derived from the ranges
subset ℤ𝑝 of Int:D where $ℤ𝑝; # any Int in 0..(𝑝-1)
subset ℤ𝑞 of Int:D where $ℤ𝑞;
# The multiplicative group of order 𝑞 (generated by 𝑔)
subset 𝔾 of ℤ𝑝 where *.expmod(𝑞, 𝑝) == 1

So ℤ𝑝, ℤ𝑞 and 𝔾 are declared at compile time, but their constraints (declared with where) are checked at runtime.

With Haskell you can’t do the same thing. Types are a compile-time-only construct; you can’t attach runtime constraints to them. Given that Haskell is functional, it made sense to write functions defining the constraints:

-- λ⃝
data Group = ℤ𝑝 | ℤ𝑞 | 𝔾 deriving (Show, Eq)
upperBound :: Group -> Integer
upperBound ℤ𝑝 = 𝑝
upperBound ℤ𝑞 = 𝑞
member :: Group -> Integer -> Bool
member 𝔾 x = (member ℤ𝑝 x) && (expmod x 𝑞 𝑝) == 1
member g x = x >= 0 && x < (upperBound g)

The relationship between the two approaches is clear when you look at the definitions of 𝔾 side by side:

# 🦋
subset 𝔾 of ℤ𝑝 where *.expmod(𝑞, 𝑝) == 1
-- λ⃝
member 𝔾 x = (member ℤ𝑝 x) && (expmod x 𝑞 𝑝) == 1

The Perl 6 one is neater, but the Haskell version composes nicely without any special keywords.

I use them to define the main discrete-logarithm-based commitment function like so:

# 🦋
sub COMMIT(ℤ𝑞 \𝑥 --> 𝔾) { expmod(𝑔, 𝑥, 𝑝) }
-- λ⃝
commit :: Integer -> Integer
commit 𝑥 | member ℤ𝑞 𝑥 = expmod 𝑔 𝑥 𝑝
         | otherwise   = error (show 𝑥 ++ " isn't a member of ℤ𝑞")

The main advantage of the Perl 6 subset here is that you don’t have to write your own error messages, and the signature looks like the mathematical definition of a discrete logarithm commitment function.
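To make the runtime check concrete, here’s a standalone toy sketch (not from the actual code linked above) with made-up parameters small enough to verify by hand:

-- λ⃝ (toy sketch with made-up parameters, not the real code)
𝑝, 𝑞, 𝑔 :: Integer
𝑝 = 23 -- a prime
𝑞 = 11 -- a prime dividing 𝑝 - 1
𝑔 = 2  -- generates the subgroup of order 𝑞: 2^11 mod 23 == 1

-- a throwaway square-and-multiply expmod so this sketch runs on its own
expmod :: Integer -> Integer -> Integer -> Integer
expmod b e m = go b e 1
  where
    go _ 0 acc = acc
    go x n acc
      | odd n     = go (x * x `mod` m) (n `div` 2) (acc * x `mod` m)
      | otherwise = go (x * x `mod` m) (n `div` 2) acc

main :: IO ()
main = do
  print (expmod 𝑔 3 𝑝)        -- 2^3 mod 23 == 8
  print (expmod 8 𝑞 𝑝 == 1)   -- True: 8 is a member of 𝔾 (8^11 mod 23 == 1)

With these toy numbers, commit 3 would pass the member ℤ𝑞 guard and return 8, while commit 42 would fall through to the error branch.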

Recursion instead of loops

The secret-prompt function takes a function parse as an argument and returns input from the user after it’s been successfully processed by parse. The Perl 6 version uses an until loop while the Haskell version uses recursion.

# 🦋
sub secret-prompt(&parse) {
    until defined my $valid = parse(read-line) {
        say("Invalid value. Try again.")
    }
    return $valid;
}
-- λ⃝
secretPrompt :: (String -> Maybe a) -> IO a
secretPrompt parse = do
    line <- readLine
    case parse line of
        Nothing -> putStrLn "Invalid value. Try again." >>
                   secretPrompt parse
        Just x  -> return x

The imperative version still feels more natural to me. I guess thinking in terms of functions and recursion instead of loops and variables just takes some getting used to. I’m not sure the Haskell version is as succinct as it could be.
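(For what it’s worth, the same loop can be folded into maybe; this is just a sketch of a terser alternative, still assuming the readLine helper from the real code.)

-- λ⃝ (a terser sketch of the same loop)
secretPrompt :: (String -> Maybe a) -> IO a
secretPrompt parse = do
    line <- readLine
    maybe (putStrLn "Invalid value. Try again." >> secretPrompt parse)
          return
          (parse line)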

What’s cool about the Haskell secretPrompt is how clearly and succinctly you specify the parse argument. In Perl 6, all I can say is that it has to be callable, via the & sigil. In Haskell I define the signature (String -> Maybe a) -> IO a, which translates to “the argument function for all calls to secretPrompt must take a String and return a Maybe of any type a; secretPrompt then returns an IO of that type a”. Of course, without this feature you wouldn’t be able to get this to compile in Haskell, but I still think it’s pretty nice.

Edit: There is actually a way to do a check on a callable’s signature in Perl 6 at runtime. For example you could do:

sub secret-prompt(&parse:(Str --> Any)) {
    until defined my $valid = parse(read-line) {
        say("Invalid value. Try again.")
    }
    return $valid;
}

It’s not as powerful as Haskell’s version, which lets you define the return type of the function at compile time based on the return type of the argument function. Thanks to Wenzel Peppmeyer for pointing this feature out to me in the comments.
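To see how that inference plays out at a call site, here’s a hypothetical usage sketch. readMaybe comes from GHC’s Text.Read module and isn’t part of my code; the type annotation is what pins down the a:

-- λ⃝ (hypothetical usage sketch, reusing secretPrompt from above)
import Text.Read (readMaybe)

-- Because parse here returns Maybe Integer, the compiler infers
-- that this call to secretPrompt returns IO Integer.
promptInteger :: IO Integer
promptInteger = secretPrompt (readMaybe :: String -> Maybe Integer)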

When Haskell becomes tedious

Satisfying the Haskell compiler (otherwise known as writing explicitly correct code) can be really tricky. I found myself feeding it a lot of repetitive declarations to keep it happy. It looks like sometimes this is so annoying that Haskell developers add language extensions to the compiler to lighten the load. Here’s how one called “existential quantification” proved to be quite handy.

For each step of the coin-flipping game, Alice or Rob sends the other a message of keys and values like so:

# 🦋
🧑🏻 ⟹ ( commitment => 𝑐, move => 𝑚 );

Getting this to work in Haskell was really difficult because 𝑐 is an Integer and 𝑚 is a Coin. With my object-oriented hat on, I thought all I would need to do was make a class like MessageValue which would define how each type of thing should be printed out:

-- λ⃝
class MessageValue a where
    gist :: a -> String

instance MessageValue String where
    gist = id

instance MessageValue Integer where
    gist x = "0x" ++ showHex x ""

instance MessageValue Coin where
    gist = show

And then make the signature for ⟹ take a list of (String, MessageValue) pairs like so:

-- λ⃝
(⟹) :: (MessageValue a) => Player -> [(String, a)] -> IO()

And call it like:

-- λ⃝
(👩) ⟹ [("move", 𝑚), ("commitment", 𝑐)]

Since 𝑚 and 𝑐’s types are both instances of MessageValue, this should be fine, right? Wrong! It doesn’t matter that they are both instances of the MessageValue class; they are not the same instance of it. 😫 Arrrrg.

So how do I make them the same type? I made a new type that wraps both Integer and Coin like so:

-- λ⃝
data MessageWrap = MWS String | MWI Integer | MWC Coin

instance MessageValue MessageWrap where
    gist (MWS x) = gist x
    gist (MWI x) = gist x
    gist (MWC x) = gist x

And then use it like:

-- λ⃝
(👩) ⟹ [("move", MWC 𝑚), ("commitment", MWI 𝑐)]

Now we have a list of pairs whose values all have the same type, MessageWrap, so Haskell is happy again and GHC will compile my code. But this is not ideal, because every time I want to print a new type of thing out I have to create a new MWwhatever constructor for it and then add another repetitive gist (MWwhatever) case.

Luckily there’s a compiler extension called ExistentialQuantification that introduces a special forall construct that lets me tell the compiler what I mean in fewer declarations. I used it like so:

{-# LANGUAGE ExistentialQuantification #-}
data MessageWrap = forall a. MessageValue a => MW a

-- Define gist, which just unwraps the value and calls gist on it
instance MessageValue MessageWrap where
    gist (MW m) = gist m

And then to send the message I just do:

(👩) ⟹ [("move", MW 𝑚), ("commitment", MW 𝑐)]

And everything works! Awesome.

…except as I googled around I found that, according to someone who knows what they’re talking about, this is an anti-pattern 😰. @jonathangfischoff thinks I should just do:

(👩) ⟹ [("move", gist 𝑚), ("commitment", gist 𝑐)]

I’m not sure I agree. This means that if I ever want to change how ⟹ works internally (like sending the communications over the network), I’ll have to change the callers as well, which doesn’t sound good. I would like to know whether this is really an anti-pattern in this case, because it seems pretty good to me.
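For comparison, taking that suggestion to its conclusion would mean ⟹ only ever sees strings the caller has already rendered, so its signature would become something like this sketch (not my actual code):

-- λ⃝ (sketch of the suggested alternative)
(⟹) :: Player -> [(String, String)] -> IO ()

The type gets simpler, but every caller now has to remember to call gist itself.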

To compare, in my Perl 6 version I just extended the built-in gist function with two multi-dispatch candidates.

# 🦋
multi gist(Int:D $_) { '0x' ~ .base(16).lc }
multi gist(Enumeration:D $_) is default { .gist }

It seems like the price Haskell pays for having elegant solutions to difficult problems is that simple problems, like printing some stuff out, turn into much harder ones 🤔.

When it compiles it’s correct

This is what struck me the most about Haskell. After you go through the suffering to get it to compile, it works! There wasn’t a single time where it compiled and then failed at runtime. I’ve never had this experience before, even with other statically typed languages. Pure functional + static typing + powerful type inference seems to be a local optimum of language design.

Of course it takes much more effort to actually get it to compile. Getting quick feedback through output and errors makes dynamic languages easier to learn in my opinion.

So far I’ve found myself going through fits and starts of fun and frustration with Haskell. In other languages I’ve learned, I usually feel pretty confident after a week of messing around. But with Haskell, it feels like I’m just getting started. I think next time I’ll try to use it to implement some problems with more complicated algorithms and less finicky IO.

Disclaimer: All code is for demonstrative purposes only. Please don’t actually use Unicode math characters and emoji in real code. Random numbers are not produced in a secure manner. Don’t take articles on the internet like this one as an implementation recommendation. Don’t even assume it’s correct! As always, never roll your own crypto.
