Designing a programming language, continued

Aidan Fitzgerald
6 min read · Aug 5, 2015


More thoughts about my new-language idea, which eevee code-named “Sylph” in the post that inspired my previous post.

The table data type

Some readers in Hackathon Hackers asked me for more details about the table data type, including how it will be implemented. As in Java, it shouldn’t really matter to the end user how features are implemented under the hood, but it’s good to know.

I’m thinking that each table will consist of a series of hashtables, one for each column plus one to look up the columns, and a giant 2-D array where the data goes. The column lookup hashtable maps column names (strings) to the addresses of the column hashtables, each of which maps a value of that field to the address of the tuple (row) containing it. The number of buckets in the column lookup hashtable should be the greatest power of 2 less than or equal to the number of columns. The number of buckets in each column hashtable should be the greatest power of two less than or equal to the 3/4 power of the maximum number of rows you expect to store in the table; if you don’t set that maximum ahead of time, e.g. with a constant in the code or a flag when running the compiler in the terminal, the compiler can figure it out… somehow. The idea is to strike the right balance between lookup speed and memory usage, and I’ve heard that memory is more precious than CPU cycles….
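To make that concrete, here’s a minimal Python sketch of the layout. The class and method names are mine, Python dicts stand in for the fixed-bucket hashtables (so the bucket-count tuning above doesn’t appear), and row indices stand in for tuple addresses. It also assumes field values are unique per column, since a dict keeps one address per key.

# Hypothetical sketch of the table layout described above.
class Table:
    def __init__(self, columns):
        self.rows = []                                # the "giant 2-D array"
        self.columns = list(columns)
        # Column lookup hashtable: name -> that column's hashtable
        self.indexes = {name: {} for name in columns}

    def insert(self, *values):
        row_id = len(self.rows)
        self.rows.append(list(values))
        for name, value in zip(self.columns, values):
            # Column hashtable: field value -> address of its tuple
            self.indexes[name][value] = row_id

    def lookup(self, column, value):
        return self.rows[self.indexes[column][value]]

people = Table(["name", "age"])
people.insert("Ada", 36)
people.insert("Alan", 41)
print(people.lookup("name", "Alan"))   # ['Alan', 41]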

Computational features — GPUs, slicing

In my previous post, I promised that I would write a follow-up about the computational features I want in Sylph. Most importantly, I want the compiler to automatically move repetitive, high-volume computations to the GPU. As far as I’ve seen, it’s usually loops that get optimized in this way, but it’s hard to tell if the loop can actually be parallelized, or if each step depends on the last and therefore the steps must be done in series.

List comprehensions should be easy to parallelize, actually, as well as functions that operate on lists. I’ll show you how I think it’ll work with the dot product example from my previous essay:

def * = ~(x:vector(n) ~ y:vector(n)):
    return sum [x[i] * y[i] for i in range(n)]

First, the list comprehension is computed. That is, 3n spaces are allocated in the GPU’s memory, and the first 2n spaces are filled with the values in the x and y vectors. Then, n GPU threads multiply the n pairs of components of x and y and put the results into the last n spaces — a vector, which we’ll call z.

Second, the components of z are summed using a divide-and-conquer algorithm in which every two components are added together, and then every two of those sums are added together, and so on until one value remains. This completes in O(log n), whereas the standard algorithm of adding the numbers one at a time to a running total takes O(n).
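Here are both steps modeled sequentially in Python (the function name is mine). On the GPU, every multiply in step 1 and every addition within one pass of step 2 would happen simultaneously, which is where the O(log n) comes from.

# Sequential model of the two GPU steps described above.
def dot(x, y):
    # Step 1: n independent multiplies (the list comprehension).
    z = [a * b for a, b in zip(x, y)]
    # Step 2: pairwise tree reduction; each pass halves the list.
    while len(z) > 1:
        nxt = [z[i] + z[i + 1] for i in range(0, len(z) - 1, 2)]
        if len(z) % 2:        # odd element rides along to the next pass
            nxt.append(z[-1])
        z = nxt
    return z[0] if z else 0

print(dot([1, 2, 3], [4, 5, 6]))  # 32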

Hardware note: On systems with dedicated graphics cards, offloading a loop or comprehension to the GPU should only be done if all the steps of shuttling the data to the GPU, computing the result, and shuttling it back to the CPU are faster or more energy-efficient than just doing the whole calculation on the CPU. Systems with integrated graphics (and shared GPU memory) are better in this regard, because there’s no data to shuttle back and forth, so offloading pays off much more often — sorry, pro gamers!

Bit slicing

Another feature I’d like to include in Sylph is slicing numbers the same way you’d slice a list or string in Python. eevee writes:

I note that a lot of the use of bitwise operators (even in C) is for packing and unpacking bits into and out of a number. That’s ridiculous. You wouldn’t store a struct in a single huge integer and pluck parts out of it with bit ops; why do we tolerate doing that when the fields happen to be smaller than a byte? There should definitely be some kind of language support for this. We’re doing manual bit math just to feed the results to a compiler in 2015!

The syntax will resemble Python slice notation for lists and strings, except that instead of counting from left to right, we count from right to left — so index 0 is the units bit, index 1 is the 2's bit, index 2 is the 4's bit, ad infinitum. For example, rather than writing n & 0xFF to pull out the lowest byte, we write n[0:8]. Instead of (n & 0xFF00) >> 8, we write n[8:16].

These are simple examples. Those of you who are expert bit-pluckers are probably going like, meh, it’s not that hard to pull out consecutive bits with bitwise operators. But what if you want non-consecutive bits, or you want to reverse their order? As in Python, the third parameter of the bit slice (after a second colon) is how many places you jump to the left when plucking out bits. So doing [0:16:2] on 01010101 01010101 gives us 11111111. If it’s negative, you jump that many places to the right. So doing [0:16:-1] on 01010101 01010101 gives us 10101010 10101010.
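Here’s one plausible reading of those semantics as a Python function (the name is mine): a positive step walks left from the start bit, and a negative step yields the same bits in reverse order, which matches the [0:16:-1] example above.

# Hypothetical Python model of the proposed bit slice n[start:stop:step].
def bit_slice(n, start, stop, step=1):
    indices = list(range(start, stop, abs(step)))
    if step < 0:
        indices.reverse()     # negative step reverses bit order
    result = 0
    for out_pos, i in enumerate(indices):
        result |= ((n >> i) & 1) << out_pos
    return result

x = 0b0101010101010101
print(bin(bit_slice(x, 0, 16, 2)))    # 0b11111111
print(bin(bit_slice(x, 0, 16, -1)))   # 0b1010101010101010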

The bit slice is useful for the Cooley–Tukey FFT algorithm, in which the first step is rearranging the elements of the array so that the index of each element becomes bit-reversed (reversed within the log2(n) bits needed to index the array). We can write:

// In my previous post, I defined vector(n) as a list of n numbers.
def bitReversal(X:vector(n)):
    for i in range(n):
        swap(i)
    // Nested helper function!
    def swap(i):
        j = i[0:log2(n):-1]  // reverse i within the log2(n) index bits
        if i < j:            // swap each pair only once
            tmp = X[i]
            X[i] = X[j]
            X[j] = tmp

(I haven’t studied the FFT, but I’m dimly aware that the bit-reversal permutation is involved. I did study the DCT, though.)

I might also extend the bit slice to floating point numbers, where negative indices represent the 1/2's, 1/4's, 1/8's, 1/16's, etc.

Also, if any of the numbers in the expression n[i:j:k] are omitted, the default values should be i = 0, j one past the index of the highest ‘on’ bit (so the slice takes in the whole number), and k = 1.

I also want operators for the number of ‘on’ bits, the number of ‘off’ bits, and the number of significant bits (bits between the lowest and highest ‘on’ bits, inclusive).
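In the meantime, these are one-liners in Python. The names are mine, and counting ‘off’ bits within the significant span is my assumption, since a number has infinitely many leading zeros:

# Hypothetical names for the proposed bit-counting operators.
def on_bits(n):
    return bin(n).count("1")

def sig_bits(n):
    # Bits between the lowest and highest 'on' bits, inclusive.
    return n.bit_length() - (n & -n).bit_length() + 1 if n else 0

def off_bits(n):
    # 'Off' bits within the significant span (an assumption on my part).
    return sig_bits(n) - on_bits(n)

print(on_bits(0b101000), sig_bits(0b101000), off_bits(0b101000))  # 2 3 1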

eevee also wants to get rid of the legacy & | ^ ~ entirely, partly because they’re rarely used (and when they are, they don’t work like they do in C) and partly to free up space on the keyboard for more sophisticated stuff, like | for Unix terminal–style list comprehensions. I already use ~ for lambdas — why not also use |x| for absolute value, n! for factorial, p | q for divisibility (or equivalently, Perl 6’s q %% p), ^@ and friends for the control characters, and so on? I’m tempted to add the bit ops back in, but they’d have to be defined in such a way that they always work, even on floats… ehhhhh…

More logical operators

We should have logical operators for conditionals and biconditionals, as well as modules that overload the logic ops for fuzzy logic and logic graphs. As with Python’s and, or, and not, these will be keywords, and the syntax will look something like one of these:

// These are all equivalent
if p then q
only if q then p
p only if q
q if p
// These are biconditionals
p if and only if q
p iff q
p == q
// tempting...
not (p xor q)

Undefined should be a valid boolean, too — for example, if p then q should return undefined iff p is false. Its equivalent in fuzzy logic is NaN.

And you need xor in real life, so it makes sense that someone will want it in their fuzzy logic engine. It’s not in Python, though… screw that, I’ll just add it (back) in.
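As a sanity check, here’s that behavior modeled in Python, with None standing in for undefined. Treating an undefined input as contagious is my assumption; the rule for implies comes straight from the paragraph above.

# Toy Python model of the proposed three-valued logic.
def implies(p, q):
    if p is None:
        return None          # assumption: undefined antecedent -> undefined
    return q if p else None  # per the rule above: undefined iff p is false

def xor(p, q):
    if p is None or q is None:
        return None
    return p != q

def iff(p, q):               # p if and only if q == not (p xor q)
    x = xor(p, q)
    return None if x is None else not x

print(implies(True, False))  # False
print(implies(False, True))  # None (undefined)
print(iff(True, True))       # True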

Plus a ? postfix operator to cast stuff to booleans, like in C (x → x != 0). Ruby has something like that.

And then a ! after a boolean is like assert in Java. If the boolean is true, the program prints the expression that yielded it. If it’s false or undefined, it still prints the expression, but to stderr instead, and then terminates. Let me try it:

false!
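And here’s roughly what that would do, modeled in Python — assuming the compiler hands the runtime the source text of the expression, which is beyond what a plain function can do:

import sys

# Rough Python model of the proposed postfix `!`; `source` is the text
# of the expression, which the Sylph compiler would capture itself.
def bang(value, source):
    if value is True:
        print(source)
    else:                      # False or undefined (None)
        print(source, file=sys.stderr)
        sys.exit(1)

bang(False, "false!")          # prints "false!" to stderr, then terminates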
