Llama 3 is the first open source model I’ve found that can write accurate Rust code.

Llama 3’s Rust Skills Are Impressive

John Boero
TeraSky
Published Apr 29, 2024 · 6 min read

A lot of the public scoring of open models today is based on general knowledge. Many of these models are impressive for general use, but often even a large model lacks a skill I need or has training data too old to matter. This is most evident with Rust. Llama 2 introduced the CodeLlama models, which specialized in writing Python and other languages but in my experience could never generate working Rust code. In fact, not one of my favourite general-purpose models can generate current Rust code. At the speed Rust is evolving and deprecating features, it would be hard for anything but a fresh model to keep up.

I have a simple test for judging a model’s basic Rust-writing skills. GPT-4 can handle it without breaking a sweat, but until now I had not seen a privately run open model produce working Rust against today’s Rust ecosystem. CodeLlama 3 is coming soon and will hopefully be an even bigger boost for Rust developers, but for now I’ll share my observations with the general-purpose Llama-3-70B-Instruct. My standard test is to write a Rust app that reads a local JSON file and validates it against a local schema file. I always like my JSON documents to have disciplined schemas.

No JSON docs without schemas!

Below is a conversation working toward the right answer. The first attempt uses a slightly deprecated version of the validate method, which isn’t surprising given how fast Rust moves; even GPT-4 sometimes does this. It takes a bit of cajoling with compiler errors, but Llama 3 gets there in three attempts. In the past I’ve gotten there in two. Unfortunately, during training no compiler is available to test the code being fed into the model. That sort of feedback would make for the ultimate code model, but for now it just takes feeding compiler errors back into the context. These follow-up attempts extend the conversation, which always presented a challenge for earlier models trained with a 4K context limit. Read the full conversation below to see how Llama 3 flexes its 8K context and handles this:

Conversation

User: Hi Llama. Can you please write me some Rust code? I’d like a simple app that reads a local JSON file called “test.json” and a schema file “schema.json” and validates the test document against the schema, showing any unexpected values. Please also show the Cargo.toml file that should be used to build it.

Llama: Here is a simple Rust application that reads a local JSON file and validates it against a schema using the serde_json and jsonschema crates.

First, here's the Cargo.toml:

[package]
name = "json-validator"
version = "0.1.0"
edition = "2018"

[dependencies]
serde_json = "1.0.64"
jsonschema = "0.16.0"

And here is the Rust code in src/main.rs:

use std::fs;
use serde_json::{Value, json};
use jsonschema::{JSONSchema, Draft};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let test_data = fs::read_to_string("test.json")?;
    let schema_data = fs::read_to_string("schema.json")?;
    let instance: Value = serde_json::from_str(&test_data)?;
    let schema: JSONSchema = serde_json::from_str(&schema_data)?;
    if let Err(errors) = schema.validate(&instance, &Draft::Draft7) {
        for error in errors {
            println!("Error: {}", error);
        }
    } else {
        println!("JSON is valid");
    }
    Ok(())
}

This code reads the test.json and schema.json files into strings using fs::read_to_string. It then parses these strings into JSON values using serde_json::from_str.

Next, it validates the instance against the schema using jsonschema::JSONSchema::validate. If there are any errors during validation, they will be printed to the console.

Please note that you need to have Rust and Cargo installed on your system for this code to work. You can install them from the official Rust installation page: https://www.rust-lang.org/tools/install

Also, make sure test.json and schema.json files are in the same directory as your executable or provide full paths to these files.

Let me know if you have any questions!

User: Actually my Rust environment has deprecated the version of the “validate” method you use. It expects just one argument. Here is the error:

error[E0061]: this method takes 1 argument but 2 arguments were supplied
 --> src/main.rs:9:33
  |
9 | if let Err(errors) = schema.validate(&instance, &Draft::Draft7) {
  |                             ^^^^^^^^            ---------------
  |                                                 |
  |                                                 unexpected argument of type `&Draft`
  |                                                 help: remove the extra argument

Please try adjusting the code to make it work?

Llama: I apologize for the mistake! The validate method has indeed changed in recent versions of the jsonschema crate.

Here's an updated version of the code:

use std::fs;
use serde_json::{Value, json};
use jsonschema::{JSONSchema, Draft};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let test_data = fs::read_to_string("test.json")?;
    let schema_data = fs::read_to_string("schema.json")?;
    let instance: Value = serde_json::from_str(&test_data)?;
    let schema: JSONSchema = serde_json::from_str(&schema_data)?;
    if let Err(errors) = schema.validate(&instance) {
        for error in errors {
            println!("Error: {}", error);
        }
    } else {
        println!("JSON is valid");
    }
    Ok(())
}

I removed the &Draft::Draft7 argument from the validate method call. This should fix the compilation issue.

Let me know if you have any further issues!

User: Thanks but there is still an error. I don’t think serde_json::from_str can read directly into a JSONSchema.

error[E0277]: the trait bound `JSONSchema: serde::de::Deserialize<'_>` is not satisfied
  --> src/main.rs:10:30
   |
10 | let schema: JSONSchema = serde_json::from_str(&schema_data)?;
   |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `serde::de::Deserialize<'_>` is not implemented for `JSONSchema`
   |
   = help: the following other types implement trait `serde::de::Deserialize<'de>`:

Can you think of a way to rewrite this?

Llama: You’re absolutely right again!

The issue is that serde_json::from_str can't directly deserialize into a JSONSchema. Instead, we need to first deserialize into a JSON value using serde_json, and then create a JSONSchema instance from it.

Here's an updated version of the code:

use std::fs;
use serde_json::{Value, json};
use jsonschema::{Draft, JSONSchema};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let test_data = fs::read_to_string("test.json")?;
    let schema_data = fs::read_to_string("schema.json")?;
    let instance: Value = serde_json::from_str(&test_data)?;
    let schema_value: Value = serde_json::from_str(&schema_data).unwrap();
    let schema = JSONSchema::compile(&schema_value)?;

    if let Err(errors) = schema.validate(&instance) {
        for error in errors {
            println!("Error: {}", error);
        }
    } else {
        println!("JSON is valid");
    }
    Ok(())
}

In this version, we first deserialize the schema.json file into a JSON value using serde_json::from_str. Then, we create a JSONSchema instance from that value using JSONSchema::compile.

This should fix the compilation issue. Let me know if you have any further issues!

Results

Finally Llama was able to spit out what I wanted after two rounds of compiler errors. The code wasn’t so much wrong as obsolete, written against older crate versions or documentation.

Sometimes an LLM needs guidance from compiler errors.

The code now works for me on rustc 1.77.2. There are warnings about the way the crates are used, but it does exactly what I want. As a sample, we take the basic Person schema and try a negative age.

{
  "$id": "https://example.com/person.schema.json",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Person",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string",
      "description": "The person's first name."
    },
    "lastName": {
      "type": "string",
      "description": "The person's last name."
    },
    "age": {
      "description": "Age in years which must be equal to or greater than zero.",
      "type": "integer",
      "minimum": 0
    }
  }
}
{
  "firstName": "John",
  "lastName": "Doe",
  "age": -2
}
./target/debug/json_validator 
Error: -2 is less than the minimum of 0

This is something I couldn’t get from any CodeLlama or even GPT-3.5-turbo. Maybe someone else with better prompt engineering could get something working with obsolete Rust, but I could never make it happen against current standards. Models based on four-year-old training data don’t stand a chance writing Rust today.

Conclusion

General-purpose LLM ecosystems are maturing beautifully. Coding has always been the harder problem, since reasoning must be applied to training data that may or may not be current. Just like a user copying older code from StackOverflow may hit errors, a code-assistance bot can present you with invalid code. I would love to train a Rust LLM on a combination of current documentation and the actual compiler errors produced by generated code. That kind of data is much more complicated to train on, though. CodeLlama 3 is starting to trickle into the ecosystem and I can’t wait to test drive it with Rust challenges.
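As a toy illustration of that feedback loop (a std-only sketch, nowhere near a training pipeline): shell out to rustc, capture its diagnostics, and hand them back as the next prompt. The file name and helper function here are hypothetical.

```rust
use std::process::Command;

// Hypothetical helper: compile a model-generated source file and return the
// compiler diagnostics, or None if it built cleanly.
fn compiler_feedback(source_file: &str) -> Option<String> {
    let output = Command::new("rustc")
        .args(["--edition", "2021", "--emit=metadata", source_file])
        .output()
        .expect("rustc should be on PATH");
    if output.status.success() {
        None // clean build: nothing to feed back to the model
    } else {
        Some(String::from_utf8_lossy(&output.stderr).into_owned())
    }
}

fn main() {
    // A deliberately broken snippet, as a model might produce.
    std::fs::write("generated.rs", "fn main() { let x: i32 = \"oops\"; }").unwrap();
    if let Some(errors) = compiler_feedback("generated.rs") {
        // In a real loop this text would be appended to the chat context
        // before asking the model to try again.
        println!("Feeding back to the model:\n{errors}");
    }
}
```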
