Grok 1.0 Model Stats

Michael Humor
2 min read · Mar 31, 2024


xAI’s Grok 1.0 model (see the GitHub repo) has 64 layers, an 8K context length, and 314B parameters in total.

The model architecture is a Mixture of Experts (MoE) with 8 experts, 2 of which are active per token. It uses 48 attention heads for queries, 8 attention heads for keys/values (grouped-query attention), and an embedding dimension of 6,144.
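
The head-related values that llama.cpp derives and prints further down (head size, GQA ratio, key/value width) follow directly from these figures. A minimal Python sketch of that arithmetic:

# Derived attention shapes, computed from the numbers above.
n_embd, n_head, n_head_kv = 6144, 48, 8

head_dim = n_embd // n_head      # 128  -> n_embd_head_k / n_embd_head_v / n_rot
n_gqa    = n_head // n_head_kv   # 6    -> query heads per key/value head
kv_width = n_head_kv * head_dim  # 1024 -> n_embd_k_gqa / n_embd_v_gqa

print(head_dim, n_gqa, kv_width)  # 128 6 1024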

Notably, the model’s tokenizer uses a vocabulary of 131,072 tokens. By comparison, the Mixtral and Llama 2 models use only 32,000 tokens.
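
To check the vocabulary size yourself, here is a minimal sketch using the sentencepiece library (it assumes you have downloaded the tokenizer.model file shipped in the Grok-1 repository):

# Minimal sketch: inspect Grok-1's SentencePiece tokenizer.
# Assumes ./tokenizer.model was downloaded from the Grok-1 repo.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
print(sp.vocab_size())                                # expected: 131072
print(sp.encode("Mixture of Experts", out_type=str))  # the large vocab tends to need fewer tokens per string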

Here is more detail, taken from llama.cpp’s loader output for a GGUF conversion of the model:

llama_model_loader: - kv   0:                       general.architecture str              = grok
llama_model_loader: - kv   1:                               general.name str              = Grok
llama_model_loader: - kv   2:                           grok.block_count u32              = 64
llama_model_loader: - kv   3:                        grok.context_length u32              = 8192
llama_model_loader: - kv   4:                      grok.embedding_length u32              = 6144
llama_model_loader: - kv   5:                   grok.feed_forward_length u32              = 32768
llama_model_loader: - kv   6:                  grok.attention.head_count u32              = 48
llama_model_loader: - kv   7:               grok.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                        grok.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   9:      grok.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          grok.expert_count u32              = 8
llama_model_loader: - kv  11:                     grok.expert_used_count u32              = 2
llama_model_loader: - kv  12:                          general.file_type u32              = 22
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,131072]  = ["[PAD]", "[BOS]", "[EOS]", "[UNK]", ...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,131072]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - kv  23:                                   split.no u16              = 0
llama_model_loader: - kv  24:                                split.count u16              = 9
llama_model_loader: - kv  25:                        split.tensors.count i32              = 2115
llama_model_loader: - type f32: 257 tensors
llama_model_loader: - type f16: 64 tensors
llama_model_loader: - type q8_0: 128 tensors
llama_model_loader: - type q5_K: 64 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_loader: - type iq3_xxs: 880 tensors
llama_model_loader: - type iq3_s: 721 tensors
llm_load_print_meta: arch             = grok
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 131072
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 6144
llm_load_print_meta: n_head           = 48
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 64
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 32768
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 314B
llm_load_print_meta: model ftype      = IQ3_XS - 3.3 bpw
llm_load_print_meta: model params     = 316.49 B
llm_load_print_meta: model size       = 120.73 GiB (3.28 BPW)
llm_load_print_meta: general.name     = Grok
llm_load_print_meta: BOS token        = 1 '[BOS]'
llm_load_print_meta: EOS token        = 2 '[EOS]'
llm_load_print_meta: UNK token        = 0 '[PAD]'
llm_load_print_meta: PAD token        = 0 '[PAD]'
llm_load_print_meta: LF token         = 79 '<0x0A>'
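
The hyperparameters above are enough for a back-of-the-envelope parameter count. The sketch below assumes a gated FFN with three weight matrices per expert and untied input/output embeddings, and ignores the small router, norm, and bias weights; under those assumptions the total lands right at the 316.49 B reported by the loader.

# Rough parameter count from the logged hyperparameters (assumptions:
# gated FFN, three matrices per expert, untied embeddings, router/norm
# weights ignored).
n_vocab, n_embd, n_layer = 131072, 6144, 64
n_head, n_head_kv, head_dim = 48, 8, 128
n_ff, n_expert = 32768, 8

embed = 2 * n_vocab * n_embd                            # input + output embeddings
attn  = n_layer * (n_embd * n_head * head_dim           # Q projection
                   + 2 * n_embd * n_head_kv * head_dim  # K and V projections
                   + n_head * head_dim * n_embd)        # output projection
ffn   = n_layer * n_expert * 3 * n_embd * n_ff          # gate/up/down per expert

print(f"{(embed + attn + ffn) / 1e9:.2f} B")            # 316.49 B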
