Fast and Portable Llama2 Inference on the Heterogeneous Edge. The Rust+Wasm stack provides a strong alternative to Python in AI inference. Published in Stackademic, Sep 26, 2023.
How to set up your Jetson device for LLM inference and fine-tuning. The Jetson AGX Orin 64GB device is the best money can buy for Llama 2 inference. Here is how. Oct 2, 2023.
Why did Elon Musk say that Rust is the language of AGI? And why WasmEdge is on the critical path of AGI's adoption of Rust! Published in Stackademic, Aug 6, 2023.
Running llama2.c in WasmEdge. llama2.c runs Llama 2 models without the bloated and pesky Python dependencies, which means we can get it running in WasmEdge too. Jul 24, 2023.
How to Create a Serverless ChatGPT GitHub App in 5 Minutes. A ChatGPT bot to respond to your GitHub Issues. Published in Artificial Intelligence in Plain English, Mar 21, 2023.
A Complete Guide to DCO for Open Source Developers. When you contribute code to an open-source project, such as WasmEdge, you are often asked to sign a DCO (Developer Certificate of Origin)… Oct 31, 2021.
A lightweight, safe, portable, and high-performance runtime for Dapr. WebAssembly programs are embedded into Dapr sidecar applications, and hence can be portable and agnostic to the Dapr host environment. Published in Wasm, Oct 31, 2021.
Create High-Performance JavaScript APIs using Rust. WasmEdge brings together Rust's performance and JavaScript's ease of use. Published in JavaScript in Plain English, Oct 22, 2021.
Cloud-native WebAssembly in Service Mesh. WasmEdge could be the lightweight runtime for sidecar-based microservices or a script runtime for API proxies. Published in Wasm, Oct 11, 2021.