NuNet Development Update: February 2023

NuNet Team
Published in NuNet · Feb 7, 2023

Greetings NuNetopians,

The NuNet development team is making final adjustments to our first use cases as we prepare for the release of Public Alpha this year. We are also testing the network externally with community members on our Discord server. Read on for all of this month’s development notes.

Cardano SPO Computing February Report

During testing, we discovered that the remote shell was becoming too unreliable, and an issue was created two weeks ago to report it. The final remote shell ticket remains in review and unmerged because of this unreliability.

Since we were already transitioning to a new p2p networking library (libp2p), we expect the communication issues we faced with the current library (py-ipv8) to be solved by the new one. We have therefore expedited the transition (a new milestone) to replace the networking stack from the ground up before attempting the remote shell again.

So far the transition is going well. This week alone, the new networking layer was integrated, and we are now working on the DHT and NAT relay functionality.

Within the next two to three weeks, we will replicate all functionality from the previous library on the new one, deprecate the old one, create a release, and test the new networking.

As always, all our ML on SPO-related issues can be found here.

GPU ML Cloud February Report

For our GPU ML decentralized cloud, we can now resume interrupted ML jobs in either PyTorch or TensorFlow. As a real-world example, we tested training an open-source alternative to ChatGPT, known as PaLM+RLHF, and successfully interrupted it and resumed it on other machines.

It is important to note that no additional modifications were made to the existing ML on GPU service workflow to test this real-world example; it ran like any other ML training program with its dependencies.

We are immensely happy to share that different machines on our network can use checkpointing to resume ML training or other long-running computations. We have also designed an ML on GPU test protocol for community testing.
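The checkpoint/resume pattern described above can be sketched in a few lines. This is a minimal, stdlib-only illustration, not NuNet's implementation: the file name, state layout, and training loop are all hypothetical stand-ins (a real PyTorch job would serialize framework state with `torch.save`/`torch.load`, and TensorFlow with `tf.train.Checkpoint`). The key ideas are writing checkpoints atomically and making the training loop start from whatever step the last checkpoint recorded, so any machine holding the checkpoint file can carry the job forward.

```python
import json
import os
import tempfile

CKPT_PATH = "train_state.json"  # hypothetical checkpoint file


def save_checkpoint(state, path=CKPT_PATH):
    # Write to a temp file and rename atomically, so a crash
    # mid-write can never leave a corrupt checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)


def load_checkpoint(path=CKPT_PATH):
    # Resume from the checkpoint if one exists; otherwise start fresh.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"step": 0, "loss_history": []}


def train(total_steps=10, checkpoint_every=3):
    state = load_checkpoint()  # picks up wherever the last run stopped
    for step in range(state["step"], total_steps):
        loss = 1.0 / (step + 1)  # stand-in for a real training step
        state["step"] = step + 1
        state["loss_history"].append(loss)
        if (step + 1) % checkpoint_every == 0:
            save_checkpoint(state)
    save_checkpoint(state)
    return state
```

Because all progress lives in the checkpoint file, "moving the job to another machine" reduces to transferring that file and calling `train()` again on the new host.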

Our ML on GPU service is ready for production-level usage. Our device management service (DMS) can now send ML progress logs back to the web app.

References:

https://gitlab.com/groups/nunet/ml-on-gpu/-/issues/?sort=created_date&state=all

https://gitlab.com/groups/nunet/device-management-service/-/issues/?sort=created_date&state=all

Compute provider testing protocol for ML on GPUs · NuNet documentation wiki on GitLab

NuNet Is Hiring!

NuNet currently has a number of open positions for various roles within the team. If you have the skills and desire to join us in our journey, you can find more information and contact us through our career page.

About NuNet

NuNet lets anyone share and monetize their computing resources, turning cloud computing power from a centralized service into an open protocol powered by blockchain. Find out more via our official channels.
