PowerPool Community Call #1, 2024 RECAP — February 16, 2024

Mr FOS · Published in PowerPool · Feb 27, 2024

PowerPool has now completed its 1st Community Call AMA of 2024 discussing:

  • Gitbook Community Challenge announcement and objectives
  • PowerPool’s new use cases with PowerAgent v2
  • Testnet Program’s most recent development and progress
  • Partitura, Breadchain, and Daoism Systems’ progress and their added value for PowerAgent
  • VRF and its importance
  • Outlook for Automation: PowerBaskets and billing automation

If you missed the call, we’ve prepared this recap article for you.

Spaces recording: https://youtu.be/kCCuJwt8_Qk

PowerPool Speakers:

Gordon Gekko — Chief Strategist @GordonGekko_CVP

Vasily Sumanov — Director of Research @Vasily_Sumanov

Introduction to PowerPool

Gordon: Welcome everybody to this first introductory Community AMA for 2024. We plan to do at least four Community-only AMAs every year. As you know, we’ve been doing a lot of guest AMAs featuring developers who are already building on top of PowerAgent. But today is just a pure PowerPool Community AMA. We haven’t got a guest today. I’m joined by Vasily Sumanov, who’s our Director of Research. And he’ll tell you a lot about our most recent Gitbook Community Challenge.

While we’re waiting for more people to join this session, let me just — for those of you who don’t know — give an overview of PowerPool and PowerAgent. PowerPool is a fair launch DAO. So there are no VCs. It’s a global Community. We develop and operate the PowerAgent Automation Network, which currently involves hundreds of Keepers running our automation network client nodes to provide automation services on an increasing number of EVM chains. We are, and have been for some time, live on Gnosis Chain. We are also actually live on Ethereum mainnet. And we have a clear roadmap to roll out across most major EVM chains. Each of those EVM chain rollouts is going to be earning real native tokens for our Keepers. PowerPool is really a community that’s about earning, even on modest hardware and bandwidth. Our validators earn by being available and executing transactions faithfully.

Now that we have the PowerAgent Automation Network v.2 more or less production-ready, we are focusing on attracting Developers to code automated services that run as Templates on top of PowerAgent. We’re very, very keen to provide proof of concepts to Developers to say, look, you can earn native tokens directly to your own wallets by developing Templated automated services on PowerAgent. You don’t need to go out and find a project. You don’t need to have VCs. You don’t need to work for egotistical founders. And you don’t need silly tokens. You just write services that run on PowerAgent. And we’re going to be developing our DAppStore to push that vision and make Templated services available on trusted links that will not drain wallets.

So, really, whether you’re a validator or a developer and, ideally, some of you are both, this is a great Community to join. With the rate of innovation going as it is and the bull market arriving, there are tremendous opportunities. The purpose of these AMAs is to get more and more of you involved in running validators, writing templated services, and, as we’ll be describing, the key to getting more people involved is to make it easier for them to read through the documentation, access our Forums and socials, and get help from people who’ve already done it.

And that’s the principal theme, I think, of 2024: growing the size of the Community by making it easier for people to join. Once people see it, they’re going to get it. And once they get it, they will want to learn and learn fast. So we have to make that as easy as possible and be as helpful as possible. So, with that, I will introduce Vasily, our Director of Research, and he’ll start off, I think, by talking about the new Gitbook Grants Program. Vasily?

Gitbook Community Challenge

Vasily: Hello, nice to meet everyone. Thank you, Gordon, for our traditional introduction to PowerPool and our earning options for Keepers and Developers. I think we should start with the just-announced Gitbook grants, or awards, contest. We put this together over the last month, and there are many technical parts to it: all the instructions, node running guides, Job creation guides, et cetera. We are also just finishing some minor edits to the general pages, like the dashboard links where you can see all the statistics, and how to use the Explorer to track all the Jobs, executions, slashing events, and all this kind of stuff.

So what does this latest Gitbook contest mean? It means that if you’re from the PowerPool Community, if you’re running a node or developing some Jobs and you already have some experience, you can go to the Gitbook, check out the instructions and pages, and if you see the need for improvements, you can make them: provide suggestions, create a new page, for example, or submit some edits, and for this, you can get prizes. We have allocated $2,000 for the prize pool, which I think is quite a large amount for simply improving our instructions. So, if you have ideas on how to make our documentation and instructions easier for new users, more readable, just better, you can contribute and get your reward. It could amount to several hundred dollars for your suggestions, which I think is fair for these kinds of contributions. So, if you’re a node runner, you can now make the Gitbook instructions better for everyone and at the same time earn a bit from it.

We will publish a short article with all the details of the contest, including the prizes and everything you need, and you will be able to start contributing right away. A solid Gitbook is very important for PowerPool because we need to acquire new node runners and new Job creators, and if all the instructions and examples are made good enough for the average user, it will be really valuable.

If you have any questions, please ask. We will also publish an article, as I said before, and if you still have questions about the details, you can DM me or comment on Discord or Twitter. We will also create a special page in the PowerPool Forum, so you can share your edits and ideas there and keep the overall log transparent. We plan to make all the awards based on public submissions, so everybody will be able to read every submission and see what kind of work was rewarded. If you have questions, you can ask them there too; we will try to do this in the most transparent way possible.

PowerPool’s new use cases

Vasily: Today’s next topic is PowerPool’s new use cases. At this moment, we are working with Partitura to integrate our DCA strategy into the Partitura UI. It means that you will be able to create your DCA strategy on Arbitrum for buying Ethereum, and as an option, you can also immediately convert this Ethereum to the wrapped staked LST from Lido. So if you need to accumulate some amount of ETH on Arbitrum, you can do it effortlessly using the DCA strategy. You don’t need to know how to code, you don’t need to know the contract addresses, anything. Using just the PowerPool PowerAgent template, you launch the strategy from the UI, deposit some Ethereum, and it works from the contract factory. It should be the first case of a genuinely user-friendly UI for non-technical users. I think this pilot will go well, and I hope it will.

Soon, we will start adding other templated services, like limit orders and other templates. Currently, I think the point of complexity for a user is that you need to configure your Job in the PowerAgent user interface, which requires defining many parameters like the stake range, choosing types of contracts, and all this. It’s not complicated if you are experienced enough, but if you just want to press a button and have it work, the PowerAgent interface requires you to learn a lot. Automation is needed by users who are not technically accomplished, so we need to provide templated, plug-and-play solutions for them, and templated services will be that one-click solution. We are collaborating a lot with Partitura on this integration and have already provided them the code and some descriptions of it, so the first step toward templating an automated service like the DCA strategy is done, as I see it. So, Gordon, do you want to add something?
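To make the templated-DCA idea concrete, here is a minimal, hypothetical Python sketch of the logic such a Job automates: a resolver decides whether the buy interval has elapsed, and an execute step performs the purchase, optionally converting into an LST. The class, its method names, and the flat `lst_rate` conversion are illustrative assumptions, not PowerAgent’s actual API.

```python
class DCAJob:
    """Toy model of a templated DCA Job: buy a fixed amount at a fixed
    interval, optionally converting each purchase into an LST.
    Names and parameters are illustrative, not PowerAgent's interface."""

    def __init__(self, amount_per_buy, interval_seconds, convert_to_lst=False):
        self.amount_per_buy = amount_per_buy
        self.interval = interval_seconds
        self.convert_to_lst = convert_to_lst
        self.last_run = 0          # timestamp of the last executed buy
        self.eth_bought = 0.0
        self.lst_held = 0.0

    def resolver(self, now):
        # A keeper evaluates something like this off-chain to decide
        # whether the Job is executable right now.
        return now - self.last_run >= self.interval

    def execute(self, now, lst_rate=0.95):
        """Perform one DCA step if the interval has elapsed."""
        if not self.resolver(now):
            return False
        self.last_run = now
        if self.convert_to_lst:
            # swap the freshly bought ETH into a wrapped staked token
            self.lst_held += self.amount_per_buy * lst_rate
        else:
            self.eth_bought += self.amount_per_buy
        return True
```

A user-facing template would expose only `amount_per_buy` and the interval; everything else is handled by the automation network.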

Gordon: Well, regarding the Gitbook challenge, the forum for that is primarily our Community Forum. We need to get a lot more traffic going on our Forum, and there’s now a dedicated page where people can throw out ideas and even collaborate on ideas regarding improved documentation and so on. Earlier I made a mistake, actually: I talked about Gitcoin grants. We do also have a grants program on Gitcoin attracting developers, so both things exist, but Gitbook is the new effort to improve our documentation.

As regards potential new use cases, if you look on the Forum, you’ll see I’ve posted some new thoughts about future use cases for PowerAgent, which have nothing to do with DeFi, but instead partly with proving the concept that a developer can just put up a templated service and get paid straight into their wallets, no need for a token, no need for a project, no need for founders, and no need for VCs. I think this is a game-changer. We need to get it running, and once it’s up and running, the world will get it, trust me.

The other one that’s more generic is that nowadays there’s a lot of talk about the problems of deep fakes and fake video, fake audio, fake everything, all machine-generated, and the conclusion is always, oh, well, you need to secure it with a hash on-chain, and oh, by the way, identity is the same. Everybody’s fake, so now the only real people are secured with their identities hashed on-chain.

And then finally, there’s the fact that a blockchain really is a calendar. The blockchain tells you exactly when everything happens in time. So if you put those things together — the content, the people, and the calendar — you have all the data you need to make lots and lots of interesting automated calculations. For example, in the real world you have an electricity meter and a gas meter monitoring your consumption of something, but at the end of the day there’s a closed-source, proprietary database at the utility that sends you a bill once a month. As Web 2 comes across toward Web 3, the events and verification data all have to be on-chain, which raises the questions: how do we track events, prove things, and verify identities? How do we track usage, measure usage, and bill for usage? This is, in my opinion, a very important use case for an EVM automation network like PowerAgent.
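The metering-and-billing idea above can be sketched in a few lines: given timestamped usage events (on-chain, these would be contract events), a billing Job sums the usage that falls inside a period and prices it. The function name, event shape, and unit price here are all hypothetical.

```python
def compute_bill(events, period_start, period_end, unit_price):
    """Sum metered usage from timestamped events inside a billing period.
    `events` is a list of (timestamp, units) tuples; in practice these
    would be contract events read from the chain by an automation Job."""
    usage = sum(units for ts, units in events
                if period_start <= ts < period_end)
    return usage * unit_price

# three metered events: (block timestamp, units consumed)
events = [(100, 3), (200, 5), (950, 2)]
# the event at t=950 falls outside the period, so only 8 units are billed
bill = compute_bill(events, period_start=0, period_end=900, unit_price=0.01)
```

An automated Job would run this at the end of each period and push the payment on-chain, rather than a utility’s proprietary database doing it once a month.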

Now, I don’t know the answers. I don’t know how PowerAgent can step in and supply the primitives that are needed, but what I would say is that this use case is at least as big as DeFi if we can get it right. So the first step is, again, to put up proof of concepts, to propose something, get a grant, and go out and say, yeah, this is the first cut of the primitives such that PowerAgent can help people send bills or collect their money based on on-chain events, identities, and timestamps. This is a generalized extension of the idea that a developer who puts up a Templated automation service should get paid automatically. There’s no reason to use non-native tokens for all this. There’s no reason. If you put up a service, you should get paid in native tokens on that chain directly. Now, we all know that this didn’t happen in crypto in the beginning because there was no trustworthy automation, and nobody was going to go out of their way to manually make an extra payment of a tiny micro-royalty to the developer who developed the service.

But PowerAgent changes all that. It’s a new paradigm. So we need to get it out there and get some simple examples working because I guarantee you, once developers see that they can just spend a few hours in their spare time on Templates and get their money straight to their wallet forever, passive income, they’re going to do that, and they’re going to break down our doors to get to the documentation. But we’re going to need to make it easy for them. Right now the composition of the DAO is skewed. We have more validators than developers. But all those validators will need to pitch in and attract developers because why? Well, because every single one of those Templated automated services will be generating fees for the validators. So it’s a win-win all the way around.

And as I said before, I do not know all the answers to many of these use cases. What I do know is that not a lot of people are paying attention to these use cases because they require automation. It’s the same story with PowerBaskets. Not a lot of people pay attention to multi-token, auto-farmed PowerBaskets because they require a lot of sophisticated automation. But thanks to PowerAgent v.2, we now have that. So we now have the advantage and can think, ah, if I start with an infrastructure platform and automation services network like PowerAgent, what could I do? How should it work? And I think it’s tremendously exciting. There’s lots of stuff that will work totally differently in the future, principally because you now have trustworthy, decentralized automation. You did not have that in the beginning of crypto and blockchains. Everything was manual. Not so now. So it’s like turning a page on crypto and crypto development. But we need to start putting out proof of concepts that people can see and that work and that they can play with.

Right. So, Vasily, next we need to talk about the testnets and where we’re at with recruiting more validators/Keepers.

PowerAgent V2 Testnets program

Vasily: Yeah, so regarding the testnets, we now have more than 100 active Keepers on Gnosis Chain. The testnet on Gnosis Chain is running really well. We also have maybe seven to ten Gnosis Chain validators who are currently using PowerAgent Jobs to automate their reward claims. They have already started to create automation Jobs, and these Jobs really work. It’s possibly the first practical use case of PowerAgent so far. And, of course, it’s not DeFi. It’s more about infrastructure for validators, but it works and people get really excited by that. I want to mention that running on Gnosis Chain is extremely cheap, so you can automate almost everything and not think about the cost at all. Don’t worry about the cost, because it’s just pennies, really, or even fractions of pennies.

Regarding PowerAgent on the Ethereum mainnet, we keep our Sepolia testnet running, but currently we’re not paying much attention to it. Some time ago, we stopped testing our DAO-funded Jobs on Ethereum because Ethereum Jobs consume too much gas just for testing, and that means spending ETH up front without any real prospective return in terms of users. So we decided not to spend a lot more ETH on sponsoring more proof-of-concept mainnet Jobs or making grants to increase the number of Keepers on mainnet. Instead, we decided to allocate this ETH to Arbitrum, because this L2 could be much more popular in terms of automation. The DCA strategy that I mentioned before will be launched on Arbitrum, so anyone will be able to test it out there using a dedicated UI.

So Arbitrum is the focus of testing at the moment, but unfortunately, in terms of Keeper numbers, not many people can properly run an Arbitrum node. This is the main bottleneck there, because a Gnosis Chain node is really easy to run, but an Arbitrum node has a lot of issues. Even our team had several crashes of Arbitrum nodes and some database corruption, so we needed to relaunch, and it took a long time to sync. So, yeah, the RPC infrastructure is not as reliable as we want. There are also some decentralized RPC services that we tested, like DRPC or BlockPy, and we can suggest BlockPy for use by our validators/Keepers. If you want to run a Keeper node, but running your own Arbitrum RPC is too complicated for you, you can try BlockPy and also start earning ETH on Arbitrum.

We have also launched on Polygon, but Polygon node-running is even more complicated: first of all, a Polygon node is much heavier than an Arbitrum one, and it also requires a lot of knowledge to run. The second problem is that there are some technical issues on Polygon that we need to solve first. We found some bugs and tried to identify them properly, so we are waiting for some additional logs to integrate into the node to see how it works.

We are also working on launching PowerAgent on Scroll and Optimism, but both require us to first develop our own VRF (verifiable random function), since neither Optimism nor Scroll has any kind of built-in source of randomness. Our VRF is still under development, and it’s not an easy task. We hope to finish it soon, and after that, we will be able to launch PowerAgent on Scroll and Optimism. We have also had some quite good discussions with the Scroll team about supporting a Scroll Hackathon and other events once we launch PowerAgent there. They are waiting for us, and once we launch and have the verifiable random function live, I think we will get consistent support from Scroll. They may also use our VRF for other protocols on Scroll, so PowerAgent will be not only their automation provider but also an infrastructure provider, much as Chainlink is on some other chains. The VRF itself should therefore be a positive new use case for PowerPool, and it can also bring more attention to our Community.

Gordon: Polygon is certainly attracting a lot of attention now, because of their aggregation and their zero-knowledge technology, so I’m hopeful that we can get our Polygon launch organized because they have a pretty well-defined support system for bringing onboard people like us.

Vasily: Yep, but as I said before, the main issue with Polygon is that the Polygon node is quite heavy, and we don’t expect that a lot of Keepers will be able to run their own full node of Polygon. Possibly the decentralized RPC solutions will help with that, but this also requires testing, and we’re still waiting for some improvements to proceed with Polygon testing.

Gordon: And also, in the past, we talked to NEON EVM. NEON is interesting because they settle on Solana, and they provide a gateway to the Solana world. The problem before with NEON was randomness: we didn’t have a generalized source of randomness, but now we do.

Vasily: Yeah, NEON, I think, is a very impressive chain for launching, because it opens the gateway to Solana, as you mentioned before. So yeah, when we have the VRF done, I think we will also launch on NEON.

Gordon: Is there anything more in particular we need to say about the VRF? I mean, is it actually ready to go? Does it work on any EVM chain?

Vasily: So, you know, the main problem with the VRF is that we need some elliptic-curve operations, and the library that we used before was not suitable for that, so we found some other libraries in JavaScript. We’re working with them, but they are much more complicated than the initial library. So if we have some JavaScript developers in the Community, we’ll be happy to get some support from them. But for now, we are trying to handle it on our own.
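To make the VRF discussion concrete, here is a deliberately simplified Python sketch of the prove/verify interface. An important caveat: a real VRF (for example, an elliptic-curve VRF) lets anyone verify the proof with only the prover’s public key; this toy version substitutes an HMAC with a shared secret purely to show the data flow, which is exactly the limitation that the elliptic-curve machinery removes.

```python
import hashlib
import hmac

def vrf_prove(secret_key: bytes, alpha: bytes):
    """Toy stand-in for a VRF prover. A real VRF derives the proof from
    an elliptic-curve secret key; here an HMAC plays that role purely to
    show the (output, proof) shape of the interface."""
    pi = hmac.new(secret_key, alpha, hashlib.sha256).digest()   # "proof"
    beta = hashlib.sha256(pi).digest()                          # pseudorandom output
    return beta, pi

def vrf_verify(secret_key: bytes, alpha: bytes, beta: bytes, pi: bytes) -> bool:
    """In a real VRF this step needs only the *public* key; with HMAC we
    must reuse the secret, which is why production VRFs use curves."""
    expected_pi = hmac.new(secret_key, alpha, hashlib.sha256).digest()
    return (hmac.compare_digest(pi, expected_pi)
            and hashlib.sha256(pi).digest() == beta)
```

The point of the interface is that `beta` is unpredictable before `pi` is revealed, yet anyone can later check that it was computed honestly — which is what keeper selection on chains without a built-in randomness source needs.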

Partitura, Breadchain, and Daoism Systems progress

Vasily: Our main activity, besides all this technical work on the PowerAgent launch, is on the Daoism Systems, Partitura, and BreadChain integrations.

We’ve had several research calls with BreadChain. Currently, they’re trying to build an automated Curve pools strategy for their $BREAD token, and we’re trying to help them with that. They will first check it on the Sepolia testnet, and if it works well, they will launch it on Gnosis Chain. So we are very closely in touch with BreadChain.

Regarding Partitura, I said before that we’re integrating with them, and we plan to add more and more Templates there.

Daoism Systems’ project is a work in progress; I cannot say anything today because, to be honest, I don’t know the latest news from them, so I need to check. They have been quite slow in development, and hopefully they will speed up soon.

Outlook for Automation: PowerBaskets

Vasily: Yeah, I think we can move on to discussing automation narratives, which are really interesting in this space right now. What will be the future narrative for automation? What do you think? We see all this EigenLayer re-staking, the industry is buzzing, and we have a bull market fueling a lot of activity.

Gordon: As you know, I believe that the best way to ensure a heavy workload on mainnet is to launch diversified PowerBaskets of whitelisted LSTs and LRTs. We’re getting to the point now where TradFi ETFs are coming in, and the TradFis are going to want to push not just spot ETH, but they’re going to want high-yielding ETH. So it’s only a matter of time before TradFi is pushing diversified baskets of LSTs.

And restaking with LRTs, you know, each LRT has idiosyncratic risks depending on the underlying AVSs. In this sense, if LSTs are like ETH bonds, then LRTs are like ETH junk bonds. But junk bonds in TradFi are always bought by retail as diversified funds. People buy diversified junk bond funds, not individual issues.

Diversification in LSTs and especially LRTs pays a huge premium. Lower risk and higher returns, with potentially better liquidity for the basket tokens.

I think there’s a PowerBasket opportunity in LRTs that is even bigger than the opportunity in LSTs. LRTs are actually another source of extrinsic yield on LSTs. If you have LSTs, to earn extrinsic yield you can LP them by putting them in correlated pairs with ETH and stuff like that; it’s not too bad. Or you can lend them. But now, if you have LSTs, you can also choose to re-stake them. And so part of the whole automation game is to automate the process by which a diversified basket of LRTs with flexible weights is managed, and also to automate the process by which a diversified basket of LSTs can optimize its extrinsic yield by including allocations to another sub-basket of LRTs. Launching just these two PowerBaskets on mainnet, in my opinion, would guarantee a massive Keeper Job workload on mainnet. LST and LRT PowerBaskets are actually the ultimate DeFi applications for automation. And TradFi is going to end up trying to imitate that off-chain. And who knows, someday TradFi might realize that our version is better than theirs. But that’s obviously several years in the future. Nobody questions that LSTs and LRTs need to be put in diversified baskets. It’s literally not a question.
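The weight-management step that such a PowerBasket would automate can be sketched as a simple rebalancing computation. The function below is a toy illustration, not PowerPool’s implementation: it computes how much value of each token to buy (positive) or sell (negative) so the current holdings match the target weights.

```python
def rebalance_trades(holdings, prices, target_weights):
    """Given current token holdings, prices, and target basket weights,
    return the value to buy (+) or sell (-) of each token so the basket
    matches its target allocation. Purely illustrative of the kind of
    computation a basket-rebalancing Job would automate."""
    values = {t: holdings[t] * prices[t] for t in holdings}
    total = sum(values.values())
    return {t: target_weights[t] * total - values[t] for t in holdings}

# a basket holding only stETH that targets a 50/50 stETH/rETH split
trades = rebalance_trades(
    holdings={"stETH": 10.0, "rETH": 0.0},
    prices={"stETH": 1.0, "rETH": 1.0},
    target_weights={"stETH": 0.5, "rETH": 0.5},
)
```

In production, the interesting (and automation-heavy) part is doing this continuously for many tokens, with flexible weights, while pooling many investors’ deposits into a few large transactions.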

There were lots of questions about the value of diversification in the early days of DeFi, especially about general ‘indexes’ like the DeFi indexes and Index Coop and stuff like that. But now the opportunity is with LRTs and LSTs, which are already the biggest DeFi instruments in terms of TVL, and are likely to stay the biggest, forever. By now, much of Ethereum staking — soon at least half of it — is getting turned into LSTs and LRTs. So that’s just vast. This is billions and billions of TVL in the format of LSTs and LRTs. And that’s going to continue. Eventually, even TradFi is going to buy them. So these PowerBaskets will be by far and away the biggest DeFi application for automation by TVL. And these PowerBasket transactions are big, pooling hundreds and thousands of investors’ LSTs and LRTs, spreading even Ethereum mainnet gas costs so widely that they cease to matter. With PowerBaskets, nobody cares about gas costs on mainnet anymore. It just doesn’t matter. So that means big, fat automation fees for all of our Keepers running on mainnet. With both LST and LRT PowerBaskets running on mainnet, having enough fees to justify lots of high-earning Keepers on the Ethereum mainnet is no longer an issue. And Keepers can invest the ETH automation fees they earn back into our PowerBaskets.

Okay. Are we getting any questions yet?

Questions from the Community

Q1: Would this also require an IoT infrastructure, like Helium or similar?

Gordon: I don’t know if the question is about billing or what, but… I mean, you couldn’t really launch an IoT network if you didn’t have your own billing arrangement sorted out. It’s kind of built into IoT network infrastructure.

I mean, there may be synergies, but IoT networks in general are not EVM, so we couldn’t really run in conjunction with an IoT network. They’re built from the ground up to do that, whereas the EVM wasn’t. The kind of billing I was talking about is Web 2 stuff that is coming across to Web 3 on EVM, because they now have to work with on-chain validation and metering of everything.

The whole reason for PowerAgent is that the EVM design was incomplete. Turing-complete, but functionally incomplete. It simply skipped the whole idea of canonical automated execution of contracts. I don’t know why that happened, but it happened. And that’s why PowerAgent is the missing piece. But the IoT guys build their specialized networks from the ground up, so I suspect they’ve got their billing already sorted out. So my comments about billing and charging are pretty much confined to the EVM ecosystem, because PowerAgent essentially makes the EVM more functionally complete, and canonical automation is increasingly desirable and needed.

Q2: Do you plan to cover other chains, like Avalanche, etc?

Vasily: Yeah, I think we plan to eventually cover all EVM-compatible chains — all significant ones where a user base exists and where automation is required. Of course, Avalanche is also on the radar, so after the Ethereum launch, I think Avalanche will also be in our plans.

Gordon: The objective is to go to all EVM (Ethereum Virtual Machine) L1 chains and L2 layers that we can, that have a population and have usage. And Berachain — I don’t know, is Berachain EVM?

Vasily: As far as I know, yes.

Gordon: Okay, so Berachain, in theory, we can deploy on Berachain.

Vasily: Yep, but we need to finish developing the VRF (verifiable random function) and deploy it on Scroll and Optimism first. This is the main focus.

Gordon: In terms of the sequencing of the EVM chain rollouts, it’s complex, because we have to look at technical factors on the one hand, and we have to look at ecosystem maturity and demand for automation on the other, and sometimes also grants and encouragement and support that we get from the chain. So all those factors go into deciding which is the next chain and which is the next chain after that. But yes, in theory, nothing stops us from deploying on Berachain.

Q3: Do you think it would be useful to have more basic node running guides for those who are not developers but want to seize your passive income options?

Gordon: Yes, absolutely. The purpose of the new Gitbook grants is to pay people to write all that. People who’ve been through it, people who’ve done it themselves, we’re giving big grants for people who generate that kind of information because we need to widen our funnel. We need to make it easier to become a Keeper. We need to make it easier to become a Template developer.

Q4: Can you elaborate on the academic research aspects related to PowerAgent V2’s future functionality?

PowerPool research papers are available on the Project Wiki.

Vasily: Yeah, so let’s discuss our academic research. The first key problem that we started to solve back in the day concerned the staking and slashing mechanics. The question was how to make the network fault-tolerant against attacks and Keeper misbehavior. Since transaction signing is a deterministic event, the network can always check whether a transaction has been signed properly, so it can easily detect whether a given Job has been done or not.

The first point that we worked on was creating the staking rules: how many tokens each Keeper stakes, and how stake-based weighting works. We also worked on algorithmic pricing, i.e., how a Keeper’s execution fees vary with the size of its stake. A bigger stake should correspond to bigger execution fees, of course, because bigger stakes pose bigger risks to Keepers, to node-runners: there is stake volatility, and there is also the possibility of slashing. If a Keeper with a high stake gets slashed, it loses more tokens than Keepers with lower stakes, so Keepers need to be paid for this risk. At the same time, a higher-staked Keeper that is paid for the risk also carries a bigger responsibility, so we need to bind the risk and the payment together.
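As a toy illustration of the algorithmic-pricing idea, the sketch below makes a Keeper’s execution fee grow with the stake it puts at risk above the minimum. The linear curve and every parameter name here are assumptions for illustration; the actual pricing function is defined in PowerPool’s research papers.

```python
def execution_fee(base_fee, stake, min_stake, premium_per_token=0.0001):
    """Toy pricing rule: the fee grows linearly with the stake a Keeper
    puts at risk above the network minimum, compensating it for stake
    volatility and slashing exposure. Linear form is an assumption."""
    at_risk = max(stake - min_stake, 0)
    return base_fee + at_risk * premium_per_token
```

So a Keeper staking 5,000 tokens against a 1,000-token minimum earns a higher fee per execution than one staking exactly the minimum, reflecting the larger amount it can lose to slashing.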

The slashing mechanism was developed by the technical team, and slashing is quite an important part of the mechanism design. Currently, we define a slashing epoch: if the randomly selected Keeper doesn’t sign the transaction on time and properly, there will always be at least one other Keeper that can slash the failing Keeper and complete the transaction for the fees. The rotation of slashers is also based on the slashing epoch.
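The fail-over rule just described can be sketched as a tiny settlement function. This is a hypothetical model, not production code: the assigned Keeper earns the fee if it executes, and if the rotated slasher completes the Job instead, the assigned Keeper loses a flat slash amount while the slasher takes the fee.

```python
def settle_job(assigned, executor, stakes, fee, slash_amount):
    """Toy model of the slashing rule: the assigned keeper earns the fee
    if it executes; if a different keeper (the rotated slasher) completes
    the job instead, the assigned keeper is slashed and the slasher earns
    the fee. The flat slash amount is an illustrative assumption."""
    balances = {k: 0.0 for k in stakes}
    if executor == assigned:
        balances[assigned] += fee
    else:
        stakes[assigned] -= slash_amount   # failing keeper loses stake
        balances[executor] += fee          # slasher takes the job's fee
    return stakes, balances
```

The incentive this creates is the point: being offline during your epoch is strictly worse than executing, because someone else is always waiting to profit from your failure.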

We also explored MEV, but we are not professional MEV bot creators or MEV protection builders, so our main approach is to use Flashbots and the MEV blockers that are already available. We try not to build our own MEV protection solutions; we use the best ones available in the market and collaborate with other teams on that, because building in Web3 is more about cooperation and using the proper building blocks. No one needs to invent everything.

We can and will use ZK proofs and the other tooling related to off-chain conditions, computations, and resolvers that is currently under development. We think that once all these building blocks exist, they will enable a lot of new use cases. We are also closely monitoring other projects in the ecosystem, and when there is anything we can use, we will use it. For example, we avoid building our own ZK stack, because it’s complicated and requires a lot of time.

Before, when PowerAgent v2 was in its first test release, the Keeper selection algorithm was just round-robin. What we have now combines random Keeper selection with slashing, and we also have algorithmic pricing. We have almost finished developing stake weighting. It’s not in production yet, but most of it has already been developed, so that Keepers with higher stakes have a bigger probability of being selected for particular tasks, within the staking range desired by the Job owner.
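A minimal sketch of that selection rule, under assumed names and with Python’s PRNG standing in for the network’s randomness source: filter Keepers to the Job owner’s stake range, then draw one with probability proportional to stake.

```python
import random

def select_keeper(keepers, stake_range, rng=random):
    """Toy stake-weighted random keeper selection: restrict keepers to
    the job owner's desired stake range, then pick one with probability
    proportional to stake. The production algorithm (and its randomness
    source, e.g. a VRF) differs; this only sketches the selection logic."""
    lo, hi = stake_range
    eligible = [(k, s) for k, s in keepers.items() if lo <= s <= hi]
    if not eligible:
        return None   # no keeper satisfies the job's stake requirements
    names, stakes = zip(*eligible)
    return rng.choices(names, weights=stakes, k=1)[0]
```

This combines the two ideas from the paragraph above: the Job owner constrains the stake range, and within that range higher-staked Keepers are proportionally more likely to win the assignment.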

All of this is based on published original research that has already brought a lot of improvements, and hopefully it will bring even more in the future. If someone wants to contribute to the research, we’d be happy to offer co-authorship on papers, or collaborate on simulations and other work. We’re totally open to that. All our code is open source.

Gordon: There’s a tremendous amount of original research underpinning PowerAgent because to be really decentralized, autonomous, trustworthy, to get all of the features of our value proposition, almost every one of those features required research, and nobody else does any of this. This was a totally new problem specification. Other automation solutions are centralized closed-source solutions, with designated, whitelisted nodes and predictable task assignment. They’re basically just re-implementations of traditional networked architectures. They’re not built from the bottom up to be blockchain-native and match the decentralization ethos of the Ethereum community, or the Ethereum Virtual Machine. So, we were the only ones. There’s nothing else like PowerAgent, and probably never will be.

That’s probably the most important thing you’re ever going to hear me say. I just don’t think anyone else can do this, as long as we keep up our growth and keep building our service network. It just takes too long. What that means, of course, is that if you’re sitting there as an investor or degen, listening to VCs, influencers, and shillers hyping airdrops for the 995th L1, while DA layers talk about the millions of L2s, rollups, and AppChains that will be running, you start to realize that, actually, of all these other things, L1 ledgers, L2 scaling layers, rollups, AppChains, there are already far too many.

I believe that other than Ethereum, the only chain tokens that really are interesting as investments are the service network chain tokens. There’s Chainlink, obviously, which is very, very broad, but not optimized for credibly-neutral automation: it is closed source, not permissionless, and focused on being a broad multi-chain, multi-service, one-stop platform for TradFi. There’s Gelato, which has become a ‘roll-up as a service’ platform, largely because they don’t have anything like our decentralized, permissionless, randomly assigned, stake-weighted automation technology. They operate only with their own heavy whitelisted nodes and have no slashing or randomness. They are focused on helping big popular protocols become rollups, especially AppChains. They cannot act as credibly neutral automation agents and are not aligned, as PowerAgent is, with decentralization and modest hardware/bandwidth home staking with no single points of failure.

Service networks like PowerAgent are very interesting not just as technology, but also as investments, because there will never be lots of automation services networks, yet they are increasingly required. This is where a lot of value will eventually accrue. This is what makes for good investments. Investing in something like L1s, L2s, DEXes, Vaults, etc. where you know there are already about 100, and there’s soon to be 101 by forking and copy-pasta, makes zero sense. That’s why PowerPool $CVP is dramatically undervalued. People simply don’t realize that canonical automation services are extremely important given the dominance of the EVM. PowerAgent fills a huge missing functionality gap in the EVM, and it’s the only one that does it in a wholly consistent way.

The next question we have is, Vasily can you see it? It’s about the stability and robustness of the network. Keeper misbehavior and failure. It’s basically about slashing.

Q5: What measures are in place to ensure the stability and robustness of the PowerAgent network, particularly in handling potential Keeper misbehavior or failure to execute tasks?

Vasily: The robustness of the network is achieved by the staking and slashing mechanism. Currently, if a Keeper doesn’t sign a transaction on time or on condition, he still has a short grace period in which to execute his Job. But if he still doesn’t execute it, the Keeper selected as Slasher for that epoch can slash him, taking part of his stake, and at the same time execute the Job for the fees. Right now, all the testing is to make sure that any slashing that occurs in the future is related to real Keeper misbehavior, not to technical issues with the RPC or Node software. So far, in tests, we found that a lot of slashing events are not related to real Keeper misbehavior. In the testnets, nobody wants to run any kind of attack. So it’s mainly related to RPC problems, and to Node software issues that also happen sometimes, because it’s a testnet.

Sometimes there were problems with too-frequent tasks. For example, if the block time of the underlying network is quite small and somebody wants to execute a task very frequently, there can be a delay in receiving events from the RPC. If the selected Keeper isn’t technically able to execute frequent tasks on time, it can be simply because he receives the information a little too late. In our Explorer, you can see all the slashing statistics, and sometimes the number of slashes grows tremendously. This is mainly because we faced some RPC update events: the RPC version was updated and the new version did not work properly, or the JSON output was not in the correct format, so the Keeper Node could not parse it. So all the slashing now is mainly related to these kinds of technical issues, not to any real Keeper misbehavior. Our job, and the job of the Community members who run Keeper bots, is to contribute the time and effort to testing and finally solve all the problems that result in this unintentional slashing. All this activity is being done using free test $CVP, not real $CVP stakes, so no Keepers are losing any money on these types of issues.
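The grace-period-then-slash flow that Vasily describes can be sketched roughly like this. The parameter names and values (grace period in blocks, slash fraction) are invented for illustration; the real parameters live in the PowerAgent contracts.

```python
GRACE_BLOCKS = 10        # assumed grace period before slashing is allowed
SLASH_FRACTION = 0.1     # assumed share of the stake the Slasher receives

def try_slash(current_block, assigned_block, executed, keeper_stake):
    """Return (slashed_amount, remaining_stake) for a job assignment.

    If the selected Keeper executed the job, or the grace period has not
    yet elapsed, nothing is slashed. Past the deadline on an unexecuted
    job, the epoch's Slasher may execute it instead and claim a fraction
    of the Keeper's stake (plus the job fee, omitted here).
    Illustrative sketch only, not protocol code.
    """
    if executed or current_block - assigned_block <= GRACE_BLOCKS:
        return 0.0, keeper_stake
    slashed = keeper_stake * SLASH_FRACTION
    return slashed, keeper_stake - slashed
```

The point of the grace period is exactly the distinction Vasily draws: a Keeper who is merely a few blocks late (an RPC hiccup) is not slashed, while one who never executes at all is.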

Gordon: That’s all the questions I can see. Can you see any others?

The future of the EVM automation market

Vasily: Nope. Maybe we can also share some of our views on the future of the EVM automation market. For example, I see a lot of ZK projects that will enable off-chain computations. They are at an early development stage, but some of them are really close to launching on mainnet. This could be a really big step forward, because then we won’t need to build our own off-chain computation stack. That’s what I see.

Gordon: Yeah, I mean, we say that PowerAgent is an automation network, but closely related to automation is off-chain computation. Obviously, there’s a lot of logic to be executed that does not make sense to execute on-chain, and we’re standing there with a service network ready to do that. My question is always about how heavy that makes the hardware. If I’m a really modest guy, and I start just running a validator/Keeper on Gnosis Chain, and suddenly we get a lot of demand for off-chain computation, will my hardware and bandwidth really scale? I mean, will I be able to earn? What I’m saying is: how does getting more and more involved in off-chain computation affect the hardware and bandwidth requirements of our smallest, most modest Keepers?

Vasily: It’s a good question. It depends on which particular off-chain computations should be performed. There are two types of off-chain computations, as I see them. I’m not a professional in heavy computation, so this is just my view as a dilettante. Some computations are mathematically very simple, but too expensive to calculate on-chain, for example, solving large systems of linear equations. These can easily be computed by any kind of Keeper node, and the main goal of doing them off-chain is just to save on gas. I think these types of computations wouldn’t affect the performance of our Keepers and wouldn’t require any additional hardware.
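A concrete example of this first kind of off-chain work: solving a linear system is cheap for any Keeper’s CPU but prohibitively gas-expensive on-chain, and, crucially, the result can be verified far more cheaply than it can be computed, so only the cheap check needs to go on-chain. This is a generic sketch of that pattern, not PowerAgent code.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    The O(n^3) solve is the kind of work a Keeper would do off-chain.
    """
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def verify_solution(A, b, x, tol=1e-9):
    """Checking that A x == b costs only O(n^2), so a contract (or anyone)
    can verify the Keeper's answer without redoing the solve."""
    return all(abs(sum(a * xi for a, xi in zip(row, x)) - bi) <= tol
               for row, bi in zip(A, b))
```

For example, `solve_linear([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])` returns `[1.0, 1.0]`, and `verify_solution` confirms it with just two dot products.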

The other type is some really heavy computations, and it’s mostly related to AI, to something really big and complex. And this, of course, would affect our Keepers, and I think that our Keepers won’t need to do that. I think that would be outsourced to professional players. For example, if we have some AI-related computations, the GPUs should do them, and there are a lot of protocols now developing GPU compute service networks that will make computations for AI, right? So, if we offload anything too demanding, offering off-chain computing services will not be a power-hungry business at all. I think that our PowerAgent Keepers will handle only computations that are computationally simple for ordinary Keeper computing hardware.

Gordon: Yeah. Obviously, we’re not going to get into competition with BitTensor and people who have massive GPU farms and things like that. So I think the answer is that basic computations that just are expensive to do on-chain are more than enough demand to generate Keeper off-chain compute fees even on modest hardware. But obviously, we will need a scheduling system so that more modest home staking Keepers are not going to be given tasks that are going to overwhelm their hardware.

Vasily: Yeah, I think that we can wrap up the AMA if you don’t have any other questions or more thoughts on the automation market. I think that the future of automation, in my view, is in our templates, in easy access for users. So, I think that our integration with Partitura is a good first step for that.

Gordon: I’m totally bullish. I think that every day that passes, new automation requirements crop up. I’ve talked a lot about this whole issue of developers earning directly from their templates, and getting a proof of concept of that going for a simple service. I’ve talked about the future of Web2 having to find a way to bill for their services when they’re using the chain to verify that the content is real and that the person or customer is real, and then to pull all that information off-chain, charge one party and credit the other, based on how much time and value was actually processed. I think there has to be a role for automation in there somewhere. I don’t know exactly what it is, but I think that’s one.

And obviously, the diversified, multi-token PowerBaskets, LSTs, LRTs, they’re not going to go away. They’re bigger than ever, and TradFi is going to get in there. If we can get the baskets up and running, they’re a huge source of fees, both management and automation fees, especially on Ethereum mainnet, because these are high-value transactions going through. And since LSTs are pretty common now, they’re going to feed a lot of money into our Keepers on most EVM chains. That’s already a lot of opportunities.

Plus, of course, all the basic DeFi use cases that we know about: DCA, liquidity and collateral management, and all these things. So the problem is always that we don’t have any shortage of ideas or products. Our shortage is developers, and a few more validators on heavy-node chains.

So really, the whole question is how to expand the Community. Get heavier node validators in, and get more automation developers in. I think the story that we have is very compelling. It’s just a question of spreading it, just explaining it to everybody. And so this new program on GitBook, it’s about putting all those thoughts into a format where people can find it, and read it and understand it.

Vasily: Yeah, so I think that we need to keep building and updating the Community on all the new stuff that we plan to develop and deploy. I am looking forward to the submissions to the GitBook contest. We will share the article shortly, hopefully today. And if someone wants to contribute, they can also DM me with ideas, and I will help them navigate the contest as well.

Gordon: OK, well, thank you, everyone, for listening. We will publish an edited transcript, and you will have a link that you can send to people. Even if you’re not a developer or a validator, there’s still a tremendous amount of value you can add by following me on Twitter, liking things, and forwarding things. I guarantee you, right now there’s a bunch of developers out there working on projects that have no hope, whose time would be vastly better spent developing automation templates here. It’s just a question of getting these developers and validators from point A to point B and explaining it to them. And anyone can do that, literally anyone. So help out. That’s all we can really say, because the size and power of this Community is a function of how many people are in it.

Vasily: Yeah, so thank you, guys. Thank you, Gordon. And see you all in the brave new automated world!
