A Technical Overview of Our Direct Buy Advertising Application’s Hosts Network
Over the last few weeks, we’ve gotten a number of questions about how our host mechanism works. Below is an overview of some of the important details on the technical side of our direct buy marketplace advertising app.
Our longer-term goal is to expand our app into a full-fledged demand side platform for larger advertisers, and our direct buy application will serve as a proof-of-concept.
Hosts verify clicks and impressions using tracking pixels associated with a given ad/ad space. A contract is verified by a pool of multiple hosts, each serving a pixel, to ensure redundancy and cross-validation. In the optimal case, hosts serve only the tracking pixels, while the ad content itself is hosted on an advertiser’s preferred content delivery network (though in principle hosts can also serve the ad content if needed). This takes a tremendous computational and network-bandwidth burden off of host nodes, allowing each to handle a much larger volume of traffic, and gives advertisers autonomy over content delivery, e.g. using their existing network or a geographically optimal CDN for their target audiences. Hosts within a pool are randomly selected from nodes with low latency to the publisher’s ad space. This geographic load and performance balancing can be implemented akin to (or using) a number of existing protocols, such as Anycast for DNS lookups.
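To make the pool selection step concrete, here is a minimal sketch of randomly selecting a host pool from low-latency candidates. The function name, latency threshold, and host identifiers are all illustrative assumptions, not our production implementation, which would draw on live latency measurements.

```python
import random

def select_host_pool(candidates, pool_size, max_latency_ms=50):
    """Randomly pick a pool of hosts from candidates with low latency.

    candidates: list of (host_id, latency_ms) tuples, where latency is
    measured relative to the publisher's ad space.
    """
    # Keep only hosts whose latency to the ad space is acceptable.
    eligible = [host for host, latency in candidates if latency <= max_latency_ms]
    if len(eligible) < pool_size:
        raise ValueError("not enough low-latency hosts for a full pool")
    # Random selection prevents any fixed set of hosts from always
    # verifying the same ad space.
    return random.sample(eligible, pool_size)

# Hypothetical candidates: host-b is too slow and is excluded.
pool = select_host_pool(
    [("host-a", 12), ("host-b", 85), ("host-c", 30), ("host-d", 22)],
    pool_size=2,
)
```

The randomness here is the point: because pool membership is unpredictable, a publisher or advertiser cannot arrange in advance for a colluding host to verify their traffic.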
Within a host pool, clicks and impressions are verified and recorded using a state channel with periodic update intervals. The locked state for a host pool’s channel consists of the contract terms agreed upon in a multisig smart contract between an advertiser and a publisher. State updates occur at regular intervals (for example, after a certain number of revenue-generating events), whereby a cryptographic hash function such as SHA-3 is used to compare the data recorded by each host node. If the hashes match across all, or at least a majority, of the hosts in a pool, this essentially represents a signed ‘transaction’ within the pool’s state channel. At a larger interval (e.g. daily), the recorded data is finalized to the blockchain, provided host pool uptime remains sufficient and there is consensus within the channel. At that point, the corresponding amount of XQC/EQC is transferred between advertisers, hosts, and publishers, and these transactions, along with an entry for the hosts’ recorded data, are committed to the blockchain.
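The cross-validation step can be sketched roughly as follows: each host hashes its locally recorded event log with SHA3-256, and the state update is accepted when a strict majority of hosts report the same digest. The function names and event-string format are our own illustrative assumptions.

```python
import hashlib
from collections import Counter

def event_log_digest(events):
    """Hash a host's recorded events (a list of strings) with SHA3-256."""
    h = hashlib.sha3_256()
    for event in events:
        h.update(event.encode("utf-8"))
    return h.hexdigest()

def majority_digest(digests):
    """Return the digest agreed on by a strict majority, or None."""
    digest, count = Counter(digests).most_common(1)[0]
    return digest if count > len(digests) // 2 else None

# Three hosts recorded the same events; one diverged (compromised or lagging).
logs = [
    ["click:ad1:t1", "imp:ad1:t2"],
    ["click:ad1:t1", "imp:ad1:t2"],
    ["click:ad1:t1", "imp:ad1:t2"],
    ["click:ad1:t1"],
]
digests = [event_log_digest(log) for log in logs]
agreed = majority_digest(digests)  # majority agrees, so the update can be signed
```

If `majority_digest` returns `None`, the pool has no consensus for that interval and the update would not be signed into the channel.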
Our model has a number of important technical advantages. Iteratively updated, cross-verified click/impression/payment data can be committed to the blockchain after each state update within a channel: if hosts become compromised or go down, future bad data won’t be published, but all of the good data as of the last update that was adequately verified by hosts in a distributed fashion can still be committed, along with the corresponding payments. Another huge benefit, of course, is that no single party gets to decide what the correct data is, and a single bad-acting host does not compromise the integrity of the network. The cryptographic checks within a host pool’s state channel can be used to identify hosts that consistently report bad data (whether from problematic hardware/connectivity or a bad actor) and remove them from the network or enforce penalties.
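One way the misbehaving-host detection could work is to track, across state updates, how often each host’s digest disagrees with the pool majority, and flag hosts whose mismatch rate crosses a threshold. This is a hypothetical sketch; the threshold and data shapes are assumptions for illustration.

```python
from collections import defaultdict

def flag_bad_hosts(update_history, mismatch_threshold=0.5):
    """Flag hosts that frequently disagree with the pool majority.

    update_history: list of dicts, one per state update, mapping
    host_id -> digest reported by that host.
    """
    mismatches = defaultdict(int)
    totals = defaultdict(int)
    for update in update_history:
        digests = list(update.values())
        # The digest reported by the most hosts in this update.
        majority = max(set(digests), key=digests.count)
        for host, digest in update.items():
            totals[host] += 1
            if digest != majority:
                mismatches[host] += 1
    # Hosts whose disagreement rate exceeds the threshold are candidates
    # for removal or penalties.
    return {h for h in totals if mismatches[h] / totals[h] > mismatch_threshold}

# Host "c" disagrees with the majority in two of three updates.
history = [
    {"a": "x1", "b": "x1", "c": "y9"},
    {"a": "x2", "b": "x2", "c": "x2"},
    {"a": "x3", "b": "x3", "c": "z7"},
]
bad = flag_bad_hosts(history)
```

Whether a flagged host is faulty hardware or a bad actor, the network-level response is the same: exclude it from future pools and, if appropriate, apply penalties.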
I hope this provides a helpful overview of some of our tech and how we are leveraging many of the advantages of blockchains. As always, stay tuned for many big updates and announcements in the near future!