We’ve found that using Cloudflare Workers in front of an assortment of different mainnet eth web3 providers effectively mitigates these concerns.
Every time an Audius client initializes, it performs a read-only mainnet eth contract call to fetch the list of discovery services available for clients to talk to. Even something as simple as viewing the trending tracks on Audius requires first hitting the chain to determine which backend the client will communicate with.

This process of picking a backend boils down to listing the services registered on mainnet eth and picking the fastest (in reality, it’s slightly more complex). As you can imagine, this results in a tremendous number of reads of a nearly static list over the mainnet ethereum web3 JSON-RPC protocol. Worse still, the client does this on each initial load, where every millisecond counts. The data we get here needs to be fresh, but it doesn’t need to be that fresh, as it seldom differs user-to-user over a short time period.
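To make the load pattern concrete, here’s a minimal sketch of the kind of call involved, written with ethers.js. The registry ABI, address, and `/health_check` path are illustrative placeholders; the real Audius contracts and selection logic differ.

```typescript
import { ethers } from "ethers";

// Hypothetical registry interface -- the real Audius contracts differ.
const REGISTRY_ABI = ["function getServiceEndpoints() view returns (string[])"];
const REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Read the (nearly static) list of discovery services from mainnet eth.
async function getDiscoveryServices(rpcUrl: string): Promise<string[]> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const registry = new ethers.Contract(REGISTRY_ADDRESS, REGISTRY_ABI, provider);
  return await registry.getFunction("getServiceEndpoints")();
}

// Naive "pick the fastest": race a health check against every endpoint
// and keep whichever responds first. (/health_check is a placeholder path.)
async function pickBackend(endpoints: string[]): Promise<string> {
  return Promise.any(
    endpoints.map(async (url) => {
      const res = await fetch(`${url}/health_check`);
      if (!res.ok) throw new Error(`unhealthy: ${url}`);
      return url;
    })
  );
}
```

Every client runs some version of this dance on startup, and the registry read at the top is the part that hammers the eth provider.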
Enabling caching in a proxy layer buys us performance gains and reduces our dependence on a single-point-of-failure eth provider. Win-win!
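Here’s a minimal sketch of what such a caching Worker might look like. Because JSON-RPC calls are POSTs and the Cache API only stores GET requests, the request body is hashed into a synthetic cache key. `UPSTREAM_URL`, the internal key hostname, and the 30-second TTL are illustrative placeholders, not the values from our repo.

```typescript
// Cloudflare Worker: cache identical JSON-RPC POST bodies at the edge.
const UPSTREAM_URL = "https://mainnet.example-provider.io/v3/YOUR_KEY"; // placeholder
const TTL_SECONDS = 30; // placeholder; tune per project

export default {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("expected a JSON-RPC POST", { status: 405 });
    }
    const body = await request.text();

    // The Cache API only stores GET requests, so derive a synthetic
    // GET key from a hash of the JSON-RPC request body.
    const digest = await crypto.subtle.digest(
      "SHA-256",
      new TextEncoder().encode(body)
    );
    const hash = [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    const cacheKey = new Request(`https://rpc-cache.internal/${hash}`);

    const cache = caches.default;
    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    // Cache miss: forward to the upstream provider and cache the result.
    const upstream = await fetch(UPSTREAM_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body,
    });
    const response = new Response(upstream.body, upstream);
    response.headers.set("Cache-Control", `public, max-age=${TTL_SECONDS}`);
    await cache.put(cacheKey, response.clone());
    return response;
  },
};
```

Hashing the full body means two byte-identical requests share one cache entry, while any difference in params falls through to the provider.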
Beyond the performance gains to be had, last week’s Infura outage demonstrated the community’s reliance on a small number of web3 providers. With this approach, a project can easily swap out providers or add new ones on the fly, or include multiple providers and API keys from the start to be resilient to these types of issues in the first place.
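One way to sketch that provider rotation, assuming a hypothetical list of provider URLs tried in order until one succeeds:

```typescript
// Placeholder provider URLs; in practice these would carry real API keys.
const PROVIDERS = [
  "https://mainnet.provider-a.example/v3/KEY_A",
  "https://mainnet.provider-b.example/KEY_B",
];

// Try each provider in order, falling through on errors or non-2xx responses.
async function forwardWithFallback(body: string): Promise<Response> {
  let lastError: unknown;
  for (const url of PROVIDERS) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body,
      });
      if (res.ok) return res;
      lastError = new Error(`provider ${url} returned ${res.status}`);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Because the caching layer sits in front of this, a provider swap is invisible to clients: only cache misses ever touch the list at all.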
This means less downtime, reduced costs, and improved load times.
As it stands today, nearly 80% of the Audius client’s on-chain requests are served by the Cloudflare cache, with no perceived degradation of service for end users 🎉.

Now, any DApp can be more resilient to web3 provider issues of any kind. Clients should also include fallback logic directly, to avoid the centralization risk introduced by the project team’s own proxy.

Check out our repo! This code is offered as a starting point rather than an all-in-one solution: cache parameters such as TTL are configurable and should be tuned and refined per project.

In terms of future work, we can envision a world in which workers are smarter, performing specific actions based on the entire body of the request: dynamically forwarding requests to different providers based on their content, running caching experiments on slices of traffic, or even switching providers on the fly (see the sketch below).

The future is wide open for this package, and we’re eager to see where the broader web3 community takes it!
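As one illustration of that “smarter worker” direction, a worker could parse the JSON-RPC body and only cache idempotent reads, passing everything else straight through to the provider. The allowlist below is illustrative, not exhaustive:

```typescript
// Illustrative allowlist: cache only idempotent read methods, never writes.
const CACHEABLE_METHODS = new Set(["eth_call", "eth_getBalance", "eth_chainId"]);

function isCacheable(body: string): boolean {
  try {
    const rpc = JSON.parse(body);
    // Batched JSON-RPC requests arrive as arrays; cache only if every
    // call in the batch is a read.
    const calls = Array.isArray(rpc) ? rpc : [rpc];
    return calls.every((c) => CACHEABLE_METHODS.has(c.method));
  } catch {
    return false; // unparseable bodies are forwarded uncached
  }
}
```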