Web3 DevOps Digest
How predictive autoscaling makes PancakeSwap an always-available platform
We have a Google-proven case for it!
Hey, it’s Daniel again!
I love to see how things evolve—from simple to complex or vice versa. It’s a pleasure to help technologies grow. The one I want to talk about is the scaling of cloud resources.
Autoscaling is guided by various algorithms that share a common logic: if something happens, do the following. But as blockchain grows more popular, use cases that such linear, reactive rules can’t cover appear more often. For example, the latency of node deployment becomes a critical issue during sudden load spikes that demand immediate resource allocation. And once the spike slowly subsides, you also need to release those resources to avoid overprovisioning.
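To make that limitation concrete, here is a minimal sketch of such a reactive rule in Python. Every number and name here is an illustrative assumption (RPS_PER_NODE, the thresholds, the function itself), not anyone’s production config; the point is that the rule acts only after the metric has already crossed the line, while a new blockchain node still needs minutes to provision and sync:

    # Reactive rule sketch: "if load crosses a threshold, add a node".
    # All numbers and names here are illustrative assumptions.
    RPS_PER_NODE = 5_000        # assumed capacity of a single node
    SCALE_UP_AT = 0.8           # scale up at 80% of total capacity
    SCALE_DOWN_AT = 0.4         # release capacity only when load is low

    def reactive_replicas(current_rps: float, nodes: int) -> int:
        capacity = nodes * RPS_PER_NODE
        if current_rps > capacity * SCALE_UP_AT:
            # Too late: the node we add now becomes useful only after
            # it is provisioned and synced, minutes into the spike.
            return nodes + 1
        if current_rps < capacity * SCALE_DOWN_AT:
            return max(1, nodes - 1)   # slow release to avoid flapping
        return nodes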
The solution I want to discuss is the predictive autoscaling we've implemented for PancakeSwap, which addresses the issues of overprovisioning, high latency, and loss of traffic, while enhancing overall system availability.
Predictive autoscaling increases or decreases resources PROACTIVELY, that is, before demand appears or disappears.
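Here is a minimal sketch of that proactive idea, again with assumed numbers and a hypothetical forecast call rather than PredictKube’s actual model: scale to the forecast for the moment new capacity becomes ready, not to the load observed right now.

    import math

    RPS_PER_NODE = 5_000          # assumed capacity of a single node
    NODE_STARTUP_MINUTES = 10     # assumed provisioning + sync time

    def proactive_replicas(forecast_rps) -> int:
        # forecast_rps(minutes_ahead) -> predicted RPS at that moment;
        # a hypothetical stand-in for whatever model produces the forecast.
        expected = forecast_rps(NODE_STARTUP_MINUTES)
        needed = max(1, math.ceil(expected / RPS_PER_NODE))
        # Grows ahead of the spike and shrinks only as the forecast falls,
        # which avoids both lost traffic and overprovisioning.
        return needed

The two sketches differ only in which value they look at: the present metric or the predicted future one. The quality of that prediction is what the whole approach stands on.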
Scale one step ahead: PancakeSwap case
PancakeSwap simplifies the blockchain trading experience, making it accessible to literally anyone. To stay the best among DEXes, they must ensure high availability and unparalleled performance during any event, whatever the audience is up to. Like an IFO (Initial Farm Offering), when RPS hits the ceiling and goes beyond the 100,000 threshold.
So, here is a brief story of how predictive scaling was implemented for PancakeSwap:
Growing demand
PancakeSwap saw a massive surge in user activity, and the importance of a robust and scalable infrastructure became obvious when public endpoints started to fail them, causing growing latency, congestion, and delays. And many angry traders with Twitter and Reddit accounts… if you know what I mean.
Dysnix solutions on GCP
PancakeSwap joined forces with Helix Technologies to accommodate this growth and leveraged Google Cloud's solutions. At this point, the Dysnix solution came into the spotlight.
By using Cloud Load Balancing and Google Kubernetes Engine, they could manage immense traffic, maintain 99.99% uptime, and reduce the average request time to a node to just 100ms.
Predictive autoscaling implementation
PredictKube was another pioneering solution our team implemented here. It accurately predicted over 90% of PancakeSwap’s traffic spikes, automating the scaling of blockchain nodes ahead of anticipated traffic surges.
This proactive approach cut peak response time by a factor of 62.5 and resulted in over 30% cost savings.
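To picture what “scaling ahead of the surge” means in Kubernetes terms, here is an illustrative snippet using the official Kubernetes Python client. The deployment name bsc-node and the namespace are assumptions for the example, not the actual PancakeSwap setup, and in production this step is performed automatically by the autoscaling component rather than by hand:

    # Illustration only: pushing a forecast-driven replica count to a
    # node Deployment before the spike arrives. "bsc-node" / "nodes"
    # are hypothetical names; PredictKube performs this automatically.
    from kubernetes import client, config

    def scale_ahead_of_spike(predicted_replicas: int,
                             deployment: str = "bsc-node",
                             namespace: str = "nodes") -> None:
        config.load_kube_config()      # or load_incluster_config() in-cluster
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": predicted_replicas}},
        )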
Security & future endeavors
Beyond scalability, PancakeSwap is deeply committed to security, especially in smart contracts. As they look forward to enhancing user experiences and expanding their offerings, solutions like BigQuery are on the horizon to simplify blockchain data for users.
For the details from the cloud provider’s point of view, check out the full Google case study here:
Scale one step ahead: PancakeSwap case
As a result, the benefits of implementing PredictKube and resource balancing are enormous. Lower costs, improved reliability, latency reduced once and for all, an always-available system: what else could bring more competitive advantages?
And for us, this case is a reason to be a little proud of ourselves, seeing such a gigantic DEX “flying” like a swift thanks to our solution.
How do you scale your project, and can it be more efficient?
Thank you for your time, and I hope this email helps you feel the transformative potential of predictive autoscaling in any domain. If you want to know what the next email will bring, I’ll tell you right now: I’m going to take you behind the curtains of PredictKube and show how its AI model works.
Warm regards, Daniel