Black Friday has become a major commercial event in Switzerland. And for platforms that aggregate deals and redirect shoppers to merchants, it's the most critical day of the year. One extra second of latency, one page that won't load, and that's thousands of francs in lost revenue.
swiss.blackfriday is Switzerland's leading platform for Black Friday deals. For 364 days a year, traffic is moderate. But on the day itself and the week leading up to it, the site must absorb traffic spikes of 50 to 100 times normal levels. It's an infrastructure challenge that few traditional cloud platforms can handle without specific preparation.
Here's how Hidora Cloud enabled swiss.blackfriday to navigate this period without incident.
The context: a site with two speeds
swiss.blackfriday has a unique traffic profile. For most of the year, the site receives between 2,000 and 5,000 visitors per day. The content is primarily static, with a few dynamic pages for newsletter signups and price alerts.
But starting the third week of November, everything changes. Visitors begin flooding in to preview early deals. Traffic builds progressively until the Black Friday peak itself, when the site can receive over 200,000 visitors in a single day, with peaks of several thousand requests per second.
This traffic pattern poses a fundamental problem: sizing infrastructure for the peak means paying all year for resources used only a few days. But under-sizing means a crashed site on the one day it absolutely must work.
Past failures
Before Hidora, swiss.blackfriday had experienced several difficult Black Fridays. In 2022, the site was hosted with a traditional provider on dedicated servers. The team had planned ahead by renting additional servers for the period, but the scaling was poorly calibrated. Result: the site went down for 45 minutes mid-morning, right at the traffic peak. Estimated losses: over CHF 15,000 in forgone affiliate commissions.
The following year, the team tried to solve the problem with a global CDN. This improved things for static content, but dynamic pages (deal search, category filters, price comparisons) continued to overload the origin server. The site didn't go down, but response times exceeded 8 seconds during peak hours, an eternity for users eager to find the right deal.
Preparation with Hidora
When swiss.blackfriday contacted Hidora in September, the objective was clear: zero downtime and response times under 2 seconds, even at maximum peak load.
The Hidora team implemented a three-pronged strategy.
1. Architecture optimized for peaks
The infrastructure was designed specifically for swiss.blackfriday's traffic pattern:
- Application servers with horizontal auto-scaling. The number of instances adapts automatically to load. Under normal conditions, two instances suffice. During Black Friday, the platform can automatically scale up to twelve instances within minutes, without manual intervention.
- Database with read replicas. The primary database handles writes (adding deals, signups), while multiple read replicas distribute read queries. This is essential since 95% of Black Friday traffic is read-only.
- Multi-level caching. A Redis cache handles user sessions and frequent search results. An upstream Varnish cache absorbs repetitive requests for the most popular deal pages. Combined, these two cache layers serve 80% of requests without hitting the database.
- CDN for static assets. Images, CSS, JavaScript: all static content is served from points of presence close to users, freeing up application server bandwidth.
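The multi-level caching layer described above follows a classic look-aside pattern. The sketch below is purely illustrative: in-memory dictionaries stand in for the real Varnish (short-TTL edge layer) and Redis (longer-TTL application layer), and the TTL values are assumptions, not Hidora's actual configuration.

```python
import time

class TwoLevelCache:
    """Illustrative look-aside cache: a short-TTL edge layer (Varnish-like)
    in front of a longer-TTL application layer (Redis-like).
    Dictionaries stand in for the real cache backends."""

    def __init__(self, edge_ttl=10, app_ttl=300):
        self.edge = {}      # key -> (value, expires_at), Varnish stand-in
        self.app = {}       # key -> (value, expires_at), Redis stand-in
        self.edge_ttl = edge_ttl
        self.app_ttl = app_ttl
        self.db_hits = 0    # counts how often we fall through to the database

    def _get(self, layer, key, now):
        entry = layer.get(key)
        if entry and entry[1] > now:
            return entry[0]
        return None

    def fetch(self, key, load_from_db, now=None):
        now = time.monotonic() if now is None else now
        value = self._get(self.edge, key, now)
        if value is not None:
            return value                       # served at the edge
        value = self._get(self.app, key, now)
        if value is None:
            value = load_from_db(key)          # only here do we hit the database
            self.db_hits += 1
            self.app[key] = (value, now + self.app_ttl)
        self.edge[key] = (value, now + self.edge_ttl)
        return value
```

The key property for a deal page: once one visitor has triggered a database read, every subsequent request within the TTL window is served from cache, which is how the two layers together can keep 80% of requests away from the database.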
2. Pre-event load testing
Three weeks before Black Friday, the Hidora team ran a series of load tests simulating real-world conditions:
- Progressive test: ramp from 0 to 5,000 concurrent users over 30 minutes to verify auto-scaling behavior.
- Spike test: sudden injection of 3,000 concurrent users to validate system responsiveness to a sudden spike.
- Endurance test: sustained 2,000 concurrent users for 4 hours to identify potential memory leaks or gradual degradation.
The tests revealed a bottleneck in PHP session handling, which was corrected before the event. Without these tests, this issue would have surfaced in production at the worst possible moment.
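In practice such tests are driven by a load-testing tool; the exact tool Hidora used isn't stated. As an illustration, the target-concurrency schedules for the progressive and spike tests described above can be sketched as simple functions that a load generator would sample each second:

```python
def ramp_profile(t_seconds, target_users=5000, ramp_seconds=1800):
    """Target concurrency at time t for the progressive test:
    a linear ramp from 0 to target_users over ramp_seconds (30 min)."""
    if t_seconds >= ramp_seconds:
        return target_users
    return int(target_users * t_seconds / ramp_seconds)

def spike_profile(t_seconds, spike_users=3000, spike_at=60):
    """Target concurrency for the spike test: all users injected at once
    (the 60-second warm-up delay is an illustrative assumption)."""
    return spike_users if t_seconds >= spike_at else 0
```

The endurance test is simply a constant profile held for four hours; the interesting signal there isn't the throughput but the trend in memory and response time over time.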
3. Monitoring and intervention plan
A dedicated dashboard was configured for Black Friday, displaying in real-time:
- Requests per second
- Response times per page (P50, P95, P99)
- CPU and memory usage for each instance
- Cache hit rate
- Number of active instances and auto-scaling events
A Hidora engineer was on dedicated standby throughout the Black Friday period, with alert thresholds configured to intervene proactively before a problem became visible to users.
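To make the dashboard metrics concrete, here is a minimal sketch of how P50/P95/P99 could be computed from a window of response-time samples and checked against an alert threshold. The nearest-rank method and the 2,000 ms alert limit (taken from the stated "under 2 seconds" objective) are assumptions; the real monitoring stack isn't specified.

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def check_latency(samples, p95_limit_ms=2000):
    """Summarise a window of samples and flag a P95 breach, mirroring
    the 'P95 under 2 seconds' objective (threshold is an assumption)."""
    summary = {
        "p50": percentile(samples, 50),
        "p95": percentile(samples, 95),
        "p99": percentile(samples, 99),
    }
    summary["alerts"] = (
        [f"P95 {summary['p95']} ms exceeds {p95_limit_ms} ms"]
        if summary["p95"] > p95_limit_ms else []
    )
    return summary
```

Alerting on P95 rather than the average matters: a handful of slow outliers can leave the mean looking healthy while a meaningful share of users are already waiting seconds per page.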
The big day: results
Black Friday 2024 went off without a single technical incident.
Traffic handled: 247,000 unique visitors over the day, with a peak of 4,200 requests per second at 10:32 AM.
Response times: P95 stayed below 800 milliseconds throughout the day, even at peak load. The most viewed deal pages loaded in under 400 milliseconds thanks to caching.
Auto-scaling in action: The infrastructure went from 2 to 9 application instances between 8 AM and 10 AM, then gradually scaled back down to 4 instances by late afternoon. All entirely automatic.
Availability: 100%. Zero seconds of downtime. Zero 5xx errors visible to users.
Cost of the operation: The additional spend related to Black Friday scaling amounted to approximately CHF 400 for the week, a fraction of what the previous setup with pre-reserved dedicated servers had cost.
Return to normal
One of the aspects the swiss.blackfriday team appreciated most is the automatic return to the baseline configuration. The day after Black Friday, traffic dropped by 80%. The infrastructure adapted by reducing the number of instances on its own. No need to cancel servers, modify configurations, or contact technical support. The December bill returned to its usual level.
This usage-based billing model is fundamentally different from traditional hosting. swiss.blackfriday only pays for extra resources when they're actually used (a few days per year) instead of paying an annual fee sized for the peak.
Key takeaways
swiss.blackfriday's experience illustrates several fundamental principles of peak load management:
Test before, not during. Pre-event load testing identified and resolved a problem that would have caused an outage in production. Investing a few hours in testing means avoiding hours of crisis on the day.
Auto-scaling isn't magic. It must be configured correctly, with the right thresholds and the right trigger metrics. Poorly configured auto-scaling can be worse than no auto-scaling at all, for example spinning up too many instances too late, or triggering oscillations.
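One standard way to avoid the oscillation mentioned above is hysteresis: separate scale-up and scale-down thresholds with a dead band between them. The sketch below illustrates the idea; the 70%/30% CPU thresholds and the 2-to-12 instance range are illustrative assumptions (the instance range happens to match the figures in this case study, but the real trigger metrics aren't disclosed).

```python
def scale_decision(current, cpu_percent, min_n=2, max_n=12,
                   up_at=70, down_at=30):
    """One auto-scaling step with hysteresis. The gap between up_at and
    down_at is a dead band: after adding an instance, per-instance CPU
    drops, but it must fall below down_at (not just up_at) before any
    instance is removed, so the system doesn't flap."""
    if cpu_percent > up_at and current < max_n:
        return current + 1
    if cpu_percent < down_at and current > min_n:
        return current - 1
    return current
```

With a single threshold instead, adding an instance can drop per-instance CPU just below that same threshold, immediately triggering a scale-down and then another scale-up: exactly the oscillation a well-configured policy avoids.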
Cache is king. For a read-heavy site like swiss.blackfriday, a well-designed caching strategy makes more difference than adding servers. Serving 80% of requests from cache effectively divides the server load by 5.
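The "divides the load by 5" claim follows directly from the hit rate: only cache misses reach the origin, so the origin sees a 1/(1 - hit_rate) reduction. A two-line check of that arithmetic:

```python
def origin_load_factor(hit_rate):
    """Fraction of requests reaching the origin, and the implied
    load-division factor, for a given cache hit rate."""
    miss = 1.0 - hit_rate
    return miss, 1.0 / miss

# An 80% hit rate leaves 20% of requests for the origin: a 5x reduction.
miss, factor = origin_load_factor(0.80)
```

The same formula also shows why the last few points of hit rate matter disproportionately: going from 80% to 90% doubles the reduction factor from 5x to 10x.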
Proactive monitoring saves revenue. Having a dedicated engineer monitoring metrics in real-time allows intervention before users see a problem. The difference between "we prevented an incident" and "we managed a crisis" is measured in thousands of francs.
For any company facing seasonal or event-driven traffic spikes, the message is clear: a well-configured cloud infrastructure with intelligent auto-scaling isn't a luxury, it's an economic necessity. Paying all year for unused resources is waste. Crashing on peak day is worse.