Without an accurate understanding of your server capacity, you're exposed to failures and catastrophic losses: quite simply, your site won't work properly under load.
You've suffered this yourself; we all have at some point. You rock up to a website or app to buy concert tickets, the must-have Christmas toy, or the big-screen TV sporting a 50%-off tag in the Black Friday sales and, boom, the site or app has crashed, or it's running so slowly there's no way it'll let you through its checkout system.
Not good.
As a vendor, that's even worse. A crashed website or app delivers 0% of the sales you need. That's a 100% loss, probably more, as you'll likely lose any chance of return custom too.
If you're an NFT provider and you cannot accurately manage the rate of transactions on your crypto network, you could be looking at hundreds of millions of dollars in wasted or excessive gas fees.
So, in strides your online queuing system to manage your traffic and save the day. Everyone can relax. But are you really covered? Is everything safe again? And more importantly, do all virtual waiting rooms work properly? Take it from the people who invented it: No, they don't.
Why our rate-based system works far better than all of our competitors' systems
We understand the problems better than anyone — filter too many users into your system, and it will suffer, slow down, or, worst-case scenario, crash and die.
However, flowing too few users through damages your earning capacity. Trickle them in at an overly cautious rate and you'll lose the impatient ones, sat in your virtual queue for too long, when you could have handled far more traffic and won far more sales to boot.
Understanding that balance and providing the best solution to the problem is precisely what we've done, and it's why we're happy to boast that we're better than all of our competitors, and even the mightiest of vendors, Amazon.
The problems with load estimation
That fictional big dial or counter that gives you data on how many customers are on your website or using your application isn't entirely imaginary. However, what it can't tell you is how many users:
- Are logged on to your site but are currently looking at another page on an alternative site,
- Are looking at your site on their phone, tablet, and desktop all at once and aren't likely to place an order on all three,
- Have gone to find their credit card,
- Have gone to make a cup of tea,
- Won't complete the transaction until this episode of their favourite show has finished,
- Won't ever complete your transaction process.
The reality of the situation is that your users only interact with your servers when they open a link, so between clicks you have no idea whether a person is still there or not; the data simply isn't available. There are plenty of reasons why people might appear to be using your website but aren't actually in the funnel, somewhere between logging on and completing a transaction. Timing out users after a period of inactivity doesn't solve it either, because people will sit in your virtual queue waiting for stale sessions to expire when they could already be spending.
The real problem with measuring these users accurately, so that you can safely admit new visitors as others leave, is that you don't know how much stagnant traffic you have taking up vital room on your server or cloud services.
Let's talk about concurrent users
To deliver the ideal visitor flow and rate of entry to your site, we, and all other queuing services, need to work out the number of concurrent users your site can safely manage and how many are engaged in the transaction.
But, what dictates a concurrent user?
Concurrent users: The number of people engaged in your transaction flow at any one time.
But then we also have:
Concurrent sessions: The number of people logged on to but not necessarily engaged in your transaction at any one time.
Concurrent requests: The number of HTTP requests being processed by your web server at any one time.
And then there's:
Concurrent connections: The number of TCP/IP sockets open on your server ports at any one time.
A concurrent session might be open but without an active user, for example.
Concurrent requests could mean far less traffic is manageable than you'd expect, depending on how many page elements each visitor loads during their visit.
And the number of concurrent connections could deliver false data if the server is set for elongated timeouts or no timeout at all.
Calculating the number of users, requests, or open connections that your site or transaction can manage accurately, and allowing a steady flow of users in as satisfied users leave, doesn't look quite so straightforward now, does it?
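To make the distinction concrete, here's a minimal sketch, with invented numbers, of why raw session counts mislead: a naive "recent activity" rule counts browsers who will never buy and misses a checkout customer who has briefly stepped away. The Session class and all of its figures are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Session:
    last_request_age_s: float  # seconds since this session's last HTTP request
    in_checkout: bool          # actually somewhere in the transaction funnel?

# Illustrative snapshot: six sessions your server currently knows about.
sessions = [
    Session(2, True),     # actively checking out
    Session(5, True),     # actively checking out
    Session(25, False),   # browsing only, no purchase intent
    Session(40, False),   # browsing only, looking at another tab
    Session(300, True),   # went to make a cup of tea mid-checkout
    Session(900, False),  # abandoned tab, still holding a session
]

concurrent_sessions = len(sessions)
# A naive "active = request within the last 60 s" rule miscounts:
naive_active = sum(s.last_request_age_s < 60 for s in sessions)
# What you actually care about, users in the funnel, isn't directly observable:
in_funnel = sum(s.in_checkout for s in sessions)

print(concurrent_sessions, naive_active, in_funnel)  # 6 4 3
```

Note that the naive activity rule reports 4 users, yet only 3 are really in the funnel, and one of those 3 (the tea-maker) would have been timed out entirely.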
The solution is in the science - and it's rate-based
By calculating how many concurrent users a web server can handle, our system can forward them in, one at a time, from the safety of our cloud service, and create very stable load conditions on your website, app or application services.
Where Queue Rate = Concurrent users / Transaction time
Now, that might seem too simple, but we assure you: through structured testing and the application of queuing theory and the laws of probability, our QF maths geniuses have found the best method of delivering the ideal rate for each website with the highest accuracy.
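As a worked illustration of the formula above (which is Little's Law, L = rate × W, rearranged), here's a short sketch. The figures of 500 concurrent users and a 180-second transaction time are invented examples, not real benchmarks.

```python
def queue_rate(concurrent_users: float, transaction_time_s: float) -> float:
    """Safe admission rate in users per second.

    Little's Law says L = rate * W, so rate = L / W, where L is the number
    of concurrent users the server can safely hold and W is the time a
    typical transaction takes.
    """
    return concurrent_users / transaction_time_s

# Example: a site that can hold 500 concurrent users, with a typical
# transaction taking 180 seconds from arrival to completed checkout.
rate = queue_rate(500, 180)
print(f"{rate:.2f} users/second")       # 2.78 users/second
print(f"{rate * 60:.0f} users/minute")  # 167 users/minute
```

Hold the admission rate at or below that figure and, on average, the number of users in the funnel never exceeds what the server can handle.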
Need more data? Everything you could possibly need to know about precisely how the maths works is explained, in all its finer detail, on our Why use a rate-based Virtual Waiting Room page.
If concurrent user or transaction time figures aren't available, we can set you up with a safe starting rate that can be adjusted as your load increases, pinpointing with accuracy the precise figure at which your system runs at its stable best, maximising your sales and service delivery.
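One common way to discover that precise figure in practice is an additive-increase, multiplicative-decrease loop: nudge the rate up while the server looks healthy, then cut it sharply at the first sign of strain. This is a hedged sketch of that general technique, not the vendor's actual tuning algorithm; the thresholds, step sizes, and response times below are made-up numbers.

```python
def adjust_rate(rate: float, response_time_ms: float,
                target_ms: float = 500.0,
                step: float = 5.0, backoff: float = 0.8,
                floor: float = 1.0) -> float:
    """Additive-increase / multiplicative-decrease on users admitted per minute."""
    if response_time_ms < target_ms:
        return rate + step                 # server healthy: admit a little more
    return max(floor, rate * backoff)      # server straining: cut the rate sharply

rate = 50.0  # start at a conservative users-per-minute figure
for observed_ms in [120, 150, 200, 480, 650, 300]:
    rate = adjust_rate(rate, observed_ms)
print(f"{rate:.0f} users/minute")
```

The rate climbs steadily while responses stay fast, drops the moment the 650 ms reading breaches the threshold, then resumes climbing, so it converges around the highest rate the server can sustain.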