Comments for post On the web server scalability and speed are almost the same thing

facciocose writes: I think a good part of you are missing the point here. It's not about the web server stack; it's about the way erb substitutes the template in Sinatra. So why do you keep talking about Apache, WEBrick, Unicorn, etc.?
myfreeweb writes: Shouldn't you use Unicorn for Ruby serving? Also, Node.js on my '06 Mac mini with a lot of apps open: 3273 requests per second with this code: http://mfwb.us/lhj8 After adding some Mustache (with Milk), no caching, reading from a slow HDD on every request: 1404, code: http://mfwb.us/YjpI And inlined: 3047 http://mfwb.us/EicL Your PHP is just 1378, running 5.3.4 on Snow Leo's built-in Apache. P.S. my post on Apache's I/O fail: http://floatboth.mfwb.us/on-io/
Alex Mikhalev writes: I think you will enjoy the pluggable CTPP template engine from http://sourceforge.net/projects/ctpp/ and the CAS C++ application from http://cas.havoc.ru/ even more. It is hard to beat C++ in terms of speed, and the people who created them really care about scalability (I have no association with them, just great respect). These fellows actively object to the idea from the Ruby world that hardware is cheap and developer time is more expensive; their point is that for a 25% reduction in hardware consumption, devs could get a bonus the size of their annual salary :)
Jimbob writes: @Glenn Rempe: You're running on a Mac? I thought I heard Macs have performance issues with Ruby? That's awesome that you wrote a Rack app. The speed increase really shows how much the framework slows things down. Even then you still have Rack in the way, but I don't know how else you could get any closer without a decent mod_ruby. As has been said here and elsewhere, this is really all pointless though, because Ruby, Rack, Sinatra, etc. are not designed for microbenchmarks - they are designed and optimized for full-featured web applications.
raggi writes: n.b. the config.ru is executable on any system which supports shebang args through execve.
raggi writes: https://github.com/raggi/antirez-sinatra
antirez writes: I substituted the template stuff in my code with this: https://github.com/antirez/nolate And now /fast and /nolate take the same time with Apache Bench. So I guess there is some truth to the idea that template substitution should not be this slow. This is the code I added to the example posted:
    get '/nolate/:username' do
      @var = "Hello #{params[:username]}"
      nlt "index.erb"
    end
sloser writes: IMHO it makes no sense to do such comparisons. Serving static HTML is two times faster than your PHP example. The point is that for any even slightly serious web site you must use a cache, whether it is written in PHP, Ruby, Python or LOLCODE.
Nikkau writes: @antirez: And what about an inline template? Isn't that better than a global var?
antirez writes: @Ryan: this is how I run it: $ export RACK_ENV=production $ ruby app.rb I also tested another version that loads the template just one time and puts it into a global var, then I just call 'erubis' against that var. Again, the trivial template substitution is absolutely measurable. I'm lame as a Ruby programmer, but there is definitely something odd.
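Roughly, the idea of the global-var version (sketched here with Erubis used directly; the route and file names are just for illustration, not exactly what I ran):

    require 'sinatra'
    require 'erubis'

    # Read and compile the template exactly once, at load time.
    TEMPLATE = Erubis::Eruby.new(File.read('views/index.erb'))

    get '/globalvar/:username' do
      @var = "Hello #{params[:username]}"
      TEMPLATE.result(binding)   # no file I/O, no re-compilation per request
    end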
Niko writes: Ryan is right (at least in my case *blush*). Updated the ab numbers in the GIST with production numbers. https://gist.github.com/899394
Ryan Tomayko writes: You're running under development mode which reloads a number of things (templates included) as a developer convenience. Set RACK_ENV=production (environment variable) and run the benchmarks again.
Niko writes: And I noticed another thing in my recent tests: Apache/PHP did well up to about 300 parallel connections. The throughput dropped significantly above that. Unicorn and Thin don't degrade up to 1000 parallel requests. In my tests with 1000 parallel connections, Unicorn with 4 workers serving a JSON string defined inline was 50% faster than Apache serving the same JSON as a static file. I didn't do any specific Apache configuration for these tests. So I can state, with the same right as Antirez is stating "In Ruby the default is slow": PHP/Apache can't handle high concurrency well. ;)
Niko writes: I've done some testing myself (no PHP, just Sinatra & Thin, ruby 1.9.2) on my MBP: https://gist.github.com/899394 Conclusions: * Simple substitution is faster than full-fledged template engines * Reading files or not doesn't matter (the filesystem cache works) * erb and erubis are fast compared to haml or slim Other thoughts: PHP perhaps uses a bytecode cache to cache the compiled template, which seems pretty efficient. I recently benchmarked PHP vs. Sinatra with actions that do one single Redis lookup per request. PHP clocked in at ~900 req/second. Then we patched predis to use a unix domain socket connection to Redis. That improved things to about 1200 req/sec. Today Sinatra is running, driven by Unicorn with 4 workers, delivering about 4200 req/second. So depending on what you do, one or the other language/library/template/driver combination is faster (surprise, surprise). And Ruby isn't slow.
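For reference, the Sinatra side of such a single-lookup action might look roughly like this (just a sketch, not my exact code; the socket path and key scheme are assumptions):

    require 'sinatra'
    require 'redis'

    # Connect over a unix domain socket instead of TCP (redis-rb supports :path).
    REDIS = Redis.new(path: '/tmp/redis.sock')

    get '/user/:username' do
      # One Redis lookup per request, nothing else.
      REDIS.get("user:#{params[:username]}") || 'not found'
    end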
John D. Rowell writes: Template rendering is not what Ruby frameworks are optimized for. Someone could dish out a pure C version of erb or haml and use it as a native extension. That would be faster, but in production it wouldn't make any difference: if you're rendering the same stuff over and over again in production you're Doing It Wrong (TM). That being a very popular (perhaps the most popular) use case for Redis--page, fragment, etc. caching--it's weird to see this discussed here. There's a minimum performance level that you must achieve to have a scalable system. One can't take EC2's micro instances and scale anything compute-intensive, for instance. But 300 req/s (and my 2002 desktop dishes out more than that--shame on you Apple!) would still be pretty scalable if you could keep things in that ballpark. 3 ms/req/core can handle more hits than probably 99.99% of today's websites get, using only a handful of nodes (if that). So, if you can cache most of your rendering and use async calls to other systems (like databases and queues), it's pretty easy to keep your average throughput at those levels while keeping memory usage low (which PHP also does, but other ugly stuff like Java can't manage). Then again, if you're doing number crunching or complex document parsing and that sort of task, you shouldn't be using pure Ruby, because your base performance will be too low to scale. And so we get to what Ruby web frameworks are actually good at, which is orchestrating complex systems in a beautiful and maintainable way. As long as your supporting architecture is in place, including caching, fast and efficient database lookups, proper separation between static and dynamic content, appropriate planning for handling long-lasting requests, and all of the other best practices that all scalable sites implement (or at least should), you'll be just fine. As a last thought, the best way to prove any of the points in these comments would probably be setting up a few benchmarks on GitHub and running them on a cheap EC2 small instance. I've been doing a lot of Ruby benchmarking myself lately and got excellent results with Nginx + Passenger 3 + Ruby 1.9.2 + Sinatra, which I'd be happy to turn into a recipe if the GitHub benchmark idea gets any traction.
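To make the caching point concrete, here is a minimal sketch of Redis page caching in front of a Sinatra/ERB render (the route, key names and 60-second TTL are all illustrative, not a recommendation):

    require 'sinatra'
    require 'redis'

    REDIS = Redis.new

    get '/page/:username' do
      key = "page:#{params[:username]}"
      cached = REDIS.get(key)
      if cached
        cached                          # serve the stored render, no ERB work at all
      else
        @var = "Hello #{params[:username]}"
        html = erb :index               # render once...
        REDIS.setex(key, 60, html)      # ...and keep it for 60 seconds
        html
      end
    end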
antirez writes: @Konstantin: you are contradicting yourself by posting these numbers ;) If /slow and /fast are different, then the template substitution is taking a huge amount of time in your test as well. Did you notice the multiplication factor is the same in your benchmark and in mine? Substituting "Hello" in a one-line template should not be measurable with Apache Bench unless there is something really odd going on.
Konstantin Haase writes: You are likely running in development mode, which will indeed reload the template on every request, or you are running on Ruby 1.8.6, which will, due to a bug in Ruby, also cause templates to be reread (and reparsed, which is way slower than your simple subst). Running exactly your app on a two-year-old MBP on Ruby 1.9.2: /slow development: 654 req/sec, production: 1086.68 req/sec. /fast development: 1168.39 req/sec, production: 1390.48 req/sec. /subst development: 1006.90 req/sec, production: 1020.50 req/sec. You run in production mode with `-e production`. Your PHP app gives me 1608.77 req/sec (PHP 5.3.3).
Glenn Rempe writes: Update to my previous comment. I actually squeezed out an additional ~200 req/s by calling Thin directly with a config.ru rackup file and running Thin daemonized. ~1796 req/s for the fast action in Sinatra. See: http://bit.ly/g1XZw2 @jimbob: thx for pointing out he was using Mongrel, somehow I missed that on first read. Your request for me to run the PHP on the same hardware is of course fair. As expected it is *very* fast, likely, as others have pointed out, because running this does not invoke any sort of PHP framework (e.g. Cake). Here are the results from my stock OS X 10.6.7 Apache with the same 10000 req and concurrency of 10: Requests per second: 12608.59 [#/sec] (mean). Fast, no doubt. Not as fast as if we had written it in C, but fast. :-) That being said, I would guess that a more apples-to-apples comparison of these two bits of code would be the PHP script vs. a Ruby Rack application which doesn't include any web application framework (Sinatra or Rails) and seems more equivalent to straight-up PHP code. I whipped up a tiny Rack app, which I think is more similar to the /fast/:username action in the previous code, for comparison (it's also in the repository I linked to above). Many folks are using Rack apps (or Rails 'Metal') to avoid framework code for actions that need to be very, very fast in the real world or in an artificial benchmark like this one. The Rack app runs at about: Requests per second: 4100.23 [#/sec] (mean), Time per request: 2.439 [ms] (mean). Not as fast as the PHP, but still *very* fast. Bottom line again is that this kind of test, while fun and provocative, is meaningless in the real world, as these simple actions do not reflect the real-world usage of templating frameworks, databases, network latency, etc. that will get in your way long before you get anywhere near these kinds of numbers. Not to mention the caching layer, which is likely far faster than either of these tools and is a standard layer in any real-world application at scale. The caching layer is really the equalizer between any two real-world apps written using different tools. My advice still stands: use the one that you feel makes you more productive, has the tools and extensions you need available, and is more fun. Cheers.
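For anyone curious, a bare config.ru in the spirit of that tiny Rack app might look roughly like this (a sketch, not the exact code in the repo):

    # config.ru -- run with `rackup config.ru` or point Thin/Unicorn at it
    app = lambda do |env|
      # take the last path segment as the username, e.g. /fast/glenn -> "glenn"
      segment  = env['PATH_INFO'].split('/').last
      username = (segment.nil? || segment.empty?) ? 'world' : segment
      body     = "<html><body>Hello #{username}</body></html>"
      [200, { 'Content-Type' => 'text/html' }, [body]]
    end

    run app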
bjpirt writes: Whilst I agree with what you are saying... speed != scalability; speed == efficiency. You could still have a fast web service that was architecturally unable to scale - it just might take you longer to get to the pain point. Equally, you could have a well-architected but slow app that continues to scale as you throw more hardware at it, but ends up costing you more money. I actually think that speed could be seen as more of a business metric, because it boils down to what it will cost you to handle more traffic. But I do share your frustration with the speed of Ruby web apps!
antirez writes: I used Mongrel, and Sinatra ran in production mode. The numbers are low just because it is an 11" MBA, which is very, very slow compared to an MBP. I think many missed the point: if a single template substitution takes all this time, then substituting a real web page that involves a few templates N times in a single page completely trashes the performance down to 10-30 requests/second on a *fast* server. So the problem is not 300 or 600 requests per second, but a trivial site that is unable to serve more than 10 requests per second, with a latency penalty of 100 ms per user.
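To spell out that arithmetic (with purely illustrative numbers) of how a page doing many substitutions ends up around 10 req/s and ~100 ms of latency:

    # every figure here is illustrative
    per_subst_ms           = 1.0 / 600 * 1000   # ~1.7 ms if one substitution caps you at ~600 req/s
    substitutions_per_page = 30                  # fragments/partials rendered on a "real" page
    other_work_ms          = 50.0                # DB queries, application logic, ...

    page_ms = per_subst_ms * substitutions_per_page + other_work_ms   # ~100 ms per page
    puts "~#{(1000 / page_ms).round} req/s per process, ~#{page_ms.round} ms of latency"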
Jimbob writes: @AX: At one time, mongrel was the cream of the crop for production ruby/rails/rack webservers. I ran one for a year or two before replacing it with Thin. Granted, today I don't know why anyone would pick mongrel for either production OR development, but I bet there are at least some leftovers from mongrel's heyday out there.
Jimbob writes: @Glenn Rempe: any chance you could run the php app on that same hardware? I'd love to see how it compares, even if php still is faster. Also, the article states he used mongrel (also not the best choice), not webrick.
Matt McKnight writes: For apples to apples here you should run under Phusion Passenger, not Mongrel. Indications are it would be faster. http://blog.phusion.nl/2010/06/10/the-road-to-passenger-3-technology-preview-1-performance-2/
Glenn Rempe writes: I think you made a serious mistake when running this simple Sinatra app locally. We can't verify, since you broke the first rule of fake benchmarks and said nothing about how you ran this app on your local machine, but I'll assume for the moment that you were using the WEBrick Ruby server, which is intended only for development and testing and is *never* used in a production-like environment. This is why people who host real apps always choose Passenger, Thin, Mongrel, etc... I just benchmarked your code on my local machine and achieved results that were definitely on par with the PHP script results you saw, when I ran the code using the 'Thin' web server (which will get used by default if the gem is installed). Here is the code for those who want to try it for themselves (a very slightly modified copy of the same code you posted above). Follow the instructions in the README.txt to try it for yourself. My machine is a new one, YMMV. https://github.com/grempe/speedy RESULTS: Between 800 and 1500+ requests per second as reported by Apache Bench, which is roughly 3x (!) what you reported. Apache Bench was run with 1000 requests and a concurrency of 5 each time, with a pre-measurement warmup run:
ab -n 1000 -c 10 http://127.0.0.1:4567/slow/glenn => Requests per second: 826.95 [#/sec] (mean) => Time per request: 12.093 [ms] (mean)
ab -n 1000 -c 10 http://127.0.0.1:4567/subst/glenn => Requests per second: 1362.72 [#/sec] (mean) => Time per request: 7.338 [ms] (mean)
ab -n 1000 -c 10 http://127.0.0.1:4567/fast/glenn => Requests per second: 1519.45 [#/sec] (mean) => Time per request: 6.581 [ms] (mean)
I'm kind of disappointed that you posted this rather FUD'y post in the first place. This is a partisan argument that has been hashed over a million times. Using PHP, Python, Ruby? Good! Use what makes you happy. These simple benchmarks are largely meaningless in a real production environment, which is a much more complicated beast. Bottom line, Ruby is not 'slow' when compared to other dynamic languages. Your choice of language in the real world is much more affected by your skills and plans than by any artificial and meaningless benchmark. Cheers.
JavaRocks writes: That's it, 1,500 requests per sec?! In Java I can get 80,000 requests per sec. http://bit.ly/dIrh0J Yes, you sound just as stupid.
derp writes: april fools? no?... well, ruby is plenty fast enough for plenty of companies to make money with. no amount of blog articles or incomprehensible comments is going to change that, so let's all just get back to work shall we? wait, i forgot, my app is done because i wrote it in ruby, so i'm going to nap a bit instead while you force-pass php like a kidney stone
Myxomatosis writes: Stop arguing about "scalability" and what it means, etc. The point here is that writing Ruby for the web is SLOW. It NEEDS to be faster.
Trent Strong writes: > That makes no sense at all. Scalability has shit to do with either speed or how much hardware you use. > Scalable means doubling the hardware nearly doubles the throughput. If your app can be sped up by a predictable amount by adding more machines, it's scalable; that's it. I think you are vastly oversimplifying the scalability concept. As engineers in the real world, not in some universe of semantics, the notion of scalability of a client/server architecture is affected by many factors, definitely including the throughput of a single machine/process, but also including very important factors like cost. The cost to scale a web application serving R requests/s on a single server to R_TOT total requests/s is pretty simple: $COST = R_TOT/R * (COST_OF_SERVER) If you are trying to scale a web application and can only serve some dismal throughput with each web server (say ~10 req/s), the cost to scale your app is going to be much higher if you actually need to scale, and could be the difference between a successfully bootstrapped company and burning through VC money. Not to mention, even with load balancers like HAProxy and the like, adding more web servers to your architecture adds more complexity to your system, which will introduce more (vague, undefined, definitely real) costs down the road. In the real world, we have to be scalable across many different factors.
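Plugging some made-up numbers into that formula shows how quickly per-server throughput dominates the bill (every figure below is hypothetical):

    # all figures are made up for illustration
    cost_of_server = 2000.0     # dollars per box
    r_tot          = 3000.0     # total req/s the site has to sustain

    slow_stack = r_tot / 10.0  * cost_of_server   # 10 req/s per server  => $600,000
    fast_stack = r_tot / 300.0 * cost_of_server   # 300 req/s per server => $20,000

    puts "slow stack: $#{slow_stack.round}, fast stack: $#{fast_stack.round}"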
BraveNewCurrency writes: I'm ecstatic that people like Salvatore obsess over every CPU instruction. (Remember when they were 1 ms each?) But we shouldn't force *everyone* to obsess over performance, or nothing useful will ever get built. Let's look at what happens when the ERB framework gets 50% faster: Going from 600 requests per second to 1200 requests per second sounds impressive. But that's really only saving us 833 microseconds per request. For most non-trivial apps, that extra overhead is just a rounding error. Overall, the app doesn't get twice as fast -- it gets 1% faster. I'm all for speeding up ERB (in fact, there are dozens of alternatives; can we get rid of ERB?). But let's not pretend that ERB is always going to be the bottleneck of your website. A big thanks to the Redis team -- they worry about performance so we don't have to!
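Spelling the arithmetic out (the 83 ms total per request is an assumed figure for a non-trivial app, just to show where the ~1% comes from):

    slow_render = 1.0 / 600  * 1000   # ~1.67 ms of rendering per request at 600 req/s
    fast_render = 1.0 / 1200 * 1000   # ~0.83 ms at 1200 req/s
    saved       = slow_render - fast_render   # ~0.83 ms, i.e. ~833 microseconds

    total_request_ms = 83.0                   # DB, app logic, I/O, everything else (assumed)
    speedup = saved / total_request_ms * 100  # ~1% of the whole request
    puts format('saved %.2f ms per request, about %.0f%% of a %.0f ms request',
                saved, speedup, total_request_ms)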
Ramon Leon writes: > In the web speed is scalability because every web server is conceptually a parallel computation unit. So if the web page generation takes 10 ms instead of 100 ms I can serve everything with just 10% of the hardware. That makes no sense at all. Scalability has shit to do with either speed or how much hardware you use. Scalable means doubling the hardware nearly doubles the throughput. If your app can be sped up by a predictable amount by adding more machines, it's scalable; that's it.
etaty writes: For Mongrel2+PHP you can use the Photon Framework http://www.photon-project.com/doc/performance "The baseline Photon returns about 2500 transactions per second. More than 90% of the PHP baseline!"
watsonian writes: It seems to me that the primary issue here is that you're using a templating library in the slow Ruby test, while in PHP you are simply echoing the text out, a la the /fast Ruby test. A fairer comparison would be to use some PHP templating system to render a single line with an evaluated variable. Also, what ab settings were you using (or did I miss them)? In a similar /fast test using Mongrel I'm seeing ~875 reqs/sec.
AX writes: You're right about performance, but "slow by default" is just wrong here. You're testing a production-type architecture (Apache+PHP) against a development-type Ruby architecture (nobody uses Mongrel, for instance). More to the point, in either language you'd have caching, etc. enabled, so in reality either site would be doing better than even 1500 reqs/s. Basically, you're doing a benchmark that doesn't correlate well with real-world high-performance situations. You're right that nobody's going to optimize performance for a development environment.
antirez writes: @Me: I used mod_ruby directly without any RoR in the middle, and I think that when RoR used to run on mod_ruby it did something strange that killed the performance, or something like that. But per se mod_ruby is very fast: it's the same interpreter serving request after request, so it is very dangerous (if you leak a file, it is leaked forever, not just for that request), but also very fast.
Me writes: @antirez: mod_ruby? Maybe your definition of "fast" differs from mine. One of our first mistakes was to use mod_ruby; after we switched to Mongrel we had "better" performance, but still nothing great.
Me writes: I've had similar issues in production environments where php just scaled better. The last time I touched RoR was 2 years ago, when I tried to get our in-house app to scale as well as the php stuff - the ruby side took a lot more creativity to get it to even come close. Looks like things haven't changed in those 2 years.
antirez writes: Oh! Somebody mentioned mod_ruby? It's dead, but it is very fast! I used it in the past for large production sites with good results. Of course without erb or other things like that, but with my own micro framework.
Jimbob writes: "I think people asking about what is being used for Ruby are missing the point." I imagine they just want to be sure you are using the latest and greatest - ie. Ruby 1.9.2 on Passenger 3.0.5. Even then, you are not talking about an apples-to-apples comparison, because you are still comparing a ruby "framework" (Sinatra), to (I assume) a single file php script. To be more apples-to-apples, you would have to use something like cakephp for the php part. Unfortunately, mod_ruby is pretty much dead, so there is no simple way to put up a single file ruby script on the web. Also, as the web application size and complexity grows, I would imagine the speed difference between php and the ruby framework would diminish.
antirez writes: @Kent: I think by default Sinatra is processing the same template again and again - reading it and processing it on every request. This is what PHP is doing as well. So I think your example is apples to oranges ;)
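One way to check what the per-request re-read and re-compile actually costs is a quick micro-benchmark with Ruby's Benchmark module (a rough sketch; the template path and iteration count are arbitrary):

    require 'benchmark'
    require 'erubis'

    n = 10_000
    username = 'glenn'
    compiled = Erubis::Eruby.new(File.read('views/index.erb'))

    Benchmark.bm(12) do |bm|
      bm.report('recompile') do
        # read and compile the template on every iteration, like per-request reloading
        n.times { Erubis::Eruby.new(File.read('views/index.erb')).result(binding) }
      end
      bm.report('cached') do
        # reuse the template compiled once above
        n.times { compiled.result(binding) }
      end
    end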
Kent writes: Apples to oranges. Try using any PHP framework like Cake and you will get some comparable results. Using Thin with plain ERB I get over 3000 req/s: https://gist.github.com/898929
Joran Greef writes: I have used Ruby and PHP. Have you tried Hello World in Node?
Chris writes: I think people asking about what is being used for Ruby are missing the point. I'm a PHP guy who also does Python stuff and has used Rails in the past. Antirez is right in asking why things are so slow out of the box for his Ruby solution compared to the PHP one. Those in the "oh, anyone using a framework in production will optimize it" camp are also missing the point. What does it say about a particular framework that you have to optimize it to go from 250 requests per second to 1500? If you have to do things outside of the framework itself to make that happen (i.e. adding caching layers), doesn't that strike you as, I dunno, wrong?
HN Reader writes: Could you please post the version of Ruby and the Ruby server you used?
FunnyGuy writes: Testing on localhost doesn't give any indication of speed from the user perspective. Network congestion is far more damning, i.e., it doesn't matter how fast your app runs locally if the network is clogged...
Colin writes: First, anyone using any framework in any language in production will configure it to optimize performance and reduce page generation time. If your site expects any interesting request rate, then optimization will always be required, regardless of the framework. A Hello World example using a "default" configuration is never representative of how any real application will behave. For your specific tests, could you please share the PHP and Ruby code you used, so we can check that you are comparing oranges to oranges?
from a php developer who loves ruby writes: i really enjoy ruby, but unfortunately i have never been able to justify the time investment to use it for web applications. this is because it has always lacked maturity in critical areas such as unicode and performance. you are right to point out this problem, and it is a problem. good luck to the ruby community in resolving this...
Mikushi writes: Interesting, I've always had that feeling about Ruby. Even your 1500 req/sec seems low to me for a simple Hello World in PHP. Do you have the sample code somewhere? With a simple framework, data fetching from MySQL (fairly basic though), and no caching, I can still get to 1600 req/sec on Apache 2.2 + PHP 5.3.
Dan writes: If you can throw money at a problem to make it go away, it scales.