Unlimited free bandwidth!* (*Some limitations apply.)

We’ve been hard at work behind the scenes developing the next generation of our core hosting technology, and we’re ready to move it to public testing. It has some exciting new features:

  • TLS enhancements
  • HTTP/2 support
  • Automatic gzip compression
  • Major Access Control List (ACL) improvements
  • Shared IP blacklist support
  • WebSockets support
  • Wildcard alias support

To encourage people to help us test out the new stuff, we’re exempting participating HTTP requests from bandwidth charges for the duration of the test. You can opt in to the test for a particular site by selecting the “Use Free Beta Bandwidth” action on the Site Information panel for that site in our member interface. That page has all the fine print about the test, which mostly covers three central points:

  • Reminding people that it is a test and things might not work.
  • Clarifying that although there is no fixed limit to the amount of bandwidth a site can use under this test, there is a “floating” limit: don’t cause problems.
  • This test (and the free bandwidth) will run through at least March 15th, 2016.

Below, we’ll also discuss each new feature briefly.

TLS enhancements

The major enhancement to TLS (Transport Layer Security, the technology that turns http:// URLs into https:// URLs) has to do with scalability. As people may know, we currently use Apache as a front-end TLS processor. As a test, we generated test keys and certificates for every site we host and loaded them all into a single Apache config just to see what would happen. The resulting server process took nine minutes to start and consumed over 32 GiB of RAM before answering its first request. That’s… not going to work. So we’ve written a great deal of custom code to solve that problem.
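
To give a flavor of the kind of approach that makes this tractable (a conceptual sketch only, not our actual implementation; the file paths and names in it are made up): instead of loading every key and certificate at startup, a front end can use the hostname from the client’s TLS handshake (SNI) to load each site’s credentials on demand and cache them. In Node.js, that idea looks roughly like this:

    // Conceptual sketch: load certificates lazily, per hostname, when
    // the TLS handshake names the site (SNI), instead of preloading
    // all of them at startup. Paths and layout are hypothetical.
    var tls = require("tls");
    var fs = require("fs");

    var contextCache = {}; // hostname -> SecureContext, built on demand

    function sniCallback(servername, callback) {
        if (contextCache[servername]) {
            return callback(null, contextCache[servername]);
        }
        // Hypothetical per-site layout: /sites/<hostname>/{key,cert}.pem
        fs.readFile("/sites/" + servername + "/key.pem", function (err, key) {
            if (err) return callback(err);
            fs.readFile("/sites/" + servername + "/cert.pem", function (err, cert) {
                if (err) return callback(err);
                var ctx = tls.createSecureContext({ key: key, cert: cert });
                contextCache[servername] = ctx;
                callback(null, ctx);
            });
        });
    }

    var server = tls.createServer({ SNICallback: sniCallback }, function (socket) {
        // ... hand the decrypted stream off to the web server ...
    });
    server.listen(443);

With something like this, memory use scales with the sites actually receiving HTTPS traffic rather than with every site hosted.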

We’ve also always been worried that the overhead of TLS would require us to charge more for using it. One side-effect of this work is that we’ve reduced the fixed resources required to support TLS so much that we can now definitively say that that won’t be an issue.

The new system also improves the performance of TLS requests, and this work produced a couple of other changes we were able to backport to the existing setup. First, we’ve eliminated TLS as a single point of failure. Second, due to our use of Apache as a TLS frontend, the last hop of an HTTPS request is handled as unsecured HTTP on our local LAN. Although the probability of anyone monitoring our LAN without our knowledge is pretty small, in a post-Snowden world one has to acknowledge that taking reasonable precautions against improbable things isn’t as paranoid as it used to be. So last-hop HTTP traffic (all last-hop HTTP traffic, not just HTTPS) is now secured with IPsec while it is on our LAN.

We’ll have more to say about TLS in the near future.

HTTP/2

RFC 2616 established HTTP/1.1 way back in 1999, and it took many years to be properly adopted. Since then, there have been many attempts to improve on it, like SPDY. In the end, RFC 7540 laid out HTTP/2 as the official successor, bringing to the new protocol many of the advantages of SPDY and similar efforts, along with a lot of accumulated wisdom and experience.

Our beta service supports HTTP/2 right now.

In order to take advantage of it with a web browser, you need TLS configured. HTTP/2 can work over unencrypted connections, and although we do support that, no browser does. Browser vendors intend encryption to be the default for the future of web browsing.

Automatic gzip compression

Contrary to popular belief, we’ve supported gzip encoding for a long time. The problem is that historically, getting there was a bit too tedious for most people. Delivering gzip-encoded static content required maintaining two copies of each file (regular and compressed) and twiddling around in .htaccess, as in the sketch below. Dynamic content is much easier; we’ve actually enabled gzip encoding for PHP by default since PHP 5.5. But still the word on the street is that we don’t have it, because when people think compression, they think mod_deflate.
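
For the curious, here’s roughly what the manual approach entailed. This is a sketch of the general mod_rewrite technique (shown for .js files, and assuming mod_headers is available), not a drop-in config:

    # Keep foo.js and foo.js.gz side by side; serve the .gz copy when
    # the client advertises gzip support and the compressed file exists.
    RewriteEngine On
    RewriteCond %{HTTP:Accept-Encoding} gzip
    RewriteCond %{REQUEST_FILENAME}.gz -f
    RewriteRule ^(.*)$ $1.gz [L]

    # Make sure the precompressed copy goes out with the right headers.
    <FilesMatch "\.js\.gz$">
        ForceType application/javascript
        Header set Content-Encoding gzip
    </FilesMatch>

Multiply that by every content type you serve, remember to regenerate the .gz files whenever the originals change, and it’s easy to see why most people never bothered.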

We’ve never supported mod_deflate because it’s one of those solutions that is simultaneously easy and terrible. With mod_deflate, if someone requests a piece of static content and says they support gzip encoding, the server compresses the content and sends it to them. If another person requests the same content and says they support gzip encoding, the server compresses the same content again the same way, and sends it to them. Over and over, performing the same compression on the same input every time, wasting lots of resources and hurting the throughput of the server. (In testing, we found it was not unusual for requests handled this way to take longer than if no compression was used, even though the overall size is smaller.) Easy. And terrible.

Our beta service is capable of fully automatic gzip encoding of any compressible content. If someone requests a piece of static content and says they support gzip encoding, our system compresses the content and sends it to them. And then it stuffs it in a cache, so when the next person requests the same content with compression, it’s already ready to go.
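
In outline (a simplified sketch, not the production code; a real system would also have to worry about eviction, invalidation, and concurrency), the logic is:

    // Simplified sketch of compress-once caching.
    var zlib = require("zlib");

    var gzipCache = {}; // path + mtime -> gzipped Buffer

    function getGzipped(path, mtime, contents, callback) {
        var key = path + "@" + mtime;
        if (gzipCache[key]) {
            // A previous request already paid the compression cost.
            return callback(null, gzipCache[key]);
        }
        zlib.gzip(contents, function (err, compressed) {
            if (err) return callback(err);
            gzipCache[key] = compressed;
            callback(null, compressed);
        });
    }

The compression happens once per file version instead of once per request, which is the difference between mod_deflate’s behavior and ours.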

Major Access Control List (ACL) improvements

ACLs (currently called IP access control in our UI) are how you decide who is or isn’t allowed on your site. People use them to block spammers and bandwidth leeches, or to limit access to just their home network while a site is being developed.

First and foremost, the performance of ACLs has been dramatically improved with the new software. We greatly underestimated the degree to which some people would get carried away with ACLs. The site on our network with the largest ACL currently has over 4000 entries. That takes a lot of processing and really slows down access to that site. We could argue that such a large ACL is fundamentally unreasonable and that if using it has a performance impact, so be it. Or we could make the new system capable of processing an incoming request against that site’s ACL in 3 microseconds. We chose the latter.

At the same time, we’ve also dramatically expanded what can be included in an ACL. It’s now possible to filter inbound requests based not only on IP address (now including both IPv4 and IPv6) but also on protocol (http or https), request method (GET, POST, etc.), and URL prefix. So, as a purely hypothetical example that I’m sure won’t be of any practical interest, an ACL can now be used to block POST requests to a WordPress blog’s login script unless they originate from specific IPs you know are OK, without interfering with public access to the rest of the site.
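
ACLs are edited in our member UI rather than in a config file, but if that hypothetical rule were written out as text, it might look something like this (completely made-up notation, assuming first match wins):

    # Hypothetical notation only; real ACLs are configured in the UI.
    # The trusted network may POST to the login script...
    allow  ip=203.0.113.0/24  method=POST  url=/wp-login.php
    # ...everyone else's POSTs to it are refused...
    deny   method=POST  url=/wp-login.php
    # ...and all other requests are unaffected.
    allow  all

(203.0.113.0/24 is a documentation address range; substitute the addresses you actually trust.)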

Shared IP blacklists

We’ve also added the ability to filter incoming requests against a sort of giant shared ACL, a list of IPs flagged for bad behavior.

We haven’t turned this on yet, because we’d really like to include Project Honeypot’s http:bl in the list, but we’d need their cooperation to set that up, and they haven’t gotten back to us yet.

We can’t guarantee this will be effective; attacks tend to adapt, and some botnets are huge. But we’re committed to finding new and better ways to keep our members’ sites safe.

Regardless of how the details shake out, this feature will be opt-in. At some point in the distant future, well after this test is over, if the shared list works really well and causes few problems, we may eventually make it the default for new sites. We’ll wait a long while on that and make the right decision when the time comes.

WebSockets support

WebSockets are a way to convert a web request into an efficient bidirectional pipe between a web browser and a server. They’re super handy for high-performance and interactive apps. They were very high on the list of things there was absolutely no way our infrastructure could ever support. Yesterday.

When things settle down, we’ll try to do a brief tutorial showing how to use them. In the meantime, the sketch below gives a taste.
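
Here’s a minimal echo-server sketch using the nodejs-websocket package from NPM (one of several options); the port number is illustrative, and the details will depend on your setup:

    // Minimal WebSocket echo server sketch (nodejs-websocket package).
    var ws = require("nodejs-websocket");

    var server = ws.createServer(function (conn) {
        conn.on("text", function (str) {
            // Echo each text frame back to the client.
            conn.sendText("echo: " + str);
        });
        conn.on("error", function (err) {
            // Don't crash on abrupt client disconnects.
            console.error(err);
        });
    });
    server.listen(8080);

A browser would then connect with something like new WebSocket("ws://example.org:8080/") and get back whatever it sends.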

Wildcard aliases

Wildcard aliases let you add an alias like *.example.org to your site and have all traffic for whatever name people enter (e.g. www.example.org, an.example.org, another.example.org, whatever.example.org, perfect.example.org, etc.) wind up on that site.

We’ve never supported wildcard aliases because they’re not super-common (in most cases, example.org/perfect is just as good as perfect.example.org) and because our existing system uses a hash table to speed up alias lookups; you can’t hash wildcards. The new system removes this limitation without sacrificing performance. We still don’t recommend using them unless you have a specific need, but there are a couple of use cases where there’s just no substitute. (One site, which is perhaps not surprisingly no longer hosted here, had 6000 aliases at the time it was deleted. That same site today could have gotten by with one wildcard alias.)

The “specific beats general” rule applies to wildcard aliases. If the site example has the alias www.example.org and the site wild-example has the alias *.example.org, requests for www.example.org will go to example, not wild-example.
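
Conceptually, the lookup behaves something like this simplified sketch (illustrative only; the real implementation is quite different):

    // Simplified "specific beats general" lookup: exact aliases are a
    // hash lookup; wildcards are consulted only if no exact alias hits.
    var exactAliases = { "www.example.org": "example" };
    var wildcardAliases = { "example.org": "wild-example" }; // by suffix

    function lookupSite(hostname) {
        // 1. An exact alias always wins.
        if (exactAliases.hasOwnProperty(hostname)) {
            return exactAliases[hostname];
        }
        // 2. Otherwise, try successively shorter suffixes for wildcards.
        var dot = hostname.indexOf(".");
        while (dot !== -1) {
            var suffix = hostname.slice(dot + 1);
            if (wildcardAliases.hasOwnProperty(suffix)) {
                return wildcardAliases[suffix];
            }
            dot = hostname.indexOf(".", dot + 1);
        }
        return null; // no site claims this hostname
    }

Here lookupSite("www.example.org") returns "example" and lookupSite("perfect.example.org") returns "wild-example", matching the rule above.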

A caveat

Although these features now exist in the beta service, most of them aren’t reflected in the UI yet (where applicable). It seemed cruel to provide an interface to set up cool functionality that wasn’t actually available. 🙂

Now that the functionality is available, we’ll be rectifying that over the coming weeks as we refine and troubleshoot everything. In the meantime, if you want early access to one of the features listed here that requires custom configuration and you’re a subscription member, just drop us a line through our site and we’ll see what we can do.

Last words

This is at once very exciting and very daunting. The software being replaced is 15 years old and showing its age; the new features we’re bringing out are fantastic (and, in some cases, long overdue) and we couldn’t have done them with the old architecture. On the other hand, the old software is a legitimate tough guy. It’s handled tens of billions of web requests. It built our business from nothing. We know exactly what it does under a dozen different types of DDoS attack. And here we are, replacing it.

There is absolutely, positively no way the new software is as bug-free or battle-tested as the old stuff. The latest bug logged against the existing software was a memory leak in 2009. The latest bug against the new software was fixed less than 24 hours ago. There will be problems. (Which we’ll fix.) Then there will be more problems. (Which we’ll fix.) It will inevitably crash at the worst possible time at least once. (Which we’ll fix.) And there will no doubt be something obscure that works great on the current system but doesn’t work on the new one, and that we won’t be able to fix. (But not to worry, we’ll be keeping the old one around for quite a while.)

So this is a daunting move for us, but we’ve never made decisions based on fear and we’re not going to start now. It’s time to push this technology out of the lab and onto the street so it can get started on its five hundred fights.

Please help us out and opt as many sites as you can into the beta, so we can test against the broadest possible cross-section of traffic and types of site content. Every little bit helps!

Thanks for your time, help, and support!

17 Comments

1. I’m so glad I’m too busy with my day job to have made more progress on one of my side projects, because websockets is going to be awesome.

   Good work guys (even though you’re not finished yet).

   Comment by Scott Robison — February 19, 2016 #

2. Damn, wish I was actively working on something to use some of these features. Sadly, I see myself not needing these things until they are out of beta.

   Comment by MiquelFire — February 19, 2016 #

3. Way to go, guys! Very cool stuff.

   Comment by Braden — February 19, 2016 #

4. Awesome! I am working on something right now that will need websocket support.

   Comment by Sam Shores — February 19, 2016 #

5. This is amazing! Thanks so much! I hope Let’s Encrypt support is next!

   Comment by Steven — February 20, 2016 #

6. Wow, awesome! But you know that Christmas is still 10 months off, right? 🙂

   I have a small project with a custom server written in Haskell that needs WebSockets to work. I will look into trying it out.

   Comment by apfelmus — February 20, 2016 #

7. Thanks for the upgrades guys! Really glad to be a part of this hosting service, you all seem to have things in real control and order!

   Comment by Jony — February 20, 2016 #

8. Websockets – very nice. What server-side language can we create the server code with? Example: Node.js

   https://www.npmjs.com/package/nodejs-websocket

   Comment by Steve — February 23, 2016 #

9. Node.js is probably the easiest — that’s what our demo site uses — but any language that we support that can implement WebSockets can be made to work. Someone is going to have a site here with WebSockets running in Haskell before you know it.

   WebSockets support went live Sunday night and the NPM websockets module is already available in the white realm.

   -jdw

   Comment by jdw — February 23, 2016 #

10. Really looking forward to progress on the TLS front. Thanks for all your hard work!

    Comment by Kinak — March 2, 2016 #

11. What progress are you looking forward to? Our TLS support is already excellent; it’s well-supported, easier to set up, more reliable than ever, and requires no fighting with config files on your part (unless you want to bump the automatic A rating from SSL Labs to an A+ by enabling strict transport security). What the beta offers is primarily scalability, so we have a viable answer to the question “What would happen if every member decided to activate TLS on all their sites tomorrow?” As with all parts of our service, we’ll keep working to make it even better, but it’s pretty darned good as-is.

    -jdw

    Comment by jdw — March 2, 2016 #

12. Nice! Thanks for the WebSocket support! I’ve been hoping for that one for a while!

    Comment by Wizek — March 10, 2016 #

13. I’m not Kinak, but the improvement I am really jonesing for on the TLS front is a better interface to certificate management. I see that you’ve said on the member forum that integrated support for Let’s Encrypt is coming, and that’ll make me 100% happy, but probably not everyone wants to jump ship from their existing CAs; options for those people that are less manual and error-prone would be nice too.

    Comment by Zack — March 16, 2016 #

14. Does the automatic gzip compression require any setup?

    Comment by ACE — April 7, 2016 #

15. It does not. -jdw

    Comment by jdw — April 7, 2016 #

16. Does the gzip compression also work the other way? If I’m serving only compressed files to save disk space, do you serve and cache the uncompressed files?

    Comment by SV — April 12, 2016 #

17. Good question; I’m not sure that’s been tested. In theory it should work, though you would definitely want to try it out thoroughly before adopting it as a strategy. Some fussing with headers might be required. -jdw

    Comment by jdw — April 12, 2016 #
