NearlyFreeSpeech.NET Blog: A blog from the staff at NearlyFreeSpeech.NET.

Unlimited free bandwidth!* (*Some limitations apply.)
Fri, 19 Feb 2016 00:36:41 +0000

We’ve been hard at work behind the scenes developing the next generation of our core hosting technology, and we’re ready to move it to public testing. It has some exciting new features:

  • TLS enhancements
  • HTTP/2 support
  • Automatic gzip compression
  • Major Access Control List (ACL) improvements
  • Shared IP blacklist support
  • Websockets support
  • Wildcard alias support

To encourage people to help us test out the new stuff, we’re exempting participating HTTP requests from bandwidth charges for the duration of the test. You can opt in to the test for a particular site by selecting the “Use Free Beta Bandwidth” action on the Site Information panel for that site in our member interface. That page has all the fine print about the test, which mostly covers three central points:

  • Reminding people that it is a test and things might not work.
  • Clarifying that although there is no fixed limit to the amount of bandwidth a site can use under this test, there is a “floating” limit: don’t cause problems.
  • Noting that this test (and the free bandwidth) will run through at least March 15th, 2016.

Below, we’ll also discuss each new feature briefly.

      TLS enhancements

      The major enhancement to TLS (transport layer security, the technology that makes http:// URLs into https:// URLs) has to do with scalability. As people may know, we currently use Apache as a front-end TLS processor. As a test, we generated test keys and certificates for every site we host and loaded them all into a single Apache config just to see what would happen. The resulting server process took nine minutes to start and consumed over 32 GiB of RAM before answering its first request. That’s… not going to work. So we’ve written a great deal of custom code to solve that problem.
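The general approach is worth sketching. Instead of loading every certificate at startup, a front end can use TLS’s Server Name Indication (SNI) to load each site’s certificate lazily, the first time a client actually asks for that hostname. Here’s a simplified Python sketch of that idea using the standard library’s `sni_callback` hook; the `/var/certs` layout and helper names are invented for illustration, and this is not our actual code:

```python
import ssl

CERT_DIR = "/var/certs"   # hypothetical layout: one key/cert pair per site
_context_cache = {}       # hostname -> ready-to-use SSLContext

def cert_paths_for(hostname):
    """Map an SNI hostname to its certificate and key files (assumed layout)."""
    return (f"{CERT_DIR}/{hostname}.crt", f"{CERT_DIR}/{hostname}.key")

def context_for(hostname):
    """Build an SSLContext for one hostname the first time it is needed."""
    ctx = _context_cache.get(hostname)
    if ctx is None:
        certfile, keyfile = cert_paths_for(hostname)
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile, keyfile)
        _context_cache[hostname] = ctx
    return ctx

def sni_callback(ssl_socket, server_name, initial_context):
    # The ssl module calls this during the handshake, before any
    # certificate is sent; swapping the context picks the right cert.
    if server_name is not None:
        ssl_socket.context = context_for(server_name)

# Wiring it up (commented out; requires real certificates on disk):
# server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# server_ctx.sni_callback = sni_callback
```

With something like this, startup time and memory scale with the sites actually receiving HTTPS traffic, rather than with every site hosted.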

      We’ve also always been worried that the overhead of TLS would require us to charge more for using it. One side-effect of this work is that we’ve reduced the fixed resources required to support TLS so much that we can now definitively say that that won’t be an issue.

The new system also improves the performance of TLS requests, and this work produced a couple of other changes we were able to backport to the existing setup. First, we’ve eliminated TLS as a single point of failure. Second, due to our use of Apache as a TLS frontend, the last hop of an HTTPS request used to be handled as unsecured HTTP on our local LAN. Although the probability of anyone monitoring our local LAN without our knowledge is pretty small, in a post-Snowden world one has to acknowledge that taking reasonable precautions against improbable things isn’t as paranoid as it used to be. So last-hop HTTP traffic (all last-hop HTTP traffic, not just HTTPS) is now secured with IPsec while it is on our LAN.

      We’ll have more to say about TLS in the near future.


HTTP/2 support

RFC 2616 established HTTP/1.1 way back in 1999. It took many years for it to be properly adopted. Since then, there have been many attempts to improve on it, like SPDY. In the end, RFC 7540 laid out HTTP/2 as the official successor, bringing many of the advantages of SPDY and similar protocols, and a lot of combined wisdom and experience, to the new protocol.

      Our beta service supports HTTP/2 right now.

To take advantage of it with a web browser, you need TLS configured. HTTP/2 can work over unencrypted connections, and although we do support that, no browser does. Encryption is intended to be the default for the future of web browsing.

      Automatic gzip compression

      Contrary to popular belief, we’ve supported gzip encoding for a long time. The problem historically was that getting there has been a bit too tedious for most people. Delivering gzip-encoded static content requires maintaining two copies (regular and compressed) and twiddling around in .htaccess. Dynamic content is much easier; we’ve actually enabled gzip encoding for PHP by default since PHP 5.5. But still the word on the street is that we don’t have it, because when people think compression, they think mod_deflate.

      We’ve never supported mod_deflate because it’s one of those solutions that is simultaneously easy and terrible. With mod_deflate, if someone requests a piece of static content and says they support gzip encoding, the server compresses the content and sends it to them. If another person requests the same content and says they support gzip encoding, the server compresses the same content again the same way, and sends it to them. Over and over, performing the same compression on the same input every time, wasting lots of resources and hurting the throughput of the server. (In testing, we found it was not unusual for requests handled this way to take longer than if no compression was used, even though the overall size is smaller.) Easy. And terrible.

      Our beta service is capable of fully automatic gzip encoding of any compressible content. If someone requests a piece of static content and says they support gzip encoding, our system compresses the content and sends it to them. And then it stuffs it in a cache, so when the next person requests the same content with compression, it’s already ready to go.
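The idea is simple enough to sketch in a few lines of Python (a toy in-memory cache for illustration, not our actual implementation):

```python
import gzip

_gzip_cache = {}   # path -> gzipped bytes, filled on first request

def gzip_for(path, raw_bytes):
    """Compress a piece of static content once; later requests for the
    same path reuse the cached compressed copy instead of recompressing."""
    body = _gzip_cache.get(path)
    if body is None:
        body = gzip.compress(raw_bytes, compresslevel=6)
        _gzip_cache[path] = body
    return body
```

The compression cost is paid once per file rather than once per request, which is the whole difference between this approach and mod_deflate.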

      Major Access Control List (ACL) improvements

      ACLs (currently called IP access control in our UI) are how you decide who is or isn’t allowed on your site. People use them to block spammers and bandwidth leeches, or to limit access to their home network while a site is being developed.

      First and foremost, the performance of ACLs has been dramatically improved with the new software. We greatly underestimated the degree to which some people would get carried away with ACLs. The site on our network with the largest ACL currently has over 4000 entries. That takes a lot of processing and really slows down access to that site. We could argue that such a large ACL is fundamentally unreasonable and that if using it has a performance impact, so be it. Or we could make the new system capable of processing an incoming request against that site’s ACL in 3 microseconds. We chose the latter.

At the same time, we’ve also dramatically expanded what can be included in an ACL. It’s now possible to filter inbound requests based not only on IP address (now including both IPv4 and IPv6) but also on protocol (http or https), request method (GET, POST, etc.), and URL prefix. So, as a purely hypothetical example that I’m sure won’t be of any practical interest, an ACL can now be used to block POST requests to a WordPress blog’s login script unless they originate from specific IPs you know are OK, without interfering with public access to the rest of the site.
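To make the semantics concrete, here’s a Python sketch of how such first-match rule processing might work. The rule format is invented for illustration and is not our actual ACL syntax:

```python
import ipaddress

def make_rule(action, network=None, protocol=None, method=None, prefix=None):
    """One hypothetical rule; any field left as None matches everything."""
    return {
        "action": action,  # "allow" or "deny"
        "network": ipaddress.ip_network(network) if network else None,
        "protocol": protocol,
        "method": method,
        "prefix": prefix,
    }

def check(rules, ip, protocol, method, path, default="allow"):
    """Return the action of the first rule matching the request."""
    addr = ipaddress.ip_address(ip)
    for r in rules:
        if r["network"] is not None and addr not in r["network"]:
            continue
        if r["protocol"] is not None and r["protocol"] != protocol:
            continue
        if r["method"] is not None and r["method"] != method:
            continue
        if r["prefix"] is not None and not path.startswith(r["prefix"]):
            continue
        return r["action"]
    return default

# The hypothetical WordPress example: allow login POSTs only from a
# trusted network, block them from everywhere else, leave the rest open.
rules = [
    make_rule("allow", network="203.0.113.0/24", method="POST", prefix="/wp-login.php"),
    make_rule("deny", method="POST", prefix="/wp-login.php"),
]
```

Evaluated this way, public GET traffic never matches either rule and falls through to the default.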

      Shared IP blacklists

      We’ve also added the ability to filter incoming requests against a sort of giant shared ACL, a list of IPs flagged for bad behavior.

      We haven’t turned this on yet, because we’d really like to include Project Honeypot’s http:bl in the list, but we’d need their cooperation to set that up, and they haven’t gotten back to us yet.

We can’t guarantee this will be effective (attacks tend to adapt, and some botnets are huge), but we’re committed to finding new and better ways to keep our members’ sites safe.

      Regardless of how the details shake out, this feature will be opt-in. At some point in the distant future, well after this test is over, if the shared list works really well and causes few problems, we may eventually make it the default for new sites. We’ll wait a long while on that, and then make the right decision at that time.

      Websockets support

      Websockets are a way to convert a web request into an efficient bidirectional pipe between a web browser and a server. They’re super handy for high-performance and interactive apps. They were very high on the list of things there was absolutely no way our infrastructure could ever support. Yesterday.
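The handshake that turns an HTTP request into a websocket is surprisingly small. For the curious, here’s the accept-key computation every websocket server performs, as defined by RFC 6455, in Python:

```python
import base64
import hashlib

# Fixed GUID from RFC 6455; every conforming server uses it verbatim.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept response header value that
    upgrades an HTTP request to a websocket (RFC 6455, section 4.2.2)."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

After the server answers with this header (and status 101), the TCP connection stops being HTTP and becomes a framed bidirectional pipe.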

      When things settle down, we’ll try to do a brief tutorial showing how to use them.

      Wildcard aliases

“Wildcard aliases” refers to the ability to add an alias like *.example.com to your site and have all traffic for whatever name people enter under it (e.g. www.example.com, blog.example.com, anything.example.com) wind up on that site.

We’ve never supported wildcard aliases, both because they’re not super-common (in most cases, a handful of specific aliases works just as well) and because our existing system uses a hash table to speed up alias lookups; you can’t hash wildcards. The new system removes this limitation without sacrificing performance. We still don’t recommend using them unless you have a specific need, but there are a couple of use cases where there’s just no substitute. (One site which is perhaps not surprisingly no longer hosted here had 6000 aliases at the time it was deleted. That same site today could have gotten by with one wildcard alias.)

The “specific beats general” rule applies to wildcard aliases. If the site example has the alias www.example.com and the site wild-example has the alias *.example.com, requests for www.example.com will go to example, not wild-example.
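In Python terms, the lookup behaves roughly like this (an illustrative sketch with made-up alias names, not our actual code):

```python
def site_for(aliases, hostname):
    """Resolve a request hostname to a site name. Exact aliases win over
    wildcards, mirroring the "specific beats general" rule."""
    if hostname in aliases:
        return aliases[hostname]
    # Fall back to wildcards: "*.example.com" matches any name under it.
    labels = hostname.split(".")
    for i in range(1, len(labels)):
        candidate = "*." + ".".join(labels[i:])
        if candidate in aliases:
            return aliases[candidate]
    return None

aliases = {
    "www.example.com": "example",      # specific alias
    "*.example.com": "wild-example",   # wildcard alias
}
```

Because the exact-match check runs first, the wildcard only catches names no specific alias claims.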

      A caveat

      Although these features now exist with the beta service, most of them aren’t reflected in the UI yet (where applicable). It seemed cruel to provide an interface to set up cool functionality that wasn’t actually available. 🙂

      Now that it is, we’ll be rectifying that over the coming weeks as we refine and troubleshoot everything. In the meantime, if you want early access to one of the features listed here that requires custom configuration and you’re a subscription member, just drop us a line through our site and we’ll see what we can do.

      Last words

      This is at once very exciting and very daunting. The software being replaced is 15 years old and showing its age; the new features we’re bringing out are fantastic (and, in some cases, long overdue) and we couldn’t have done them with the old architecture. But on the other hand, the old software is a legitimate tough guy. It’s handled tens of billions of web requests. It built our business from nothing. We know exactly what it does under a dozen different types of DDOS attack. And here we are, replacing it.

There is absolutely, positively no way the new software is as bug-free or battle-tested as the old stuff. The latest bug logged against the existing software was a memory leak in 2009. The latest bug against the new software was fixed less than 24 hours ago. There will be problems. (Which we’ll fix.) Then there will be more problems. (Which we’ll fix.) It will inevitably crash at the worst possible time at least once. (Which we’ll fix.) And, there will no doubt be something obscure that works great on the current system but which doesn’t work on the new one that we won’t be able to fix. (But not to worry, we’ll be keeping the old one around for quite a while.)

      So this is a daunting move for us, but we’ve never made decisions based on fear and we’re not going to start now. So it’s time to push this technology out of the lab and onto the street so it can get started on its five hundred fights.

      Please help us out and opt as many sites as you can into the beta, so we can test against the broadest possible cross-section of traffic and types of site content. Every little bit helps!

      Thanks for your time, help, and support!

ICANN’s assault on personal and small business privacy
Sat, 27 Jun 2015 00:22:18 +0000

TLDR

      This post is extremely long and detailed and is on quite a dense subject. Here is the short version.

      Trouble is brewing.

      ICANN, the body that has a monopoly on domain registrations, is now planning to attempt to take over domain privacy providers (like RespectMyPrivacy) as well. Driven in no small part by the people who brought you SOPA, they have a three-step plan:

1. Introduce a new accreditation program for domain privacy providers, complete with fees and compliance headaches. (Meaning higher costs for you.)
2. As a condition of accreditation, require domain privacy providers to adopt privacy-eviscerating policies that mandate disclosure and, in some cases, publication of your private information based on very low standards.
3. Require ICANN-accredited domain registrars (i.e. all domain registrars) to refuse to accept registrations that use a non-accredited domain privacy provider, thus driving any privacy provider that actually plans to provide privacy right out of business.

Here are some of the great ideas they’re considering:

        • Barring privacy providers from requiring a court order, warrant, or subpoena before turning over your data.
        • A policy based on the “don’t ask questions, just do it” model of the DMCA. Except that with the DMCA your site can be put back after an error or bogus request; your privacy can never be put back.
        • Requiring privacy providers to honor law enforcement requests to turn information over secretly, even when under no legal obligation to do so.
        • Outright banning the use of privacy services for any domain for which any site in that domain involves e-commerce.

        If this happens, domain privacy will become little more than a fig leaf. Your private information will be available to anyone who can write a convincing-looking letter, and you may or may not be able to find out that it was disclosed.

        The whole proposal is a giant pile of BS that does nothing but service ICANN’s friends in governments and intellectual property (think RIAA/MPAA) at the expense of anyone who’s ever set up a web site and thought that maybe it would be good if their detractors didn’t have their home address. But as much as some at ICANN want to, they can’t just scrap privacy services. ICANN’s members are domain registrars and they make a lot of money from it. So this is the compromise: providers can still sell privacy, it just won’t actually do any good, and when they hand over your info, if they tell you about it at all, they’ll blame ICANN and say their hands are tied by the policies they have to follow.

        If you think maybe paying a lot more for a lot less privacy isn’t such a great idea, ICANN is accepting public comment on this subject until July 7th, 2015. You can email them at or fill out their online template if you prefer.

        If you do feel like submitting a comment on this, I encourage you to read this whole post (and, if you have time, the working group report). The more informed you are, the more effective your comments will be.

        The full story

        If you’ve never heard of ICANN, you could perhaps be forgiven for that. The Internet Corporation for Assigned Names and Numbers (ICANN) is the behind-the-scenes non-governmental organization that runs Internet domain registration.

        If you are familiar with them, it may be thanks to some of their greatest hits:

• ICANN is the organization that granted Verisign an (effectively) perpetual monopoly over .com and .net, complete with provisions for automatic regular price increases without any sort of oversight or justification.
        • ICANN is the reason why we have to hassle you repeatedly when your domain expires, even if you tell us in no uncertain terms that you want it to expire.
        • ICANN is behind the policy that requires your domains to be suspended if you don’t respond to email verifications that have ICANN-mandated text that frequently trips spam filters.
        • ICANN is a “non-profit” that is massively profitable. The fees they charge (which are ultimately borne by you the domain registrant) are so far in excess of what they need to operate that as of the end of 2013, they had $168M in cash on hand.
• It’s ICANN that requires that when you register a domain, you make your full name, address, telephone number, and email address available in the public whois database, helping to make sure that anyone who might object (stalkers, creepers, criminals, mentally unbalanced people, big corporations, or anyone else) can find, harass, and possibly murder you.

        ICANN is sad

        For several years, something has been bothering ICANN. They’re worried that their treasured public whois database isn’t “effective” enough. (Some of us strongly feel that the public whois database is a menace and should not exist at all, but ICANN is not at home to that point of view.) Part of the effectiveness problem, they posit, stems from inaccurate information. And they’ve tried to address that with programs like WAPS (the “whois accuracy program specification” that leads to your domain being suspended for not clicking a link in a spammy-looking email).

        But the real “problem” with the “effectiveness” of the public whois database is the proliferation of privacy and proxy contact services (like RespectMyPrivacy). These services allow you to outsource the service of making it possible to contact you by receiving mail, telephone calls, email, and faxes on your behalf and forwarding them to you. This is an invaluable service for anyone who may want to register a domain name but doesn’t have a (required) phone number. Or anyone who doesn’t want to put their home address on their blog about abuses by their local police department. Or anyone who doesn’t have a corporate legal department to hide behind, in an era when death threats, rape threats, and tricking SWAT into raiding people’s houses, all as retaliation for what people say online, are everyday occurrences.

        So ICANN is looking to put a stop to that.

        Their planned method of doing so is to introduce a new accreditation program for privacy and proxy providers, complete with fees, compliance requirements, and strict guidelines on how they can operate, and then to require accredited domain registrars to refuse any registration that uses a non-accredited privacy or proxy service.

        That is, itself, a disturbing abuse of their monopoly position in the domain registration market to gain control of a related industry. Where does that end? How long before your ICANN-accredited domain registrar must refuse any registration that uses a non-accredited web host? How long before your ICANN-accredited web host requires you to use an ICANN-accredited payment processor? Or an ICANN-accredited blog software vendor? (Some large hosting/domain companies would just love the ability to dictate what providers you use for every aspect of your online presence.) If you’re a tech-head, and this sounds familiar, it may be because Microsoft was sued by the DoJ for using their Windows monopoly to force Internet Explorer on the world. However, the DoJ will not be our friend here as there are few things they despise more than online privacy.

        They claim this is to protect registrants, but their actions do not bear this out. This is the initial report of their working group, and here are some of the ways they want to “protect” registrants:

        • “Domains used for online financial transactions for commercial purpose should be ineligible for privacy and proxy registrations.” (Yeah, your home-based business? Sorry about that.)
        • The working group is still debating whether accredited proxy providers would be required to comply with law enforcement requests not to tell a registrant about an inquiry, even and expressly in the absence of any legal requirement to do so. (Thankfully we live in a world where abuse of investigative powers by government agencies never happens. Oh, hang on a second…)
        • Requiring a court order to release information to someone who asks for it is specifically called out as prohibited. I.e. an accredited privacy or proxy provider would be required to have a policy allowing disclosure of your private information based solely on “well it sounds like they have a good reason.” (Copyright and trademark issues have been specifically called out as nigh-unchallengeable examples of “a good reason.” Criticize a big company by name? “Trademark!” They get your info.)

        Having read the entire 98 page working group report, it sounds like their goal is to adopt “don’t ask, don’t tell” as a policy; you can keep your information private as long as no one asks for it.

        Much of the proposed policy is misguided on a technical level as well. There are many areas where the privacy and proxy provider would be required to take actions that such a provider can typically only do if they are also your domain registrar. Actions like publishing something in the whois database entry for your domain — like your contact information, often without your consent and possibly without telling you first. Only your registrar can do that. It could well be that independent companies (like RespectMyPrivacy) that exist only to protect your privacy will no longer be allowed to exist. Only “captive” services — those run by the registrars themselves — will be able to meet the proposed requirements. And I’m sure no one reading this has ever had a problem with one of those.

        There are also huge issues the working group hasn’t considered at all, like correlation. What if Jane Smith has an online business and a blog? Even if her blog is “allowed” to have a private registration, her business may not be. (I say “allowed” because the nerve of a group of self-appointed people deciding who deserves privacy and who doesn’t galls me. Like speech, privacy is an inalienable right.) If someone doesn’t like the content of her blog, do we think they won’t look at her business domain to get her home address just because it’s unrelated? That’s pretty farfetched. Correlating details from multiple unrelated sources, and lying to get them are standard practice for Internet harassers and “doxxers.”

But, really, ICANN, as an international organization tasked with managing domain names, should not be sticking its nose into issues related to content. Which is ultimately what this is about. What determines whether your domain will be eligible for privacy services? Its content. What determines whether your info will be revealed to anyone who asks? Your content. This is a massive effort by the “if you have nothing to hide, you have nothing to fear” crowd to undermine anonymous online speech.

Why are we telling you about this? Because right now the working group is soliciting public comment. You have the opportunity to make your voice heard. (Although given ICANN’s past disregard for the registrant constituency it supposedly serves, I won’t pretend that I’m expecting miracles.) That doesn’t mean you shouldn’t do it. This isn’t a situation where we expect to tell them and for them to listen; this is a situation where we feel it will be important later to be able to say “we told you and you didn’t listen.”

        What do we think about this?

        There are real issues with privacy and proxy services. There’s a lot of trust there, as it is almost always possible for such a provider to hijack your domain if they decide they want it. So there is real potential for abuse, and some oversight really could help keep the industry clear of unethical providers. There are also some services that are really inadequate, like the registrar-affiliated ones that (in violation of already-existing registrar rules) plaster “POSTAL MAIL DISCARDED” in the address field.

Along that line, the working group does have some good ideas for policies that privacy and proxy services not interested in screwing their customers would adopt. And a good idea is a good idea, no matter the source, so the report has certainly given us some food for thought about how to improve things. But RespectMyPrivacy doesn’t need to be forced to improve things for its customers; that’s its job. So whatever good ideas do come out of this process, we’ll take ’em.

        However, ICANN has demonstrated again and again that they prioritize the concerns of their executives, law enforcement agencies, intellectual property holders, registries and registrars; registrants are dead last by a wide margin. They are not an organization that most people would trust to look out for the best interests of registrants. We certainly wouldn’t.

        If ICANN wants to develop an accreditation program for privacy and proxy providers, even if that’s nowhere in their official mission, they should feel free to do so. If they developed a good one, RespectMyPrivacy would do it. This isn’t a good one.

But even if they do develop an accreditation program for privacy and proxy providers, ICANN absolutely must not require accredited domain registrars to refuse to accept registrations that use privacy and proxy services not accredited by ICANN. That it’s morally bankrupt to do so really ought to be enough, but it’s also illegal. Their accredited privacy and proxy providers must succeed or fail on their own, not be handed success by banning everything else.

        What to do?

        The working group is soliciting feedback from the public on these issues, among others:

        • Should registrants of domain names associated with commercial activities and which are used for online financial transactions be prohibited from using, or continuing to use, privacy and proxy services?
        • If they do prohibit privacy and proxy services for domains that perform either “commercial” or “transactional” activities, should they define “commercial” or “transactional?” (No, I am not making this up.)
        • Should it be mandatory for accredited P/P providers to comply with express LEA requests not to notify a customer?
        • Should there be mandatory Publication for certain types of activity e.g. malware/viruses or violation of terms of service relating to illegal activity? (In this context, “Publication” means canceling the privacy service and posting all details in the public whois database.)
        • Should a similar framework and/or considerations apply to requests made by third parties other than LEA and intellectual property rights-holders?

        You can send your thoughts on these matters or on other aspects of the proposal to by July 7, 2015. You may also fill out their online template if you prefer.

        Please take a few minutes to tell the working group that you value your online privacy and that you oppose any proposal that will make it easier for large, powerful organizations and dangerous individuals to get at their critics. Tell them that policies that require providers to have low standards for disclosure of personal information harm that privacy. And please remind them that imposing requirements on privacy and proxy providers that are really the province of domain registrars will only create a broken, unworkable system that creates more problems than it purports to solve.

New payment features, methods, and options
Wed, 13 May 2015 01:20:52 +0000

We’ve added a number of new payment features and options to our site designed to make things better for our members. This includes a new deposit form that allows arbitrary deposit amounts, the ability to choose either a specific payment or a specific deposit amount and let our system work out the fees, support for Dwolla and Bitcoin as payment methods, and the option to set up your site to accept contributions from the general public toward its hosting costs.

        New payment features

        Because of the admittedly complex nature of our “payment-fee+rebate=deposit” system and the number of factors that went into calculating the rebate, we have traditionally limited people to fixed deposit amounts. Thanks to a few simplifications in the rebate calculations (which have caused some deposits to become a few cents cheaper and the cost of others to go up a few cents) and some other behind-the-scenes changes, we’ve (finally!) been able to do away with that limitation. This is something that people have been asking for for a long time, and it’s frankly overdue, so we’re very happy that this feature has finally made it to production.

        As a counterpart to that change, we’ve also placed the decision about how to calculate deposit fees into your hands. If you want to pay $20, have the $1.00 net fee deducted and receive $19.00 in your account, that’s the way it has always worked and it still does today. However, if you want to deposit $20.00 into your account, you now also have the option to click a button and let the system figure out that to do that, you should make a $21.03 payment. This will be very helpful for many use cases, like depositing just enough to register that domain name. (But don’t forget to account for privacy service!)
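For the curious, the inverse calculation looks roughly like the sketch below. The fee schedule here (3.5% of the payment plus a $0.30 fixed fee) is purely hypothetical, chosen for illustration; it is not our actual fee structure, which is why it lands on $21.04 rather than the $21.03 in the example above:

```python
from decimal import Decimal, ROUND_CEILING, ROUND_HALF_UP

RATE = Decimal("0.035")   # hypothetical percentage fee
FIXED = Decimal("0.30")   # hypothetical fixed fee
CENT = Decimal("0.01")

def fee_for(payment):
    """Fee charged on a given payment, rounded to the cent."""
    return (payment * RATE + FIXED).quantize(CENT, rounding=ROUND_HALF_UP)

def payment_for_deposit(deposit):
    """Smallest payment whose net (payment - fee) covers the deposit."""
    # Closed-form estimate: solve p - (p*RATE + FIXED) = deposit,
    # then round up to the cent and nudge past any rounding edge case.
    p = ((deposit + FIXED) / (1 - RATE)).quantize(CENT, rounding=ROUND_CEILING)
    while p - fee_for(p) < deposit:
        p += CENT
    return p
```

So under these made-up rates, a $20.00 payment nets $19.00, while netting a full $20.00 deposit requires a $21.04 payment.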

        New payment methods

        At long last, we have introduced support for Dwolla. Although limited to US customers, Dwolla is an interesting alternative to traditional payment processors. As they have a very favorable fee structure, we quite naturally hope they will be very popular with our members.

And, at some past point, I was caught in public stating that we would not revisit our decision not to accept bitcoin until after we had implemented arbitrary HTTP servers and Dwolla support. Well, we now have both of those things. And thanks to the bitcoin experts at Bitpay, we now have bitcoin support as well! Bitpay not only made integration very straightforward for us, but they also assume most of the volatility risk by accepting bitcoins from you and paying us in US dollars. That combination removed most of our major objections to bitcoin acceptance and made it easy for us to change our long-held position on this.

        Checks aren’t a new payment option, but they’re definitely improved. If you’ve ever sent one, you’re familiar with the onerous deposit form we make you fill out and include. Well, computers are supposed to automate routine tasks, so that form is now filled out automatically. All you have to do is click to print when you set up a mailed-in deposit and the form happens for you. Or, use the new streamlined process for online bill-pay. Either way, you’ll save some time and hassle.

Finally, on the credit card front, in addition to Visa, Mastercard, American Express, and Discover, we now also accept JCB and Diners Club cards. Or, we assume we do. We’ve never actually seen either one to try it.

        New payment options

Is your website something you do to make information available to the world? Does the world find it helpful, and often tell you so? Have you ever thought to yourself, “hey, if I’m doing all the work on this site to help/entertain others, why am I the one paying for it?”

        Do you run a popular forum site with lots of users that would be more than happy to help pitch in to cover the costs, but maybe you don’t want 1000 people sending $2.00 checks to your house?

        Or, are you a web developer who hosts sites with us for clients who want to pay their own bills, but for whom the responsibility and technical requirements of their own NearlyFreeSpeech.NET membership are not a good fit?

        We now have the ability to let you allow the public (or a subset of the public) to make contributions to a site’s hosting costs through our service, without those contributors having to have a NearlyFreeSpeech.NET membership.

This is something we’ve always wanted to do, but we’ve only ever done it poorly. We’ve offered ad-hoc options for accepting donations via PayPal to a few sites, and we’ve had a form on our site to allow anyone to mail in a check. That’s all done with. As of today, sites that apply to accept contributions (and that are accepted) can send their contributors to a special page on our public site that will let them use any of the payment options we support. Their contributions will go directly into your account to help fund your site.

        The contribution process also has privacy built into its very core. The contributor identifies themselves to us, so we can process their payment, and you identify yourself to us so we can provide you hosting. But we do not identify them to you, nor you to them. During the process, they have the option to send you a brief message which they can, if they choose, use to identify themselves to you. Each contribution will also be assigned a unique ID code provided to both parties. The name of the site receiving the contribution, the ID code, the amount of the contribution, and the contributor’s message (if any) will be the only information shared between the contributor and the site operator.

        Accepting payments from the general public is fraught with risk, and we need to make sure this feature isn’t abused or used as an eCommerce substitute, so there is naturally a solid page of fine print associated with accepting contributions for your site, and it will always be at our discretion whether or not to accept contributions for a particular site. If you’re interested in finding out more about this capability, start here.

        In addition to the contribution feature, we’ve also made it easier than ever to transfer funds between NearlyFreeSpeech.NET memberships. If you know someone’s account number, you can transfer funds from your account to theirs instantly by using the “Transfer Funds Between Accounts” action on the Accounts tab in our member interface. Just like always, you can also use this option to transfer funds between your own accounts; it’s just easier and prettier now.

        Perhaps this stuff is not as exciting as adding features or upgrading our equipment, and in some cases it required rebuilding what we already had — work no one will ever see — in order to make a better foundation for the new stuff. But it really was a ton of work to build and test all of this, so we’re thrilled that it’s ready and really hope our members will find the new functionality helpful and get good use out of it!

        Upcoming updates, upgrades, and maintenance Tue, 03 Mar 2015 23:45:24 +0000 We have accumulated some housekeeping tasks that we’ll be taking care of over the next couple of months. They’re all necessary things to make sure our service keeps running at its best, and though we work hard to prevent these types of things from impacting services, occasionally they do intrude. As a result, we want to let everyone know what we’re up to and what the effects will be.

        Retiring file server f2

        We still have quite a few sites using the file server designated as “f2.” This is the oldest file server still in service, and although it has been a great performer for many years, it is reaching the end of its useful life. It is also one of two remaining file servers (and the only one that holds member site files) that has a single point of failure. Our newer file servers use different technology; they are faster (100% SSD), have no single points of failure, allow hardware maintenance while they are running, and allow us to make major changes (like adding capacity or rebalancing files) behind the scenes without you having to change the configuration of your site.

        So, we are quite anxious to get rid of f2. We’ve been offering voluntary upgrades for some time now, but it’s time to move things along. We’ve set an upgrade date and time for every site on f2 in April. If you have a site on this file server, you can see your upgrade time in our member interface and, if it doesn’t suit you, upgrade at any earlier time or postpone it closer to the end of April.

        Please note, the file server f2 is distinct from and has no relation to site storage nodes that contain the text fs2. If your site’s file storage tag contains fs2, you are not affected by this.

        Migrating a site does entail placing it into maintenance mode briefly, for a period proportional to the size of the site. Beyond that it usually has no ill effects. Some sites do have complications, especially if they have hardcoded paths in their .htaccess files. After our system migrates your site, it will attempt to scan the site for affected files and send you an email listing them if it finds any. This isn’t 100% foolproof, but we previously did it for a lot more sites under considerably greater pressure with the f5 server, and problems were relatively few and far between.
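        For the curious, that scan works along roughly these lines. This is a hypothetical sketch, not our actual code; the old path prefix and the choice to check only .htaccess files are illustrative:

```python
import os

OLD_PREFIX = "/fs/f2/"  # hypothetical hardcoded path prefix to look for

def find_hardcoded_paths(site_root):
    """Walk the site and list files that mention the old file server path."""
    affected = []
    for dirpath, _dirnames, filenames in os.walk(site_root):
        for name in filenames:
            if name == ".htaccess":  # the files most likely to hardcode paths
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        if OLD_PREFIX in f.read():
                            affected.append(path)
                except OSError:
                    pass  # unreadable files are skipped, not fatal
    return affected
```

        If a file shows up in the resulting list (and hence in the email we send), updating the hardcoded path by hand after the migration is usually all that’s needed.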

        Discontinuing PHP Flex

        As part of our continued (slow) migration away from Apache 2.2, we will be discontinuing PHP Flex. PHP Flex refers to running PHP as a CGI script, which is a terrible way to do things. In the bad old days, it was useful in some cases for compatibility with PHP applications that didn’t work with safe_mode, if you didn’t mind the horrible performance. But, even in the bad old days, it mostly ended up being used not because it was necessary, but because it was easier than dealing with safe_mode.

        These days, PHP safe_mode is long gone, so there’s no real reason to have PHP Flex anymore. Our new PHP types are highly compatible with (and much faster than) PHP Flex, and most people have already happily upgraded. However, there are still some stragglers out there and, as time goes by, they are starting to have problems. Those problems often completely go away simply by switching to a currently-supported version of PHP. Thus, we feel it’s time to phase out PHP Flex. In the month of April, we will auto-migrate PHP Flex sites (which mostly run PHP 5.3 and in some cases 5.2) to PHP 5.5.

        MySQL software upgrades

        We are currently working on both long-term and short-term upgrades for MySQL. In the short term, we need to perform a series of OS and MySQL server updates on existing MySQL processes to keep them up-to-date and secure. This will require either one or two brief downtimes for each MySQL node, typically about 5-10 minutes. We will be performing these updates throughout the month of March, and we will announce them on our network status feed (viewable on our site and Twitter).

        In the long term, MariaDB 5.3 is getting a bit long in the tooth, so we are working to jump straight to MariaDB 10, with all its great new functionality as well as better scalability and configuration flexibility. This is likely to be somewhat more resource intensive, and hence more expensive, so it will be optional for people who are perfectly happy with the way things are. (If you like your MySQL plan, you can keep it!) More on this as it gets closer to release.

        Physical maintenance

        We also need to do some maintenance on the power feeds to one of our server shelves. Ordinarily that isn’t an issue that affects our members, but in this case it’s being converted between 120V and 208V. Hypothetically that can be done while the equipment is running, but doing so entails a nonzero risk of death by electrocution and after careful consideration we’ve decided that none of the current field techs are expendable at this time. Also, it could burn down the datacenter. So, we’re going to go ahead and do it by the book, which means shutting it off.

        That’s a few dozen CPU cores and hundreds of gigs of RAM we need to take offline for a little while. In a real disaster, our infrastructure could survive, but there would be a period of degraded service while things balance out on the remaining hardware. That period would be significantly longer and affect significantly more people than the actual maintenance. So, we feel our best course of action is just to shut it off for the few minutes it will take to rewire the power feeds. The service impact should be low, but will probably not be zero.

        We want to complete the MySQL maintenance listed above first, so we are likely to do this toward the end of March. We will post updates on our network status feed with more precise timing as we get closer.

        Realm upgrade reminder

        We have finally finished rolling sites off of the dreaded “legacy” realms (freebsd6, freebsd72, and 2011Q4). Every site is now on a “color” realm. This means that people who have selected late realm upgrades for their sites in our UI and who are currently running on the red realm will receive an automatic upgrade to violet in April, after the quarterly realm rotation has occurred. Compatibility between the two is excellent and we anticipate very few problems.

        That’s all for now. All in all, the upgrades and maintenance shouldn’t affect too many people, but we regret and apologize in advance for any problems they do cause. These steps are part of a process designed to eliminate some very old infrastructure that makes work like this intrusive. In other words, a big part of the goal of this maintenance is that the next time we do it, you’ll be even less likely to notice.

        Thanks for reading!

        Domain registration updates Tue, 02 Dec 2014 22:50:54 +0000 We have a few small updates to announce with respect to domain registration. There will be some small price changes, both up and down. We are also adding support for registering some of the new competitive gTLDs. Finally, for those who choose our RespectMyPrivacy service, it will now be automatically prepaid (including the 10% prepayment discount) during all registrations and renewals.

        Domain registration pricing changes

        In the past, our system has always limited us to charging the exact same amount for a domain registration, regardless of TLD. But TLD prices actually vary widely, so that limitation not only meant that we weren’t always able to give the best price, but also that a lot of higher-priced TLDs were simply not feasible to add. That limitation has now been removed.

        The main effect is that the prices for some TLDs (biz, info, and org) will be going up a little, but prices for others (com, net, and name) will be going down a little. Here’s a quick table:

        TLD Before After
        com $9.49 $9.34
        net $9.49 $9.34
        org $9.49 $9.89
        biz $9.49 $10.29
        info $9.49 $9.79
        name $9.49 $8.99

        Across the entire portfolio of our members’ registered domain names, this is a net decrease in cost, largely because com is so much more popular than everything else. And those price decreases will be live today. However, we don’t want to spring price increases on people without notice, so those won’t take effect until January 1st. If you’ve got any of those, get your renewals in now!

        This change will also let us track various “promotions” that some gTLDs run fairly often on new registrations. (Mainly biz, info, and org.) We’re not huge fans of these “deals”; they never apply to transfers, renewals, or multiple-year registrations, so it seems like they only exist to lure people in with artificially low first-year prices. But, still, if we can go get a lower price for our members, why wouldn’t we? For example, through the end of the year, new 1-year registrations in biz can be had for $3.49 and new 1-year registrations in org are $4.99. If that’s something you were going to do anyway, congratulations, we found you a few extra bucks.

        New gTLDs

        We’re also adding support for three additional gTLDs: .click, .club, and .guru. This is sort of an experiment. Of the dozens of competitive gTLDs that have been introduced in the past year or so, we picked these three for two key reasons:

        A) They were easy to implement.
        B) According to registration statistics, they are relatively popular.

        In principle, we’re huge fans of gTLD diversification. The .com TLD is kind of a lost cause. Not only is it getting pretty crowded, it is run by Verisign, and ICANN gave them a sweetheart contract that auto-renews into perpetuity and allows significant price increases at regular intervals that require no technical or business justification. Things are not going to improve over time in .com. gTLD diversification is really the only way to compete with that, by breaking the “.com = the Internet” default mindset. And the new gTLDs should be able to compete vigorously on price and quality.

        In practice, that hasn’t worked out yet. Most of the new gTLDs have been backed by speculators who seem to be in a race to see how ridiculous they can be. True story: there is a .luxury TLD that is almost $500/year. With a few exceptions (.click seems pretty low-cost at $6.59), they’ve all got some idea of why they’re worth vastly more than regular domains. We think that with a few exceptions, they’re probably wrong about that. But it’ll take some time, possibly a couple of years, for the operators of most of those gTLDs to figure out that they’re not one of the exceptions. Once that happens, we anticipate the cost level of competitive gTLD registrations will fall sharply; there’s no reason a general-purpose “domains for the rest of us” gTLD couldn’t operate profitably charging $5/year or less. But we’ve been wrong before, and the stakes to enter the gTLD game are a cool $500,000, so don’t hold your breath.

        Still, we’re sticking our toes in the water to try to help things along. If this works out, we’ll rapidly add a lot more gTLDs. Pretty much anything that’s not too much hassle and doesn’t make that weird “huhuhuh” laugh come out when we look at the price. Feel free to campaign for your favorite in the comments!

        To avoid raising any hopes, I want to make clear that we still don’t have any immediate plans to support in-house registration of ccTLDs (e.g. .uk and .de). They present both legal and technical problems that simply don’t exist with the new gTLDs. We fiddle with .us from time to time, since the hassles there are only technical, but we were not able to add it for this update.

        Expanding prepaid privacy

        For most of our services, pay-per-day makes a lot of sense: they can be added and removed at any time. But pay-per-day privacy service isn’t a good fit for pay-per-year domains, and that occasionally leads to problems.

        Sometimes, people register a domain with privacy and then forget about the privacy. Likewise, a lot of people are really confused because privacy service doesn’t go away when the domain expires. Instead, it hangs around for about 75 more days, until the deletion date, since that’s how long a domain’s information remains publicly visible. And, on top of those two things, privacy is the only service intentionally allowed to overdraft member accounts.
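        To put a number on that window: if a hypothetical domain expired on January 1st, privacy would keep billing until roughly mid-March. A quick sketch of the date math (the 75-day figure is from above; the dates are made up):

```python
from datetime import date, timedelta

expires = date(2015, 1, 1)               # hypothetical expiration date
deletion = expires + timedelta(days=75)  # roughly when the domain is deleted
# The domain's information stays publicly visible (and privacy stays
# active) until the deletion date, about two and a half months later.
```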

        The interaction between these factors can (and, unfortunately, sometimes does) lead to what we could euphemistically call a bad member experience, and it’s one of a few specific things about our service that contribute to the creation of angry ex-members.

        To combat that, we have long offered the option to prepay RespectMyPrivacy service on domains in exchange for a 10% discount. It generally doesn’t benefit us for people to pay in advance, which is why most services don’t offer that type of discount, but in this case, giving up that 10% is worth it to prevent the chance of overdrawn accounts and the occasional nasty comments about our character that sometimes result. But anecdotal reports indicate that relatively few people know about that option.

        As a result, we’re automatically going to apply privacy prepayment, with the discount, during all future registrations and renewals of domains that use our privacy service. We will soon expand that to prepaying privacy on auto-renewals and transfers as well. If you have a domain that isn’t currently prepaid, nothing will change until you renew it. (But the option to log in and manually prepay is still there if you want to save the money.)

        That’s similar to what other providers do. Unlike most of them, though, privacy service will remain fully pro-rata refundable if you remove it from a domain or if you transfer the domain away prior to its deletion. So this is mostly a price cut and a convenience increase, at the cost of a slight loss of flexibility that has limited utility and a nasty habit of blowing up in our faces. 🙂
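        For concreteness, here’s a sketch of how the prepayment and refund math works out. The per-day price here is hypothetical; the 10% discount and the pro-rata refund rule are the ones described above:

```python
DAILY_RATE = 0.01   # hypothetical privacy price per day
DISCOUNT = 0.10     # the 10% prepayment discount

def prepaid_cost(days=365):
    """What prepaying privacy for the term costs with the discount applied."""
    return days * DAILY_RATE * (1 - DISCOUNT)

def prorata_refund(days_remaining, days_total=365):
    """Refund for the unused part of the term on removal or transfer."""
    return prepaid_cost(days_total) * days_remaining / days_total
```

        So removing privacy with 100 days left on a 365-day term would return roughly 100/365ths of the discounted prepayment.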

        How-To: Django on NearlyFreeSpeech.NET Mon, 17 Nov 2014 00:18:38 +0000 Now that our persistent process feature is out of beta, this is the first in a series of brief tutorials designed to show how to make use of the feature. In this example, we’ll deploy a minimal Django site using WSGI. Although a lot of this is specific to Django, it also demonstrates most of the steps you would use with other frameworks, like Node.JS or Ruby on Rails. (And we’ll be adding how-to articles for those in the future.)

        Getting your site ready for Django

        First, create the site. When you get to the “Server Type” panel, select the “[Production] Apache 2.4 Generic” option.


        (You can also use the “[Production] Custom” option; it’s faster if you want Django to serve the whole site, but in this example, we’re also going to demonstrate how to let our Apache server handle a directory of static images.)

        Once that’s done, you’ll immediately notice the new “Daemons” and “Proxies” boxes on the site info panel:


        but you can ignore those for now. We’ll get back to them.

        If it’s still 2014 when you read this, our base Django environment hasn’t been around very long, so it hasn’t had time to work its way into the default realm for new sites. (That’ll be happening in January 2015, so if you’re reading this in the future, you may be able to skip this step. Also, hello future, please send lotto numbers!) So for now you’ll need to update your site realm to indigo or white to get the newest code. Just click the “Edit” button on the “CGI/SSH Realm” line of your site’s Config Information box:


        And choose the “indigo” or “white” realm. For this article, we’ll use the indigo realm:


        Install Django via ssh

        Next, log into the ssh server to set up the actual Django app.

        $ ssh jdw_django@nfsnssh
        [django /home/public]$ mkdir images
        [django /home/public]$ cd /home/protected
        [django /home/protected]$ mkdir django
        [django /home/protected]$ cd django/
        [django /home/protected/django]$ django-admin startproject helloworld .
        [django /home/protected/django]$ python manage.py migrate
        Operations to perform:
        Apply all migrations: admin, contenttypes, auth, sessions
        Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying sessions.0001_initial... OK
        [django /home/protected/django]$ cd ..

        If you were expecting a bunch of stuff here involving Python’s virtualenv feature, you can totally do that if you want. It’s handy if you need a bunch of python modules we don’t provide. We don’t need it for this article, but if you need it, you probably already know what it is, how it works, and where to insert it into the steps above.

        Next, we need a run script. A run script is how our system starts your daemon. You can use it to customize command line arguments and environment variables (or to jump into a Python virtualenv) before your daemon starts. The main thing to be aware of with run scripts is that they need to run the actual daemon in the foreground, which can sometimes be tricky. But that’s how Django rolls anyway, so we won’t have any problems there.

        You can use whatever text editor you want to create your run script. (Just make sure if you create it on Windows that it winds up with Unix line endings.) Ordinarily I would use the one true editor (vi) at this point, but the run script is very simple and vi isn’t photogenic, so we’ll just enter it directly:

        [django /home/protected]$ cat > run.sh <<NFSN_FEEL_THE_POWER
        > #!/bin/sh
        > exec python manage.py runserver
        > NFSN_FEEL_THE_POWER
        [django /home/protected]$ chmod a+x run.sh

        At this point, django is pretty much set up. If you want to prove it, you can try running it from the command line:

        [django /home/protected]$ cd django/
        [django /home/protected/django]$ ../run.sh
        Performing system checks...

        System check identified no issues (0 silenced).
        November 16, 2014 - 21:36:18
        Django version 1.7, using settings 'helloworld.settings'
        Starting development server at http://127.0.0.1:8000/
        Quit the server with CONTROL-C.

        Now, the ssh server is a restricted environment, so you can’t access anything running there from anywhere but there. So we can open another ssh window to check it out:

        [django /home/public]$ curl -i http://localhost:8000/
        HTTP/1.0 200 OK
        Date: Sun, 16 Nov 2014 21:39:04 GMT
        Server: WSGIServer/0.1 Python/2.7.8
        Vary: Cookie
        X-Frame-Options: SAMEORIGIN
        Content-Type: text/html

        <!DOCTYPE html>
        ... blah blah blah ...
        <h1>It worked!</h1>
        <h2>Congratulations on your first Django-powered page.</h2>
        ... blah blah blah ...

        Looks good! So now we can close the second ssh window, and go back to the first one where we’ll see our footprints:

        [16/Nov/2014 21:39:04] "GET / HTTP/1.1" 200 1759

        From there, follow the instructions to quit the server (hit CONTROL-C). But leave this ssh session around. We’ll come back to it later.

        Now, we’ve got to tell our system about Django, so it will get started (and, if ever necessary, restarted). Back to the UI!

        Telling our system about Django

        First, we’ll add a Daemon for Django from the Site Information panel in the member interface:


        Shocking no-one, this will need some configuration:


        The tag is just a short name for the daemon. Tags are unique on a per-site basis, so everybody can have a django of their very own, but only one per site. (If for some reason you needed another, there’s nothing wrong with django2.) It’ll also need to know the name of the run script we created and where to run it from. In this case, we want to be inside the Django directory so the run script will be able to find manage.py. And we run it as the web user, which is what you should always do for a daemon that serves web pages. (Other types of daemons, like custom databases, should probably run as “me.”)

        Next, we’ll have to add two proxy entries, one to send most of the site’s traffic to Django, and one to exclude some static files we don’t want Django to handle.

        The first proxy entry will send most of the site’s requests to Django. It’s added from the Site Information Panel:


        And configured like this:


        Python takes care of mapping HTTP to WSGI for us, so this is an HTTP proxy. It’s handling the whole site, so the base URI is /. The document root value is usually / unless your custom server needs something different. (For example, PHP-FPM wants the absolute path to your site’s top-level PHP files.) Any port from 1024 to 65535 can be used as long as the same value is used both in our UI and in the configuration of the daemon. We’ll use 8000 for the target port because that’s what Django already said it wanted when we ran it on the ssh server above. And unlike the ssh server, you don’t have to worry about what anyone else is doing. Every site can use whatever ports are needed in this range.

        If we wanted Django to handle absolutely the whole site, we’d use the “Direct” option to bypass Apache entirely. That’s faster and scales better, so it’s often a good choice. Our network will still automatically reverse proxy your static content whenever possible, so it doesn’t much matter that Django isn’t optimized for that.

        But here we want to exclude the /images/ directory, so it doesn’t get sent to Django. To do this, we’ll leave Direct unchecked, add that proxy, and then go back to the Site Information panel to add a second entry:


        And configure it as a “none” option, which tells our system to send requests for some URLs back through Apache to a directory under public:


        In this case, we want /images/ to point to the “images” directory we created in /home/public way back at the beginning, so both paths will be “/images/” as shown. The port value doesn’t matter for a “none” proxy and won’t be used.


        Once this is all done, Django is ready to spring into action. Our UI should look like this:


        And the live site looks like this:


        (Assuming you use an improbably small but conveniently-screenshot-sized browser window. Also note that we served the image above from the django site’s static images directory we set up.)

        Of course, when it says “you haven’t actually done any work yet,” it’s understating the case a little. Setting up Django isn’t effortless, but it is pretty easy.

        Interacting with your pet Daemon

        Now, if we head back to ssh, we can interact with our daemon a bit. First, we’ll check out its output. This is particularly helpful for troubleshooting a run script in case your daemon won’t start.

        [django /home/public/images]$ cd /home/logs
        [django /home/logs]$ ls
        daemon_django.log
        [django /home/logs]$ cat daemon_django.log
        [16/Nov/2014 22:05:41] "GET / HTTP/1.1" 200 1759
        [16/Nov/2014 22:05:42] "GET /favicon.ico HTTP/1.1" 404 1935
        [16/Nov/2014 22:33:40] "GET / HTTP/1.1" 200 1759
        [16/Nov/2014 22:56:45] "GET / HTTP/1.1" 200 1759

        But you can also connect to your daemon if you want.

        [django /home/logs]$ curl -i http://django.local:8000/
        HTTP/1.0 200 OK
        Date: Sun, 16 Nov 2014 23:37:39 GMT
        Server: WSGIServer/0.1 Python/2.7.8
        Vary: Cookie
        X-Frame-Options: SAMEORIGIN
        Content-Type: text/html

        ... blah blah blah ...

        This isn’t super-useful for Django, but it’s handy for other processes like databases, so you can connect to them with admin tools. Just change “django” to your actual site’s short name as shown in our UI.
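        The same connection can be made from code instead of curl. Here’s a sketch in Python 3 using only the standard library; django.local follows the naming pattern above, with “django” standing in for your site’s hypothetical short name:

```python
import urllib.request

def fetch_status(host="django.local", port=8000):
    """Ask the daemon for / and return the HTTP status code it answers with."""
    url = "http://%s:%d/" % (host, port)
    with urllib.request.urlopen(url) as resp:
        return resp.status
```

        A database daemon would use its own client library instead, but the host-and-port addressing works the same way.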

        From here, the next step is to create an amazing and cool Django-powered site hosted on our service. That is left as an exercise for the reader.

        If you want to learn more about Django, check out the DjangoGirls tutorial. (Also works for boys.) If you’ve done all the steps above, you can try picking up their tutorial here.

        That’s it for this intro to the persistent process feature. Next time, Node.JS!

        More power, more control, more insight, less cost. Wed, 24 Sep 2014 19:51:35 +0000 We’ve made some very big changes to the fundamental nature of our service. We now support persistent processes and delegating requests for your site to those processes using HTTP, SCGI, or FastCGI. We’ve added the ability to graph up to two weeks of your site’s resource usage. And we’ve slashed resource charges by almost 70%.

        More power & more control.

        Ever since we started, one of the inviolable limitations of our service is that we don’t let you run your own persistent processes. No more! Using the technology underpinning our NFGI PHP implementation, we’ve added support for configuring and controlling persistent processes from our member interface. These processes run as part of your web site, and when you’re not around, our system takes care of them. For example, it keeps an eye on them and can restart them if they exit.

        There are two main uses for this feature.

        First, although we support a huge variety of languages for web development, unless your favorite language was spelled PHP, you were stuck running it as a CGI script. For any kind of modern web framework, that causes some substantial performance problems. Ruby on Rails, Django, Catalyst, Node.JS, Network.Wai? Sure, technically you can run them here, but it’s not what you’d call a good idea.

        Take that long-standing well-known fact about our service, and pitch it straight into the trash. We’ve built a new server type that can delegate incoming requests to the web application server of your choice running as a persistent process. This currently supports HTTP, FastCGI, or SCGI. (Although it’s not clear that anyone actually uses SCGI.) You can even mix-and-match technologies on different paths of the same site. Or with HTTP, you have the option to take advantage of our Apache-based infrastructure or to bypass it entirely for maximum speed and control. (Either way our edge network will still have your back, accelerating static content and protecting you from a variety of web ne’er-do-wells, so you can focus on your app.)
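        To make that concrete, here’s a minimal sketch (in Python 3, using only the standard library) of what a persistent web process looks like under the HTTP option. The port number is hypothetical; it just has to match the target port configured on the proxy entry:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8000  # hypothetical; must match the target port set on the proxy entry

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a persistent process!\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the daemon log quiet

def serve(port=PORT):
    # A run script would call this and stay in the foreground, e.g.
    # by ending with: exec python3 daemon.py
    HTTPServer(("", port), Hello).serve_forever()
```

        The important part is that the process listens on its configured port and runs in the foreground; our system handles starting it, watching it, and routing your site’s requests to it.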

        Second, there’s no requirement that your persistent process be a web app. Although nothing but web apps will be accessible from outside our network, from inside our network, you have a lot more flexibility. Want a private memcached instance to turbocharge your site? You got it. Redis? Check. PostgreSQL? Now possible.

        This feature is going to go through a very brief open beta period for a couple of weeks; we want a bit of a slow start because trying to anticipate all the creative things our members will do is simply impossible, so we want to keep a close eye on the first handful of setups to see if anything needs tweaking.

        During the beta, persistent processes will only be available on the proxy site type. If that’s not what you need, but you still want to try this out, you can always set up a process on a proxy site and talk to it from another site hosted here. Or you can just set up PHP-FPM.

        To get in on this now, just log in to our member interface and submit an assistance request. We’ll convert the site(s) of your choice and you can take it from there; these features are fully implemented and integrated into the UI. Just start by setting the server type of your site to “Custom” or “Apache 2.4 Generic” and the Proxy and Daemon management options will appear on the Site Information Panel.

        As this comes out of beta, watch our blog for a series of tutorial articles demonstrating how to use this feature in conjunction with various technologies.

        Although it might seem like it at first glance, this does not turn our service into a VPS. We (still) have no interest in going down that road. This feature is immensely powerful, but it doesn’t include root access and can’t make anything but web services available over the Internet. If you want to run an email server, voice chat server, or game server, you’ll still need a VPS for that.

        This is designed as an alternative for people running web sites who might have felt forced to move to a VPS or dedicated server in the past to get more flexibility (at the cost of all the 24×7 administrative headaches that come with it). Like all of our services, it’s pay-as-you-go and fully managed, which means that we’re the ones keeping all that infrastructure up-to-date, secure, tuned, and running. You just do what you do. We’ll take care of the rest.

        More insight.

        One of our most common requests, which has always faced a technical barrier, is giving you better visibility into the resource usage of your site. We’ve now been able to add an interactive JavaScript chart that shows up to two weeks of RAM & CPU usage for most currently-supported server types (all the Apache 2.4 + PHP site types and the new proxy-based server type).

        As part of this change, we’ve introduced a new internal tracking system that can take resource measurements on a per-site basis for a bazillion tiny sites without collapsing under its own weight. Naturally we’ve applied that directly to our resource-based billing. So if the random sampling element of our resource billing made you uncomfortable, just make sure your site is using one of the above types, and your usage will be calculated using the new, deterministic method.

        In the future, we hope to expand on this functionality to gather and report more statistics about your site that will help you explore and optimize your site’s resource usage.

        Less cost.

        Our resource-based billing has been a huge success. Between the economies of scale associated with the popularity of this model and the slow decline in the cost of hardware, we’ve been able to greatly reduce what we charge for resources. The cost of a resource accounting unit (RAU) has dropped from 3.25 RAUs = $0.01 to 10.59 RAUs = $0.01. That’s almost a 70% drop. You don’t need to do anything to take advantage of this; it’s automatically applied to all resource-billed sites. This change means that over 2/3rds of resource-billed dynamic sites (and well over 99% of static sites) now pay less than $0.01/day for their resource usage.
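        The arithmetic behind that figure is easy to check. Here is the price drop worked out, using only the numbers from the paragraph above:

```python
# Dollars per resource accounting unit (RAU) before and after the change.
old_cost_per_rau = 0.01 / 3.25    # 3.25 RAUs = $0.01
new_cost_per_rau = 0.01 / 10.59   # 10.59 RAUs = $0.01

drop = 1 - new_cost_per_rau / old_cost_per_rau
print(f"Price drop: {drop:.1%}")  # about 69.3% -- "almost a 70% drop"
```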

        These are all great changes. They will make more power available to more people for less money. But these changes are all steppingstones. Each one can be improved, extended, enhanced. And that’s not all. We’ll have more good stuff for you in the coming months. But now, we get to my favorite part, where we hand this stuff to our members and see the amazing things they’ll do with it.

        Updated 2014-11-16: The daemon and proxy features went out of beta a few weeks ago with almost no fanfare. We’ve updated this entry to reflect the non-beta setup process; also be sure to check out the first in our how-to series about these features, which covers Django.

        Improving support and how we talk about it
        Sat, 16 Aug 2014 03:45:59 +0000

        Looking back, it’s hard to believe we’ve been using our new subscription-based support system for almost eight months. Wow, time flies. Overall, we’ve been very happy with this change. We really enjoy getting to spend time working with our members and helping them be successful with our service. The response from the people who’ve used it has also been almost uniformly very positive. So, right now, we feel like we finally got it right. Which isn’t to say that we can’t do even better.

        It’s hard even to describe how fantastic it is for support to (finally) be self-financing like the rest of our service. It’s good for everybody, even people who don’t ever use it, because support no longer takes time and money away from other areas. To give just one example, it’s enabled us to put more sysadmin time into managing software upgrades, so we have more realms, more software packages, and upgrades make it to you quicker. Case in point: yesterday, the PHP developers released PHP 5.3.29, a final post-EOL release of PHP 5.3 (which we salute them for doing). It went live on our system several hours ago. That used to take us two weeks.

        A rose by another name…

        We have found that there’s still room for improvement, though, particularly in terms of how we talk about support with people who don’t understand the subscription model. The best way we’ve found to explain how it works is this: there is a choice inherent in our system. If you want individual support, we offer that for a small monthly fee, just like other hosting companies do, and the service we provide is a very good value. If you (like most of our members) are comfortable supporting yourself or using our community-based support options, you can get a substantial discount by forgoing individual support.

        Problem is, that isn’t how we were explaining it. So we’re revamping how we talk about subscription-based support quite a bit, though we’re making very few actual changes to how it works. We’re starting with the name. The “support subscription” is now called the “subscription membership.” If you don’t have one, you have a “baseline membership.”

        Rather than having support as an optional add-on to your membership, we’ve decided to present the “support or not” choice as two types of memberships: baseline and subscription. The baseline membership is exactly what everyone is used to: a membership where the only costs are based on the hosting resources used, and support is available through the forum, FAQ, and wiki. The subscription membership has a $5.00 setup fee the first time, costs up to $5.00 per month, and includes individual support through email and our site.

        So, basically, exactly the same as before, except we’re presenting it as an either-or choice between two options that include different things rather than one plan with an optional add-on. This seems to be a lot more familiar to a lot of people, as this approach is common not just among hosting companies but also among all sorts of software products and services. It also lets us present the two choices in an easy-to-understand table.

        But don’t worry, we’re still us and it’ll still be a cold day in hell before we start offering “Gold,” “Silver,” and “Bronze” packages and filling up that table with lots of “UNLIMITED,” exclamation points!!!, and misleading monthly costs based on paying for thirty years in advance. We just want the people coming to us from those companies to better understand what they are (or aren’t) getting from us.

        We’ve also changed how we describe the term of subscription membership. With support subscriptions, there was a minimum term of five months. With subscription memberships, there is a one-time fee for switching back to baseline membership that decreases over time, reaching zero after five months. These are almost identical. The only difference is that with a support subscription, a person who wanted to get rid of it had to make a note on their calendar to log in after the five months and switch it off. Subscription memberships can be switched off at any time. Importantly, subscription members who pay the switch-back fee still get the benefits of subscription corresponding to what they’re paying. So both plans end up costing the same and providing the same benefits for the same length of time. Just a big change in description, and a minor improvement in convenience.
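        To make the equivalence concrete, here’s a hypothetical sketch of a switch-back fee that declines to zero over five months. The linear schedule and the $5.00 monthly rate are our illustrative assumptions, not the published fee table; the actual amounts are shown in the member interface.

```python
MONTHLY_FEE = 5.00  # assumed: the (up to) $5.00/month subscription rate
MIN_MONTHS = 5      # the fee reaches zero after five months

def switch_back_fee(months_subscribed: int) -> float:
    """Hypothetical fee to switch back to a baseline membership."""
    remaining = max(0, MIN_MONTHS - months_subscribed)
    return remaining * MONTHLY_FEE

# After one month, four months' worth of fee remains; after five, nothing.
print(switch_back_fee(1))  # 20.0 under these assumptions
print(switch_back_fee(5))  # 0.0 -- free to switch back
```

Under these same assumptions, switching back immediately would cost the full five months, which lines up with the $25.00 upper bound mentioned below for re-subscribing.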

        The only material change we’ve made affects only people who have a baseline membership, switch to a subscription membership, switch back to a baseline membership, and then switch to a subscription membership again. The cost to set up the first subscription will be the same as before, $5.00. The cost to set up the second subscription, however, will range from $0.00 to $25.00 based on how long they were unsubscribed. The purpose of that is very simple and we’re not going to try to pretend it’s anything other than what it is: a way to ensure that unless that person waits a very long time between subscriptions, they were better off staying subscribed. Support subscriptions were never intended to be per-incident support, and the same is true of subscription memberships.

        (We remain uninterested in providing per-incident support; we’ve learned our lesson there. The subscription plan is definitely a case where we did the right thing after exhausting all the other options.)

        Making life better for baseline members

        Aside from support, there are a few things that come up where people need us to do stuff. This includes things like recovering a domain that’s gone into redemption, restoring files that have been deleted, or having us generate an SSL certificate for you. After we introduced subscription-based support, people who weren’t subscribers had a rough time getting access to that stuff, and the workarounds we came up with were, charitably, suboptimal. (“Open an unpaid support ticket and then post the issue number on the forum so we can find it.” Ugh.)

        To address that, we’ve expanded our Assistance Request feature. These new assistance requests aren’t free, but they outline the costs right in the description and are available to all members. There is still an advantage for subscription members, though. Since they’re already funding our ability to have people there to interact with them through their subscription fee, they get almost all of these at lower or no cost.

        This will make things a lot easier for baseline members who run into those weird edge cases where they need something specific from us, but for which a support subscription is not appropriate. If we find more things that fit that bill, we’ll add them.

        Cleaning up our support tab

        The other major issue we’ve found was the support tab of our UI. There’s a word that is often applied to well-designed interfaces: discoverability. It refers to how easy it is to figure out how the interface works just by browsing around. The old support tab lit a poop-filled bag on discoverability’s front porch and ran off. We have a bunch of different support options to choose from with almost no overlap, so it’s easy to pick the wrong one. And effectively all of them were at least two clicks from the support tab, often hidden behind labels or link text that was, sure, relevant (if you squint a little). This worked fine for the (admittedly large) percentage of our membership that was born with genetic knowledge of how hosting works. But for people who were already having a rough time and were looking for help, it tended to kick them while they were down. Which is not cool. So we’ve cleaned that up.

        The new support tab does three basic things:

        • It lists your open and recent issues (if any).
        • It briefly explains the difference between baseline and subscription membership and tells you which one you are and how to change.
        • It enumerates and directly links to all the support options available to you and, if you don’t know which one to use, gives you a specific recommendation about where to start based on what type of membership you have.

        We’ve separated our system status out to its own page, which shows both open system issues and anything from our Twitter feed that’s marked as a network status update (#status). We’ve also added notifications about open system issues to the main member page so you’ll see them right when you log in.
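        As a toy illustration of that marker (the feed handling here is ours for the example, not the actual implementation), pulling status updates out of a feed is just a filter on the #status tag:

```python
def status_updates(tweets: list[str]) -> list[str]:
    """Keep only tweets tagged as network status updates."""
    return [t for t in tweets if "#status" in t.lower()]

feed = [
    "New blog post: software updates",
    "#status Brief connectivity blip; now resolved.",
]
print(status_updates(feed))  # only the #status entry survives
```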

        It’s our hope that the changes to our site will make support easier to find and obtain, and that the changes to how we talk about support will make the cost vs. support tradeoff easier to understand, and help each member feel like they’re choosing the right option for them.

        Automatic file server upgrades
        Fri, 01 Aug 2014 06:28:21 +0000

        As most of our members are aware, one of our older file servers, f5, has been causing intermittent problems. The time has come to move the sites still using it to newer, faster, more reliable equipment. The ability to do that manually has been available in our UI for about a week now, and it has not surprisingly been pretty popular. But after that server caused additional downtime this past week, we’re moving to the next phase: moving sites automatically.

        We’ve been testing the replacement file servers for some time now, with hundreds of test sites and various use cases, and they have done very well. Naturally, we’re still paranoid that something will go wrong, but in addition to the testing we have an aggressive backup and replication schedule. So it’s time to move ahead.

        Beginning August 4th and continuing through the end of the month, we will start automatically migrating affected sites. If you have any, they are marked with an asterisk on the Sites tab in our UI, with more details on the Site Info Panel for each affected site. The Site Info Panel will also let you adjust the scheduled upgrade, allowing you to migrate a site at any time or (to an extent) postpone an upgrade that is scheduled at a bad time for you.

        Most sites don’t need to make any changes as a result of this migration. Based on our testing and the sites that have voluntarily migrated thus far, less than 1% of sites need anything modified to continue working after the upgrade. These changes are related to hardcoded absolute paths that won’t be valid after the migration, i.e. anything starting with /f5/sitename/. These fall into two broad categories.

        First, .htaccess files. If you’re using HTTP basic authentication or something similar in your .htaccess file that uses absolute pathnames, those will have to be changed after the migration. You’ll be able to get the new path to use from your site info panel after the migration.
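        For example, a basic-auth .htaccess that pins the password file to the old server might look something like this (the path is illustrative; your real pre- and post-migration paths come from the Site Info Panel):

```apache
AuthType Basic
AuthName "Members Only"
# This absolute path breaks after the migration off f5 -- update it to
# the new path shown on your site's Site Info Panel.
AuthUserFile /f5/sitename/protected/.htpasswd
Require valid-user
```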

        Second, if you’re still using PHP 5.3 Fast and you have hardcoded paths in your PHP code, those will also need to be updated. Using hardcoded paths in this situation was never recommended; it’s always preferable to use a preset variable like $_SERVER['DOCUMENT_ROOT'] or $_SERVER['NFSN_SITE_ROOT'] if at all possible. PHP 5.3 has also been obsolete for a long time. So if you find yourself in that situation, this is a great time to upgrade that from our UI as well. You’ll still have to change the paths, but this will be the last time. All the currently-supported versions of PHP (5.4 and later) use /home-based paths, just like CGI and ssh, and those never change.

        To help you find out if your site needs to be modified, we’ve developed a scan that runs during the migration. When the migration is finished, it will email you to let you know it’s done and whether or not it found any potential problems. It may not catch every possible issue, but it does a very good job.
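        We won’t pretend this is the real scan, but a minimal sketch of the idea (flag any file that mentions an absolute path under the old /f5/ root) could look like:

```python
import re
from pathlib import Path

# Pattern for absolute paths under the old f5 file server root.
OLD_ROOT = re.compile(r"/f5/[\w.-]+/")

def find_hardcoded_paths(site_root: str) -> list[str]:
    """Return files under site_root that mention an old /f5/... path."""
    flagged = []
    for path in Path(site_root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        if OLD_ROOT.search(text):
            flagged.append(str(path))
    return flagged
```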

        Once f5 is no longer in use, it’ll be tempting to give it the full Office Space treatment due to the problems it has caused, but the truth is that it served us incredibly well for a long time, so giving it a salute as it’s launched into a decaying orbit toward the sun would better fit the totality of its service. (Although that’s admittedly not in the budget, so recycling is a more likely outcome unless the console prints “Will I dream?” as we shut it down for the last time, in which case we probably won’t have the heart.)

        Although only a tiny fraction of our members will have even minor problems with this upgrade, each and every one of our members and each and every one of their sites is important to us. If you do run into any snags related to migrated sites (or, really, anything else), please feel free to post on our forum and we’ll do what we can to help you out. (But please don’t post about them here; blog comments are a terrible venue for providing support, second only to Twitter in sheer awfulness and unnecessary difficulty.)

        Software updates… update
        Thu, 03 Jul 2014 05:40:12 +0000

        We’ve released some software updates that bring new options for both PHP and CGI. PHP 5.5 is upgraded from beta to stable, a PHP 5.6 developer preview is available, and new stable-track and beta realms offer updated language versions and software tools for ssh and CGI usage.

        PHP 5.5 is out of beta

        We have upgraded PHP 5.5 from beta to production status. We’re going to kick it around for a couple more months before it becomes the default for new sites, mainly because we want to be very sure that WordPress has finally ironed out all the plugin-related kinks people are likely to encounter on a new install.

        While PHP 5.5 was in beta, it was offered only in stochastic flavor with the resource charge waived. This state of affairs will continue until August 1st, then charges will revert to normal. Within the next couple of weeks, we will create a non-stochastic PHP 5.5 server type for those people who prefer that option.

        Once PHP 5.5 becomes the default for new sites, we will phase out the ability to create new PHP 5.3 sites or downgrade existing sites to it. Sites currently on PHP 5.3 will not be forced to upgrade for as long as we can reasonably continue to support it. (And if PHP 5.2 was any example, that might be a pretty long time.) What we may do, however, is attempt to determine how much it costs us to maintain PHP 5.3 (which is no longer supported by its developers) and apportion that cost to sites still using it so that as the number falls, the cost for the people who don’t want to switch will rise, which should eventually create a feedback loop that helps get everybody up to date. But we’re a ways off of that yet and if we do pursue that, it will be announced in advance and be trivial to avoid by upgrading to a currently-supported version.
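        The feedback loop described above is easy to see with numbers. Here’s a hypothetical sketch (the maintenance cost and site counts are made up for illustration):

```python
def per_site_surcharge(monthly_maintenance_cost: float,
                       sites_remaining: int) -> float:
    """Split a fixed maintenance cost across the sites still on PHP 5.3."""
    if sites_remaining == 0:
        return 0.0
    return monthly_maintenance_cost / sites_remaining

# As sites upgrade away, each holdout's share of the fixed cost rises,
# nudging everyone toward a supported version.
print(per_site_surcharge(100.0, 100))  # 1.0
print(per_site_surcharge(100.0, 10))   # 10.0
```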

        PHP 5.6 developer preview

        We have also introduced an experimental PHP 5.6 server type. Like the 5.5 beta, it is available as a stochastic type with waived resource charges. Unlike the 5.5 beta, PHP 5.6 has not even been released yet by its developers; it is still being finalized and tested. The version currently on offer at the time of this writing is 5.6-RC1. Beta services aren’t supported for production usage even though they usually work fine; this is well beyond even that. Don’t use PHP 5.6 for anything remotely important yet. At present, it’s best suited for developers who want to experiment with the new language features or test their software for compatibility.

        New site realms and updates

        Site realms are the collection of software used over ssh and by CGI applications. For the third quarter, they’ve seen several updates:

        • We’ve released two new realms, one stable-track (“blue”) and one beta (“white”).
        • The experimental (“black”) realm has seen upgrades to perl 5.20 and Ruby 2.1.
        • The “green” stable realm has become the default for new sites. (We’re excited about that, as it is the first 64-bit clean stable realm, which allows the return of Go and Racket as languages supported out of the box.)
        • The “orange” stable realm moves to deprecated status and will no longer receive regular updates.

        Although we don’t like involuntary upgrades due to the possibility of causing disruption, they are a necessity from a standpoint of keeping our systems secure and up-to-date without creating a steadily escalating workload that would drive prices up and responsiveness down.

        As we accrue stable realms, we’ve been continually revisiting the question of how long to allow people to continue using deprecated realms. Our current target is to let you go up to a year without an involuntary upgrade. The “red” realm will be the first one to see this happen; around the first of the year we will start bumping red sites to the then-default stable realm.

        However, there are still a few stragglers in pre-rainbow realms. We plan to start bumping sites off of those realms starting in October of this year. When the newly-released blue realm becomes the default for new sites, we will start involuntary migrations for anyone using the following obsolete-years-ago realms: freebsd6, freebsd72, 2011Q4. Because those are so far out of date, compatibility issues are likely (if the sites using such old stuff are even still active), so we strongly encourage people to upgrade on their own timeline rather than running out the clock.

        We are also going to phase out the “yellow” beta realm very quickly. The “white” realm is basically the same thing, only it is 64-bit clean with all the goodness that brings. So beginning in August of this year we will start moving everyone from “yellow” to “white” and then the “yellow” name will be retired for a while.

        For any site staring down the barrel of an involuntary realm upgrade, we will update our member interface to make this as visible as possible.

        Other housekeeping notes

        The following unrelated projects are still underway, and will receive separate updates when we have more to announce:

        • Migrating to new file server technology: Progress is slow but steady. We ran into some hardware issues which we believe are now resolved and currently we are focused on training on the new setup and on doing everything possible to minimize any inconvenience associated with the migration.
        • Expanding NFGI to support more languages: This has proved a little trickier than expected, but work is progressing. We are trying to see if we can develop a compatibility layer that would let us target a bunch of technologies at once.
        • There has also been a lot of physical reorganization going on to increase redundancy and shift some things around to make room for all the new stuff being added on both the front end and back end to improve performance, reliability, and scalability. We’re doing everything we can to prevent this from being service-affecting or requiring any scheduled downtime.

        That’s all for now!
