NearlyFreeSpeech.NET Blog: A blog from the staff at NearlyFreeSpeech.NET

Automatic TLS is now a thing
Posted by jdw, 2024-05-10

We are rolling out new automatic TLS infrastructure that does not require members to set up or maintain anything. This means that, for new sites, aliases will get TLS automatically within a few minutes after they are set up and working. This works transparently with all site types, including custom processes and proxies. It doesn’t cost anything, you don’t have to do anything to set it up, and you don’t have to do anything to renew it.

Existing sites that we can detect are using tls-setup.sh will be migrated to this setup over the next few weeks. That process is completely transparent, and our system attempts to disable the tls-setup.sh scheduled task once it is complete. Once that’s done, we’ll start adding automatic TLS to other existing sites. Our goal is to have TLS available on all aliases of all sites hosted here by the end of June. We will be monitoring the rollout and taking steps to improve the diagnostics and reporting.

This doesn’t affect the ability of sites to be accessed via HTTP, although we (continue to) strongly discourage that.
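If you want to steer visitors from HTTP to HTTPS yourself, a site-side redirect still does the job. Here is a minimal sketch using generic Apache mod_rewrite (not anything specific to our infrastructure; if TLS terminates in front of your Apache instance, you may need to test a forwarded-protocol header such as X-Forwarded-Proto instead of %{HTTPS}):

```apache
# .htaccess: send all plain-HTTP requests to HTTPS (generic sketch)
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```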

If this has been enabled for your site, you’ll see the 🔁 emoji next to aliases other than the permanent .nfshost.com alias in the Site Names & Aliases box on your site’s Site Information panel in the Member Interface.

Our special thanks to Let’s Encrypt, whose service provider integration makes this possible.

Small Christmas upgrades
Posted by jdw, 2023-12-26

We’ve got a couple of small updates to announce today:

  • One server type to rule them all?
  • A forum facelift.

The Kitchen Sink: One server type to rule them all?

For quite some time, we have offered the “Apache 2.4, PHP, CGI” type for PHP users and the “Apache 2.4 Generic” type for people who want to run Node.JS or other custom web server processes. It’s possible to run PHP under Apache 2.4 Generic as FastCGI, but it often requires you to rethink the structure of your application. That can be tricky if you didn’t write the application. There are workarounds but… they kinda suck. We’ve heard from several people in that position that they hate being in that position and wish they could just have both. Now, they can. We’ve added a server type called “The Kitchen Sink” that includes support for Apache, native PHP support, CGI, and custom daemons and proxies. This is great for people who need to run apps that are part PHP and part not, as well as for people with PHP apps who want to jam something like Memcached or Redis in there with it.

“The Kitchen Sink” is available on an experimental basis right now; from the Site Information panel, it’s enabled by editing your site’s service type. It may… or may not… get a different name when it leaves experimental status.

A forum facelift

As some of our longtime members may know, our member forum was originally based on phpBB 2. phpBB 2, however, reached end-of-life in 2009, so we’ve long since ditched most of the innards in favor of more modern, up-to-date code designed for PHP 8, not PHP 3. What we haven’t ditched, until now, is the now-brutally-outdated early-2000s-era aesthetic. We’ve made some behind-the-scenes fixes over the last couple of weeks designed to make the forum easier to understand and use. We’ve also finally put some effort into the design. That update went out today.

This is nothing groundbreaking; we tend to avoid “groundbreaking” when it comes to usability. But it’s a solid update from the early 2000s to at least the mid-2010s. This also isn’t the most important thing in the world but we wanted to close out the year with something a little bit fun but still beneficial. It’s been an absolutely wild year!

We hope the functional and design changes will make the forum easier to browse and read and more pleasant to interact with. I’ll still post there as my usual curmudgeonly self, though; apparently, no upgrade can fix that.

More information about the forum updates is available in the forum.

That’s all for now! More updates to come as soon as we’re finished tamping down the bugs flushed out by these!

Bigger, better, faster, more
Posted by jdw, 2023-08-22

I debated whether to write a humorous intro, but I’ve ultimately decided it’s more important to get succinct information out to everyone, so here’s the TLDR:
Over the next few weeks, we will migrate NearlyFreeSpeech.NET to all-new equipment and greatly upgraded network infrastructure.

  • We’re replacing our Intel Xeon servers with brand-new AMD Epyc servers.
  • All our existing file storage will be migrated from SATA SSDs to NVMe PCIe 4.0 SSDs.
  • Most of our content will be served from New York City rather than Phoenix after the upgrade.
  • Various things may be intermittently weird or slow for the next couple of weeks as we shift them around, but we’re working hard to minimize and avoid disruptions to hosted services.

NearlyFreeSpeech goes Team Red

There’s no question that Intel has been good to us. Xeons are great processors. But these days, AMD Epyc… wow. The processors aren’t cheap, but the compute performance and I/O bandwidth are outstanding. 128 PCIe 4.0 lanes? Per CPU? Except for the speedup, this change should be transparent to most people. By and large, we’ve tried to protect people from building things too specific to exact CPU models by masking certain features, but there is probably some random instruction set supported on the old machines that isn’t present on the new ones. So if you’ve done something super-weird, you may have to recompile.

I don’t want to make any specific promises about performance. After all the speculative branch execution fixes, the security layers needed for our system to protect you properly, and other overhead, these things never quite reach their maximum potential. But, so far, they’re so fast!

Here’s the catch. Some ancient site plans bill based on storage space but not CPU usage. These plans have been gone for about ten years. They were an incredibly bad deal for people who wanted to store lots of data, but it cost basically nothing if your site was tiny and used lots of CPU. That wasn’t sustainable for us. We grandfathered those sites at the time because we’ve always paid a flat rate for a fixed amount of electricity whether we use it or not, and those sites have been running on the same hardware ever since (Intel Xeon X5680s!). Neither of those things will be true going forward, so it’s the end of the road for those plans. We plan to temporarily allocate a bare minimum amount of hardware to those sites for a few months and then let affected people know that they’ll be migrated to current plans around the end of the year.

If you want to check this now:

  1. Go to the Site Information panel for your site.
  2. Find the “Billing Information” box.
  3. If there’s been a red-text message “($10.24/GB/Month – Legacy Billing!)” on the “Storage Class” line for the last ten years, you’re affected.

To change it, find the “Config Information” box and edit the Server Type. Pick the closest option. (If in doubt, “Apache 2.4, PHP, CGI.”)

Quoth the raven, “NVMe more!”

It’s something of a sore point that our file storage performance has always been a bit lackluster. That’s largely because of the tremendous overhead in ensuring your data is incredibly safe. Switching from SATA SSDs to NVMe will give a healthy boost in that area. The drives are much faster, and the electrical path between a site and its data will be shorter and faster. And it’ll give all those Epyc PCIe lanes something to do.

But there’s a little more to the story. To get adequate resiliency, sacrificing some performance is a necessary evil. It just flat-out takes longer to write to multiple SSDs in multiple physical servers and wait for confirmation than to YOLO your data into the write cache of a device plugged into the motherboard and hope for the best. We accept that. And we’ve always accepted that our less-than-stellar filesystem performance was the compromise we had to make to get the level of resiliency we wanted. However, we’ve always suspected we were giving up too much. It’s taken years, but we’ve finally confirmed that some weird firmware issues have created intermittent slowness above and beyond the necessary overhead.

So we expect our filesystem performance to be dramatically better after the upgrade. Don’t get me wrong; it won’t be a miracle. The fastest SAN in the world is still slower than the NVMe M.2 SSD on the average gaming PC (or cheap VPS). But one keeps multiple copies of your data live at all times and does streaming backups, and one doesn’t. And it should be a hell of a lot better than it has been.

Related to this, we’ve made some structural changes to site storage that will make moving them easier and faster. That has some other benefits we care a lot about that you probably don’t, like making storage accounting super fast. It should also make some other neat things possible. But we need to explore that a little more before we announce anything.

New York, New York!

Things have changed quite a bit since we started. As much as I love Phoenix, it’s not the Internet hub it was when I lived there in the 1990s. While some benefits remain, I no longer believe it’s the best place for our service. We see dumb stuff we can’t control, like Internet backbones routing traffic for the US east coast and Europe from Phoenix through Los Angeles because it’s cheaper. New York, on the other hand, is functionally the center of the Internet. (More specifically, the old Western Union building at 60 Hudson Street in Manhattan.)

It will surprise no one that Manhattan real estate is not exactly in our budget, but we got close. And, more importantly, we are parked directly on top of the fiber serving that building. It’d cost about ten times more to shave 0.1 milliseconds off our ping times.

This change will make life demonstrably better for most people visiting hosted sites; they’re in the eastern US and Europe. But we’re not leaving the west hanging out to dry. We can finally do what I always wanted: deploy our own CDN. After we’re finished, traffic for customer sites will be able to hit local servers in Phoenix, New York, and Boston. Those servers will transparently call back to the core for interactive stuff but can serve static content directly, much like our front-end servers do today. That’s already tested and working. You might be using it right now.

The new design is completely flexible. It doesn’t matter where your site is located; traffic enters our network at the closest point to the requestor, and then our system does the right thing to handle it with maximum efficiency.

It’s now technically possible for us to run your site’s PHP in New York, store your files in Boston, and have your MySQL database in Phoenix. But “could” doesn’t always mean “should.” We’re still constrained by the speed of light; a two-thousand-mile round trip on every database query would suck pretty hard. (But I’ve done it myself with the staging version of the member site. It works!) So everything’s going to New York for now.
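To put a number on that speed-of-light constraint: light in fiber propagates at roughly two-thirds of c, about 200 km per millisecond, so a two-thousand-mile round trip carries an irreducible delay on every query. A back-of-the-envelope sketch (illustrative arithmetic, not a measurement):

```python
# Back-of-the-envelope latency for a 2,000-mile round trip in fiber.
# Light in fiber propagates at roughly 2/3 c, about 200 km per millisecond.
miles_round_trip = 2000
km_round_trip = miles_round_trip * 1.609
fiber_km_per_ms = 200  # approximate propagation speed in fiber

latency_ms = km_round_trip / fiber_km_per_ms
print(f"~{latency_ms:.0f} ms of pure propagation delay per query")
```

At even a few dozen database queries per page load, that delay compounds into something visitors would definitely notice, which is why everything is co-locating for now.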

Keeping it weird

This change means we have to move all your data across the country. Sometime in the next few weeks, each site and MySQL process will be briefly placed in maintenance and migrated across our network from Phoenix to New York. For most sites, this should take less than a minute. We’ll start with static sites because they don’t have any external dependencies. Then we’ll move each member’s stuff all at once so we don’t put your MySQL processes and site software into a long-distance relationship for more than a few minutes. Once we have a specific schedule, we’ll attempt to make some information and, hopefully, some control available via the member UI to help you further minimize disruption. But our goal is that most people won’t even notice.

There may be some other weirdness during this period, like slowness on the ssh server, and you may actually have to start paying attention to what ssh hostname to use. All that will be sorted out by the time we’re done.

Some longtime members may recall the 2007 move where it took us over a day to move our service a few miles across town. At the time, we wrote, “Should we ever need to move facilities in the future, no matter how long it takes or how much it costs, we will just build out the new facility in its entirety, move all the services between the two live facilities, and then burn down the old one for the insurance money.” Oh my god, it took a long time and cost so much money, but that’s exactly what’s happening. (Sans burning down the facility! We love our Phoenix facility and hope to continue to have equipment there as long as Arizona remains capable of sustaining human life.)

Final thoughts

These changes represent an enormous investment. Thus, much like everyone else these past couple of years, we will have to pass along a huge price increase.

No, just kidding.

Our prices will stay exactly the same, at least for now. (Except for domain registration, where constant pricing fuckery at the registries and registrar remains the status quo. Sadly, there’s nothing we can do about that. Yet.) In fact, they might go down. We bill based on how much CPU time you use, and it’s likely to take less time to do the same amount of work on the new hardware.

The last few years have been pretty weird. COVID aside, NearlyFreeSpeech.NET has been keeping pretty quiet. There’s a reason for that. I’m proud of what NearlyFreeSpeech.NET is. But there’s a gap between what is and what I think should be. There always has been. And that gap is probably bigger than you think.

So I spent some time… OK, nearly three years… more or less preserving the status quo while I did a very deep dive to learn some things I felt I needed to know. And then, I spent a year paying off tech debt, like getting our UI code cleaned up and onto PHP 8 and setting up this move. So four years went by awfully fast with little visible change, all in pursuit of a long-term plan. And in a few weeks, we’ll be finished. With the foundation.

“It’s a bold strategy, Cotton. Let’s see if it pays off for ’em!”

Hey! What happened to 2023Q2?
Posted by jdw, 2023-07-28

You may have noticed that production sites with normal updates are being upgraded from 2022Q4 to 2023Q1, and non-production sites are being upgraded from 2023Q1 to 2023Q3. So what happened to 2023Q2?

Wrangling the amount of pre-built software we do is a constant challenge. Something is always changing. And changes frequently break stuff. Several things changed around the same time earlier this year, especially some stuff related to Python, the FreeBSD ports-building process, and other more niche languages that our members care about, like Haskell and Octave. Some of those had nasty interactions. We also have some other changes in the works that have impacted this. (It will be an Epyc change. More details coming soon!)

To make a long story short, we spent so long on the 2023Q2 quarterly software build that it was July, and we still had problems. We finally have a clean build that passes all of our hundreds of internal tests. But we also have the 2023Q3 quarterly build running just as smoothly. Since 2023Q2 won’t get any security updates through the FreeBSD ports team, having our non-production members test it doesn’t seem useful. And we’re sure not going to roll it out to production sites untested.

And so, we are skipping it. The default realm for production sites will be the (now very thoroughly tested) 2023Q1 realm. And the default realm for non-production sites will be the shiny new 2023Q3 realm. As always, we’ll backport security fixes as needed from 2023Q3 to 2023Q1.

No more PHP 7!

For those sites being upgraded from 2022Q4 to 2023Q1, it’s worth reiterating that PHP 7.4 was deprecated in 2021, and security support ended in November 2022. If your site still runs on PHP 7 eight months later, you’re in for a bad time. The PHP developers are ardent adherents of “move fast & break things,” and backward compatibility is the thing they break the most. Back in February, we posted information about this, including some advice for updating, in our forums.

NearlyFreeSpeech.NET turns 20 today
Posted by jdw, 2022-01-18

The NearlyFreeSpeech.NET domain was registered on January 18, 2002. We’re 20 years old today. Wow! So much has changed between then and now. And so much hasn’t.

Looking forward to the next 20!

Free Speech in 2021
Posted by jdw, 2021-01-19

So, a bunch of people suddenly discovered they care deeply about free speech immediately after a handful of racists faced even mild consequences for plotting a literal insurrection.

That does not reflect well on those people.

We’ve received quite a few emails (and signups) from them in the past week or so. They appear to believe that “free speech” means they can say whatever they want without repercussions. (It does not.) They expect us to agree with them about that. (We do not.) And they believe they’re entitled to our reassurance and, in some cases, assistance. (They are not.)

We have zero time and even less energy to waste on such nonsense. It is also difficult to express the full magnitude of our disinterest in passing some Internet Randolorian’s “free speech” litmus test. So we close all such inquiries without responding.

But I do want to make some things crystal clear.

First, we’ve been in the free speech business for nearly 20 years. We are experts at this. (We are capable of seeing through even sophisticated arguments like: “I said it. Therefore, it’s speech. Free speech is speech. Therefore what I said is free speech!”) So if getting your content online depends on your web host misunderstanding what free speech is, please save yourself some time; we’re not the right service for you.

Second, yes, hosting illegal content on our system will get you kicked off. You won’t get a refund. But it does not end there. There’s a school of thought that we can’t possibly be a “real” free speech host if we ever cooperate with the authorities. We didn’t go to that school. If you abuse our service to break the law, we will not only cooperate, we will turn you in ourselves.

When we cooperate with law enforcement, we do not do so blindly. We review their activities, both for abuse of power and to make sure proper processes are followed to protect our members’ rights. Such things are vanishingly rare, not (as they are sometimes depicted) the default. So if you’re expecting that we will automatically say, “Shove it, coppah!” anytime the police come calling about your site, we’re not the right service for you.

Finally, if you’re a racist, we’re not on your side. We are not your allies. We are not sympathizers. The “Free Speech” in our company name is not a secret dog whistle to you. We believe that America accomplished what it has despite the hatred and bigotry that has always plagued us, not because of it. We believe diversity is America’s spicy secret sauce, which we love. And we have no interest in living in a sea of mayonnaise. We do not want you to host your garbage here. We will not lift one finger to help you do that. We will kick you off the instant you give us a reason. We’re not the right service for you.

Maintenance for Christmas
Posted by jdw, 2019-12-24

Christmas Eve and Christmas Day are the lowest-usage days of the year (both in terms of member activity and in terms of visits to member sites), so we are going to roll out some core system upgrades over the next 36 hours. These updates relate mostly to file servers.

Despite having no single point of failure from the hardware perspective, each site’s content is still backed by a single system image (necessary for coherency), so these updates may cause some temporary disruptions to affected sites. We will do our best to minimize that.

We do also plan to upgrade our core database servers. These are fully redundant, so we do not anticipate disruption, but the possibility does exist. We hope this upgrade will resolve an issue that mainly manifests as intermittent errors in our member interface early in the morning (UTC) on Sundays.

Act now: The latest effort to censor you (FOSTA) is here!
Posted by jdw, 2018-02-28

The US House of Representatives has just passed a bill called FOSTA (the “Fight Online Sex Trafficking Act”). This bill is headed to the Senate. It needs to be stopped.

This bill is, as the name implies, ostensibly intended to fight sex trafficking. Sex trafficking is awful, and should be fought. But a lot of sex trafficking experts think that this bill won’t have that effect, and that it will actually make things much worse for sex workers. For example, the sex trafficking victims who are supposed to be protected may suddenly find it illegal to talk about their experiences. Whoops.

(Yes, that’s a Jezebel link. If they don’t match your politics, fair enough, try Reason. Pretty much nobody on any side thinks this is a good idea, except a handful of underinformed celebrities. This is not a right-left issue.)

That’s probably reason enough not to pass it, or at least to go back and take another look. But that’s not the end of the story.

An amendment slipped into the bill also proposes to override section 230 of the Communications Decency Act. Without overstating the case in any way, CDA 230 is the reason small companies like ours can exist. It protects us from liability for the actions and content of our customers. That means if you don’t like what one of our customers has to say, you can’t sue us about it. The First Amendment is great, and we love it, but in everyday practice, CDA 230 is what keeps rich people and companies from filing nuisance lawsuits to force us to either censor our customers at their behest or drown in legal fees. They know that, and they hate it.

As the EFF has pointed out, if this protection is weakened, pretty soon the small voices will be silenced. Not because what they have to say is illegal, but simply because it might be. Fear of liability will force providers like us to either moderate all the content that appears on our service — massively Orwellian and expensive — or simply proactively disallow anything that might possibly create liability. Or just shut down and leave the Internet to the likes of Facebook.

In that climate, the only people who will be able to have websites will be people who can afford teams of lawyers and people who only say things so boring that they don’t run any risk of creating liability. Remember when mass communication consisted of three broadcast TV channels and everything said on them had to be approved by the channel’s “Standards & Practices” department, which censored much more than any law required them to because that was cheaper than fighting? Do you miss those days?

If you’re not that worried about us, that’s fine. Here’s why you should still care. Does your website have a forum? Does your blog allow comments? Do you have a feedback form? A wiki? Could someone post spam anywhere on your site offering sex for money? If so, enjoy your ten years in Federal prison. (And yes, we’ve seen several cases where people engaging in illegal activity find unmonitored corners of sites that allow user-contributed content and use them to communicate. We act to shut that down when we find out about it, but we’re strongly against sending the operators of those sites — or us — to prison for “facilitating” those communications.)

This sort of crackdown on online communication has been attempted several times in the past, usually around intellectual property. (Remember SOPA, PIPA, etc.?) But intellectual property owners, despite being good lobbyists, aren’t very sympathetic public figures. Sex trafficking victims are.

That is definitely reason enough not to pass it. But that’s still not the end of the story.

If you live in the United States and you ever took even a high-school level civics class, you probably ran across the concept of an ex post facto law. This refers to a situation where, if I’m in government and you do something legal that I don’t like, I make a law against it, I make that law retroactive, and then I use it to prosecute you for what you already did. That’s not how law works, and it’s not allowed.

But FOSTA contains this little tidbit:

(b) EFFECTIVE DATE.—The amendments made by this section shall take effect on the date of the enactment of this Act, and the amendment made by subsection (a) shall apply regardless of whether the conduct alleged occurred, or is alleged to have occurred, before, on, or after such date of enactment.

Whoops. I guess Mrs. Mimi Walters of California (the author of the text above) skipped civics class. To be fair to Mrs. Walters, the US Constitution is very vague on this point, and the language is convoluted and hard to follow. (“No Bill of Attainder or ex post facto Law shall be passed.” – Article 1, Section 9)

That’s not the only problem, nor is it only my opinion. The US Department of Justice agrees, raising “serious constitutional concern” about the ex post facto nature of the law and stating that the law is broader than necessary (meaning it criminalizes not only more than it needs to, but also more than the authors think it does). They are also concerned that, despite making so much stuff illegal, this bill makes it harder to prosecute the actual sex traffickers.

When the Department of Justice tells you you’re making too much stuff illegal, obviously you take a step back and fix things… unless you’re the US House of Representatives. In that case, you pass it as-is, 388-25.

That’s right, the US House passed a bill that, our own liability concerns aside, makes it harder to prosecute sex traffickers, but criminalizes people speaking out against sex trafficking, including former victims. What the hell? Do sex traffickers suddenly have really good lobbyists?

The bill has now moved on to the US Senate. Internet superhero Senator Ron Wyden of Oregon is doing his best to save us all once again, as he has done so many times before. But he needs our help. If you’re in the US, please call or email your senators today and urge them to send FOSTA back to the drawing board in favor of something Constitutional, limited, and effective. FOSTA is none of those things.

Significant pricing updates are coming soon
Posted by jdw, 2017-09-25

I’ve started and stopped writing a bunch of posts over the past few weeks. Recent events have really crystallized some issues that we’ve been looking at for over a year. Those posts are largely ideological in nature, and they tend to ramble on at very great length.

This isn’t intended to be such a post.

This is a post to acknowledge that our service has a couple of serious issues that require more urgent attention.

  1. The cost and threat of DDOS attacks are escalating so quickly that unless we act, they will drive us out of business.
  2. It’s time for us to move toward ICANN accreditation.

Addressing these concerns will require some major shifts in our pricing model. To be completely honest and upfront, these changes will primarily affect those sites and members that are currently paying the least. After the changes, the prices at the low end will still be low, but not as low, and in some cases, lower-cost sites may have a different set of options than they have in the past.

Summary of changes

Here’s a short summary of the changes we are making:

  • We are eliminating the current static/dynamic site distinction in favor of three tiers (plan selection available on November 1st, existing sites with no plan billed as Non-Production as of November 1st, and as Production sites as of December 1st):
    • Non-production site ($0.01/day) – Static and dynamic functionality, limited realm selection, limited # per membership, might be automatically opted in to some betas, not for production usage or revenue-generating sites;
    • Production site ($0.05/day) – The “standard” option. Full set of realms, unlimited quantity, no automatic beta opt-ins, usable for production or revenue-generating sites; and
    • Critical site ($0.50/day) – An option unlike anything we’ve (publicly) offered before with 24×7 custom monitoring.
  • Bandwidth charges will largely be eliminated (not officially, but in practice, no one will pay them). (Effective November 1st for new sites, on or before December 1st for existing sites.)
  • Resource (CPU/RAM) charges will decrease over 60%, from 17.22 RAUS/penny to 44.64 RAUS/penny. (Effective November 1st for all sites that use resource billing.)
  • Domain registrations will increase by $1.00/year. (Effective October 1st.)
  • We will be adding a significant fee to certain violations of our Terms & Conditions of Service that are extremely rare but cause significant hassle for us. (Effective immediately, with a limited amnesty period until December 1st.)
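Since RAUS-per-penny is the inverse of price, the resource-charge bullet above can be sanity-checked in a couple of lines (illustrative arithmetic only, not our billing code):

```python
# Sanity-check the resource (CPU/RAM) pricing change described above.
# More RAUS per penny means each RAU costs less.
old_raus_per_penny = 17.22
new_raus_per_penny = 44.64

old_price_per_rau = 1 / old_raus_per_penny  # pennies per RAU
new_price_per_rau = 1 / new_raus_per_penny

decrease = 1 - new_price_per_rau / old_price_per_rau
print(f"per-RAU price drops {decrease:.1%}")
```

That works out to a drop of roughly 61%, consistent with the “over 60%” figure in the bullet.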

Below, we’ll discuss why we need these changes and then go through them in detail.

About DDOS attacks & bandwidth

Defending against DDOS attacks is very difficult and very expensive. The first key step to fight them off is being able to absorb the inbound traffic they generate. That takes massive amounts of bandwidth. And although already massive, the amount needed is constantly increasing. Right now, we have enough bandwidth to cope with about 70% of the attacks we encounter. That 70%, of course, starts from the bottom. And the success rate is falling. Larger attacks can knock our entire service offline, at least for a little while.

Unfortunately, this is an area where “pay for what you use” simply no longer works. We charge for bandwidth based on outbound traffic from member sites. But we pay for vastly more bandwidth than that to deal with attacks, and we need more still.

Worse, the equipment needed to mitigate DDOS attacks at wire speeds of 40Gbps and 100Gbps is pick-your-jaw-up expensive compared to 10Gbps gear (or, really, anything else we currently use), and it needs both costly maintenance agreements to stay up to date with evolving attack techniques and well-paid professionals to keep it all working.

So what we currently charge is not covering what we currently pay, much less the additional costs it would take to get the success rate up to 85-95% and keep it there as attacks increase in frequency and scale. That’s got to change. We need to increase our revenue, not just enough to cover our current costs, but to cover the additional costs of the bandwidth, equipment, and people necessary to make sure our service is secure, protected, and reliable, now and in the future. That entails charging everyone more.

Most people’s first reaction to that is, “Well, I’m not getting attacked, why should I have to pay more?” Here’s why: most of the sites affected by any given DDOS attack are not the site being targeted. That’s getting worse. We’re seeing more and more attacks on infrastructure, rather than hosting IPs. In such attacks, collateral damage is the whole point. Those typically affect everyone, and are often much harder to deal with. When your site is negatively affected by a DDOS attack on someone else, these are the resources we need to have even a chance of doing something about it.

Even when an attack targets a specific site we host, attacks are frequently non-specific enough that we never figure out which site it is. But in those cases where a specific site is being targeted and we know which site it is, does it matter? Sometimes it’s easy to argue that the site operator must have done something to provoke an attack. And sometimes that’s even true. But it’s frequently not. In the longest and most debilitating attack we’ve ever experienced where we identified the target, that target was a small site about a video game. Did that site “deserve it”? How about the website for the orphanage using the same IP address? Should we just wash our hands of it and tell that person, “Sorry, you have to go elsewhere, video games are way too controversial for us to handle”?

Even if we do know the target of the attack, and it’s not just “us,” it’s not like we can send that member a bill. I mean, we could, but nobody is going to pay a surprise $10,000 bill from a service that claims that you’ll never be on the hook for more than the amount of money you decide in advance to put in.

“Who should have to pay for protection from DDOS attacks?” is a trick question. No one should. Period. Because DDOS attacks shouldn’t happen. They definitely shouldn’t be several orders of magnitude easier and cheaper to launch than they are to defend against. But “should” and “is” are miles apart, and we have to take the world as it is, not as it should be. So who does have to pay for protection from DDOS attacks? Everyone.

Protection isn’t optional. There’s no point in offering our service if the minute anyone calls our bluff we have to fold. We’ll never be able to provide every site protection against every attack. There are other — much more expensive — services like Cloudflare’s $200/month “Business” plan that can help sites worried about the last few % of attacks we can’t hope to mitigate. But we can do a lot better than this, and we must do better than this. Free speech, it turns out, is heinously expensive, and the cost is rising rapidly.

Hence, we’re changing our pricing to reflect that all sites, regardless of size or activity, must contribute toward the collective cost of protection from attacks. That wasn’t an easy decision, but we believe it’s the only workable option in the current online climate.

New site types

Non-Production Sites ($0.01/day)

We know that some people who host with us really do need the absolute lowest possible price, and this is it. This is a plan designed to minimize the price by reducing access to some features that are expensive for us, and giving us some flexibility with respect to how we host these sites. As such, this offering represents a subsidy; it’s below our costs. That has three main consequences:

  • Non-Production Sites may not be used for production services, or for sites that generate any kind of revenue. If you’re making money on your site, we’re not willing to lose money to host it. Personal sites are OK. Beta sites are OK. Development sites are OK. Resume and portfolio sites are OK.
  • Non-Production Sites must constitute no more than half of the sites on your membership, rounded up. So if you’ve got one site, it can be non-production. If you’ve got more, you’ll have to maintain a mix. If your ratio falls out of balance (e.g. by deleting Production sites), we’ll apply an adjustment charge.
  • Non-Production Sites will help us improve our service. When we need sites to help test new features in the future, those sites will be automatically drawn as needed from the pool of Non-Production sites. They will also be limited to upcoming stable, beta and experimental realms to help us make sure third-party software updates are production-ready. (Realms are the huge collections of third-party software we provide preinstalled for all sites. Stable realms are rotated quarterly, and beta and experimental realms receive continuous rolling updates.)

If any of those limitations are undesirable for any reason, then a Production Site will be the more appropriate option.

We’ve found it’s faster, easier, and cheaper for us to develop and test new functionality when we have a robust base of real-world sites to test against. Our free bandwidth beta is a good example of such a test. That’s valuable to us, and that’s why we’re willing to offer this service at this price. We’ll make every effort to limit testing to sites that are compatible with whatever is being tested, and if the test breaks a site we’ll either opt it out if we detect that, or provide you a way to do so. And we’ll spread tests around to the best of our ability; it’s not our intent to draft every site into every test.

Whether or not a non-production site is participating in a test, we’ll still take feedback and problem reports about them, and we’ll still do everything we’re currently doing to keep them up and running.

Since this plan represents a subsidy, we have a certain amount of money in mind that we’re willing to “pay” our members for giving us the extra flexibility this plan provides. We’ll periodically evaluate the total number of non-production sites and may adjust the cost of this option if adoption rates are materially higher or lower than we expect. If needed, changes will be infrequent and announced in advance.

Each non-production site will also be able to use at least one gigabyte of bandwidth per day without being charged extra.

Production Sites ($0.05/day)

Production sites are the closest equivalent to current sites. As such, there’s not much to say about them. They will have full access to all features and realms, support static and dynamic content, and may be used for any production or revenue-generating purpose. There is no restriction on the number of them you can have. They may have some ability to opt into future tests, but that ability may be limited.

Although we won’t require it, we strongly recommend that if you are generating revenue from a production site, you should have a subscription membership. The cost is very small and if there is real money on the line then being able to ask for our insight and help when there’s a problem can be invaluable. It’s a good investment in your own success.

Each production site can use at least ten gigabytes of bandwidth each day without being charged extra.

Critical Sites ($0.50/day)

This is a new category of site available only to subscription members. It reflects a generalization of an informal arrangement we have with a few of our existing members who have sites that need special monitoring. This service is for sites that are particularly sensitive or important: not just sites for the business, but sites that are the business.

The higher price includes a custom entry for the site in our network monitoring system. That monitoring can be set to alert you, us, or both in the event of a disruption. (We do reserve the right to adjust how vigorously it notifies us based on the frequency of monitoring events that result from issues determined to be under your control.) In the event of a disruption to our service, we will use the additional monitoring to ensure that service is restored as soon as possible.

Also, with critical sites, if we detect an issue, we will be able to devote more effort to investigating what’s going on to see if there is any helpful information we can proactively provide to the site operator, or if there is anything we can do on our end to help restore proper operation.

This doesn’t change the fundamental do-it-yourself nature of our service, nor will it keep sites online if they are hacked or compromised. To give an example, if we detect a WordPress blog is compromised, we disable it and our system sends out an automated notification via email to the site operator to let them know to take care of it. If that site were a critical site, we would still have to disable it, but we would send out a manual notification instead, including whatever information we can find about what’s going on, and if it were agreed upon in advance, we’d be able to alert the site operator by SMS as well.

If your site is vital to your business or generates enough revenue that you feel that a little extra attention is worth it, this is the option for you.

Each critical site can use at least 100 gigabytes of bandwidth per day without being charged extra.

General notes on site types

All site types will have full access to static and dynamic content, daemons, scheduled tasks, TLS, the ssh command line, and many other crucial features for modern sites.

With respect to bandwidth usage, each site type specifies a daily amount that won’t be charged for, and that amount is a minimum (“at least”). Officially, the policy is that usage in excess of the stated amount will be billable at the flat price of $0.10/GiB. However, right now, we don’t actually plan to bother keeping track of that. The goal is to cover our bandwidth costs, which are driven by attacks, not regular usage. If we’ve already charged you for inbound bandwidth to protect from attacks as part of the base site cost, we shouldn’t need to charge you again for outbound bandwidth because it’s already been paid for. So as long as site charges cover the bandwidth costs, as we expect them to, we can hold off on charging for overages. Naturally, if a site starts to use so much bandwidth that it’s becoming problematic, we’ll deal with it, but for now I kinda want people to give it their best shot. 🙂
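To make the per-day math concrete, here’s a minimal sketch using the rates quoted in this post. (The function and names are purely illustrative, not actual billing code, and the overage line reflects the official policy rather than what we currently enforce.)

```python
# Illustrative only: site-type rates and included bandwidth from this post.
SITE_TYPES = {
    # type: (base cost per day in dollars, included bandwidth in GiB/day)
    "non-production": (0.01, 1),
    "production": (0.05, 10),
    "critical": (0.50, 100),
}
OVERAGE_PER_GIB = 0.10  # stated flat overage rate (not currently tracked)

def daily_cost(site_type: str, gib_used: float) -> float:
    """Base charge plus any bandwidth overage for one day."""
    base, included = SITE_TYPES[site_type]
    overage = max(0.0, gib_used - included) * OVERAGE_PER_GIB
    return round(base + overage, 2)
```

For example, a Production site that used 12 GiB in a day would, under the official policy, owe $0.05 plus $0.20 of overage.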

The other “catch” is that it will be possible to upgrade sites to a higher plan, but it will not be possible to downgrade to a lower plan. Once a site is upgraded, that upgrade is permanent. Likewise, the daily costs will continue to apply even if the site is disabled.

This change will become effective for new sites on November 1st. At that time, existing sites will remain unchanged, but will be charged as Non-Production Sites. The option to choose a plan will be available in the UI on that date.

Starting December 1st, all existing sites that have not chosen a plan will be treated and billed as Production sites, but they will still have the option to make an initial plan selection. (So if you don’t get to this before the deadline, you’ll still be able to choose Non-Production for existing sites.)

Resource cost reductions

CPU and RAM resource charges for all site types will also decrease dramatically, from 17.22 RAUs per penny to 44.64 RAUs per penny. That’s a decrease of over 60% and will apply to both MySQL and sites. It should help make hosting larger, more active sites more affordable.
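If you want to check the math on that “over 60%” figure, the cost per RAU falls by the ratio of the old rate to the new one:

```python
# RAUs per penny rise from 17.22 to 44.64, so cost per RAU falls
# by 1 - (17.22 / 44.64).
old_raus_per_penny = 17.22
new_raus_per_penny = 44.64
decrease = 1 - old_raus_per_penny / new_raus_per_penny
print(f"{decrease:.1%}")  # about 61.4%
```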

The pricing for storage resources will remain unchanged for now. The reliability of our current storage has been very good, but the cost (to us) is really high in terms of both TiB/$ and IOPS/$, and that shows through in the pricing (to you). We want to bring this down, and if we can stop spending every penny we have on bandwidth, I believe some more R&D in this area will really pay off.

Domain Registration

Since 2006, we’ve used the services of a third-party company to offer domain registration. That worked out really well for many years. Then, in 2014, that company was acquired by another, much larger, company. Its new parent owns a large number of other hosting companies that compete directly with us.

Initially, things were fine. But over the past twelve months or so, there have been a few incidents where our customers didn’t get the quality of service we expect. We’re concerned that it will be increasingly problematic to outsource such a critical function to a company that is essentially owned by the competition. Not to mention that our lack of control over things that are very frustrating to us and our members, like “monetization*” of expired domains, is getting pretty old.

Unfortunately, the other wholesale options in the industry really aren’t any better. To make a material difference, we need an ICANN-accredited registrar. That’s a complex and expensive undertaking. To that end, we’ll need to raise prices on domain registration, first to fund the accreditation effort, and then to meet the additional ongoing requirements ICANN imposes on accredited registrars in a way that will provide a quality of service that both we and our members will be satisfied with.

The fixed costs of operating a registrar are particularly burdensome for smaller companies like ours that can’t amortize those costs over millions of names, especially if we don’t want to get any “monetization*” on us. Accordingly, we will raise the price of all domain registrations by $1.00/year on October 1st and use that money to pursue domain registration independence.

*”Monetization” is a technical term used by the domain registration industry that means “we’ll do anything, however scummy, if there’s money in it.” It’s never a good thing and it has to be handled with scare quotes because if you get any on you, the smell never washes away.

Enforcing our Terms & Conditions of Service

Our Terms & Conditions of Service are some of the shortest and easiest to comply with in the industry. Over 99% of our members are happy to do so. But the ones who don’t are getting to be a real source of hassle and frustration for us. People don’t follow our policies, they create a jam for themselves, and then they invariably expect us to fix that jam for them. And we have enough members that even “less than 1%” is enough people to cause a steady stream of headaches.

Unlike, say, DDOS attacks, these are avoidable problems that people absolutely did bring on themselves, and for which they alone should bear the costs and consequences. That’s not currently the case. You’re paying for their mistakes.

Effective immediately, if we detect the following situations, we will apply a $50.00 fee (per incident) to cover the cost of having our staff clean up the mess and advise the people involved on how our service works and what they should be doing instead:

  • One person who has multiple memberships. We will consolidate the multiple memberships into one membership with multiple accounts. (Definitely do not create multiple memberships trying to get around the per-membership limit on non-production sites!)
  • Multiple people sharing access to a membership. We will reset the password, get ID and a promise not to do it again from the named member, and provide references to documentation on how to use our powerful account sharing features.
  • Memberships that have been transferred from one person to another. We will require the recipient to transfer the membership back to the named member or, if we are unable to establish who that is, have the person create a membership in their own name, pay the $50 fee, and then we will transfer any accounts on the invalidated membership to the new person.

Pretty much, unless you’ve done something it says in bold print not to do on our signup page, you should never see this fee. But we’ve got to get the people who think they are exempt from what few rules we have to either straighten up and fly right or find some other host. Really, we’re fine with either choice.

If you are currently doing one of these things and we haven’t caught you yet, you will have until December 1st to take steps on your own to resolve the issue. We will waive the $50.00 fee if we learn about a violation from your good-faith effort to correct it. (E.g. if you request a transfer to consolidate your two memberships before we find out about it, we will process the request without applying the fee if it is submitted before December 1st.) If you need our help to resolve the issue before then, you’ll need a subscription membership, which is a much better deal than this fee.

Final thoughts

It’s very likely that, as a result of these changes, members paying more than a few dollars a month will wind up paying less, possibly substantially less. They’ll probably be very happy. But there aren’t very many of those.

There are others who will be able to make a few changes to consolidate usage and keep their costs about the same. Our per-alias site root feature may be particularly helpful for people with large numbers of static sites.

But, taken on a purely percentage basis, these changes are a large increase for many people. In many cases, that increase will be from pennies per year to dollars per year. Some of them will see the value in taking these steps and will be happy to pay more to support them. Some people… won’t.

For those who need it, it’ll still be possible to host a site with us for well under a dollar a month, and while that option is not without tradeoffs, that site will in many cases be able to do more than it could in the past.

For those who can afford it, and those who want us to support their business, we’re asking them to do a little more to help support our business in return.

A fair amount of this change is motivated by the inescapable truth that offering service at the price we’re offering it increasingly requires making compromises on the quality and future viability of our service. We’re not willing to do that. I have always believed that we are about offering a good value rather than the lowest price, and these changes reflect that goal.

For people who would prefer to accept those compromises to get the absolute rock-bottom price, there are and always will be providers who are willing to make them. But we are already farther down that road than I am comfortable with. I don’t like it here, and I don’t like what I can see ahead. We are changing course. We hope you’ll follow the new path with us, but if not, we’ll definitely understand.

Unlimited free bandwidth!* (*Some limitations apply.)
Posted 2016-02-19 by jdw

We’ve been hard at work behind the scenes developing the next generation of our core hosting technology, and we’re ready to move it to public testing. It has some exciting new features:

  • TLS enhancements
  • HTTP/2 support
  • Automatic gzip compression
  • Major Access Control List (ACL) improvements
  • Shared IP blacklist support
  • Websockets support
  • Wildcard alias support

To encourage people to help us test out the new stuff, we’re exempting participating HTTP requests from bandwidth charges for the duration of the test. You can opt in to the test for a particular site by selecting the “Use Free Beta Bandwidth” action on the Site Information panel for that site in our member interface. That page has all the fine print about the test, which mostly covers two central points:

  • Reminding people that it is a test and things might not work.
  • Clarifying that although there is no fixed limit to the amount of bandwidth a site can use under this test, there is a “floating” limit: don’t cause problems.

This test (and the free bandwidth) will run through at least March 15th, 2016.

Below, we’ll also discuss each new feature briefly.

TLS enhancements

The major enhancement to TLS (transport layer security, the technology that makes http:// URLs into https:// URLs) has to do with scalability. As people may know, we currently use Apache as a front-end TLS processor. As a test, we generated test keys and certificates for every site we host and loaded them all into a single Apache config just to see what would happen. The resulting server process took nine minutes to start and consumed over 32 GiB of RAM before answering its first request. That’s… not going to work. So we’ve written a great deal of custom code to solve that problem.

We’ve also always been worried that the overhead of TLS would require us to charge more for using it. One side effect of this work is that we’ve reduced the fixed resources required to support TLS so much that we can now definitively say that won’t be an issue.

The new system also improves the performance of TLS requests, and we made a couple of other changes we were able to backport to the existing setup. First, we’ve eliminated TLS as a single point of failure. Second, due to our use of Apache as a TLS frontend, the last hop of an HTTPS request has been handled as unsecured HTTP on our local LAN. Although the probability of anyone monitoring our local LAN without our knowledge is pretty small, in a post-Snowden world one has to acknowledge that taking reasonable precautions against improbable things isn’t as paranoid as it used to be. So last-hop HTTP traffic (all last-hop HTTP traffic, not just HTTPS) is now secured with IPSEC while it is on our LAN.

We’ll have more to say about TLS in the near future.

HTTP/2

RFC 2616 established HTTP/1.1 way back in 1999, and it took many years to be properly adopted. Since then, there have been many attempts to improve on it, like SPDY. In the end, RFC 7540 laid out HTTP/2 as the official successor, bringing many of the advantages of SPDY and similar protocols, and a lot of combined wisdom and experience, to the new protocol.

Our beta service supports HTTP/2 right now.

To take advantage of it with a web browser, you need TLS configured. HTTP/2 can work over unencrypted connections, and although we do support that, no browser does. Encryption is intended to be the default for the future of web browsing.

Automatic gzip compression

Contrary to popular belief, we’ve supported gzip encoding for a long time. The problem historically was that getting there was a bit too tedious for most people. Delivering gzip-encoded static content requires maintaining two copies (regular and compressed) and twiddling around in .htaccess. Dynamic content is much easier; we’ve actually enabled gzip encoding for PHP by default since PHP 5.5. But still, the word on the street is that we don’t have it, because when people think compression, they think mod_deflate.

We’ve never supported mod_deflate because it’s one of those solutions that is simultaneously easy and terrible. With mod_deflate, if someone requests a piece of static content and says they support gzip encoding, the server compresses the content and sends it to them. If another person requests the same content and says they support gzip encoding, the server compresses the same content again the same way, and sends it to them. Over and over, performing the same compression on the same input every time, wasting lots of resources and hurting the throughput of the server. (In testing, we found it was not unusual for requests handled this way to take longer than if no compression was used, even though the overall size is smaller.) Easy. And terrible.

Our beta service is capable of fully automatic gzip encoding of any compressible content. If someone requests a piece of static content and says they support gzip encoding, our system compresses the content and sends it to them. And then it stuffs it in a cache, so when the next person requests the same content with compression, it’s already ready to go.
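The compress-once, serve-many idea boils down to something like this sketch (purely illustrative, not our actual server code; a real cache would also key on content version and bound its size):

```python
import gzip

# First request for a piece of static content pays the compression cost;
# later requests that accept gzip are served straight from the cache.
_gzip_cache: dict = {}

def respond(path: str, body: bytes, accepts_gzip: bool) -> bytes:
    if not accepts_gzip:
        return body                       # plain content, untouched
    if path not in _gzip_cache:           # compress only on first request
        _gzip_cache[path] = gzip.compress(body)
    return _gzip_cache[path]              # every later hit is free
```

Compare that with the mod_deflate approach described above, which would call gzip.compress on every single request.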

Major Access Control List (ACL) improvements

ACLs (currently called IP access control in our UI) are how you decide who is or isn’t allowed on your site. People use them to block spammers and bandwidth leeches, or to limit access to their home network while a site is being developed.

First and foremost, the performance of ACLs has been dramatically improved in the new software. We greatly underestimated the degree to which some people would get carried away with ACLs. The site on our network with the largest ACL currently has over 4000 entries. That takes a lot of processing and really slows down access to that site. We could argue that such a large ACL is fundamentally unreasonable and that if using it has a performance impact, so be it. Or we could make the new system capable of processing an incoming request against that site’s ACL in 3 microseconds. We chose the latter.

At the same time, we’ve also dramatically expanded what can be included in an ACL. It’s now possible to filter inbound requests based not only on IP address (now including both IPv4 and IPv6) but also on protocol (http or https), request method (GET, POST, etc.), and URL prefix. So, as a purely hypothetical example that I’m sure won’t be of any practical interest, an ACL can now be used to block POST requests to a WordPress blog’s login script unless they originate from specific IPs you know are OK, without interfering with public access to the rest of the site.
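Conceptually, expanded ACL matching looks something like the sketch below. The rule format here is invented for illustration (it is not our actual ACL syntax): each rule may constrain network, protocol, method, and URL prefix, empty fields match anything, and the first matching rule decides.

```python
import ipaddress

def acl_decision(rules, ip, protocol, method, url):
    """Return the action of the first rule matching this request."""
    addr = ipaddress.ip_address(ip)   # handles both IPv4 and IPv6
    for rule in rules:
        if rule.get("net") and addr not in ipaddress.ip_network(rule["net"]):
            continue
        if rule.get("protocol") and protocol != rule["protocol"]:
            continue
        if rule.get("method") and method != rule["method"]:
            continue
        if rule.get("prefix") and not url.startswith(rule["prefix"]):
            continue
        return rule["action"]
    return "allow"  # no rule matched; default open

# The WordPress example from the text: allow POSTs to the login script
# only from a known-good network, deny them from everywhere else.
rules = [
    {"net": "203.0.113.0/24", "method": "POST", "prefix": "/wp-login.php",
     "action": "allow"},
    {"method": "POST", "prefix": "/wp-login.php", "action": "deny"},
]
```

GET requests to the rest of the site fall through both rules and are allowed, so public access is unaffected.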

Shared IP blacklists

We’ve also added the ability to filter incoming requests against a sort of giant shared ACL: a list of IPs flagged for bad behavior.

We haven’t turned this on yet, because we’d really like to include Project Honeypot’s http:bl in the list, but we’d need their cooperation to set that up, and they haven’t gotten back to us yet.

We can’t guarantee this will be effective; attacks tend to adapt, and some botnets are huge. But we’re committed to finding new and better ways to keep our members’ sites safe.

Regardless of how the details shake out, this feature will be opt-in. At some point in the distant future, well after this test is over, if the shared list works really well and causes few problems, we may eventually make it the default for new sites. We’ll wait a long while on that, and then make the right decision at that time.

Websockets support

Websockets are a way to convert a web request into an efficient bidirectional pipe between a web browser and a server. They’re super handy for high-performance and interactive apps. They were very high on the list of things there was absolutely no way our infrastructure could ever support. Yesterday.

When things settle down, we’ll try to do a brief tutorial showing how to use them.

Wildcard aliases

Wildcard aliases refer to the ability to add an alias like *.example.org to your site and have all traffic for whatever name people enter (e.g. www.example.org, an.example.org, another.example.org, whatever.example.org, perfect.example.org, etc.) wind up on that site.

We’ve never supported wildcard aliases because they’re not super-common (in most cases, example.org/perfect is just as good as perfect.example.org) and because our existing system uses a hash table to speed up alias lookups; you can’t hash wildcards. The new system removes this limitation without sacrificing performance. We still don’t recommend using them unless you have a specific need, but there are a couple of use cases where there’s just no substitute. (One site which is perhaps not surprisingly no longer hosted here had 6000 aliases at the time it was deleted. That same site today could have gotten by with one wildcard alias.)

The “specific beats general” rule applies to wildcard aliases. If the site example has the alias www.example.org and the site wild-example has the alias *.example.org, requests for www.example.org will go to example, not wild-example.
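The “specific beats general” rule can be pictured like this (an invented sketch, not our actual resolver): exact aliases stay in a hash table for fast lookup, and wildcards are consulted only when no exact alias matches.

```python
# Exact aliases keep their O(1) hash lookup; wildcards are the fallback.
exact = {"www.example.org": "example"}
wildcards = {"*.example.org": "wild-example"}

def site_for(host):
    if host in exact:                   # specific wins
        return exact[host]
    # strip the leftmost label and try a wildcard match
    _, _, parent = host.partition(".")
    return wildcards.get("*." + parent)
```

So www.example.org resolves to the site example, while perfect.example.org falls through to wild-example.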

A caveat

Although these features now exist in the beta service, most of them aren’t reflected in the UI yet (where applicable). It seemed cruel to provide an interface to set up cool functionality that wasn’t actually available. 🙂

Now that it is, we’ll be rectifying that over the coming weeks as we refine and troubleshoot everything. In the meantime, if you want early access to one of the features listed here that requires custom configuration and you’re a subscription member, just drop us a line through our site and we’ll see what we can do.

Last words

This is at once very exciting and very daunting. The software being replaced is 15 years old and showing its age; the new features we’re bringing out are fantastic (and, in some cases, long overdue) and we couldn’t have done them with the old architecture. But on the other hand, the old software is a legitimate tough guy. It’s handled tens of billions of web requests. It built our business from nothing. We know exactly what it does under a dozen different types of DDOS attack. And here we are, replacing it.

There is absolutely, positively no way the new software is as bug-free or battle-tested as the old stuff. The latest bug logged against the existing software was a memory leak in 2009. The latest bug against the new software was fixed less than 24 hours ago. There will be problems. (Which we’ll fix.) Then there will be more problems. (Which we’ll fix.) It will inevitably crash at the worst possible time at least once. (Which we’ll fix.) And, there will no doubt be something obscure that works great on the current system but which doesn’t work on the new one that we won’t be able to fix. (But not to worry, we’ll be keeping the old one around for quite awhile.)

So this is a daunting move for us, but we’ve never made decisions based on fear and we’re not going to start now. So it’s time to push this technology out of the lab and onto the street so it can get started on its five hundred fights.

Please help us out and opt as many sites as you can into the beta, so we can test against the broadest possible cross-section of traffic and types of site content. Every little bit helps!

Thanks for your time, help, and support!
