Is NearlyFreeSpeech.NET anti-AI?

I read recently that we were.
Are we?
Hmm. Not… exactly? A bit, I guess? But not. It’s complicated.
It’s certainly understandable why it might come across that way. We are, rather notoriously, anti-stupidity. And we’re anti-exploitation. And we’re not super fond of bullshit. LLMs, the specific type of generative AI most prominent in the public consciousness, do seem to be a fascinatingly complex mechanism powered by money and energy (and water, apparently?) that transforms stupidity and exploitation into previously-unimaginable quantities of bullshit. There’s a lot of overlap, is what I’m saying.
We are strongly against AI bots scraping our members’ sites. But that’s less because it’s got anything to do with AI and more because the bots are incredibly stupid and it’s exploitative. They’re also, regrettably, very difficult to stop. But we’re doing our best.
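For the curious, the standard first line of defense is a robots.txt group naming the self-identifying AI crawlers. This is a sketch, not our actual configuration: the user-agent strings below are ones the respective companies have published, and the whole problem is that the worst-behaved bots ignore this file entirely.

```
# robots.txt - politely asks the self-identifying AI crawlers to stay out.
# Voluntary; only bots that honor the Robots Exclusion Protocol comply.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
User-agent: Google-Extended
User-agent: Bytespider
Disallow: /
```

Grouping multiple User-agent lines over a single Disallow rule is valid per the Robots Exclusion Protocol; anything that doesn't respect it has to be handled server-side instead.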
We’re also pretty concerned about the AI surveillance state represented by companies like Palantir and Flock Safety. LLMs get all the press, but (to shamelessly steal a wonderful idiom in their honor) these guys seem like the ones who have dedicated their lives to inventing the Torment Nexus from the sci-fi classic “Don’t Invent the Torment Nexus.” To them, Cyberpunk 2077’s Night City isn’t dystopian, it’s aspirational.
Deepfakes, of course, are pretty terrible. GenAI is for the people who looked at the Internet and said, “You know what this needs? More nonconsensual pornography!”
Voice-cloning is also pretty scary. Or, at least, it’s scary that it’s trivially easy. How far are we from the Fisher-Price Forge n’ Fraud play set?
These are all things that I’ve tried. I want to keep an open mind. And I want to understand what it’s capable of for myself. As hard as it would be to justify the wholesale theft of words, images, videos and likenesses that these models wouldn’t exist without, it’s like, OK, hit me. Show me why it’s all worth it. Show me what it can do!
So I set my home security system up to tell me when my dog is in the backyard, though sometimes it thinks she’s a cat. Or a squirrel. I’ve let coding agents write unit tests for me, which were actually mostly decent, except for that time an incorrect test failed so the agent quietly changed the code to also be wrong so the test would pass. I even vibe-coded a web frontend for a tool I use every day. I will never understand that code (partly because it’s terrible and partly because it’s all in JavaScript), which seems like a problem, but it technically works. I’ve messed around with image generation, though I managed to do it without violating anyone. It seems like it only “works” if you give it vague instructions and aren’t invested in obtaining a specific result. I even cloned a voice, though I chose one that was already synthetic. (Ada from Satisfactory, if you must know.)
And, yes, I’ve tried chatbots, both local and the big names. I even found a case where they’re helpful! Say you’re writing a story or trying to plan something out. You get stuck, you give the AI some background and ask it what to do. And what it tells you is dumb and wrong. And then you tell it, “No, that’s dumb and wrong, because (reason).” Repeat a few times, and pay attention to the (reason)s. A lot of the time, you’ll wind up articulating important details about the situation that you didn’t know you knew. This only works with LLMs because, I find, human beings get peevish if you ask them for ideas and then repeatedly tell them why all of their suggestions are dumb and wrong.
I don’t know. The promise of AI seems great. But the current reality is an AI slop, revenge porn, security nightmare, technofascist horror show built by sacrificing every fixed representation of human creativity on the altar of “number go up.” All for what? The most expensive mediocrity the world has ever known?
I think if AI were in the hands of, say, our members, that maybe we’d get more of the promise and less of… that. It is really tempting to try it, because I trust our members and the world they would build vastly more than any of the billionaires who seem half a cackle from a volcano lair. But I went to a performance at a local university’s theater last week. At the beginning, they played a recording about how important it is to acknowledge that the university is built on stolen land. (Feel however you want to feel about that.) If we did try to take some of the more open tools and make them available, knowing how they were trained, damn, but that feels a whole lot like building on stolen land. And I’m not sure a recording would make me OK with it.
Also, as a more practical matter, I’m not thrilled that 1TB of ECC DDR5-6400 RAM costs over $30,000 right now. Guess we’re going to be in the group figuring out how to do more with less while our techno-overlords keep working on adding erotic mode to ChatGPT.
So, yeah. Is NearlyFreeSpeech.NET anti-AI? Honestly, I don’t know where people get these wild ideas! 🙄
