

Lemmy.ML is full of ML. So go get one.
Ah, I see the issue. This might help.
I don’t use Lemmy outside of Beehaw. “ML” doesn’t mean that instance to me. I don’t care for that instance.


Alright, let’s take this seriously for a sec: You want me to go dig up someone who doesn’t know or care about any of what’s going on in this thread, have them sign up for Lemmy and post here, all so they can stand there in front of you? Listen to yourself. You’re talking about people like they’re things I can just pick up and dangle in front of you like car keys.
I’m not here to arm-wrestle you. You disagree. I get it. We can move on now.


For the record, I’m talking about people holding the ideology that instance was named after, not the instance itself. I don’t peruse Lemmy outside of Beehaw, so I don’t know enough about the instance to comment on it. (That said, I’ll take your word for it, considering users from there have attacked me for absurd reasons before.)


If you don’t want to believe me, just say that and leave the snark behind. This isn’t Reddit.


There are no doubt countless programs to scan QR codes on a desktop computer, and I know similar apps exist for phones. A camera is not needed.
At the same time, though, that raises the question of what, exactly, is going to prevent an AI from doing the same goddamn thing? So it’s still shit.
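To make that concrete, here’s a minimal sketch of decoding one on a desktop, assuming OpenCV (opencv-python) is installed; the filename is just a placeholder, and plenty of other libraries would work too:

```python
# Minimal desktop QR decoding sketch using OpenCV's built-in detector.
# "code.png" is a placeholder for whatever image holds the QR code.
import cv2

img = cv2.imread("code.png")
if img is None:
    raise SystemExit("couldn't read the image")

text, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
if points is not None and text:
    print("decoded:", text)
else:
    print("no QR code found")
```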
I’m going to assume your link is coming to a good conclusion. I find the idea that cattle farming produces a lot of greenhouse gases to be very believable, and so I will take that as a given. But even with that in mind, the argument doesn’t hold.
First, people can be mad about two things at once. We don’t have to pick between being upset about one contributor to climate change versus another; we can just be upset at both. Besides, I think it’s safe to say that cattle farming is a better use of resources than AI is. Like yeah, sure, I think it has some serious excesses. There are animal welfare issues, the aforementioned climate problems, and just the general problems of rampant and negligent industrialization writ large. But even after all that, it’s still feeding people. AI doesn’t have that silver lining,[1] so the comparison is unfair as well as unnecessary.
As for the IP argument, no, I didn’t shoot my own argument down. Please do not mistake my good-faith self-examination for a failure. Like I said, it’s still perfectly viable to hate AI for that reason, and I explained why — just because there are better reasons doesn’t make that one invalid. I have no idea why you’d think AI companies aren’t still training on small creators’ works en masse, though.[2] To me, that’s wrong at face value, but to explain:
Training is one of the biggest things that AI companies are constantly pushing for, because they believe that’s the primary vector by which the technology has (allegedly) improved. It’s one of the biggest sources of the environmental problem. And even if that wasn’t among their top priorities, why would they stop? Scraping is cheap. Several of them committed massive acts of literally-illegal piracy to do it. They’re clearly willing to jump hurdles for even a theoretical benefit, so why quit? Why ever quit?
With regards to your anger: Alright, yeah, I understand that. I disagree, for a variety of reasons that are probably obvious by now. To me, you’ve either been misled, or – knowing how AI sometimes affects people – you may have used AI yourself and become somewhat dependent on it. I dunno. But I’ve been mad about stuff before and said rude shit because of it, so I can relate.
I think the helpful thing to be reminded of in this context, then, is that if you want to convince people, this can’t be how you try. People do not take well to “telling it how it is,” or any other form of tough-love-style argumentation. They get defensive. It’s completely counterproductive and only helps to alienate people from you. Which is a pain in the ass, I know; slowing down to say something kinder has huge friction, while venting what you actually feel is satisfying. But unless venting is the goal, you want the former. Gentle words and impersonal, non-accusatory language can go a long way; even if people get mad at you for that, they’re still more likely to introspect after.
I’m sure you disagree about this, but debating the utility of AI would be a topic unto itself, so I’m leaving it out for now. ↩︎
Though I’m not sure you actually do believe that! I mean, you’re saying “just pull live from the web for specifics now,” and… what do you think I’m talking about, if not that? What’s “clean data” to you? Comments like these, where we never consented? That’s not clean to me at all. ↩︎
So, we’re starting with this:

> When I see people begin their Anti-AI arguments with “it’s bad for the environment” I tune out completely. These motherfuckers have been driving gasoline powered vehicles around for decades, and are totally fine with natural gas fired power plants.

…and a complete assumption about the author’s opinions. One that is in direct contradiction to what they’ve said in the article. I shouldn’t need to elaborate on why this is a bad start.
Then you discount IP theft as a concept, when caring for creators’ works (and encouraging more) is what IP was invented for. And yeah, it’s grown massively out of control. There’s a reason Cory Doctorow and many others have suggested that concern for copyright is the wrong reason to hate AI. But if you ask me, you can still hate AI for that when it comes to small creators, who cannot meaningfully weaponize the broken aspects of it. And those creators are precisely who AI companies disproportionately steal from.
Lastly, you end your comment the same way you started it, only now it’s even more like the meme. The entire post is about how they quit their job because they now felt staying was unsound from both ethical and practical perspectives. That is a direct example of them following their morals.
I believe the arguments you make here are bad, but the condescension dripping off your post – especially when you’re attacking the author for hypocrisies that aren’t even real – is much worse. That’s Reddit behavior, and it’s not helping anyone.


That’s very understandable. While I think we disagree on the utility of AI (since I feel that it is more harmful than it is useful, and am unsure how much that would change post-bubble), I do agree that this is a likely path for the gov’t to take, one that would leave the most serious things completely unaddressed while also clamping down on some things that shouldn’t be touched to begin with. Heck, in many regards, you could say the GUARD Act is this problem in motion.
For me, I guess, the bubble and its effects on us are just so ridiculous and exhausting at this point that it’s hard for me to worry about things like this. Though I do vehemently hate government use of AI especially; using it at all is a problem in my mind, but using it specifically to deliberately hurt people is reprehensibly disgusting.


> AI-driven
I look forward to the future, where we may see entire buildings aflame because someone thought a guesswork machine was the best thing to rely on for fire safety.
(Also, I don’t know if it’d apply here, but there are some known problems with infrasound.)


I think it’s fine if people are mad at both. By all means, encourage people to be angry at the responsible companies. But you don’t gotta defend the tech to do that.
Besides, as far as I’m concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners. Is it a permanent solution? Obviously not, no — you’re very correct that the groups and people hard-pushing AI are much more important targets for ire. But two pressures are better than one.


“You may flame me now, for I am full of love”
So, I was gonna disagree for various reasons, but this suggests you are posting specifically to incite arguments. Are you? Because that’s not what good faith looks like.


I’m aware. I think the primary difference between this bill and that general age-gating push is that AI itself does cause very real harm. To everyone, really. I’m not sure I’d even say children are particularly vulnerable.
Regardless, I came to the conclusion that the bill isn’t worth it as-is in my newer analysis post.


Okay, so I’ve read the full bill now, and I gotta say I don’t feel as conflicted about this anymore. The EFF’s article looks like it has a lot of bad takes in it now; my (still not insignificant) doubts about this bill come instead from the fact that I’m not a lawyer and thus can’t fully foresee its consequences, and the fact that a decent bill can still be implemented horribly by idiotic companies.
(I wrote so much here I ended up needing to break out the header markdown. Apologies in advance!)
I don’t think the bill’s definition of chatbots is actually bad at all. Quoting directly:
> (2) ARTIFICIAL INTELLIGENCE CHATBOT.—The term “artificial intelligence chatbot”—
> (A) means any interactive computer service or software application that—
> (i) produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and
> (ii) accepts open-ended natural-language or multimodal user input and produces adaptive or context-responsive output; and
> (B) does not include an interactive computer service or software application—
> (i) the responses of which are limited to contextualized replies; and
> (ii) that is unable to respond on a range of topics outside of a narrow specified purpose
Notice the frequent use of the word “and” here, rather than “or.” Do I think there are no possible holes in this? No. And again, I’m no lawyer. But my main concern here would be restricting programs that aren’t LLMs, and this seems to do a good job of avoiding that.[1] The EFF is concerned this would restrict people from, say, cheating on homework. It would. I don’t care about that and I don’t think they should either, for reasons addressed in my comment above.
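To make the and/or point concrete, here’s a toy restatement of definition (2) as a predicate; the parameter names are my own shorthand, not the bill’s language:

```python
# Toy restatement of the bill's definition (2), just to show how the
# "and"s narrow it. Parameter names are my own shorthand, not legal terms.
def is_ai_chatbot(produces_novel_output: bool,
                  accepts_open_ended_input: bool,
                  limited_to_contextualized_replies: bool,
                  confined_to_narrow_purpose: bool) -> bool:
    covered = produces_novel_output and accepts_open_ended_input  # (A)(i) and (A)(ii)
    excluded = (limited_to_contextualized_replies
                and confined_to_narrow_purpose)                   # (B)(i) and (B)(ii)
    return covered and not excluded

# A scripted order-status bot: no novel output, narrow purpose -> not covered.
assert not is_ai_chatbot(False, False, True, True)
# A general-purpose LLM assistant -> covered.
assert is_ai_chatbot(True, True, False, False)
```

Swap any of those ands for ors and the net gets much wider; that’s the kind of hole I’d be watching for, and I’m not seeing it here.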
It’s not as bad as it sounded to me, but it’s still not acceptable. Quoting again:
> (5) REASONABLE AGE VERIFICATION MEASURE.—The term “reasonable age verification measure” means a method that is authenticated to relate to a user of an artificial intelligence chatbot, such as—
> (A) a government-issued identification; or
> (B) any other commercially reasonable method that can reliably and accurately—
> (i) determine whether a user is an adult; and
> (ii) prevent access by minors to AI companions, as required by section 6.
>
> (6) REASONABLE AGE VERIFICATION PROCESS.—The term “reasonable age verification process” means an age verification process employed by a covered entity that—
> (A) uses one or more reasonable age verification measures in order to verify the age of a user of an artificial intelligence chatbot owned, operated, or otherwise made available by the covered entity;
> (B) provides that requiring a user to confirm that the user is not a minor, or to insert the user’s birth date, is not sufficient to constitute a reasonable age verification measure;
> (C) ensures that each user is subjected to each reasonable age verification measure used by the covered entity as part of the age verification process; and
> (D) does not base verification of a user’s age on factors such as whether the user shares an Internet Protocol address, hardware identifier, or other technical indicator with another user determined to not be a minor.
The reason I say this is “not as bad as it sounded” is primarily because it’s open-ended.[2] An actually acceptable, privacy-preserving age verification method would be legal here and is not actively prevented. But that’s about all the faith I can muster for it. This law could be good if we had age-gating tech that could actually be trusted, and if it passes, it might even become good should we ever develop such a thing.
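Just to gesture at what “such a thing” might look like, here’s a toy sketch that splits the identity check off to a separate issuer, so the chatbot service only ever sees a signed “over-18” claim. Everything here is hypothetical; a real scheme would also need unlinkability (blind signatures or anonymous credentials), expiry, and replay protection:

```python
# Toy issuer/service split for age checks, using the `cryptography` package.
# Purely illustrative: a real scheme needs unlinkability, expiry, and
# replay protection, none of which this has.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: verifies a government ID out-of-band (not shown), then
# signs a bare claim that carries no identity at all.
issuer_key = Ed25519PrivateKey.generate()
token = b"over-18"
signature = issuer_key.sign(token)

# Service side: learns only "a trusted issuer vouched this user is an adult."
issuer_pub = issuer_key.public_key()
try:
    issuer_pub.verify(signature, token)
    print("age verified; no identity shared with the service")
except InvalidSignature:
    print("verification failed")
```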
But we don’t have that, and I do not trust for-profit corporations to ever make one, and in that context this law runs the risk of causing serious issues. Namely, I’d be concerned that – contrary to what the EFF states – companies would decide the path of least resistance is to keep using AI and simply bolt accounts and age verification onto their services. We’d move from having shitty AI chatbot customer support people shouldn’t use, to shitty AI chatbot customer support that is considered so important that the company mandates everyone get age-checked to view a support page.
It’s unlikely, since the verification the law mandates is extensive enough to be an expensive hurdle, one that really isn’t worth it for any company that doesn’t outright rely on AI for its core business. But since when has sense mattered in the so-called AI age?
There’s also the privacy issue of the age gating. Which is omnipresent as ever with these sorts of things. All the bill offers on that front is this:
> (5) AGE VERIFICATION MEASURE DATA SECURITY.—A covered entity—
> (A) shall establish, implement, and maintain reasonable data security to—
> (i) limit collection of personal data to that which is minimally necessary to verify a user’s age or maintain compliance with this Act; and
> (ii) protect such age verification data against unauthorized access;
> (B) shall protect such age verification data against unauthorized access;
> (C) shall protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols;
> (D) shall retain such data for no longer than is reasonably necessary to verify a user’s age or maintain compliance with this Act; and
> (E) may not share with, transfer to, or sell to, any other entity such data.
5(E) here is great. I wouldn’t know if it’s foolproof, and it’s probably not, but it looks good. As for the rest? Seems very unrestricted and lacking definitions to me. Words like “reasonable” are great to use if you want to allow for a broad range of methods for tackling an issue, but I don’t think that move is reasonable when it comes to PII security. With “industry-standard encryption protocols” being as rigorous as the security standards get, the bill may as well just say “try not to fuck up,” and the track record for this is, uh, poor.
So yeah, all in all, way better than the EFF is putting it. But unfortunately the problems are bad enough that I’m not convinced this bill should pass. At least, not while the massive bad-faith age-gating push is currently strangling the internet. I hate AI, and it is absolutely hurting people, but if we’re to have this then privacy-preserving (and secure) tech is a must and has to be created first.
“AI companion” uses this definition and then further narrows it to things like “human-like” and “is designed to encourage or facilitate the simulation of […] friendship” and such, so I’m not worried about that either. ↩︎
6(B) and 6(D) are notable in that they’re specific exclusions; “I am not a minor” buttons and “enter your birthdate” fields are explicitly disallowed as age verification methods, as is relying on the same machine as a different, already-verified user. ↩︎


On one hand, that “everyday use” of AI is genuinely some of the most harmful use there is. People fall into delusions because of that shit, and even when they don’t they get massively overconfident about the answers they get, even despite significant error rates. Not to mention the privacy invasion that occurs with those systems, or the, you know, huge environmental damage.
In particular, this paragraph is doing a lot to make the bill sound better:
> Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.
Yeah. These tools are dangerous. Fucking adults are using them wildly irresponsibly, for God’s sake.
On the other, this is very similar to the push for “protecting” kids from “pornography.” I don’t trust this to not result in massive proliferation of invasive age-gating systems regardless of any AI use at all. We’ll get the worst of both worlds, won’t we?


I think this kind of rhetoric is best saved for when AI is not currently one of the most harmful things in society today. Argue it’s a hammer all you like; people aren’t going to be receptive when that hammer is currently being used to beat their faces in, and making that argument at such a time isn’t exactly sympathetic.


Beehaw, and even Lemmy more broadly, is very anti-AI. Feel free to die on the metaphorical hill if you so wish.
Save the usefulness debate for someone else, though. If you still believe in LLMs even after all this time, then I can’t trust you haven’t fallen victim to cognitive surrender — and as such, I can’t trust you write your own posts. I’d rather spend my energy elsewhere.


Glazing AI on this site sure is a choice.


The CEO? Yeah sure, go ahead!
I haven’t played Civ 7, but that’s mostly for lack of money, because honestly? In no world can I call this change “wrong.” This is experimentation to me. I like experimentation. And for that matter, I like the concept too. Civilizations change throughout history; that’s how that works! And it can introduce opportunities to fix the issue I usually have with Civ games, where I run out of things to do (like exploring) and get bored before the game’s over.
Now, I have no idea if it actually pulled that off; maybe they fucked it up bad. But most of the complaints I’m seeing are from folks who mostly just seem like they didn’t want their cheese moved. And while that’s understandable, I think we’ve got enough Civ games that do the usual Civ stuff by now. If you want that, why not just play 5 or 6?
I really, really hope this doesn’t prevent future Civ games from trying new stuff out. Triple-A games take few enough risks as it is.