Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.

  • 0 Posts
  • 728 Comments
Joined 7 months ago
Cake day: March 3rd, 2024

  • For instance, when it came to rock licking, Gemini, Mistral’s Mixtral, and Anthropic’s Claude 3, generally recommended avoiding it, offering a smattering of safety issues like “sharp edges” and “bacterial contamination” as deterrents.

    OpenAI’s GPT-4, meanwhile, recommended cleaning rocks before tasting. And Meta’s Llama 3 listed several “safe to lick” options, including quartz and calcite, though strongly recommended against licking mercury, arsenic, or uranium-rich rocks.

    All of this seems like perfectly reasonable advice and reasoning. Quartz and calcite are inert, so they’re safe to lick. Sharp edges and bacterial contamination are certainly things you should watch out for, and cleaning would help. And strongly recommending against licking mercury, arsenic, or uranium-rich rocks is exactly right. I’m not sure where the problem is.






  • Ooh, I just tried it out and I can tell I’m going to love it - if not this specific plugin (the UI needs some work) then this general concept of a plugin.

    I just popped over to YouTube and went to a ten-minute video of something or other, clicked the “summarize transcript” button, and within a few seconds I had a paragraph-long summary of what the whole video was about. There have been sooo many YouTube videos over the years that I’ve reluctantly watched with a constant “get to the point, man!” frustration. Now I’ll know if it’s worth it.
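
    Under the hood, a plugin like that is presumably just grabbing the video’s transcript and handing it to an LLM with a “summarize this” prompt. Here’s a rough Python sketch of that idea; the youtube_transcript_api package and the OpenAI-style client are my own assumptions for illustration, not anything I know about how this particular plugin is actually built.

    ```python
    # Rough sketch of a "summarize transcript" feature: pull the transcript,
    # concatenate it, and ask an LLM for a one-paragraph summary.
    # Assumes the youtube_transcript_api package and an OpenAI-compatible API;
    # the real plugin may work completely differently.
    from youtube_transcript_api import YouTubeTranscriptApi
    from openai import OpenAI

    def summarize_video(video_id: str, model: str = "gpt-4o-mini") -> str:
        # The transcript comes back as a list of {"text", "start", "duration"} segments
        segments = YouTubeTranscriptApi.get_transcript(video_id)
        transcript = " ".join(seg["text"] for seg in segments)

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Summarize the following video transcript in one paragraph."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    # Example: summarize_video("dQw4w9WgXcQ")
    ```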







  • The implicit guardrails these companies are going to add which will complicate things.

    That’ll just have to be part of evaluating whether a game is “good” or not, I guess. If game companies hobble their NPCs with all sorts of limitations on what they can talk about then it’ll harm the reception of the game and drop its Metacritic score.

    I do see some interesting hurdles that were likely never imagined when the rules were written. How do you come up with an ESRB rating for a game where you don’t know what topics your NPCs might talk about or what sorts of quest lines might ultimately be generated?

    Numerous game-breaking states because you’re risking a more traditional Dungeons & Dragons Dungeon Master problem where your party somehow has failed to ask an NPC the right kind of questions or even consider that they might have information relevant to the campaign. How do you get this information across if the player isn’t somehow prompted to attempt it?

    That seems like something that an AI-driven game might actually be better at, if properly done. The AI could review the dialogue the character has participated in so far and ask itself “has the player found out the location of the cave with the Necklace of Frinn yet?” And if it sees that the player just keeps on missing that vital clue somehow, it could start coming up with new ways to slip that information into future dialogues. Drop hints and clues, maybe even invent a letter to have delivered to the player, that sort of thing (roughly along the lines of the toy sketch below).

    Whereas in a pre-scripted game if a player misses a vital clue they might end up frustrated and stuck, not knowing they need to backtrack to find what they overlooked.
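
    To make that concrete, here’s a toy sketch of the kind of check-and-hint loop I’m imagining. Every specific in it (the quest fact, the function names, the model) is invented for illustration; I’m not claiming any actual game works this way.

    ```python
    # Toy sketch of an AI "dungeon master" that checks whether the player has
    # picked up a vital clue yet and, if not, nudges future NPC dialogue toward it.
    # The quest fact, function names, and model choice are all placeholders.
    from openai import OpenAI

    client = OpenAI()

    def llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    QUEST_FACT = "the Necklace of Frinn is hidden in the cave behind the waterfall"

    def player_knows_fact(dialogue_log: list[str]) -> bool:
        # Have the model review everything the player has seen and answer yes/no.
        answer = llm(
            "Here is every line of dialogue the player has seen so far:\n"
            + "\n".join(dialogue_log)
            + f"\n\nHas the player learned that {QUEST_FACT}? Answer only 'yes' or 'no'."
        )
        return answer.strip().lower().startswith("yes")

    def npc_reply(npc_persona: str, player_line: str, dialogue_log: list[str]) -> str:
        hint = ""
        if not player_knows_fact(dialogue_log):
            # The player keeps missing the clue, so tell the NPC to work it in naturally.
            hint = f" Naturally work in a hint that {QUEST_FACT}."
        return llm(
            f"You are {npc_persona}. The player says: '{player_line}'. "
            f"Reply in character, in one or two sentences.{hint}"
        )
    ```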

    I think this AI stuff is a cheap cop-out that uses way too much energy for a weak result.

    If the games using AI aren’t good then they won’t sell well. This is a self-correcting problem.



  • It’s complicated, but this might be considered a war crime. A key quote from the article:

    A booby trap is defined as “any device designed or adapted to kill or injure, and which functions unexpectedly when a person disturbs or approaches an apparently harmless object,” according to Article 7 of a 1996 adaptation of the Convention on Certain Conventional Weapons, which Israel has adopted. The protocol prohibits booby traps “or other devices in the form of apparently harmless portable objects which are specifically designed and constructed to contain explosive material.”

    The prohibition is presumably intended to make it less likely that a civilian or other uninvolved person will get injured or killed by one of these seemingly harmless objects. If you’re booby-trapping military equipment or military facilities then that’s not a problem, since civilians wouldn’t be using those.


  • Ignoring the weird pill-related part, the rest of your comment is actually sound. There are genuine medical benefits to be had, at least for males. I don’t know if there are equivalents for women, but I recall reading a study that found that regular ejaculation significantly reduces the chances of prostate cancer later in life.

    Everybody should be free to feel comfortable with their own bodies, IMO. Society’s concerns should only matter when it comes to interactions with others.



  • I’m Canadian. I would say that I don’t think much about it in terms of current events; I haven’t heard much in the news about it in recent years, and my assumption from that is that it’s probably a good sign. There used to be a steady stream of bad news, and “no news” lies along the path between “bad news” and “good news.”

    I did see a video recently about Iraq’s plans for a giant new port facility on that little tidbit of Persian Gulf shoreline it has, and a road/rail link from it up through Turkey and thence onward into Europe. It sounded like a very promising development if it can be seen through to fruition, opening an alternative trade corridor to the Suez Canal. Anything that diversifies a country’s economy is a good thing, and anything that removes single points of failure in global shipping networks is also a good thing. I can’t imagine the Houthi obstruction of the Red Sea will still be a problem by the time that route opens up, but at least it’ll be an option if something like it happens again.


  • Also, what do you mean by synthetic data? If it’s made by AI, that’s how collapse happens.

    But that’s exactly my point. Synthetic data is made by AI, but it doesn’t cause collapse. The people who keep repeating this “AI fed on AI inevitably dies!” headline are ignorant of the way this is actually working, of the details that actually matter when it comes to what causes model collapse.

    If people want to oppose AI and wish for its downfall, fine, that’s their opinion. But they should do so based on actual data, not an imaginary story they pass around among themselves. Model collapse isn’t a real threat to the continuing development of AI. At worst, it’s just another checkbox that AI trainers need to check off on their “am I ready to start this training run?” checklist, alongside “have I paid my electricity bill?”

    The problem with curated data is that you have to, well, curate it, and that’s hard to do at scale.

    It was, before we had AI. Turns out that that’s another aspect of synthetic data creation that can be greatly assisted by automation.

    For example, the Nemotron-4 AI family that NVIDIA released a few months back is specifically intended for creating synthetic data for LLM training. It consists of two LLMs, Nemotron-4 Instruct (which generates the training data) and Nemotron-4 Reward (which curates it). It’s not a fully automated process yet but the requirement for human labor is drastically reduced.
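
    I’m not going to reproduce NVIDIA’s actual pipeline here, but the general shape of a generate-then-curate setup is roughly this (the model interfaces, prompts, and the score threshold below are placeholders I made up, not Nemotron’s real settings):

    ```python
    # Generic shape of a generate-then-curate synthetic data pipeline:
    # one model writes candidate training examples, a second model scores them,
    # and only the high-scoring ones are kept for the training set.
    # Function names and the 0.7 threshold are placeholders, not NVIDIA's settings.
    from dataclasses import dataclass

    @dataclass
    class Example:
        prompt: str
        response: str
        score: float = 0.0

    def generate_candidates(generator, seed_prompts: list[str], n_per_prompt: int = 4) -> list[Example]:
        """Use the 'Instruct'-style model to produce candidate responses."""
        candidates = []
        for prompt in seed_prompts:
            for _ in range(n_per_prompt):
                candidates.append(Example(prompt=prompt, response=generator(prompt)))
        return candidates

    def curate(candidates: list[Example], reward_model, threshold: float = 0.7) -> list[Example]:
        """Use the 'Reward'-style model to score candidates and keep the good ones."""
        for ex in candidates:
            ex.score = reward_model(ex.prompt, ex.response)
        return [ex for ex in candidates if ex.score >= threshold]

    # Usage sketch: plug in whatever generator and reward model you actually have.
    # dataset = curate(generate_candidates(my_generator, my_seed_prompts), my_reward_model)
    ```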

    the only way to guarantee training data isn’t from its own model is to make it yourself

    But that guarantee isn’t needed. AI-generated data isn’t a magical poison pill that kills anything that tries to train on it. Bad data is bad, of course, but that’s true whether it’s AI-generated or not. The same process of filtering good training data from bad training data can work on either.
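
    Concretely, a quality filter doesn’t need to know or care where a document came from; the same scoring pass works on human-written and AI-generated text alike. A minimal sketch (the scoring function here is a stand-in for whatever classifier or reward model you’d actually use):

    ```python
    # The same quality filter applied to human-written and AI-generated text alike:
    # score each document and keep whatever clears the bar, regardless of origin.
    # 'quality_score' is a stand-in for whatever classifier or reward model you use.
    def filter_training_data(documents: list[str], quality_score, min_score: float = 0.5) -> list[str]:
        return [doc for doc in documents if quality_score(doc) >= min_score]

    # human_docs and synthetic_docs go through the identical pipeline:
    # clean_data = filter_training_data(human_docs + synthetic_docs, my_quality_scorer)
    ```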