• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

    • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
    • “I have this personal problem.” (Tell it to keep responses short, and have a natural conversation with it. This is best done out loud if you’re using ChatGPT; speaking prevents you from overthinking responses and forces you to keep the conversation moving. It takes fifteen minutes or more, but you’ll end up with good advice relevant to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
    • I’ve also found it useful to tell it to play a character I describe, and then speak to that character in a pretend scenario to work something out. Use your imagination for how this might be helpful to you. In this case, tell it not to ask you so many questions, and to only ask a question when the character would truly want to. That keeps it more natural; otherwise (in the case of ChatGPT, which I’m most familiar with) it will end every response with a question. Often that’s useful, as in the previous example, but here it is not.
    • etc.

    For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

    For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.



  • I’m not the above poster, but I really appreciate your argument. I think many people overcorrect in their minds about whether or not these models learn the way we do, and they miss the fact that they do behave very similarly to parts of our own systems. I’ve generally found that that overcorrection leads to bad arguments about copyright violation and ethical concerns.

    However, your point is very interesting (and it is thankfully independent of that overcorrection). We’ve never had to take nonhuman personhood seriously in the past, so it’s strangely not obvious despite how obvious it should be: it’s okay to treat real people as special, even in the face of the arguable personhood of a sufficiently advanced machine. One good reason the machine can be treated differently is that we made it for us, like everything else we make.

    I think there still is one related but dangling ethical question. What about machines that are made for us but we decide for whatever reason that they are equivalent in sentience and consciousness to humans?

    A human has rights and can take what they’ve learned and make works inspired by it for money, or for someone else to make money through them. They are well within their rights to do so. A machine that we’ve decided is equivalent in sentience to a human, though… can that nonhuman person go take what it’s learned and make works inspired by it so that another person can make money through them?

    If they SHOULDN’T be allowed to do that, then it’s notable that this scenario is only separated from what we have now by a gap in technology.

    If they SHOULD be allowed to do that (which we could make a good argument for, since we’ve agreed that it is a sentient being) then the technology gap is again notable.

    I don’t think the size of the technology gap actually matters here, logically; I think you can hand-wave it away pretty easily and apply it to our current situation rather than a future one. My guess, though, is that the size of the gap is of intuitive importance to anyone thinking about it (I’m no different) and most people would answer one way or the other depending on how big they perceive the technology gap to be.


  • 4K HDR

    Normally I use kdenlive to edit video, which supports 4K AFAIK but not HDR; it looks like DaVinci Resolve supports both.

    Taxes

    That’s surprising. TurboTax and QuickBooks have online options, and there are a few native apps like GnuCash, but I haven’t used those; TurboTax works for me.

    GarageBand

    Yeah, that’s too bad. I hear good things about Ardour, though. There’s also BandLab, if you’re okay with a webapp.

    Netflix

    I only stream on an actual TV, not my computer, so I haven’t done this in a while, but I thought you could do this in Firefox with DRM enabled? If not, seems like there are addons which enable it. Might be outdated knowledge.

    vector illustration

    Fun is hard to come by

    git client

    Git clients all suck for me; the CLI is the way to go. However, my co-workers who use git clients all use GitKraken (on macOS), and that’s available on Linux, too.
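    For what it’s worth, the day-to-day CLI covers most of what a client does with just a handful of commands. A minimal sketch (the repo name, file, and identity here are placeholders):

```shell
# create a repo and set the identity git needs before committing
git init demo
cd demo
git config user.name "Your Name"
git config user.email "you@example.com"

# stage a change, commit it, and inspect history
echo "hello" > README.md
git add README.md
git commit -m "Initial commit"
git log --oneline   # one line per commit
git status          # what has changed since the last commit
```

    Branching, diffing, and pushing are each one more command (`git branch`, `git diff`, `git push`); a client mostly just wraps these.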

    screen recording was also painful

    Won’t argue with you there. Don’t know why it doesn’t have first-class support in many distros. I hear OBS Studio works well for this if you want to do anything fancy with the recording, otherwise there are plenty of apps for this (Kazam might be a simpler choice).
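    If you don’t need anything fancy at all, ffmpeg alone can also capture the screen on X11. A sketch, where the display name, capture size, and output filename are all assumptions you’d adjust for your own setup:

```shell
# record X11 display :0.0 at 30 fps; press q (or Ctrl+C) to stop
ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
       -c:v libx264 -preset ultrafast screencast.mp4
```

    This won’t work under Wayland, where you’d need a PipeWire-based recorder (which is what OBS uses there anyway).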

    barely meets my use cases

    I think really (considering the above) your main issue is that you just have some strong software preferences. There are certainly ways to meet most if not all of the use cases you listed. It requires a big change in workflow, though.

    For what it’s worth, I find that most of the issues with software alternatives on Linux come down to people recommending free/GPL replacements, which are invariably worse than the commercial/non-free software the user is used to. But remember that there is paid software in Linux land, too. In my case, I’ve often found that if I can pay for the software it will be better, and if there’s a webapp version of something non-free it will often be better than the native FOSS alternative. There are many notable exceptions to that rule, but money does solve the occasional headache.






  • That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.

    The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.

    Banning these comments makes the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning these comments is comparatively very minimal: effectively removing one type of ad hominem attack in arguments that have always featured ad hominem attacks, in one form or another.



  • keegomatic@lemmy.world to World News@lemmy.world · Sidebar Update: Civility · 4 months ago

    I think that public call-outs of suspicious behavior are the only real, continuous way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY these) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe I never have on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for creating a resilient community.

    I’ve been on forums and aggregators similar to Lemmy for a long time, and I think I have a pretty good radar for suspicious account behavior. Reading occasional accusations from within your community helps you think critically about what’s being espoused in the thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.

    Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.