• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • In their interactions and personal knowledge, perhaps he was. If you personally don’t know Danny or anyone else involved, your only exposure is what you’ve heard presented and made public. If you personally knew Danny and hadn’t witnessed any of these crimes yourself, you now have a conflicted view of someone who is both your friend and now guilty of 2 counts of sexual assault. While that conviction almost certainly changes your relationship going forward, it doesn’t change how you thought of that individual beforehand.

    Ashton and Mila were asked to write letters of character describing the Danny they knew. It doesn’t change the outcome of the trial, but in cases where the judge has discretion over sentencing, a judge will often weigh letters like these to determine what is appropriate. Is there a chance that the defendant will repeat this offense? What punishment, if any, will be restorative to the victims? How does this punishment affect everyone, including families established years afterwards? Is the defendant the same person today as they were when they committed these crimes?

    These aren’t matters easily decided, and therefore it isn’t surprising to see letters of character submitted either as part of the trial or during sentencing. If there is a pattern of behavior, then the sentence might be the maximum allowed, but if there’s no clear discernible pattern, then the sentence might be light.

    I don’t know all the details that were considered, but based on my knowledge from reports, I think 15 years concurrent would have been appropriate. However, I don’t have all the evidence or material to make an informed decision. I don’t look upon these letters as reflecting poorly on Ashton or Mila, as they were just doing what was asked of them to help give the judge the context necessary to carry out an appropriate sentence. They aren’t guilty of doing anything wrong, any more than the lawyers defending a now convicted and sentenced rapist are.



  • I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest.

    I can relate to that. It usually isn’t a coin flip for me, though. I’ll align with one technology over another because I truly can see an advantage, even when that technology is the underdog from the beginning. Consider that we’re evaluating Firefish vs. Lemmy vs. Kbin, when all of them combined are still the underdog compared to certain more well-established social forums. I engage with all three (and others still), because I don’t know the future.



  • I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip undetected past an AI.

    Maybe this is an opportunity we haven’t considered. This is the chance to create a Turing CAPTCHA Test. We can’t use Glorbo to do so, because it has been written, but perhaps it makes sense to have a nonsensical code phrase people can use to identify AIs: markers intentionally added to LLM training models and buried in articles written by human authors, plus a challenge/response which is never written down and only passed verbally through real human-to-human interactions.


  • If a human can access your public repo and read comments posted on public forums, are they stealing your code? LLMs are just aggregators of a great many resources, and they aren’t doing anything more than a biological human can already do. The LLM can do so more efficiently than a biological human, while perhaps being more prone to error, as it doesn’t completely understand why something is written the way it is. As such, any current AI model is prone to these kinds of errors, but in my experience it has been very good at organizing the broader solution.

    I can give you two examples. I started by trying to find out how a .Net API call was made. I was trying to implement retry logic for a call, and I got the answer I asked for. I then realized that the AI could do more for me. I asked it to write the routine for me, and it suggested using a library which is well suited for that purpose. I asked it to rewrite the routine without using an external library, and it spit that out too. I could have written this completely from scratch; in fact, I had already come up with something similar, but I was missing the API call I was initially looking for. That said, the result had some parts I would otherwise have had to go back and add, so it saved me a lot of time doing something I already knew how to do.
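The library-free routine the AI produced isn’t shown here, but the general shape of that kind of retry logic can be sketched like this (in Python rather than .Net, with the operation, attempt count, and backoff delays all hypothetical placeholders):

```python
import time

def retry(operation, attempts=3, base_delay=0.1):
    """Call operation(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...

# Usage: a flaky call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

A dedicated library adds features a hand-rolled loop lacks (jitter, circuit breakers, policies per exception type), which is presumably why the AI suggested one first.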

    In a second case, I asked it to solve a problem which at its heart was a binary search. To validate that the answer was correct it would need to go one extra step, but to answer the question it wasn’t necessary to actually perform that last validation step. I was looking for the answer 10, but the AI gave me answers in the range of 9-11. It understands the basic concepts, but it still needs a biological human to validate what it generates.
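The original problem isn’t given, so as a hypothetical stand-in, here is a binary search for the largest n with n*n <= 100 (answer: 10), including the kind of extra validation step that would catch off-by-one answers like 9 or 11:

```python
def largest_n_with(pred, lo, hi):
    """Binary search for the largest n in [lo, hi] with pred(n) True,
    assuming pred holds up to some threshold and fails after it."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if pred(mid):
            lo = mid      # mid works; the answer is mid or higher
        else:
            hi = mid - 1  # mid fails; the answer is below mid
    return lo

# Stand-in problem: largest n with n*n <= 100.
answer = largest_n_with(lambda n: n * n <= 100, 0, 100)

# The extra validation step: confirm the boundary actually holds,
# which rules out the off-by-one answers 9 and 11.
assert answer * answer <= 100 and (answer + 1) ** 2 > 100
print(answer)  # 10
```

Skipping that final boundary check is exactly how an answer of 9 or 11 can slip through: both are near the threshold, and only the check distinguishes them from 10.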