• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • isildun@sh.itjust.works to Greentext@sh.itjust.works: “Anon boots up a game” (2 months ago)

    Actually, this isn’t the worst idea. It can be hard to tell what kind of input device the player is using, especially on PC. Are they on kb+m, an Xbox controller, a PlayStation controller, a generic bargain-bin controller, etc.? You also can’t just assume that because a controller is connected the player is going to use it (lots of games do… much to my dismay, since they make me go disconnect the controller). Once the player presses at least one button, you can tailor all the prompts and inputs to that device.
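    A minimal sketch of the idea, assuming a hypothetical engine where each input event is tagged with its source device (all names here are invented for illustration):

```python
# Hypothetical sketch: choose a button-prompt scheme from the first input
# event the player actually sends, instead of guessing from which devices
# happen to be plugged in.

PROMPT_SCHEMES = {
    "keyboard": "kb+m",
    "mouse": "kb+m",
    "xbox_pad": "xbox",
    "ps_pad": "playstation",
    "generic_pad": "generic",  # fall back to plain button names
}

def scheme_for_first_input(event_source: str) -> str:
    """Return the prompt scheme for whichever device the player used first."""
    return PROMPT_SCHEMES.get(event_source, "generic")
```

    You would call this once on the first real input event and re-run it whenever the active device changes mid-session.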



  • The long story short is that you are being made to (by default) give up rights that you should have, particularly around class-action lawsuits. It’s strictly bad for you and strictly good for the company. They probably shouldn’t be allowed to do this; since they are, the only thing we can do to protest it is opt out.

    Maybe you’ll never sue Discord. But maybe someday someone else will bring a lawsuit against Discord. A few candidate topics: a security vulnerability that leaks personal information, the use of Discord content as AI training data (e.g. copyright issues), or the safety of minors online. If you don’t opt out, you can’t be part of such lawsuits if they ever become relevant. That weakens these lawsuits overall and empowers companies like Discord to do shadier things with less fear of repercussions.

    And since the vast majority of people will never opt out (you’re opted in by default), these kinds of lawsuits are weakened from the start. That’s why nearly every company in the US is doing this forced-arbitration thing. At this point they’d be crazy not to: it’s that good for them, and the average person doesn’t care enough to push back.


  • Some of that is content categorization in the eyes of the all-seeing algorithm. Say you upload one video of content type “A” that gets big views, but you’ve been uploading content type “B” that gets small views for a while. The YouTube algorithm will aggressively try to grow your content A and massively deprioritize your content B, even relative to other channels producing content B.

    A guy I know who does YouTube/Twitch had to create a second channel for his content B because it would get sub-1k views while his content A got tens of thousands. Just by uploading somewhere else, his content B started getting higher view counts.

    Exactly why that happens isn’t known, but a common theory is that YouTube wants to push what it knows works. It has no real incentive to give your content B a chance when it knows content A will sell. And it does this even though the outcome is a feedback loop: A gets promoted because it got views, and it got views partly because it was promoted.
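    As a toy illustration of that feedback loop (all numbers invented, not how YouTube actually works), imagine a recommender that hands out impressions in a winner-take-more way based on past views:

```python
# Toy model with invented numbers: impressions are allocated in proportion
# to views raised to a "bias" power, so an early lead compounds and the
# smaller content type's share keeps shrinking.

def simulate(views, total_impressions=1000, ctr=0.1, bias=2.0, rounds=5):
    """views: dict mapping content type -> starting view count. Returns final views."""
    views = dict(views)
    for _ in range(rounds):
        weights = {k: v ** bias for k, v in views.items()}  # winner-take-more
        wsum = sum(weights.values())
        for kind in views:
            impressions = total_impressions * weights[kind] / wsum
            views[kind] += int(impressions * ctr)  # a fraction become views
    return views
```

    With a 900-view content A and a 100-view content B, B’s share of total views drops below its starting 10% after the very first round and keeps falling.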


  • Not necessarily. For a game like this that only functions online, you could presumably determine all the possible server calls and point them to a server you own. You could do this purely via clever network settings without modifying the game at all. If you could do that, the game would run fine and you could even use the original authentication server to ensure the user holds a valid license.
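    A minimal sketch of that redirection idea in Python, assuming a made-up game hostname; in practice you’d accomplish the same thing with a hosts-file entry or a local DNS server rather than in code:

```python
import socket

# Map the (hypothetical, invented) official game hostname to a server you
# control. Loopback stands in here for "a server you own".
OVERRIDES = {"play.example-game.com": ""}

_real_getaddrinfo = socket.getaddrinfo

def patched_getaddrinfo(host, *args, **kwargs):
    """Resolve overridden hostnames to our own server; pass the rest through."""
    return _real_getaddrinfo(OVERRIDES.get(host, host), *args, **kwargs)

socket.getaddrinfo = patched_getaddrinfo
```

    Nothing in the game binary changes; name resolution just quietly points its server calls somewhere else.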

    At that point, you “just” need to implement and run a server for the game. This also doesn’t involve modifying the game, but it could run afoul of laws against reverse engineering if it isn’t done in a clean-room manner (I’m not a lawyer, and there could be other pitfalls too; US law unfortunately tends not to favor the end user).

    Regardless of any of that, it always feels silly to me when companies fight tooth-and-nail against people who are not only doing free work and hosting for a dead game but ALSO trying to ensure players actually own the game before joining their private server. Of course players could just use 🏴‍☠️ versions and black-hole the authentication server. All the company does by withdrawing licenses is guarantee those players skip authentication, so the company loses out either way.


  • I’m almost starting to wonder if that’s the plan. Just keep saying “IPO, IPO, IPO” to get funding from over-eager VCs who want a piece of the IPO before shares become widely available.

    But then you just never IPO. Keep making minor-to-moderate mistakes along the way so you can be all “weeeeell, we would have IPO’d, but [insert thing here], so we want to wait another 6 months to let it die down”. Repeat until you’re ready to quit, then actually IPO and ride the initial high all the way down via golden parachute.






  • I’m not the person who found it originally, but I understand how they did it. We have three useful data points: you are 2.6 km from a Burger King in Italy, that BK is on a street called "Via ", and you are 9792 km from a Burger King in Malaysia.

    1. The upper BK in Malaysia is not censored, so we have its exact location.
    2. Find a place in Italy that is 9792 km away using the Measure Distance tool on something like Google Maps.
    3. Even though there are potentially multiple valid locations in Italy, we know you’re within 2.6 km of another BK. Florence is sensible because there are BKs near the 9792 km mark.
    4. Once we do that, we can find a spot that is both 9792 km from Malaysia BK and 2.6 km from a nearby BK on a street called “Via”, effectively finding where the image was taken.


    It’s not perfect, but it works well! This is essentially how GPS works too. Strictly speaking it’s trilateration (positioning from distances) rather than triangulation (positioning from angles). Here we only had distances to two points, and one of them wasn’t precise below the kilometer level. With distances to three points, you could pin down the EXACT location, within some error depending on how precise the distance information was.
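    The two-distance version of the geometry can be sketched in a few lines. On a flat map (ignoring Earth’s curvature, with invented coordinates), each known distance puts you on a circle around that Burger King, and two circles meet in at most two points:

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (center, radius) in the plane."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # concentric, too far apart, or one inside the other
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-chord length
    mx = x1 + a * (x2 - x1) / d            # foot of the chord on the center line
    my = y1 + a * (y2 - y1) / d
    dx = h * (y2 - y1) / d                 # perpendicular offset
    dy = h * (x2 - x1) / d
    return [(mx + dx, my - dy), (mx - dx, my + dy)]
```

    The two returned points are the two candidate spots; a third distance, or here the street name, picks between them.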



  • Copilot, yes. You can find some reasonable alternatives out there but I don’t know if I would use the word “great”.

    GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run a quantized Llama-2 on an M1 (or better) MacBook, making use of the unified memory. Lack of GPU VRAM makes running even the “basic” models a challenge, and for the record, this will still perform substantially worse than GPT-4.

    If you’re willing to pony up, you can rent hardware from the usual cloud providers, but it will not be cheap, and it will still take serious effort: you’re basically going to have to fine-tune your own LLM to get anywhere near the same ballpark as GPT-4.