Well, I didn’t want to have a bio, but Lemmy doesn’t let me null it out, so I guess I’ll figure out something to put here later.

  • 0 Posts
  • 59 Comments
Joined 6 months ago
Cake day: May 10th, 2024

  • I assume you’re not using iMessage anyway, then, since Apple’s Messages stack isn’t open source either. And if you’re not using iMessage, it shouldn’t matter to you what Beeper Mini is doing. This app isn’t for the ultra-paranoid, and neither is Google’s RCS in Google Messages; that’s where Signal and Matrix are better choices. If you are using iMessage on an Apple device, you’re choosing to trust Apple despite their app being closed source while choosing not to trust Beeper, which is fine, and I don’t judge you at all for that stance. But at that point your qualms aren’t simply about Beeper Mini being closed source; the implication is that you don’t trust Beeper as a company and/or its developers, which, again, is a valid stance, even if it’s one I don’t share.

    But I am personally pretty sure I can trust Beeper and Apple enough with my relatively meaningless conversations.



  • By that logic, there’s nothing guaranteeing iMessage on iPhones is secure or private either, because it’s closed source. If you don’t want to trust Beeper Mini, you’ll be free to run their iMessage bridge on your own Matrix stack when they open source it, which they’re promising to do at some point (and you still won’t know that Apple isn’t scraping your messages on the iOS side). When I decide to trust a company, it’s because of what they transparently communicate to their end users, and every indication is that Beeper is trying to get out of the middle of handling encrypted messages. Their first move toward that was letting people self-host their own Beeper bridges (which you can still do with Beeper Cloud if you prefer, and then you know your messages stay encrypted within Beeper’s infrastructure). They’re never going to release the source for their client, though, because that’s the only way they make any money.


  • What you're describing is only possible on de-anonymized platforms with "know your customer"-type policies, where users have to provide some kind of proof of their identity. While I agree there's value in social spaces where everyone generally knows that the people they're interacting with are who they say they are, I don't think that will ever be feasible on a federated social platform. Facebook is honestly the closest thing we have to what you're describing, and I believe Meta has even kicked around a more sandboxed Instagram for minors (though I don't use Instagram, so I'm not certain of the details).

    For me, in most cases on a platform like Lemmy, a person's age is not something I care about. I care about what people are sharing and saying. But then again, none of my interests for online discussion at this point in my life are really age-centric. There are clearly better platforms than Lemmy if people want to guarantee they're only interacting within their age-specific peer groups.


  • I would highly recommend the recent Freakonomics Radio series about whaling: episodes 549-551 and the bonus episode from 2023-08-06. If you’re firmly against killing any living creature (or at least sentient creatures), I highly doubt it will change your mind (and I don’t think it should, or that it tries to), but it’s really fascinating to learn about the history of the whaling industry and to hear the perspective of a modern whaler in the bonus episode. Putting aside the obvious ethical issues with killing sentient creatures, it’s interesting to consider things like whether there’s a sustainable level of whaling, what a sustainable quota would look like, and how much we’re in competition with certain whale species when we harvest fish to feed our own. I personally appreciated how unbiased Freakonomics tried to be in its discussion of the topic.


  • If ChatGPT only costs $700k per day to run and they have a $10b war chest, then assuming no other overhead or development costs, OpenAI could run ChatGPT for roughly 39 years. I’m not saying the premise of the article is flawed, but seeing as those are the only two relevant data points presented in this (honestly poorly written) article, I’m more than a little dubious.
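
    That 39-year figure holds up if you run the numbers; a quick sanity check using only the article’s two data points (which I haven’t verified independently):

        # Sanity check on the article's two figures (unverified)
        daily_cost = 700_000             # reported cost to run ChatGPT, $/day
        war_chest = 10_000_000_000       # reported Microsoft investment, $
        print(war_chest / daily_cost / 365)  # ~39.1 years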

    But, as a thought experiment, let’s say there’s some truth to the claim that they’re burning through their stack of money in just one year. If things get too dire, Microsoft will just buy 51% or more of OpenAI (they’re going to be at 49% anyway after the $10b deal), take controlling interest, and figure out a way to make it profitable.

    What’s most likely going to happen is that OpenAI keeps finding ways to cut costs, like caching responses to common queries from free users (and possibly even entire conversations, if the same follow-up prompts recur often enough); a toy sketch of the idea follows below. They’ll likely iterate on their infrastructure to make new queries cheaper to run, then charge enough for their APIs to start making a lot of money. Needless to say, I do not see OpenAI going bankrupt next year. I think they’ll be profitable within 5-10 years. Microsoft is not dumb, and they will not let OpenAI fail.
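
    To be clear, nothing public describes how OpenAI actually does or would do this; this is just a minimal sketch of the general idea, with made-up names (answer, _model_call) and a placeholder standing in for real inference:

        from functools import lru_cache

        @lru_cache(maxsize=100_000)
        def _model_call(normalized_prompt: str) -> str:
            # Hypothetical stand-in for an expensive model inference call.
            return f"answer to: {normalized_prompt}"

        def answer(prompt: str) -> str:
            # Normalize *before* the cache lookup so case and whitespace
            # variants of a common question all hit one cached response.
            return _model_call(" ".join(prompt.lower().split()))

    Keying the cache on the normalized conversation history instead of a single prompt would extend the same trick to whole conversations.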