the 300/month is for the $5 plan? Possible “fair use”-like hidden limits aside, the $10 one sounds unlimited
from their front page they claim that “We do not log or associate searches with an account”, and their privacy page is fairly detailed
Why would you want to block their telemetry?
It is not like they’re using it to serve ads to you, and it should be better for everyone if developers make decisions based on how users actually use their app, no?
Might want to clarify: the “model” in this case is not a full model like Stable Diffusion, but rather something applied like a patch, more comparable to LoRA.
I don’t think that anyone would misunderstand anyway, but better safe than sorry
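In case it helps, here’s a rough numpy sketch of the low-rank “patch” idea behind LoRA-style adapters; the dimensions, scaling, and names are made up purely for illustration and aren’t taken from any specific implementation:

```python
import numpy as np

# Instead of distributing a whole new weight matrix W, a LoRA-style
# adapter ships only two small factors A and B and adds their product
# on top of the frozen base weights at load time.
d_out, d_in, rank = 512, 512, 8            # rank << d_out, d_in

W_base = np.random.randn(d_out, d_in)      # frozen base model weights
A = np.random.randn(d_out, rank) * 0.01    # small adapter factor
B = np.random.randn(rank, d_in) * 0.01     # small adapter factor

# The "patch" is only A and B: (d_out + d_in) * rank parameters
# instead of d_out * d_in for a full replacement of W.
W_patched = W_base + A @ B

x = np.random.randn(d_in)
y = W_patched @ x                          # forward pass with the adapter applied
```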
Some things you did not mention caught my eye; please correct me if I misunderstood how they work:
…I personally can only really see those as cons.
Not to mention the abuse-related issues that come with the “100% Censorship resistance”, from scams and social engineering to abusive texts to illegal content to displeasing images.
I can see an argument for some sorts of communities, but I would never consider that “a good alternative to Twitter/Facebook” in general.
If anything, their explicit, by-design lack of moderation may make it even worse for vulnerable/sensitive groups.
Quoting their FAQ before anyone asks for the source:
> (Security > Privacy and Data Security > Where is my data stored?)
>
> Your data is relayed and stored in your friends’ devices and other random devices available in the network. All data is protected by strong cryptography algorithms and can be accessed only with the owner’s secret seed.
>
> (Security > Underlying Technology > On what technology is WireMin built in?)
>
> WireMin users jointly created an open computing platform for messaging and data storage that serves each other within the network for personal communication. WireMin protects the public resource from being abused or attacked by requiring proof-of-work, or PoW, for every message sent and each bit of data stored. A tiny piece of PoW needs to be completed by computing SHA256 hundreds of thousands of times before you can send a message. Such computing tasks can be done in less than a tenth of a second which is a negligible workload for a user device sending messages at human speed. While this introduces a significant effort for an attack to send overwhelming amounts of messages or data, the actual PoW difficulty requirement of a specific message or bit of data is proportional to its size and the duration for which it is to be stored.
Unlike the Fediverse and similar projects, there are no servers or instances at all; it’s exclusively peer-to-peer.
They explicitly opted not to have any form of moderation, relying on proof of work instead, which should help reduce spam but doesn’t do much about offensive content or trolls.
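For the curious, here’s a minimal Python sketch of the kind of per-message proof-of-work their FAQ describes; the function, difficulty value, and nonce encoding are my own guesses for illustration, not WireMin’s actual scheme:

```python
import hashlib

# Toy per-message proof-of-work: find a nonce so that
# SHA-256(message || nonce) starts with `difficulty_bits` zero bits.
# On average this takes ~2**difficulty_bits hashes, e.g. ~260k for
# 18 bits, roughly matching the "hundreds of thousands" of SHA256
# computations the FAQ mentions. The real difficulty would presumably
# scale with message size and storage duration.
def proof_of_work(message: bytes, difficulty_bits: int = 18) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A receiving peer can cheaply verify the work with a single hash.
nonce = proof_of_work(b"hello from a WireMin-like network")
print("nonce:", nonce)
```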
It is not “in the whole fediverse”; it is out of approximately 325,000 posts analyzed over a two-day period.
And that is just for known images that matched the hash.
Quoting the entire paragraph:
> Out of approximately 325,000 posts analyzed over a two day period, we detected 112 instances of known CSAM, as well as 554 instances of content identified as sexually explicit with highest confidence by Google SafeSearch in posts that also matched hashtags or keywords commonly used by child exploitation communities. We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse on posts containing media, as well as 1,217 posts containing no media (the text content of which primarily related to off-site CSAM trading or grooming of minors). From post metadata, we observed the presence of emerging content categories including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM (SG-CSAM).
Here’s a link to the report: https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
It is from 2023-07-24, so there’s a considerable chance it is not the one you were thinking about?
They could live in the Southern Hemisphere subtropics, like somewhere in Latin America.
Their arguments still hold up pretty well as far as I can tell. If anything “improved” since then, you could argue that what the biggest platforms decided to use (Mastodon, Lemmy) became the de facto dialect, but you still have to explicitly refer to how certain projects do things if you want to implement ActivityPub, which can be pretty demotivating for developers and doesn’t make the user experience any better.
…and nothing prevents new apps such as Threads from using ActivityPub differently, being incompatible with existing apps and further dividing the space.
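As a rough illustration of how little the spec alone pins down, here’s a sketch of a spec-level “Create Note” activity using only standard ActivityStreams vocabulary (the actor and URLs are placeholders); getting Mastodon, Lemmy, Threads, etc. to actually accept and render it still depends on per-project conventions like HTTP signatures, WebFinger discovery, and how each one models things such as communities:

```python
import json

# Bare ActivityPub/ActivityStreams "Create Note" payload. This much is
# standardized; delivery and interpretation are where projects diverge.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",            # placeholder actor
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, fediverse!",
    },
}

print(json.dumps(activity, indent=2))
```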
If by algorithms you mean things like GPT, all data on the fediverse is effectively public and arguably even easier to collect than the likes of Reddit, and it is almost certainly going to be used to train models whether or not the fediverse federates with Threads.
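To show how low the bar for collection is, here’s a rough sketch using Mastodon’s unauthenticated public timeline endpoint; the instance name is a placeholder, and rate limits plus each instance’s rules still apply:

```python
import json
import urllib.request

# Most Mastodon instances expose their public timeline without authentication.
url = "https://mastodon.example/api/v1/timelines/public?limit=5"
with urllib.request.urlopen(url) as resp:
    statuses = json.load(resp)

for status in statuses:
    # "content" is an HTML snippet of the post body.
    print(status["account"]["acct"], "-", status["content"][:80])
```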
There’s still significance in defederating though, especially when it comes to preventing “Embrace, extend, and extinguish”.
Out of all the things to hate Reddit for, giving data to AI isn’t something fediverse users can really criticize it for; making money from it, perhaps.
Remember: all data on federated platforms is available for free and is likely already being compiled into datasets. Don’t be surprised if this post and its comments end up in GPT-5 or GPT-6 training data.