Google has reportedly removed many of Twitter’s links from its search results after the social network’s owner, Elon Musk, announced that reading tweets would be limited.
Search Engine Roundtable found that Google had removed 52% of Twitter links since the crackdown began last week. Twitter now blocks users who are not logged in and sets limits on reading tweets.
According to Barry Schwartz, Google reported 471 million Twitter URLs as of Friday. But by Monday morning, that number had plummeted to 227 million.
“For normal indexing of these Twitter URLs, it seems like these tweets are dropping out of the sky,” Schwartz wrote.
Platformer reported last month that Twitter refused to pay its bill for Google Cloud services.
I feel like Google is going to have to find a way to effectively index federated content at some point. The only way to really get human information is from sites like Reddit and Twitter. And both of those platforms seem to be dedicated to completely imploding at the moment.
It’s already indexing it.
DuckDuckGo (which uses Microsoft’s index, I believe) is able to find Lemmy instances already.
Problem is, since every instance has its own domain, you cannot search all of Lemmy or the more obscure fediverse. lemmy.world, beehaw.org, and programming.dev are all different “websites”. I append “reddit” to my query when I want to search reddit for a human answer to a question. Can’t do that with Lemmy, unless the instance is branded as Lemmy.
Unless there’s an org or volunteers that indexes federated instances and makes them available to search engines so they can be differentiated, finding stuff in the fediverse might be difficult…
Would be lovely if we could just start a search with fedi: or activityPub:
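A prefix like that could be faked today by expanding it into an OR of site: operators over known instances. A minimal sketch, assuming a hand-maintained instance list (a real tool would pull it from a fediverse directory), with fedi: as the hypothetical prefix:

```python
# Sketch: expand a hypothetical "fedi:" prefix into a multi-site search query.
# KNOWN_INSTANCES is an assumption for illustration; a real tool would fetch
# the list from a directory of fediverse instances.

KNOWN_INSTANCES = ["lemmy.world", "beehaw.org", "programming.dev"]

def expand_fedi_query(query: str) -> str:
    """Rewrite 'fedi:<terms>' into '(site:a OR site:b ...) <terms>'."""
    if not query.startswith("fedi:"):
        return query  # leave ordinary queries untouched
    terms = query[len("fedi:"):].strip()
    sites = " OR ".join(f"site:{host}" for host in KNOWN_INSTANCES)
    return f"({sites}) {terms}"

print(expand_fedi_query("fedi:rust borrow checker"))
# -> (site:lemmy.world OR site:beehaw.org OR site:programming.dev) rust borrow checker
```

The rewritten string can then be pasted into any engine that supports the site: operator.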
There’s nothing about the content being federated that makes it hard or impossible to index. Each instance is just a website with a public webpage that a bot can read. That’s all a search engine needs to index it. The worst-case scenario is that the bot will find the same content on multiple instances.
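That worst case is easy to handle: a crawler can hash a normalized form of each post’s text and keep only the first copy it sees. A minimal sketch with made-up post data:

```python
# Sketch: deduplicate the same federated post seen on several instances
# by hashing a normalized form of its content. The URLs and text below
# are invented for illustration.
import hashlib

def content_key(text: str) -> str:
    # Collapse whitespace and lowercase so trivially mirrored copies match.
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

def dedupe(pages):
    """Keep only the first page seen for each distinct content hash."""
    seen, unique = set(), []
    for url, text in pages:
        key = content_key(text)
        if key not in seen:
            seen.add(key)
            unique.append((url, text))
    return unique

pages = [
    ("https://lemmy.world/post/1", "Same post,   mirrored text."),
    ("https://beehaw.org/post/9", "same post, mirrored text."),
    ("https://programming.dev/post/2", "A different post."),
]
print(len(dedupe(pages)))  # -> 2 (the mirrored copy is dropped)
```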
I did read that the website is loaded entirely through JavaScript, and that maybe Googlebot doesn’t execute JavaScript, so it can’t see the text. I don’t know if that’s still a problem in 2023, though.
This article says it’s not a problem, but I didn’t read past the tl;dr, so maybe there’s a caveat. Like maybe it has to use a popular framework like React or something to work.
https://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
Googlebot does execute JavaScript, but since rendering JS needs much more resources, JS crawling happens significantly less often than plain HTTP crawling. That’s why all big sites still return server-side rendered content.
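You can see the difference by looking at what a non-JS crawler actually gets: just the text in the raw HTML outside script tags. A small sketch using only the stdlib html.parser, with two made-up pages (a server-rendered one and a JS-only shell):

```python
# Sketch: what a crawler that doesn't run JavaScript "sees" is the raw HTML
# text outside <script> tags. The two example pages below are invented.
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect text nodes, skipping anything inside <script> tags."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    p = VisibleText()
    p.feed(html)
    return " ".join(p.chunks)

ssr = "<html><body><h1>Post title</h1><p>Actual content</p></body></html>"
spa = "<html><body><div id='app'></div><script>renderApp()</script></body></html>"
print(visible_text(ssr))  # -> Post title Actual content
print(visible_text(spa))  # -> empty string: nothing to index without running JS
```

A server-rendered Lemmy page would look like the first case to the cheap HTTP crawl; a pure client-side app looks like the second until the expensive JS rendering pass comes around.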
Isn’t it automatically indexed? I mean, I can go to lemmy.world in a browser and see the content, wouldn’t Google’s indexing bots do the same?
Fuck Google, if Lemmy continues to take off we can just develop better search tools within the fediverse. The wider internet has been colonized, the path forward cannot rely on big tech corporations.
I’m not a programmer/developer so I don’t even understand the scale of the work that has yet to be done. But I am deeply committed to upsetting the status quo, and this platform feels distinctly revolutionary. Can’t wait to see what the future holds for Lemmy.
Ask and ye shall receive. Just saw this post!
https://lemmy.world/post/963301
I want a federated wiki. But somehow resistant to bad actors.
Lol nice. The pace of development of Lemmy is unreal. I’ve only been here a month and it’s already so much easier to use than in the beginning.
It’s all well and good to have a revolution, but if nobody knows you’re having one then nothing really changes. There are still benefits to centralised services, one of which being scale. To effectively index so much data you need scale, which is why smaller search engines tend to be just white labels of things like Bing.
100k people isn’t nobody. Centralized services can be useful at times, but there is no fundamental law preventing a decentralized system from providing the same functionality.
The value of indexing data drops drastically when much of that data is junk, as is the case in the wider internet. Because Lemmy is a federation, there is a built in system to filter the junk.
It already is.
Just put ‘site:lemmy.world’ into Google to see what it has indexed on that instance for example. I don’t think Lemmy is optimised for search yet, but I saw some GitHub threads around the topic.
Honestly, and I hate this, but I doubt they will. The majority of people will never go federated, even though it’s so easy, because they suck.
It’s the difference between a mom and pop restaurant and McDonald’s.
We don’t need everybody to go to the mom and pop restaurant. Just enough of us to keep it afloat.
People at large really need to remember that not every kind of growth is good: it has to be sustainable, and should only go as far as it’s needed.
Unlimited growth is basically cancer, and that’s what big corpos are to society tbh.
You can tell from how many upvotes this has that there are just as many idiots here as on reddit lol
Google can definitely index Lemmy