Sir, this isn’t a Twitch.
I’m the administrator of kbin.life, a general-purpose/tech-orientated kbin instance.
Going to second other comments: even without archinstall, it feels like it will be harder than it actually is. Just save yourself a bit of time and configure the network and install a console editor (nano/vim, whatever) while in the chroot (if going full manual). It was a minor pain to work around that omission for me.
There are pages discussing how to do everything (it helps to have a laptop with a browser, or a phone, to look them up). At the end, you generally know exactly what you installed (OK, no-one watches all the dependencies), and I’ve found any borks that happen easy to fix because I know what I installed.
I remember those times too. The difference today is that there are so many more libraries, and projects use them a lot more often.
So using configure and make means the user is also responsible for keeping all those libraries up to date. And if we’re talking about avoiding binary installs entirely, each of those needs its own regular configure/make cycle too. It’s not unusual for large packages to depend on 100+ libraries, at which point building and maintaining all of them yourself becomes untenable. However, I think Gentoo exists to automate a lot of this while still building from source.
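As a toy illustration of that fan-out (the package names and graph below are entirely made up), even a handful of direct dependencies per package balloons into a large transitive set you’d have to build and track yourself:

```python
# Toy dependency graph (fabricated names) showing how direct dependencies
# fan out into a much larger transitive set when you build from source.
deps = {
    "app": ["gui", "net", "media"],
    "gui": ["render", "fonts", "input"],
    "net": ["tls", "dns"],
    "media": ["codec_a", "codec_b", "render"],
    "tls": ["crypto"],
    "render": ["maths"],
}

def closure(pkg, seen=None):
    """Collect every library reachable from pkg."""
    seen = set() if seen is None else seen
    for dep in deps.get(pkg, ()):
        if dep not in seen:
            seen.add(dep)
            closure(dep, seen)
    return seen

print(len(closure("app")), "libraries to configure/make and keep up to date")
```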
I understand why binaries with references to other binary packages for prerequisites are used. I also understand where the limits of this are and why AppImage/Flatpak/snaps exist. I just don’t particularly like the latter as a concept, but I accept there are times you might need them.
I looked at that. Actually I would argue that was even more negligence by the management there. I mean they couldn’t even say how long he’d not been working for.
But in reality he was paid for at least 6 years of work (and they suspected more) and only fined for 1 year of pay. So, he’s still a winner I think. And yes, public funds likely did help in bringing that case forward.
Most larger private businesses tend to avoid going to a court for such things unless they need to in my experience.
You can make fun of managers not doing work. But you know what’s worse than someone at manager/director level that doesn’t do any work? One that insists on doing so! Trust me, first-hand experience.
I don’t know if they have much of a case to sue you if you fall through the cracks due to their own negligence. Fire you, yes. Sue? I’m doubtful most larger businesses would even try. They’d rather solve the problem and sweep it under the carpet, in my experience. Not USA experience, of course, but I expect the attitude would be similar.
I would worry a bit about whether they’re allowed to give negative references though. Because if so, it might not be so easy to get another job after.
Best move would be to line up another job to start like a month before the review, and never reach the review stage. Even if discovered, most people that would “know” wouldn’t really be driven to report anything if they’re leaving anyway. The “not my problem, and this will make it my problem” attitude in big companies is real.
Yes, but it seems the French language pack is a dependency for pretty much everything else! Who knew?
This does tally with what I’ve been hearing. Where I’m at there have been a few hires straight into senior. I’ve not heard of an official junior freeze, but at the same time it’s been a long time since I’ve seen a new one.
The problem, as I commented before, is that if we no longer bring in junior devs to gain this kind of experience, we lose the flow of junior -> senior. But in most places, the people making the decisions won’t consider anything beyond the end of the current financial year.
I don’t think developers are doing it. It’s managers making this kind of decision I’d say.
I’ve been told about companies in the same field as mine with a hiring freeze on juniors. So it’s kinda second-hand.
I think it goes further than that. There’s two things happening with regard to AI and software development.
1: Stack Overflow has become less common as a resource for solving problems. As you say, this dries up the input LLMs need for future problems to solve.
2: Junior developers are being hired less because of AI. I assume the idea is that seniors will use AI the same way they would usually use juniors. Except they’ve done what business always does: not think one bit about the future. Today’s senior developers are yesterday’s junior developers.
The combination of the AI performance drop from point 1 and the lack of new developers from point 2 makes for a potentially bad future for the profession.
Specifically answering this question: it works transparently alongside IPv4. Organisations running servers can run both IPv4 and IPv6 with very little effort on their part, and ISPs can deploy it, and router makers can include support, with only a reasonable amount of effort.
As users AND servers get IPv6 addresses, they will just be used in the background. At some point there will be so much IPv6 adoption that IPv4 can be turned off. There is a transition mechanism called “6to4”, but dual stack has (I think rightly) become the main way people run both.
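As a minimal sketch of what “transparent” means in practice (using Python’s standard library; example.com just stands in for any dual-stacked site), a single name lookup returns both address families, and the OS simply prefers IPv6 when it’s usable:

```python
# Dual stack from an application's point of view: one lookup returns both
# AAAA (IPv6) and A (IPv4) results; the application never has to care.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```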
In the UK I think at least half the ISPs provide IPv6 now, and I think Europe is the same or better. But we’re still far from replacing IPv4, and I wonder when, if ever, that might happen.
I’m going to just answer each point in turn. Maybe it’s useful. I don’t know.
It offers a shitload of IP addresses
It does. Generally, ISPs assign each user at least a /64, which is the equivalent of the entire IPv4 address space multiplied by itself. There’s a lot of address space to go around.
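Back-of-the-envelope arithmetic for that claim (just numbers, nothing IPv6-specific in the code):

```python
# A standard /64 (commonly delegated per user/subnet) squares the IPv4 space.
ipv4_total = 2**32          # every possible IPv4 address
one_ipv6_subnet = 2**64     # addresses in a single /64

print(one_ipv6_subnet == ipv4_total ** 2)            # True
print(f"{one_ipv6_subnet:,} addresses in one /64")   # 18,446,744,073,709,551,616
```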
They look really complicated
This is true, but you rarely need to remember a full IP address. Most resources you access via DNS. If you have servers on your own network, you’ll probably need to remember your own prefix (the first three or four groups of four hex digits), and the servers you want to access would likely be ::1, ::2 etc. within that allocation, so you’d learn them. Also, most routers allow local DNS entries, and there are other things that help here.
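For instance, with Python’s stdlib ipaddress module (2001:db8::/64 is the documentation prefix, standing in for whatever your ISP delegates to you), the memorable part really is just the prefix plus a tiny suffix:

```python
# Memorise the prefix once; your servers are short, predictable offsets in it.
import ipaddress

prefix = ipaddress.IPv6Network("2001:db8:abcd:1234::/64")

nas = prefix.network_address + 1   # ...::1
web = prefix.network_address + 2   # ...::2

print(nas)  # 2001:db8:abcd:1234::1
print(web)  # 2001:db8:abcd:1234::2
```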
Something about every device in your local network being visible from everywhere?
This is a concern, but that’s mostly because router makers often configure their routers badly now. The correct way to configure a router is to allow outgoing/established connections by default and block all incoming (until you specifically open a port). Once this is done, the security is very similar to NAT.
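To make the “allow outgoing/established, block unsolicited incoming” default concrete, here’s a toy model of the logic (real routers do this with netfilter/nftables connection tracking; the addresses below are documentation examples):

```python
# Toy stateful filter: outbound traffic always passes and creates state;
# inbound traffic passes only if it matches that state or an opened port.
established = set()   # (local_addr, local_port, remote_addr, remote_port)
open_ports = {443}    # ports the user has explicitly opened

def outbound(local, lport, remote, rport):
    established.add((local, lport, remote, rport))
    return True   # outgoing is allowed by default

def inbound(remote, rport, local, lport):
    return (local, lport, remote, rport) in established or lport in open_ports

outbound("2001:db8::10", 50000, "2001:db8:ffff::1", 443)
print(inbound("2001:db8:ffff::1", 443, "2001:db8::10", 50000))  # True: a reply
print(inbound("2001:db8:eeee::9", 12345, "2001:db8::10", 22))   # False: unsolicited
```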
Some claim it obsoletes NAT?
Yes. NAT was created to make a small address space work in an era of multiple internet consumers behind a single connection. But when each device can get a routable IPv6 address, NAT is not needed. However, the security I talk about above IS essential on consumer routers.
Now I’ll elaborate on some of the features of IPv6 (a lot of which just aren’t being used when they could be).
IPv6 privacy extensions (RFC 4941)
This allows normal client machines (the kind that would usually sit entirely behind NAT) to have a similar level of security and privacy to what NAT provides. One concern with plain IPv6 and a fixed allocation is that people could identify a specific machine from web logs etc., which could be used against you in privacy terms. This extension ensures that you have multiple active IPv6 addresses. One might be the address you have some ports open on; that address will not be used for outgoing connections. Instead, a random address is used for outgoing connections; it has no ports open and changes frequently. I think on Windows this is enabled by default (when you look in ipconfig you will often see multiple “temporary addresses”).
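A minimal sketch of the idea (the prefix is the documentation one, and on a real host the OS does all of this for you automatically; you never generate these by hand):

```python
# RFC 4941 in spirit: one stable address for inbound services, plus a
# temporary address with a random 64-bit suffix for outgoing connections,
# regenerated periodically.
import ipaddress
import secrets

prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

stable = prefix.network_address + 1                        # the one you open ports on
temporary = prefix.network_address + secrets.randbits(64)  # random, outgoing-only

print("stable:   ", stable)
print("temporary:", temporary)  # changes frequently, never listens
```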
Harder to portscan
Currently it doesn’t take THAT long to portscan the whole IPv4 address space. And because almost every public address has multiple hosts behind it, there’s a good chance ports will be open on a lot of the IPs scanned.
With IPv6 the public address space is huge. Normal machines have their addresses assigned randomly within a huge per-user allocation, and every IP would still need every port scanned. This makes active port scanning much harder. The privacy extensions above also mean that passive port scanning (scanning IPs found in web logs, for example) is harder too.
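Rough numbers behind that (my assumption of a million probes per second is just to make the comparison concrete):

```python
# One port across all of IPv4 vs one port across a single /64.
probes_per_second = 1_000_000

ipv4_space = 2**32
one_slash_64 = 2**64

print(ipv4_space / probes_per_second / 3600, "hours for all of IPv4")
print(one_slash_64 / probes_per_second / (3600 * 24 * 365.25), "years for one /64")
# ~1.2 hours versus roughly 585,000 years: brute-force scanning stops working.
```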
User experience
Provided consumer routers are configured well from the factory and ISPs make sensible decisions about allocating address space, the user benefits from the advantages without even knowing they’re using IPv6 in many cases. When you go to Google/Facebook/YouTube etc., you will be on IPv6 and not even know it.
We used to have it terrible in the UK in the 90s and 2000s. Basic ADSL was trialled in 1999 and became available in maybe late 2000, I think. But it stagnated for a while.
When it comes to fibre, though, interesting things are happening. As well as the “national” (although privatised) telco installing it, there are many independent companies fitting it. Where I live I have the option of the official telco (1000/110 Mbps down/up) and a private company (1000/1000 Mbps). Of course I chose the latter :P
Some people have 3 or more options.
Yeah, in the future there may well be a handful of overall winners that vacuum up the losers and carve up the territory. But right now it’s a good time for normal people… at least for internet access.
EDIT: Just to add, some are ISPs and will only sell their own product. Some are wholesale, so even if they’re the only company in your area, you can often buy from multiple ISPs through them.
This one threw me off. I’d muted Discord by mistake. Weirdly, voice still worked. I spent ages checking and double-checking settings to see why I wasn’t getting notification sounds and the PTT sound, dismissing any mute possibility because voice was working.
When I found it was this…
I’m on a pretty old version of mbin (I have some modifications I made for federation issues back when it was kbin). I need to spend a weekend to pilot an upgrade and make sure I can run it safely live.
But even then it’s better in some ways already, and I never feel like I’m missing something from Lemmy. I think just calling the whole thing Lemmy puts off people who see things through a political lens.
Pretty sure that’s only true of Lemmy. There are other threadiverse apps. The mistake is people calling the whole threadiverse Lemmy.
These days, with UEFI, it’s much less likely to break things. Worst case, you boot from a live USB, chroot in, and rerun GRUB/your bootloader installer. Often, even if Windows puts its own bootloader first, you can choose your bootloader from the UEFI/BIOS boot menu and just rerun the bootloader installer.
It used to be a lot worse.
20 years into the Trump dynasty dictatorship, they will still be saying “thanks, Biden” about every financial inconvenience.
All of this is from a layman with only a basic understanding.
So, on the one hand, our galaxy alone contains between 100 and 400 billion stars (Wikipedia). A lot of those have no planets, but of course a lot have many more than our system does, so there are at least as many planets as stars. There’s a good chance there’s more than one planet capable of supporting life among that number.
In fact, as we improve our ability to observe our galaxy, we’re able to verify more and more viable planets, including a reasonable number similar to our own.
This means there’s a reasonable chance that somewhere, life has already evolved to a level similar to or beyond ours.
But this certainly doesn’t mean there’s any reason to expect visitors. Even if they could travel at the speed of light, the galaxy is roughly 100,000 light years across, so it would still take thousands of years for most of them to reach us, assuming they even chose to come. From where they are, they wouldn’t be able to make out our radio signals, nor likely any other signs of life, so we’d be just one of many “potentially life-bearing” planets.
So, just my opinion: I think the chance of life being out there is reasonably high; the chance of our actually being visited (assuming it holds true that we cannot travel faster than light) is probably very, very low.