Both Intel and AMD invest a lot into open-source drivers, firmware, and userspace applications, but also, due to the nature of x86_64’s UEFI, a lot of the proprietary crap is loaded from ROM on the motherboard and as microcode.
I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a heavily patched, old, orphaned kernel, often with drivers that are provided only as precompiled binaries, preventing you from updating the kernel yourself.
If you want that source code, you need to also pay a lot of money yearly to be a Qualcomm partner and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.
Many other manufacturers do this as well, but few are as bad. The environment is getting better, but it seems to be a feature that many large manufacturers feel they can live without.
I’ve seen some optometry equipment running RHEL
About a year ago I moved to Hyprland & Wayfire for my NVIDIA & Intel boxes. Moved NVIDIA to Radeon a few months back and had mixed results.
Recently tried Plasma 6 for experimental HDR and am impressed.
Yeah in the short term there are going to be a lot of lose/lose scenarios for them, but this is the stupid prize for playing stupid games with what they released.
I hope they stick it out; games like No Man’s Sky show both that a developer who cares enough to try can earn back the trust of a player base, and that the process to do so requires a lot of work.
I recently bought a 7800 XT for the same reason, NVIDIA drivers giving me trouble in games and generally making it harder to maintain my system. Unfortunately I ran headfirst into the 6.6 reset bug that made general usage an absolute nightmare.
Open source drivers are still miles ahead of NVIDIA’s binary blob if only because I could shift to 6.7 when it released to fix it, but I guess GPU drivers are always going to be GPU drivers.
Yes, but also from an implementation perspective: if I’m making code that might kill somebody if it fails, I want it to be as deterministic and simple as possible. Under no circumstances do I want it:
Typically no, the top two PCIe x16 slots are normally wired directly to the CPU, though when both are populated they will each drop down to x8 connectivity.
Any PCIe x4 or x1 slots hang off the chipset, as do some I/O and any third or fourth x16 slots.
So yes, motherboards typically do implement more IO connectivity than can be used simultaneously, though they will try to avoid disabling USB ports or dropping their speed since regular customers will not understand why.
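If you want to see what your own board negotiated, Linux exposes this in sysfs. A minimal sketch (the sysfs paths are the standard `current_link_width`/`max_link_width` attributes; any device that doesn't expose them is just skipped):

```python
import glob
import os

def pcie_link_widths(sysfs_root="/sys/bus/pci/devices"):
    """Return {pci_address: (current_width, max_width)} for PCIe devices.

    Reads the standard Linux sysfs link-width attributes; devices
    without them (or non-Linux systems) are silently skipped.
    """
    widths = {}
    for dev in glob.glob(os.path.join(sysfs_root, "*")):
        try:
            with open(os.path.join(dev, "current_link_width")) as f_cur, \
                 open(os.path.join(dev, "max_link_width")) as f_max:
                widths[os.path.basename(dev)] = (
                    int(f_cur.read().strip()),
                    int(f_max.read().strip()),
                )
        except (OSError, ValueError):
            continue  # attribute missing, unreadable, or non-numeric
    return widths

if __name__ == "__main__":
    for addr, (cur, mx) in sorted(pcie_link_widths().items()):
        note = " (running below max)" if cur < mx else ""
        print(f"{addr}: x{cur} of x{mx}{note}")
```

A GPU in the top slot showing x8 of x16 while a second card is installed is exactly the bifurcation behaviour described above, not a fault.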
Most firewalls are at their safest when you first get them, i.e. by default they block everything coming in. As you start doing port forwarding and the like, you make the network selectively less secure; that's when you have to pay attention.
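The idea fits in a few lines. This is a toy model, not any real firewall's API; the class and rule names are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Firewall:
    """Toy default-deny firewall: inbound traffic is dropped unless a
    port-forward rule explicitly allows it. Illustrative only."""
    forwards: dict = field(default_factory=dict)  # {(proto, port): internal_host}

    def add_forward(self, proto: str, port: int, internal_host: str) -> None:
        # Every forward widens the attack surface; this is the step
        # where you start paying attention.
        self.forwards[(proto, port)] = internal_host

    def route_inbound(self, proto: str, port: int):
        """Return the internal destination, or None (dropped) by default."""
        return self.forwards.get((proto, port))

fw = Firewall()                          # fresh device: everything inbound dropped
assert fw.route_inbound("tcp", 443) is None
fw.add_forward("tcp", 443, "10.0.0.5")   # selectively less secure from here on
assert fw.route_inbound("tcp", 443) == "10.0.0.5"
assert fw.route_inbound("udp", 443) is None
```

The point of the default-deny shape is that security erodes one explicit rule at a time, so every rule you add is a decision you can audit.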
I had an EdgeRouter X for years before I started my job. They are solid devices, and I'd definitely put them above most consumer routers.
Because they only charge for the hardware, they will eventually run into the same disincentive to provide consistent timely updates. If you do buy an Ubiquiti or similar enthusiast brand, do still keep an eye out for the CVEs that don't get patched.
I build Linux routers for my day job. Some advice:
your firewall should be an appliance first and foremost; you apply appropriate settings and then, other than periodic updates, you leave it TF alone. If your firewall is on a machine that you regularly modify, you will one day change your firewall settings unknowingly. Put all your other devices behind said firewall appliance. A physical device is best, since correctly forwarding everything to your firewall falls under the "will one day unknowingly modify" category.
use open source firewall & routing software such as OpenWrt and pfSense. Any commercial router that keeps up to date and patches security vulnerabilities, you cannot afford.
The difficulty is that a VPN isn't just a product like ProtonVPN, it's a huge family of software and protocols.
You can block vpn.protonvpn.com, but since most operating systems come with VPN functionality out of the box, you'd have to start inspecting all traffic (not just DNS lookups) and blocking ALL packets that might be VPN traffic, all without disrupting regular non-VPN traffic.
TL;DR: it's easy to prevent unmotivated users from downloading a VPN app. It's practically impossible to block a motivated user from using a VPN, and they're the users you particularly care about.
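A tiny sketch of why the DNS-level block fails against a motivated user (the blocklist entries and the endpoint address are made up for the example):

```python
# Toy illustration of why DNS-based VPN blocking is easy to bypass.
BLOCKED_SUFFIXES = ("protonvpn.com", "nordvpn.com")  # made-up blocklist

def dns_blocked(hostname: str) -> bool:
    """True if a DNS lookup for this name would be refused."""
    return any(
        hostname == s or hostname.endswith("." + s) for s in BLOCKED_SUFFIXES
    )

# The unmotivated user is stopped at the lookup step:
assert dns_blocked("vpn.protonvpn.com")

# The motivated user never performs a lookup: a WireGuard peer can be
# configured with a raw IP, so the DNS filter sees nothing to block.
wireguard_endpoint = "203.0.113.7:51820"  # IP literal, no DNS involved
assert not dns_blocked(wireguard_endpoint.split(":")[0])
```

To catch the second case you'd have to classify the packets themselves, and protocols like WireGuard are deliberately hard to fingerprint, which is why the problem is practically unsolvable at the network edge.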
What kind of idiot workplace would allow that? Perhaps if you don't assume the people you talk to are literally brain-dead, you might understand what they're saying.
That's a bad faith interpretation of "the people control the means of production".
I want you to consider the difference between the work needed to complete a task, and the work needed to manage a workplace: for one of those tasks, only the experts in that task can meaningfully contribute to the outcome, whereas for the other, everybody who is part of the workplace has meaningful input.
I don't know about your experience, but everywhere I've worked there have been people "on the ground" who get to see the inefficiencies in the logistics of their day to day jobs; in a good job a manager will listen and implement changes, but why should the workers be beholden to this middleman who doesn't know how the job works?
I've also had plenty of roles where management have been "telling me where to cut".
Sure, but that's not necessarily a bad thing; if the Linux version were missing useful output, that would be bad, but if the DX-to-Vulkan translation ironed out a performance regression, the scheduler works better in this scenario, or filesystem access had issues with NTFS, that could also cause performance differences in Linux's favour.
It opens the door to more manufacturers since there are no ISA licence fees. While the AMD/Intel duopoly is fairly competitive at the moment, it really doesn’t have to be. Just think back to how bad it was from the late 2000s to 2015.
I imagine a plethora of core designers, SoC vendors and platform creators filling their own niches: lowest cost, lowest power, HW accelerators, highest core count, etc.
I don’t see the raw performance of AMD/Intel being surpassed soon, just because of the sheer total R&D years each has, but that doesn’t mean there aren’t other areas better suited to a different architectural approach.
I don’t see the problem, I also don’t see how this is a novel situation.
The technical merits of system level protocols only really affect the user insofar as they make it easier for userspace application writers to make their software. This is why we have the distinction, so that users never have to change the underlying software, and when they choose to it’s because everything just works.
Sure but why open their code without getting the integration benefits?
Kernel modules don’t have to be open source, provided they follow certain rules like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.
It’s not enforced so much by law as by what the FSF and the Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when they’re a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.