Well, it can’t run multithreaded jobs at full speed.
Exhibit A: The latest AMD patch for multicore scheduling across NUMA.
improvements to quality of life;
Native Americans: “Beg your pardon?”
Meh, not nearly as configurable as linux, some things you can’t change.
NFS beats SMB into a cocked hat.
You start spending more time in a terminal on Linux because you’re not just dealing with your own machine; you’re constantly connecting to other machines and using their resources to get things done. Yes, a terminal on Windows makes a difference, and I ran Cygwin for a while, but it’s still not clean.
Installing software sucks: you either have to hunt down installers or rely on the small amount of stuff that goes through a store. Not that building from source is much better, but on Linux most things come from distro repos now.
Once I got LXC containers though (actually, once I tried FreeBSD), I lost my Windows tolerance. Being able to construct an effectively new “OS” with a few keystrokes is incredible: install programs there, even graphical ones, with no trace on your main system. There’s just no answer to that.
Also plasma is an awesome DE.
Let’s be fair, MS was vastly outrunning Intel for a long time; it’s only slowed down recently. The problem now isn’t single-thread bloat so much as an almost total lack of multicore scaling in everything except some games, and even then Windows fights as hard as it possibly can to stop you, as AMD just proved yet again.
Yes, mostly the applications aren’t there, if you need real cpu power (or gpu for that matter), you’re running linux or on the cloud.
But we are reaching a point where the desktop has to either be relegated to the level of embedded terminal (i.e. an ugly tablet, before it’s dropped altogether) or make the leap to genuine compute tool, and I fear we’re going to see the former.
Single-thread is really hard: we’ve basically saturated our L1 working-set size, and adding more doesn’t help much. Extending the vector length just makes physical design harder, which reduces clock speed. The branch predictors are already pretty good, and Apple finally kicked everyone up the ass to increase out-of-order depth like they should have.
Also, software still kind of sucks. It’s better than it was, but we need to improve it; the bloat is only barely being offset by silicon gains.
Flash was the epochal change. Maybe some new form of hybrid storage comes along, but that doesn’t seem likely right now. Apple might do it to cut costs while preserving performance; actually, yes, I can see them trying to have their cake and eat it too.
Otherwise I don’t know. We need a better way to deal with GPUs; nothing else can move the needle except true heterogeneous core clusters, but I haven’t been able to sell that to anyone so far. They all think it’s a great idea that someone else should do.
Even beyond that, short of something like blender, Windows just can’t handle that kind of horsepower, it’s not designed for it and the UI bogs down fairly fast.
Linux, otoh, I find can eat as much CPU as you throw at it, but many graphics applications start bogging down the X server for me.
So I have a windows machine with the best GPU but passable cpu and a decent workstation gpu with insane cpu power on linux.
I’m considering it, but only just, my 5800x is good enough for most gaming, which is GPU bound anyway, and I run a dual xeon rig for my workstation.
Zen 2 through 4 took care of a lot of the demand; we all have 8-16 cores now. What else could they give us?
They probably moved it to somewhere under /usr or /var/lib.
They’re working on HK.
Xi thought he had HK under control and could move directly on to Taiwan. His navy explained to him they were decades away from actually trying to invade.
It’s their worst-case scenario: an educated, wealthy population with good relations everywhere, one that didn’t exterminate 50m of its own citizens through sheer stupidity.
You’re delusional. They’ll defend it by saying ‘the Chinese Communist Party has lifted blah blah blah people out of poverty!!!’, notwithstanding that the party is the major reason they were in poverty in the first place, that Taiwan had become one of the richest and freest countries decades earlier, and that Taiwan didn’t murder 50m+ of its own citizens for lulz.
So long as the CCP isn’t murdering them at that exact moment, they’re 100% on board, because their whole worldview is built around everything the West does being evil (closer to 40/60 imho, but that still beats the 80/20 of the CCP).
Meh, it’s the gaming PC of Theseus: you replace the mobo less often than a console generation, more often if you want.
I think that’s overkill, but a Steam Deck is on par with a PS5, but portable, and for a cheap dock and a ps5 controller you can play it like a console.
Linux has made such leaps, though: set up a container with Lutris and Vulkan and it can handle most gaming short of modern AAA titles.
The real point is that you can upgrade it incrementally; you don’t have to throw it away, and upgrading lets you keep playing all your old games from generation to generation without rebuying them for the latest gen.
Nobody uses that. They use the spec number because that’s what they’ve been taught, and they identify with it more than the incredibly stupid ‘full/high/super/duper/ultramegahyperspeed’ convention, which the idiots at the USB-IF decided to break yet again with 3.2.
Literally everybody on the planet agrees the system is moronic; you’re literally the only person who dissents. Congratulations on that.
I’m talking about using the standard traditionally to denote the performance of the connection.
You don’t go around talking about your “USB 3.0 device” that runs at 480 Mbps unless you’re trying to be a massive dickhole.
That’s what I’m talking about.
They’re bad because manufacturers want to pass off their USB 2.0 gear as “USB 3.0 compliant”, which it technically is, and their USB 3.0 gear as “USB 3.2”, because 3.2 Gen 1x1 is also 5 Gbps.
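To make the renaming mess concrete, here’s a rough sketch of how the same link speeds have collected multiple legitimate spec names across revisions (the names and speeds reflect the USB-IF renumbering as I understand it; treat the table as illustrative, not exhaustive):

```python
# Rough map of the USB renaming mess: the same physical link speed has
# carried several different spec names across revisions, which is what
# lets vendors "upgrade" a product's branding without changing hardware.
SPEC_NAMES = {
    480:   ["USB 2.0 (Hi-Speed)"],
    5000:  ["USB 3.0", "USB 3.1 Gen 1", "USB 3.2 Gen 1x1"],
    10000: ["USB 3.1 Gen 2", "USB 3.2 Gen 2x1"],
    20000: ["USB 3.2 Gen 2x2"],
}

def names_for(mbps):
    """Return every spec name that legitimately describes this speed."""
    return SPEC_NAMES.get(mbps, [])

# A vendor can truthfully put "USB 3.2" branding on a 5 Gbps port:
print(names_for(5000))
```

The point the naming fight obscures: quoting the raw speed (5 Gbps, 10 Gbps) is unambiguous, while every spec-name answer above is technically true for the same port.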
Also, the whole alternate mode is awesome, but cheap hub chips don’t bother trying to support it, and about the only ones that do are laptop ports, so manufacturers can save $0.40 on a separate HDMI port.
And don’t get me started on all the USB-C chargers that only put out 1.5 A because it’s just a plain 7805 on the back end.
2 things:
This has never been unknown; it’s one of the fundamental attack vectors against Tor. The IM protocol seemed to make correlation easier due to its real-time nature.
They added a protection layer called Vanguards, which pins the internal relays of a circuit, to reduce the likelihood that a small number of compromised relays could be used to track it. This seems like it would help by reducing the chance of being sampled by hostile relays.
Other state actors might try, but they’re not in the same league in terms of resources, IIRC there are a LOT of exit nodes in Virginia.
tl;dr - The protocol is mostly safe. It doesn’t matter much that people try to compromise it; the nature of Tor means multiple parties trying to compromise nodes actually make the network more secure, since each faction hides a portion of the traffic from the others, and only by sharing observations could the network be truly broken. Good luck with that.
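For intuition on why a single adversary struggles: in the classic end-to-end correlation model, an attacker has to observe both the entry (guard) and exit of the same circuit. A back-of-envelope sketch, using made-up fractions rather than real network statistics:

```python
# Back-of-envelope model of end-to-end correlation against Tor.
# If an adversary controls a fraction `g` of guard bandwidth and `e` of
# exit bandwidth, and guard/exit are chosen roughly independently by
# bandwidth weight, the per-circuit compromise probability is about g*e.
def correlation_probability(guard_fraction, exit_fraction):
    return guard_fraction * exit_fraction

# One adversary with 10% of guards and 20% of exits: ~2% of circuits.
single = correlation_probability(0.10, 0.20)

# Two rival adversaries each holding half of those relays see far less
# individually (~0.5% each), and their observations don't combine
# unless they cooperate and share data.
rival = correlation_probability(0.05, 0.10)

print(single, rival)
```

This is why the comment’s point holds: fragmented adversaries each see only the product of their own small fractions, and the numbers only get dangerous if the factions pool what they observe.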