• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Why has the world gotten both “more intelligent” and yet fundamentally more stupid at the same time? Serious question.

    Because it’s not actually always true that garbage in = garbage out. DeepMind’s AlphaZero trained itself up from a very weak chess player into one significantly stronger than any human has ever been, simply by playing chess games against itself and updating its parameters for evaluating which positions were better than which. All the system needed was the rule set for chess, a way to define wins, losses, and draws, and a training procedure that optimized for winning over drawing, and drawing over losing if a win was no longer available.
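
    To make that concrete, here’s a toy sketch of learning from pure self-play. To be clear, this is nothing like AlphaZero’s actual machinery (no neural network, no tree search); it’s a tabular learner for the simple game Nim-21, just to illustrate the principle: start from random play, keep only the win/loss signal, and the system teaches itself strong play with zero human examples.

    ```python
    import random

    # Nim-21: players alternate removing 1-3 stones; whoever takes the
    # last stone wins. V[s] estimates the chance that the player to move
    # wins from state s; it starts out knowing nothing (0.5 everywhere).
    V = {s: 0.5 for s in range(1, 22)}
    ALPHA, EPS = 0.1, 0.2  # learning rate, exploration rate

    def best_move(s):
        # Pick the move that leaves the opponent in the worst position.
        moves = [m for m in (1, 2, 3) if m <= s]
        return min(moves, key=lambda m: V.get(s - m, 0.0))  # 0 stones left = opponent lost

    for _ in range(20000):
        s, history = 21, []
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            m = random.choice(moves) if random.random() < EPS else best_move(s)
            history.append(s)
            s -= m
        # Whoever moved last won. Walk back, flipping perspective each ply.
        result = 1.0
        for s in reversed(history):
            V[s] += ALPHA * (result - V[s])
            result = 1.0 - result

    # Game theory says multiples of 4 are losing positions for the player
    # to move; V learns to score them far below the rest, on its own.
    print({s: round(V[s], 2) for s in range(1, 9)})
    ```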

    Face swaps and deepfakes in general relied on adversarial training as well: one network learned to produce fakes, a second learned to detect them, and each one’s progress forced the other to improve.
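
    A minimal version of that adversarial loop, for the curious (this sketch assumes PyTorch, and the tiny networks and 1-D “real” data are mine, purely for illustration):

    ```python
    import torch
    import torch.nn as nn

    # G fakes samples from a 1-D Gaussian; D tries to tell real from fake.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(5000):
        real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data: N(3, 0.5)
        fake = G(torch.randn(64, 8))               # G's current forgeries

        # D improves at detection: score real as 1, fake as 0.
        opt_d.zero_grad()
        loss_d = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        loss_d.backward()
        opt_d.step()

        # G improves at tricking: make D score its fakes as 1.
        opt_g.zero_grad()
        loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())   # drifts toward ~3.0
    ```

    The reason this works where the generative-AI version below doesn’t: “good” is perfectly defined here. A fake is good exactly when it fools the current detector, so there’s always a concrete signal to optimize against.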

    Some tech guys thought they could bring that adversarial dynamic for improving models over to generative AI, where models would train on generated outputs and improve beyond the original inputs. But the problem is that there’s no crisp, checkable definition of a “good” or “bad” output, so the feedback loop poisons itself once it starts optimizing for criteria that drift from what humans would actually consider good or bad.

    So it’s less like the AI technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing its own content informed by that recommendation engine. When you passively observe trends and connections, you might be able to model those trends. But once you start feeding back into the data by producing shows and movies that you predict will do well, the feedback loop gets unpredictable and stops working well, because you’re overfitting the training data with new stuff your model merely thinks might be “good.”
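
    Here’s a toy numerical version of that self-poisoning loop (my illustration, not anyone’s actual pipeline): fit a model to some data, sample the next “dataset” from the model itself, refit, repeat. With finite samples the estimation noise compounds, so the fitted distribution drifts and its spread tends to collapse, losing the tails of the original data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(0.0, 1.0, 20)          # generation 0: "real" data

    for gen in range(301):
        mu, sigma = data.mean(), data.std()  # "train" on the current data
        if gen % 50 == 0:
            print(f"gen {gen:3d}: mean={mu:+.3f} std={sigma:.3f}")
        data = rng.normal(mu, sigma, 20)     # next gen trains on model output
    ```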


  • Lol, they will the second they get hit with that “you need to get parental consent” screen; that’s how it happened to us all.

    The normie services are increasingly tied to real-world identities, through verification methods that involve phone numbers and often government-issued IDs. As the regulatory requirements tighten on these services, it’ll be increasingly difficult to create anonymous/alt accounts. Just because it was easy to anonymously create a new Gmail or Instagram account 10 years ago doesn’t mean it’s easy today. It’s a common complaint that things like an Oculus require a Meta account, which in turn requires some tedious verification.

    I don’t think it’ll ever be perfect, but it will probably be enough to severely dampen the network effects of these types of services (and then a feedback loop, where the difficult-to-use-as-a-teen services have too much friction and aren’t being used, so nobody else feels it’s worth the effort to set one up). Especially if teens’ parent-supervised accounts are locked to their devices, in an increasingly cloud-reliant hardware world.


  • If you’re 25 now, you were 15 during the early wild west days of smartphone adoption, while we as a society were just figuring that stuff out.

    Since that time, the major tech companies that control a big chunk of our digital identities have made big moves toward recording family relationships between accounts. I’m a parent in a mixed Android/iOS family, and it’s clear that Apple and Google have it figured out pretty well: child accounts linked to dates of birth that automatically change permissions and parental controls over time, based on age (including severing the parental controls when they turn 18). Some of it is obvious, like billing controls (nobody wants their teen running up hundreds of dollars in microtransactions), app controls, screen time/app time monitoring, location sharing, password resets, etc. Some of it is convenience, like shared media accounts/subscriptions by household (different Apple TV+ profiles, but all on the same paid subscription).

    I haven’t made child accounts for my kids on Meta. But I probably will whenever they’re old enough to use chat (and they’ll want WhatsApp accounts). Still, looking over the parent/child settings on Facebook accounts, it’ll probably be pretty straightforward to create accounts for them, link a parent/child relationship, and then have another dashboard to manage as a parent. Especially if something like Oculus takes off and that’s yet another account to deal with paid apps or subscriptions.

    There might even be network effects, where people who have child accounts are limited in the adult accounts they can interact with, and the social circle’s equilibrium naturally tends towards all child accounts (or the opposite, where everyone gets themselves an adult account).

    The fact is, many of the digital natives of Gen Alpha aren’t actually going to be as tech savvy as their parents as they dip their toes into the world of the internet, because they won’t need to figure stuff out on their own to the same degree.




  • display - USB-C at work, HDMI (through USB-C dock) at home

    Obviously you can’t use an HDMI port that you don’t have, but I gotta ask: if you had one of the newer MBPs with built-in HDMI, would you be using that HDMI port? Because it sounds like you wouldn’t, and that you’d still rely on the USB-C dock to do everything.

    And that’s been my position this whole thread. The return of the HDMI port on the MBP was greeted with lots of fanfare, but I don’t actually know anyone who switched back to HDMI.


  • Yeah, I’m not going to throw out perfectly good hardware just to unify cables somewhat.

    I was referring to replacing HDMI 2.0 equipment with 2.1 equipment - I’m not seeing an advantage to choosing HDMI 2.1 over Thunderbolt. And then there’s the support hell of intermingled HDMI 2.0 and 2.1 hardware: cables, ports, dongles, and adapters.

    Either way, I’m still not sold on the idea that direct HDMI use is so ubiquitous that it warrants being built into a non-gaming laptop that already has Thunderbolt and DP (and USB-PD) support through the preexisting USB-C ports.

    Thunderbolt only works for workstations if the monitor supports it

    Even if you’re driving multiple monitors over HDMI or DVI or DP or VGA or whatever, the dock that actually connects to the laptop is best served by Thunderbolt over USB-C, since we’d expect the monitors and docking station (and power cords and an external keyboard/mouse and maybe even ethernet) to all remain stationary. That particular link in the chain is better served by a single Thunderbolt connection than by hooking up separate cables for display signal, other data, and power. And this tech is older than HDMI 2.1!

    So I’m not seeing that type of HDMI use as a significant percentage of users - not enough to justify including the port on literally every 14" or 16" MacBook Pro with their integrated GPUs. At least not in workplaces.


  • You use HDMI for all those use cases? Thunderbolt seems like a much better docking option for workstations, and DisplayPort is generally better for computer monitors and the resolution/refresh rates useful for that kind of work. The broad base of cables and HDMI displays out there supports HDMI 2.0, which caps out at 4K60. By the time HDMI 2.1 hit the market, Thunderbolt and DisplayPort Alt Mode had been out for a few years, so it would’ve made more sense to just upgrade to Thunderbolt rather than buying an all-new HDMI lineup.
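
    For the 4K60 ceiling, the back-of-the-envelope numbers (standard CTA-861 4K60 timing; my arithmetic, so double-check before buying cables):

    ```python
    # HDMI 2.0 tops out at 18 Gbps of TMDS bandwidth (3 lanes x 6 Gbps).
    # 4K60 at 8-bit RGB consumes nearly all of it, so 4K120, 8K, or
    # 10-bit 4:4:4 HDR needs HDMI 2.1 (or DisplayPort/Thunderbolt).
    total_pixels = 4400 * 2250           # 3840x2160 active + blanking
    pixel_clock = total_pixels * 60      # = 594 MHz
    tmds_bits = pixel_clock * 3 * 10     # 3 channels, 10 bits per 8-bit symbol
    print(f"{tmds_bits / 1e9:.2f} of HDMI 2.0's 18 Gbps")  # ~17.82
    ```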



  • Now, I don’t know if it’s in USBC cables

    It’s not. Apple specifically follows the USB-PD standard, and went a long way in getting all the other competing standards (Qualcomm’s Quick Charge, Samsung Adaptive Fast Charge) to become compatible with USB-PD. Now, pretty much every USB-C to USB-C cable supports USB-PD.
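
    For reference, these are the fixed-voltage power rules USB-PD negotiates over the cable’s CC wire (my summary of the spec’s classic levels, up to the 100 W ceiling):

    ```python
    # USB-PD "fixed supply" levels: the charger advertises voltages,
    # the device requests one. Max current per voltage level:
    profiles = {5: 3.0, 9: 3.0, 15: 3.0, 20: 5.0}   # volts -> amps
    for volts, amps in profiles.items():
        print(f"{volts:>2} V @ {amps} A = {volts * amps:.0f} W")
    ```

    The 100 W tier is also why e-marked cables exist: the cable itself has to tell the charger it can handle 5 A.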

    Also, a shout-out to Google engineer Benson Leung, who went on a spree of testing cables and wall adapters for standards compliance after a noncompliant cable fried his Chromebook Pixel. The work he did between 2016 and 2018 went a long way toward getting bad cables taken off the market.