
  • I'd say it's more convenience than elitism.

    I'm in BTN and it's the only indexer I use for my Sonarr instance because it has absolutely everything. I've never not been able to find something and almost everything I download will saturate my 1.2 Gbps connection.

    For Radarr I don't have any private trackers and it takes 35 public trackers to get coverage that is almost as good. The options I'm given are way less organized and download speeds are a gamble. It's not really an issue because I rarely watch movies, but I definitely understand why private trackers are so sought after. I'll eventually try to get into some smaller ones, which tend to be pretty easy to join.
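
    As a rough illustration of the indexer sprawl, here's a Python sketch that lists what's configured in a Sonarr instance via its v3 API. The URL and API key are placeholders; the fields printed are just what I'd expect on the indexer resource.

    ```python
    import requests

    # Placeholders: point these at your own Sonarr instance.
    SONARR_URL = "http://localhost:8989"
    API_KEY = "your-api-key"  # found under Settings -> General in the UI

    resp = requests.get(
        f"{SONARR_URL}/api/v3/indexer",
        headers={"X-Api-Key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()

    # With one good private tracker this is a single entry; my Radarr
    # equivalent of this list is ~35 public trackers long.
    for indexer in resp.json():
        print(indexer["name"])
    ```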

  • Glad to help!

    The reason it works is that telecom providers use DNS-based throttling rather than deep packet inspection to selectively limit bandwidth to video sites. They keep a massive list of popular streaming sites (YouTube, Apple TV, Netflix, etc.) and throttle anything on it. When providers say "unlimited 480p video streaming" they actually have no clue what video quality you're watching. They just pick a bandwidth cap that would only let 480p video play without buffering.

    They could in theory use traffic analysis to identify video sites by their bursty bandwidth patterns (a side effect of video buffering), but that would be more difficult, more expensive, and extremely prone to false positives.
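
    To make that concrete, here's a toy Python model of the list-based approach; the domains and the cap are invented. The point is that the decision is a plain lookup on the queried name, which is also why encrypted DNS or a VPN sidesteps it entirely.

    ```python
    # Toy model of DNS-based throttling: the ISP resolver checks each
    # queried domain against a static list of known streaming services.
    THROTTLED_DOMAINS = {"youtube.com", "netflix.com", "tv.apple.com"}
    THROTTLE_KBPS = 1500  # an arbitrary cap that only lets ~480p play smoothly

    def rate_limit_for(queried_domain: str) -> int | None:
        """Return a bandwidth cap in kbps, or None to leave traffic alone."""
        parts = queried_domain.lower().rstrip(".").split(".")
        # Match the name or any parent domain (www.netflix.com -> netflix.com).
        for i in range(len(parts) - 1):
            if ".".join(parts[i:]) in THROTTLED_DOMAINS:
                return THROTTLE_KBPS
        return None

    print(rate_limit_for("www.netflix.com"))  # 1500
    print(rate_limit_for("example.org"))      # None
    ```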

  • I've been using Google's Gemini and it's pretty good at interpreting fucked up or imperfect smart home commands. For example, we have some lights named "Chrimas Lights" and it will turn them on and off when you refer to them as Christmas lights. It can also chain multiple commands without you being overly explicit. You can say "set the lights to x%, make them yellow, turn them off in an hour, and set my TV to volume x" and it'll do it no problem. The old Assistant couldn't do anything even close to this.

    It's also much faster and processes speech as fast as, if not faster than, a human can. The gap between finishing a command and it executing seems to be about a tenth of a second, which makes me wonder if it starts running inference on the back end before you've finished speaking. It's one of the best LLM integrations I've seen so far.

  • I've had ProtonVPN for 3 years now and I have 0 complaints.

    It's the only VPN I've ever used that doesn't give me less bandwidth with the VPN on than off. I regularly saturate my gigabit connection for hours at a time with 0 issues or throttling, and tunnel my torrent client's traffic through it 24/7. It also lets me watch 4K content on mobile data without throttling and circumvent my phone provider's restrictions on hotspot/tethering, which they want $30/month to remove.

    Best $5/month I've ever spent.
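
    One habit worth pairing with a 24/7 tunnel: verify traffic actually exits through the VPN before the torrent client starts. A crude Python sketch, where the ISP address is a placeholder you'd record with the VPN off:

    ```python
    import requests

    # Placeholder: your public IP with the VPN off, recorded ahead of time.
    ISP_IP = "203.0.113.7"

    def vpn_is_active() -> bool:
        # api.ipify.org returns your current public IP as plain text.
        current_ip = requests.get("https://api.ipify.org", timeout=5).text.strip()
        return current_ip != ISP_IP

    if not vpn_is_active():
        raise SystemExit("VPN is down: refusing to start the torrent client.")
    print("VPN up, safe to torrent.")
    ```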

  • I've noticed it's less common in the city and more common in rural areas. I live in SF and people here don't call them gas stations unless they have gas, but in the Central Valley this is extremely common.

    I grew up there and I always forget how much more "proper" I speak at home versus where I grew up. My partner often struggles to understand me when I slip back into it while talking with my family. Gas station is just one of many overly generic terms. Another is "Vallarta," which doesn't necessarily mean the Vallarta grocery chain, but any Mexican grocery store, usually one selling produce and with a meat counter.

  • Gas station is a somewhat colloquial equivalent of bodega/corner store in the US. Corner stores without gas pumps will often still be referred to as gas stations. Sometimes they're also called convenience stores.

  • Auto save with Google Docs-style snapshots has so little overhead I'd hardly consider it a trade-off. We have insane amounts of disk space and extremely reliable non-volatile storage. The only argument against it I can think of is confidential data you never want to exist outside of volatile memory.

    Every modern word processor has auto save; it kinda blows my mind that LibreOffice doesn't do this.
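
    To gesture at how little overhead this is: snapshot-style auto save is essentially "copy the file with a timestamp on every save, prune the oldest." A toy Python sketch with invented paths and retention:

    ```python
    import shutil
    import time
    from pathlib import Path

    DOCUMENT = Path("report.odt")     # hypothetical working file
    SNAPSHOT_DIR = Path(".snapshots") # hidden folder next to it
    KEEP = 50                         # retain the 50 most recent snapshots

    def autosave_snapshot() -> None:
        """Copy the current document into a timestamped snapshot."""
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(DOCUMENT, SNAPSHOT_DIR / f"{DOCUMENT.stem}-{stamp}{DOCUMENT.suffix}")
        # Prune the oldest snapshots beyond the retention limit
        # (the timestamp format sorts chronologically by name).
        snapshots = sorted(SNAPSHOT_DIR.glob(f"{DOCUMENT.stem}-*{DOCUMENT.suffix}"))
        for old in snapshots[:-KEEP]:
            old.unlink()

    # Call this on a timer or on every save event, e.g. every 30 seconds.
    autosave_snapshot()
    ```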

  • This is part of why getting credit cards early (if you're capable of being responsible with them) is so important. All my oldest credit lines are credit cards (I have 4 of them), so any future loan will drag the average age of my accounts down only temporarily. I'll always have those old credit lines, and my score will only go up once I pay things off completely.
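
    To make the average-age math concrete (all the ages here are made up):

    ```python
    # Made-up ages in years: four old credit cards, then a brand-new loan.
    cards = [10, 9, 8, 7]
    with_loan = cards + [0]

    average = lambda xs: sum(xs) / len(xs)
    print(average(cards))      # 8.5 years
    print(average(with_loan))  # 6.8 years: the new loan drags the average down
    # Pay the loan off and close it, and the average climbs back toward the
    # cards-only figure, which keeps rising as those old cards age.
    ```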

  • I got stopped with a panettone once. Thankfully this was at EWR, so the Italian-American gate agent understood why I'd be smuggling one to the West Coast.

  • That's interesting, given that in California preschool is for 4-year-olds and kindergarten is the year after.

  • Almost every personal computer that isn't a MacBook is poorly secured because filesystem encryption isn't on by default. Nobody encrypts their data at rest, so an attacker just has to pull the drive and read it with another computer. Hell, I don't encrypt my entire filesystem despite knowing this, because of the added boot time, but everything that matters is encrypted and backed up across multiple devices.

    The best thing anyone can do is keep their critical digital data to a minimum, keep that data encrypted and backed up, and use a password manager properly. That alone makes it exceedingly unlikely you'll ever be a victim of cybercrime, simply because you're more of a pain in the ass to compromise than 99.9% of the world.

    I personally have almost 10 TB of data across all my systems, but of that maybe 10 MB is actually valuable to anyone but me.
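
    For the "keep what matters encrypted" piece, even simple file-level encryption defeats the pulled-drive attack. A minimal Python sketch using the `cryptography` package; the filename is a placeholder, and the key handling is deliberately naive (in practice the key lives in a password manager, not next to the file):

    ```python
    from pathlib import Path
    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate a key once and store it OFF this disk (password manager, etc.).
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Placeholder file: encrypt it at rest, keep only the .enc copy around.
    plaintext = Path("tax-records.pdf").read_bytes()
    Path("tax-records.pdf.enc").write_bytes(fernet.encrypt(plaintext))

    # With the key, decryption is the symmetric round trip.
    restored = fernet.decrypt(Path("tax-records.pdf.enc").read_bytes())
    assert restored == plaintext
    ```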

  • I don't think so, because it requires proof that you actively work there, and those who leave get assigned an alumni role and are grandfathered in. It's mainly PIPs and toxicity that get discussed, plus memeing about how dog shit things are.

  • I can't think of any neighborhood in SF where I'd choose one of these places over literally anywhere else. Too much good cheap food here.

  • This is already a thing. I'm part of a 25k-person Discord server for current and former Amazon/AWS employees. We discuss a ton about the company's inner workings, navigating the toxic AF environment, and helping people find other jobs. Nothing at trade-secret level, but that Discord would give any competitor a massive leg up in direct competition with Amazon.

  • It's trained on Western media, so this shouldn't be surprising: those are the two biggest threats to the Western world. An AI trained on China's intranet would likely nuke the US, Russia, and select SEA countries.

  • I mean, I live in the most expensive region of the US and live pretty comfortably, but go off paying to see ads and having content taken from you, I guess.

  • Please never bring up CNF again. I'm a year out of college, two years out of finite automata, and I still shudder when it's brought up.

  • Do yourself a favor next time and rent a floor sander for like 80 bucks a day, and save your orbital sander for the edges. That, or duct tape the orbital sander to a stick.

  • That was a pretty interesting read. However, I think it conflates correlation and causation a little too strongly. The overall vibe of the article is that developers who use Copilot write worse code across the board. I don't think that's necessarily the case, for a few reasons.

    The first is that Copilot is just a tool, and like any tool it can easily be misused. It makes programming accessible to people it wouldn't have been accessible to before; a lot of people who are very new to programming can now build massive programs they otherwise couldn't have. It's also going to be leaned on more heavily by newer developers, because it's a more useful tool to them, but it will help them learn more quickly too.

    The second is that they use a graph with an unlabeled y-axis to show an increase in reverts, and never say whether it measures raw lines of code or a percentage of lines of code. That matters because Copilot lets people write a fuck ton more code; it legitimately makes me write at least 40% more. An increase in raw reverts is simply a function of writing more code. If anything, I suspect I revert a smaller percentage of lines, because it forces me to reread the AI's output multiple times to check its validity.
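
    The raw-vs-percentage distinction is easy to show with toy numbers (all invented):

    ```python
    # Invented numbers: 40% more code written, identical 5% revert rate.
    base_lines = 10_000
    copilot_lines = base_lines * 1.4
    revert_rate = 0.05

    print(base_lines * revert_rate)     # 500.0 reverted lines without Copilot
    print(copilot_lines * revert_rate)  # 700.0 reverted lines with Copilot
    # An unlabeled y-axis showing "reverts went up" can't tell this apart
    # from the code actually getting worse.
    ```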

    This ultimately comes down to the developer using the AI. It shouldn't be writing massive, complex functions; it's an advanced, context-aware autocomplete that happens to save a ton of typing. Sure, you can let it run off and write huge chunks of your code base, but that's akin to hitting the next-word suggestion on your phone keyboard a few dozen times and expecting something coherent.

    I don't see it much differently than when high-level languages first became a thing. Python let a lot of people who would never have written code in their lives jump in and be productive immediately. Both provide accessibility to more people than the tools before them, and I don't think that's a bad thing, even if there are some negative side effects. Besides, anything that really matters should have thorough code reviews and strict standards. If janky AI-generated code is getting into production, that's a process issue, not a tooling issue.