Posts 166 · Comments 1508 · Joined 3 yr. ago

  • I think you're describing district heating, which works great in places that planned ahead and buried the necessary plumbing so that the waste heat from nearby industrial processes can be beneficially used to heat nearby homes and offices.

    The detail, however, is that those industrial processes divert their heat into the district plumbing only when someone wants it; if nobody needs heating (eg 40 C summer weather), they vent the heat to the atmosphere with air cooling instead. That is to say, the demand for heating varies over time, and this is fine, because the industrial process can always fall back to dumping its heat into the air.

    This doesn't work for AI data centers because the amount of "waste" heat (eg 100+ megawatts) is well in excess of any nearby demand for heating. To quantify demand, I looked to the district heating system of Ulaanbaatar, the capital city of Mongolia, home to 1.67 million people, and the coldest capital city in the world by average annual temperature:

    the Ulaanbaatar District Heating Company, encompassing 13,500 buildings with a total connected capacity of 3924 MW

    The system serves 60% of the population, so about 1 million people. Where in the mostly-temperate USA could a 4 gigawatt AI data center be located so that it's right next to 1 million people that need 24/7 heating as though they lived in Mongolia?

    Scaling down to a 100 megawatt data center, the demand would be a population of about 25,000 living in essentially arctic conditions. Such places already have district heating, such as in Alaska. So if a smaller AI data center shows up, it just means the existing non-AI heat source would fall back to dumping heat into the air.
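    For anyone who wants to sanity-check that scaling, here it is in a few lines of Python, using only the figures quoted above:

    ```python
    # Per-person heating demand, from the Ulaanbaatar figures above:
    # ~3924 MW of connected capacity serving roughly 1 million people.
    capacity_kw = 3924 * 1000
    population = 1_000_000

    kw_per_person = capacity_kw / population       # ~3.9 kW per person
    datacenter_kw = 100 * 1000                     # a 100 MW data center
    people_needed = datacenter_kw / kw_per_person  # ~25,000 people
    print(f"{kw_per_person:.1f} kW/person; {people_needed:,.0f} people to soak up 100 MW")
    ```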

    In the end, there are very few places that need heating all year round, but AI datacenters would be producing heat all year round. Even if the heat were used for something outlandish, like heating every square meter of public roadway, that still might not be enough demand to quench these behemoth AI datacenters. And that's before the cost of building out the district heating system.

    We should definitely build district heating systems where they make sense, but building them so AI data centers can exist would be doing the right thing for the most terrible of reasons.

  • While not strictly biofouling, the marine environment can definitely be affected by introducing hotter water where it didn't exist prior, in and around the outflow pipe. Seaside nuclear power stations that use seawater cooling need to be mindful to diffuse the heated water over a large area, to minimize the ecological impact. Citation: https://ui.adsabs.harvard.edu/abs/2025EcInd.17012986J/abstract

  • There is almost certainly an impact somewhere, but I don't have the data to know where it is. My conjecture is that a localized mass of steam would cause convection currents and drive microweather phenomena, especially downwind of such an air cooled facility. I'm not sure rain is necessarily the result, unless there's a sizable mountain downwind, since although hot air will rise, it might run out of steam (pun intended) before cooling down enough to fully condense out. So it might just be adding a layer of humidity that floats a few hundred meters above the surface.

    But even that could be devastating, if said layer blocks natural convection currents over a downwind town or city. It could act as a thermal cap, making that town warmer at night, because heat rising from the city would meet that humid layer and get absorbed by the water. The thermal capacity of water comes into play again, but this time against the city.

    Heat energy is a driver for cyclones, as when the warm, moist air of the Caribbean accelerates toward the southern USA, and only once over land does the storm start to slow down, due to drag and the loss of its energy source. I doubt we'll ever have an AI-induced hurricane, but in a situation where there's already an energetic weather event, it cannot possibly help to be adding heat.

    I defer to the meteorologists to say what happens to the local weather and climate, and biologists on what happens to humans and wildlife. But I can't see it being good, no.

  • Air cooling is feasible, as evidenced by existing power stations that use it. A lot of newer nuclear generation uses water cooling, being sited along the ocean and in the multi-gigawatt range. But we can also find examples of inland power stations that have no water connection and therefore need massive cooling towers. Here is one in Germany with a 2.2 GW rating and a 200 meter tall tower: https://en.wikipedia.org/wiki/Niederaussem_Power_Station

    This is, as you can imagine, rather expensive to build, but it's doable. Cooling a coal fire is not substantially different from cooling compute loads in a data center, as it's all just a matter of moving heat around. Will there be differences due to the base temperature of coal versus GPUs? Yes, since the ratio of input to ambient temperature matters. But on the flip side, this should make a data center tower easier to construct, as the plumbing for lower temperatures is simpler.

    Mechanical engineers can chime in on feasibility for AI data centers, but seeing as it hasn't been done, it's probably still cost related.

  • Darn, you're right, the hours fell off in my dimensional analysis. Corrected, although 6.9 hours for a pool isn't much time for swimming at all.

  • Other commenters correctly describe the cost analysis for using evaporative cooling, but I'll add one more reason why it's the preferred method when water is available: evaporating water can dissipate truly outlandish amounts of heat with very few moving parts.

    Harkening back to high school physics class, water -- like all other substances -- has a certain thermal capacity, meaning the energy needed to increase the temperature of 1 kg of water by 1 degree C. The specific thermal capacity of water is already quite high, at 4184 J/(kg*C), besting all the common metals and only losing to lithium, hydrogen, and ammonia. In nature, this means that large bodies of water are natural moderators of temperature, because water can absorb an entire day's worth of sunlight energy but not substantially change the water temperature.

    But where water really trounces the competition is its "heat of vaporization". This is the extra energy needed for liquid water to become vapor; simply bringing water to 100 C is not sufficient to make it airborne. Water has a value of 2146 kJ/kg. Simplifying to where 1 kg of water is 1 liter of water, we can convert this unit into something more familiar: 0.596 kWh/L.

    What these two physical properties of water tell us is that if our city water comes out of the pipe at 20 C, then to get it to 100 C to boil, we need the difference (80) times the thermal capacity (4184 J/(kg*C)), which is 334,720 J/kg. Using the same simplification from earlier, that comes out to 0.093 kWh/L. And then to actually make the boiling liquid become a vapor (so that it'll float away), we need 0.596 kWh/L on top of that.

    Let that sink in for a moment: the energy to turn boiling water into vapor (0.596 kWh/L) is six times higher than the energy (0.093 kWh/L) to raise liquid water from 20 C to 100 C. That's truly incredible, for a non-toxic, life-compatible substance that we can (but should we?) safely dump into the environment. Totaling the two values, one liter of water can dissipate 0.69 kWh of energy. Nice!

    In the context of a 100 megawatt data center (which apparently is what the industry considers the smallest "hyperscale data center"), if that facility used only evaporative cooling, the water requirement would be 144,927 L/hour. That is an Olympic-size swimming pool every 6.9 ~~seconds~~ hours. Not nice!
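    To check my math, here's the whole chain in a few lines of Python, using the same figures as above (and taking an "Olympic pool" as a round million liters, which is the simplification behind the 6.9 hour figure):

    ```python
    # Energy dissipated per liter of water heated from 20 C and evaporated.
    c_p   = 4184          # J/(kg*C), specific heat of liquid water
    h_vap = 2_146_000     # J/kg, heat of vaporization (figure used above)

    heat  = c_p * (100 - 20) / 3.6e6   # ~0.093 kWh/L to reach boiling
    vap   = h_vap / 3.6e6              # ~0.596 kWh/L to actually vaporize
    total = heat + vap                 # ~0.69 kWh/L

    dc_kw = 100_000                    # 100 MW data center
    liters_per_hour = dc_kw / total    # ~145,000 L/h (144,927 above used the rounded 0.69)
    hours_per_pool  = 1_000_000 / liters_per_hour  # ~6.9 h per million-liter pool
    print(f"{total:.2f} kWh/L; {liters_per_hour:,.0f} L/h; a pool every {hours_per_pool:.1f} h")
    ```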

    And AI datacenters are only getting larger, with some reaching into the low single digits of gigawatts. But what is the alternative for cooling the more-modest data center from earlier? The reality is that the universe only provides three forms of heat transfer: conduction, convection, and radiation. The heat from data centers cannot be concentrated into a laser and radiated into space, and we don't have some sort of underground granite mountain that the data centers can conduct their heat into. Convection is precisely the idea of storing the heat in a substance (eg water, air) and then jettisoning the substance.

    So if we don't want to use water, then we have to use air. But for the two qualities of water that make it an excellent substance for evaporative cooling, air doesn't come close -- 1003 J/(kg*C) and no heat of vaporization, because air is already gaseous. That means we need to move ungodly amounts of air to dissipate 100 megawatts. But humanity has already invented the means to do this, by a clever structure that naturally encourages air to flow through it.
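    To put a number on "ungodly", a rough sketch with assumed values (the 10 C temperature rise across the facility is my guess, not a design figure):

    ```python
    # Mass and volume of air needed to carry away 100 MW of heat.
    c_p_air = 1003     # J/(kg*C), specific heat of air
    dT      = 10       # C, assumed warming of the air passing through
    rho     = 1.2      # kg/m^3, air density near sea level

    power_w   = 100e6                      # 100 MW to reject
    mass_flow = power_w / (c_p_air * dT)   # ~10,000 kg/s of air
    vol_flow  = mass_flow / rho            # ~8,300 m^3/s
    print(f"{mass_flow:,.0f} kg/s, about {vol_flow:,.0f} m^3/s of air")
    ```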

    The only caveat is that the clever structure is a cooling tower, and is characteristic of nuclear power stations. It's also used for non-nuclear power station cooling, but it's most famous in the nuclear context, where generators are well into the gigawatt range. Should AI datacenters use nuclear-sized air cooling towers instead of water evaporation? It would work, but even as someone that's not anti-nuclear, the optics of raising a cooling tower in rural America just to cool a datacenter would be untenable. And that's probably why no AI datacenter has done that.

    To be abundantly clear, I'd rather not have AI datacenters at all. But since the question was why water consumption is such a big deal, it might be best to say that it's a physics problem: there isn't any other readily-available way to provide cooling for 100+ megawatts, without building a 100+ meter tower. Water is always going to be cheaper and more on-hand than concrete.

  • A license is the legal instrument that makes open source software/hardware/silicon possible, describing precisely what rights are granted or retained. The term "open source" usually means the definition propounded by the Open Source Initiative (OSI), though not in every context. At the very minimum, an OSI-compliant open source license must allow any distribution of the software without seeking additional permission from the author, must be accompanied by access to the source code, and must not include provisos outright prohibiting the software's use for certain endeavors.

    That last point is about the "use" of the software, and is a crucial distinction between "open source" and "source available". Source available means the source code can be examined, but not necessarily used, modified, or redistributed. An open source license explicitly allows all uses, but possibly with additional obligations. For example, the AGPL license allows software to be used to run a server, but creates an obligation to provide the server source code to all users that connect. Whereas something like the MIT-0 license has zero additional obligations, while allowing the broadest use. When a license is both Open Source and allows free use, it is known as a FOSS license.

    The exact verbiage of a license is the domain of lawyers, a license being a legal document. But the choice of license is down to the software author or corporate owner, and is a multifaceted consideration, including marketability, compatibility with other software, and whether it's more important that the code gets used or that it forever remains available.

    The latter is the major battleground for advocates of permissive versus copyleft licenses. Some software (eg reference cryptographic algorithms) has the priority that the greatest number of people should use it, so a permissive license makes sense. Other software (eg the desktop 3D rendering suite Blender) has the priority that nobody can ever take it private by adding proprietary-only features.

    Choosing open source is easy, but choosing a license to effect that choice can get tricky. For authors publishing their software, the choice may very well change the course of history (ie Linux GPL-2). For consumers or businesses using software, the license dictates how changes can be distributed.

  • This blog post comes to me at an interesting time, for I've been gathering info to rebuild my router using FreeBSD. Specifically, I bought a hard-copy of The Book Of PF, 4th Edition, for configuring PF for routing and firewalling. Like with all good firewalls, the PF rulesets start with blocking all traffic. But unlike the VyOS-based rules used by my outgoing Ubiquiti router, PF does not implicitly include rules for common use-cases, such as enabling hairpin NAT for Legacy IP. Nor does the syntax assume that rules are only for inbound, as the shortest syntax will actually apply a rule in both directions on every interface.

    To that end, one of the tenets for configuring a PF firewall is to also filter outbound traffic, as a matter of: 1) asserting control over the network, and 2) implementing the principle of least privilege. I can reasonably accept that my home's guest WiFi network should be fairly free flowing for outbound traffic, but that shouldn't apply to my IoT VLAN. Quite frankly, my IoT VLAN only allows outbound connections to four specific NTP servers hosted by ntp.org, because my thermostat has a badly-designed real-time clock and I refuse to allow network access for devices that historically never needed it.
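    Sketched in pf.conf terms, it looks something like this (interface and host names here are made up for illustration, not my actual ruleset):

    ```
    iot_if = "vlan20"                       # hypothetical IoT VLAN interface
    table <ntp_servers> { 0.pool.ntp.org, 1.pool.ntp.org, \
                          2.pool.ntp.org, 3.pool.ntp.org }

    block all                               # default deny, in and out
    pass in on $iot_if inet proto udp \
        from $iot_if:network to <ntp_servers> port 123   # NTP and nothing else
    ```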

    Before containers, firewalls implemented the DMZ idea, where any host that runs an externally-accessible service would sit within the DMZ, to keep the broader LAN from being infiltrated if something goes wrong. Your solution achieves a sort-of DMZ, but does it at the Docker host. Whereas a true DMZ would segment off the rest of your network, further reducing risk, since otherwise iptables is the only line of defense.

    That said, zooming out, this caught my attention:

    The breaking point came when I wanted to host Gemini FastAPI, a project that wraps Google’s internal Gemini API into an OpenAI-compatible interface, useful for using your Gemini Pro subscription outside Google’s walled garden. The catch: it needs your browser cookies, which means full access to your Google account.

    The very premise of Gemini FastAPI seems flawed to me, if it's trying to create a wrapper when Google clearly does not want that to exist. The challenges that you observed, such as the brittleness of IP allowlists, would suggest to me that the overall endeavor is going to be brittle, by Google's design.

    To be clear, that doesn't mean you shouldn't pursue this, in the same way that yt-dlp exists for the legitimate use of accessing YouTube. But what both yt-dlp and Gemini FastAPI will never escape is that they only exist because Google hasn't cracked down on them further. When every indication shows that this road has even more trouble beyond the next curve, is this what you want to invest time and effort into? There are other platforms and protocols that replace YouTube, or at least minimize one's dependency on a clearly antagonistic host.

    At bottom, I think the question is whether connecting to Gemini is really worth all of this trouble, when they evidently don't want you to do this, and it adds yet another dependency upon Google. Even if you believe Google is 100% benevolent and their lack of a built-in support for using Gemini externally is just a minor oversight, you will have to pick which services you will base your own infrastructure upon. This is, after all, c/selfhosted.

  • The premise is good, but the linked article is too short to explain why protocols encourage decentralization, which protects against authoritarianism and censorship, and promotes bona-fide free speech (not to be confused with the "BuH mAh FrEe SpEeCH!" morons that only like free speech when it agrees with them, and don't when it doesn't).

    For a lengthier discussion, which includes Internet history, the legacy of the USA's Section 230 of the CDA and how that impacts the modern web, and what precisely a protocol should avoid doing to successfully achieve practical decentralization, Mike Masnick's 2019 paper "Protocols, Not Platforms" is particularly apt.

    Yes, I know I've mentioned him a number of times in my comments, but there aren't too many people who are abreast of technological history and the legal framework surrounding the internet, and are skilled enough writers to condense into words the clarity needed to build an internet that works for everyone, not just the rich or the few.

    As a note, BlueSky was directly inspired by his paper and he now sits on the board of BlueSky. Is that antithetical to his 2019 paper? I don't think so, since commercial success of a protocol is how it gains staying power: Amazon's S3 API, email's SMTP, and QUIC are all examples of protocols where everyone benefits from their ubiquity, but they had to be commercialized first, by the likes of AWS, AOL and CompuServe, and Google. BlueSky's opponent is not another protocol like ActivityPub, but rather the platform formerly known as Twitter. The very existence of a bridge between the ATmosphere and the Fediverse proves that platforms are the real enemy, and we all need to keep that in mind.

    No enemies to the left.

  • Your understanding is not wrong, but "within namespaces" is doing a lot of the heavy lifting. After all, there isn't just one namespace but many simultaneous namespaces at play. A process namespace is where process IDs (PIDs) begin from 1 and fork()'d processes are assigned incrementing PIDs. These values are meaningless outside of the namespace, and might even get mapped to different values in the parent namespace. A process namespace gives the appearance that the process with PID 1 is the init process, which is customarily the first userspace process started once the kernel is running.
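    A tiny demonstration of that PID remapping (Linux-only, needs root, and very much a sketch rather than production code):

    ```python
    import ctypes, os

    CLONE_NEWPID = 0x20000000  # clone flag from <sched.h>

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWPID) != 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))

    # unshare(CLONE_NEWPID) applies to children, not the caller, so fork:
    pid = os.fork()
    if pid == 0:
        print("inside the namespace, I am PID", os.getpid())  # prints 1
        os._exit(0)
    os.waitpid(pid, 0)
    print("outside, the parent saw the child as PID", pid)  # an ordinary PID
    ```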

    There are also network namespaces, where network interfaces (netif) can be switched (Layer 2) or routed (Layer 3), independent of what the global/default/parent network namespace is doing. This gives the appearance that all the network configuration is wholly independent, and allows neat things like crafting specialty routing (eg Kubernetes overlay networks).

    Then there are user namespaces, where the root user has the appearance of total authority, and normal users can be created, but these are entirely distinct from the global/default/parent users and groups on the machine. This pairs well with filesystem namespaces, where a sub-tree of the real filesystem is treated as though it is a full tree, which allows the namespaced users to do standard manipulations like changing file ownership or permissions. This is essentially what UNIX chroot() does, but IIRC, chroot() did not also create user namespaces.

    Taken together, namespaces in Linux are less about isolation -- although they certainly work for that -- and more about abstracting everything else in userspace away: no need to deal with other people's processes, netifs, files. It's like having the whole machine to yourself. In the history of computer science, isolation is often achieved precisely by making everything else invisible and out of the way. Virtual Memory did that, as did x86 Protected Mode, as did Virtual Machines. And so too does namespacing. Containers are the result of namespacing all the key kernel interfaces.

    Perhaps the crucial thing then is what interfaces aren't namespaced. In Linux, a big one is device drivers. Folks that want to share a USB TV capture card or a PCIe GPU or even a sub-NIC using SR-IOV will find that /dev files are not namespaced. They exist in the global space and aren't isolated. So the only thing that can be done is to pretend to "move" the device file into a container, with everyone else promising not to use that device anyway. This is not isolation, because accidental or malicious action will break it. True "device isolation" would require every driver to be namespace aware, so that it could treat requests from two different namespaces as distinct. That does not exist at all in Linux, and such low-level work continues to be difficult with containers, often surprising people who think that Linux containers are complete abstractions. They are not.

  • The Oxford comma would be mandatory.

  • freebies @sh.itjust.works

    Walgreens: free 8x10 print. Use code MVPMOM . Exp 10 May EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • There are terminology issues here, in the Lemmy post title, in the article body, and in the article's TL;DR. Basically, nothing is internally consistent except maybe the OCI Runtime spec itself, although its terminological relevance is a separate issue.

    Lemmy title: Containers are not Linux containers

    Article title: What Is a Standard Container: Diving Into the OCI Runtime Spec

    Both titles imply the existence of non-Linux containers, yet only the latter actually describes the contents of the article, specifically naming the "other" type of container, being "Standard Containers" defined by the OCI Runtime spec. As a title, I greatly prefer the latter, whereas the former is unnecessarily antagonistic.

    That aside, the article could really be helped by a central glossary section, as it refers to all of the following as containers, without first establishing that each can validly be called a "container":

    • OCI-compliant containers
    • Standard containers
    • Linux containers
    • Docker containers
    • Kata VM-based containers
    • Other VM-based containers that have been deprecated

    If the goal was to distinguish what each of these mean, the article doesn't do that great of a job, other than to say "these exist and aren't Linux containers, except Linux containers are obviously Linux containers".

    Reframing what I think the article tried to convey, while borrowing some terminology from C++/Python, the OCI Runtime specification defines an Abstract Base Class known as a Standard Container. A Standard Container supports the most minimal functions of starting and stopping an execution runtime. For Linux, FreeBSD, Kata, etc, those containers are subclasses of the Standard Container.

    For the most part, unless your containerized application is purely computational and has zero dependencies upon the OS, your container will be one of the subclasses. There are essentially zero practical container images that can meet the zero-dependency requirements of being a Standard Container. So while it's true that any runtime capable of running the container subclasses could also run a Standard Container, it is of little value in production. Hence why I assert that it's an abstract base class: it cannot really be instantiated in real life.
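    To restate that analogy in code (a toy sketch using the lifecycle verbs the spec describes -- create, start, kill, delete -- not any real runtime's API):

    ```python
    from abc import ABC, abstractmethod

    class StandardContainer(ABC):
        """The minimal lifecycle of a Standard Container, as an abstract base class."""
        @abstractmethod
        def create(self, bundle: str) -> None: ...
        @abstractmethod
        def start(self) -> None: ...
        @abstractmethod
        def kill(self, signal: int) -> None: ...
        @abstractmethod
        def delete(self) -> None: ...

    class LinuxContainer(StandardContainer):
        """A practical subclass: depends on namespaces, cgroups, and the Linux ABI."""
        def create(self, bundle): print(f"unpack {bundle}, unshare namespaces")
        def start(self): print("exec the entrypoint against the Linux kernel")
        def kill(self, signal): print(f"send signal {signal}")
        def delete(self): print("tear down namespaces and cgroups")

    LinuxContainer().start()      # fine: a concrete, OS-dependent container
    try:
        StandardContainer()       # the abstraction alone cannot be instantiated
    except TypeError as e:
        print("as expected:", e)
    ```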

    This is the reality of containers: none can abstract away an application's dependency upon the OS. The container will still rely upon Win32 calls, POSIX calls, /proc, BSD sockets, or whatever else. So necessarily, all practical containers need a kernel layer. Even the case of Kata's VM-based containers just means that the kernel is included within the container. Portability in this context just means that the kernel version can change beneath, but you cannot take a Linux container and run it on FreeBSD, not without shims and other runtime kludges.

  • Detest things like zelle which just feels like a scam to me. I absolutely wish the bank bill pay system was more advanced. Like I could have a qr code and say bill me here and they could have one to say set us up as a place to pay for with this qr code.

    Does your current bank not do this with Zelle QR codes? At least in my bank's mobile app, it'll happily generate a QR code for my account as a Zelle destination, which other people can scan and then pay me. Or they can scan and send me a request, which I can then accept and pay them. I use Zelle precisely when everyone else would use Venmo, because I don't want yet-another institution to have my bank details, and since Zelle is integrated with my bank, I already have to trust them anyway.

    It's not a bill.com-esque invoicing system, but maybe somebody will build atop Zelle to do exactly that. I will say that tying Zelle to a phone number or email is a bit limiting, though, and maybe one day there will be "usernames" for Zelle, encoded purely in QR codes.

  • No Stupid Questions @lemmy.world

    How prevalent are cash transactions in the USA?

  • More details and reporting from the local NPR affiliate for Miami-Dade County, on the present financial difficulties for Brightline: https://www.wlrn.org/business/2026-03-16/brightline-financial-troubles-debt-credit

    Specifically this:

    Trains can still run should Brightline default on its loans. The company’s total debt load is more than $4 billion, spread over different timelines and with different seniority. Revenue bonds issued in 2024 recently traded for 33 cents on the dollar, a clear sign of growing market worries about Brightline’s ability to make its payments on time.

    IMO, this just means the rail business is making money, but not quickly enough to pay off the debts on their fixed schedule. Bankruptcy protection would allow restructuring this debt, and no sensible bankruptcy plan would involve cutting the very trains that provide the revenue to pay the debts.

    Regarding investor money, I'd think a regional real estate company would be a good investor, since rail service benefits adjacent land, and development adjacent to rail benefits the train service. Though admittedly, commercial real estate currently has its own headwinds to face up against. VC money is a non-starter, because the service is already running and that's not what VCs do. PE firms could also invest, but I think enough people are aware of what happens when private equity touches anything: enshittification and short-term cash extraction, destroying any long-term value.

  • The other commenters have described the challenge, but I'd like to clarify the terminology, since the distinctions might not be obvious. For tech, we generally speak of the separate qualities of being Free (as in, use it however you want) and Open (as in, open to study, reimplement, and extend). If both qualities are present, that's called Free And Open.

    The most common designation is for software: if it is both Free and Open, it's Free And Open-Source Software (FOSS). Examples include the Linux kernel (GPL license) and FreeBSD in its entirety (BSD license). This means you can remake the software and use it how you like.

    For hardware, there's also the equivalent concept of Free And Open, and that means the PCB design can be remade and used for whatever you want. If you wish to use Free And Open hardware for war or for hobby use, that's entirely up to you.

    But there's also the realm of silicon, which is the most esoteric and specialized, and there are far fewer Free And Open silicon designs available. For example, the x86 CPU architecture is neither Free nor Open. It is patented, and its logic comprises proprietary trade secrets of Intel, AMD, and VIA. They document the behavior of registers, but they never publish the silicon designs so that you could make your own at home.

    ARM is slightly different, in that they'll gladly help you build your own ARM silicon (eg Apple Silicon system-on-chips), but you need to pay them for a license. So it's neither Free nor Open because: 1) you have to pay money, and 2) the plans aren't available for examination until you pay up.

    LoRa silicon is more akin to x86, because they just don't publish anything except the register behavior. The license to use the LoRa design is baked into the sale price that the LoRa Alliance charges. And yet still, you at home receive no right to remix or examine that silicon design yourself, unless you do actual reverse engineering. And even then, they have patents.

    LoRa is neither Open nor Free silicon. And it never claimed to be. Meshtastic and MeshCore use Free and Open hardware and software, but that's it. You do not have as many rights to the silicon as you do to the hardware and software.

  • Kings and queens are known by their first name

    I can concede this.

    the distinction is because of the assumption the predecessor is alive while Junior and The Third are around

    Whereas this cannot possibly make sense, because knowing which person is which would still be relevant after they're all dead. See Pliny the Elder versus Pliny the Younger, Alexandre Dumas (father vs son), and MLK (senior and junior).

  • Seeing as people can change their own name to whatever they want, including if there is no preceding generation with that name, then no, there's no particular issue with suffixes on names.

    I'd like to point out that in the English-speaking world, the English (and now British) Monarchy increments the generation number without regard for the immediately preceding generation. As in, Elizabeth II was crowned 300+ years after Elizabeth I. So it is well accepted that ordering doesn't necessarily matter and there is no hard rule against it.

  • MeshCore @feddit.org

    MeshCore's problem with security | Alainx277's Blog

    alainx277.com/posts/meshcores-problem-with-security/
  • MeshCore @feddit.org

    MeshOS Keygen - License Key Generator for MeshCore (T-Deck / Android)

    meshoskey.com
  • freebies @sh.itjust.works

    CVS: free 8x10 print. Use code FREE4APRIL . Exp 28 April EOD

    www.cvs.com/photo/create/builder
  • Programmer Humor @lemmy.ml

    Mike Masnick | Claude Code Just Rickrolled Me

    bsky.app/profile/did:plc:cak4klqoj3bqgk5rj6b4f5do/post/3mjnc2o3nhk2q
  • freebies @sh.itjust.works

    Walgreens: free 8x10 print. Use code RAINYPRINT . Exp 7 April EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • freebies @sh.itjust.works

    CVS: 2x free 5x7 prints. Use code 2DAYSONLY . Exp 30 March EOD

    www.cvs.com/photo/create/builder
  • freebies @sh.itjust.works

    CVS: free 8x10 print. Use code MARCH810 . Exp 16 March EOD

    www.cvs.com/photo/create/builder
  • freebies @sh.itjust.works

    Walgreens: free 8x10 print. Use code PRINT4YOU . Exp 17 March EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • freebies @sh.itjust.works

    Walgreens: 2x free 5x7 prints. Use code 2LARGE . Exp 11 March EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • Cast Iron @lemmy.world

    A friend's grandmother-original pan, stripped and seasoned

  • freebies @sh.itjust.works

    CVS: 10x free 4x6 prints. Use code FLASH10 . Exp 25 February EOD

    www.cvs.com/photo/create/builder
  • freebies @sh.itjust.works

    Walgreens: free 8x10 print. Use code ITSFREE810 . Exp 26 February EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • freebies @sh.itjust.works

    Walgreens: free 8x10 print. Use code HEARTS . Exp 14 February EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • Trains @midwest.social

    Santa Clara Co.: VTA Reports Record-Breaking Ridership For 2026 Super Bowl

    www.sfgate.com/news/bayarea/article/santa-clara-co-vta-reports-record-breaking-21342247.php
  • freebies @sh.itjust.works

    CVS: free 8x10 print. Use code FREE4FEB . Exp 9 February EOD

    www.cvs.com/photo/create/builder
  • freebies @sh.itjust.works

    Walgreens: 2x free 5x7 prints. Use code 2LARGE . Exp 4 February EOD

    photo.walgreens.com/store/prints-and-enlargements-details
  • Not Just Bikes @feddit.nl

    Disneyland-area bus operator will close down in March 2026

    rideart.org/anaheim-transportation-network-announces-wind-down-of-operations/