An update on GitHub availability

(github.blog)

362 points | by salkahfi 21 hours ago

60 comments

  • bartread 34 minutes ago
    > I wanted to give an update on GitHub’s availability in light of two recent incidents.

    [Emphasis mine]

    Vlad, you are living in a very different world to me.

    GitHub has suffered dozens and dozens of outages since the beginning of the year. It is notably less available and reliable than it was even as recently as last year. People have created dashboards and heatmaps showing how bad GitHub has become. At least one of those has made it to the front page of Hacker News. In fact its unreliability and persistent availability issues have become a frequent topic of conversation across sites and communities frequented by its users - of which HN and Reddit are two obvious examples. At this point GitHub's unreliability risks becoming a meme, if it hasn't already done so.

    The only thing your post makes clear is that your priorities ARE NOT clear.

    > Our priorities are clear: availability first, then capacity, then new features.

    WRONG!

    Your priorities are:

    1. Availability

    2. Availability

    3. Availability

    You have NO OTHER PRIORITIES.

    If you want other priorities, focus on AVAILABILITY for 6 months and then come back and we can all have a serious conversation about something else.

    In the meantime, you need to understand that GitHub's reliability over months and months - not just in April - has been completely unacceptable.

    Focus on fixing that and on nothing else.

  • embedding-shape 20 hours ago
    Hah, love that now they say "Our priorities are clear: availability first, then capacity, then new features" when 6 months ago, it was seemingly exactly the same except Azure supposedly was gonna save them:

    > GitHub Will Prioritize Migrating to Azure Over Feature Development - GitHub is working on migrating all of its infrastructure to Azure, even though this means it'll have to delay some feature development.

    > In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.

    https://thenewstack.io/github-will-prioritize-migrating-to-a...

    So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example. And it was "existential" 6 months ago yet they keep stumbling on the exact same issue today?

    Even when they're supposedly focused exclusively on reliability and uptime, we get the experience that we have today. It's kind of incredible how a company with the resources of Microsoft is seemingly unable to stop continuously shooting itself in the foot. Kind of impressive, actually. As icing on the cake, they've decided to buy up all the popular developer services and then migrate them all to the same platform - great idea too.

    • madeofpalk 19 hours ago
      This seems uncharitable. Priorities aren't exclusive, especially at scale across large engineering orgs like GitHub. It could be that these are the top level priorities, but teams or individuals who aren't able to contribute to these priorities will work on other things like new features.
      • voncheese 16 hours ago
        Agree that priorities aren't exclusive and there may be teams/individuals that aren't able to contribute if they stay in their current teams/roles

        Where it becomes questionable though is when enough progress isn't being made on the top priority (reliability). If Github is being true to their word, they need to be pulling people off of teams that are working on features to work on reliability so that top priority gets the resourcing it needs.

        Given the pace of improvement, and the cited example of moving to Azure from months ago, it's not super clear they are doing that. Also not clear that they aren't, maybe the move to Azure is just a more than 6mo project no matter how many people are on it.

        • estimator7292 14 hours ago
          Sure, but frontend devs fundamentally cannot contribute to the structural reliability issues.

          The person who rewrote the issue page view probably doesn't know anything about multi-cloud scaling for millions of users with Azure-crippling throughput. That's an incredibly specialized set of knowledge and experience that is utterly disjoint from frontend work.

          But at the same time, given the state that GitHub is in, I personally wouldn't want to allow any devs to push anything to prod that doesn't immediately affect stability. I'd completely freeze frontend work until the infrastructure is more stable. But then again I write C for microcontrollers so what do I know?

          • tedd4u 13 hours ago
            I don't know their architecture, but I would bet that if FE devs want to contribute to availability in a capacity-constrained world (as GH's CTO mentions), they could focus on profiling and optimization - backend access patterns, for example, caching, etc. Maybe they already have people dedicated to that, but if they're coming out of a "new features first" operating regime, I would bet there's some fruit to pick there.
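            A toy illustration of the caching idea (purely hypothetical, not GitHub's actual stack): a small TTL memoizer in front of a hot backend query, so repeated page loads stop hitting the constrained backend.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Memoize a function's results for ttl_seconds, shedding load
    from the backend call it wraps. Hypothetical sketch, not GitHub code."""
    def decorator(fn):
        store = {}  # args -> (expiry_timestamp, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]           # fresh cached value: skip the backend
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=60)
def fetch_open_pr_count(repo):
    global calls
    calls += 1                          # stands in for an expensive backend query
    return 35

fetch_open_pr_count("gap-system/gap")
fetch_open_pr_count("gap-system/gap")   # served from cache; backend hit only once
```

            Real caching at GitHub's scale would of course involve shared caches and invalidation, but the load-shedding principle is the same.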
      • embedding-shape 18 hours ago
        Ditto. I agree though, just because the priority is reliability, doesn't mean others can't work on features, especially features that might help with reliability, which I read was the motivation behind the new single-issue view, so that's my bad, might have been a bit much.

        I still think the rest of my point stands, especially the last one which is the move that has the biggest impact to the most of us developers.

      • dangus 11 hours ago
        Why do we need to be charitable to Microsoft?

        Did we lose our ability to consider them the evil empire?

        • allthetime 1 hour ago
          There’s a lot of “won’t someone think of the GitHub employees” on here
      • saghm 14 hours ago
        No, but they are ordered generally, and in this case they are explicitly saying that availability should come first
    • rwmj 19 hours ago
      It's entirely possible the move to Azure has made the availability problems worse. Dedicated hardware is much more predictable than cloud. "Let's not move to Azure and instead buy a few more racks" was likely a decision beyond the pay grade of github's management.
      • 0xy 18 hours ago
        Azure is easily the least reliable and least secure of the 3 hyperscalers, which is crazy because GCP was an also-ran underdog not that long ago.
        • alper 17 hours ago
          This entire exercise if anything is a huge indictment of Azure.

          But that doesn't matter because the kind of person that buys Azure, just like the kind of person that buys MS Teams, is entirely driven by price and does not care about anything else.

          • panarky 15 hours ago
            > entirely driven by price

            I might buy that argument if Azure compensated for its awful availability and security with lower prices.

            But the kind of person who buys Azure is the kind of person who buys Windows and Teams, perfectly happy to pay a premium for all the extra abuse.

            • bmitc 5 hours ago
              It's curious how bad people say Azure is. I've never used it, but I've used AWS, and AWS is a gigantic mess. So it makes me concerned if Azure is worse than a gigantic mess.
              • solatic 2 hours ago
                Azure's management APIs break connections coming from outside Azure's network every time they use DNS to execute a blue/green swap on their public load balancers. Existing connections are not gracefully drained. Terraform state gets corrupted (it thinks the operation failed when it actually succeeded and the resource was actually created) and requires manual fixing.

                This happened frequently enough, at a large enough scale, that we seriously considered building automation to scan the Terraform logs for the broken connections and automatically import the created resources.

                Azure support was completely worthless.
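                The automation described above could be sketched roughly like this: scan Terraform output for the connection-reset failure and emit the `terraform import` command that would reconcile state. The log format, resource address, and Azure resource ID below are all hypothetical stand-ins.

```python
import re

# Hypothetical log excerpt: Terraform reports the create as failed because the
# connection was reset mid-apply, even though Azure actually created the resource.
LOG = """\
azurerm_lb.public: Creating...
Error: creating Load Balancer "prod-lb": connection reset by peer
azurerm_storage_account.logs: Creation complete after 12s
"""

FAILED = re.compile(r'Error: creating .+ "(?P<name>[^"]+)": connection reset by peer')

def suggest_imports(log_text, address_for):
    """Return `terraform import` commands for resources that likely exist
    despite the reported failure. `address_for` maps a resource name to its
    Terraform address and cloud resource ID (hypothetical lookup)."""
    cmds = []
    for m in FAILED.finditer(log_text):
        addr, azure_id = address_for(m.group("name"))
        cmds.append(f"terraform import {addr} {azure_id}")
    return cmds

# Hypothetical mapping; in practice you'd query Azure for the real resource ID.
lookup = lambda name: (
    "azurerm_lb.public",
    f"/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Network/loadBalancers/{name}",
)

print(suggest_imports(LOG, lookup))
```

                The fragile part is exactly what the comment implies: you have to confirm with the cloud API that the resource really was created before importing it.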

              • murkt 3 hours ago
                Azure is worse. This series of posts was posted here not that long ago: https://isolveproblems.substack.com/p/how-microsoft-vaporize...
              • skywhopper 48 minutes ago
                AWS is a complex mess, but it’s pretty good at delivering its services reliably. Azure is a mess that is also unreliable.
      • AntiUSAbah 16 hours ago
        I mean, it's Microsoft and it's Azure. How much can go wrong clicking yourself a few hundred non-autoscaling normal VMs?

        There is so much workload running on Azure; I've never heard of VMs going away.

        If Microsoft can source hardware for Azure, Microsoft can source hardware for Github.

        • dijit 16 hours ago
          there's a lot that can go wrong with a hypervisor, even including hiding hardware issues from the guest OS.

          We don't think about it because we've been quite spoiled with excellent virtual machine platforms (KVM, Xen and even VMWare).

          Those who have worked a lot with VirtualBox will be aware of this; after you've spent sufficient time with VirtualBox, it can be deeply unnerving that VM technology is the default way to deploy things. (VirtualBox is very good for its original purpose, but not for reliability.)

          The question is: Does Azure use something more like VirtualBox, or more like KVM?

          HyperV exhibits properties closer to VirtualBox.

          • stackskipton 14 hours ago
            HyperV looks like VirtualBox but it's not. It's type 1 like KVM is.
            • dijit 14 hours ago
              i meant in terms of bubbling up hardware issues.
        • ZoneZealot 16 hours ago
          I've had Windows Server VMs soft crash and hard crash on Azure. Some soft-lock, and a restart via Azure gets them back. Sometimes the only fix has been to power off / deprovision, then power on again (i.e. a restart didn't fix it). It's not common, but I've encountered it multiple times. These are with operating systems that were created in Azure from their images.
    • ncruces 19 hours ago
      > So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example.

      They did that as a panic mode hack to mitigate performance: https://news.ycombinator.com/item?id=47912521

    • giancarlostoro 16 hours ago
      If they had not added or changed any features on GitHub for the past 5 years, nobody would be upset, and yet they keep changing it. It's a website that doesn't need to be reworked every five minutes. I assume the main development teams maintaining GitHub's codebase are run by managers who cannot justify their jobs unless they deliver new features for the sake of delivering new features, to keep their jobs going and / or in the hopes of getting new people to join GH, when in reality the more they wind up breaking, the more the opposite becomes true.

      They severely nerfed their search, I'm not sure why every other major tech company (Google - Search and YouTube) keeps breaking search for everything when it was working fine previously.

      What's a bigger joke is that Microsoft has Azure DevOps, which looks like it might be abandoned? But then you also have GitHub... My least favorite thing about both is the ticketing system. I cannot believe I'd ever utter the phrase "I miss Jira", when every Jira project I've ever been in has been so inconsistently set up - every single one.

      • jamesfinlayson 7 hours ago
        > They severely nerfed their search

        This always kills me. It used to work so well, and now it doesn't seem to work at all if not logged in, and not particularly well if you are logged in.

      • JCTheDenthog 14 hours ago
        >What's a bigger joke is Microsoft has Azure DevOps which looks like it might be abandoned?

        My favorite was trying to figure out how to publish debug symbols with NuGet packages to Azure DevOps artifact feeds. Horrible documentation and I was never able to get it figured out.

      • greatgib 13 hours ago
        What they nerfed the most is the basic feature of the PR diff view.

        Its only job is to display diffs and review comments, yet it easily hides the diff for files that are a little bit longer, and hides comments when you have more than a dozen - you need to click to see them. It's impossible to search in a diff without first going through it and expanding everything.

        And a ton of things are regressions compared to working with PRs a few years ago - including being a lot worse in terms of latency!

  • maccard 20 hours ago
    It's kind of hard to read this with a straight face.

    The unlabelled graph with big numbers on top, the priorities that don't match with what we're experiencing, and a list of things that they're doing without a real acknowledgement of the _dire_ uptime over the last 12 months....

    • georgyo 20 hours ago
      These are not the worst graphs in the world... Sure, the bottom-left axis is not labeled, but it still conveys the point correctly. Growth from 2023 to 2024 to 2025 to 2026 is accelerating, and at the end/beginning of 2026 they claim more growth than in the three years before, combined!

      You don't need to know the bottom left axis number. We do have to assume the graph is linear, and not some kind of negative exponent log graph. But given the rest of the content, I think that is safe to assume.

      Any company that experiences significantly more growth than they were planning for will have capacity issues.

      The priorities are mostly in line with that. They are way beyond the point where they can just add more hardware. They need to make the backend more efficient, and all the stated goals are about helping there.

      • johndough 20 hours ago
        > You don't need to know the bottom left axis number.

        We very much do. The graph suggests an insane growth in PRs from almost zero to 90M. Now compare this misleading graph with this much clearer one, which shows that the growth over the last three years has been less than 80%: https://github.blog/wp-content/uploads/2025/10/octoverse-202...

        • heisenbit 1 hour ago
          PRs were the culmination of human work. Now PRs are generated by machines to trigger human work. So the growth graph is not really absurd.
        • SkiFire13 20 hours ago
          That link shows the number of PRs created to be less than 10M though.
      • maccard 20 hours ago
        > These are not the worst graphs in the world... Sure the bottom left axis is not labeled, but it still conveys the point correctly.

        No, they're completely useless. Using the "New repos per month" as an example, if the bottom left is 1m, then that's a 20x increase in 2 years which is a lot. If the bottom left is 19m, it's a 5% increase in 2 years which is nothing.
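        The ambiguity is easy to make concrete (the 20M top-of-chart figure and both baselines are hypothetical, since the axis is unlabeled):

```python
# Same visual rise, wildly different growth, depending on where the
# unlabeled y-axis starts. Suppose the top of the chart is 20M repos/month.
top = 20_000_000

def growth_multiple(baseline):
    return top / baseline

# If the axis starts near 1M, the chart depicts a 20x increase in 2 years:
assert growth_multiple(1_000_000) == 20.0

# If it starts at 19M, the identical curve depicts roughly a 5% increase:
increase = (top - 19_000_000) / 19_000_000
assert round(increase, 2) == 0.05
```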

        The massive surge on their labelled X axis starts in 2026, and these issues have been going on for a lot longer than that. GHA has been borderline unusable for a year at this point, if not longer.

        > But given the rest of the content, I think that is safe to assume.

        The rest of the content is "we're working on it", and "here's two outages in the last 14 days, one of which caused actual data loss"

    • ncruces 20 hours ago
      More numbers: https://x.com/kdaigle/status/2040164759836778878

      What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale, when 10x YoY is not enough?

      • OtherShrezzing 20 hours ago
        As a business user, our costs have gone up while service has gone down dramatically. Meanwhile our marginal cost to GitHub has hardly changed. Where our costs to them have increased, they mostly charge us per cpu minute, so obviously aren’t making any kind of loss on our account.

        I’m sure they’re experiencing scaling issues across the platform, but it’s unacceptable for that to have a negative impact on us when we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.

        • ncruces 20 hours ago
          I understand that, and maybe GitHub became a bad deal because of that.

          But if anything, their post and your reply are precisely an endorsement of usage based billing.

          The bit that's growing 13x YoY (and which they expect will easily blow past that) is unmetered - commits. The bit that is metered (for some, not all folks) - action minutes, grew only 2x YoY.

          GitHub was not built to limit the number of commits, checkouts, forks, issues, PRs, etc. - nor do we want them to - but that's what's growing ridiculously as people unleash hordes of busy-beaver agents on GitHub, because they're either free or unlimited.

          Where there are limits - or usage based billing - people add guardrails and find optimizations.
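          A guardrail of that kind is often just a token bucket in front of the formerly unmetered operation - a minimal sketch, with all rates and capacities hypothetical:

```python
import time

class TokenBucket:
    """Token-bucket limiter: callers get `rate` operations/second with
    bursts up to `capacity`. A hypothetical guardrail, not GitHub's."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 5 operations passes; the 6th is throttled until tokens refill.
t = [0.0]  # fake clock so the example is deterministic
bucket = TokenBucket(rate=1, capacity=5, clock=lambda: t[0])
results = [bucket.allow() for _ in range(6)]
print(results)  # five True, then False
```

          Usage-based billing achieves the same thing economically; the bucket just makes the limit explicit.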

          Because for all the talk, agents don't bring a 10x value increase; otherwise, they'd justify a 10x cost increase.

          Besides, other forges are having issues too. Even running your own. We have Anubis everywhere protecting them for a reason.

          • conartist6 15 hours ago
            That sounds bad. Paying users don't want huge and ever-growing numbers of freeloaders reducing the return for each dollar they spend...

            That would only lead to further and further degradation of service until the paying customers were absolutely desperate to find a deal that didn't require them to lug around such a heavy ball and chain.

            It all made sense at the beginning when Github was free for OSS and OSS was thriving, but now these billions of commits are mostly incredibly low value. I'd bet the average commit now doesn't create 1/10th of the value the average commit did in, say, 2018

        • rdevilla 20 hours ago
          > we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.

          You know, you can just host your own code forge. Or you can just drop gitolite on a server. Or pull directly from each others' dev machines on a LAN.

          GitHub is not git.

        • tracker1 17 hours ago
          I'm curious how Azure DevOps reliability has been for comparison. My current job is managing stories in DevOps with SCC in GitHub ent. While I like Github slightly more, have been curious about the decision.
          • stackskipton 14 hours ago
            We use Azure DevOps at work for a few things. It's been pretty rock solid, since agents don't recommend it and it's a different architecture.

            It's also legacy at this point since Microsoft is pouring all resources into GitHub but for most people/companies, they could probably use Azure DevOps just fine.

            • joeywas 4 hours ago
              Concur on the rock solid comment. We use Azure Devops with git repos, lots of pipelines using self hosted or Microsoft hosted agents. There was an issue with Microsoft hosted agents a few months ago, but that didn't last long, and is the only issue in my memory.

              I do prefer github interface over azure devops.

        • graemep 18 hours ago
          In that case, why are you using them at all?
        • dist-epoch 20 hours ago
          > we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.

          so start a GitHub competitor which bills $50/dev/yr for solving this easy problem and make a lot of money?

      • maccard 20 hours ago
        These numbers should have been in the blog post, not the graphs that are present.

        > What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale

        I think you're putting words in my mouth here; I didn't say either of those things. I'm saying that this blog post is a meaningless platitude when the github stability issues predate this, and that all this post says is "we hear you're having issues".

        • ncruces 19 hours ago
          Sorry if I misread your intent.

          I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).

          Either those charts are a bald-faced lie (the tweet could be as well), or the growth really is what they depict.

          The only way to fake exponential growth like that would be to use an inverse log scale (which would be a bald-faced lie).

          It doesn't even really matter what's the y-axis baseline, unless we really think growth was huge in 2020, then cratered to zero by 2023, now back to the previous normal.

          As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.

          You can already see people complaining loudly where they instead of "we'll do better" decided to limit usage.

          • maccard 19 hours ago
            No problem - it's tough online sometimes.

            > I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).

            The problem is that these charts show the massive exponential growth in 2026. But this didn't start in 2026; this has been going on since early last year. My team had more build failures in 2025 due to Actions outages or "degraded performance" than _any other reason_, and that includes PRs that failed linting or tests that developers were working on.

            > As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.

            IMO, this needed to be written six months ago (around the time the memo about prioritizing the migration to Azure was released), and then this post should have been "We're still struggling; this isn't good enough. Here's the amount of growth, here's what we've done to try and fix it, and here's what we're planning over the next 3-6 months", instead of "Our priorities are clear: availability first, then capacity, then new features" and "We are committed to improving availability, increasing resilience, scaling for the future of software development, and communicating more transparently along the way." This isn't transparency (yet).

    • PunchyHamster 17 hours ago
      You mean since GH acquisition 6 years ago https://damrnelson.github.io/github-historical-uptime/
    • ramon156 20 hours ago
      "We hear you" in ~300 words, basically.
    • ferguess_k 20 hours ago
      You can do the same with so many clients.
  • mijoharas 20 hours ago
    > we started working on path to multi cloud.

    Is this microsoft stating that they aren't able to get acceptable reliability from Azure? (I mean, I think a lot of us have heard that, but it's interesting to hear it from microsoft themselves).

    • derwiki 20 hours ago
      It’s pretty damning. But as someone who has used Azure, I buy it.
      • everfrustrated 20 hours ago
        Pretty damning that two Microsoft subsidiaries - GitHub and LinkedIn - either shelved their forced migration to Azure or are looking at non-Azure options.
    • cbg0 20 hours ago
      I think this is more tailored towards enterprise clients that lose money when Github is down, that would probably help with retention.
      • bombcar 20 hours ago
        You’d think they could have kept the existing GitHub running as-is on whatever it was on (maybe for paying customers) while all the new AI inrush goes to the Azure setup.
      • jofzar 20 hours ago
        Yeah, that's a top-tier enterprise plan feature if I have ever seen one
    • jasoncartwright 20 hours ago
      Seems pretty sensible to not rely on a single provider for their large complex system?
      • embedding-shape 20 hours ago
        Man, you should have been there 6 months ago when they decided to start tearing down GitHub's own data centers and move everything exclusively to Azure. Seems they themselves realized this after they started moving, but imagine if you could have helped them realize this before they even started :)
        • nextaccountic 19 hours ago
          Made me think. Why not convert Github datacenters into Azure datacenters that have Github as their sole customer?

          Then it's up to Azure how they will manage this

          • hobofan 17 hours ago
            That sounds like the worst of both worlds? The Azure division that can't even reliably provide decent infrastructure products out of its own data centers, trying to do the same in a bespoke data center.
        • benterix 19 hours ago
          > Seems they themselves realized this after they started moving

          I guess most people at GitHub knew exactly that it made no sense, but they didn't really have a choice. Maybe some voiced their concerns, got "we hear you" in response, and were told to proceed anyway.

          • embedding-shape 19 hours ago
            Yeah, I don't know how it went down, but I also know exactly how it went down:

            Microsoft Execs: Everyone needs to move to Azure!

            GitHub developers: But Azure is not gonna be able to handle our load, we literally have our own data centers!

            Microsoft Execs: Sure, but you're Microsoft now, please publish blog post about how in half a year you'll be 100% on Azure.

            Few months later...

            GitHub Developer: We've tried our best, users are leaving in droves and Azure can't keep up!

            Microsoft Execs: Ok fine, you can use something else too, but only if you mainly use Azure and continue publishing blog posts about how great Azure is.

            • alper 17 hours ago
              Azure is the MS Teams of clouds.
      • cyanydeez 20 hours ago
        This isn't a mom and pop shop. They have locations all over the world: https://datacenters.microsoft.com/

        There's no intrinsic reason they should be vulnerable to themselves.

        • farfatched 20 hours ago
          +1. Multi-cloud is typically done for vendor independence.

          But GitHub doesn't have that rationale.

        • jasoncartwright 20 hours ago
          That website (for me) uses Cloudflare via WPEngine, which also isn't Azure
      • mijoharas 20 hours ago
        I mean, amazon (shopping, along with prime video e.t.c.) runs on AWS.
        • PunchyHamster 17 hours ago
          It was more "we built AWS to run our stuff and figured out we can sell it too".

          While Azure feels like Temu clone of Cloud

          • grogenaut 2 hours ago
            Actually incorrect. They figured they could sell hardware retail didn't need during non-peak, and retail could become more scalable. They went off in a corner with Uncle Andy for a year or two and built the basics. Like 10 years later, retail was actually using AWS and not something that pretended it wasn't on AWS. MAWS (being on AWS, not bare metal) was like a 2012-2015 thing, and it took forever for NAWS (native AWS) to happen that wasn't Apollo - though Amazon still loves Apollo in many places. Kind of a dirty secret: retail wasn't on AWS until after AWS was really popular.
        • ksimukka 20 hours ago
          When I was at AWS, retail was not yet running on AWS. Has that changed?

          Prime video does use some AWS services, but live and on-demand are two entirely different beasts.

          • mijoharas 19 hours ago
            Really? I thought retail was. It's been almost a decade since I worked at prime video but I think everything was running on AWS. (Some things didn't use brazil etc, but I think all the servers etc. were on AWS)
            • malfist 19 hours ago
              It's a distinction without a difference. All new development is NAWS (native AWS); legacy is MAWS (not sure about the acronym), which is still AWS under the hood and is mostly just a pool of EC2 instances with preconfigured networks. Nothing made in the last five or six years is on MAWS, and Amazon is a microservices shop, so things are always being built new. If you joined today there's a good chance you'd join a team without any MAWS infra.
              • cmckn 17 hours ago
                MAWS is “Move to AWS”, the name of the internal campaign to get legacy services into a somewhat-retrofitted AWS environment. It was a single VPC at one point.
                • malfist 16 hours ago
                  I just finished a nearly five year stint at amazon and didn't realize there was pre-maws stuff still around. Never encountered any of it. I was like two months from my yellow badge but, uh, life is really better outside amazon.
                  • grogenaut 2 hours ago
                    Many parts of AWS are not on AWS. There are reasons to have bare metal, but it's not as common, and AWS gives you good access in most cases.
        • jasoncartwright 20 hours ago
          Prime video uses a non-AWS CDN when I watch football on it here in the UK
          • farfatched 20 hours ago
            The BBC were unable to find a single CDN that could serve the UK during its peak football matches. https://www.bbc.co.uk/webarchive/https%3A%2F%2Fwww.bbc.co.uk...
          • grogenaut 2 hours ago
            that's called load balancing and regional availability. many companies do multi-cdn. in fact it's smart to use multiple CDNs so you can do better in contract time. Twitch uses IVS but we have failover to other CDNs for very large events.
          • jamesfinlayson 6 hours ago
            I'd believe it - CloudFront always felt a bit like AWS ticking a box ("we have a CDN") rather than being a good product to use.
    • zamalek 15 hours ago
      There was a somewhat recent post here about how priorities, pressure, and management subverted Dave Cutler's vision for Azure (which was to have near-zero human involvement) - my Google fu isn't strong enough to find it. Supposedly, someone running over and opening a serial console to a rack/VM is now typical operational procedure.
      • ok_dad 15 hours ago
        • zamalek 14 hours ago
          That's the one!
          • consumer451 6 hours ago
            Amazing read. Thanks to both of you for finding that.

            > I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.

            This reads like a description of the SLS-based (aka Senate Launch System) Artemis program, which somehow ended up deciding that the insane Lunar Gateway should be a thing.

            Destin (SmarterEveryDay on YouTube) [0] called out the entire nutball scheme to NASA, at NASA. This includes the SLS/Orion/Lunar Gateway insanity, and calling out the number of unknown, but very large number, of on-orbit refuelings that Starship would need to get to the moon.

            In that video's comments, I believe there is someone who worked on the Orion-related system, who says ~"Yeah, we thought the delta-v was too low, we could have increased it, but no one was speaking with each other at a whole system level."

            The mission drift at large orgs, gov and corp, is a huge problem that might one day be solved?

            [0] https://www.youtube.com/watch?v=OoJsPvmFixU

            • zamalek 3 hours ago
              Large orgs aim to produce some type of output. Their entire existence stems from a "perverse incentive."[1] Governments produce bills and laws, corps produce short-term profits, etc. I am pretty sure that preventing this type of waste consumes significantly more energy than creating the waste - e.g. the agile manifesto, the rework book.

              Jobs was probably a good example of this. In my opinion, his image as an innovator is vastly exaggerated. What he did do well was to not invent things. E.g. Liquid Glass would have never seen the light of day under him: he was adept at saying "no" and preventing waste - Apple is now at the whims of anyone with the next stupid idea, the ideal example of wasteful behavior.

              [1]: https://en.wikipedia.org/wiki/Perverse_incentive

      • pbronez 15 hours ago
    • youwangd 16 hours ago
      Show HN timing matters more than people think. Monday-Thursday, 9-11am Pacific, is when the front page has the most engaged readers. Weekend posts get less competition but also less engagement.
    • tedd4u 13 hours ago
      > multi-cloud

      XXXXL-size project. May never deliver. But if it fails, it will only do so after years of grinding through people, resources, etc.

    • jansan 20 hours ago
      The entire concept of multi cloud is amusing if you think about what cloud was originally supposed to be. They could call them meta clouds (might infringe trademarks), and with the current growth trajectory of AI-generated code eventually multi-meta-clouds, renamed to beyond-clouds, and then multi-beyond-clouds. I see no limits.
  • s_ting765 20 hours ago
    > Vladimir Fedorov is GitHub's Chief Technology Officer .... He currently serves on the board of Codepath.org, an organization dedicated to reprogramming higher education to create the first AI-native generation of engineers, CTOs, and founders.

    I think I found the issue.

  • BlackFingolfin 20 hours ago
    GitHub stability has been bad for me. And recently even the data they show me on the web has been unreliable.

    Since yesterday, me and several colleagues noticed that the pull request lists on the website are incomplete, across many repositories. For example, on https://github.com/gap-system/gap/pulls it says "Pull requests 78" in the "tab list", but the PR list view reports "35 open" (the number 78 is correct, and confirmed by e.g. `gh pr list`)

    And that despite <https://www.githubstatus.com> reporting "all systems operational".

    • matharmin 18 hours ago
      Many of my projects don't show any closed pull requests for the last 6 days. The CLI can list them, but anything going through search shows nothing.

      Their support acknowledged the issue, but has been silent since then, and the status page still shows nothing other than the potentially-related issue on the 27th. It looks like it has been resolved on some repositories in the meantime, but I still have the issue across multiple orgs and repositories.

      https://github.com/orgs/community/discussions/193388

      • tracker1 17 hours ago
        I'm not able to see the current release-please PR, and the last one broke during release creation, so it aborted the deploy. Hoping today goes better, but limited expectations after yesterday; I may be deploying manually.
    • vinc 17 hours ago
      I noticed the same thing and indeed the status page is not reporting the issue. I could find the missing PRs by browsing the branches page.
    • embedding-shape 20 hours ago
      > For example, on https://github.com/gap-system/gap/pulls it says "Pull requests 78" in the "tab list", but the PR list view reports "35 open" (the number 78 is correct, and confirmed by e.g. `gh pr list`)

      Surely a scaling hack where they use "estimation" queries that return "kind of right" results instead of 100% correct data, as it's less load on the infrastructure. Not necessarily a bug as much as a shit choice from a product perspective.

      • BlackFingolfin 19 hours ago
        If the numbers were all that is wrong, that'd be OK. But it fails to list all the data -- so the only way to navigate to the missing PRs is to know their number and manually enter the right URL (or to go to another PR and then edit the URL in the navigation).

        Sorry, but I don't think there is any way this can be classified as "not actually a bug"

  • darkwater 20 hours ago
    Glad that they released some data about new repos/issues/commits over the last years. It confirms what everyone on the outside already believed: agents are putting a lot of extra, sudden pressure on GitHub. It's like a startup that is growing exponentially, with the difference that they already have a large user base to serve - which keeps them in the bullseye - and probably a not-so-fast-moving organization when it comes to changes. On the other side of the coin, they also have a lot of talent, infra and money a startup might not have yet.
  • LiamPowell 20 hours ago
    I cannot figure out what on Earth they've done with these graphs; it almost seems like these are an artist's impression of a graph.

    Looking at the commit graph: Why do commits have big steps followed by slow rolloffs? Why do the steps not happen at uniform points? Why do larger steps sometimes have less of a slope than smaller steps, but not all the time?

    Then looking at the other graphs, there are completely different effects going on.

    • jospeh554 17 hours ago
      It's because they are your standard PowerPoint graph that just shows "thing goes up" rather than actual data, or the meaning of the data.
      • arnitdo 15 hours ago
        They seem to be the result of an image-gen model to me

        If this is the unvetted and baseless information they are putting out in public-facing blogs, only the stars would know what data is being "presented" in their boardrooms.

  • icy 20 hours ago
    I'm biased (founder of tangled.org), but the future really should be federated forges. Host repositories on sovereign infra with global identity + federated "metadata" (issues, pulls, etc.).

    Global indices for this should be trivial to spin up so availability is never a concern (we're working towards this!).

    • PunchyHamster 17 hours ago
      It's a cute idea, but most people don't want to host their own stuff.

      And if they are using 3rd parties to host their stuff, inevitably 1-3 big players will show up offering that as a service.

      And even if you do host your own stuff to avoid availability problems, the big actors can still fail just like GH and you can't do shit coz your dependencies need it.

      So the solution is the same as it is now: proxy or mirror everything you use.

      • icy 16 hours ago
        Yeah that's fine, we offer first-party hosting for free forever.
        • mperham 7 hours ago
          > we offer first-party hosting for free forever.

          You should probably stop promising this.

    • ArcHound 20 hours ago
      But, there are? I can host a repo on GitHub, Codeberg and self-host it too. Then I need to watch over main to keep it consistent between those. After that's established, I can do updates from wherever. Link 'em in the README.
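      For the "push everywhere" part, git itself can do the fan-out: a remote can carry several push URLs, so one `git push` updates every mirror. A minimal sketch - the remote URLs below are placeholders, not anyone's actual setup:

```shell
# Sketch: one "git push origin" updating several forges at once.
# All URLs below are placeholders.
cd "$(mktemp -d)"
git init -q demo && cd demo

# Add the remote, then register each mirror as an extra push URL.
# The first set-url --add --push must repeat the primary URL, because
# adding any explicit push URL stops git from reusing the fetch URL.
git remote add origin git@github.com:me/project.git
git remote set-url --add --push origin git@github.com:me/project.git
git remote set-url --add --push origin git@codeberg.org:me/project.git
git remote set-url --add --push origin git@my-server:me/project.git

# "git push origin main" would now push to all three mirrors.
git remote get-url --push --all origin
```

      Fetches still come from the first URL only, so one forge stays the source of truth while pushes fan out.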
      • embedding-shape 20 hours ago
        There are distributed forges? Yes, git is distributed, but often everything around it isn't. The case the parent is trying to make is that the rest ("federated forges") should also be distributed, not just git.
        • ArcHound 20 hours ago
          Ok, gotcha. So there's a demand for the additional features that are not bundled within git to be federated somehow.

          I'd say we have emails, mailing lists and bug trackers. Or maybe: what is the missing killer feature that needs federation?

          • embedding-shape 20 hours ago
            > what is the missing killer feature that needs federation?

            Issues, pull requests, collaboration/permissions/access, "starring"/"favoriting", etc.

            I think ultimately the goal is that people can run their own forges, yet still collaborate on repositories hosted in other forges, leveraging your existing authentication so you no longer need to sign up individually for each forge.

      • nibbleyou 20 hours ago
        There's also a tool to automatically push it to multiple repos: https://github.com/prashantsengar/GitEcho

        Disclaimer: the author is a colleague of mine

        Though to be fair, what the parent meant by federated forges is different than this approach.

    • ljm 19 hours ago
      I would love it if coding agents didn't default to GitHub for their deep VCS integration.

      If I could get the same bells and whistles by wiring up another forge, so long as it offered a decent API and/or sent events over a webhook, I'd have everything self-hosted.

      The agents would need to expose an interface on their own end, but as long as you implemented it with a plugin, it'd remove the dependency on GitHub, and you could use MCP or skills for the rest of it.

      • icy 19 hours ago
        The neat thing about Tangled is it's built on an open protocol (https://atproto.com) - this allows us to build an effectively API-free system, since all data on Tangled can be ingested via the AT Protocol firehose.

        Which is to say, this is perfect for agents, given they don't need any bespoke SDK from us: simply write Tangled records for issues, pulls, whatever to your PDS and they'll show up on Tangled. We plan to start working on some first-party exemplar agents that would 1. enhance Tangled itself, and 2. showcase cool things you can do with an open data firehose.

        • ljm 10 hours ago
          You do realise that writing Tangled records for issues, pulls, whatever constitutes both a spec and an API.

          The fact that you use a protocol to define it is beside the point. You still have to define what a Tangled record is, and the interface that accepts it, and the mechanism to resolve it on the client.

          How else do you define what a "tangled" is, even if the underlying structure is git?

    • ramon156 20 hours ago
      Love the idea; I would replace the LLM-generated content on your site, though.

      I recently migrated to codeberg because I'm okay with self-hosting big runners, while using codeberg's available runners for smaller cron-based things (they even have lazy runners for this).

      • icy 20 hours ago
        It’s… all hand written? We just sound “professional”.
    • sikozu 20 hours ago
      I've never heard of this before, going to sign up and check it out!
      • icy 20 hours ago
        Thanks! If you need anything, email me anirudh@!
    • beernet 20 hours ago
      What is "sovereign infra" exactly?
      • mathgeek 20 hours ago
        I know it's just marketing speak, but the term made me think of the scenes in the Matrix where what's left of humanity (ignoring all the cyclical lore that was added on top of it) has to make sure the machines can't remote in to any of their tech.
      • tfrancisl 20 hours ago
        No less than self-hosted, imo. If you're on some cloud, it doesn't really matter that you pay them absurd amounts of money; you aren't sovereign.
        • beernet 18 hours ago
          So if a company self-hosts their physical infrastructure, which will burn down if a fire breaks out, they are more "sovereign" than a company running on a redundant cloud? I definitely would not want to be "sovereign" then.

          Point is: this discussion is much more multi-dimensional than some suggest.

          • tfrancisl 11 hours ago
            A redundant cloud that could be rug-pulled from you any day if the platform decides you are in violation of their terms, or if they just don't like your project. Yes, on-prem is more sovereign than that. That doesn't mean it doesn't have drawbacks, and no one said it didn't. But if sovereignty is more important than redundancy, then on-prem is certainly an option.
        • embedding-shape 20 hours ago
          So literally a computer at home/in the office, as with anything else you don't really "own" the infrastructure? Or is this just about "cloud"?
          • icy 20 hours ago
            Yeah sorry it's marketing BS speak for self-hosted or just infra that you control. It could be a VPS, it could be a Raspberry Pi at home. Your repos live on your servers. (And we support this on Tangled today!)
            • embedding-shape 20 hours ago
              > just infra that you control

              But a VPS isn't actually infrastructure you control, you essentially have as much control over it as "cloud", so I don't think that'd be counted as "sovereign", would it?

              • icy 20 hours ago
                Perhaps, but it's still better than nothing!
    • iso1631 18 hours ago
      > the future really should be federated

      The internet should not be centralised, but you can't make a billion dollar company without capturing the world and selling your company to a trillion dollar company

  • frangonf 20 hours ago
    What are we doing?

    Stop subsidizing tokens now that we've extracted enough training data from you and have enough agentic-junkie business to keep the flywheel going, and cut the loss leaders. [0]

    [0] https://news.ycombinator.com/item?id=47923357

  • latexr 20 hours ago
    > The main driver is a rapid change in how software is being built. Since the second half of December 2025, agentic development workflows have accelerated sharply.

    GitHub's instability started way before that. I understand it's too much to ask of a trillion-dollar corporation to consider the impact of their own actions, but perhaps they should've thought of that before forcing LLM development down everyone's throats.

    • mathgeek 20 hours ago
      While they contributed, they were still following the market trend anyway. If they weren't letting folks use it directly, other companies would have (and are).
      • latexr 20 hours ago
        > they were still following the market trend anyway.

        They started the trend with Copilot.

        > If they weren't letting folks use it directly

        There is a chasm of difference between “letting you use it” and “forcing it down your throat”. Microsoft is doing the latter, not the former. Copilot is annoyingly present by default at every step on GitHub.

        • jcgrillo 4 hours ago
          They should have just called it Clippy and revived the animated avatar.
  • torben-friis 20 hours ago
    Not enough attention is being paid to the production/delivery mismatch.

    GitHub is claiming they require 30x scale due to the giant increase in repository creation, PRs, commits, etc.

    I have not seen a single increase in product features or quality as an end user, nor have any significant new products come out in this period (other than the LLMs themselves).

    Where is all this code going?

    • jmbwell 19 hours ago
      I understood it to mean, GitHub is being crushed by LLM/AI/Agentic code review and submission, not GitHub’s code itself

      What I’m not seeing here but I am seeing with the Linux kernel is, most of the automatically submitted code is irrelevant or not useful

      (Maybe that’s what you were getting at, apologies)

      • torben-friis 13 hours ago
        >GitHub is being crushed by LLM/AI/Agentic code review and submission, not GitHub’s code itself

        Yes, that was the intended meaning, sorry if it wasn't clear.

        My point was that, if we can assume GitHub's load is a decent proxy for global code generation, we're generating 30x the code without 30x the results.

        30x means that iOS could gain as many features in a single year as it has accumulated over its entire development. I don't think there is evidence of even 2x delivery in the industry.

    • whstl 20 hours ago
      I for one believe Microsoft when they say this code is going to Github... to die.

      Half of my friends are vibe-coding something, but they can barely get the rest of the group chat to use it once.

      In companies, I see people vibe-coding "miracle apps" that fall apart under the smallest amount of scrutiny.

      Basically people are doing the same developers do when they say "I can do this in a weekend", which is getting a prototype sort of running and then immediately losing energy (or in this case lacking ability) to push it forward.

      • jamesfinlayson 5 hours ago
        Yeah, I was talking to someone recently who needed some feature in a long-abandoned tool. They vibe-coded the feature and it worked, so good for them, but then they ended up vibe-coding a bunch of extra features that they didn't need, just because.
      • jansan 20 hours ago
        > Half of my friends are vibe-coding something, but they can barely get the rest of the group chat to use it once.

        Some people I know can't even explain what they are trying to create.

    • yakattak 18 hours ago
      To die. I’m sure that’s nothing new for GitHub, but now it can happen at scale.
  • zamalek 15 hours ago
    > Our priorities are clear: availability first, then capacity, then new features.

    No mention of Copilot/slopification. Probably an intentional omission, as Microsoft only has one true priority across all of its products.

  • jftuga 20 hours ago
    Some interesting tidbits:

    * we had to resolve a variety of bottlenecks that appeared faster than expected from moving webhooks to a different backend (out of MySQL)

    * from redesigning the user session cache to redoing authentication and authorization flows to substantially reduce database load

    * we accelerated parts of migrating performance- or scale-sensitive code out of the Ruby monolith into Go

    I'd like to know what database backend they migrated to. I was also surprised to read that the migration from Ruby to a more performant language had not already been completed. I assume this is because it is a large code base with many moving parts, etc.

    • mohsen1 20 hours ago
      Another interesting bit: they are hitting performance issues due to the rise of monorepos. GitHub and frankly Git were not designed for monorepos
      • ghthor 18 hours ago
        Yet the Linux kernel is a monorepo
        • guipsp 15 hours ago
          The Linux kernel is pretty small
        • mohsen1 16 hours ago
          Try google3
  • danra 2 hours ago
    When it comes down to brass tacks, the most common GitHub action, actions/checkout, is not taking contributions due to "focus [...] on strategic areas" [0] despite having years-old issues - here's one [1] that will soon celebrate its sixth birthday despite having an available PR!

    [0] https://github.com/actions/checkout#note

    [1] https://github.com/actions/checkout/issues/270#issue-6289677...

  • clvx 18 hours ago
    With this prioritization, GitHub IPv6 support is gonna happen in the next decade.
    • snihalani 2 hours ago
      IPv6 doesn't sound like a huge lift at the entrypoints. Moving internal networking to IPv6-only sounds like an impossible lift.
  • himata4113 20 hours ago
    so what they're saying is that Co-Authored-By [email protected] is overloading their systems?

    and that azure cannot scale fast enough to handle the load so they're embracing multi-cloud as a company... owned by microsoft?

    woah. what am I reading.

  • baq 20 hours ago
    openai, anthropic, google and a plethora of chinese models all end up pushing code into github. you can discuss whether gpt 5.5 is better than opus 4.7, but for github it doesn't matter: they'll be receiving the code no matter which llm spits it out.

    amazing on one hand, quite scary on the other for github and all other forges if this continues and there is no reason why it wouldn't.

    • graemep 18 hours ago
      Simple solution: charge all users. Charge more for higher usage.
      • gattr 17 hours ago
        And/or provide a baseline free tier, corresponding to how much a typical human user would at most push/clone etc. They have pre-LLM statistics on that.
  • mrhottakes 16 hours ago
    LLMs have helped us invent websites that only work sometimes. We're truly living in the future.
  • pluc 20 hours ago
    There are no words that Microsoft can use that would make me trust Microsoft.
  • sikozu 20 hours ago
    This latest incident was the nail in the coffin for me. I've been on GitHub since 2012 but I'm feeling the pull to migrate out to Gitea/Forgejo. Has anybody done this recently? How'd it go?
    • embedding-shape 20 hours ago
      When one of the incident they write about here happened, I wrote about my experience moving from GitHub to Forgejo which I happened to complete just the night before that happened: https://news.ycombinator.com/item?id=47878192 (lots of other people sharing their experience as replies too)

      I was thinking of maybe doing a proper write up about how to host your own Forgejo + Action runners on Linux, Windows and macOS, not sure if there is enough interest. What would people for sure want to know in a guide/explanation of this?

    • sltr 20 hours ago
      I moved over back when GitHub was planning to charge per minute to use my own runner. It was easy with Claude, the gh API, and forgejo web API. I even set up daily backups to my S3 clone of choice.

      The only repos I left on GitHub are forks and one with a bit of public engagement.

  • steve1977 20 hours ago
    I know that I'm simplifying (probably too much), but it seems like things were fine when GitHub was still a Ruby on Rails monolith and all the rigmarole with microservices etc. only made things worse.
    • remus 20 hours ago
      Unless everything else stays the same (underlying traffic etc.), you can't really compare. Could be that you hit some fundamental scaling limit with the old design and it completely falls over after a certain scale.
      • steve1977 19 hours ago
        Oh, as said, I'm pretty sure things are more complex. It's just funny in a way that all these technologies that are usually sold as "enablers for scale" don't seem to do their job very well.
    • tankenmate 20 hours ago
      This sounds more like a belief, based on little more than "correlation is causation", than analysis that controls for macro-trends backed by evidence.
      • sgarland 19 hours ago
        It is, but everyone is entitled to beliefs. Anecdotally, I feel the same way. Everywhere I’ve been, there has always been a legacy monolith that was stable as a rock, with dozens of new microservices scattered around it in an attempt to exit the monolith. The microservices have never once been stable. People fail to take the most basic things into consideration, like “you can’t have Consistency and Availability when everything is a network call.”

        I’m sure survivor bias is at play here, but when I look through the older code bases - especially the data model - it’s an entirely different world than the newer stuff, and it’s clear which of the two was written by people who understand systems.

        • jamesfinlayson 5 hours ago
          I feel like this accurately describes my last two jobs - the rock-solid old system that never fails plus a bunch of inconsequential microservices hanging off it (because microservices are the hot new thing, but the microservice people are still smart enough not to replace core functionality from the old system).
        • steve1977 12 hours ago
          I mean, one of the selling points of microservices is that the developer can be concerned only with their slice of the whole and does not need to understand systems. Maybe this is not so clever after all.

          Or it would require an architect who has a very good understanding of the system. Which in reality seems to be rare.

      • embedding-shape 20 hours ago
        GitHub has been oscillating between long phases of "never any new features, but rock-solid and no downtime" and "new features every week, but also unicorns (what used to be the 'service unavailable' page) every week" for as long as I can remember. Seems they're on some interval, switching between the two.
  • jcattle 20 hours ago
    When there's a gold rush invest in checks notes jewellery makers?
  • eolgun 19 hours ago
    The AI agent growth explanation is interesting but also a bit of a deflection. If a meaningful portion of your traffic is now automated agents, your capacity planning model is fundamentally different: you're no longer scaling for human-paced workflows but for burst patterns that look nothing like historical load.

    The unlabeled graphs don't help the credibility case. When you are already in the hole on trust, shipping a post that requires readers to assume favorable baselines is exactly the wrong move.

  • dangoodmanUT 19 hours ago
    Two incidents? Just two?

    In seriousness, looking at their scale, this is an insane engineering challenge.

    Especially if they're moving databases: never easy, and certainly not at that scale.

  • zinodaur 17 hours ago
    > posts graphs without way to determine scale of y axis

    Now that’s the kind of excellence I expect from the GitHub engineering team

  • otar 20 hours ago
    I had to postpone a call with developers (in 2 different countries) because I didn't have access to the issues board, which is the single source of truth for us.

    I understand the rapid growth (because of AI agents), but if such a critical software service becomes unstable, then it's time to migrate? Thinking about self-hosting GitLab.

    • embedding-shape 20 hours ago
      > but if such a critical software service becomes unstable, then it's time to migrate?

      Right way to think about this:

      > If things we need/see as critical for our work are hosted on a platform with really bad reliability, it's time for us to migrate

      My internet connection at home is really shit, and almost every week there is a multi-hour downtime for some reason - not to mention that when La Liga games are on TV, anything using Cloudflare is unavailable - so I've had to spend extra energy and time to set things up in a way that I can still work whenever this happens.

  • saghm 14 hours ago
    Given what "An Update on <XYZ>" usually means, I can only assume this means that Github has decided to no longer provide availability. Not particularly surprising given current trends I guess
  • mendyberger 19 hours ago
    I wonder if this mess has anything to do with talent loss resulting from layoffs after the pandemic
    • pointlessone 19 hours ago
      I'd guess it has much more to do with the extra load agentic AI generates. If we take the charts in the OP at face value, do you think GH suddenly exploded in popularity? At this point I think almost everyone who has any use for GH already has an account and uses it as much as they ever would. But all the charts go to the moon. GH obviously didn't take into account that AI agents can generate a lot of activity it doesn't have capacity for.
  • guidoiaquinti 20 hours ago
    > While we were already in progress of migrating out of our smaller custom data centers into public cloud, we started working on path to multi cloud. This longer-term measure is necessary to achieve the level of resilience, low latency, and flexibility that will be needed in the future.

    Wild

  • TuxPowered 17 hours ago
    The availability of GitHub is still at 0% - it can't be reached over IPv6.
  • cedws 20 hours ago
    I wonder if they’ll end the free lunch we’ve been having since the MS takeover. There’s been a deluge of spam and crapware projects due to the LLM wave which is visible in that graph. Can’t see them sustaining being a public dustbin for low value projects forever.
    • sbarre 20 hours ago
      I could see them expiring/archiving/deleting inactive projects after some time.

      I feel like this would have negative impacts (lots of interesting historical archives on Github) but maybe if a project hasn't been touched, or cloned, in some time, it just gets deleted with some notice.

      • jamesfinlayson 5 hours ago
        I hope not but it will probably happen.

        Just last week I found an interesting repo that hadn't been touched in 9 years. I immediately cloned it as it was something reverse engineered so DMCA isn't out of the question, but now I have two reasons to clone.

      • rmunn 15 hours ago
        Thing is, projects that don't get touched for months and months are the least costly. Disk space is cheap; what's costly is compute time to process new commits, new/updated/closed issues, new/reviewed/merged PRs, and so on. Inactive projects just sit there taking up disk space but basically zero compute time. So it would make no sense at all for them to delete old, inactive projects. (Which doesn't mean they won't do it: they might have hidden costs I'm unaware of, or they might make stupid decisions. People do make stupid decisions sometimes).
        • ifwinterco 44 minutes ago
          Also creates a perverse incentive to automatically push random commits to make sure your repos stay “active” and don’t get deleted, creating more load
  • nraynaud 20 hours ago
    So I gather that nobody is working on a search that stays on the current branch?
  • GS_Projects 19 hours ago
    The bit nobody covers in these write-ups: small teams without dual-cloud failover budget. Last big GitHub outage cost me a deploy day. Not catastrophic but the kind of thing you don't budget for when GitHub is your single source of truth.

    Status page is also still doing that thing where every component is green but in practice clone is hanging, push is timing out, actions are stuck. Per-service uptime is a managed number. The user-experience number is the one that matters and it's not in the post-mortem.

  • throwatdem12311 20 hours ago
    > The main driver is a rapid change in how software is being built.

    Leopard, meet face.

    Too little too late, yesterday was the straw that broke the camel’s back for us and we’ve started a migration to a self-hosted GitLab.

  • Waterluvian 19 hours ago
    I have a hard time believing anything said in a blog post where a graph lacks axis labels/scale. It tells me that nobody who cares about correctness had any say on the content of the post. Maybe I'm being 8am cranky and pedantic, but I'm sticking with it.

    > availability first, then capacity, then new features.

    I'd love to experience first-hand a leadership team who says, "stop accepting new paying customers until we've got availability sorted out!"

  • sltr 20 hours ago
    One thing is clear: an LLM wrote this.
  • dzonga 19 hours ago
    blame MySQL. Blame Ruby.

    On another note - is the exponential growth from "agentic" workflows actually resulting in productive software in the wild, or is it just noise? On my end I haven't seen the software I use getting better.

  • rootnod3 20 hours ago
    > Our priorities are clear: availability first

    That's a delayed April fool's right?

    • embedding-shape 20 hours ago
      No, just a 6 month old memo that was first opened today, as they said literally the same 6 months ago.
  • pier25 17 hours ago
    Github has been having availability issues for years now.
  • fontain 20 hours ago
    Personally, I'm sympathetic. We know that GitHub did a huge amount of work over the last decade to make Git scale, which has benefited us all. These new scaling challenges are real; 30x growth would be a nightmare for any system that was already pushing the limits of what was possible. I think we are being far too hard on GitHub; they deserve a little grace.
    • someone_eu 20 hours ago
      GitHub's scaling issues are caused by their own vendor lock-in approach and monopoly. Yes, of course _their_ goal is to be even bigger and even more all-consuming, so _they_ have to deal with the scale. Why would a user be sympathetic to that?

      The user answer (as opposed to the big-tech-monopoly answer) to scaling issues is almost always to stop scaling and start federating and interoperating.

    • remus 20 hours ago
      For all the negatives about GitHub, I agree. They offer a lot of free stuff, and LLMs seem likely to massively increase their costs with no guarantee they'll be making money off it. I can't think of many (any?) large businesses which could scale up to meet so much new demand without some significant growing pains along the way.
  • perbu 18 hours ago
    FWIW, I've had good luck scaling git, especially clones, at the HTTP layer using Varnish. In our case it was CI bringing GitHub Enterprise to its knees.
  • BigTTYGothGF 17 hours ago
    LLMs and vibe coding ruining it for the rest of us.
  • lousken 20 hours ago
    Availability is priority? Does not seem like it is https://mrshu.github.io/github-statuses/
  • jameskilton 19 hours ago
    Nice, they have availability numbers now on their status page, but they aren't aggregating.

    If you multiply all current numbers together (as of Apr 28), you find out that GitHub has a 97.26% uptime.

    One ... single ... 9.

    They can do better.

    • embedding-shape 19 hours ago
      Kind of unfair though; do the same for any platform with multiple services and you'd probably get <99% for most of them.

      > you find out that GitHub has a 97.26% uptime

      Converting that to downtime, you get ~40 minutes per day, or roughly 10 days per year. Crazy stuff for something as essential as this.
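
The multiplication and the downtime conversion above can be sketched in a few lines. The per-service uptime figures below are illustrative placeholders, not GitHub's actual status-page values, and the model assumes any one service being down counts as downtime:

```python
# Rough sketch of aggregating per-service uptimes into one figure.
# The numbers are made-up placeholders, not real status-page values.
services = {
    "Git Operations": 0.9995,
    "API Requests": 0.9990,
    "Webhooks": 0.9985,
    "Issues": 0.9992,
    "Pull Requests": 0.9988,
    "Actions": 0.9950,
    "Packages": 0.9980,
    "Pages": 0.9990,
    "Codespaces": 0.9985,
    "Copilot": 0.9990,
}

combined = 1.0
for uptime in services.values():
    combined *= uptime  # "up" means every service is up simultaneously

downtime_min_per_day = (1 - combined) * 24 * 60
print(f"combined uptime: {combined:.2%}")
print(f"~{downtime_min_per_day:.0f} minutes/day where something is down")
```

Even with every individual service at "three nines or better", the product drops below 99%, which is why an aggregate number looks so much worse than any single row on the status page.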

  • bananapub 20 hours ago
    anyone who's actually worked there, could you explain why they're finding scalability and reliability so hard? naively it seems like 'repo groups', ie clusters of repositories linked by being mutual forks, would be fairly isolated for the whole git storage layer, and everything else feels pretty easily parallelisable (issues, actions, etc, modulo taking locks now and then to submit results or whatever). and given that, surely you can incrementally deploy changes across those many shards to avoid most big outages?

    are there big conceptual serialisations that I've missed? is it just not well factored? was the move to Azure just a catastrophically bad idea? some other thing?

    • fontain 19 hours ago
      Almost every high-volume service on the internet writes a little and reads a lot, and when there are writes they're relatively small: a few bytes into a database that can fan out. GitHub is very different: constant writes and large files put it under far more pressure than the systems the rest of us build. And then, as the article says, vibecoding happens and suddenly they're receiving 30x the volume of expensive operations. GitHub is responsible for many of the performance improvements made to Git over the years; Git scales today because of work GitHub did, but that work was never intended for today's volume.

      As recently as 18 months ago, Lovable appeared seemingly overnight and caused huge problems for GitHub by creating a repository for every single Lovable project, hundreds of thousands of repositories, offloading the very high cost onto GitHub. A couple of years before that, Homebrew using GitHub as a de facto CDN was a huge problem, too.

      Nowadays it is easy to imagine how we can scale out a service like Twitter or YouTube or Facebook because everything has been done before, but that's not true of Git: it has never scaled like this before, and there are very few examples of services with GitHub's characteristics.

      https://lovable.dev/blog/incident-github-outage

      https://news.ycombinator.com/item?id=42659111

    • dist-epoch 20 hours ago
      Recently there was a tweet about how GitHub PR diffs had 10 React components PER LINE, and how they optimized that down to only 2 React components per line or so.

      > To summarize, for every v1 diff line there would be:

      > - Minimum of 10-15 DOM tree elements

      > - Minimum of 8-13 React Components

      > - Minimum of 20 React Event Handlers

      > - Lots of small re-usable React Components

      https://github.blog/engineering/architecture-optimization/th...

      • bananapub 20 hours ago
        I'm asking about the infrastructure; obviously they chose, for some reason, to make my computer's fans turn on to show some red and green lines in a text file.
        • dist-epoch 19 hours ago
          terrible frontend architecture suggests poor engineering culture which typically spreads to all teams, including the infrastructure team
  • imrozim 20 hours ago
    As a solo dev, GitHub going down is scary: all my code, all my history, on one platform. This makes me want to take local backups more seriously.
    • tosti 18 hours ago
      Sorry to ask but... Do you have any idea how git works???
    • 2ndorderthought 20 hours ago
      Yea or use another provider like codeberg
      • maccard 19 hours ago
        Personally I'd never use codeberg. Their FAQ on licensing [0] is basically everything that anyone who supports free software should abhor - it's "we might allow you to do what you want to".

        [0] https://docs.codeberg.org/getting-started/faq/#how-about-pri...

      • imrozim 20 hours ago
        True, but switching is not that easy when all your CI pipelines and integrations are in GitHub.
        • embedding-shape 20 hours ago
          I don't think it's 100% compatible, but Gitea's/Forgejo's (which Codeberg runs on) own Action implementation is pretty much the same as GitHub Actions, with minor differences.
          • imrozim 17 hours ago
            Good to know, might actually try it on one project first before switching.
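
On the local-backup point above, plain git already covers it: a mirror clone plus a bundle file. A minimal sketch, using throwaway /tmp paths as stand-ins for a real project (with a real project you'd mirror the GitHub URL instead of the demo repo):

```shell
set -e
# Demo setup: a throwaway repo standing in for your real project.
rm -rf /tmp/demo-src /tmp/demo-backup.git /tmp/demo.bundle
git init -q /tmp/demo-src
git -C /tmp/demo-src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# A mirror clone keeps every branch, tag, and ref, not just one checkout.
git clone -q --mirror /tmp/demo-src /tmp/demo-backup.git

# Refresh the mirror later with: git -C /tmp/demo-backup.git remote update

# A bundle packs the whole history into one portable, restorable file.
git -C /tmp/demo-backup.git bundle create /tmp/demo.bundle --all
git -C /tmp/demo-backup.git bundle verify /tmp/demo.bundle
```

A bundle can later be cloned from directly (`git clone /tmp/demo.bundle restored`), so even if the hosting side disappears, the single file is a complete restore point.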
  • devmor 16 hours ago
    Microsoft has been an abysmal steward of Github - the few nice features it has over self-hosting just aren't worth losing an hour or more of CI/CD downtime during daylight hours every week.

    Yesterday was the last straw for me - I've begun migrating my personal private projects and my contracting firm's projects off of github.

  • OutOfHere 18 hours ago
    > we accelerated parts of migrating performance or scale sensitive code out of Ruby monolith into Go.

    I am surprised that Microsoft is allowed to use Go. How long will it be before a bean counter forces a rewrite in a Microsoft-favored language?

    • senderista 13 hours ago
      They used Go for the new TypeScript compiler!
  • everfrustrated 20 hours ago
    So they haven't even finished migrating from their datacenters to Azure and have now started a project to add another cloud provider ("multi cloud")? Madness.
  • JimmaDaRustla 17 hours ago
    AS IF THEY POST THIS WHILE THEIR SEARCH IS BROKEN, what a circus
  • agluszak 18 hours ago
    Regarding their image with stats (https://github.blog/wp-content/uploads/2026/04/record-accell...) - what exactly are the ranges on y-axes? I doubt they had close to 0 PRs merged in 2023 ;)
  • yieldcrv 20 hours ago
    Ruby catching strays

    Got a good chuckle out of this post. It's crazy that neither Atlassian (Bitbucket) nor GitLab is capturing value from this same agentic coding boom. I wish GitHub were separately publicly traded outside of Microsoft.

    Nowhere to get exposure to this

  • 000ooo000 18 hours ago
    Load from paying customers vs. load from nonpaying users would be interesting to know. No doubt omitted deliberately.
  • dangus 12 hours ago
    Notice how the graphs have no Y axis. That's how you know it's manipulative.

    This company is owned by one of the major causes of the AI boom and is hiding behind difficulty scaling, despite its parent company also being a premier source of scaling solutions.

    GitHub: don't gaslight your customers.

    It is not your customers' problem that you're having trouble scaling. Nobody cares. Give us the service we are paying you for and make it reliable, or else we'll choose something else.

    After the words "Both of those incidents are not acceptable" the blog post should have been over. Nobody needs to hear a sob story about how your service is too popular.

  • jimmypk 19 hours ago
    [dead]
  • huijzer 20 hours ago
    I’m pretty sure my Forgejo instance on a Raspberry Pi is outperforming GitHub on reliability. It’s faster, that’s for sure.
    • huijzer 12 hours ago
      Why the downvotes? I’m serious. On GitHub I’ve experienced many downtimes. My Forgejo hasn’t gone down yet apart from reboots by me.