The Wayback Machine - https://web.archive.org/web/20250319091611/https://www.githubstatus.com/

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
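Beyond the browsable pages, this status site also exposes a machine-readable endpoint following the standard Statuspage v2 convention (`/api/v2/status.json`). A minimal sketch of reading the overall status, using a sample payload in place of a live request (the payload shape below is the standard Statuspage format, not copied from a live response):

```python
import json

# Live endpoint (Statuspage v2 convention):
#   https://www.githubstatus.com/api/v2/status.json
# A sample payload is parsed here instead of making a network request.
SAMPLE = json.loads("""
{
  "page": {"name": "GitHub", "url": "https://www.githubstatus.com"},
  "status": {"indicator": "none", "description": "All Systems Operational"}
}
""")

def overall_status(payload: dict) -> str:
    """Return the one-line summary, e.g. 'All Systems Operational'."""
    return payload.get("status", {}).get("description", "unknown")

print(overall_status(SAMPLE))  # prints "All Systems Operational"
```

The same convention offers `summary.json` and `incidents.json` for per-component detail.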

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Mar 19, 2025
Completed - The scheduled maintenance has been completed.
Mar 19, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 21:00 UTC
Scheduled - Migrations will be undergoing maintenance starting at 21:00 UTC on Tuesday, March 18, 2025, with an expected duration of up to eight hours.

During this maintenance period, users will experience delays importing repositories into GitHub.

Once the maintenance period is complete, all pending imports will automatically proceed.

Mar 18, 19:28 UTC
Resolved - This incident has been resolved.
Mar 19, 00:55 UTC
Update - Actions is operating normally.
Mar 19, 00:55 UTC
Update - The provider has reported full mitigation of the underlying issue, and Actions has been healthy since approximately 00:15 UTC.
Mar 19, 00:55 UTC
Update - We are continuing to investigate issues with delayed or failed workflow runs with Actions. We are engaged with a third-party provider who is also investigating issues and has confirmed we are impacted.
Mar 19, 00:22 UTC
Update - Some customers may be experiencing delays or failures when queueing workflow runs.
Mar 18, 23:45 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 18, 23:45 UTC
Mar 18, 2025
Resolved - This incident has been resolved.
Mar 18, 18:45 UTC
Update - We are seeing recovery and no new errors for the last 15 minutes.
Mar 18, 18:28 UTC
Update - We are still investigating infrastructure issues, and our provider has acknowledged the issues and is working on a mitigation. Customers might still see errors when creating messages or new threads in Copilot Chat. Retries might be successful.
Mar 18, 17:42 UTC
Update - We are still investigating infrastructure issues and collaborating with providers. Customers might see some errors when creating messages or new threads in Copilot Chat. Retries might be successful.
Mar 18, 16:42 UTC
Update - We are experiencing issues with our underlying data store, which is causing a degraded experience for a small percentage of users of Copilot Chat on github.com.
Mar 18, 16:00 UTC
Investigating - We are currently investigating this issue.
Mar 18, 15:58 UTC
Resolved - This incident has been resolved.
Mar 18, 17:15 UTC
Update - We are seeing improvements in telemetry and are monitoring for full recovery.
Mar 18, 16:56 UTC
Update - We've applied a mitigation to fix the issues with queuing Actions jobs on macos-15-arm64 Hosted runner. We are monitoring.
Mar 18, 16:36 UTC
Update - The team continues to investigate issues with some Actions macos-15-arm64 Hosted jobs being queued for up to 15 minutes. We will continue providing updates on the progress towards mitigation.
Mar 18, 15:43 UTC
Investigating - We are currently investigating this issue.
Mar 18, 15:05 UTC
Mar 17, 2025
Resolved - This incident has been resolved.
Mar 17, 23:02 UTC
Update - We saw a spike in error rate for Issues-related pages and API requests, due to problems with restarts in our Kubernetes infrastructure that, at peak, caused 0.165% of requests to these API surfaces to see timeouts or errors over a 15-minute period. At this time we see minimal impact and are continuing to investigate the cause of the issue.
Mar 17, 23:01 UTC
Update - We are continuing to investigate reports of issues with Issues. Users may see intermittent HTTP 500 responses when using Issues; retrying the request may succeed.
Mar 17, 21:25 UTC
Update - We are continuing to investigate reports of issues with Issues. We will keep users updated on progress towards mitigation.
Mar 17, 20:51 UTC
Update - We are investigating reports of issues with Issues. We will continue to keep users updated on progress towards mitigation.
Mar 17, 19:19 UTC
Investigating - We are investigating reports of degraded performance for Issues
Mar 17, 18:39 UTC
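Several updates above advise that retrying a failed request may succeed. A generic retry-with-exponential-backoff sketch of that advice (this is not a GitHub API; the `flaky` function below is invented to simulate intermittent HTTP 500 responses):

```python
import random
import time

def retry(fn, attempts=4, base_delay=0.5):
    """Call fn, retrying on exception with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # back off: base, 2x, 4x, ... plus up to 100 ms of jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Demo: a simulated endpoint that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("HTTP 500")
    return "ok"

print(retry(flaky, base_delay=0.01))  # prints "ok" after two retries
```

Backoff with jitter avoids hammering a degraded service with synchronized retries.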
Mar 16, 2025

No incidents reported.

Mar 15, 2025

No incidents reported.

Mar 14, 2025

No incidents reported.

Mar 13, 2025

No incidents reported.

Mar 12, 2025
Resolved - On March 12, 2025, between 13:28 UTC and 14:07 UTC, the Actions service experienced degradation leading to run start delays. During the incident, about 0.6% of workflow runs failed to start, 0.8% of workflow runs were delayed by an average of one hour, and 0.1% of runs ultimately ended with an infrastructure failure.

The issue stemmed from connectivity problems between the Actions services and certain nodes within one of our Redis clusters. The service began recovering once connectivity to the Redis cluster was restored at 13:41 UTC. These connectivity issues are typically not a concern because we can fail over to healthier replicas. However, due to an unrelated issue, there was a replication delay at the time of the incident, and failing over would have caused a greater impact on our customers.

We are working on improving our resiliency and automation processes for this infrastructure to improve the speed of diagnosing and resolving similar issues in the future.
Mar 12, 14:07 UTC
Update - We have applied a mitigation for the affected Redis node and are starting to see recovery with Actions workflow executions.
Mar 12, 13:55 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 12, 13:28 UTC
Mar 11, 2025

No incidents reported.

Mar 10, 2025

No incidents reported.

Mar 9, 2025

No incidents reported.

Mar 8, 2025
Resolved - On March 8, 2025, between 17:16 UTC and 18:02 UTC, GitHub Actions and Pages services experienced degraded performance leading to delays in workflow runs and Pages deployments. During this time, 34% of Actions workflow runs experienced delays, and a small percentage of runs using GitHub-hosted runners failed to start. Additionally, Pages deployments for sites without a custom Actions workflow (93% of them) did not run, preventing new changes from being deployed.

An unexpected data shape led to crashes in some of our pods. We mitigated the incident by excluding the affected pods and correcting the data that led to the crashes. We’ve fixed the source of the unexpected data shape and have improved the overall resilience of our service against such occurrences.

Mar 8, 18:11 UTC
Update - Actions is operating normally.
Mar 8, 18:11 UTC
Update - Actions run start delays are mitigated. Actions runs that failed will need to be re-run, and impacted Pages sites will need to re-run their deployments.
Mar 8, 18:10 UTC
Update - Pages is operating normally.
Mar 8, 18:00 UTC
Update - We are investigating Actions run start delays: about 40% of runs are not starting within five minutes, and Pages deployments on GitHub-hosted runners are impacted.
Mar 8, 17:50 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages
Mar 8, 17:45 UTC
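The updates above note that runs which failed during the incident need re-running. A minimal sketch of finding candidates, assuming records shaped like the Actions REST API's workflow-run objects (the `id`, `conclusion`, and `created_at` fields; the sample data below is invented for illustration):

```python
from datetime import datetime, timezone

# Incident window from the postmortem above (March 8, 2025).
WINDOW_START = datetime(2025, 3, 8, 17, 16, tzinfo=timezone.utc)
WINDOW_END = datetime(2025, 3, 8, 18, 2, tzinfo=timezone.utc)

# Sample records; real data would come from
# GET /repos/{owner}/{repo}/actions/runs.
runs = [
    {"id": 101, "conclusion": "failure", "created_at": "2025-03-08T17:30:00Z"},
    {"id": 102, "conclusion": "success", "created_at": "2025-03-08T17:45:00Z"},
    {"id": 103, "conclusion": "failure", "created_at": "2025-03-08T19:00:00Z"},
]

def needs_rerun(run: dict) -> bool:
    """Failed runs created inside the incident window are re-run candidates."""
    created = datetime.fromisoformat(run["created_at"].replace("Z", "+00:00"))
    return run["conclusion"] == "failure" and WINDOW_START <= created <= WINDOW_END

candidates = [run["id"] for run in runs if needs_rerun(run)]
print(candidates)  # run ids to re-run, e.g. with `gh run rerun <id>`
```

Filtering by the incident window avoids re-running failures unrelated to the outage.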
Mar 7, 2025
Resolved - On March 7, 2025, from 09:30 UTC to 11:07 UTC, we experienced a networking event that disrupted connectivity to our search infrastructure, impacting about 25% of search queries and indexing attempts. Searches for PRs, Issues, Actions workflow runs, Packages, Releases, and other products were impacted, resulting in failed requests or stale data. The connectivity issue self-resolved after 90 minutes. The backlog of indexing jobs was fully processed and saw recovery soon after, and queries to all indexes also saw an immediate return to normal throughput.

We are working with our cloud provider to identify the root cause and are researching additional layers of redundancy to reduce customer impact in the future for issues like this one. We are also exploring mitigation strategies for faster resolution.

Mar 7, 11:24 UTC
Update - We continue investigating a degraded experience when searching for issues, pull requests, and Actions workflow runs.
Mar 7, 10:54 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:27 UTC
Update - Searches for issues and pull requests may be slower than normal and may time out for some users.
Mar 7, 10:12 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:06 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:05 UTC
Investigating - We are currently investigating this issue.
Mar 7, 10:03 UTC
Mar 6, 2025

No incidents reported.

Mar 5, 2025

No incidents reported.