BekahHW

Posted on • Originally published at bekahhw.com

Beyond Stars and Forks: Why Open Source Needs Better Collaboration Metrics

When we were working on the Intro to Open Source course, one of the biggest pain points we noted with new contributors was the frustration they felt when their PRs weren’t merged in what they felt was a reasonable amount of time. They had done their research, found an issue, gotten assigned, and then…nothing. No feedback. No merge. Just silence.

It’s a familiar story. I’ve had contributors tell me, “My PR has been sitting there for two weeks and I haven’t heard a thing.” And I get it. There are so many reasons this happens, including burnout, abandoned projects, and the lottery factor, and it’s rarely about bad intentions. That’s why I always tell contributors to join the community before contributing. It helps you understand the project’s rhythms, how to communicate with maintainers, and whether it’s a space that supports new members.

If you’ve read anything I’ve written in the last five years, you know I care deeply about the open source community. But the way we have traditionally evaluated projects doesn’t give enough insight into what’s most meaningful. A lot of times, these metrics—stars, forks, downloads, and DORA metrics—miss the most important part of the story: how people collaborate.

A Different Approach to Open Source Metrics

Since OpenSauced shut down, I’ve been exploring different options for understanding the collaboration problem.
Collab.dev isn’t a replacement for OpenSauced, but it’s telling a different (and important) part of the story and capturing how people collaborate. It surfaces the human patterns behind the code, like review responsiveness, contributor distribution, and merge dynamics. Industry-accepted metrics like DORA are valuable for understanding software delivery performance, but not so much in the human department. They can tell you how fast code gets deployed, but not whether contributors feel supported, welcomed, or left in the dark. Open source is as much about relationships as releases, and we need metrics that reflect that.

The Collaboration Visibility Gap

The problem isn't just that our current metrics are incomplete. Vanity metrics have been touted as meaningful indicators of a project’s health, and, to be direct, they’re just not that important.

If we consider the challenges maintainers face every day, we'll see that it’s often difficult to:

  • Identify which contributors are most likely to become long-term participants.
  • Pinpoint exactly where the review process stalls or breaks down.
  • Understand if the community environment genuinely feels welcoming to newcomers.
  • Distinguish between sustainable growth and problematic scaling.

The path isn’t clearer for potential contributors either. They often struggle to determine:

  • Whether the project actively reviews and merges community contributions.
  • How long it typically takes for contributions to be reviewed.
  • Which maintainers are most responsive in the contributor’s area of interest.
  • If there’s a healthy balance between contributions from the core team and the wider community.

Many of us have to make a decision about where to invest our time and energy. It can be a real letdown to invest that time and realize we made the wrong decision, coming out of it with nothing to really show for the work we’ve done. For instance, I recently wanted to learn more about cognee, an AI memory management framework, so I created a collab.dev cognee page to learn more about the collaboration happening there. When you first look at cognee on GitHub, it looks like a growing open source project with a decent star count, active issues, and regular commits. But looking at Collab.dev’s dashboard, I get a richer story.

The Story cognee’s Data Tells Through Collab.dev

Contributor Distribution

When we think about good contributor distribution in an open source project, that usually means responsibilities, activity, and knowledge aren’t tied to a few people. Distribution decreases burnout, builds project resilience, and creates a more welcoming environment. What we see with cognee is a genuinely balanced project. With 51% of contributions coming from the core team and 49% from the community, cognee has built real community ownership without abandoning maintainer responsibility, and we can draw a connection between that collaborative environment and the community’s higher motivation to support the project.

Contributor distribution graph
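If you want a rough, do-it-yourself version of that split, a sketch like the one below works against GitHub’s REST API. To be clear, this isn’t how collab.dev computes its numbers: it treats GitHub’s author_association field as a stand-in for “core team,” only samples the first page of closed PRs, and the repo slug is just illustrative.

```python
# Rough sketch (not collab.dev's method): estimate the core-team vs community
# split of merged PRs using the GitHub REST API.
import requests

REPO = "topoteretes/cognee"  # illustrative repo slug; swap in any owner/name

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 100},  # first page only; a real tool would paginate
    timeout=30,
)
resp.raise_for_status()

core = community = 0
for pr in resp.json():
    if not pr.get("merged_at"):
        continue  # skip PRs that were closed without being merged
    # Treat OWNER/MEMBER/COLLABORATOR as "core team"; everything else as community.
    if pr.get("author_association") in {"OWNER", "MEMBER", "COLLABORATOR"}:
        core += 1
    else:
        community += 1

total = core + community
if total:
    print(f"core team: {core / total:.0%}, community: {community / total:.0%}")
```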

PR Lifecycle Metrics

It continues to get interesting. The review funnel shows that 88% of PRs receive reviews and 85.2% get approved. That approval rate tells a story about quality control and contributor experience. It suggests that either the project has excellent contribution guidelines that help people submit good PRs, or the maintainers are actively helping contributors improve their work rather than just rejecting it. On top of this, there’s a quick turnaround with a median response time of 1.9 hours and 42% of reviews happening within an hour. They’re not waiting three weeks for a review. The maintainers are cultivating a positive contributor experience through their responsiveness.

cognee lifecycle metrics
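If you’re curious how numbers like these could be approximated by hand, here’s a back-of-the-envelope sketch. Again, this isn’t collab.dev’s pipeline: it samples one page of closed PRs, ignores pagination and bot reviews, measures time to first formal review rather than collab.dev’s exact definitions, and the repo slug is illustrative.

```python
# Rough sketch: approximate the median time to first review, the share of PRs
# reviewed within an hour, and the median merge time for a sample of recent PRs.
# Note: unauthenticated GitHub API calls are rate-limited; pass a token for more.
from datetime import datetime
from statistics import median

import requests

REPO = "topoteretes/cognee"  # illustrative repo slug
API = "https://api.github.com"

def ts(value):
    """Parse GitHub's ISO-8601 timestamps."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")

prs = requests.get(
    f"{API}/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 50},
    timeout=30,
).json()

review_hours, merge_hours = [], []
for pr in prs:
    created = ts(pr["created_at"])
    if pr.get("merged_at"):
        merge_hours.append((ts(pr["merged_at"]) - created).total_seconds() / 3600)
    reviews = requests.get(
        f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews", timeout=30
    ).json()
    submitted = [ts(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if submitted:
        review_hours.append((min(submitted) - created).total_seconds() / 3600)

if review_hours:
    within_hour = sum(h <= 1 for h in review_hours) / len(review_hours)
    print(f"median time to first review: {median(review_hours):.1f}h "
          f"({within_hour:.0%} within an hour)")
if merge_hours:
    print(f"median merge time: {median(merge_hours):.1f}h")
```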

What This Tells Us About the Human Story

As a maintainer, I’ve been in the situation where I don’t have the capacity to immediately respond to contributors, and sometimes they even have to wait weeks for my response. Obviously, this isn’t ideal. Usually, what happens is that the person has moved on, they may not respond at all, or they’re less likely to contribute again. What we see from cognee’s numbers is that they don’t have that same problem.

When someone contributes to cognee, they aren’t left wondering whether or not their efforts are valued. They get fast enough feedback to stay engaged and iterate quickly. A review turnaround time of 1.6 hours is a good way to encourage repeat contributors. Additionally, a median merge time of 19.5 hours signals to contributors that their work has real and immediate impact, and they’re able to see their contributions reach users quickly.

The Collaboration Pattern

When you look at these metrics together, they’re telling a story of intentional collaboration design, a story that thinks about the contributor and maintainer experience. They’ve created systems and habits that make collaboration feel responsive and worthwhile. What’s telling about this data is also what’s not happening. We don’t see any pile-ups of unreviewed PRs. There are no huge gaps between approval and merge. There are no signs of contributor frustration or maintainer overwhelm.

This collaboration story matters, not just to show that cognee looks like a good place to contribute, but because this can become a replicable story. Other projects can learn how to make collaboration feel good for the contributors involved. We can look at the data and the project and better understand what systems and practices created these patterns, and we can reach out to maintainers of projects we admire to ask: How do you build review workflows that are both thorough and fast? How do you maintain quality while staying responsive to community contributions?

Collaboration quality doesn’t have to be something we guess at. We can learn more through the data and find projects that have the capacity to take contributions from community members. (And if you’re interested, collab.dev also has a nifty comparison tool. You can check out my mem0 v. cognee comparison.)

The Bigger Picture: Measuring What Matters

We’re at a stage where complex human dynamics determine whether open source projects succeed or fail. Collaboration metrics can help lead to better outcomes. When we measure collaboration effectively, we can:

  • Reduce contributor burnout by identifying overwhelmed maintainers
  • Increase successful first contributions by directing early contributors to responsive projects
  • Build more sustainable projects by understanding what healthy collaboration looks like
  • Create feedback loops that help projects improve their community practices

In open source, we've proven that collaborative development can create incredible value, but we can’t ignore the sustainability challenges, maintainer burnout, and the difficulty of scaling human collaboration.

Better visibility into collaboration patterns can help us to understand the future health of open source. We need tools that help us understand not just what code exists, but how effectively people work together to create and maintain it.

Open source has always been about people working together. Our metrics should reflect that meaningful work.

Top comments (25)

𝚂𝚊𝚞𝚛𝚊𝚋𝚑 𝚁𝚊𝚒

“was the frustration they felt when their PRs weren’t merged in what they felt was a reasonable amount of time.”

This is about transparency and communication. When someone adds a feature, the question is: is it required?
For Resume Matcher, many times if someone does a PR, I ask them to share why they've done it, or if someone wants to contribute, I ask what kind of changes they want to make before they do the PR. And the communication happens in our Discord community. This way we know why there's a PR, what changes it targets, and it gets merged in time.

When I suddenly get a PR with a lot of changes from someone I've never interacted with, that's where things get tricky.

BekahHW

For sure. And this is where things like READMEs with clear contribution outlines can be helpful (start with an issue; if it's an architectural change, you need an RFC; no unsolicited PRs). It'll never be perfect, but you can definitely find ways to decrease the number of unwanted PRs.

𝚂𝚊𝚞𝚛𝚊𝚋𝚑 𝚁𝚊𝚒

These days many PRs are just automated via Cursor, GitHub Copilot, etc., and they eventually bring in a lot of changes to the code. It's very hard to manage, and yes, READMEs need to convey this message clearly. Do you think having a contributions section in the docs would help @bekahhw?

Thread Thread
BekahHW

At the very least, have a basic Contributions section in the docs and include the most important information. We have a pretty extensive CONTRIBUTING.md file at Virtual Coffee to help new folks navigate the process for contributing.

I know a lot of people are dealing with the automated PRs, and it's a real struggle for maintainers. I think having a bit of friction can help with that, but won't solve the problem.

For instance, using PR templates with a field that says something like:

AI-generated PR check

In this project, we understand the value that AI can bring to code. However, we will not accept PRs that have been automated by AI and not reviewed by the PR submitter with a clear understanding of the changes, impacts, and maintainability of the code.

Did you use AI to generate any of this code?

  • [ ] ✅ Yes
  • [ ] ❌ No

If you answered yes, please answer the next question:

Did you personally review and understand all AI-generated code in this PR, including its purpose, impact, and long-term maintainability?

  • [ ] ✅ Yes
  • [ ] ❌ No

And then have a GH action that automatically closes the PR if they select No to the second question, with a message: "This PR is being closed as an AI-generated PR. Please feel free to resubmit once you understand the purpose, impact, and maintainability of the code." (A rough sketch of that check is below.)

That might be too complicated. Kind of thinking aloud here.
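To make that thinking-aloud a bit more concrete, here’s one hypothetical way the close-on-“No” step could look as a script a workflow runs on pull_request events. The workflow wiring, the PR_NUMBER environment variable, and the exact checkbox parsing are all assumptions here, not a ready-made action.

```python
# Hypothetical sketch of the auto-close step described above. Assumes a GitHub
# Actions workflow runs this on pull_request events and passes in PR_NUMBER;
# GITHUB_REPOSITORY and GITHUB_TOKEN are standard values provided to Actions.
import os
import re

import requests

API = "https://api.github.com"
repo = os.environ["GITHUB_REPOSITORY"]  # "owner/name"
pr_number = os.environ["PR_NUMBER"]     # assumed to be passed in by the workflow
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

pr = requests.get(f"{API}/repos/{repo}/pulls/{pr_number}", headers=headers, timeout=30).json()
body = pr.get("body") or ""

# Only look at the answer to the second template question.
marker = "Did you personally review and understand"
tail = body.split(marker, 1)[1] if marker in body else ""
answered_no = re.search(r"\[x\]\s*❌?\s*No", tail, re.IGNORECASE) is not None

if answered_no:
    message = (
        "This PR is being closed as an AI-generated PR. Please feel free to resubmit "
        "once you understand the purpose, impact, and maintainability of the code."
    )
    # Leave the explanation as a comment, then close the PR.
    requests.post(f"{API}/repos/{repo}/issues/{pr_number}/comments",
                  headers=headers, json={"body": message}, timeout=30)
    requests.patch(f"{API}/repos/{repo}/pulls/{pr_number}",
                   headers=headers, json={"state": "closed"}, timeout=30)
```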

Thread Thread
𝚂𝚊𝚞𝚛𝚊𝚋𝚑 𝚁𝚊𝚒

Yes, this is helpful. What I've done recently is set up CodeRabbit, and if someone doesn't fix the obvious bugs in the PR, then it's a no-go from my side. For Resume Matcher.

Thread Thread
BekahHW

Awesome. You might want to check out pullflow.com/ (from the collab.dev team). It might be useful for your workflow.

Thread Thread
𝚂𝚊𝚞𝚛𝚊𝚋𝚑 𝚁𝚊𝚒

Okay 👌
Thanks ^_^

Dotallio

This hits home - I've had PRs sit ignored for weeks, and that silence is rough. Do you think collaboration-focused metrics will ever replace stars and forks as the 'first impression' signal for most devs looking at a project?

BekahHW

That's a great question. I think it would take a couple of things for that to happen:

  • majority acceptance of the importance of collab metrics from maintainers
  • clear ways to display that data
  • GitHub integrating those metrics into their platform
SamuraiX[13~]

I really would love to help with an open source project at some point; I feel like it's a duty. Unfortunately, for now I can't, though pretty soon I'll be free and able to write code without looking at the clock again and again, and I'll probably try to help an open source project any way I can. So I really appreciate your post giving a clearer vision of the situation!

BekahHW

It’s definitely a balance. And I think it’s better to wait until you have time to give back than to try when you’re limited.

Nevo David

Been cool seeing steady progress - it adds up. Do you think the real growth for these projects comes more from systems or just people showing up every day?

BekahHW

As with most things in technology, I think it depends. You definitely need people to show up every day. One of the big things is also having the right people show up. You can have ten contributors, but if 1-2 of them aren't good communicators, listeners, and/or collaborators, you're probably going to get less done than if you had 2-3 of the right contributors. As projects scale, you definitely need to have solid systems built.

Parag Nandy Roy

This is such an important conversation....metrics should reflect the human side of open source...not just numbers.

BekahHW

Always the challenge, right?

Parag Nandy Roy

yup.

MorphZG

This post needs more visibility for sure. Here is the comment for the algorithm.

BekahHW

I appreciate that!

Jessica Wilkins

Another great article Bekah!

There were so many things here that you mentioned that resonate with what I have been experiencing as a maintainer for freeCodeCamp.

BekahHW

Thanks so much, Jessica! It's nice to be talking about open source again and thinking about the collaborative aspects.

Duke

Totally agree — stars and forks don’t tell the whole story. We need better ways to measure real collaboration and community impact in open source.

BekahHW

Absolutely. I think there's a lot of progress being made, and I really appreciate the approach collab.dev is taking.

Seth

I've run into your core statement recently. Working at a company that has an open source project as part of its DNA, we're thinking through how we measure the "success" of that project. Do we have to have telemetry? Is it contributors? Stars? There's the marketing aspect of it, where people will evaluate a project based on downloads and stars, but also the maintenance aspect over time. You bring up very interesting points around collaboration patterns as a measure of an open source project's health.

BekahHW

Thanks, Seth. I've written pretty extensively about the metrics that matter. A lot of it depends on your goals. Ultimately, keeping your project secure and resilient should be at the core of your goals. Stars are not going to tell that story. Understanding collaboration metrics, knowing whether your team is resilient against the lottery factor, and having an SBOM are really great ways to get started. Happy to answer any questions.

M Zaky Zulfikar

Impressive
