
My plan is to start building the open-source packages from their sources and to use the organization's security resources, such as SAST tools, to detect security issues in them.

The good thing I see coming out of this effort is better security, especially for some of the lesser-known, smaller open-source projects that are not built with security in mind. The organization could then create pull requests to fix the discovered issues as a give-back to the open-source community.

However, I'm afraid that the hashes of the artifacts we build will differ from the upstream ones, and that automated tools like WhiteSource, which we use to detect known vulnerabilities and licenses in open-source packages, might stop working.

Has anyone faced such an issue? Is there a middle ground where we can have the perks of both strategies?

  • What prevents you from building and analyzing the packages only on a test platform and using the official modules on the production platform? Commented Nov 15, 2021 at 11:19
  • @SergeBallesta: We have more than a hundred products in the organization, each using hundreds (some even thousands) of open-source packages in their portfolio. With so many packages, using a test platform doesn't inspire confidence; I can see it getting out of hand quickly. What I'm trying to do is create an official workflow that our product teams can adopt. I know it will be a worthwhile effort, but I'm having a lot of doubts too. Commented Nov 15, 2021 at 11:30

2 Answers

1

As your avatar is a penguin, I am assuming you are using Linux. The big Linux distributions like Debian or Red Hat have dedicated security teams that publish which versions of the software they package are vulnerable to each CVE.

So instead of relying on the hash (IMHO a flawed concept that makes WhiteSource unusable for open source), rely on the version number plus the distro patch level to identify vulnerabilities.
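
As a rough illustration, here is a minimal sketch that lists the issues still marked "open" for a Debian source package in the Debian Security Tracker's public JSON export. The URL is the tracker's real data endpoint, but the field names ("releases", "status") and the package/release used below are assumptions you should verify against the live data before relying on it:

    # Sketch: list security issues still marked "open" for a Debian source
    # package in a given release, using the Security Tracker's JSON export.
    import requests

    TRACKER_URL = "https://security-tracker.debian.org/tracker/data/json"  # large download

    def open_issues(source_package: str, release: str = "bookworm") -> list[str]:
        data = requests.get(TRACKER_URL, timeout=300).json()
        issues = []
        for issue_id, info in data.get(source_package, {}).items():
            status = info.get("releases", {}).get(release, {}).get("status")
            if status == "open":
                issues.append(issue_id)
        return issues

    print(open_issues("openssl"))  # e.g. CVE/TEMP identifiers not yet fixed in the release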

The drawback, of course, is that you have to rely on the packages your distro ships; but if a package is missing, you can submit it to the distro yourself.

-1

We have more than a hundred products in the organization, each using hundreds (some even thousands) of open-source packages in their portfolio.

Why do you think that your current approach will provide any security? The developers of these hundred-plus products will just ignore any findings you produce. Even worse: you may not even know which packages are used in all these products.

If you want to improve supply chain security, consider following approach. Organize a single repository that is allowed to be used in all the products. You can use any artifact repository of your choice (Artifactory, Nexus, whatever). Create there virtual repositories pointing to any real repositories your developers want: Linux packages (Debian, RPM, ...), Node.js packages, Java packages, Python packages, Docker images, etc.
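
For example, in Artifactory a virtual repository can be created through its repository REST API. The sketch below is only illustrative: the server URL, credentials, and repository names are placeholders, and you should check your repository manager's documentation for the exact configuration schema it expects:

    # Sketch: create a virtual PyPI repository that aggregates a remote proxy
    # of PyPI and an internal local repository (Artifactory repository API).
    import requests

    ARTIFACTORY_URL = "https://repo.example.com/artifactory"  # placeholder
    AUTH = ("admin", "<api-token>")                           # placeholder credentials

    virtual_repo = {
        "key": "pypi-virtual",
        "rclass": "virtual",                               # aggregates other repositories
        "packageType": "pypi",
        "repositories": ["pypi-remote", "pypi-internal"],  # resolution order
    }

    resp = requests.put(
        f"{ARTIFACTORY_URL}/api/repositories/{virtual_repo['key']}",
        json=virtual_repo,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    print("created", virtual_repo["key"])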

All modern artifact repositories provide integration with CVE databases, so you will get reports if any of your products use artifacts with known vulnerabilities.
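
To give an idea of what such a check does under the hood, here is a minimal sketch that queries the public OSV vulnerability database for a single package version. The package name and version are just examples; in practice the repository manager's scanner (Xray, Nexus IQ, etc.) runs this kind of lookup continuously for you:

    # Sketch: ask the OSV database which advisories affect one dependency.
    import requests

    def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
            timeout=30,
        )
        resp.raise_for_status()
        return [vuln["id"] for vuln in resp.json().get("vulns", [])]

    print(known_vulns("jinja2", "2.4.1"))  # prints advisory IDs, e.g. GHSA-.../PYSEC-...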

You would need to enforce a deployment policy: any artifact that is deployed or published must be built in restricted environments that have access only to this repository and therefore use only the scanned artifacts.
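
As one small piece of such enforcement, a build step could refuse to run if the configured package index is not the approved one. The sketch below checks pip's global configuration; the index URL and config path are placeholders, and real enforcement would normally also happen at the network level:

    # Sketch: fail the build if pip is not pinned to the internal repository.
    import configparser
    import pathlib
    import sys

    ALLOWED_INDEX = "https://repo.example.com/artifactory/api/pypi/pypi-virtual/simple"  # placeholder

    cfg = configparser.ConfigParser()
    cfg.read(pathlib.Path("/etc/pip.conf"))  # global pip config on the build agent
    index = cfg.get("global", "index-url", fallback="")

    if index.rstrip("/") != ALLOWED_INDEX.rstrip("/"):
        sys.exit(f"Builds must resolve packages from {ALLOWED_INDEX}, found: {index or '<default PyPI>'}")
    print("pip index configuration OK")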

This way, you will have an overview of which vulnerabilities you have. Then you can estimate the risks these issues pose to your products, prioritize them, and handle them.

You can still implement your own scanning process and block the use of certain artifacts if they violate your policies. In this case, too, having a single repository and requiring all products to use it gives you confidence that there are no packages you are not aware of.
