We have more than a hundred products in the organization, each using
hundreds (some even thousands) of open-source packages in their
portfolio.
Why do you think that your current approach will provide any security? Developers of those hundred products will just ignore any findings you have. Even worse: you may not even know which packages are used in all these products.
If you want to improve supply chain security, consider the following approach. Set up a single repository that is allowed to be used in all the products. You can use any artifact repository of your choice (Artifactory, Nexus, whatever). There, create virtual repositories pointing to the real repositories your developers need: Linux packages (Debian, RPM, ...), Node.js packages, Java packages, Python packages, Docker images, etc.
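Pointing the package managers at the single repository is just client configuration. As an illustration, the Python and Node.js clients could be configured like this; the host name and virtual-repository paths are placeholders for your own instance, and the exact URL layout depends on the product you choose:

```ini
# pip.conf (~/.pip/pip.conf, or pip.ini on Windows)
# Route all Python installs through the internal virtual repository.
# "artifacts.example.com" and the repository path are examples only.
[global]
index-url = https://artifacts.example.com/api/pypi/pypi-virtual/simple

# .npmrc
# Same idea for Node.js: one registry, the internal virtual repository.
; registry=https://artifacts.example.com/api/npm/npm-virtual/
```

Equivalent settings exist for Maven (`settings.xml` mirrors), APT sources, Docker registry mirrors, and so on.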
All modern artifact repositories provide integration with CVE databases. Thus you will get reports whenever any of your products uses an artifact with known vulnerabilities.
You would need to enforce a deployment policy: any artifact that is deployed or published must be built in a restricted environment that has access only to this repository, and thus uses only scanned artifacts.
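Besides network restrictions on the build environment, you can add a CI gate that rejects builds whose dependencies were resolved from anywhere else. A minimal sketch, assuming an npm `package-lock.json` (v2/v3 format, where entries live under `"packages"`) and a hypothetical internal host name:

```python
import json

# Placeholder for your repository host; replace with your own.
INTERNAL_HOST = "artifacts.example.com"

def unapproved_sources(lockfile_path: str) -> list[str]:
    """Return 'resolved' URLs in an npm lockfile that do not point
    at the internal repository."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    violations = []
    # Lockfile v2/v3 keeps every installed package under "packages".
    for name, meta in lock.get("packages", {}).items():
        url = meta.get("resolved", "")
        if url and INTERNAL_HOST not in url:
            violations.append(f"{name}: {url}")
    return violations
```

A pipeline step would simply fail the build when this list is non-empty; analogous checks are possible for `requirements.txt` hashes, Maven build logs, etc.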
This gives you an overview of the vulnerabilities you have. Then you can estimate the risk each issue poses to your products, prioritize the issues, and handle them.
You can still implement your own scanning process and block artifacts that violate your policies. Here too, having a single repository and requiring all products to use it gives you the confidence that there are no packages you are not aware of.
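Such a policy layer can be as simple as a deny list applied on top of the CVE scanning. A sketch, with illustrative package names standing in for whatever your policy actually forbids:

```python
# Deny list of packages blocked by policy, independent of CVE status.
# The names here are purely illustrative.
DENIED = {"abandoned-lib", "unlicensed-pkg"}

def policy_violations(requirements: list[str]) -> list[str]:
    """Return requirement lines (pip-style, e.g. 'foo==1.2') whose
    package name appears on the deny list."""
    violations = []
    for line in requirements:
        # Strip common version specifiers to get the bare package name.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        if name in DENIED:
            violations.append(line)
    return violations
```

In practice you would run a check like this (or the equivalent feature built into Artifactory/Nexus, such as blocking downloads of specific artifacts) against every product's dependency manifest, which is only feasible because the single repository guarantees you can see all of them.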