The Wayback Machine - https://web.archive.org/web/20200904083738/https://github.com/grafana/loki/issues/2068

Promtail extraScrapeConfigs not being picked up #2068

Open
robbertvdg opened this issue May 12, 2020 · 8 comments
@robbertvdg commented May 12, 2020

Describe the bug
It seems that excluding logs in a namespace using the configuration below does not work:

extraScrapeConfigs:
  - job_name: blacklist
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: drop
        regex: namespace_name
        source_labels:
          - __meta_kubernetes_namespace

To Reproduce
Steps to reproduce the behavior:

  1. Deploy the loki-stack Helm chart (0.36.2) with the config above
  2. Attach Grafana to the Loki datasource
  3. Query: {namespace="namespace_name"} in Grafana Loki
  4. See logs

Expected behavior
No logs should be returned for the excluded namespace.

Environment:

  • Infrastructure: kubernetes (1.15)
  • Deployment tool: helm

Maybe I'm missing something; help would be appreciated!

@robbertvdg (Author) commented May 13, 2020

After a bit more research we came up with the following that did work:

scrapeConfigs:
  - job_name: kubernetes-pods-name
    [ .. default .. ]
  - job_name: kubernetes-pods-app
    pipeline_stages:
      - docker: {}
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: drop
        regex: .+
        source_labels:
          - __meta_kubernetes_pod_label_name
      - action: drop
        regex: {{ namespace_name }}
        source_labels:
          - __meta_kubernetes_namespace
    [ .. default .. ]

This feels very hacky and unsustainable, since we have to copy the default config, modify it, and then overwrite it. Is this intended behaviour?

@cyriltovena (Contributor) commented May 27, 2020

I guess it's because extraScrapeConfigs is appended at the end and not at the beginning? If that was the problem, I'm happy to receive a PR moving it to the top, because you're right: if you want to drop targets, having it at the end is useless.

Can you confirm that this was the problem?

@robbertvdg (Author) commented May 28, 2020

I think that would solve the problem, will make a PR soon.

@cyriltovena (Contributor) commented May 29, 2020

You mean a PR ;)

@robbertvdg (Author) commented May 29, 2020

To clarify: the example at the top excludes a namespace; we eventually wanted to exclude pod logs with:

  - action: drop
    regex: application-controller
    source_labels:
    - __meta_kubernetes_pod_label_app

inside the existing kubernetes-pods-app scrape job. This did work.
I tested yesterday with the extraScrapeConfigs on top, instead of at the bottom, but that did not seem to work. It feels to me like the kubernetes-pods-app job still picks it up. I am not too familiair with this, can you confirm this is the case?

@djmilosev commented Aug 20, 2020

@robbertvdg

Were you able to exclude logs for a specific pod? I've tried with the following within the existing "kubernetes-pods-app" job but with no luck:

- action: drop
  regex: application-controller
  source_labels:
    - __meta_kubernetes_pod_label_app

Maybe you can share what your "kubernetes-pods-app" job looks like? Thanks.

@robbertvdg (Author) commented Aug 20, 2020

Yes, it should work like this:

    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      # custom code
      - action: drop
        regex: <pod app label>
        source_labels:
          - __meta_kubernetes_pod_label_app
      # custom code end
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: instance
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container_name
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
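One detail worth noting about the drop rules above: Promtail reuses Prometheus relabeling, whose regexes are fully anchored (a pattern P behaves like ^(?:P)$), so `regex: application-controller` only drops targets whose label value is exactly that string. A quick Python check of that anchored-matching behaviour (the `anchored_match` helper is illustrative):

```python
import re

def anchored_match(pattern, value):
    # Prometheus-style relabel regexes are anchored, i.e. a full match.
    return re.fullmatch(pattern, value) is not None

assert anchored_match("application-controller", "application-controller")
# A partial match is NOT enough to drop the target...
assert not anchored_match("application-controller", "my-application-controller-2")
# ...use an explicit wildcard when you want prefix matching.
assert anchored_match("application-controller.*", "application-controller-2")
```

So if the pod's app label carries a suffix (e.g. a release name), the drop rule needs a wildcard in the regex to match it.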
@djmilosev commented Aug 20, 2020

@robbertvdg

Thanks, this seems to be working fine. 🥇
