Hello,
I have a cluster in a public cloud with several pods. One pod contains a Spring Boot Java project.
I set up the configuration for Logs Data Platform following this doc: https://docs.ovh.com/fr/logs-data-platform/quick-start/
But no data shows up in Graylog.
Do I need to add the X-OVH-TOKEN token to each log? How?
Thanks
Johann
Logs Data Platform & SpringBoot Project
Hello,
Yes, you are right: each log must have the X-OVH-TOKEN field. How do you send your logs to Graylog? Do you use a logging framework in your application, or do you forward the logs of your pods?
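For reference, when a log is sent as a raw GELF message, the token travels as an additional field (additional fields are prefixed with an underscore in GELF). A minimal sketch of such a payload, with placeholder host and token values:

```
{
  "version": "1.1",
  "host": "my-springboot-pod",
  "short_message": "Application started",
  "level": 6,
  "_X-OVH-TOKEN": "your-stream-write-token"
}
```

Whatever shipper you use, it just needs to add this one field to every event.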
Hello,
Thanks for your answer.
I use Logback and Logstash (Spring Boot).
I don't use a specific appender; the logs are written directly in the pod.
Johann
Sorry for the late answer
If you can configure Logstash, you can set up an Elasticsearch output to send your logs to a dedicated index, as detailed here: https://docs.ovh.com/fr/logs-data-platform/ldp-index/
You will also need to add your token with the mutate filter:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-add_field
The output configuration for Logstash is documented here: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html. Just use the index username and password.
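Putting those two pieces together, a minimal Logstash pipeline could look like the sketch below. The endpoint, index name, username, password, and token are all placeholders to replace with the values from your own Logs Data Platform account:

```
filter {
  mutate {
    # Attach the stream write token to every event (placeholder value)
    add_field => { "X-OVH-TOKEN" => "your-stream-write-token" }
  }
}

output {
  elasticsearch {
    # Placeholder endpoint and credentials: use your own cluster and index
    hosts    => ["https://gra1.logs.ovh.com:9200"]
    index    => "logs-xx-i-your-index"
    user     => "index-username"
    password => "index-password"
  }
}
```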
Alternatively, you can use fluent-bit to automatically send the logs of your pods to Logs Data Platform, as described here: https://docs.ovh.com/fr/logs-data-platform/kubernetes-fluent-bit/
This documentation also details how to add the token of your stream.
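For the fluent-bit route, one way to attach the token is a record_modifier filter ahead of the output, along these lines (the host, port, and token below are placeholders; check the linked documentation for the exact values of your cluster):

```
filter-token.conf: |
    [FILTER]
        Name    record_modifier
        Match   kube.*
        # Placeholder token: use the write token of your stream
        Record  X-OVH-TOKEN your-stream-write-token

output-gelf.conf: |
    [OUTPUT]
        Name                    gelf
        Match                   kube.*
        Host                    gra1.logs.ovh.com
        Port                    12202
        Mode                    tls
        Gelf_Short_Message_Key  log
```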
If you need any help in deploying these two solutions, don't hesitate to ask here. We will gladly help you.
Hello,
Sorry for my late answer.
I followed the instructions for fluent-bit, and it works fine! Thank you!
Johann
In addition, be careful not to send all the logs from the cluster (kube-system, registry, etc.).
In one hour, I collected 1.5 GB of logs.
To do this, update the configuration in the Fluent Bit ConfigMap; for example:
```
input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*deployment_name*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
```
But one question: how do I delete the useless data already collected in the Data Stream?
Hello,
Glad to know you succeeded in configuring fluent-bit.
Unfortunately, logs on our platform are immutable, so you cannot delete them from your stream. However, you can create a new stream (streams are free), send the logs of your final configuration to it, and delete the former one.
Thanks for your help :)