Logs Data Platform & SpringBoot Project

By
JohannR3
Created on 2021-06-19 07:33:50 (edited on 2024-09-04 13:26:10) in Logs & Metrics-old

Hello,

I have a cluster in the public cloud with different pods. One pod contains a Spring Boot Java project.

I configured Logs Data Platform following this doc: https://docs.ovh.com/fr/logs-data-platform/quick-start/

But no data shows up in Graylog.

Do I need to add the X-OVH-TOKEN token to each log? How?

Thanks

Johann


7 replies (latest reply on 2021-06-28 18:36:07 by
JohannR3
)

Hello,
Yes, you are right: each log must have the X-OVH-TOKEN field. How do you send your logs to Graylog? Do you use a logging framework from your application, or do you forward the logs of your pods?

Hello,
Thanks for your answer.
I use Logback and Logstash (Spring Boot).
I don't use a specific appender; the logs are written directly in the pod.
Johann

Sorry for the late answer.
If you can configure Logstash, you can set up an Elasticsearch output to send your logs to a dedicated index, as detailed here: https://docs.ovh.com/fr/logs-data-platform/ldp-index/

You will also need to add your token with the mutate filter:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-add_field
The output configuration for Logstash is documented here: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html. Just use the index name as the username, along with its password.
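To illustrate (this sketch is not from the thread): a minimal Logstash pipeline combining the two links above. The host, index name, token, and password values are placeholders to replace with your own Logs Data Platform credentials.

```
filter {
  mutate {
    # Attach your stream token to every event (placeholder value)
    add_field => { "X-OVH-TOKEN" => "your-stream-token" }
  }
}

output {
  elasticsearch {
    # Logs Data Platform Elasticsearch endpoint (placeholder cluster)
    hosts => ["https://gra2.logs.ovh.com:9200"]
    # Use the index name as the username, with the index password
    user     => "logs-xx-12345-index"
    password => "your-index-password"
    index    => "logs-xx-12345-index"
    ssl      => true
  }
}
```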

Alternatively, you can use Fluent Bit to automatically send the logs of your pods to Logs Data Platform, as described here: https://docs.ovh.com/fr/logs-data-platform/kubernetes-fluent-bit/
This documentation also details how to add the token of your stream.
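As a sketch (not from the thread) of the relevant ConfigMap pieces, assuming the GELF output described in the linked doc — the host, port, and token values are placeholders:

```
[FILTER]
    Name    record_modifier
    Match   *
    # Attach the stream token to every record (placeholder value)
    Record  X-OVH-TOKEN your-stream-token

[OUTPUT]
    Name                   gelf
    Match                  *
    # Logs Data Platform GELF TLS endpoint (placeholder cluster)
    Host                   gra2.logs.ovh.com
    Port                   12202
    Mode                   tls
    tls                    On
    Gelf_Short_Message_Key log
```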

If you need any help in deploying these two solutions, don't hesitate to ask here. We will gladly help you.

Hello,
Sorry for my late answer.
I followed the instructions for Fluent Bit, and it works fine! Thank you!
Johann

In addition, be careful not to send all the logs from the cluster (kube-system, registry, etc.): in one hour, I collected 1.5 GB of logs.
To restrict collection, update the configuration in the Fluent Bit ConfigMap. An example:
```
input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*deployment_name*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
```

One more question: how can I delete the needlessly collected data from the data stream?

Hello,

Glad to know you succeeded in configuring fluent-bit.

Unfortunately, logs in our platform are immutable, so you cannot delete them from your stream. However, since streams are free, you can create a new stream, send the logs of your final configuration to it, and delete the former one.

Thanks for your help :)

Replies are currently disabled for this question.