# 📘 Kubernetes Logging Stack Documentation

**Loki + Promtail + External Grafana (Bare Metal)**
## 1️⃣ Project Objective

Deploy a centralized logging system in a Kubernetes cluster using:

- **Grafana Loki** – log aggregation backend
- **Promtail** – log collector
- **Grafana** – visualization UI

Grafana runs on a bare-metal machine outside the Kubernetes cluster.
## 2️⃣ Environment Details

| Component       | Value                               |
|-----------------|-------------------------------------|
| Kubernetes Type | Bare-metal cluster                  |
| Namespace       | `logging`                           |
| Loki Version    | 3.6.5                               |
| Helm Chart      | `loki-6.53.0`                       |
| Node IP         | 192.168.31.67                       |
| Grafana         | Installed locally (outside cluster) |
## 3️⃣ Architecture Overview

```text
Kubernetes Pods
      ↓
Promtail (DaemonSet)
      ↓
Loki (Single Binary)
      ↓
NodePort (31000)
      ↓
Grafana (Bare Metal)
```
## 4️⃣ Prerequisites

- A running Kubernetes cluster
- `kubectl` configured for the cluster
- Helm installed
- Internet access for pulling Helm charts
## 5️⃣ Installation Steps

### 5.1 Create Namespace

```bash
kubectl create namespace logging
```

### 5.2 Add Helm Repository

```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```

### 5.3 Install Loki (Single Binary Mode)

```bash
helm install loki grafana/loki \
  --namespace logging
```
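Note that with recent versions of the `grafana/loki` chart, single-binary mode usually has to be requested explicitly through a values file rather than relying on chart defaults. A minimal sketch is below; the filesystem storage and replica settings are assumptions for a small bare-metal cluster, not taken from this deployment, so verify them against the chart's own values reference:

```yaml
# values-loki.yaml — single-binary Loki (sketch; check against the chart docs)
deploymentMode: SingleBinary
loki:
  auth_enabled: false        # single tenant; no X-Scope-OrgID header needed
  commonConfig:
    replication_factor: 1
  storage:
    type: filesystem         # local disk; fine for a single-node lab setup
singleBinary:
  replicas: 1                # produces the single loki-0 StatefulSet pod
# keep the scalable components off so only loki-0 runs
read:
  replicas: 0
write:
  replicas: 0
backend:
  replicas: 0
```

It would be applied with `helm install loki grafana/loki --namespace logging -f values-loki.yaml`.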
### 5.4 Install Promtail

```bash
helm install promtail grafana/promtail \
  --namespace logging
```
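By default the Promtail chart ships logs to a Loki gateway service inside the same cluster. If the release or namespace names differ from the defaults, the push URL can be overridden in a values file; a sketch (the URL below assumes the `loki` release and `logging` namespace used above):

```yaml
# values-promtail.yaml — where Promtail pushes logs (sketch)
config:
  clients:
    - url: http://loki-gateway.logging.svc.cluster.local/loki/api/v1/push
```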
## 6️⃣ Verify Deployment

```bash
kubectl get pods -n logging
kubectl get svc -n logging
```

Expected output:

- `loki-0` → Running
- `promtail-xxxxx` → Running
- `loki-gateway` → Running
## 7️⃣ Testing the Loki API

Initial test:

```bash
curl http://127.0.0.1:3100
```

Problem:

❌ `Connection refused`

Reason: the Loki service type was `ClusterIP`, which is not reachable from outside the cluster.
## 8️⃣ Temporary Testing via Port-Forward

```bash
kubectl -n logging port-forward svc/loki 3100:3100
```

Query logs from the last five minutes:

```bash
NOW=$(date +%s)
START=$((NOW - 300))

curl -G "http://127.0.0.1:3100/loki/api/v1/query_range" \
  --data-urlencode "query={namespace=\"logging\"}" \
  --data-urlencode "start=${START}000000000" \
  --data-urlencode "end=${NOW}000000000"
```
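The `query_range` endpoint expects Unix timestamps in nanoseconds, which is why the commands above append nine zeros to the second-resolution epoch from `date +%s`. A small standalone sketch of just that conversion:

```shell
#!/usr/bin/env bash
# Build nanosecond-precision start/end timestamps for a 5-minute window,
# as required by Loki's /loki/api/v1/query_range endpoint.
NOW=$(date +%s)                # current time, seconds since the epoch
START=$((NOW - 300))           # five minutes ago
START_NS="${START}000000000"   # seconds -> nanoseconds (append 9 zeros)
END_NS="${NOW}000000000"

echo "start=${START_NS}"
echo "end=${END_NS}"
```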
Result: ✅ Logs returned successfully.

Conclusion: Loki ingestion is working.
## 9️⃣ Permanent Exposure Using NodePort

Since Grafana runs outside Kubernetes, Loki must be exposed externally.

### 9.1 Convert the Service to NodePort

```bash
kubectl -n logging patch svc loki --type='json' -p='[
  {"op":"replace","path":"/spec/type","value":"NodePort"},
  {"op":"add","path":"/spec/ports/0/nodePort","value":31000}
]'
```
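For reference, the same change expressed as the resulting Service spec looks roughly like the sketch below; the port name is an assumption about the chart's first port entry, and note that a patched Service will revert on the next `helm upgrade` unless the same change is also set in the chart values:

```yaml
# Sketch of the relevant part of the patched Service (port name assumed)
spec:
  type: NodePort
  ports:
    - name: http-metrics
      port: 3100
      targetPort: 3100
      nodePort: 31000   # must fall in the cluster's NodePort range (default 30000–32767)
```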
Verify:

```bash
kubectl get svc -n logging
```

Expected:

```text
loki   NodePort   3100:31000/TCP
```

### 9.2 Open the Firewall

```bash
sudo ufw allow 31000/tcp
```

Test in a browser:

```text
http://192.168.31.67:31000/ready
```

Expected response: `ready`
## 🔟 Configure Loki in Grafana

1. Open Grafana.
2. Go to **Connections → Data sources**.
3. Add **Loki**.
4. Set the URL to `http://192.168.31.67:31000`.
5. Authentication: **No Authentication**.
6. Click **Save & Test**.

Result: ✅ Data source successfully connected.
## 1️⃣1️⃣ Querying Logs

In **Explore**, a working query:

```logql
{namespace="calico-system"}
```

Issue faced: the empty selector `{}` fails with an error, because Loki requires at least one non-empty label matcher.

A correct alternative that matches every namespace:

```logql
{namespace=~".+"}
```
## 1️⃣2️⃣ Real-Time Log Metrics

Count log lines per minute:

```logql
count_over_time({namespace="calico-system"}[1m])
```

Enable auto-refresh → 5s.
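Other metric-style queries follow the same shape; `rate` and `bytes_over_time` are standard LogQL functions, though the label values below are just the ones from this cluster and may need adjusting:

```logql
# Per-second log line rate over a 5-minute window
rate({namespace="calico-system"}[5m])

# Log bytes ingested per minute, summed per namespace
sum by (namespace) (bytes_over_time({namespace=~".+"}[1m]))
```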
## 1️⃣3️⃣ Issues Faced & Solutions

| Problem                   | Cause                           | Solution                   |
|---------------------------|---------------------------------|----------------------------|
| Connection refused        | Service was `ClusterIP`         | Changed to `NodePort`      |
| No data in dashboard      | Wrong time range                | Changed to "Last 1 hour"   |
| `{}` query failed         | Loki requires a label matcher   | Used `{namespace=~".+"}`   |
| Grafana unable to connect | Using cluster DNS               | Used `NodeIP:NodePort`     |
## 1️⃣4️⃣ Final Outcome

- Centralized logging implemented
- External Grafana integrated
- Real-time log monitoring enabled
- Permanent NodePort exposure configured
- Firewall rules configured