Dataset columns:
application: string, 3 distinct classes
id: int64, values range from 2 to 101
question: string, 33 to 136 characters
logql_query: string, 93 to 560 characters
query_explanation: string, 3 to 2.51k characters
query_result: string, 0 to 2.1k characters
category: string, 0 to 29 characters
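For readers who want to handle these records programmatically, here is a minimal sketch of the row structure, assuming the seven columns above map one-to-one onto fields. Only the field names, ranges, and the example values (taken from the first row of the preview below) come from the source; the class itself is illustrative.

```python
from dataclasses import dataclass

# Minimal sketch of one record; field comments restate the schema summary above.
@dataclass
class LogQLRecord:
    application: str        # categorical, 3 distinct values (e.g. "openstack", "openssh")
    id: int                 # 2 to 101
    question: str           # natural-language question about the logs
    logql_query: str        # LogQL query answering the question
    query_explanation: str  # step-by-step explanation of the query
    query_result: str       # observed result, may be empty
    category: str           # optional grouping such as "Error Analysis", may be empty

# Example built from the first row of the preview (id 2); the query and
# explanation are elided here and shown in full below.
row = LogQLRecord(
    application="openstack",
    id=2,
    question="How long did it take to spawn instance 3edec1e4-9678-4a3a-a21b-a145a4ee5e61 on the hypervisor?",
    logql_query='{application="openstack", log_file_type="nova-compute", ...}',
    query_explanation="...",
    query_result="3edec1e4-9678-4a3a-a21b-a145a4ee5e61 took 20.58",
    category="",
)
```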
openstack
2
How long did it take to spawn instance 3edec1e4-9678-4a3a-a21b-a145a4ee5e61 on the hypervisor?
{application="openstack", log_file_type="nova-compute", component="nova.compute.manager"} |= "3edec1e4-9678-4a3a-a21b-a145a4ee5e61" |= "Took" |= "seconds to spawn the instance on the hypervisor" | regexp "\\[instance: (?P<instance_id>[^\\]]+)\\] Took (?P<spawn_time>\\d+\\.\\d+) seconds to spawn the instance on the hypervisor" | line_format "{{.instance_id}} took {{.spawn_time}}"
1. {application="openstack", log_file_type="nova-compute", component="nova.compute.manager"} Fetch all log lines matching label filters. 2. <expr> |= `3edec1e4-9678-4a3a-a21b-a145a4ee5e61` Return log lines that contain string 3edec1e4-9678-4a3a-a21b-a145a4ee5e61. 3. <expr> |= `Took` Return log lines that contain string Took. 4. <expr> |= `seconds to spawn the instance on the hypervisor` Return log lines that contain string seconds to spawn the instance on the hypervisor. 5. <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<spawn_time>\d+\.\d+) seconds to spawn the instance on the hypervisor` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain at least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 6. <expr> | line_format `{{.instance_id}} took {{.spawn_time}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
3edec1e4-9678-4a3a-a21b-a145a4ee5e61 took 20.58
openstack
3
What was the total time taken to build instance 3edec1e4-9678-4a3a-a21b-a145a4ee5e61?
{application="openstack", log_file_type="nova-compute"} |= `3edec1e4-9678-4a3a-a21b-a145a4ee5e61` |= `Took` |= `seconds to build instance` | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<build_time>\d+\.\d+) seconds to build instance` | line_format `{{.build_time}}`
1. {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2. <expr> |= `3edec1e4-9678-4a3a-a21b-a145a4ee5e61` Return log lines that contain string 3edec1e4-9678-4a3a-a21b-a145a4ee5e61. 3. <expr> |= `Took` Return log lines that contain string Took. 4. <expr> |= `seconds to build instance` Return log lines that contain string seconds to build instance. 5. <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<build_time>\d+\.\d+) seconds to build instance` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 6. <expr> | line_format `{{.build_time}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
21.38
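The regexp and line_format steps described above can be reproduced outside Loki, since Python's re module accepts the same (?P<name>...) named groups as Go's RE2. A small illustrative sketch (not part of the dataset; the sample log line is reconstructed from the row above, with only the instance id and the 21.38s build time taken from the data):

```python
import re

# Mimics `| regexp` extraction followed by `| line_format {{.build_time}}`.
pattern = re.compile(
    r"\[instance: (?P<instance_id>[^\]]+)\] "
    r"Took (?P<build_time>\d+\.\d+) seconds to build instance"
)
line = ("[instance: 3edec1e4-9678-4a3a-a21b-a145a4ee5e61] "
        "Took 21.38 seconds to build instance")
m = pattern.search(line)
if m:
    print(m.group("build_time"))  # -> 21.38
```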
openstack
4
What are the total and used memory, disk, and vCPU resources reported for instance 3416d0fa-6f0f-41ce-9c0a-59ae9a735da8?
{application="openstack", log_file_type="nova-compute"} |= "3416d0fa-6f0f-41ce-9c0a-59ae9a735da8" |~ "Total (memory|disk|vcpu):" | regexp "\\[instance: (?P<instance_id>[^\\]]+)\\] Total (?P<resource>\\w+): (?P<total>\\d+(?:\\.\\d+)?) (?P<unit>\\w+), used: (?P<used>\\d+(?:\\.\\d+)?) \\w+" | line_format "{{.resource}}: {{.used}} {{.unit}} used out of {{.total}} {{.unit}}"
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `3416d0fa-6f0f-41ce-9c0a-59ae9a735da8` Return log lines that contain string 3416d0fa-6f0f-41ce-9c0a-59ae9a735da8. 3 <expr> |~ `Total (memory|disk|vcpu):` Return log lines that match a RE2 regex pattern. Total (memory|disk|vcpu):. 4 <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Total (?P<resource>\w+): (?P<total>\d+(?:\.\d+)?) (?P<unit>\w+), used: (?P<used>\d+(?:\.\d+)?) \w+` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | line_format `{{.resource}}: {{.used}} {{.unit}} used out of {{.total}} {{.unit}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
vcpu: 0.00 VCPU used out of 16 VCPU disk: 0.00 GB used out of 15 GB memory: 512.00 MB used out of 64172 MB
openstack
5
What is the vCPU usage for compute node cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us?
max by (node) ( max_over_time( {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} |= "cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us" |= "Final resource view:" | regexp "Final resource view: name=(?P<node>\\S+) phys_ram=(?P<total_ram>\\d+)MB used_ram=(?P<used_ram>\\d+)MB phys_disk=(?P<total_disk>\\d+)GB used_disk=(?P<used_disk>\\d+)GB total_vcpus=(?P<total_vcpus>\\d+) used_vcpus=(?P<used_vcpus>\\d+)" | label_format used_vcpus="{{.used_vcpus}}" | unwrap used_vcpus [30d] ) )
1 {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} Fetch all log lines matching label filters. 2 <expr> |= `cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us` Return log lines that contain string cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us. 3 <expr> |= `Final resource view:` Return log lines that contain string Final resource view:. 4 <expr> | regexp `Final resource view: name=(?P<node>\S+) phys_ram=(?P<total_ram>\d+)MB used_ram=(?P<used_ram>\d+)MB phys_disk=(?P<total_disk>\d+)GB used_disk=(?P<used_disk>\d+)GB total_vcpus=(?P<total_vcpus>\d+) used_vcpus=(?P<used_vcpus>\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | label_format used_vcpus="{{.used_vcpus}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | unwrap used_vcpus Use the extracted label used_vcpus as sample values instead of log lines for the subsequent range aggregation. 7 max_over_time(<expr> [30d]) The maximum of all values in the specified interval. The range vector is set to 30d. 8 max by(node) (<expr>) Calculates max over dimensions while preserving label node.
<graph> graph with plot of used_vcpus across different logs. maximum is 1 </graph>
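The unwrap / max_over_time / max by (node) pipeline above turns an extracted label into sample values and keeps the per-node maximum. A rough sketch of the same computation in plain Python, with invented sample lines (not part of the dataset) and a deliberately simplified regex:

```python
import re
from collections import defaultdict

# Approximates: unwrap used_vcpus, max_over_time over the window, max by (node).
pattern = re.compile(
    r"Final resource view: name=(?P<node>\S+) .*used_vcpus=(?P<used_vcpus>\d+)"
)
lines = [
    "Final resource view: name=cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us "
    "phys_ram=64172MB used_ram=512MB phys_disk=15GB used_disk=0GB "
    "total_vcpus=16 used_vcpus=0",
    "Final resource view: name=cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us "
    "phys_ram=64172MB used_ram=2560MB phys_disk=15GB used_disk=0GB "
    "total_vcpus=16 used_vcpus=1",
]
max_by_node = defaultdict(float)
for line in lines:
    m = pattern.search(line)
    if m:
        value = float(m.group("used_vcpus"))  # unwrap: label value becomes the sample
        max_by_node[m.group("node")] = max(max_by_node[m.group("node")], value)
print(dict(max_by_node))  # e.g. {"cp-1...": 1.0}
```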
openstack
6
What is the RAM usage for compute node cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us?
max by (node) ( max_over_time( {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} |= "cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us" |= "Final resource view:" | regexp "Final resource view: name=(?P<node>\\S+) phys_ram=(?P<total_ram>\\d+)MB used_ram=(?P<used_ram>\\d+)MB phys_disk=(?P<total_disk>\\d+)GB used_disk=(?P<used_disk>\\d+)GB total_vcpus=(?P<total_vcpus>\\d+) used_vcpus=(?P<used_vcpus>\\d+)" | label_format used_ram_mb="{{.used_ram}}" | unwrap used_ram_mb [30d] ) )
1 {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} Fetch all log lines matching label filters. 2 <expr> |= `cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us` Return log lines that contain string cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us. 3 <expr> |= `Final resource view:` Return log lines that contain string Final resource view:. 4 <expr> | regexp `Final resource view: name=(?P<node>\S+) phys_ram=(?P<total_ram>\d+)MB used_ram=(?P<used_ram>\d+)MB phys_disk=(?P<total_disk>\d+)GB used_disk=(?P<used_disk>\d+)GB total_vcpus=(?P<total_vcpus>\d+) used_vcpus=(?P<used_vcpus>\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | label_format used_ram_mb="{{.used_ram}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | unwrap used_ram_mb Use the extracted label used_ram_mb as sample values instead of log lines for the subsequent range aggregation. 7 max_over_time(<expr> [30d]) The maximum of all values in the specified interval. The range vector is set to 30d. 8 max by(node) (<expr>) Calculates max over dimensions while preserving label node.
<graph> graph with plot of used_ram_mb across different logs. maximum is 2560 MB RAM </graph>
openstack
7
What was the maximum response time for GET request to /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail?
max ( max_over_time( {application="openstack", log_file_type="nova-api"} |= "GET /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail" |= "status: 200" | regexp `"GET /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail HTTP/1\.1" status: 200 len: \d+ time: (?P<response_time>\d+\.\d+)` | label_format response_time="{{.response_time}}" | unwrap response_time [30d] ) )
1 {application="openstack", log_file_type="nova-api"} Fetch all log lines matching label filters. 2 <expr> |= `GET /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail` Return log lines that contain string GET /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail. 3 <expr> |= `status: 200` Return log lines that contain string status: 200. 4 <expr> | regexp `"GET /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/detail HTTP/1\.1" status: 200 len: \d+ time: (?P<response_time>\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | label_format response_time="{{.response_time}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | unwrap response_time Use the extracted label response_time as sample values instead of log lines for the subsequent range aggregation. 7 max_over_time(<expr> [30d]) The maximum of all values in the specified interval. The range vector is set to 30d. 8 max(<expr>) Calculates max over the dimensions.
1.30 <graph> graph with 1.30 as a line. </graph>
openstack
8
How many POST requests to /v2/e9746973ac574c6b8a9e8857f56a7608/os-server-external-events resulted in a 200 status code?
sum(count_over_time({application="openstack", log_file_type="nova-api"} |= "POST /v2/e9746973ac574c6b8a9e8857f56a7608/os-server-external-events" |= "status: 200" | __error__ = "" [30d]))
1 {application="openstack", log_file_type="nova-api"} Fetch all log lines matching label filters. 2 <expr> |= `POST /v2/e9746973ac574c6b8a9e8857f56a7608/os-server-external-events` Return log lines that contain string POST /v2/e9746973ac574c6b8a9e8857f56a7608/os-server-external-events. 3 <expr> |= `status: 200` Return log lines that contain string status: 200. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 6 sum(<expr>) Calculates sum over the dimensions
2.07k <graph> graph with 2.07k as straight line </graph>
openstack
9
What's the average response time for DELETE requests to /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/?
avg by (application) ( avg_over_time({application="openstack", log_file_type="nova-api"} |= "DELETE /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/" | regexp `"DELETE /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/.*" status: (?P<status>\d+) len: \d+ time: (?P<response_time>\d+\.\d+)` | label_format response_time="{{.response_time}}" | unwrap response_time [40d] ) )
1 {application="openstack", log_file_type="nova-api"} Fetch all log lines matching label filters. 2 <expr> |= `DELETE /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/` Return log lines that contain string DELETE /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/. 3 <expr> | regexp `"DELETE /v2/54fadb412c4e40cdbaed9335e4c35a9e/servers/.*" status: (?P<status>\d+) len: \d+ time: (?P<response_time>\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | label_format response_time="{{.response_time}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 5 <expr> | unwrap response_time Use the extracted label response_time as sample values instead of log lines for the subsequent range aggregation. 6 avg_over_time(<expr> [40d]) The average of all values in the specified interval. The range vector is set to 40d. 7 avg by(application) (<expr>) Calculates avg over dimensions while preserving label application.
0.267 <graph> single line graph of 0.267 for the entire OpenStack logs </graph>
openstack
10
How many instances of 'HTTP exception thrown: No instances found for any event' occurred in the nova-api logs in the last 40 days?
sum by (application) (count_over_time({application="openstack", log_file_type="nova-api", log_level="INFO"} |= "HTTP exception thrown: No instances found for any event" [40d]))
1 {application="openstack", log_file_type="nova-api", log_level="INFO"} Fetch all log lines matching label filters. 2 <expr> |= `HTTP exception thrown: No instances found for any event` Return log lines that contain string HTTP exception thrown: No instances found for any event. 3 count_over_time(<expr> [40d]) The count of all values in the specified interval. The range vector is set to 40d. 4 sum by(application) (<expr>) Calculates sum over dimensions while preserving label application.
2.06k <graph>
openstack
11
What errors were encountered during the ComputeManager._run_image_cache_manager_pass operation in the last 40 days?
sum(count_over_time({application="openstack", log_file_type=~"nova-compute|nova-api|nova-scheduler", log_level="ERROR"} |= "Error during ComputeManager._run_image_cache_manager_pass" [40d])) by (log_file_type)
1 {application="openstack", log_file_type=~"nova-compute|nova-api|nova-scheduler", log_level="ERROR"} Fetch all log lines matching label filters. 2 <expr> |= `Error during ComputeManager._run_image_cache_manager_pass` Return log lines that contain string Error during ComputeManager._run_image_cache_manager_pass. 3 count_over_time(<expr> [40d]) The count of all values in the specified interval. The range vector is set to 40d. 4 sum by(log_file_type) (<expr>) Calculates sum over dimensions while preserving label log_file_type.
8 <graph>
openstack
12
How many times did we receive a 503 status code while validating tokens in the past 30 days?
sum(count_over_time({application="openstack"} |= "Bad response code while validating token: 503" [30d]))
1 {application="openstack"} Fetch all log lines matching label filters. 2 <expr> |= `Bad response code while validating token: 503` Return log lines that contain string Bad response code while validating token: 503. 3 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 4 sum(<expr>) Calculates sum over the dimensions.
2 <graph>
openstack
13
How many active base files are currently in use?
count(sum by (base_file_id, application) (count_over_time({application="openstack", log_file_type="nova-compute"} |= "Active base files:" | regexp "Active base files: /var/lib/nova/instances/_base/(?P<base_file_id>[a-f0-9]+)" [30d])))
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `Active base files:` Return log lines that contain string Active base files:. 3 <expr> | regexp `Active base files: /var/lib/nova/instances/_base/(?P<base_file_id>[a-f0-9]+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 5 sum by(base_file_id, application) (<expr>) Calculates sum over dimensions while preserving labels base_file_id and application. 6 count(<expr>) Calculates count over the dimensions.
1
openstack
14
What is the latest usage status of image 0673dd71-34c5-4fbb-86c4-40623fbe45b4?
(last_over_time({application="openstack", log_file_type="nova-compute"} |= "0673dd71-34c5-4fbb-86c4-40623fbe45b4" |= "in use:" | regexp `image (?P<image_id>[^ ]+) at \((?P<image_path>[^)]+)\): in use: on this node (?P<local_use>\d+) local, (?P<other_nodes>\d+) on other nodes sharing this instance storage` | line_format "{{.image_id}} has {{.local_use}}" | unwrap local_use [40d]))
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `0673dd71-34c5-4fbb-86c4-40623fbe45b4` Return log lines that contain string 0673dd71-34c5-4fbb-86c4-40623fbe45b4. 3 <expr> |= `in use:` Return log lines that contain string in use:. 4 <expr> | regexp `image (?P<image_id>[^ ]+) at \((?P<image_path>[^)]+)\): in use: on this node (?P<local_use>\d+) local, (?P<other_nodes>\d+) on other nodes sharing this instance storage` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | line_format `{{.image_id}} has {{.local_use}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}} Read the docs for more. 6 <expr> | unwrap local_use Use the extracted label local_use as sample values instead of log lines for the subsequent range aggregation. 7 last_over_time(<expr> [40d]) The last of all values in the specified interval. Only available in Loki 2.3+. The range vector is set to 40d.
0673dd71-34c5-4fbb-86c4-40623fbe45b4 has 1
openstack
15
How many unknown base files were detected in the last 30 days?
count(sum by (file_path, application) ( count_over_time({application="openstack", log_file_type=~"nova.*"} |= "Unknown base file:" | regexp "Unknown base file: (?P<file_path>[^\\s]+)" [30d])))
1 {application="openstack", log_file_type=~"nova.*"} Fetch all log lines matching label filters. 2 <expr> |= `Unknown base file:` Return log lines that contain string Unknown base file:. 3 <expr> | regexp `Unknown base file: (?P<file_path>[^\s]+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 5 sum by(file_path, application) (<expr>) Calculates sum over dimensions while preserving labels file_path and application. 6 count(<expr>) Calculates count over the dimensions.
1 <graph>
openstack
16
How long did it take to deallocate the network for instance 3edec1e4-9678-4a3a-a21b-a145a4ee5e61?
{application="openstack", log_file_type="nova-compute"} |= "3edec1e4-9678-4a3a-a21b-a145a4ee5e61" |= "Took" |= "seconds to deallocate network for instance" | regexp "\\[instance: (?P<instance_id>[^\\]]+)\\] Took (?P<deallocate_time>\\d+\\.\\d+) seconds to deallocate network for instance" | line_format "{{.deallocate_time}}"
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `3edec1e4-9678-4a3a-a21b-a145a4ee5e61` Return log lines that contain string 3edec1e4-9678-4a3a-a21b-a145a4ee5e61. 3 <expr> |= `Took` Return log lines that contain string Took. 4 <expr> |= `seconds to deallocate network for instance` Return log lines that contain string seconds to deallocate network for instance. 5 <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<deallocate_time>\d+\.\d+) seconds to deallocate network for instance` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 6 <expr> | line_format `{{.deallocate_time}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
0.47
openstack
17
How many network-vif-plugged events were created in the last hour?
(sum (count_over_time({application="openstack", component="nova.api.openstack.compute.server_external_events"} |= "Creating event network-vif-plugged" | regexp "Creating event network-vif-plugged:(?P<event_id>[^ ]+) for instance (?P<instance_id>[^ ]+)" [1h])))
1 {application="openstack", component="nova.api.openstack.compute.server_external_events"} Fetch all log lines matching label filters. 2 <expr> |= `Creating event network-vif-plugged` Return log lines that contain string Creating event network-vif-plugged. 3 <expr> | regexp `Creating event network-vif-plugged:(?P<event_id>[^ ]+) for instance (?P<instance_id>[^ ]+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 5 sum(<expr>) Calculates sum over the dimensions.
1.91k
openssh
18
How many failed password attempts were there for the root user in the past 30 days?
sum(count_over_time({application="openssh", hostname="LabSZ"} |~ "Failed password for root from .* port .* ssh2" [30d]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for root from .* port .* ssh2` Return log lines that match a RE2 regex pattern. Failed password for root from .* port .* ssh2. 3 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 4 sum(<expr>) Calculates sum over the dimensions.
177k <graph>
openssh
19
How many unique users experienced authentication failures from the IP address 5.36.59.76?
count( sum(count_over_time({application="openssh", hostname="LabSZ"} |= ip("5.36.59.76") | regexp "(?P<message>(Failed password for (invalid user )?(?P<user>\\S+)|message repeated (?P<repeat_count>\\d+) times: \\[ Failed password for (invalid user )?(?P<repeated_user>\\S+))) from 5\\.36\\.59\\.76 port (?P<port>\\d+) ssh2" | __error__="" | label_format user="{{ or .user .repeated_user }}" | __error__="" [30d])) by (user))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= ip(`5.36.59.76`) Return log lines using IP matching of 5.36.59.76 3 <expr> | regexp `(?P<message>(Failed password for (invalid user )?(?P<user>\S+)|message repeated (?P<repeat_count>\d+) times: \[ Failed password for (invalid user )?(?P<repeated_user>\S+))) from 5\.36\.59\.76 port (?P<port>\d+) ssh2` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 <expr> | label_format user="{{ or .user .repeated_user }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | __error__=`` Filter out all formatting and parsing errors. 7 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 8 sum by(user) (<expr>) Calculates sum over dimensions while preserving label user. 9 count(<expr>) Calculates count over the dimensions.
1
openssh
20
List all successful SSH logins including username and source IP
{application="openssh"} |= "Accepted password for" | regexp "(?P<username>\\S+) from (?P<source_ip>\\S+) port (?P<port>\\d+) ssh2" | line_format "User: {{.username}}, Source IP: {{.source_ip}}"
1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> |= `Accepted password for` Return log lines that contain string Accepted password for. 3 <expr> | regexp `(?P<username>\S+) from (?P<source_ip>\S+) port (?P<port>\d+) ssh2` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `User: {{.username}}, Source IP: {{.source_ip}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
182 Results: User: jmzhu, Source IP: 183.11.69.89 User: curi, Source IP: 123.255.103.215 ... ...
openssh
21
What is the average number of successful logins per hour?
sum( count_over_time({application="openssh", hostname="LabSZ"} |= "Accepted password for" | logfmt | __error__="" [30d] ) ) / 24
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Accepted password for` Return log lines that contain string Accepted password for. 3 <expr> | logfmt This will extract all keys and values from a logfmt formatted log line as labels. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [30d]) The count of all values in the specified interval. The range vector is set to 30d. 6 sum(<expr>) Calculates sum over the dimensions. 7 <expr> / 24 no docs
7.58
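As a sanity check on the arithmetic (not part of the dataset): the "List all successful SSH logins" row above reports roughly 182 accepted-password lines, and 182 / 24 gives the reported value. Note that dividing a 30-day count by 24 only approximates a per-hour rate if the log data spans about one day; for a true hourly average over the full 30d window the divisor would be 720.

```python
# Back-of-the-envelope check; 182 comes from the successful-logins row above.
total_successful_logins = 182
print(round(total_successful_logins / 24, 2))  # -> 7.58
```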
openssh
22
Show me the most recent successful login for user 'fztu', including timestamp and source IP
{application="openssh"} |= "Accepted password for fztu" | regexp "(?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)"
1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> |= `Accepted password for fztu` Return log lines that contain string Accepted password for fztu. 3 <expr> | regexp `(?P<source_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation.
Table of all successful logins
openssh
23
How many possible break-in attempts were detected in the last 24 hours?
sum(count_over_time({application="openssh", hostname="LabSZ"} |~ "POSSIBLE BREAK-IN ATTEMPT" [24h]))
sum(count_over_time({application="openssh", hostname="LabSZ"} |~ "POSSIBLE BREAK-IN ATTEMPT" [])) 1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `POSSIBLE BREAK-IN ATTEMPT` Return log lines that match a RE2 regex pattern. POSSIBLE BREAK-IN ATTEMPT. 3 count_over_time(<expr> []) The count of all values in the specified interval. The range vector is set to ``. 4 sum(<expr>) Calculates sum over the dimensions.
19.4k
openssh
24
List the number of unique IP addresses that triggered a 'POSSIBLE BREAK-IN ATTEMPT' warning
count(sum by (ip) (count_over_time({application="openssh", hostname="LabSZ"} | regexp "(?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+).*POSSIBLE BREAK-IN ATTEMPT" | label_format ip="{{ .ip }}" [1h])))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> | regexp `(?P<ip>\d+\.\d+\.\d+\.\d+).*POSSIBLE BREAK-IN ATTEMPT` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 3 <expr> | label_format ip="{{ .ip }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 4 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 5 sum by(ip) (<expr>) Calculates sum over dimensions while preserving label ip. 6 count(<expr>) Calculates count over the dimensions.
297
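The count(sum by (ip) (count_over_time(...))) structure above counts distinct extracted IPs rather than matching lines. An illustrative Python equivalent (not part of the dataset; the sample lines are invented):

```python
import re

# Counts distinct source IPs that triggered the warning, analogous to
# count(sum by (ip) (count_over_time(...))).
pattern = re.compile(r"(?P<ip>\d+\.\d+\.\d+\.\d+).*POSSIBLE BREAK-IN ATTEMPT")
lines = [
    "Address 103.99.0.122 maps to example.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!",
    "Address 103.99.0.122 maps to example.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!",
    "Address 187.141.143.180 maps to example.net, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!",
]
unique_ips = {m.group("ip") for line in lines if (m := pattern.search(line))}
print(len(unique_ips))  # -> 2
```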
openssh
25
How many times did we see 'Connection reset by peer' errors in the last hour?
sum by (application) (count_over_time({application="openssh", hostname="LabSZ"} |~ "Connection reset by peer" [1h]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Connection reset by peer` Return log lines that match a RE2 regex pattern. Connection reset by peer. 3 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 4 sum by(application) (<expr>) Calculates sum over dimensions while preserving label application.
955
openssh
26
List all unique error messages related to disconnections, along with their frequency, in the past 12 hours
topk(1000, sum by (error_message) (count_over_time({application="openssh", hostname="LabSZ"} |~ "(?i)disconnect|disconnecting" | regexp "(?P<error_message>(error: Received disconnect from.*|Disconnecting:.*|Connection closed by.*|Received disconnect from.*|fatal:.*Connection reset by peer.*|error: connect_to.*failed\\.))" | __error__="" [12h])))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `(?i)disconnect|disconnecting` Return log lines that match a RE2 regex pattern. (?i)disconnect|disconnecting. 3 <expr> | regexp `(?P<error_message>(error: Received disconnect from.*|Disconnecting:.*|Connection closed by.*|Received disconnect from.*|fatal:.*Connection reset by peer.*|error: connect_to.*failed\.))` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [12h]) The count of all values in the specified interval. The range vector is set to 12h. 6 sum by(error_message) (<expr>) Calculates sum over dimensions while preserving label error_message. 7 topk(1000, <expr>) Calculates topk over the dimensions.
9
openssh
27
Show me all instances where we failed to receive an identification string from a connecting host
{application="openssh", hostname="LabSZ"} |= "Did not receive identification string from" | line_format "`{{ __timestamp__ }}`- Failed to receive identification string from {{.content}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Did not receive identification string from` Return log lines that contain string Did not receive identification string from. 3 <expr> | line_format ``{{ __timestamp__ }}`- Failed to receive identification string from {{.content}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
838 Lines of Unique Instances
openssh
28
How many unique users opened SSH sessions in the last 24 hours?
count( sum by (user) ( count_over_time( {application="openssh", hostname="LabSZ"} |~ "session opened for user" | regexp "session opened for user (?P<user>\\S+)" | __error__="" [24h] ) ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `session opened for user` Return log lines that match a RE2 regex pattern. session opened for user. 3 <expr> | regexp `session opened for user (?P<user>\S+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 6 sum by(user) (<expr>) Calculates sum over dimensions while preserving label user. 7 count(<expr>) Calculates count over the dimensions.
9
openssh
29
How many attempts were made to log in with invalid usernames in the last 24 hours?
sum(count_over_time({application="openssh", hostname="LabSZ"} |~ "Failed password for invalid user" [24h]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for invalid user` Return log lines that match a RE2 regex pattern. Failed password for invalid user. 3 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 4 sum(<expr>) Calculates sum over the dimensions.
19.9k
openssh
30
What are all the attempted invalid usernames?
{application="openssh", hostname="LabSZ"} | regexp "(?P<event>(Failed password for invalid user|Invalid user))\\s+(?P<username>\\S+)\\s+from"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> | regexp `(?P<event>(Failed password for invalid user|Invalid user))\s+(?P<username>\S+)\s+from` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 3 count_over_time(<expr> [1d]) The count of all values in the specified interval. The range vector is set to 1d.
1084 rows
openssh
31
Show me all the IP addresses that attempted to log in with the invalid username 'webmaster'
{application="openssh", hostname="LabSZ"} |= "Failed password for invalid user webmaster from (?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" | label_format ip="{{ .ip }}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Failed password for invalid user webmaster from (?P<ip>\d+\.\d+\.\d+\.\d+)` Return log lines that contain string Failed password for invalid user webmaster from (?P<ip>\d+\.\d+\.\d+\.\d+). 3 <expr> | label_format ip="{{ .ip }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level`
No logs found.
openssh
32
How many times did PAM ignore max retries in the last 24 hours?
sum(count_over_time({application="openssh", hostname="LabSZ"} |= "PAM service(sshd) ignoring max retries" [24h]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `PAM service(sshd) ignoring max retries` Return log lines that contain string PAM service(sshd) ignoring max retries. 3 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 4 sum(<expr>) Calculates sum over the dimensions.
37.9k
openssh
33
List all instances of 'Corrupted MAC on input' errors, including timestamp and source IP
{application="openssh", hostname="LabSZ"} |= "Corrupted MAC on input" | regexp "(?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" | line_format "Corrupted MAC on input from {{.source_ip}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Corrupted MAC on input` Return log lines that contain string Corrupted MAC on input. 3 <expr> | regexp `(?P<source_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `Corrupted MAC on input from {{.source_ip}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
1
openstack
34
What is the average time taken to build instances in the past week?
avg_over_time( {application="openstack", log_file_type="nova-compute"} |= "Took" |= "seconds to build instance" | regexp "\\[instance: (?P<instance_id>[^\\]]+)\\] Took (?P<build_time>\\d+\\.\\d+) seconds to build instance" | unwrap build_time [7d] )
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `Took` Return log lines that contain string Took. 3 <expr> |= `seconds to build instance` Return log lines that contain string seconds to build instance. 4 <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<build_time>\d+\.\d+) seconds to build instance` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | unwrap build_time Use the extracted label build_time as sample values instead of log lines for the subsequent range aggregation. 6 avg_over_time(<expr> [7d]) The average of all values in the specified interval. The range vector is set to 7d.
Instance Lifecycle
openstack
36
How many instances were terminated on compute node cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us yesterday?
sum(count_over_time({application="openstack", log_file_type="nova-compute", component="nova.compute.manager"} |= "cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us" |= "Terminating instance" [1d] offset 1d))
1 {application="openstack", log_file_type="nova-compute", component="nova.compute.manager"} Fetch all log lines matching label filters. 2 <expr> |= `cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us` Return log lines that contain string cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us. 3 <expr> |= `Terminating instance` Return log lines that contain string Terminating instance. 4 count_over_time(<expr> [1d]) The count of all values in the specified interval. The range vector is set to 1d. 5 sum(<expr>) Calculates sum over the dimensions.
0
Instance Lifecycle
openstack
37
How many compute nodes are there?
count(sum by (node_name, application) ( count_over_time( {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} |= "Final resource view: name=" | regexp "Final resource view: name=(?P<node_name>[^ ]+)" | label_format node_name="{{ .node_name }}" [1d])))
1 {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} Fetch all log lines matching label filters. 2 <expr> |= `Final resource view: name=` Return log lines that contain string Final resource view: name=. 3 <expr> | regexp `Final resource view: name=(?P<node_name>[^ ]+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | label_format node_name="{{ .node_name }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 5 count_over_time(<expr> [1d]) The count of all values in the specified interval. The range vector is set to 1d. 6 sum by(node_name, application) (<expr>) Calculates sum over dimensions while preserving labels node_name and application. 7 count(<expr>) Calculates count over the dimensions.
1
Resource Usage
openstack
38
What is the total disk space used by instances on compute node cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us?
last_over_time( {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} |= "cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us" |= "Final resource view:" | regexp "Final resource view: name=(?P<node>\\S+) .*used_disk=(?P<used_disk>\\d+)GB.*" | label_format used_disk_gb="{{.used_disk}}" | unwrap used_disk_gb [1h] )
1 {application="openstack", log_file_type="nova-compute", component="nova.compute.resource_tracker"} Fetch all log lines matching label filters. 2 <expr> |= `cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us` Return log lines that contain string cp-1.slowvm1.tcloud-pg0.utah.cloudlab.us. 3 <expr> |= `Final resource view:` Return log lines that contain string Final resource view:. 4 <expr> | regexp `Final resource view: name=(?P<node>\S+) .*used_disk=(?P<used_disk>\d+)GB.*` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | label_format used_disk_gb="{{.used_disk}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | unwrap used_disk_gb Use the extracted label used_disk_gb as sample values instead of log lines for the subsequent range aggregation. 7 last_over_time(<expr> [1h]) The last of all values in the specified interval. Only available in Loki 2.3+. The range vector is set to 1h.
20
Resource Usage
openstack
39
How many POST requests to /v2/*/os-server-external-events resulted in a 500 status code in the past 24 hours?
sum(count_over_time({application="openstack", log_file_type="nova-api"} |= "POST" |= "/v2/" |= "/os-server-external-events" |= "status: 500" | __error__="" [24h]))
1 {application="openstack", log_file_type="nova-api"} Fetch all log lines matching label filters. 2 <expr> |= `POST` Return log lines that contain string POST. 3 <expr> |= `/v2/` Return log lines that contain string /v2/. 4 <expr> |= `/os-server-external-events` Return log lines that contain string /os-server-external-events. 5 <expr> |= `status: 500` Return log lines that contain string status: 500. 6 <expr> | __error__=`` Filter out all formatting and parsing errors. 7 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 8 sum(<expr>) Calculates sum over the dimensions.
0
API Performance
openstack
40
What is the average response time for DELETE requests to /v2/*/servers/* in the last 7 days?
avg( avg_over_time( {application="openstack", log_file_type="nova-api"} |= "DELETE" |~ "/v2/.*/servers/.*" | regexp `"DELETE /v2/.*/servers/[^ ]+ HTTP/1\.1" status: (?P<status>\d+) len: \d+ time: (?P<response_time>\d+\.\d+)` | unwrap response_time [7d] ) )
1 {application="openstack", log_file_type="nova-api"} Fetch all log lines matching label filters. 2 <expr> |= `DELETE` Return log lines that contain string DELETE. 3 <expr> |~ `/v2/.*/servers/.*` Return log lines that match a RE2 regex pattern. /v2/.*/servers/.*. 4 <expr> | regexp `"DELETE /v2/.*/servers/[^ ]+ HTTP/1\.1" status: (?P<status>\d+) len: \d+ time: (?P<response_time>\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | unwrap response_time Use the extracted label response_time as sample values instead of log lines for the subsequent range aggregation. 6 avg_over_time(<expr> [7d]) The average of all values in the specified interval. The range vector is set to 7d. 7 avg(<expr>) Calculates avg over the dimensions.
0.268 <graph>
API Performance
openstack
41
How many 'HTTP exception thrown: No instances found for any event' errors occurred in the nova-api logs in the last 24 hours?
sum( count_over_time({application="openstack", log_file_type="nova-api", log_level="ERROR"} |= "HTTP exception thrown: No instances found for any event" [24h]) )
1 {application="openstack", log_file_type="nova-api", log_level="ERROR"} Fetch all log lines matching label filters. 2 <expr> |= `HTTP exception thrown: No instances found for any event` Return log lines that contain string HTTP exception thrown: No instances found for any event. 3 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 4 sum(<expr>) Calculates sum over the dimensions.
2.1k
Error Analysis
openstack
42
What are the top 5 most frequent errors encountered during ComputeManager._run_image_cache_manager_pass in the past week?
topk(5, sum by (error_message) (count_over_time({application="openstack", log_file_type=~"nova.*", log_level="ERROR"} |= "Error during ComputeManager._run_image_cache_manager_pass" | pattern "<_> Error during ComputeManager._run_image_cache_manager_pass: <error_message>" [1h])) )
1 {application="openstack", log_file_type=~"nova.*", log_level="ERROR"} Fetch all log lines matching label filters. 2 <expr> |= `Error during ComputeManager._run_image_cache_manager_pass` Return log lines that contain string Error during ComputeManager._run_image_cache_manager_pass. 3 <expr> | pattern `<_> Error during ComputeManager._run_image_cache_manager_pass: <error_message>` The pattern parser allows the explicit extraction of fields from log lines by defining a pattern expression (| pattern <pattern-expression>). The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 5 sum by(error_message) (<expr>) Calculates sum over dimensions while preserving label error_message. 6 topk(5, <expr>) Calculates topk over the dimensions.
8
Error Analysis
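The pattern parser step above captures everything after a fixed prefix as error_message, which is then grouped and ranked. A rough Python equivalent (not part of the dataset; the sample line is invented):

```python
from collections import Counter

# Rough analogue of `<_> Error during ComputeManager._run_image_cache_manager_pass: <error_message>`
# followed by sum by (error_message) and topk(5, ...).
marker = "Error during ComputeManager._run_image_cache_manager_pass: "
lines = [
    "2017-05-16 00:00:04.500 2931 ERROR nova.compute.manager "
    "Error during ComputeManager._run_image_cache_manager_pass: Unexpected API Error.",
]
counts = Counter(line.split(marker, 1)[1] for line in lines if marker in line)
print(counts.most_common(5))
```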
openstack
43
How many times did we receive a 503 status code while validating tokens in the last hour?
sum(count_over_time({application="openstack", log_level="ERROR", component="keystonemiddleware.auth_token"} |= "Bad response code while validating token: 503" [1h]))
1 {application="openstack", log_level="ERROR", component="keystonemiddleware.auth_token"} Fetch all log lines matching label filters. 2 <expr> |= `Bad response code while validating token: 503` Return log lines that contain string Bad response code while validating token: 503. 3 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 4 sum(<expr>) Calculates sum over the dimensions.
2
Error Analysis
openstack
44
What is the maximum time taken to allocate a network for an instance in the last 24 hours?
max_over_time( {application="openstack", log_file_type="nova-compute"} |= "Took" |= "seconds to allocate network for instance" | regexp "\\[instance: (?P<instance_id>[^\\]]+)\\] Took (?P<allocation_time>\\d+\\.\\d+) seconds to allocate network for instance" | unwrap allocation_time [24h] )
1 {application="openstack", log_file_type="nova-compute"} Fetch all log lines matching label filters. 2 <expr> |= `Took` Return log lines that contain string Took. 3 <expr> |= `seconds to allocate network for instance` Return log lines that contain string seconds to allocate network for instance. 4 <expr> | regexp `\[instance: (?P<instance_id>[^\]]+)\] Took (?P<allocation_time>\d+\.\d+) seconds to allocate network for instance` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | unwrap allocation_time Use the extracted label allocation_time as sample values instead of log lines for the subsequent range aggregation. 6 max_over_time(<expr> [24h]) The maximum of all values in the specified interval. The range vector is set to 24h.
NaN
Network Operations
openssh
45
How many failed password attempts were there for the root user from IP address 5.36.59.76 in the last 6 hours?
sum(count_over_time({application="openssh", hostname="LabSZ"} |= "Failed password for root from 5.36.59.76" |= "ssh2" [6h]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Failed password for root from 5.36.59.76` Return log lines that contain string Failed password for root from 5.36.59.76. 3 <expr> |= `ssh2` Return log lines that contain string ssh2. 4 count_over_time(<expr> [6h]) The count of all values in the specified interval. The range vector is set to 6h. 5 sum(<expr>) Calculates sum over the dimensions.
2
Authentication Failures
openssh
46
What are the top 3 usernames with the most authentication failures across all hosts in the past 12 hours?
topk(3, sum by (user) (count_over_time({application="openssh"} | regexp "(?P<event>(Failed password for (invalid user )?(?P<user>\\S+)|message repeated (?P<repeat_count>\\d+) times: \\[ Failed password for (invalid user )?(?P<repeat_user>\\S+)))" | label_format user="{{ or .user .repeat_user }}" | __error__="" [12h])))
1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> | regexp `(?P<event>(Failed password for (invalid user )?(?P<user>\S+)|message repeated (?P<repeat_count>\d+) times: \[ Failed password for (invalid user )?(?P<repeat_user>\S+)))` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 3 <expr> | label_format user="{{ or .user .repeat_user }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [12h]) The count of all values in the specified interval. The range vector is set to 12h. 6 sum by(user) (<expr>) Calculates sum over dimensions while preserving label user. 7 topk(3, <expr>) Calculates topk over the dimensions.1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> | regexp `(?P<event>(Failed password for (invalid user )?(?P<user>\S+)|message repeated (?P<repeat_count>\d+) times: \[ Failed password for (invalid user )?(?P<repeat_user>\S+)))` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 3 <expr> | label_format user="{{ or .user .repeat_user }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [12h]) The count of all values in the specified interval. The range vector is set to 12h. 6 sum by(user) (<expr>) Calculates sum over dimensions while preserving label user. 7 topk(3, <expr>) Calculates topk over the dimensions.
{user="admin"} {user="root"} {user="test"} <graph>
Authentication Failures
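The query above relies on a coalescing trick worth calling out: when a regexp contains alternative named groups, only one of them is populated per line, and label_format with the Go template `or` function picks whichever is non-empty. A minimal sketch of just that pattern, reusing the same label names (not a replacement for the stored query):
{application="openssh"} |~ "Failed password for" | regexp "Failed password for (invalid user )?(?P<user>\\S+)|\\[ Failed password for (invalid user )?(?P<repeat_user>\\S+)" | label_format user="{{ or .user .repeat_user }}" | line_format "{{.user}}"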
openssh
47
Show me all authentication failures where the number of retries exceeded the maximum allowed limit
{application="openssh", hostname="LabSZ"} |= "PAM service(sshd) ignoring max retries;" | regexp "PAM service\\(sshd\\) ignoring max retries; (?P<actual>\\d+) > (?P<max>\\d+)" | line_format "Max retries exceeded: {{.actual}} > {{.max}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `PAM service(sshd) ignoring max retries;` Return log lines that contain string PAM service(sshd) ignoring max retries;. 3 <expr> | regexp `PAM service\(sshd\) ignoring max retries; (?P<actual>\d+) > (?P<max>\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `{{.__timestamp__}} - Max retries exceeded: {{.actual}} > {{.max}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `PAM service(sshd) ignoring max retries;` Return log lines that contain string PAM service(sshd) ignoring max retries;. 3 <expr> | regexp `PAM service\(sshd\) ignoring max retries; (?P<actual>\d+) > (?P<max>\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `{{.__timestamp__}} - Max retries exceeded: {{.actual}} > {{.max}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
Log Table with the result
Authentication Failures
openssh
48
How many times has the message 'Disconnecting: Too many authentication failures' appeared in the logs in the last 24 hours?
sum(count_over_time({application="openssh", hostname="LabSZ"} |= "Disconnecting: Too many authentication failures" [24h]))
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Disconnecting: Too many authentication failures` Return log lines that contain string Disconnecting: Too many authentication failures. 3 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 4 sum(<expr>) Calculates sum over the dimensions.
37.3k
Brute Force Attempts
openssh
49
Show me the usernames targeted in brute force attacks where the attacker tried more than 10 different passwords
count by (user) (count_over_time({application="openssh", hostname="LabSZ"} |~ "Failed password for (invalid user )?(\\S+) from" | regexp "Failed password for (invalid user )?(?P<user>\\S+) from" [1d]) ) > 10
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for (invalid user )?(\S+) from` Return log lines that match a RE2 regex pattern. Failed password for (invalid user )?(\S+) from. 3 <expr> | regexp `Failed password for (invalid user )?(?P<user>\S+) from` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [1d]) The count of all values in the specified interval. The range vector is set to 1d. 5 count by(user) (<expr>) Calculates count over dimensions while preserving label user. 6 <expr> > 10
148 <graph>
Brute Force Attempts
openssh
50
How many 'POSSIBLE BREAK-IN ATTEMPT' warnings were logged in the past week, grouped by source IP address?
sum by (source_ip) ( count_over_time( {application="openssh", hostname="LabSZ"} |~ "POSSIBLE BREAK-IN ATTEMPT" | regexp "(?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" [1w] ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `POSSIBLE BREAK-IN ATTEMPT` Return log lines that match a RE2 regex pattern. POSSIBLE BREAK-IN ATTEMPT. 3 <expr> | regexp `(?P<source_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [1w]) The count of all values in the specified interval. The range vector is set to 1w. 5 sum by(source_ip) (<expr>) Calculates sum over dimensions while preserving label source_ip.
296
Suspicious Activities
openssh
51
List all instances where a connection was established without receiving an identification string, including timestamp and source IP
{application="openssh", hostname="LabSZ"} |= "Did not receive identification string from" | regexp "Did not receive identification string from (?P<source_ip>\\S+)" | line_format "Failed to receive identification string from {{.source_ip}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Did not receive identification string from` Return log lines that contain string Did not receive identification string from. 3 <expr> | regexp `Did not receive identification string from (?P<source_ip>\S+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `Failed to receive identification string from {{.source_ip}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
838 Rows
Suspicious Activities
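The question also asks for the timestamp, while the stored query only reformats the source IP. Assuming a Loki version whose line_format templates expose the __timestamp__ function (available in recent releases), a hedged variant might look like:
{application="openssh", hostname="LabSZ"} |= "Did not receive identification string from" | regexp "Did not receive identification string from (?P<source_ip>\\S+)" | line_format "{{ __timestamp__ }} - no identification string received from {{.source_ip}}"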
openssh
52
Show me all occurrences of 'Bad protocol version identification' errors, along with the incorrect version strings used
{application="openssh", hostname="LabSZ"} |= "Bad protocol version identification" | regexp "Bad protocol version identification '(?P<version>.*?)' from (?P<ip>\\S+) port (?P<port>\\d+)" | line_format "Bad protocol version '{{.version}}' from IP: {{.ip}} on port: {{.port}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `Bad protocol version identification` Return log lines that contain string Bad protocol version identification. 3 <expr> | regexp `Bad protocol version identification '(?P<version>.*?)' from (?P<ip>\S+) port (?P<port>\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `Bad protocol version '{{.version}}' from IP: {{.ip}} on port: {{.port}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
22 Rows
Suspicious Activities
openssh
53
What is the distribution of SSH disconnection reasons across all hosts in the last 24 hours?
sum by (reason) ( count_over_time({application="openssh", hostname="LabSZ"} |~ "Received disconnect from|Disconnecting:|error: Received disconnect from" | regexp "(?P<reason>(Received disconnect from [^:]+: (?P<code>\\d+): (?P<message>[^\\[]+))|(Disconnecting: [^\\[]+)|(error: Received disconnect from [^:]+: (?P<error_code>\\d+): (?P<error_message>.+)))" | label_format reason="{{ or .message .reason .error_message }}" [24h]) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Received disconnect from|Disconnecting:|error: Received disconnect from` Return log lines that match a RE2 regex pattern. Received disconnect from|Disconnecting:|error: Received disconnect from. 3 <expr> | regexp `(?P<reason>(Received disconnect from [^:]+: (?P<code>\d+): (?P<message>[^\[]+))|(Disconnecting: [^\[]+)|(error: Received disconnect from [^:]+: (?P<error_code>\d+): (?P<error_message>.+)))` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | label_format reason="{{ or .message .reason .error_message }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 5 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 6 sum by(reason) (<expr>) Calculates sum over dimensions while preserving label reason.
33 unique reasons
Connection Analysis
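With 33 distinct reasons, it can help to surface only the most common ones. A simplified sketch (covering only the "Disconnecting:" form, for brevity) that ranks the top 5 reasons:
topk(5, sum by (reason) (count_over_time({application="openssh", hostname="LabSZ"} |~ "Disconnecting: " | regexp "Disconnecting: (?P<reason>[^\\[]+)" [24h])))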
openssh
54
How many connections were closed during the pre-authentication phase, grouped by source IP address?
sum by (source_ip) ( count_over_time({application="openssh", hostname="LabSZ"} |~ "Connection closed by .* \\[preauth\\]" | regexp "Connection closed by (?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" | __error__="" [24h] ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Connection closed by .* \[preauth\]` Return log lines that match a RE2 regex pattern. Connection closed by .* \[preauth\]. 3 <expr> | regexp `Connection closed by (?P<source_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 6 sum by(source_ip) (<expr>) Calculates sum over dimensions while preserving label source_ip.
373 Unique Source IPs
Connection Analysis
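If the goal is to flag only noisy sources rather than list all 373 IPs, the grouped count can be filtered with a threshold; the value 50 below is an arbitrary example:
sum by (source_ip) (count_over_time({application="openssh", hostname="LabSZ"} |~ "Connection closed by .* \\[preauth\\]" | regexp "Connection closed by (?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" | __error__="" [24h])) > 50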
openssh
55
What is the average session duration for user 'fztu' across all hosts in the past week?
avg( (sum by (session_id) ( count_over_time({application="openssh"} |~ "pam_unix\\(sshd:session\\): session (opened|closed) for user fztu" | __error__="" | line_format "{{.timestamp}} {{.content}}" | regexp "(?P<timestamp>\\S+\\s+\\S+).*pam_unix\\(sshd:session\\): session (?P<action>opened|closed) for user fztu" | label_format session_id="{{.timestamp}}_{{.action}}" [1w] )) * 2 > 1 ) / 2 )
1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> |~ `pam_unix\(sshd:session\): session (opened|closed) for user fztu` Return log lines that match a RE2 regex pattern. pam_unix\(sshd:session\): session (opened|closed) for user fztu. 3 <expr> | __error__=`` Filter out all formatting and parsing errors. 4 <expr> | line_format `{{.timestamp}} {{.content}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}} Read the docs for more. 5 <expr> | regexp `(?P<timestamp>\S+\s+\S+).*pam_unix\(sshd:session\): session (?P<action>opened|closed) for user fztu` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 6 <expr> | label_format session_id="{{.timestamp}}_{{.action}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 7 count_over_time(<expr> [1w]) The count of all values in the specified interval. The range vector is set to 1w. 8 sum by(session_id) (<expr>) Calculates sum over dimensions while preserving label session_id. 9 <expr> / 2 no docs 10 avg(<expr>) Calculates avg over the dimensions.
60
User Session Analysis
openssh
56
Show me a timeline of session openings and closings for all users on host 'webserver01' in the last 24 hours
{application="openssh", hostname="LabSZ"} |= "session opened for user" or "session closed for user" | regexp "(?P<action>opened|closed) for user (?P<username>\\S+)" | line_format "Session {{.action}} for user {{.username}}"
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |= `session opened for user` or `session closed for user` Return log lines that contain string session opened for user, or session closed for user. 3 <expr> | regexp `(?P<action>opened|closed) for user (?P<username>\S+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | line_format `Session {{.action}} for user {{.username}}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}}
364 rows in result table
User Session Analysis
openssh
57
What are the top 10 most frequently attempted invalid usernames, along with their attempt counts, in the last 7 days?
topk(10, sum by (invalid_user) ( count_over_time({application="openssh", hostname="LabSZ"} |~ "Failed password for invalid user .* from .* port .* ssh2" | regexp "Failed password for invalid user (?P<invalid_user>\\S+) from" [7d]) ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for invalid user .* from .* port .* ssh2` Return log lines that match a RE2 regex pattern. Failed password for invalid user .* from .* port .* ssh2. 3 <expr> | regexp `Failed password for invalid user (?P<invalid_user>\S+) from` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [7d]) The count of all values in the specified interval. The range vector is set to 7d. 5 sum by(invalid_user) (<expr>) Calculates sum over dimensions while preserving label invalid_user. 6 topk(10, <expr>) Calculates topk over the dimensions.
18 <graph>
Invalid User Attempts
openssh
58
Show me all unique IP addresses that attempted to log in with the invalid username 'admin' more than 5 times
sum by (ip) ( count_over_time( {application="openssh", hostname="LabSZ"} |~ "Failed password for invalid user admin from" | regexp "Failed password for invalid user admin from (?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+) port \\d+ ssh2" [24h] ) ) > 5
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for invalid user admin from` Return log lines that match a RE2 regex pattern. Failed password for invalid user admin from. 3 <expr> | regexp `Failed password for invalid user admin from (?P<ip>\d+\.\d+\.\d+\.\d+) port \d+ ssh2` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 5 sum by(ip) (<expr>) Calculates sum over dimensions while preserving label ip. 6 <expr> > 5
226 unique IP addresses
Invalid User Attempts
openssh
59
How many attempts were made to authenticate with invalid users from each unique source IP in the past 24 hours? (more than 200 attempts)
sum( sum by (source_ip) ( count_over_time( {application="openssh", hostname="LabSZ"} |~ "Failed password for invalid user" | regexp "Failed password for invalid user (?P<invalid_user>\\S+) from (?P<source_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" | __error__="" [24h] ) ) > 200 )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `Failed password for invalid user` Return log lines that match a RE2 regex pattern. Failed password for invalid user. 3 <expr> | regexp `Failed password for invalid user (?P<invalid_user>\S+) from (?P<source_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 <expr> | __error__=`` Filter out all formatting and parsing errors. 5 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 6 sum by(source_ip) (<expr>) Calculates sum over dimensions while preserving label source_ip. 7 <expr> > 200
11.5k
Invalid User Attempts
openssh
60
How many times did we encounter 'Corrupted MAC on input' errors in the last week, grouped by host?
sum by (hostname) (count_over_time({application="openssh"} |= "Corrupted MAC on input" [1w]))
1 {application="openssh"} Fetch all log lines matching label filters. 2 <expr> |= `Corrupted MAC on input` Return log lines that contain string Corrupted MAC on input. 3 count_over_time(<expr> [1w]) The count of all values in the specified interval. The range vector is set to 1w. 4 sum by(hostname) (<expr>) Calculates sum over dimensions while preserving label hostname.
1
System Health and Performance
openssh
61
List all 'Bad packet length' errors, including the erroneous length values, sorted by frequency
sum by (length) ( count_over_time( {application="openssh", hostname="LabSZ"} | regexp "Bad packet length (?P<length>\\d+)\\." | __error__="" [7d] ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> | regexp `Bad packet length (?P<length>\d+)\.` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 3 <expr> | __error__=`` Filter out all formatting and parsing errors. 4 count_over_time(<expr> [7d]) The count of all values in the specified interval. The range vector is set to 7d. 5 sum by(length) (<expr>) Calculates sum over dimensions while preserving label length.
1 <graph>
System Health and Performance
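Since LogQL returns an unsorted vector, one way to get a frequency-ordered view is to wrap the aggregation in topk (Grafana can also sort the table client-side). A sketch with an added line filter so only matching lines are counted:
topk(10, sum by (length) (count_over_time({application="openssh", hostname="LabSZ"} |= "Bad packet length" | regexp "Bad packet length (?P<length>\\d+)\\." | __error__="" [7d])))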
openssh
62
Show me the trend of 'PAM service ignoring max retries' occurrences over the past day.
sum by (timestamp) ( count_over_time( {application="openssh", hostname="LabSZ"} |~ "PAM service\\(sshd\\) ignoring max retries" | __error__="" | label_format day="{{.timestamp}}" [1d] ) )
1 {application="openssh", hostname="LabSZ"} Fetch all log lines matching label filters. 2 <expr> |~ `PAM service\(sshd\) ignoring max retries` Return log lines that match a RE2 regex pattern. PAM service\(sshd\) ignoring max retries. 3 <expr> | __error__=`` Filter out all formatting and parsing errors. 4 <expr> | label_format day="{{.timestamp}}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 5 count_over_time(<expr> [1d]) The count of all values in the specified interval. The range vector is set to 1d. 6 sum by(timestamp) (<expr>) Calculates sum over dimensions while preserving label timestamp
38.0k
System Health and Performance
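For a trend line rather than a single total, a common pattern is to run a range query with a shorter window so each evaluation step yields an hourly count; a sketch assuming the dashboard controls the step/interval:
sum(count_over_time({application="openssh", hostname="LabSZ"} |~ "PAM service\\(sshd\\) ignoring max retries" [1h]))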
hdfs
63
How many unique blocks were deleted across all DataNodes in the last 24 hours?
count( sum by (block_id) ( count_over_time( {application="hdfs"} |~ "Deleting block blk_.*" | regexp "Deleting block (?P<block_id>blk_[^ ]+)" [1h] ) ) )
1 {application="hdfs"} Fetch all log lines matching label filters. 2 <expr> |~ `Deleting block blk_.*` Return log lines that match a RE2 regex pattern. Deleting block blk_.*. 3 <expr> | regexp `Deleting block (?P<block_id>blk_[^ ]+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 5 sum by(block_id) (<expr>) Calculates sum over dimensions while preserving label block_id. 6 count(<expr>) Calculates count over the dimensions.
2.08k
Block Management
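The stored query counts unique blocks over a 1-hour window; if the full 24-hour window from the question is wanted, the same count(sum by(...)) de-duplication idiom applies with a wider range (the result will differ from the 1-hour figure above):
count(sum by (block_id) (count_over_time({application="hdfs"} |~ "Deleting block blk_.*" | regexp "Deleting block (?P<block_id>blk_[^ ]+)" [24h])))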
hdfs
64
What are the top 5 DataNodes with the most blocks added to their invalidSet in the past 12 hours?
topk(5, sum by (datanode) ( count_over_time( {application="hdfs", component="dfs.FSNamesystem"} |~ "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of .*" | regexp "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of (?P<datanode>[\\d\\.]+:\\d+)" [12h] ) ) )
Explanation of the query: 1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles these operations. 2. `|~ "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of .*"`: This line filter matches log lines containing the invalidSet addition event. 3. `| regexp "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of (?P<datanode>[\\d\\.]+:\\d+)"`: This extracts the DataNode IP and port using a regular expression and assigns it to the label "datanode". 4. `[12h]`: This specifies the 12-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique datanode over the specified time range. 6. `sum by (datanode) (...)`: This sums the counts for each unique datanode, giving us the total number of blocks added to the invalidSet for each DataNode. 7. `topk(5, ...)`: This selects the top 5 DataNodes with the highest counts. This query efficiently identifies and ranks the top 5 DataNodes that had the most blocks added to their invalidSet in the past 12 hours. The result will show the DataNode IP:port and the count of blocks added to its invalidSet.
"10.251.198.196:50010", "10.251.123.33:50010", "10.251.127.243:50010", "10.251.106.10:50010", "10.251.35.1:50010"
Block Management
hdfs
65
How many blocks were successfully verified in the last 6 hours?
sum( count_over_time( {application="hdfs", component="dfs.DataBlockScanner", log_level="INFO"} |= "Verification succeeded for" [6h] ) )
Explanation of the query: 1. `{component="dfs.DataBlockScanner", log_level="INFO"}`: This selects all logs from the DataBlockScanner component with INFO log level, which is likely to contain the verification success messages. 2. `|= "Verification succeeded for"`: This line filter matches log lines containing the exact phrase "Verification succeeded for", which corresponds to successful block verification events. 3. `[6h]`: This specifies the 6-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of blocks successfully verified across all DataNodes. This query efficiently counts the number of blocks that were successfully verified in the last 6 hours across all DataNodes. The result will be a single number representing the total count of successful block verifications.
20.7k
Block Management
hdfs
66
What is the average size of blocks received by DataNodes in the last 4 hours?
avg( avg_over_time( {application="hdfs"} |~ "Received block .* of size .* from .*" | regexp "Received block .* of size (?P<size>[0-9]+) from .*" | unwrap size [4h] ) )
Explanation of the query: 1. `{application="hdfs"}`: This selects all logs from the HDFS application; the DataNode component logs the block receiving events. 2. `|~ "Received block .* of size .* from .*"`: This line filter matches log lines containing the block received event. 3. `| regexp "Received block .* of size (?P<size>[0-9]+) from .*"`: This extracts the block size using a regular expression and assigns it to the label "size". 4. `| unwrap size`: This unwraps the "size" label, converting it from a string to a numeric value that can be used in calculations. 5. `[4h]`: This specifies the 4-hour time range as requested in the question. 6. `avg_over_time(...)`: This calculates the average of the unwrapped size values over the specified time range for each unique combination of labels. 7. `avg(...)`: This calculates the overall average across all DataNodes. This query efficiently calculates the average size of blocks received by DataNodes in the last 4 hours. The result will be a single value representing the average block size in bytes.
65.1 Mil
Data Transfer and Replication
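Beyond the mean, the same unwrap pipeline supports other range aggregations; for instance a 95th-percentile block size (computed per log stream, so it may need an outer aggregation or a by() clause to collapse the result):
quantile_over_time(0.95, {application="hdfs"} |~ "Received block .* of size .* from .*" | regexp "Received block .* of size (?P<size>[0-9]+) from .*" | __error__="" | unwrap size [4h])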
hdfs
67
How many block replication requests were issued to DataNode 10.250.14.224:50010 in the past 24 hours?
sum( count_over_time( {component="dfs.FSNamesystem"} |~ "BLOCK\\* ask 10\\.250\\.14\\.224:50010 to replicate .* to datanode\\(s\\) .*" [24h] ) )
Explanation of the query: 1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles replication requests. 2. `|~ "BLOCK\\* ask 10\\.250\\.14\\.224:50010 to replicate .* to datanode\\(s\\) .*"`: This line filter matches log lines containing the block replication request event specifically for the DataNode 10.250.14.224:50010. The IP address and port are explicitly included in the regex to ensure we only count requests for this specific DataNode. 3. `[24h]`: This specifies the 24-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts. In this case, it's not strictly necessary as we're already filtering for a specific DataNode, but it ensures we get a single total count even if there were multiple streams of logs. This query efficiently counts the number of block replication requests issued to DataNode 10.250.14.224:50010 in the past 24 hours. The result will be a single value representing the total count of such requests.
1
Data Transfer and Replication
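To compare replication load across all DataNodes instead of a single hard-coded address, the target DataNode can be extracted into a label and grouped; a hedged sketch:
sum by (datanode) (count_over_time({component="dfs.FSNamesystem"} |~ "BLOCK\\* ask .* to replicate .* to datanode\\(s\\)" | regexp "BLOCK\\* ask (?P<datanode>[\\d\\.]+:\\d+) to replicate" | __error__="" [24h]))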
hdfs
68
Which source IP had the highest number of successful block transmissions in the last 12 hours?
topk(1, sum by (source_ip) ( count_over_time( {application="hdfs", component="dfs.DataNode$DataTransfer"} |~ "Transmitted block .* to .*" | regexp "(?P<source_ip>[\\d\\.]+):\\d+:Transmitted block .* to .*" [12h] ) ) )
Explanation of the query: 1. `{application="hdfs", component="dfs.DataNode$DataTransfer"}`: This selects all logs from the HDFS application and the DataNode's DataTransfer component, which handles data transfer operations. 2. `|~ "Transmitted block .* to .*"`: This line filter matches log lines containing successful block transmission events. 3. `| regexp "(?P<source_ip>[\\d\\.]+):\\d+:Transmitted block .* to .*"`: This extracts the source IP address using a regular expression and assigns it to the label "source_ip". Note that we're capturing only the IP part, not the port. 4. `[12h]`: This specifies the 12-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique source_ip over the specified time range. 6. `sum by (source_ip) (...)`: This sums the counts for each unique source_ip, giving us the total number of successful block transmissions for each source IP. 7. `topk(1, ...)`: This selects the top 1 source IP with the highest count of successful block transmissions. This query efficiently identifies the source IP that had the highest number of successful block transmissions in the last 12 hours. The result will show the source IP and the count of successful block transmissions.
"10.251.65.203", "10.251.43.21"
Data Transfer and Replication
hdfs
69
What are the top 3 most frequent exceptions encountered during writeBlock operations in the past 24 hours?
topk(3, sum by (exception_type) ( count_over_time( {component=~"dfs.DataNode.*"} |~ "writeBlock .* received exception" | regexp "writeBlock .* received exception (?P<exception_type>[^:]+)" [24h] ) ) )
Explanation of the query: 1. `{component=~"dfs.DataNode.*"}`: This selects all logs from DataNode-related components, which handle writeBlock operations. 2. `|~ "writeBlock .* received exception"`: This line filter matches log lines containing writeBlock exceptions. 3. `| regexp "writeBlock .* received exception (?P<exception_type>[^:]+)"`: This extracts the exception type using a regular expression and assigns it to the label "exception_type". It captures everything after "received exception" up to the first colon, which typically gives us the exception class name. 4. `[24h]`: This specifies the 24-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique exception_type over the specified time range. 6. `sum by (exception_type) (...)`: This sums the counts for each unique exception_type, giving us the total number of occurrences for each exception type. 7. `topk(3, ...)`: This selects the top 3 exception types with the highest counts. This query efficiently identifies and ranks the top 3 most frequent exceptions encountered during writeBlock operations in the past 24 hours. The result will show the exception type and the count of occurrences for each of the top 3 exceptions.
{exception_type="java.io.EOFException"} {exception_type="java.io.IOException"} {exception_type="java.io.InterruptedIOException"}
Error Analysis
hdfs
70
How many unique blocks experienced 'Connection reset by peer' errors during receiveBlock in the last 6 hours?
count( sum by (block_id) ( count_over_time( {component=~"dfs.DataNode.*"} |~ "Exception in receiveBlock for block .* java.io.IOException: Connection reset by peer" | regexp "Exception in receiveBlock for block (?P<block_id>blk_[^ ]+) .*" [6h] ) ) )
Explanation of the query: 1. `{component=~"dfs.DataNode.*"}`: This selects all logs from the DataNode component and its subcomponents, which handle block receiving operations. 2. `|~ "Exception in receiveBlock for block .* java.io.IOException: Connection reset by peer"`: This line filter matches log lines containing the specific error event we're interested in. 3. `| regexp "Exception in receiveBlock for block (?P<block_id>blk_[^ ]+) .*"`: This extracts the block ID using a regular expression and assigns it to the label "block_id". 4. `[6h]`: This specifies the 6-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique block_id over the specified time range. 6. `sum by (block_id) (...)`: This sums the counts for each unique block_id, effectively collapsing multiple occurrences of the same block_id into a single entry. 7. `count(...)`: This counts the number of unique block_ids after summing, giving us the total number of unique blocks that experienced the specific error. This query efficiently counts the number of unique blocks that experienced 'Connection reset by peer' errors during receiveBlock operations in the last 6 hours. The result will be a single value representing the count of unique blocks affected by this error.
5
Error Analysis
hdfs
71
Which DataNodes had the highest number of 'Broken pipe' exceptions in PacketResponder threads in the past 12 hours?
{component="dfs.DataNode$PacketResponder", log_level="INFO"} |~ "PacketResponder .* Exception java.io.IOException: Broken pipe" | regexp "(?P<datanode>\\d+\\.\\d+\\.\\d+\\.\\d+:\\d+).*PacketResponder .* Exception java.io.IOException: Broken pipe"
Explanation of the query: 1. `{component="dfs.DataNode$PacketResponder", log_level="INFO"}`: This selects all logs from the PacketResponder component with INFO log level. 2. `|~ "PacketResponder .* Exception java.io.IOException: Broken pipe"`: This line filter matches log lines containing the 'Broken pipe' exception in PacketResponder. 3. `| regexp "(?P<datanode>\\d+\\.\\d+\\.\\d+\\.\\d+:\\d+).*PacketResponder .* Exception java.io.IOException: Broken pipe"`: This extracts the DataNode IP and port using a regular expression and assigns it to the label "datanode". We assume the DataNode information is at the beginning of the log line, which is common in distributed system logs.
blk_-4567777441263358151 blk_3858821904894294369
Error Analysis
hdfs
72
What is the average time taken for a block to be transmitted between DataNodes in the last hour?
( sum(rate({application="hdfs"} |~ "Transmitted block" [1h])) / sum(rate({application="hdfs"} |~ "Starting thread to transfer block" [1h])) ) * 3600
Explanation of the query: 1. `{application="hdfs"}`: This selects all logs from the HDFS application. 2. `|~ "Transmitted block"` and `|~ "Starting thread to transfer block"`: These line filters match log lines containing the end and start of block transfer events, respectively. 3. `[1h]`: This specifies the 1-hour time range as requested in the question. 4. `rate(... [1h])`: This calculates the per-second rate of occurrences for each event over the last hour. 5. `sum(...)`: This sums the rates across all DataNodes. 6. The division `(...) / (...)` gives the ratio of completed transmissions to started transfers over the window. 7. `* 3600`: This scales that ratio by the number of seconds in the one-hour window, expressing the result as an approximate time in seconds. This query approximates the average time taken for a block to be transmitted between DataNodes in the last hour. It does this by calculating the ratio of completed transmissions to started transmissions and then converting this to an average time in seconds. The result will be a single value representing the approximate average time in seconds for a block to be transmitted between DataNodes in the last hour. Note that this method assumes that the rate of starting and completing transfers is relatively constant over the time period. It may not be as accurate as directly measuring the time for each individual transfer, but it provides a rough approximation given the limitations of LogQL.
20.4k <graph> average value graph over 1 hour
Performance Monitoring
hdfs
73
How many blocks took longer than 2 minutes to be received by a DataNode due to SocketTimeoutException in the past 24 hours?
sum( count_over_time( {application="hdfs", component=~"dfs.DataNode|dfs.DataNode$DataTransfer"} |~ "Exception in receiveBlock for block .* java.net.SocketTimeoutException: .* millis timeout" | regexp "Exception in receiveBlock for block .* java.net.SocketTimeoutException: (?P<timeout>[0-9]+) millis timeout" # | unwrap timeout | __error__="" | timeout > 120000 [24h] ) )
1. `{application="hdfs", component=~"dfs.DataNode|dfs.DataNode$DataTransfer"}`: This selects all logs from the HDFS application, specifically from the DataNode and DataNode$DataTransfer components. 2. `|~ "Exception in receiveBlock for block .* java.net.SocketTimeoutException: .* millis timeout"`: This line filter matches log lines containing the SocketTimeoutException event for receiving blocks. 3. `| regexp "Exception in receiveBlock for block .* java.net.SocketTimeoutException: (?P<timeout>[0-9]+) millis timeout"`: This extracts the timeout duration using a regular expression and assigns it to the label "timeout". 4. `| unwrap timeout`: This unwraps the "timeout" label, converting it from a string to a numeric value that can be used in calculations. 5. `| __error__=""`: This filters out any lines where the unwrap operation failed. 6. `| timeout > 120000`: This filters for events where the timeout exceeds 2 minutes (120,000 milliseconds). 7. `[24h]`: This specifies the 24-hour time range as requested in the question. 8. `count_over_time(...)`: This counts the occurrences of events that meet all the above criteria over the specified time range. 9. `sum(...)`: This sums the counts, giving us the total number of events across all DataNodes. This query efficiently counts the number of blocks that took longer than 2 minutes to be received by a DataNode due to SocketTimeoutException in the past 24 hours. The result will be a single value representing this count.
4
Performance Monitoring
hdfs
74
How many times did the NameSystem update the blockMap in the past 6 hours?
sum(count_over_time({application="hdfs", component="dfs.FSNamesystem"} |~ "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:" [6h]))
1. `{application="hdfs", component="dfs.FSNamesystem"}`: This selects all logs from the HDFS application and the FSNamesystem component, which handles NameSystem operations. 2. `|~ "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:"`: This line filter matches log lines containing the blockMap update event. 3. `[6h]`: This specifies the 6-hour time range as requested in the question. 4. `count_over_time(... [6h])`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums the counts across all instances, giving us the total number of blockMap updates. This query efficiently counts the number of times the NameSystem updated the blockMap in the past 6 hours. The result will be a single value representing the total count of blockMap updates during this period.
249k
NameNode Operations
hdfs
75
How many blocks were found not belonging to any file during addStoredBlock requests in the past 12 hours?
sum(count_over_time({application="hdfs", component="dfs.FSNamesystem"} |~ "BLOCK\\* NameSystem\\.addStoredBlock: addStoredBlock request received for .* But it does not belong to any file\\." [12h]))
1. `{application="hdfs", component="dfs.FSNamesystem"}`: This selects all logs from the HDFS application and the FSNamesystem component, which handles these operations. 2. `|~ "BLOCK\\* NameSystem\\.addStoredBlock: addStoredBlock request received for .* But it does not belong to any file\\."`: This line filter matches log lines containing the specific event we're interested in, where a block is found not to belong to any file during an addStoredBlock request. 3. `[12h]`: This specifies the 12-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of occurrences across all instances. This query efficiently counts the number of blocks found not belonging to any file during addStoredBlock requests in the past 12 hours. The result will be a single value representing the total count of such events.
6
NameNode Operations
hdfs
76
How many blocks were added to the invalidSet in the last hour?
sum( count_over_time( {component="dfs.FSNamesystem"} |~ "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of .*" [1h] ) )
1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles these operations. 2. `|~ "BLOCK\\* NameSystem\\.delete: .* is added to invalidSet of .*"`: This line filter matches log lines containing the invalidSet addition event. 3. `[1h]`: This specifies the 1-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of blocks added to the invalidSet across all DataNodes. This query efficiently counts the total number of blocks added to the invalidSet in the last hour across all DataNodes. The result will be a single value representing the count of blocks added to the invalidSet.
5.81k
Block Management
hdfs
77
What are the top 5 DataNodes that served the most blocks in the past 24 hours?
topk(5, sum by (datanode) ( count_over_time( {application="hdfs"} |~ "Served block .* to .*" | regexp "(?P<datanode>[\\d\\.]+:\\d+) Served block .* to .*" [24h] ) ) )
1. `{application="hdfs"}`: This selects all logs from the DataNode component, which handles block serving operations. 2. `|~ "Served block .* to .*"`: This line filter matches log lines containing the block served event. 3. `| regexp "(?P<datanode>[\\d\\.]+:\\d+) Served block .* to .*"`: This extracts the DataNode IP and port using a regular expression and assigns it to the label "datanode". 4. `[24h]`: This specifies the 24-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique datanode over the specified time range. 6. `sum by (datanode) (...)`: This sums the counts for each unique datanode, giving us the total number of blocks served by each DataNode. 7. `topk(5, ...)`: This selects the top 5 DataNodes with the highest counts. This query efficiently identifies and ranks the top 5 DataNodes that served the most blocks in the past 24 hours. The result will show the DataNode IP:port and the count of blocks served, sorted in descending order.
{datanode="10.250.11.100:50010"} {datanode="10.251.111.228:50010"} {datanode="10.251.194.213:50010"} {datanode="10.251.197.226:50010"} {datanode="10.251.75.49:50010"}
Block Management
hdfs
78
How many unique blocks were received across all DataNodes in the last 4 hours?
count( sum by (block_id) ( count_over_time( {component="dfs.DataNode"} |~ "Received block .* of size .* from .*" | regexp "Received block (?P<block_id>blk_[^ ]+) of size" [4h] ) ) )
Explanation of the query: 1. `{component="dfs.DataNode"}`: This selects all logs from the DataNode component, which handles block receiving operations. 2. `|~ "Received block .* of size .* from .*"`: This line filter matches log lines containing the block received event. 3. `| regexp "Received block (?P<block_id>blk_[^ ]+) of size"`: This extracts the block ID using a regular expression and assigns it to the label "block_id". 4. `[4h]`: This specifies the 4-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique block_id over the specified time range. 6. `sum by (block_id) (...)`: This sums the counts for each unique block_id, effectively collapsing multiple occurrences of the same block_id into a single entry. 7. `count(...)`: This counts the number of unique block_ids after summing, giving us the total number of unique blocks received. This query efficiently counts the number of unique blocks received across all DataNodes in the last 4 hours, regardless of how many times each block was received. It provides an accurate count even if a block is received multiple times due to replication or other factors.
82.9k
Block Management
hdfs
79
What is the average size of blocks transferred between DataNodes in the last 12 hours?
avg( avg_over_time( {application="hdfs"} |~ "Received block .* src: .* dest: .* of size .*" | regexp "Received block .* src: .* dest: .* of size (?P<size>[0-9]+)" | unwrap size [12h] ) )
1. `{component="dfs.DataNode"}`: This selects all logs from the DataNode component, which handles block transfer operations. 2. `|~ "Received block .* src: .* dest: .* of size .*"`: This line filter matches log lines containing the block received event, which indicates a transfer between DataNodes. 3. `| regexp "Received block .* src: .* dest: .* of size (?P<size>[0-9]+)"`: This extracts the block size using a regular expression and assigns it to the label "size". 4. `| unwrap size`: This unwraps the "size" label, converting it from a string to a numeric value that can be used in calculations. 5. `[12h]`: This specifies the 12-hour time range as requested in the question. 6. `avg_over_time(...)`: This calculates the average of the unwrapped size values over the specified time range for each unique combination of labels. 7. `avg(...)`: This calculates the overall average across all transfers. This query efficiently calculates the average size of blocks transferred between DataNodes in the last 12 hours. The result will be a single value representing the average block size in bytes for all transfers that occurred during this period.
11.1M
Replication and Data Transfer
hdfs
80
How many block replication requests were issued in the past hour?
sum( count_over_time( {component="dfs.FSNamesystem"} |~ "BLOCK\\* ask .* to replicate .* to datanode\\(s\\) .*" [1h] ) )
1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles replication requests. 2. `|~ "BLOCK\\* ask .* to replicate .* to datanode\\(s\\) .*"`: This line filter matches log lines containing the block replication request event. The backslashes are used to escape special characters in the regex. 3. `[1h]`: This specifies the 1-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of block replication requests. This query efficiently counts the number of block replication requests issued in the past hour. The result will be a single value representing the total count of replication requests.
136 <graph>
Replication and Data Transfer
hdfs
81
Which DataNodes had the most failed block transfers in the last 24 hours?
topk(3, sum by (source_datanode) ( count_over_time( {application="hdfs"} |~ "Failed to transfer .* to .* got java.io.IOException: Connection reset by peer" | regexp "(?P<source_datanode>[\\d\\.]+:\\d+):Failed to transfer .* to .* got java.io.IOException: Connection reset by peer" [24h] ) ) )
1. `{application="hdfs"}`: This selects all logs from the DataNode component, which handles block transfer operations. 2. `|~ "Failed to transfer .* to .* got java.io.IOException: Connection reset by peer"`: This line filter matches log lines containing the failed block transfer event. 3. `| regexp "(?P<source_datanode>[\\d\\.]+:\\d+):Failed to transfer .* to .* got java.io.IOException: Connection reset by peer"`: This extracts the source DataNode IP and port using a regular expression and assigns it to the label "source_datanode". 4. `[24h]`: This specifies the 24-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of failed transfers for each unique source_datanode over the specified time range. 6. `sum by (source_datanode) (...)`: This sums the counts for each unique source_datanode, giving us the total number of failed transfers for each DataNode. 7. `topk(3, ...)`: This selects the top DataNode with the highest count of failed transfers. This query efficiently identifies the DataNode that had the most failed block transfers in the last 24 hours. The result will show the DataNode IP:port and the count of failed transfers, with only the top (most failing) DataNode being returned.
{source_datanode="10.251.194.147:50010"} {source_datanode="10.251.39.242:50010"} {source_datanode="10.251.74.227:50010"}
Replication and Data Transfer
hdfs
82
What are the most common exceptions encountered during block writes in the past 6 hours?
topk(10, sum by (exception_type) ( count_over_time( {application="hdfs", component=~"dfs.DataNode.*"} |~ "writeBlock .* received exception" | regexp "writeBlock .* received exception (?P<exception_type>[^:]+)" [6h] ) ) )
1. `{application="hdfs", component=~"dfs.DataNode.*"}`: This selects all logs from HDFS application and DataNode-related components, which handle block write operations. 2. `|~ "writeBlock .* received exception"`: This line filter matches log lines containing block write exceptions. 3. `| regexp "writeBlock .* received exception (?P<exception_type>[^:]+)"`: This extracts the exception type using a regular expression and assigns it to the label "exception_type". It captures everything after "received exception" up to the first colon, which typically gives us the exception class name. 4. `[6h]`: This specifies the 6-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of each unique exception_type over the specified time range. 6. `sum by (exception_type) (...)`: This sums the counts for each unique exception_type, giving us the total number of occurrences for each exception type. 7. `topk(10, ...)`: This selects the top 10 exception types with the highest counts. We use 10 here to get a comprehensive view of the "most common" exceptions, but this number can be adjusted if needed. This query efficiently identifies and ranks the most common exceptions encountered during block writes in the past 6 hours. The result will show the exception types and their occurrence counts, sorted from most common to least common.
{exception_type="java.io.EOFException"} {exception_type="java.io.IOException"} {exception_type="java.io.InterruptedIOException"}
Error Analysis
hdfs
83
How many unique blocks experienced 'Connection reset by peer' errors in the last 24 hours?
count( sum by (block_id) ( count_over_time( {application="hdfs"} |~ "Connection reset by peer" |~ "blk_[0-9-]+" | regexp "(?P<block_id>blk_[0-9-]+)" [24h] ) ) )
1. `{application="hdfs"}`: This selects all logs from the HDFS application. 2. `|~ "Connection reset by peer"`: This line filter matches log lines containing the specific error message. 3. `|~ "blk_[0-9-]+"`: This additional line filter ensures we're only matching lines that contain a block ID. 4. `| regexp "(?P<block_id>blk_[0-9-]+)"`: This extracts the block ID using a regular expression and assigns it to the label "block_id". 5. `[24h]`: This specifies the 24-hour time range as requested in the question. 6. `count_over_time(...)`: This counts the occurrences of each unique block_id over the specified time range. 7. `sum by (block_id) (...)`: This sums the counts for each unique block_id, effectively collapsing multiple occurrences of the same block_id into a single entry. 8. `count(...)`: This counts the number of unique block_ids after summing, giving us the total number of unique blocks that experienced the error. This query efficiently counts the number of unique blocks that experienced 'Connection reset by peer' errors in the last 24 hours, regardless of which specific component logged the error or how many times the error occurred for each block. Add to Conversation
8
Error Analysis
hdfs
84
Which DataNode had the highest number of timeout exceptions while waiting for channel to be ready in the last 6 hours?
count_over_time( {component=~"dfs.DataNode.*"} |~ ".*SocketTimeoutException:.*millis timeout while waiting for channel to be ready.*" | regexp ".*SocketChannel\\[connected local=(?P<datanode>[\\d\\.]+):\\d+.*" [6h] )
1 {component=~"dfs.DataNode.*"} Fetch all log lines matching label filters. 2 <expr> |~ `.*SocketTimeoutException:.*millis timeout while waiting for channel to be ready.*` Return log lines that match a RE2 regex pattern. .*SocketTimeoutException:.*millis timeout while waiting for channel to be ready.*. 3 <expr> | regexp `.*SocketChannel\[connected local=(?P<datanode>[\d\.]+):\d+.*` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [6h]) The count of all values in the specified interval. The range vector is set to 6h.
{application="hdfs", component="dfs.DataNode$PacketResponder", log_level="INFO", service_name="hdfs"}
Performance Monitoring
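The stored query returns raw per-stream counts; to actually rank DataNodes as the question asks, the extracted label can be aggregated and passed to topk. A possible extension (not part of the stored result above):
topk(1, sum by (datanode) (count_over_time({component=~"dfs.DataNode.*"} |~ "SocketTimeoutException:.*millis timeout while waiting for channel to be ready" | regexp "SocketChannel\\[connected local=(?P<datanode>[\\d\\.]+):\\d+" | __error__="" [6h])))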
hdfs
85
How many times did the NameSystem allocate new blocks in the past minute?
sum( count_over_time( {application="hdfs"} |~ "BLOCK\\* NameSystem\\.allocateBlock:" [1m] ) )
1. `{application="hdfs"}`: This selects all logs from the FSNamesystem component, which handles block allocation. 2. `|~ "BLOCK\\* NameSystem\\.allocateBlock:"`: This line filter matches log lines containing the block allocation event. We use `\\` to escape the asterisk in the log message. 3. `[1h]`: This specifies the 1-minute time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of block allocations across all instances of FSNamesystem. This query efficiently counts the number of times the NameSystem allocated new blocks in the past hour. The result will be a single value representing the total count of block allocations.
1.88k <graph>
NameNode Operations
hdfs
86
What is the total size of blocks added to the blockMap in the last 24 hours?
sum( sum_over_time( {component="dfs.FSNamesystem"} |~ "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:.*is added to.*size.*" | regexp "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:.*is added to.*size (?P<size>[0-9]+)" | unwrap size [24h] ) )
1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles blockMap operations. 2. `|~ "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:.*is added to.*size.*"`: This line filter matches log lines containing the blockMap update event. 3. `| regexp "BLOCK\\* NameSystem\\.addStoredBlock: blockMap updated:.*is added to.*size (?P<size>[0-9]+)"`: This extracts the block size using a regular expression and assigns it to the label "size". 4. `| unwrap size`: This unwraps the "size" label, converting it from a string to a numeric value that can be used in calculations. 5. `[24h]`: This specifies the 24-hour time range as requested in the question. 6. `sum_over_time(...)`: This sums up all the unwrapped size values over the specified time range. 7. `sum(...)`: This calculates the total sum across all instances, giving us the total size of blocks added to the blockMap. This query efficiently calculates the total size of blocks added to the blockMap in the last 24 hours. The result will be a single value representing the total block size in bytes.
16.1 Tri <graph>
NameNode Operations
hdfs
87
How many blocks were removed from neededReplications as they didn't belong to any file in the past 12 hours?
count_over_time( {component="dfs.FSNamesystem"} |= "BLOCK* Removing block" |= "from neededReplications as it does not belong to any file" [12h] )
1. `{component="dfs.FSNamesystem"}`: This selects all logs from the FSNamesystem component, which handles these operations. 2. `|= "BLOCK* Removing block"`: This line filter matches log lines containing the beginning of our target message. 3. `|= "from neededReplications as it does not belong to any file"`: This additional line filter ensures we're matching the exact event we're interested in. 4. `[12h]`: This specifies the 12-hour time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. This query efficiently counts the number of blocks removed from neededReplications because they didn't belong to any file in the past 12 hours. The result will be a single value representing the count of such events.
0
NameNode Operations
hdfs
88
How many blocks took longer than 2 minutes to be received by a DataNode due to SocketTimeoutException in the past 24 hours?
sum( count_over_time( {component="dfs.DataNode$BlockReceiver"} |~ "Exception in receiveBlock for block .* java.net.SocketTimeoutException: .* millis timeout" | regexp "Exception in receiveBlock for block (?P<block_id>blk_[^ ]+) java.net.SocketTimeoutException: (?P<timeout>[0-9]+) millis timeout" | timeout > 120000 [24h] ) )
1. `{component="dfs.DataNode$BlockReceiver"}`: This selects all logs from the DataNode component, which handles block receiving operations. 2. `|~ "Exception in receiveBlock for block .* java.net.SocketTimeoutException: .* millis timeout"`: This line filter matches log lines containing the SocketTimeoutException event for receiving blocks. 3. `| regexp "Exception in receiveBlock for block (?P<block_id>blk_[^ ]+) java.net.SocketTimeoutException: (?P<timeout>[0-9]+) millis timeout"`: This extracts the block ID and timeout duration using a regular expression and assigns them to the labels "block_id" and "timeout" respectively. 4. `| timeout > 120000`: This filters for events where the timeout is greater than 120000 milliseconds (2 minutes). 5. `[24h]`: This specifies the 24-hour time range as requested in the question. 6. `count_over_time(...)`: This counts the occurrences of events that match our criteria over the specified time range. 7. `sum(...)`: This sums up all the counts, giving us the total number of blocks that took longer than 2 minutes to be received due to SocketTimeoutException. This query efficiently counts the number of blocks that took longer than 2 minutes to be received by a DataNode due to SocketTimeoutException in the past 24 hours. The result will be a single value representing the count of such events.
0
Performance Issues
hdfs
89
How many times did the PendingReplicationMonitor time out for blocks in the past 12 hours?
sum( count_over_time( {application="hdfs"} |~ "PendingReplicationMonitor timed out block .*" [12h] ) )
Explanation of the query: 1. `{application="hdfs"}`: This selects all logs from the HDFS application, as we don't have a specific component for PendingReplicationMonitor. 2. `|~ "PendingReplicationMonitor timed out block .*"`: This line filter matches log lines containing the PendingReplicationMonitor timeout event. 3. `[12h]`: This specifies the 12-hour time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of the matched log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of times the PendingReplicationMonitor timed out for blocks in the past 12 hours. This query efficiently counts the number of PendingReplicationMonitor timeout events across all HDFS components in the last 12 hours. The result will be a single value representing the total count of these timeout events.
2
Performance Issues
hdfs
90
What is the average time taken for a block to be transmitted between DataNodes in the last hour?
( sum(rate({application="hdfs"} |~ "Transmitted block" [1h])) / sum(rate({application="hdfs"} |~ "Starting thread to transfer block" [1h])) ) * 3600
Explanation of the query: 1. `{application="hdfs"}`: This selects all logs from the HDFS application. 2. `|~ "Transmitted block"` and `|~ "Starting thread to transfer block"`: These line filters match log lines marking the end and start of block transfer events, respectively. 3. `[1h]`: This specifies the 1-hour time range as requested in the question. 4. `rate(... [1h])`: This calculates the per-second rate of occurrences for each event over the last hour. 5. `sum(...)`: This sums the rates across all DataNodes. 6. The division `(...) / (...)` gives the ratio of completed transmissions to started transmissions over the window. 7. `* 3600`: This scales that ratio by the number of seconds in the one-hour window, turning it into a rough estimate of the average transfer time in seconds. This query only approximates the average time taken for a block to be transmitted between DataNodes in the last hour, because LogQL cannot pair the start and end events of an individual transfer. The approximation assumes that the rate of starting and completing transfers is relatively constant over the time period; it is less accurate than measuring each transfer directly, but it is a reasonable proxy given the limitations of LogQL. The result will be a single value representing the estimated average transmission time in seconds.
38k <graph>
Replication and Data Transfer
openstack
91
How many times did the image cache manager pass run in the last 7 days?
sum(count_over_time({application="openstack"} |~ "ComputeManager\\._run_image_cache_manager_pass" [7d]))
1. `{application="openstack"}`: This selects all logs from the OpenStack application. 2. `|~ "ComputeManager\\._run_image_cache_manager_pass"`: This line filter matches log lines containing "ComputeManager._run_image_cache_manager_pass", which should capture both successful runs and errors related to the image cache manager pass. 3. `[7d]`: This specifies the 7-day time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of matching log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of times the image cache manager pass ran (or attempted to run) in the last 7 days. This query efficiently counts the number of times the image cache manager pass ran in the last 7 days, including both successful runs and potential errors. The result will be a single number representing the total count.
8 <graph>
System Health and Maintenance
openstack
92
List all instances of 'Bad response code while validating token' errors.
{application="openstack"} |= "Bad response code while validating token" | line_format "{{.timestamp}} {{.labels.component}} {{.content}}"
1. `{application="openstack"}`: This selects all logs from the OpenStack application, regardless of log level. 2. `|= "Bad response code while validating token"`: This line filter matches log lines containing the exact error message we're looking for. 3. `| line_format "{{.timestamp}} {{.labels.component}} {{.content}}"`: This formats the output to show the timestamp, the component that generated the error, and the full error message. This query will return all instances of the "Bad response code while validating token" error, showing the timestamp, the component that generated the error, and the full error message for each occurrence. This allows for a comprehensive view of when and where these errors occurred within the OpenStack system, regardless of the log level.
2 rows in a table
Security and Authentication
openstack
93
What are the instances where authentication failure has occurred in the last 24 hours?
count_over_time( {application="openstack"} |~ "authentication failure|Bad response code while validating token|Unable to validate token" | regexp "(?P<client_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)" [24h] )
1 {application="openstack"} Fetch all log lines matching label filters. 2 <expr> |~ `authentication failure|Bad response code while validating token|Unable to validate token` Return log lines that match a RE2 regex pattern. authentication failure|Bad response code while validating token|Unable to validate token. 3 <expr> | regexp `(?P<client_ip>\d+\.\d+\.\d+\.\d+)` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 4 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h.
{application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-16_13:53:08", log_file_type="nova-api", log_level="ERROR", request_id="1cc7d50c-25a2-46b0-a668-9c00f589160c", service_name="openstack", tenant_id="113d3a99c3da401fbd62cc2caa5b96d2", user_id="54fadb412c4e40cdbaed9335e4c35a9e"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-16_13:53:08", log_file_type="nova-api", log_level="ERROR", service_name="openstack"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-16_13:53:08", log_file_type="nova-api", log_level="null", request_id="1cc7d50c-25a2-46b0-a668-9c00f589160c", service_name="openstack", tenant_id="113d3a99c3da401fbd62cc2caa5b96d2", user_id="54fadb412c4e40cdbaed9335e4c35a9e"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-16_13:53:08", log_file_type="nova-api", log_level="null", service_name="openstack"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-17_12:02:19", log_file_type="nova-api", log_level="ERROR", request_id="abcdb287-429c-40a5-8e79-d490d4fb7410", service_name="openstack", tenant_id="113d3a99c3da401fbd62cc2caa5b96d2", user_id="54fadb412c4e40cdbaed9335e4c35a9e"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-17_12:02:19", log_file_type="nova-api", log_level="ERROR", service_name="openstack"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-17_12:02:19", log_file_type="nova-api", log_level="null", request_id="abcdb287-429c-40a5-8e79-d490d4fb7410", service_name="openstack", tenant_id="113d3a99c3da401fbd62cc2caa5b96d2", user_id="54fadb412c4e40cdbaed9335e4c35a9e"} {application="openstack", component="keystonemiddleware.auth_token", log_file_name="nova-api.log.1.2017-05-17_12:02:19", log_file_type="nova-api", log_level="null", service_name="openstack"} <graph>
Security and Authentication
openstack
94
List the compute hosts that had mismatched instance synchronizations in the last 24 hours
sum by (application) ( count_over_time( {application="openstack"} |= "The instance sync for host" |= "did not match" | regexp "The instance sync for host '(?P<host>.*?)' did not match" | label_format host="{{ .host }}" | line_format "{{ .host }}" [24h] ) )
1 {application="openstack"} Fetch all log lines matching label filters. 2 <expr> |= `The instance sync for host` Return log lines that contain string The instance sync for host. 3 <expr> |= `did not match` Return log lines that contain string did not match. 4 <expr> | regexp `The instance sync for host '(?P<host>.*?)' did not match` The regexp parser takes a single parameter | regexp "<re>" which is the regular expression using the Golang RE2 syntax. The regular expression must contain a least one named sub-match (e.g (?P<name>re)), each sub-match will extract a different label. The expression matches the structure of a log line. The extracted labels can be used in label filter expressions and used as values for a range aggregation via the unwrap operation. 5 <expr> | label_format host="{{ .host }}" This will change name of label to desired new label. In the example below, label "error_level" will be renamed to "level". Example: error_level=`level` Read the docs for more. 6 <expr> | line_format `{{ .host }}` This will replace log line using a specified template. The template can refer to stream labels and extracted labels. Example: {{.status_code}} - {{.message}} Read the docs for more. 7 count_over_time(<expr> [24h]) The count of all values in the specified interval. The range vector is set to 24h. 8 sum by(application) (<expr>) Calculates sum over dimensions while preserving label application.
364 <graph>
Audit and Synchronization
openstack
95
How many instance usage audit runs were performed in the last 7 days?
sum(count_over_time({application="openstack"} |= "Running instance usage audit for host" [7d]))
1. `{application="openstack"}`: This selects all logs from the OpenStack application. 2. `|= "Running instance usage audit for host"`: This line filter matches log lines containing the specific phrase that indicates an instance usage audit run. 3. `[7d]`: This specifies the 7-day time range as requested in the question. 4. `count_over_time(...)`: This counts the occurrences of matching log lines over the specified time range. 5. `sum(...)`: This sums up all the counts, giving us the total number of instance usage audit runs in the last 7 days. This query efficiently counts the number of instance usage audit runs performed in the last 7 days across all OpenStack components. The result will be a single number representing the total count of audit runs.
30 <graph>
Audit and Synchronization
openstack
96
What percentage of instance synchronizations between the database and hypervisor failed in the past month?
100 * sum(count_over_time({application="openstack"} |= "The instance sync for host" |= "did not match" [30d])) / (sum(count_over_time({application="openstack"} |= "The instance sync for host" |= "did not match" [30d])) + sum(count_over_time({application="openstack"} |= "Successfully synced instances from host" [30d])))
1. `{application="openstack"}`: This selects all logs from the OpenStack application 2. `|= "The instance sync for host" |= "did not match"`: This line filter matches log lines indicating failed synchronizations. 3. `|= "Successfully synced instances from host"`: This line filter matches log lines indicating successful synchronizations. 4. `[30d]`: This specifies the one-month time range as requested in the question. 5. `count_over_time(...)`: This counts the occurrences of matching log lines over the specified time range. 6. `sum(...)`: This sums up all the counts for each type of synchronization (failed and successful). 7. The overall structure `100 * failed / (failed + successful)` calculates the percentage of failed synchronizations.
42 <graph>
Audit and Synchronization
openstack
97
What is the total size of all active base files?
sum by (component) ( count_over_time({application="openstack", component="nova.virt.libvirt.imagecache"} |~ "Active base files: (?P<file_path>/.*)" [1h]) )
1 {application="openstack", component="nova.virt.libvirt.imagecache"} Fetch all log lines matching label filters. 2 <expr> |~ `Active base files: (?P<file_path>/.*)` Return log lines that match a RE2 regex pattern. Active base files: (?P<file_path>/.*). 3 count_over_time(<expr> [1h]) The count of all values in the specified interval. The range vector is set to 1h. 4 sum by(component) (<expr>) Calculates sum over dimensions while preserving label component.
12.0k <graph>
Image and File Management
openstack
98
What is the average response time for GET requests to /v2/{tenant_id}/servers/detail in the last hour?
avg( avg_over_time( {application="openstack", log_file_type="nova-api"} |= "GET" |= "/v2/" |= "/servers/detail" | regexp `GET /v2/(?P<tenant_id>[^/]+)/servers/detail.*time: (?P<response_time>\d+\.\d+)` | unwrap response_time [1h] ) )
1. `{application="openstack", log_file_type="nova-api"}`: This selects all logs from the OpenStack application, specifically from the nova-api log file type. 2. `|= "GET" |= "/v2/" |= "/servers/detail"`: These line filters match log lines containing GET requests to /v2/{tenant_id}/servers/detail. 3. `| regexp `GET /v2/(?P<tenant_id>[^/]+)/servers/detail.*time: (?P<response_time>\d+\.\d+)```: This regular expression extracts the tenant_id and response_time from the log line. 4. `| unwrap response_time`: This unwraps the extracted response_time value for use in metric calculations. 5. `[1h]`: This specifies the 1-hour time range as requested in the question. 6. `avg_over_time(...)`: This calculates the average response time over the specified time range for each unique combination of labels. 7. `avg(...)`: This calculates the overall average across all label combinations. This query efficiently extracts the response times for the specific GET requests, calculates the average over the last hour, and provides a single value representing the average response time for GET requests to /v2/{tenant_id}/servers/detail in the last hour.
0.264 <graph>
API Performance and Requests
openstack
99
How many POST requests to /v2/{tenant_id}/os-server-external-events resulted in a 500 status code in the past 24 hours?
sum(count_over_time({application="openstack"} |= "POST" |= "/v2/" |= "/os-server-external-events" |= "status: 500" [24h]))
1. `{application="openstack", log_file_type="nova-api"}`: This selects all logs from the OpenStack application, specifically from the nova-api log file type, which handles API requests. 2. `|= "POST"`: This line filter matches log lines containing POST requests. 3. `|= "/v2/"`: This line filter matches log lines containing "/v2/" in the URL path. 4. `|= "/os-server-external-events"`: This line filter matches log lines containing the specific endpoint we're interested in. 5. `|= "status: 500"`: This line filter matches log lines where the status code is 500. 6. `[24h]`: This specifies the 24-hour time range as requested in the question. 7. `count_over_time(...)`: This counts the occurrences of matching log lines over the specified time range. 8. `sum(...)`: This sums up all the counts, giving us the total number of POST requests to the specified endpoint that resulted in a 500 status code in the past 24 hours. This query efficiently counts the number of POST requests to /v2/{tenant_id}/os-server-external-events that resulted in a 500 status code in the past 24 hours. The result will be a single number representing the total count of such errors.
0
API Performance and Requests
openstack
100
What is the 95th percentile response time for DELETE requests to /v2/{tenant_id}/servers/{server_id} in the last 7 days?
quantile_over_time(0.95, {application="openstack", log_file_type="nova-api"} |= "DELETE" |= "/v2/" |= "/servers/" | regexp `"DELETE /v2/(?P<tenant_id>[^/]+)/servers/(?P<server_id>[^ ]+) HTTP/1\.1" status: (?P<status>\d+) len: \d+ time: (?P<response_time>[\d\.]+)` | unwrap response_time [7d] )
1. `{application="openstack", log_file_type="nova-api"}`: This selects all logs from the OpenStack application, specifically from the nova-api log file type. 2. `|= "DELETE" |= "/v2/" |= "/servers/"`: These line filters narrow down the logs to DELETE requests for the specific endpoint we're interested in. 3. `| regexp ...`: This regular expression extracts the tenant_id, server_id, status, and response_time from the log line. The response_time is captured for our calculation. 4. `| unwrap response_time`: This unwraps the extracted response_time value for use in metric calculations. 5. `[7d]`: This specifies the 7-day time range as requested in the question. 6. `quantile_over_time(0.95, ...)`: This calculates the 95th percentile of the response times over the specified time range. This query efficiently calculates the 95th percentile response time for DELETE requests to the /v2/{tenant_id}/servers/{server_id} endpoint over the last 7 days. The result will be a single value representing the 95th percentile response time in seconds.
0.23
API Performance and Requests
openstack
101
What is the average time taken to build instances in the past week?
avg( avg_over_time( {application="openstack", component="nova.compute.manager"} |~ "Took .* seconds to build instance" | regexp "Took (?P<build_time>[0-9.]+) seconds to build instance" | unwrap build_time [1w] ) )
1. `{application="openstack", component="nova.compute.manager"}`: This selects all logs from the OpenStack application, specifically from the nova.compute.manager component, which is responsible for instance management. 2. `|~ "Took .* seconds to build instance"`: This line filter matches log lines containing the message about instance build time. 3. `| regexp "Took (?P<build_time>[0-9.]+) seconds to build instance"`: This extracts the build time value into a label named "build_time". 4. `| unwrap build_time`: This unwraps the "build_time" label, preparing it for numeric operations. 5. `[1w]`: This specifies the one-week time range as requested in the question. 6. `avg_over_time(...)`: This calculates the average of the build times over the specified time range for each unique combination of labels. 7. `avg(...)`: This calculates the overall average across all instances, giving us a single value representing the average time taken to build instances in the past week.
21.2 <graph>
Instance Lifecycle Management