Unlike most other DBMSs, self-managed MongoDB deployments write their logs in JSON format, which can be quite intimidating at first. But instead of hoping you never have to look at the logs, you can use a few tools and tips to navigate them more easily and stop wasting time hunting for the information you need.

Log format overview

Inside /var/log/mongodb/mongod.log (or a custom path if you set one), a typical log entry looks like this (shortened for readability):

{"t":{"$date":"2025-03-06T14:54:28.298+01:00"},"s":"I",  "c":"CONTROL",  "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":false,"Statistics":{"Set up periodic runner":"0 ms","Set up online certificate status protocol manager":"0 ms",[...],"Start transport layer":"0 ms","_initAndListen total elapsed time":"626 ms"}}}}

At first glance, it’s pretty difficult to extract the essential information, but let’s see what the document looks like once it’s formatted (we’ll see how to do that later).

{
  "t": {
    "$date": "2025-03-06T14:54:28.298+01:00"
  },
  "s": "I",
  "c": "CONTROL",
  "id": 8423403,
  "ctx": "initandlisten",
  "msg": "mongod startup complete",
  "attr": {
    "Summary of time elapsed": {
      "Startup from clean shutdown?": false,
      "Statistics": {
        "Set up periodic runner": "0 ms",
        "Set up online certificate status protocol manager": "0 ms",
        [...] # lines hidden
        "Start transport layer": "0 ms",
        "_initAndListen total elapsed time": "626 ms"
      }
    }
  }
}

Here is a description of the main fields of the log document:

  • t : Timestamp of the log entry.
  • s : Severity code associated with the log entry (F for fatal, E for error, W for warning, I for informational and D1 to D5 for debug).
  • c : Category of the log entry. Most common categories are CONTROL, COMMAND, ELECTION, REPL (for replication) or NETWORK. An extensive list is available in the official MongoDB documentation.
  • id : Unique log entry ID.
  • ctx : Thread that generated the log.
  • msg : Usually a short message describing the log.
  • attr : Optional additional attributes.

This will help us when looking at the logs, first with mongosh.

Querying logs through mongosh

You can query logs from within the MongoDB shell, mongosh. To do so, use the getLog admin command:

db.adminCommand({ getLog: "global" }); // display all log entries

Another useful option is startupWarnings, which only displays warning logs since the last startup.

db.adminCommand({ getLog: "startupWarnings" }) // display startup warnings
{
  totalLinesWritten: 2,
  log: [
    '{"t":{"$date":"2025-03-07T08:32:41.005+01:00"},"s":"W",  "c":"NETWORK",  "id":5123300, "ctx":"initandlisten","msg":"vm.max_map_count is too low","attr":{"currentValue":65530,"recommendedMinimum":102400,"maxConns":51200},"tags":["startupWarnings"]}\n',
    '{"t":{"$date":"2025-03-07T08:32:41.005+01:00"},"s":"W",  "c":"CONTROL",  "id":8386700, "ctx":"initandlisten","msg":"We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.","attr":{"sysfsFile":"/proc/sys/vm/swappiness","currentValue":60},"tags":["startupWarnings"]}\n'
  ],
  ok: 1
}

Even though this can sometimes be useful, it requires authenticated access to the database, and it only works while the mongod process is running. You won’t be able to use this method when the database crashes, for instance. Moreover, the logs are difficult to read.

Most of the time, you will be better served by the jq utility.

jq is a powerful utility for navigating JSON documents, and even though it is not an official MongoDB product, it is well worth installing alongside your MongoDB deployments.

Prettify MongoDB logs

The first benefit of the jq command is to display MongoDB logs in a readable format:

> head -1 mongod.log | jq
{
  "t": {
    "$date": "2025-03-05T14:44:28.531+01:00"
  },
  "s": "I",
  "c": "CONTROL",
  "id": 23285,
  "ctx": "main",
  "msg": "Automatically disabling TLS 1.0"
}

Of course, a single log line will now span multiple lines in the output. But thanks to the log structure explained above, we can write our first jq queries to filter the results and only display what’s important.

I definitely recommend building aliases around the following commands so you can quickly access the information you find most valuable in the logs.
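For example, you could add aliases like these to your ~/.bashrc (the alias names and the log path are just suggestions; adjust them to your setup):

```shell
# Suggested aliases for the jq queries covered in the following sections;
# the names are arbitrary and the log path should match your deployment.
alias mlog-errors="jq -c 'select(.s == \"E\")' /var/log/mongodb/mongod.log"
alias mlog-warnings="jq -c 'select(.s == \"E\" or .s == \"W\")' /var/log/mongodb/mongod.log"
alias mlog-severities="jq '.s' /var/log/mongodb/mongod.log | sort | uniq -c"
```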

Display error messages

By using the s field (severity), we can filter the logs to only display error messages. This is especially useful when failing to start a MongoDB instance.

jq 'select(.s == "E")' mongod.log

You can also include warnings by slightly modifying the command.

jq 'select(.s == "E" or .s == "W")' mongod.log
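The severity list also includes F for fatal entries. jq’s IN function (available since jq 1.5) keeps this kind of filter readable as the list of severities grows:

```shell
# Match any of the listed severities ("F" is fatal); requires jq 1.5+
jq 'select(.s | IN("E", "W", "F"))' mongod.log
```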

Filtering displayed fields

When selecting fields, pipe the jq filter into a JSON object constructor like this one:

{time: .t["$date"], message: .msg, error: .attr.error}

From now on, I will use the -c option to display the JSON in compact format. Even then, logs remain readable when you select or exclude specific fields. Here, I want to select the .t["$date"], .msg and .attr.error fields. To improve the display, I will rename them:

> jq -c 'select(.s == "E") | {time: .t["$date"], message: .msg, error: .attr.error}' mongod.log
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-06T10:17:07.383+01:00","message":"DBException in initAndListen, terminating","error":"Location28596: Unable to determine status of lock file in the data directory /var/lib/mongodb: boost::filesystem::status: Permission denied [system:13]: \"/var/lib/mongodb/mongod.lock\""}

Similarly, you can exclude a field with the del function. For instance, this will remove the message sub-field located inside the attr field.

jq 'del(.attr.message)' mongod.log
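The del function also accepts several paths at once, so you can strip multiple noisy fields in a single pass (a small sketch; paths that don’t exist are simply ignored):

```shell
# Remove several fields in one pass
jq 'del(.attr.message, .ctx, .id)' mongod.log
```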

Errors and warnings grouped by message

To check for recurrent warnings or errors, you can pipe the jq output through sort and uniq to group the results by msg content.

jq 'select(.s == "E" or .s=="W") | .msg' mongod.log | sort | uniq -c | sort -nr | head

Occurrences of each log severity

If you want a quick count of each severity level, you can get one from the s field.

> jq '.s' mongod.log | sort | uniq -c
     10 "E"
      3 "F"
   1727 "I"
     88 "W"

View logs for specific log categories

As mentioned earlier, the log category is often worth filtering on (only the replication logs, for instance).

jq -c 'select(.c == "REPL")' mongod.log
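Replica set troubleshooting usually involves election events too; combining categories and projecting a couple of fields keeps the output compact:

```shell
# Replication and election events together, reduced to timestamp and message
jq -c 'select(.c == "REPL" or .c == "ELECTION") | {time: .t["$date"], msg: .msg}' mongod.log
```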

Filter logs by date

Whether you use log rotation or not, you might want to quickly access the last few minutes, hours or days of logs. With the date utility, you can retrieve the most recent entries:

jq -c --arg since "$(date -d '10 minutes ago' +%Y-%m-%dT%H:%M:%S)" 'select(.t["$date"] >= $since)' mongod.log
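Note that the -d option belongs to GNU date; on macOS/BSD, the equivalent relative offset uses -v (same jq filter, only the date invocation changes):

```shell
# macOS/BSD variant: -v-10M shifts the current time back 10 minutes
jq -c --arg since "$(date -v-10M +%Y-%m-%dT%H:%M:%S)" 'select(.t["$date"] >= $since)' mongod.log
```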

Still filtering on the .t["$date"] field, you can also extract a portion of the logs:

jq -c 'select(.t["$date"] >= "2025-03-06T14:30:00" and .t["$date"] <= "2025-03-06T14:40:00")' mongod.log

Look for a specific pattern in the log

Of course, you can always use grep followed by jq to find a pattern in the logs: grep -i "pattern" mongod.log | jq

But if you want to look for a specific pattern inside a specific field, you can do so with the test function:

> jq -c 'select(.msg | test("failed to authenticate"; "i"))' mongod.log # "i" option for case insensitivity
{"t":{"$date":"2025-03-07T08:37:52.950+01:00"},"s":"I","c":"ACCESS","id":5286307,"ctx":"conn18","msg":"Failed to authenticate","attr":{"client":"xxx.xxx.xxx.xxx(ip):xxxxx(port)","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch","result":18,"metrics":{"conversation_duration":{"micros":5091,"summary":{"0":{"step":1,"step_total":2,"duration_micros":62},"1":{"step":2,"step_total":2,"duration_micros":48}}}},"extraInfo":{}}}

Check for logs regarding connections to the MongoDB database

To filter connection logs, select entries that have an attr.remote field:

jq -c 'select(.attr.remote)' mongod.log
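Going a step further, you can count accepted connections per client IP — a sketch assuming the standard "Connection accepted" message and the usual ip:port format of attr.remote:

```shell
# Count accepted connections per client IP (sed strips the port suffix)
jq -r 'select(.msg == "Connection accepted") | .attr.remote' mongod.log \
  | sed 's/:[0-9]*$//' | sort | uniq -c | sort -nr
```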

Analyzing slow queries with jq

Inside mongosh, you can activate slow query logging with db.setProfilingLevel(1, <slowms>), where <slowms> is the threshold (in milliseconds) above which queries are logged.

A couple of warnings about slow query logging in MongoDB:

  • Once activated, slow query logging can itself slow down the database, so enable it with care.
  • There is a security risk when combining slow query logging with queryable encryption, since queries will not be encrypted in the mongod.log file.

Slow query logs look like this:

{
  "t": { "$date": "2024-03-06T12:34:56.789Z" },
  "s": "I",
  "c": "COMMAND",
  "id": 123,
  "ctx": "conn20",
  "msg": "Slow query",
  "attr": {
    "ns": "mydb.coll",
    "command": { "find": "coll", "filter": { "status": "active" } },
    "planSummary": "COLLSCAN",
    "keysExamined": 0,
    "docsExamined": 5000,
    "numYields": 0,
    "reslen": 2000,
    "locks": { "Global": { "acquireCount": { "r": 1 } } },
    "durationMillis": 150
  }
}

With this in mind, and with what we have already seen, you can filter the logs on whichever fields you want, such as attr.durationMillis (duration of the query, in milliseconds) or attr.ns, the namespace (database.collection) the query targets.

For instance, if you want to retrieve slow queries above a given threshold (one second, in the example below):

jq 'select(.attr.durationMillis >= 1000)' mongod.log

Or if you want to filter slow queries on a specific database mydb and collection coll:

jq 'select(.msg == "Slow query" and .attr.ns == "mydb.coll")' mongod.log

You can also select only queries that are run on a given database mydb:

jq 'select(.msg == "Slow query" and .attr.command["$db"] == "mydb")' mongod.log
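Building on these filters, here are two more I find handy: ranking the slowest queries, and spotting collection scans that may indicate a missing index (field names as in the sample slow query entry above):

```shell
# Top 10 slowest queries, slurped (-s) so jq can sort the whole file
jq -s -c 'map(select(.msg == "Slow query"))
          | sort_by(-.attr.durationMillis)
          | .[:10][]
          | {ms: .attr.durationMillis, ns: .attr.ns}' mongod.log

# Namespaces with the most COLLSCAN slow queries (candidates for an index)
jq -r 'select(.msg == "Slow query" and .attr.planSummary == "COLLSCAN") | .attr.ns' mongod.log \
  | sort | uniq -c | sort -nr
```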

Conclusion

Although a bit complex at first sight, MongoDB logs are very useful once you know how to approach them. By leveraging the jq utility for advanced filtering, and combining it with monitoring tools, you can analyze logs efficiently and become more effective as a DBA.