Thoughtful Logging in Elixir: A Phoenix Story

The choices you make at foundational points of your architecture can have massive implications down the road for your Elixir applications. By taking an informed approach to logging, you can set yourself up for success.

Written by
Daniel Spofford
April 23, 2018

Well thought-out logging is crucial for a successful system, helping developers track down faults, observe emerging patterns, identify misuse of the system, and indicate what is important (or not important) to the system.

But it's not always obvious what information should be logged, at what level, and when it should be logged. It's all too easy to set a low bar for quality early on in a project, and as more time passes, it becomes increasingly difficult to course correct. Today we'll take a look at logging when creating a Phoenix API and address the common pitfalls you might encounter along the way.

  • Elixir 1.6.4
  • Phoenix 1.3.2

It is 10:13 a.m., your stomach flutters with anticipation, the only noise is the enter key bottoming out as you finish typing mix phx.new machamp --no-ecto:

Fetch and install dependencies? [Yn] y

Enter the directory, start your server with mix phx.server, and navigate your browser to http://localhost:4000. A quick glance at the terminal shows:

[info] GET /
[debug] Processing with MachampWeb.PageController.index/2
  Parameters: %{"arms" => "four"}
  Pipelines: [:browser]
[info] Sent 200 in 164µs

Phoenix logs helpful information about the route hit, the associated controller, action, parameters, and the response time. This provides a clear picture of what sort of requests are being made against your system. If any parameters are deemed sensitive and should not be logged, you can configure it like this:

config :phoenix, :filter_parameters, ["password", "secret"]
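When a filtered parameter is present, Phoenix replaces its value with [FILTERED] in the request log. A hypothetical login request might then log something like this (the controller and parameters are invented for illustration):

```
[debug] Processing with MachampWeb.SessionController.create/2
  Parameters: %{"password" => "[FILTERED]", "username" => "machamp"}
  Pipelines: [:browser]
```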

Phoenix’s default is just ["password"]. So far this is straightforward, but let's see what this looks like in production. Enter the assets directory and run npm install && npm run deploy. Hop back up to the project root and run mix phx.digest to finish the ceremony required to serve static assets without error. Finally, restart your server, this time overriding the mix environment and providing a port with MIX_ENV=prod PORT=4000 mix phx.server and hit it with your browser again:

17:42:51.469 [info] GET /
17:42:51.469 [info] Sent 200 in 167µs

Aweso… wait, where did the log with parameters go? The issue here is that by default when Phoenix logs requests, they are logged at the debug level. That's fine in development, but it falls apart in production. It may not seem like it yet, but this is a fork in the road. It is as much of an opportunity to take a measured approach as it is for you to chop your foot off with an axe.

Don’t Use an Axe

Often developers are tempted to simply reconfigure Logger’s :level option in config/prod.exs like so:

config :logger, level: :debug

However, this will hurt more than help in the long run, as it undermines the semantics of log levels. You typically don't want all of your debug level calls saturating your production logs all the time. It's preferable to log information you wish to see consistently at the info level or higher, and to reserve debug level logs for the nitty-gritty details you only want to see when circumstances demand it.
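As a minimal sketch of that distinction (the messages here are invented for illustration):

```elixir
require Logger

# Routine, operationally interesting events: log at :info or higher so
# they appear consistently in production.
Logger.info("payment processed")

# Nitty-gritty details: log at :debug so they stay quiet unless you
# deliberately lower the level.
Logger.debug("raw gateway response received")
```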

If you take an "everything-always" approach to logging, it won't be long before the lines between what is and isn't important begin to blur. Time will roll forward, and your team will write debug logs throughout your system that may not be intended to be seen in production. Developers will start leaning on the idea that anything goes and put less thought into logging as a whole. The degradation of quality compounds over time.

Why it Matters

Who cares? So what if I log every other line in production at random levels? Isn't it better to have more information than less?

Future developers of your system will care (including you). Logs should always serve a purpose, and log levels are aptly named to convey the purpose they serve. As such, debug logs are useful... when debugging. That doesn't always mean chasing down a bug; it may mean understanding what is going on at a deeper level, or some other exceptional circumstance.

The fact of the matter is debug logs tend to be one or many of the following:

  • Verbose
  • Irrelevant to regular operation of the system
  • Expensive to attempt because they sit within tight code paths
  • Expensive to even compute

Over-logging is especially relevant in Elixir due to Logger's backpressure feature. Typically Logger runs in async mode: when you attempt to log, the work happens asynchronously, allowing you to continue on while Logger does its job. However, if Logger gets overwhelmed, it switches to sync mode, causing all log requests to block until completion, only returning to async mode once it recovers from the load. This has a tendency to reveal other architectural flaws in your system's design, as it will test how resilient you are to timeouts in a very chaotic fashion. This is not fun to encounter in production.

Even if you do not overwhelm Logger itself, it is still possible to chew through a good portion of your available CPU by fully printing large data structures or calculating non-trivial values within logs. Another consequence to consider is increased volume of logs which impacts the cost and speed of storage, transportation, and analytics. Perhaps worst of all is the cost incurred from not knowing what is important or not. Have you ever been introduced to a system, looked at the logs with someone, and had them say something along the lines of "oh just ignore that, that doesn't matter anymore"? Then why are you logging it?

Tune Out the Pain

Logger ships with a variety of options to mitigate some of the more common issues people encounter. To adjust when Logger considers itself overwhelmed with messages, use the :sync_threshold option: the number of queued messages above which Logger will switch to sync mode. It will return to async mode when the number of queued messages drops below :sync_threshold * 0.75. In a similar vein, :discard_threshold is the number of queued messages at which Logger will stop queueing new messages altogether and instead discard them, to protect against memory use spiraling out of control.
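Both thresholds are ordinary Logger configuration. A sketch with placeholder values (tune these for your own workload rather than copying the numbers):

```elixir
# config/prod.exs
config :logger,
  # queued messages above which Logger switches to sync mode
  sync_threshold: 100,
  # queued messages above which new messages are discarded outright
  discard_threshold: 1_000
```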

The :level option controls which levels of logs are printed. You can even change this value at runtime with a call like Logger.configure(level: :debug). This is an incredibly useful tool that lets developers see lower level logs at runtime only when desired. A caveat to this approach is that the calls to Logger themselves are still evaluated at runtime. When logging strings like this:

Logger.debug("the meaning of life: #{inspect compute(42)}")

Even if you have configured level: :info, the string is still evaluated, meaning compute(42) still runs and Logger still goes through the motions of deciding whether or not to actually log. This has the potential to degrade performance in a somewhat deceptive manner. It can be side-stepped altogether by passing a zero-arity function rather than a string:

Logger.debug(fn -> "the meaning of life: #{inspect compute(42)}" end)

That way Logger does only the minimal work needed to check whether the message is at a level that should be logged. It is possible to take it a step further with the :compile_time_purge_level option, which Logger defaults to :debug (purging nothing). This option is true to its name: it strips out logs at compile time, preventing the call itself from even existing at runtime. Keep in mind this purge only applies to the Logger.{debug,info,warn,error}/2 macros and not to Logger.log/3, which they use internally. That means you can sidestep purging entirely, while logging at whatever level you want, by calling that function directly. These two options, along with passing functions instead of strings, make it practical to place logs inside tight recursive functions and well-trodden code paths, or to log expensive-to-compute values, while incurring negligible or even no cost at all in production.
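In production config, the two options might be combined like this (a sketch; pick the levels that suit your system):

```elixir
# config/prod.exs
config :logger,
  # suppress anything below :info at runtime
  level: :info,
  # strip Logger.debug/2 macro calls out of the compiled code entirely
  compile_time_purge_level: :info
```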

Interestingly, Phoenix uses Logger.log/3 to print the requests we were looking at before. This means that setting level: :debug, compile_time_purge_level: :info will still print Logger.log/3 calls that specify a :debug level like our request, but Logger.debug/2 calls will not print because they will be purged at compile time. I am not a fan of this approach as it still abuses log semantics while also making it impossible to temporarily turn on debug logging.
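A sketch to make the distinction concrete. With compile_time_purge_level: :info, the first call below is removed at compile time, while the second always survives compilation and is only subject to the runtime :level check:

```elixir
require Logger

# A purgeable macro: with compile_time_purge_level: :info this call is
# stripped at compile time and costs nothing at runtime.
Logger.debug("request details")

# A plain function call: never purged at compile time, only filtered by
# the runtime :level setting.
Logger.log(:debug, "request details")
```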

There is a better way.

A Machete Will Do

If you are determined to see your request parameters in production logs, rather than modifying how you configure Logger, it may be sufficient to pass an additional option to your controllers so that they log at the info level rather than debug. Modify MachampWeb.controller/0 adding log: :info to the options passed to the use/2 macro:

def controller do
  quote do
    use Phoenix.Controller, namespace: MachampWeb, log: :info
    import Plug.Conn
    import MachampWeb.Router.Helpers
    import MachampWeb.Gettext
  end
end

The standard :debug, :info, :warn, and :error levels can be passed as values to :log, along with false to not log at all, which provides functionality somewhat analogous to :compile_time_purge_level in this particular instance. Now our output when hitting the server looks like this:

19:07:16.083 [info] GET /
19:07:16.083 [info] Processing with MachampWeb.PageController.index/2
  Parameters: %{"arms" => "four"}
  Pipelines: [:browser]
19:07:16.083 [info] Sent 200 in 213µs

Hooray! Keep in mind this impacts all of your controllers.
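Conversely, if you wanted to silence request logging for every controller, the same option accepts false. Only the use line changes (a sketch of the same controller/0 function):

```elixir
def controller do
  quote do
    use Phoenix.Controller, namespace: MachampWeb, log: false
    import Plug.Conn
    import MachampWeb.Router.Helpers
    import MachampWeb.Gettext
  end
end
```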

I Need a Knife, Not a Machete

If you only want info level logs from some subset of your controllers, you simply need a way to indicate that when calling use in your controller. This can be achieved in a couple of ways; let's talk through one. Add an additional clause to the __using__/1 macro in MachampWeb so that it accepts more than just which function to call, taking a value to be supplied to the :log option:

defmacro __using__([:controller = which, log]) do
  apply(__MODULE__, which, [log])
end

Then modify the controller/0 function in that module to respect that param and default it to :debug if it is not supplied:

def controller(log \\ :debug) do
  quote bind_quoted: [log: log] do
    use Phoenix.Controller, namespace: MachampWeb, log: log
    import Plug.Conn
    import MachampWeb.Router.Helpers
    import MachampWeb.Gettext
  end
end

If we hit the server now:

19:42:30.980 [info] GET /
19:42:30.980 [info] Sent 200 in 221µs

No log, but no crashes either. This means that for controllers where you don't want to modify this behavior at all, you can leave them as is; their behavior is defaulted centrally, as above, by falling back to :debug, which is not logged by default in production. Let's update our controller to utilize our changes. Modify MachampWeb.PageController's use/2 call to look like this:

use MachampWeb, [:controller, :info]

One more time, for the people in the back:

19:44:43.260 [info] GET /
19:44:43.260 [info] Processing with MachampWeb.PageController.index/2
  Parameters: %{"arms" => "four"}
  Pipelines: [:browser]
19:44:43.260 [info] Sent 200 in 226µs

Great, we have a per-controller solution to making Phoenix log requests at the info level.

Turns Out I Need a Scalpel

If you don’t want to log every request that goes through a controller it is probably time to take logging into your own hands. Again, there are plenty of ways to achieve this. You could write a logging plug and put it into MachampWeb.Router, you could forgo these more centralized approaches to logging and rely on deeper feature implementations to log appropriately, or you could log at the action level of a controller, which is what we will discuss here. Don’t make any of the previously discussed changes and instead modify MachampWeb.PageController.index/2 to look like this:

def index(conn, params) do
  # Assumes `require Logger` at the top of the module.
  Logger.info([
    "Processing with #{__MODULE__}.index/2\n",
    "  Parameters: #{inspect(params)}\n",
    "  Pipelines: #{inspect(conn.private.phoenix_pipelines)}"
  ])

  render(conn, "index.html")
end
Lo and behold:

19:57:44.161 [info] GET /
19:57:44.161 [info] Processing with Elixir.MachampWeb.PageController.index/2
  Parameters: %{"arms" => "four"}
  Pipelines: [:browser]
19:57:44.162 [info] Sent 200 in 274µs

The world (of logging) is your oyster.


Thoughtful logging that respects convention is more maintainable, plays nice with other Elixir code, and is unsurprising to other developers. Keep in mind that Elixir and Phoenix provide options to fine tune what is logged, when it is logged, and how Logger reacts under load. Strings passed to log calls are evaluated whenever the call to Logger is attempted, but zero-arity functions are only evaluated if the log will actually be printed. Configuring Logger's :level dictates which logs are suppressed, though the call to Logger is still evaluated, whereas configuring :compile_time_purge_level dictates which logs are stripped at compile time.

The real takeaway here is that you will save everyone from some pain by taking the time to do things reasonably right early on in the process. It becomes more expensive to fix foundational mistakes the longer they exist. There are many correct ways to approach logging in your system, which one is right for you depends squarely on your particular use case. In the context of the example we have been using:

  • If there are only a few requests you care about or you desire the highest level of flexibility, the scalpel approach is best.
  • If you only want to see requests that go through a proper subset of your controllers, you should use a knife.
  • If you just want to see every request that goes through any controller, chop it up with a machete.

Just don’t use an axe.