Logging

Overview

We use the Winston logging framework to control what we log and where.

There are three places we currently log to:

  1. Console: only in local environments, and only errors
  2. AWS CloudWatch: all environments except local; by default any log message with level info or higher is logged
  3. Sentry: only errors are sent to Sentry
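As a rough configuration sketch, the sinks above might be wired up with Winston like this (transport options are illustrative; the CloudWatch and Sentry integrations are third-party and indicated only as comments, not the actual setup in this codebase):

```typescript
import winston from 'winston'

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL ?? 'info',
  format: winston.format.json(), // JSON output, as used for CloudWatch filtering
  transports: [
    // Console: errors only, for local development
    new winston.transports.Console({ level: 'error' }),
    // CloudWatch and Sentry transports would be added here in non-local
    // environments, typically via third-party transport packages
  ],
})
```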

The logging component is suitable for general-purpose logging to the above sources. However, we also have a LogWrapper in the context component, which should be used for most cases when we're executing a handler.

When we assign an incoming event (API request or Lambda event) to a context instance (context.setEventSource()), we set up a LogWrapper instance; there is one LogWrapper instance per handler execution. The purpose of the LogWrapper is to simplify collecting common information and attaching it to each log entry.

Audit is the class which represents a log entry (named this way because we use the same entity for business auditing). It contains common information as well as a property bag for any additional data we may want to include in our logs.
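As a sketch, the shape of a log entry looks something like the following (field names here are illustrative, not the actual Audit implementation):

```typescript
// Hypothetical shape of the Audit log entry: common fields plus a
// property bag for anything extra we want in the logs.
interface Audit {
  level: 'debug' | 'info' | 'warn' | 'error'
  message: string
  correlationId: string
  timestamp: string
  user?: string                  // authenticated user, when available
  data: Record<string, unknown>  // property bag for additional data
}

const entry: Audit = {
  level: 'info',
  message: 'record updated',
  correlationId: 'abc-123',
  timestamp: new Date().toISOString(),
  data: { recordId: '1-A' },
}
```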

Logs are sent to AWS CloudWatch in JSON format, which allows us to automatically filter logs by properties from the JSON structure.
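For example, a JSON log line with `level` and `correlationId` fields can be matched in CloudWatch Logs with a JSON filter pattern. A sketch (the field names are illustrative):

```
{ $.level = "error" && $.correlationId = "abc-123" }
```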

CorrelationId

We use a correlationId to correlate logs between systems. The intention is that the client sets a unique ID for each page view and includes it in the x-correlation-id header. When we construct a new incoming event we either take the correlationId from the header or generate a new one if it's not present.

Importantly, the BaseEventPayload class requires the correlationId in its constructor, so all event payloads contain a correlationId and we can stitch together logs from the front end, APIs and events.
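The header-or-generate rule can be sketched as follows (the helper name and the payload shape are illustrative, not the real implementation):

```typescript
import { randomUUID } from 'node:crypto'

// Take the correlationId from the incoming header, or generate a new
// one if the client did not supply it.
function resolveCorrelationId(headers: Record<string, string | undefined>): string {
  return headers['x-correlation-id'] ?? randomUUID()
}

// Hypothetical base payload requiring the correlationId up front, so
// every event carries it from the front end through APIs and Lambdas.
class BaseEventPayload {
  constructor(public readonly correlationId: string) {}
}

const payload = new BaseEventPayload(
  resolveCorrelationId({ 'x-correlation-id': 'page-view-42' })
)
```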

Info Logging

If you want to ensure some information is logged for every execution of a given handler, you can use the following (info level is always written to AWS CloudWatch logs):

context.log.info('my message').send()

The above line of code generates a log entry, populates it with the information we have from context (including the authenticated user, the incoming event and the response, if applicable) and queues the log entry to be sent to CloudWatch.

If you only want the log entry to be sent on a successful execution of a handler, you can subscribe to the afterCommit event on the Session:

context.session.on('afterCommit', () => {
  context.log.info('my message').send()
})

The LogEntry class has a number of chainable helper methods for setting up the LogEntry before it's published, for example:

context.log
  .info('A test log message')
  .withAuthInfo(context.user)
  .withEvent(context.event)
  .withElapsed(`${new Date().valueOf() - +context.props['start']}ms`)
  .appendInfo({
    test: 'data',
    debug: 'more debug output'
  })
  .send()
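The chainable style above works because each helper returns the entry itself. A minimal sketch of such a builder (method names mirror the example, but the internals are illustrative only):

```typescript
// Minimal chainable log-entry builder sketch; not the real LogEntry class.
class LogEntrySketch {
  private entry: Record<string, unknown> = {}

  info(message: string): this {
    this.entry.level = 'info'
    this.entry.message = message
    return this // returning `this` is what makes the calls chainable
  }

  withElapsed(elapsed: string): this {
    this.entry.elapsed = elapsed
    return this
  }

  appendInfo(data: Record<string, unknown>): this {
    this.entry = { ...this.entry, ...data }
    return this
  }

  send(): Record<string, unknown> {
    // The real implementation would queue the entry for CloudWatch;
    // here we just return the assembled entry.
    return this.entry
  }
}

const sent = new LogEntrySketch()
  .info('A test log message')
  .withElapsed('12ms')
  .appendInfo({ test: 'data' })
  .send()
```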

Debug Logging

If we change the active log level in an environment to debug (the default is info), a default log entry is sent to CloudWatch for every handler (API and event). This facilitates debugging problems in an environment.

We support an optional filter on a logger, which can be set at runtime. The filter is a regex used to match the incoming endpoint path, which gives us control over which endpoints we log when debugging.
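The combined level-and-filter check can be sketched like this (the function name is illustrative): a debug entry is written only when the active level is debug and the endpoint path matches the optional filter regex.

```typescript
// Should a default debug entry be written for this endpoint?
// Illustrative sketch of the level + filter rule, not the real logger.
function shouldWriteDebug(path: string, level: string, filter?: string): boolean {
  if (level !== 'debug') return false
  if (!filter) return true // no filter set: log every endpoint
  return new RegExp(filter).test(path)
}
```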

When we instantiate a LogWrapper instance for each handler execution, we create a default log entry which is published at the end of the handler via the components/context/middleware/logger.ts component if we're at debug level.

The middleware component fills the log entry with information pertaining to the current handler execution.

You can also set custom data throughout your handler code; it will be added to the default debug log entry and any other log entry sent during the execution, as long as the data is added before the logs are sent.

To set additional data to include in any logs, use the following:

context.log.capture('more-log-data', {
  test: 'logs',
  more: 'stuff'
})
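The capture semantics described above, where each call stores a keyed property bag that is merged into every subsequently built log entry, can be sketched as follows (illustrative only, not the real LogWrapper internals):

```typescript
// Sketch of capture(): each call stores a keyed property bag, and every
// log entry built afterwards includes everything captured so far.
class CaptureSketch {
  private captured: Record<string, unknown> = {}

  capture(key: string, data: Record<string, unknown>): void {
    this.captured[key] = data
  }

  buildEntry(message: string): Record<string, unknown> {
    // Merge all captured bags into the entry at build time.
    return { message, ...this.captured }
  }
}

const log = new CaptureSketch()
log.capture('more-log-data', { test: 'logs' })
log.capture('record-edit', { id: '1-A' })
const built = log.buildEntry('handler finished')
```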

Sentry

We use Sentry for error management and profiling. Sentry error collection and profiling is handled in the components/context/middleware/sentry.ts middleware component. We use the same log entry data structure for our Sentry errors.

Toggling Log Level

Turning on debug logs as described above should be sufficient to see all the contextual information for any given handler execution. We need a mechanism by which we can easily turn this on and off at runtime for our APIs.

For Lambdas, we log each successful execution by default.

For our APIs we control the log level via the SystemSettings document for each service. The SystemSettings document has a logging property with the current log level and filter. When the app starts up it always acquires the default settings, which are info level and no filter.

If we want to turn on debug logging for all endpoints or a group of endpoints, we can use the PUT {{service_host}}/admin/settings endpoint. This allows us to specify a log level and filter as below:

PUT /{service_host}/admin/settings
{
  "logging": {
    "level": "debug",
    "filter": "^.*inventory/records/.*$"
  }
}

The above turns on debug logging for any endpoint matching the filter regex. The filter is optional; if it is not specified, all endpoints are logged.

Once you are done debugging, please ensure you reset the log level and filter as follows:

PUT /{service_host}/admin/settings
{
  "logging": {
    "level": "info",
    "filter": null
  }
}

AWS Log Groups & Streams

The AWS log groups are deployed as part of the infrastructure templates for services; for Lambdas, AWS takes care of the log groups.

For services, we have a log group per service and generate a new log stream each day. For Lambdas, there is a log group per Lambda and a log stream per instance of the Lambda.
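A per-day stream for a service implies deriving the stream name from the current date. A sketch, assuming one stream per UTC date (the naming convention here is illustrative, not the one in logger.ts):

```typescript
// Hypothetical daily stream name: one stream per service per UTC date.
function dailyStreamName(service: string, date: Date): string {
  return `${service}/${date.toISOString().slice(0, 10)}` // e.g. inventory/2024-05-17
}
```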

Which log group and stream we write to is controlled in the components/logging/logger.ts component.

Auditing

We treat auditing separately from logging, although we use the same components to handle it. The key difference is that audit logs are written to RavenDB, are searchable via the FarmTo client, and should only record business processes which we want to be able to retrospectively search through and verify.

Auditing piggybacks on logging and captures any logging information collected during a given handler execution; any custom information you want audited should be added to the log using the same capture method described above, i.e.

context.log.capture('record-edit', {
  id: '1-A',
  change: {
    things: 'that have changed'
  }
})

Capture can be called as many times as necessary to capture all the relevant information.

The audit log is generated and stored on the session in the components/context/middleware/context.ts component.

Note that GET requests are not audited because we never commit sessions on a GET request. This behaviour can be overridden by setting the commitOnGet field of the session to true:

context.session.commitOnGet = true