The Logger utility must always be instantiated outside the Lambda handler. By doing this, subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time. In addition, Logger can keep track of a cold start and inject the appropriate fields into logs.
The library has three optional settings, which can be set via environment variables or passed in the constructor.
These settings will be used across all logs emitted:
| Setting | Description | Environment variable | Default Value | Allowed Values | Example Value | Constructor parameter |
|---|---|---|---|---|---|---|
| Service name | Sets the name of the service the Lambda function is part of; it will be present in all log statements | POWERTOOLS_SERVICE_NAME | service_undefined | Any string | serverlessAirline | serviceName |
| Logging level | Sets how verbose Logger should be, from the most verbose to the least verbose (no logs) | POWERTOOLS_LOG_LEVEL | INFO | DEBUG, INFO, WARN, ERROR, CRITICAL, SILENT | ERROR | logLevel |
| Sample rate | Probability that a Lambda invocation will print all log items regardless of the log level setting | POWERTOOLS_LOGGER_SAMPLE_RATE | 0 | 0.0 to 1.0 | 0.1 | sampleRateValue |
Info
When the POWERTOOLS_DEV environment variable is present and set to "true" or "1", Logger pretty-prints log messages for easier readability. We recommend using this setting only when debugging in local environments.
Example using AWS Serverless Application Model (SAM)
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

// Logger parameters fetched from the environment variables (see template.yaml tab)
const logger = new Logger();
logger.info('Hello World');

// You can also pass the parameters in the constructor
// const logger = new Logger({
//   logLevel: 'WARN',
//   serviceName: 'serverlessAirline'
// });
```
Note
If you emit a log message with a key that matches one of `level`, `message`, `sampling_rate`, `service`, or `timestamp`, the Logger will log a warning message and ignore the key.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import middy from '@middy/core';

const logger = new Logger();

const lambdaHandler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  logger.info('This is an INFO log with some context');
};

export const handler = middy(lambdaHandler).use(injectLambdaContext(logger));
```
```typescript
import type { LambdaInterface } from '@aws-lambda-powertools/commons/types';
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger();

class Lambda implements LambdaInterface {
  // Decorate your handler class method
  @logger.injectLambdaContext()
  public async handler(_event: unknown, _context: unknown): Promise<void> {
    logger.info('This is an INFO log with some context');
  }
}

const myFunction = new Lambda();
export const handler = myFunction.handler.bind(myFunction); // (1)
```
Binding your handler method allows your handler to access `this` within the class methods.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();

export const handler = async (
  _event: unknown,
  context: Context
): Promise<void> => {
  logger.addContext(context);
  logger.info('This is an INFO log with some context');
};
```
In each case, the printed log will look like this:
```json
{
  "level": "INFO",
  "message": "This is an INFO log with some context",
  "timestamp": "2021-12-12T21:21:08.921Z",
  "service": "serverlessAirline",
  "cold_start": true,
  "function_arn": "arn:aws:lambda:eu-west-1:123456789012:function:shopping-cart-api-lambda-prod-eu-west-1",
  "function_memory_size": 128,
  "function_request_id": "c6af9ac6-7b61-11e6-9a41-93e812345678",
  "function_name": "shopping-cart-api-lambda-prod-eu-west-1",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
}
```
When debugging in non-production environments, you can log the incoming event using the logEventIfEnabled() method or by setting the logEvent option in the injectLambdaContext() Middy.js middleware or class method decorator.
Warning
This is disabled by default to prevent sensitive information from being logged.
```typescript
process.env.POWERTOOLS_LOGGER_LOG_EVENT = 'true';

import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger();

export const handler = async (event: unknown) => {
  logger.logEventIfEnabled(event); // (1)
  // ... your logic here
};
```
You can control the event logging via the POWERTOOLS_LOGGER_LOG_EVENT environment variable.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import middy from '@middy/core';

const logger = new Logger();

export const handler = middy(async () => {
  // ... your logic here
}).use(
  injectLambdaContext(logger, { logEvent: true }) // (1)
);
```
The logEvent option takes precedence over the POWERTOOLS_LOGGER_LOG_EVENT environment variable.
```typescript
import type { LambdaInterface } from '@aws-lambda-powertools/commons/types';
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger();

class Lambda implements LambdaInterface {
  @logger.injectLambdaContext({ logEvent: true }) // (1)
  public async handler(_event: unknown, _context: unknown): Promise<void> {
    // ... your logic here
  }
}

const myFunction = new Lambda();
export const handler = myFunction.handler.bind(myFunction);
```
Use the POWERTOOLS_LOGGER_LOG_EVENT environment variable to enable or disable (true/false) this feature. When using the Middy.js middleware or class method decorator, the logEvent option takes precedence over the environment variable.
To get started, install the @aws-lambda-powertools/jmespath package, and pass the search function using the correlationIdSearchFn constructor parameter:
You can retrieve the current correlation ID via the getCorrelationId() method.
You can set a correlation ID using the correlationIdPath parameter by passing a JMESPath expression, including our custom JMESPath functions, or set it manually by calling the setCorrelationId() method.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { APIGatewayProxyEvent } from 'aws-lambda';

const logger = new Logger();

export const handler = async (event: APIGatewayProxyEvent) => {
  logger.setCorrelationId(event.requestContext.requestId); // (1)!
  logger.info('log with correlation_id');
};
```
Alternatively, if the payload is more complex, you can use a JMESPath expression, providing a search function in the constructor.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { search } from '@aws-lambda-powertools/logger/correlationId';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import middy from '@middy/core';

const logger = new Logger({
  correlationIdSearchFn: search,
});

export const handler = middy()
  .use(
    injectLambdaContext(logger, {
      correlationIdPath: 'headers.my_request_id_header',
    })
  )
  .handler(async () => {
    logger.info('log with correlation_id');
  });
```
```typescript
import type { LambdaInterface } from '@aws-lambda-powertools/commons/types';
import { Logger } from '@aws-lambda-powertools/logger';
import { search } from '@aws-lambda-powertools/logger/correlationId';

const logger = new Logger({
  correlationIdSearchFn: search,
});

class Lambda implements LambdaInterface {
  @logger.injectLambdaContext({
    correlationIdPath: 'headers.my_request_id_header',
  })
  public async handler(_event: unknown, _context: unknown): Promise<void> {
    logger.info('This is an INFO log with some context');
  }
}

const myFunction = new Lambda();
export const handler = myFunction.handler.bind(myFunction);
```
```json
{
  "level": "INFO",
  "message": "This is an INFO log with some context",
  "timestamp": "2021-05-03 11:47:12,494+0000",
  "service": "payment",
  "correlation_id": "correlation_id_value"
}
```
To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.
```typescript
import type { LambdaInterface } from '@aws-lambda-powertools/commons/types';
import { Logger } from '@aws-lambda-powertools/logger';
import {
  correlationPaths,
  search,
} from '@aws-lambda-powertools/logger/correlationId';

const logger = new Logger({
  correlationIdSearchFn: search,
});

class Lambda implements LambdaInterface {
  @logger.injectLambdaContext({
    correlationIdPath: correlationPaths.API_GATEWAY_REST,
  })
  public async handler(_event: unknown, _context: unknown): Promise<void> {
    logger.info('This is an INFO log with some context');
  }
}

const myFunction = new Lambda();
export const handler = myFunction.handler.bind(myFunction);
```
You can append additional data to a single log item by passing objects as additional parameters.
- Pass a simple string to log it under the default key name `extra`
- Pass one or multiple objects containing arbitrary data to be logged. Each data object should be wrapped in an enclosing object as a single property value; you can name this property as you need: `{ myData: arbitraryObjectToLog }`
- If you already have an object containing a `message` key and an additional property, you can pass this object directly
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger();

export const handler = async (
  event: unknown,
  _context: unknown
): Promise<unknown> => {
  const myImportantVariable = {
    foo: 'bar',
  };

  // Log additional data in single log items
  // As second parameter
  logger.info('This is a log with an extra variable', {
    data: myImportantVariable,
  });

  // You can also pass multiple parameters containing arbitrary objects
  logger.info(
    'This is a log with 3 extra objects',
    { data: myImportantVariable },
    { correlationIds: { myCustomCorrelationId: 'foo-bar-baz' } },
    { lambdaEvent: event }
  );

  // Simply pass a string for logging additional data
  logger.info('This is a log with additional string value', 'string value');

  // Directly passing an object containing both the message and the additional info
  const logObject = {
    message: 'This is a log message',
    additionalValue: 42,
  };
  logger.info(logObject);

  return {
    foo: 'bar',
  };
};
```
```json
{
  "level": "INFO",
  "message": "This is a log with an extra variable",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:06:17.463Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "data": { "foo": "bar" }
}
{
  "level": "INFO",
  "message": "This is a log with 3 extra objects",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:06:17.466Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "data": { "foo": "bar" },
  "correlationIds": { "myCustomCorrelationId": "foo-bar-baz" },
  "lambdaEvent": { "exampleEventData": { "eventValue": 42 } }
}
{
  "level": "INFO",
  "message": "This is a log with additional string value",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:06:17.463Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "extra": "string value"
}
{
  "level": "INFO",
  "message": "This is a log message",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:06:17.463Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "additionalValue": 42
}
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  serviceName: 'serverlessAirline',
});

const processTransaction = async (customerId: string): Promise<void> => {
  try {
    logger.appendKeys({
      customerId,
    });

    // ... your business logic

    logger.info('transaction processed');
  } finally {
    logger.resetKeys(); // (1)!
  }
};

export const handler = async (
  event: { customerId: string },
  _context: unknown
): Promise<void> => {
  await processTransaction(event.customerId);

  // ... other business logic

  logger.info('other business logic processed');
};
```
You can also remove specific keys by calling the removeKeys() method.
```json
{
  "level": "INFO",
  "message": "transaction processed",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T21:49:58.084Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "customerId": "123456789012"
}
{
  "level": "INFO",
  "message": "other business logic processed",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T21:49:58.088Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
}
```
You can persist keys across Lambda invocations by using the persistentKeys constructor option or the appendPersistentKeys() method. These keys will persist even if you call the resetKeys() method.
A common use case is to set keys about your environment or application version, so that you can easily filter logs in CloudWatch Logs.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  serviceName: 'serverlessAirline',
  persistentKeys: {
    environment: 'prod',
    version: process.env.BUILD_VERSION,
  },
});

export const handler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  logger.info('processing transaction');

  // ... your business logic
};
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  serviceName: 'serverlessAirline',
});

declare const getRemoteConfig: (env: string) => {
  environment: string;
  version: string;
};

const { environment, version } = getRemoteConfig('prod');

logger.appendPersistentKeys({ environment, version });

export const handler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  logger.info('processing transaction');

  // ... your business logic
};
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  serviceName: 'serverlessAirline',
});

const processTransaction = async (customerId: string): Promise<void> => {
  try {
    logger.appendKeys({
      customerId,
    });

    // ... your business logic

    logger.info('transaction processed');
  } finally {
    logger.removeKeys(['customerId']);
  }
};

export const handler = async (
  event: { customerId: string },
  _context: unknown
): Promise<void> => {
  await processTransaction(event.customerId);

  // ... other business logic

  logger.info('other business logic processed');
};
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  serviceName: 'serverlessAirline',
  persistentKeys: {
    foo: true,
  },
});

declare const getRemoteConfig: (env: string) => {
  isFoo: boolean;
};

export const handler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  const { isFoo } = getRemoteConfig('prod');
  if (isFoo) logger.removePersistentKeys(['foo']);

  logger.info('processing transaction');

  // ... your business logic
};
```
Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, this means that custom keys can be persisted across invocations.
Resetting the state allows you to clear all the temporary keys you have added.
Tip: When is this useful?
This is useful when you add multiple custom keys conditionally or when you use canonical or wide logs.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

// Persistent attributes will be cached across invocations
const logger = new Logger({
  logLevel: 'info',
  persistentKeys: {
    environment: 'prod',
  },
});

export const handler = async (
  event: { userId: string },
  _context: unknown
): Promise<void> => {
  try {
    // This temporary key will be included in the log & cleared after the invocation
    logger.appendKeys({
      details: { userId: event.userId },
    });

    // ... your business logic
  } finally {
    logger.info('WIDE');
    logger.resetKeys();
  }
};
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import middy from '@middy/core';

// Persistent attributes will be cached across invocations
const logger = new Logger({
  logLevel: 'info',
  persistentKeys: {
    environment: 'prod',
  },
});

export const handler = middy(
  async (event: { userId: string }, _context: unknown): Promise<void> => {
    // This temporary key will be included in the log & cleared after the invocation
    logger.appendKeys({
      details: { userId: event.userId },
    });

    // ... your business logic

    logger.info('WIDE');
  }
).use(injectLambdaContext(logger, { resetKeys: true }));
```
```typescript
import type { LambdaInterface } from '@aws-lambda-powertools/commons/types';
import { Logger } from '@aws-lambda-powertools/logger';

// Persistent attributes will be cached across invocations
const logger = new Logger({
  logLevel: 'info',
  persistentKeys: {
    environment: 'prod',
  },
});

class Lambda implements LambdaInterface {
  @logger.injectLambdaContext({ resetKeys: true })
  public async handler(
    event: { userId: string },
    _context: unknown
  ): Promise<void> {
    // This temporary key will be included in the log & cleared after the invocation
    logger.appendKeys({
      details: { userId: event.userId },
    });

    // ... your business logic

    logger.info('WIDE');
  }
}

const myFunction = new Lambda();
export const handler = myFunction.handler.bind(myFunction); // (1)!
```
Binding your handler method allows your handler to access `this` within the class methods.
You can log errors by using the error() method, passing the error object as a parameter.
The error will be logged under the default key name `error`, but you can also pass your own custom key name.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger();

export const handler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  try {
    throw new Error('Unexpected error #1');
  } catch (error) {
    // Log information about the error using the default "error" key
    logger.error('This is the first error', error as Error);
  }

  try {
    throw new Error('Unexpected error #2');
  } catch (error) {
    // Log information about the error using a custom "myCustomErrorKey" key
    logger.error('This is the second error', {
      myCustomErrorKey: error as Error,
    });
  }
};
```
```json
{
  "level": "ERROR",
  "message": "This is the first error",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:12:39.345Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "error": {
    "name": "Error",
    "location": "/path/to/my/source-code/my-service/handler.ts:18",
    "message": "Unexpected error #1",
    "stack": "Error: Unexpected error #1 at lambdaHandler (/path/to/my/source-code/my-service/handler.ts:18:11) at Object.<anonymous> (/path/to/my/source-code/my-service/handler.ts:35:1) at Module._compile (node:internal/modules/cjs/loader:1108:14) at Module.m._compile (/path/to/my/source-code/node_modules/ts-node/src/index.ts:1371:23) at Module._extensions..js (node:internal/modules/cjs/loader:1137:10) at Object.require.extensions.<computed> [as .ts] (/path/to/my/source-code/node_modules/ts-node/src/index.ts:1374:12) at Module.load (node:internal/modules/cjs/loader:973:32) at Function.Module._load (node:internal/modules/cjs/loader:813:14) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12) at main (/path/to/my/source-code/node_modules/ts-node/src/bin.ts:331:12)"
  }
}
{
  "level": "ERROR",
  "message": "This is the second error",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:12:39.377Z",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456",
  "myCustomErrorKey": {
    "name": "Error",
    "location": "/path/to/my/source-code/my-service/handler.ts:24",
    "message": "Unexpected error #2",
    "stack": "Error: Unexpected error #2 at lambdaHandler (/path/to/my/source-code/my-service/handler.ts:24:11) at Object.<anonymous> (/path/to/my/source-code/my-service/handler.ts:35:1) at Module._compile (node:internal/modules/cjs/loader:1108:14) at Module.m._compile (/path/to/my/source-code/node_modules/ts-node/src/index.ts:1371:23) at Module._extensions..js (node:internal/modules/cjs/loader:1137:10) at Object.require.extensions.<computed> [as .ts] (/path/to/my/source-code/node_modules/ts-node/src/index.ts:1374:12) at Module.load (node:internal/modules/cjs/loader:973:32) at Function.Module._load (node:internal/modules/cjs/loader:813:14) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12) at main (/path/to/my/source-code/node_modules/ts-node/src/bin.ts:331:12)"
  }
}
```
Logging errors and log level
You can also log errors using the warn, info, and debug methods. Be aware of the log level though: depending on the log level configuration, you might miss those errors when analyzing the logs later.
The default log level is INFO and can be set using the logLevel constructor option or by using the POWERTOOLS_LOG_LEVEL environment variable.
We support the following log levels:
| Level | Numeric value |
|---|---|
| TRACE | 6 |
| DEBUG | 8 |
| INFO | 12 |
| WARN | 16 |
| ERROR | 20 |
| CRITICAL | 24 |
| SILENT | 28 |
You can access the current log level by using the getLevelName() method. This method returns the name of the current log level as a string. If you want to change the log level at runtime, you can use the setLogLevel() method. This method accepts a string value that represents the log level you want to set, both lower and upper case values are supported.
If you want to access the numeric value of the current log level, you can use the level property. For example, if the current log level is INFO, logger.level property will return 12.
The SILENT log level provides a simple and efficient way to suppress all log messages without the need to modify your code. When you set this log level, all log messages, regardless of their severity, will be silenced.
This feature is useful when you want to have your code instrumented to produce logs, but due to some requirement or business decision, you prefer to not emit them.
By setting the log level to SILENT, which can be done either through the logLevel constructor option or by using the POWERTOOLS_LOG_LEVEL environment variable, you can easily suppress all logs as needed.
Note
Use the SILENT log level with care, as it can make it more challenging to monitor and debug your application. Therefore, we advise using this log level judiciously.
With AWS Lambda Advanced Logging Controls (ALC), you can control the output format of your logs as either TEXT or JSON and specify the minimum accepted log level for your application.
Regardless of the output format setting in Lambda, we will always output JSON formatted logging messages.
When you have this feature enabled, log messages that don’t meet the configured log level are discarded by Lambda. For example, if you set the minimum log level to WARN, you will only receive WARN and ERROR messages in your AWS CloudWatch Logs, all other log levels will be discarded by Lambda.
```mermaid
sequenceDiagram
    title Lambda ALC allows WARN logs only
    participant Lambda service
    participant Lambda function
    participant Application Logger

    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.warn("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service->>Lambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs
```
Priority of log level settings in Powertools for AWS Lambda
When the Advanced Logging Controls feature is enabled, we are unable to increase the minimum log level below the AWS_LAMBDA_LOG_LEVEL environment variable value, see AWS Lambda service documentation for more details.
We prioritise log level settings in this order:

1. AWS_LAMBDA_LOG_LEVEL environment variable
2. Setting the log level in code using the logLevel constructor option, or by calling the logger.setLogLevel() method
3. POWERTOOLS_LOG_LEVEL environment variable
In the event you have set a log level in Powertools to a level that is lower than the ALC setting, we will output a warning log message informing you that your messages will be discarded by Lambda.
Log buffering enables you to buffer logs for a specific request or invocation. Enable log buffering by passing logBufferOptions when initializing a Logger instance. You can buffer logs at the WARNING, INFO, DEBUG, or TRACE level, and flush them automatically on error or manually as needed.
This is useful when you want to reduce the number of log messages emitted while still having detailed logs when needed, such as when troubleshooting issues.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  logBufferOptions: {
    maxBytes: 20480,
    flushOnErrorLog: true,
  },
});

logger.debug('This is a debug message'); // This is NOT buffered

export const handler = async () => {
  logger.debug('This is a debug message'); // This is buffered
  logger.info('This is an info message');

  // your business logic here

  logger.error('This is an error message'); // This also flushes the buffer
  // or logger.flushBuffer(); // to flush the buffer manually
};
```
When configuring the buffer, you can set the following options in the logBufferOptions constructor parameter to fine-tune how logs are captured, stored, and emitted:
| Parameter | Description | Configuration | Default |
|---|---|---|---|
| enabled | Enable or disable log buffering | true, false | false |
| maxBytes | Maximum size of the log buffer in bytes | number | 20480 |
| bufferAtVerbosity | Minimum log level to buffer | TRACE, DEBUG, INFO, WARNING | DEBUG |
| flushOnErrorLog | Automatically flush buffer when logging an error | true, false | true |
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  logBufferOptions: {
    bufferAtVerbosity: 'warn', // (1)!
  },
});

export const handler = async () => {
  // All logs below are buffered
  logger.debug('This is a debug message');
  logger.info('This is an info message');
  logger.warn('This is a warn message');

  logger.clearBuffer(); // (2)!
};
```
1. Setting `bufferAtVerbosity: 'warn'` configures log buffering for WARNING and all lower severity levels like INFO, DEBUG, and TRACE.
2. Calling `logger.clearBuffer()` will clear the buffer without emitting the logs.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  logBufferOptions: {
    maxBytes: 20480,
    flushOnErrorLog: false, // (1)!
  },
});

export const handler = async () => {
  logger.debug('This is a debug message'); // This is buffered

  try {
    throw new Error('a non fatal error');
  } catch (error) {
    logger.error('A non fatal error occurred', { error }); // This does NOT flush the buffer
  }

  logger.debug('This is another debug message'); // This is buffered

  try {
    throw new Error('a fatal error');
  } catch (error) {
    logger.error('A fatal error occurred', { error }); // This does NOT flush the buffer
    logger.flushBuffer();
  }
};
```
Disabling flushOnErrorLog will not flush the buffer when logging an error. This is useful when you want to control when the buffer is flushed by calling the logger.flushBuffer() method.
When using the logger.injectLambdaContext() class method decorator or the injectLambdaContext() middleware, you can configure the logger to automatically flush the buffer when an error occurs. This is done by setting the flushBufferOnUncaughtError option to true in the decorator or middleware options.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger({
  logLevel: 'DEBUG',
  logBufferOptions: { enabled: true },
});

class Lambda {
  @logger.injectLambdaContext({
    flushBufferOnUncaughtError: true,
  })
  async handler(_event: unknown, _context: Context) {
    // Both logs below are buffered
    logger.debug('a debug log');
    logger.debug('another debug log');

    throw new Error('an error log'); // This causes the buffer to flush
  }
}

const lambda = new Lambda();
export const handler = lambda.handler.bind(lambda);
```
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import middy from '@middy/core';

const logger = new Logger({
  logLevel: 'DEBUG',
  logBufferOptions: { enabled: true },
});

export const handler = middy()
  .use(injectLambdaContext(logger, { flushBufferOnUncaughtError: true }))
  .handler(async (event: unknown) => {
    // Both logs below are buffered
    logger.debug('a debug log');
    logger.debug('another debug log');

    throw new Error('an error log'); // This causes the buffer to flush
  });
```
This works only when using the logger.injectLambdaContext() class method decorator or the injectLambdaContext() middleware. You can configure the logger to automatically flush the buffer when an error occurs by setting the flushBufferOnUncaughtError option to true in the decorator or middleware options.
Does the buffer persist across Lambda invocations?
No, each Lambda invocation has its own buffer. The buffer is initialized when the Lambda function is invoked and is cleared after the function execution completes or when flushed manually.
Are my logs buffered during cold starts?
No, we never buffer logs during cold starts. This is because we want to ensure that logs emitted during this phase are always available for debugging and monitoring purposes. The buffer is only used during the execution of the Lambda function.
How can I prevent log buffering from consuming excessive memory?
You can limit the size of the buffer by setting the maxBytes option in the logBufferOptions constructor parameter. This will ensure that the buffer does not grow indefinitely and consume excessive memory.
What happens if the log buffer reaches its maximum size?
Older logs are removed from the buffer to make room for new logs. This means that if the buffer is full, you may lose some logs if they are not flushed before the buffer reaches its maximum size. When this happens, we emit a warning when flushing the buffer to indicate that some logs have been dropped.
How is the log size of a log line calculated?
The log size is calculated based on the size of the stringified log line in bytes. This includes the size of the log message, the size of any additional keys, and the size of the timestamp.
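As an illustration, the byte size of a serialized log record can be approximated with Node's Buffer.byteLength; this is a sketch of the accounting, not the library's internal code:

```typescript
// Approximate the buffered size of a log line: serialize the record
// (message, additional keys, timestamp, ...) and measure its UTF-8 bytes.
const record = {
  level: 'DEBUG',
  message: 'This is a debug message',
  timestamp: '2021-12-12T21:21:08.921Z',
  service: 'serverlessAirline',
  customerId: '123456789012',
};

const sizeInBytes = Buffer.byteLength(JSON.stringify(record), 'utf8');
console.log(sizeInBytes);
```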
What timestamp is used when I flush the logs?
The timestamp preserves the original time when the log record was created. If you create a log record at 11:00:10 and flush it at 11:00:25, the log line will retain its original timestamp of 11:00:10.
What happens if I try to add a log line that is bigger than max buffer size?
The log will be emitted directly to standard output and not buffered. When this happens, we emit a warning to indicate that the log line was too big to be buffered.
What happens if Lambda times out without flushing the buffer?
Logs that are still in the buffer will be lost. If you are using the log buffer to log asynchronously, you should ensure that the buffer is flushed before the Lambda function times out. You can do this by calling the logger.flushBuffer() method at the end of your Lambda function.
Do child loggers inherit the buffer?
No, child loggers do not inherit the buffer from their parent logger but only the buffer configuration. This means that if you create a child logger, it will have its own buffer and will not share the buffer with the parent logger.
By default, Logger emits records with the default Lambda timestamp in UTC, e.g. 2016-06-20T12:08:10.000Z
If you prefer to log in a specific timezone, you can configure it by setting the TZ environment variable, either directly in your Lambda function configuration or at deployment time.
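A hypothetical SAM snippet setting the TZ environment variable so log timestamps use Europe/Paris instead of UTC (function name and handler path are placeholders):

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/index.handler
      Environment:
        Variables:
          TZ: Europe/Paris
```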
The createChild method allows you to create a child instance of the Logger, which inherits all of the attributes from its parent. You have the option to override any of the settings and attributes from the parent logger, including its settings, any extra keys, and the log formatter.
Once a child logger is created, the logger and its parent will act as separate instances of the Logger class, and as such any change to one won't be applied to the other.
The following example shows how to create multiple Loggers that share service name and persistent attributes while specifying different logging levels within a single Lambda invocation. As a result, only ERROR logs with all the inherited attributes will be displayed in CloudWatch Logs from the child logger, but all logs emitted will have the same service name and persistent attributes.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

// This logger has a service name, some persistent attributes
// and log level set to INFO
const logger = new Logger({
  serviceName: 'serverlessAirline',
  logLevel: 'INFO',
  persistentLogAttributes: {
    aws_account_id: '123456789012',
    aws_region: 'eu-west-1',
  },
});

// This other logger inherits all the parent's attributes
// but the log level, which is now set to ERROR
const childLogger = logger.createChild({
  logLevel: 'ERROR',
});

export const handler = async (
  _event: unknown,
  _context: unknown
): Promise<void> => {
  logger.info('This is an INFO log, from the parent logger');
  logger.error('This is an ERROR log, from the parent logger');

  childLogger.info('This is an INFO log, from the child logger');
  childLogger.error('This is an ERROR log, from the child logger');
};
```
```json
{
  "level": "INFO",
  "message": "This is an INFO log, from the parent logger",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:32:54.667Z",
  "aws_account_id": "123456789012",
  "aws_region": "eu-west-1",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
}
{
  "level": "ERROR",
  "message": "This is an ERROR log, from the parent logger",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:32:54.670Z",
  "aws_account_id": "123456789012",
  "aws_region": "eu-west-1",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
}
{
  "level": "ERROR",
  "message": "This is an ERROR log, from the child logger",
  "service": "serverlessAirline",
  "timestamp": "2021-12-12T22:32:54.670Z",
  "aws_account_id": "123456789012",
  "aws_region": "eu-west-1",
  "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
}
```
Use sampling when you want to dynamically change your log level to DEBUG based on a percentage of your invocations.
You can use values ranging from 0 to 1 (100%) when setting the sampleRateValue constructor option or POWERTOOLS_LOGGER_SAMPLE_RATE env var.
When is this useful?
Suppose a sudden spike in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and by the time you adjust the log level the issue might not happen again.
Sampling addresses this by emitting full debugging information for a fraction of invocations ahead of time, so the details are already there when a transient issue occurs.
The sampling decision happens at Logger initialization. When you use the injectLambdaContext method, either as a class method decorator or as Middy.js middleware, the sampling decision is refreshed for you at the beginning of each Lambda invocation, except on cold starts.
If you're not using either of these, you'll need to manually call the refreshSampleRateCalculation() method at the start of your handler to refresh the sampling decision for each invocation.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({
  logLevel: 'ERROR', // (1)!
  sampleRateValue: 0.5,
});

export const handler = async () => {
  logger.refreshSampleRateCalculation(); // (2)!

  logger.error('This log is always emitted');

  logger.debug('This log has ~50% chance of being emitted');
  logger.info('This log has ~50% chance of being emitted');
  logger.warn('This log has ~50% chance of being emitted');
};
```
The log level must be set to a less verbose level than DEBUG for log sampling to take effect.
You need to call logger.refreshSampleRateCalculation() at the start of your handler only if you're not using the injectLambdaContext() class method decorator or Middy.js middleware.
```json
{"level":"ERROR","message":"This log is always emitted","sampling_rate":"0.5","service":"serverlessAirline","timestamp":"2021-12-12T22:59:06.334Z","xray_trace_id":"abcdef123456abcdef123456abcdef123456"}
```
```json
[
    {"level":"ERROR","message":"This log is always emitted","sampling_rate":"0.5","service":"serverlessAirline","timestamp":"2021-12-12T22:59:06.334Z","xray_trace_id":"abcdef123456abcdef123456abcdef123456"},
    {"level":"DEBUG","message":"This log has ~50% chance of being emitted","sampling_rate":"0.5","service":"serverlessAirline","timestamp":"2021-12-12T22:59:06.337Z","xray_trace_id":"abcdef123456abcdef123456abcdef123456"},
    {"level":"INFO","message":"This log has ~50% chance of being emitted","sampling_rate":"0.5","service":"serverlessAirline","timestamp":"2021-12-12T22:59:06.338Z","xray_trace_id":"abcdef123456abcdef123456abcdef123456"},
    {"level":"WARN","message":"This log has ~50% chance of being emitted","sampling_rate":"0.5","service":"serverlessAirline","timestamp":"2021-12-12T22:59:06.338Z","xray_trace_id":"abcdef123456abcdef123456abcdef123456"}
]
```
You can customize the structure (keys and values) of your logs by passing a custom log formatter, a class that implements the LogFormatter interface, to the Logger constructor.
When working with custom log formatters, you take full control over the structure of your logs. This allows you to optionally drop or transform keys, add new ones, or change the format to suit your company's logging standards or use Logger with a third-party logging service.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';
import { MyCompanyLogFormatter } from './bringYourOwnFormatterClass';

const logger = new Logger({
  logFormatter: new MyCompanyLogFormatter(),
  logLevel: 'DEBUG',
  serviceName: 'serverlessAirline',
  sampleRateValue: 0.5,
  persistentLogAttributes: {
    awsAccountId: process.env.AWS_ACCOUNT_ID,
    logger: {
      name: '@aws-lambda-powertools/logger',
      version: '0.0.1',
    },
  },
});

export const handler = async (
  _event: unknown,
  context: Context
): Promise<void> => {
  logger.addContext(context);

  logger.info('This is an INFO log', {
    correlationIds: { myCustomCorrelationId: 'foo-bar-baz' },
  });
};
```
```typescript
import { LogFormatter, LogItem } from '@aws-lambda-powertools/logger';
import type {
  LogAttributes,
  UnformattedAttributes,
} from '@aws-lambda-powertools/logger/types';

// Replace this line with your own type
type MyCompanyLog = LogAttributes;

class MyCompanyLogFormatter extends LogFormatter {
  public formatAttributes(
    attributes: UnformattedAttributes,
    additionalLogAttributes: LogAttributes
  ): LogItem {
    const baseAttributes: MyCompanyLog = {
      message: attributes.message,
      service: attributes.serviceName,
      environment: attributes.environment,
      awsRegion: attributes.awsRegion,
      correlationIds: {
        awsRequestId: attributes.lambdaContext?.awsRequestId,
        xRayTraceId: attributes.xRayTraceId,
      },
      lambdaFunction: {
        name: attributes.lambdaContext?.functionName,
        arn: attributes.lambdaContext?.invokedFunctionArn,
        memoryLimitInMB: attributes.lambdaContext?.memoryLimitInMB,
        version: attributes.lambdaContext?.functionVersion,
        coldStart: attributes.lambdaContext?.coldStart,
      },
      logLevel: attributes.logLevel,
      timestamp: this.formatTimestamp(attributes.timestamp), // You can extend this function
      logger: {
        sampleRateValue: attributes.sampleRateValue,
      },
    };

    const logItem = new LogItem({ attributes: baseAttributes });
    logItem.addAttributes(additionalLogAttributes); // add any attributes not explicitly defined

    return logItem;
  }
}

export { MyCompanyLogFormatter };
```
```json
{
    "message": "This is an INFO log",
    "service": "serverlessAirline",
    "awsRegion": "eu-west-1",
    "correlationIds": {
        "awsRequestId": "c6af9ac6-7b61-11e6-9a41-93e812345678",
        "xRayTraceId": "abcdef123456abcdef123456abcdef123456",
        "myCustomCorrelationId": "foo-bar-baz"
    },
    "lambdaFunction": {
        "name": "shopping-cart-api-lambda-prod-eu-west-1",
        "arn": "arn:aws:lambda:eu-west-1:123456789012:function:shopping-cart-api-lambda-prod-eu-west-1",
        "memoryLimitInMB": 128,
        "version": "$LATEST",
        "coldStart": true
    },
    "logLevel": "INFO",
    "timestamp": "2021-12-12T23:13:53.404Z",
    "logger": {
        "sampleRateValue": "0.5",
        "name": "aws-lambda-powertools-typescript",
        "version": "0.0.1"
    },
    "awsAccountId": "123456789012"
}
```
Note that when implementing this method, you should avoid mutating the attributes and additionalLogAttributes objects directly. Instead, create a new object with the desired structure and return it. If mutation is necessary, you can use structuredClone to create a deep copy of the object and avoid side effects.
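To illustrate the mutation-free pattern, here is a minimal sketch in plain TypeScript. The toCompanyShape helper is hypothetical, not part of the Logger API; it just shows how structuredClone keeps the caller's object untouched:

```typescript
type Attributes = Record<string, unknown>;

// Hypothetical helper: builds a new log shape from incoming
// attributes without mutating the original object
const toCompanyShape = (attributes: Attributes): Attributes => {
  // structuredClone creates a deep copy, so changes made here
  // never leak back into the caller's object
  const copy = structuredClone(attributes);
  copy.transformedAt = '2021-12-12T23:13:53.404Z'; // example of an added key
  return copy;
};
```

Because the clone is deep, even nested objects in the original attributes remain untouched after the transformation.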
You can extend the default JSON serializer by passing a custom serializer function to the Logger constructor, using the jsonReplacerFn option. This is useful when you want to customize the serialization of specific values.
```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { CustomReplacerFn } from '@aws-lambda-powertools/logger/types';

const jsonReplacerFn: CustomReplacerFn = (_: string, value: unknown) =>
  value instanceof Set ? [...value] : value;

const logger = new Logger({ serviceName: 'serverlessAirline', jsonReplacerFn });

export const handler = async (): Promise<void> => {
  logger.info('Serialize with custom serializer', {
    serializedValue: new Set([1, 2, 3]),
  });
};
```
```json
{"level":"INFO","message":"Serialize with custom serializer","timestamp":"2024-07-07T09:52:14.212Z","service":"serverlessAirline","sampling_rate":0,"xray_trace_id":"1-668a654d-396c646b760ee7d067f32f18","serializedValue":[1,2,3]}
```
By default, Logger uses JSON.stringify() to serialize log items and a custom replacer function to serialize common unserializable values such as BigInt, circular references, and Error objects.
When you extend the default JSON serializer, we will call your custom serializer function before the default one. This allows you to customize the serialization while still benefiting from the default behavior.
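As a rough illustration of that ordering, the sketch below composes two replacers with plain JSON.stringify. This is not the Logger internals, just the same idea in miniature: the custom replacer runs first, and whatever it leaves unchanged falls through to a default replacer:

```typescript
type Replacer = (key: string, value: unknown) => unknown;

// Default replacer sketch: handles a value JSON.stringify can't, e.g. BigInt
const defaultReplacer: Replacer = (_key, value) =>
  typeof value === 'bigint' ? value.toString() : value;

// Custom replacer runs first; anything it returns unchanged
// is then passed through the fallback replacer
const composeReplacers =
  (custom: Replacer, fallback: Replacer): Replacer =>
  (key, value) =>
    fallback(key, custom(key, value));

const customReplacer: Replacer = (_key, value) =>
  value instanceof Set ? [...value] : value;

const serialize = (value: unknown): string =>
  JSON.stringify(value, composeReplacers(customReplacer, defaultReplacer));
```

With this wrapper, serialize({ ids: new Set([1, 2]), big: BigInt(10) }) handles the Set via the custom replacer and the BigInt via the default one.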
When unit testing your code that makes use of logger.addContext() or injectLambdaContext middleware and decorator, you can optionally pass a dummy Lambda Context if you want your logs to contain this information.
This is a sample that provides the minimum information necessary for Logger to inject context data:
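A sketch of such a dummy context is shown below. The field values are placeholders, and the DummyContext interface is declared locally only so the snippet stands alone; in real tests you would import type { Context } from 'aws-lambda' instead:

```typescript
// Minimal stand-in for the aws-lambda Context type, declared locally
// so this snippet is self-contained
interface DummyContext {
  functionName: string;
  functionVersion: string;
  invokedFunctionArn: string;
  memoryLimitInMB: string;
  awsRequestId: string;
}

// Placeholder values; Logger only copies these strings into your logs
const dummyContext: DummyContext = {
  functionName: 'hello-world',
  functionVersion: '$LATEST',
  invokedFunctionArn:
    'arn:aws:lambda:eu-west-1:123456789012:function:hello-world',
  memoryLimitInMB: '128',
  awsRequestId: 'c6af9ac6-7b61-11e6-9a41-93e812345678',
};
```

You can then pass this object to your handler, or to logger.addContext(), in your unit tests.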
When unit testing your code with Jest or Vitest you can use the POWERTOOLS_DEV environment variable in conjunction with the --silent CLI option to suppress logs from Logger.
Disabling logs while testing with Vitest
```shell
export POWERTOOLS_DEV=true && npx vitest --silent
```
Alternatively, you can also set the POWERTOOLS_DEV environment variable to true in your test setup file, or in a hoisted block at the top of your test file.
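For example, in a setup file (the vitest.setup.ts file name is illustrative; wire it up via the setupFiles option in your Vitest config):

```typescript
// vitest.setup.ts (hypothetical file name)
// Setting the variable here makes Logger pretty-print (or, combined with
// --silent, suppress) logs across the whole test suite
process.env.POWERTOOLS_DEV = 'true';
```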