Creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics.
The following error handling options are available only for stream sources (DynamoDB and Kinesis):

BisectBatchOnFunctionError - If the function returns an error, split the batch in two and retry.

DestinationConfig - Send discarded records to an Amazon SQS queue or Amazon SNS topic.

MaximumRecordAgeInSeconds - Discard records older than the specified age. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

MaximumRetryAttempts - Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

ParallelizationFactor - Process multiple batches from each shard concurrently.
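As a sketch, the stream error-handling options above can be combined in a single call. This follows the `svc$create_event_source_mapping` convention used later in this page; the stream and queue ARNs are hypothetical placeholders, and a configured `svc <- paws::lambda()` client is assumed.

```r
# Sketch only: ARNs below are placeholders, not real resources.
if (FALSE) {
  svc$create_event_source_mapping(
    EventSourceArn = "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
    FunctionName = "my-function",
    StartingPosition = "LATEST",
    # On error, split the batch in two and retry each half.
    BisectBatchOnFunctionError = TRUE,
    # Give up on a record after 2 retries or 1 hour, whichever comes first.
    MaximumRetryAttempts = 2L,
    MaximumRecordAgeInSeconds = 3600L,
    # Process up to 2 batches per shard concurrently.
    ParallelizationFactor = 2L,
    # Send discarded records to an SQS queue.
    DestinationConfig = list(
      OnFailure = list(
        Destination = "arn:aws:sqs:us-west-2:123456789012:my-dlq"
      )
    )
  )
}
```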
lambda_create_event_source_mapping(EventSourceArn, FunctionName,
Enabled, BatchSize, MaximumBatchingWindowInSeconds,
ParallelizationFactor, StartingPosition, StartingPositionTimestamp,
DestinationConfig, MaximumRecordAgeInSeconds,
BisectBatchOnFunctionError, MaximumRetryAttempts,
TumblingWindowInSeconds, Topics, Queues, SourceAccessConfigurations,
SelfManagedEventSource, FunctionResponseTypes)
A list with the following syntax:
list(
UUID = "string",
StartingPosition = "TRIM_HORIZON"|"LATEST"|"AT_TIMESTAMP",
StartingPositionTimestamp = as.POSIXct(
"2015-01-01"
),
BatchSize = 123,
MaximumBatchingWindowInSeconds = 123,
ParallelizationFactor = 123,
EventSourceArn = "string",
FunctionArn = "string",
LastModified = as.POSIXct(
"2015-01-01"
),
LastProcessingResult = "string",
State = "string",
StateTransitionReason = "string",
DestinationConfig = list(
OnSuccess = list(
Destination = "string"
),
OnFailure = list(
Destination = "string"
)
),
Topics = list(
"string"
),
Queues = list(
"string"
),
SourceAccessConfigurations = list(
list(
Type = "BASIC_AUTH"|"VPC_SUBNET"|"VPC_SECURITY_GROUP"|"SASL_SCRAM_512_AUTH"|"SASL_SCRAM_256_AUTH",
URI = "string"
)
),
SelfManagedEventSource = list(
Endpoints = list(
list(
"string"
)
)
),
MaximumRecordAgeInSeconds = 123,
BisectBatchOnFunctionError = TRUE|FALSE,
MaximumRetryAttempts = 123,
TumblingWindowInSeconds = 123,
FunctionResponseTypes = list(
"ReportBatchItemFailures"
)
)
The Amazon Resource Name (ARN) of the event source.
Amazon Kinesis - The ARN of the data stream or a stream consumer.
Amazon DynamoDB Streams - The ARN of the stream.
Amazon Simple Queue Service - The ARN of the queue.
Amazon Managed Streaming for Apache Kafka - The ARN of the cluster.
[required] The name of the Lambda function.
Name formats

Function name - MyFunction.

Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction.

Version or Alias ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.

Partial ARN - 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
If true, the event source mapping is active. Set to false to pause polling and invocation.
The maximum number of items to retrieve in a single batch.
Amazon Kinesis - Default 100. Max 10,000.
Amazon DynamoDB Streams - Default 100. Max 1,000.
Amazon Simple Queue Service - Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
Amazon Managed Streaming for Apache Kafka - Default 100. Max 10,000.
Self-Managed Apache Kafka - Default 100. Max 10,000.
(Streams and SQS standard queues) The maximum amount of time to gather records before invoking the function, in seconds.
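For illustration, BatchSize and MaximumBatchingWindowInSeconds can be tuned together: Lambda invokes the function when either limit is reached. The queue ARN below is a hypothetical placeholder, and a configured `svc` client is assumed.

```r
# Sketch only: batch up to 1,000 messages from a standard queue, or
# invoke after at most 5 seconds of gathering, whichever comes first.
if (FALSE) {
  svc$create_event_source_mapping(
    EventSourceArn = "arn:aws:sqs:us-west-2:123456789012:my-standard-queue",
    FunctionName = "my-function",
    BatchSize = 1000L,
    MaximumBatchingWindowInSeconds = 5L
  )
}
```

Note that batch sizes above 10 for SQS require a batching window, and FIFO queues are capped at a batch size of 10.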
(Streams) The number of batches to process from each shard concurrently.
The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK Streams sources. AT_TIMESTAMP is only supported for Amazon Kinesis streams.
With StartingPosition set to AT_TIMESTAMP, the time from which to start reading.
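A minimal sketch of reading a Kinesis stream from a specific point in time; the stream ARN is a hypothetical placeholder, and a configured `svc` client is assumed.

```r
# Sketch only: AT_TIMESTAMP is supported for Kinesis streams only.
if (FALSE) {
  svc$create_event_source_mapping(
    EventSourceArn = "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
    FunctionName = "my-function",
    StartingPosition = "AT_TIMESTAMP",
    # Begin reading from records added at or after this time (UTC).
    StartingPositionTimestamp = as.POSIXct("2021-06-01 00:00:00", tz = "UTC")
  )
}
```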
(Streams) An Amazon SQS queue or Amazon SNS topic destination for discarded records.
(Streams) Discard records older than the specified age. The default value is infinite (-1).
(Streams) If the function returns an error, split the batch in two and retry.
(Streams) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records will be retried until the record expires.
(Streams) The duration of a processing window in seconds. The range is 1 second to 15 minutes (900 seconds).
The name of the Kafka topic.
(MQ) The name of the Amazon MQ broker destination queue to consume.
An array of the authentication protocol, or the VPC components to secure your event source.
The self-managed Apache Kafka cluster to receive records from.
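As a sketch, a self-managed Apache Kafka source combines SelfManagedEventSource (the broker endpoints) with SourceAccessConfigurations (the authentication secret). The broker addresses and secret ARN below are hypothetical placeholders, and a configured `svc` client is assumed.

```r
# Sketch only: endpoints and secret ARN are placeholders.
if (FALSE) {
  svc$create_event_source_mapping(
    FunctionName = "my-function",
    Topics = list("my-topic"),
    StartingPosition = "TRIM_HORIZON",
    SelfManagedEventSource = list(
      Endpoints = list(
        # Bootstrap broker addresses for the cluster.
        KAFKA_BOOTSTRAP_SERVERS = list("broker-1:9092", "broker-2:9092")
      )
    ),
    SourceAccessConfigurations = list(
      list(
        Type = "SASL_SCRAM_512_AUTH",
        URI = "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-secret"
      )
    )
  )
}
```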
(Streams) A list of current response type enums applied to the event source mapping.
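For example, enabling ReportBatchItemFailures lets the function report partial batch failures so that only the failed items are retried; the function's response must then include a batchItemFailures field. The queue ARN is a hypothetical placeholder, and a configured `svc` client is assumed.

```r
# Sketch only: the function handler must return batchItemFailures
# for this response type to take effect.
if (FALSE) {
  svc$create_event_source_mapping(
    EventSourceArn = "arn:aws:sqs:us-west-2:123456789012:my-queue",
    FunctionName = "my-function",
    FunctionResponseTypes = list("ReportBatchItemFailures")
  )
}
```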
svc$create_event_source_mapping(
EventSourceArn = "string",
FunctionName = "string",
Enabled = TRUE|FALSE,
BatchSize = 123,
MaximumBatchingWindowInSeconds = 123,
ParallelizationFactor = 123,
StartingPosition = "TRIM_HORIZON"|"LATEST"|"AT_TIMESTAMP",
StartingPositionTimestamp = as.POSIXct(
"2015-01-01"
),
DestinationConfig = list(
OnSuccess = list(
Destination = "string"
),
OnFailure = list(
Destination = "string"
)
),
MaximumRecordAgeInSeconds = 123,
BisectBatchOnFunctionError = TRUE|FALSE,
MaximumRetryAttempts = 123,
TumblingWindowInSeconds = 123,
Topics = list(
"string"
),
Queues = list(
"string"
),
SourceAccessConfigurations = list(
list(
Type = "BASIC_AUTH"|"VPC_SUBNET"|"VPC_SECURITY_GROUP"|"SASL_SCRAM_512_AUTH"|"SASL_SCRAM_256_AUTH",
URI = "string"
)
),
SelfManagedEventSource = list(
Endpoints = list(
list(
"string"
)
)
),
FunctionResponseTypes = list(
"ReportBatchItemFailures"
)
)
if (FALSE) {
# The following example creates a mapping between an SQS queue and the
# my-function Lambda function.
svc$create_event_source_mapping(
BatchSize = 5L,
EventSourceArn = "arn:aws:sqs:us-west-2:123456789012:my-queue",
FunctionName = "my-function"
)
}