Send CloudWatch logs matching a pattern to an SQS queue
I would like to send all CloudWatch logs whose message (written by console.log calls and appearing in my CloudWatch logs) matches a certain pattern (for example, containing the word "postToSlack", or having a JSON field like "slack": true).
But I'm stuck at the very beginning of my attempts. I'm first trying to implement the most basic task: send the message of ALL CloudWatch logs written when my Lambdas execute (via console.log calls placed inside the Lambda functions) to SQS. (Why? Because I want to get the simplest thing working before adding the complexity of filtering which logs to send and which not to send.)
So I created a CloudWatch Rules > Event > Event Pattern like the one below:
{
  "source": [
    "aws.logs"
  ]
}
and as a target, I selected SQS and then a queue I had created.
But when I trigger my Lambdas, for example, their output does appear in CloudWatch Logs, so I would have expected the log content to be "sent" to the queue, yet nothing is visible in SQS when I poll/check the content of the queue.
Is there something I am misunderstanding about CloudWatch Rules?
CONTEXT EXPLANATION
I have Lambdas that trigger massively every hour (at my scale :) ), with maybe 300 to 500 executions within a 1-2 minute period.
I want to monitor all their console.log output on Slack (I log real JavaScript error.stack messages as well as purely informative messages, like the Lambda's output: "Report Card of the lambda: company=Apple, location=cupertino...").
I could just make an HTTP call to Slack from each Lambda, but Slack's incoming webhooks are limited to about 1 request per second; beyond that you get 429 errors. So I figured I need a queue, so that I don't have 300+ Lambdas writing to Slack within the same second, and can instead control the flow from AWS to Slack through a centralized queue called slackQueue.
My idea is to send certain logs (see below) from CloudWatch to the SQS slackQueue, use that queue as a Lambda trigger, and have that Lambda send batches of 10 messages (the maximum batch size allowed by AWS; for me, 1 message = 1 console.log) concatenated into one big string or array to my Slack channel. (Incidentally, Slack's limits let you concatenate and send up to 100 messages in one call, so I'd process 100 at a time if I could, but I believe AWS's current batch size limit is 10.) This way I'm never sending more than one "request" per second to Slack, with each request carrying the content of 10 console.logs.
When I say "certain logs" above, I mean that I actually don't want ALL logs sent to the queue (because I don't want them on Slack): in particular, I don't want the purely "debugging" messages like console.log("entered function foo"), which are useful during development but have no place on Slack.
Regarding some comments: to my understanding (I'm no AWS expert), I don't want to use CloudWatch alarms or metric filters, because they're quite pricey (they'd be triggered hundreds of times every hour) and don't actually fit my need. I don't want to read Slack only when a critical "problem" occurs (like CPU > xxx); I want a regular, filtered flow of "almost" all my logs sent to Slack, so I can read the logs inside Slack instead of inside AWS. Slack is the tool we have open all day long, it's already the centralized place for logs/messages coming from sources other than AWS, and pretty Slack attachment formatting is easier for us to digest. Of course, the final Lambda (the one sending messages to Slack) would do a bit of formatting to add the italics/bold/etc. and the markdown Slack requires for nicely formatted "Slack attachments", but that's not the most complex issue here :)
amazon-web-services aws-lambda aws-sdk amazon-sqs
Why do you wish to send the CloudWatch Logs to SQS? The normal mechanism is to specify a Filter Pattern in CloudWatch Logs, which can then be used to trigger a CloudWatch Alarm to alert for particular situations (e.g. a certain error occurring more than n times per hour). Do you really need to send the content of the CloudWatch Log entry, or are you just trying to trigger some form of notification?
– John Rotenstein
Nov 13 '18 at 6:59
@JohnRotenstein Thanks for your reply and the help! I think you were right about me not giving enough context, so I added a detailed explanation to the question. I'm open to any type of system, even if it's not the one I initially thought best (CloudWatch logs -> filter out useless console.logs -> SQS -> Lambda -> Slack) because, to be honest, I started AWS a month ago and am not an expert by any means.
– Mathieu
Nov 13 '18 at 8:23
@Mathieu Probably the right tool for the job is SNS. Combined with the Opsidian integration with Slack, you can probably achieve what you need: medium.com/opsidian/…
– Sofo Gial
Nov 15 '18 at 8:32
@SofoGial Thanks for your answer. Opsidian looks really cool, but the issue is that they talk about "Get Intelligent Alarms" on their website, so there are 2 main problems with Opsidian for me. See next comment.
– Mathieu
Nov 15 '18 at 8:55
1. Setting alarms when a problem occurs is, as stated in the question, not my goal. I don't want to send only "problems"/bugs via alarms (like CPU > xx) but, as explained in the question, all the logs, even when they're not an issue (e.g., how would you filter a log sent by console.log("Report Card of the lambda: company=Apple, location=cupertino...") with the system described in your Medium post?). And even if I could "bend" CloudWatch alarms to fire for each log for something that isn't really an "issue alarm", since they're not crashes/problems, it would be crazy expensive to pipe all logs like this.
– Mathieu
Nov 15 '18 at 8:55
asked Nov 12 '18 at 22:02 by Mathieu
edited Nov 13 '18 at 21:31 by Mathieu
1 Answer
@Mathieu, I think you've slightly confused CloudWatch Events with CloudWatch Logs.
What you need is real-time processing of the log data generated by your Lambda functions: filter the logs based on a pattern, and then deliver the filtered logs to Slack for analysis.
But configuring a CloudWatch Events rule with an SQS target works like an SQS trigger for Lambda: CloudWatch triggers (sends a message to) the SQS queue, and the content of that message is not your logs but either the default event or a custom message you've defined.
Solution #1:
Use a subscription filter to filter the logs as required, and subscribe Amazon Kinesis, AWS Lambda, or Amazon Kinesis Data Firehose to the filtered output.
Using the filtered stream (Kinesis), trigger your Lambda to push that data to Slack.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Solution #2:
- Push your CloudWatch logs to S3.
- Create an S3 event notification on the 'ObjectCreated' event and use it to trigger a Lambda function.
- In that Lambda function, write the logic to read the logs from S3 (equivalent to reading a file), filter them, and push the filtered logs to Slack.
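The filtering step of Solution #2 can be sketched as a pure function (filterForSlack and the 'postToSlack' marker are illustrative assumptions; reading the object itself would use the AWS SDK's S3 getObject, omitted here):

```javascript
// Keep only the log lines flagged for Slack, discarding pure debugging output.
// The 'postToSlack' marker string is an assumption taken from the question.
function filterForSlack(logText, marker = 'postToSlack') {
  return logText
    .split('\n')
    .filter((line) => line.includes(marker));
}
```

The same function works unchanged whether the log text comes from an S3 object (Solution #2) or a decoded subscription payload (Solution #1).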
Thanks, much clearer now. I'll try to go for Kinesis, I think. Thanks a lot for the help.
– Mathieu
Nov 22 '18 at 8:28
answered Nov 22 '18 at 8:26
– Agam
Thanks, much clearer now. I'll try to go for Kinesis, I think. Thanks a lot for the help
– Mathieu
Nov 22 '18 at 8:28
Why do you wish to send the CloudWatch Logs to SQS? The normal mechanism is to specify a Filter Pattern in CloudWatch Logs, which can then be used to trigger a CloudWatch Alarm to alert for particular situations (eg a certain error occurring more than n times per hour). Do you really need to send the content of the CloudWatch Log entry, or are you just trying to trigger some form of notification?
– John Rotenstein
Nov 13 '18 at 6:59
@JohnRotenstein Thanks for your reply and the help! I think you were right about not giving enough context, so I added a detailed explanation to the question. I'm open to any type of system, even if it's not the one I initially thought best (CloudWatch Logs -> filter out useless console.logs -> SQS -> Lambda -> Slack), because, to be honest, I started AWS a month ago and I'm not an expert by any means
– Mathieu
Nov 13 '18 at 8:23
@Mathieu Probably the right tool for the job is SNS. Combined with the Opsidian integration with Slack, you can probably achieve what you need: medium.com/opsidian/…
– Sofo Gial
Nov 15 '18 at 8:32
@SofoGial Thanks for your answer. Opsidian looks really cool, but the issue is that they talk about "Get Intelligent Alarms" on their website, so there are 2 main problems with Opsidian for me; see next comment
– Mathieu
Nov 15 '18 at 8:55
1. Setting alarms when a problem occurs, as stated in the question, is not my goal. I don't want to send only "problems"/bugs via alarms (like CPU > xx) but, as explained in the question, all the logs, even when they're not an issue (ex: how would you filter a log sent by console.log("Report Card of the lambda: company=Apple, location=cupertino...") with the system described by your Medium post?). And even if I could "bend" CloudWatch alarms into something that is not really an "issue alarm" for each individual log, as they're not crashes/problems, it would be crazy expensive to pipe all logs like this
– Mathieu
Nov 15 '18 at 8:55
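On the volume point raised in these comments: the plan described in the question (an SQS-triggered Lambda forwarding batches of up to 10 messages per Slack webhook call, to stay under Slack's roughly 1-request-per-second limit) only needs a simple chunking step. A hypothetical sketch of that batching (the helper name is illustrative):

```python
def chunk_messages(messages, size=10):
    """Split a list of log messages into batches of at most `size`,
    so each Slack webhook call carries several console.log lines."""
    return [messages[i:i + size] for i in range(0, len(messages), size)]

# 25 pending log lines become 3 webhook calls instead of 25:
logs = [f"log {n}" for n in range(25)]
batches = chunk_messages(logs)
print(len(batches), [len(b) for b in batches])  # → 3 [10, 10, 5]
```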