Monitoring - A fix has been implemented and we are monitoring the Agent's connectivity during this process.
Sep 18, 17:26 UTC
Update - We are continuing to work on a fix for this issue.
Sep 18, 16:32 UTC
Identified - We are currently investigating connectivity issues affecting Agent versions 1.5.6 and 1.6.1. Users may experience connectivity problems with the following ingestion methods: Ubuntu and Kubernetes. Our teams have identified the problem and are working to resolve it.
Sep 18, 16:31 UTC
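If you want to confirm that ingestion is reachable independently of the Agent while this incident is open, a quick REST check can help show whether the problem is Agent-specific. The sketch below is illustrative only: the endpoint URL, query parameters, and payload shape are assumptions modeled on LogDNA's REST ingestion API, and YOUR_INGESTION_KEY is a placeholder.

```python
# Minimal sketch (assumptions noted above): send one test line to the REST
# ingestion endpoint and report the HTTP status, bypassing the Agent entirely.
import base64
import json
import time
import urllib.request

INGESTION_KEY = "YOUR_INGESTION_KEY"               # placeholder: your ingestion key
ENDPOINT = "https://logs.logdna.com/logs/ingest"   # assumption: REST ingestion endpoint

def send_test_line(hostname: str = "connectivity-check") -> int:
    """POST a single test log line and return the HTTP status code."""
    now_ms = int(time.time() * 1000)
    url = f"{ENDPOINT}?hostname={hostname}&now={now_ms}"
    body = json.dumps({
        "lines": [{"line": "connectivity test", "app": "status-check", "level": "INFO"}]
    }).encode("utf-8")
    # Basic auth with the ingestion key as the username and an empty password.
    auth = base64.b64encode(f"{INGESTION_KEY}:".encode()).decode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "Authorization": f"Basic {auth}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    print("ingestion endpoint returned HTTP", send_test_line())
```

A 200-range response here while the Agent is failing would suggest the problem is limited to the Agent versions named above rather than the ingestion pipeline itself.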
Log Ingestion (Agent/REST API/Code Libraries): Operational - 99.81% uptime over the past 90 days
Log Ingestion (Heroku): Operational - 99.81% uptime over the past 90 days
Log Ingestion (Syslog): Operational - 99.81% uptime over the past 90 days
Web App: Operational
Past Incidents
Sep 17, 2019
Resolved - The delay has been resolved.
Sep 17, 17:32 UTC
Update - We are continuing to investigate this issue.
Sep 17, 15:29 UTC
Investigating - We are currently experiencing an indexing and live tail delay. Our infrastructure team is actively looking to remediate the issue.
Sep 17, 15:28 UTC
Sep 16, 2019
Resolved - The incident has been resolved.
Sep 16, 21:08 UTC
Monitoring - Live tail and indexing delays have been resolved. We are closely monitoring until further notice.
Sep 16, 20:53 UTC
Identified - Our infrastructure team has identified the issue and is currently working to restore service.
Customers might experience live tail, indexing and alerting delays during this time.
Sep 16, 19:58 UTC
Sep 15, 2019

No incidents reported.

Sep 14, 2019

No incidents reported.

Sep 13, 2019
Resolved - The incident has been resolved.
Sep 13, 14:49 UTC
Update - We are still closely monitoring our infrastructure. Service has been restored and the indexing delay is gone. Filtering is still not being updated; we expect to resolve the filter issue shortly.
Sep 12, 17:54 UTC
Monitoring - Our infra team has implemented a fix to restore service. Live tail, indexing, and alerting delays are resolved. We are closely monitoring the state of the infrastructure at this time to prevent any further delays. Currently, filters within the top menu might be delayed.
Sep 11, 06:42 UTC
Update - Unexpected traffic, along with scaling issues in handling that increased traffic, caused delays with indexing, alerting, and live tail. The delays began around 17:00 UTC on Sept 9th, 2019, and all customers are affected. Our infrastructure team and all other stakeholders are working diligently to resolve the scaling issue as soon as possible and return the application to full operation.

The issue has been identified and this status will be updated soon with more information as our teams work on the incident.
Sep 11, 00:53 UTC
Update - We are continuing to work through the ingestion issues. It is likely that you are not getting alert notifications at this time.
Sep 10, 22:08 UTC
Identified - The issue has been identified. Indexing and live tail appear to be slightly delayed, and alerting is still delayed. We will move this incident to monitoring as soon as the delay has cleared.
Sep 10, 14:41 UTC
Investigating - We are currently experiencing a delay in ingestion, including live tail, alerting, and indexing. Logs are being ingested but could be delayed by an hour or more. Our infrastructure team is actively working on this issue.
Sep 10, 05:50 UTC
Sep 12, 2019
Completed - The scheduled maintenance has been completed.
Sep 12, 14:39 UTC
In progress - LogDNA offices will be closed Sept 9th to Sept 11th. Responses to tickets will be slightly delayed.
Sep 9, 17:30 UTC
Scheduled - LogDNA offices will be closed from Sept 9th to Sept 11th. Responses to newly created tickets will be delayed during this time.
Sep 9, 17:23 UTC
Sep 10, 2019
Resolved - The issue has been resolved.
Sep 10, 05:42 UTC
Monitoring - Our infrastructure team is now monitoring the incident.
Sep 10, 02:39 UTC
Identified - Our infrastructure team has identified the issue. While the issue is being resolved, there will be a delay in livetail, indexing, and alerting.
Sep 9, 20:19 UTC
Investigating - We are currently investigating connection issues with logs.logdna.com.
Sep 9, 18:37 UTC
Sep 8, 2019

No incidents reported.

Sep 7, 2019

No incidents reported.

Sep 6, 2019
Resolved - This incident has been resolved.
Sep 6, 17:29 UTC
Monitoring - We will be migrating from an old environment to a new, high-performance environment on 2019-09-04 18:00 UTC. The new environment has been extensively tested for the past few months, and we do not expect to see much in the way of hiccups. The DNS routing will switch to the new environment around 18:00 UTC on 2019-09-04, and all log and user traffic should begin to route to the new environment. Our multi-cloud solution will automatically sync data between the two environments to ensure a seamless transition.

However, during the first 30 minutes of the migration, you may see lines temporarily missing from live tail or an occasional errant absence alert. We are monitoring both environments closely and have planned for multiple contingencies, and we will update this page if there is any unexpected turbulence. If you have any questions or concerns, please let us know. Thank you for your patience.

Update: Migration is currently in motion.

Update (9:51 PM PST): The infra team has completed the maintenance, and the appropriate teams are monitoring.

Update (9:15 AM PST): Migration is complete and we are monitoring the infrastructure closely. UDP and syslog ingestion might have some issues; our infrastructure team is investigating.

Completed (Sept 6th, 2019 10:29 AM PST): Maintenance is now complete and we are continuing to monitor. If you have any questions or concerns, please email support@logdna.com.
Sep 4, 08:57 UTC
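Because the cutover described above was driven by a DNS routing switch, one way to check locally that your traffic is reaching the new environment is to see what logs.logdna.com resolves to from your network, before and after the switch. A minimal sketch, standard library only; the hostname comes from the update above, and the port is a generic assumption:

```python
# Minimal sketch: list the addresses logs.logdna.com resolves to right now,
# useful for confirming that local DNS caches have picked up a routing cutover.
import socket

def resolve(hostname: str = "logs.logdna.com") -> list[str]:
    """Return the unique IP addresses the hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for addr in resolve():
        print(addr)
```

Comparing the output against the addresses seen before 18:00 UTC on 2019-09-04 would show whether your resolvers had switched over; cached records may lag until their TTL expires.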
Sep 5, 2019
Resolved - This incident has been resolved.
Sep 5, 02:26 UTC
Monitoring - A fix has been implemented and we are currently monitoring the situation.
Sep 4, 18:38 UTC
Investigating - We are currently investigating and working to resolve an issue where logs, including metadata, are not being parsed properly.
Sep 4, 17:16 UTC
Sep 4, 2019
Resolved - This incident has been resolved.
Sep 4, 00:40 UTC
Monitoring - Our infra team has discovered the issue, applied a fix, and is actively monitoring services.
Sep 3, 20:50 UTC
Update - We are continuing to investigate this issue.
Sep 3, 20:48 UTC
Investigating - We are currently investigating our production cluster for stopped or delayed ingestion. We will update this once we've identified the issue.
Sep 3, 20:12 UTC