
Timestamps: small data engineering issues with massive impact

Friday, Oct 11, 2024 · 4 min read
Elof Gerde

Benjamin Franklin once said, “When the well's dry, we know the worth of water.” It’s a quote that sums up the world of data engineering pretty well. Even with the buzz of AI, data engineering is often not at the top of a company's priority list until the impact of bad data becomes unmistakably clear. The thing is, there are countless potential hazards that data teams have to fend off. This is especially true at fast-growing global tech companies, where even seemingly small tasks present significant engineering challenges once you have millions or billions of users.

The ticking time bomb of timestamps

Let’s talk about timestamps. At a glance, they are just simple records of when events occur. But when you're dealing with millions or billions of users spread across multiple time zones, timestamps can become a labyrinth of confusion.

Imagine a global mobile app where users perform actions that are logged with timestamps. If these timestamps aren't consistently managed across time zones, you might end up with events appearing to occur in the future or in the past, disrupting data analytics, billing cycles, and user notifications. Daylight Saving Time shifts add another layer of complexity, potentially causing data overlap or gaps that can wreak havoc on time-dependent services. This isn't just a theoretical problem; real-world incidents have shown how devastating these small issues can be.
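To make this concrete, here is a minimal Python sketch using the standard library's zoneinfo (the date, time zone, and scenario are illustrative) of how a single naive local timestamp maps to two different UTC instants during a DST "fall back":

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical event from the US DST "fall back" date (Nov 3, 2024): the
# naive local time 1:30 AM occurs twice that night in America/New_York.
naive = datetime(2024, 11, 3, 1, 30)
eastern = ZoneInfo("America/New_York")

first = naive.replace(tzinfo=eastern, fold=0)   # 1:30 AM EDT, before clocks turn back
second = naive.replace(tzinfo=eastern, fold=1)  # 1:30 AM EST, after clocks turn back

print(first.astimezone(timezone.utc))   # 2024-11-03 05:30:00+00:00
print(second.astimezone(timezone.utc))  # 2024-11-03 06:30:00+00:00

# The same logged local time maps to UTC instants an hour apart, so naive
# timestamps can make events look out of order or duplicated. Logging in
# UTC and converting only for display avoids the ambiguity:
event_time = datetime.now(timezone.utc)
```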

Tiny errors, titanic consequences

One of the more famous timestamp incidents was the 2012 Leap Second Bug. On June 30 of that year, an extra second was added to Coordinated Universal Time (UTC) to account for the slight slowing of Earth's rotation, a practice known as a "leap second." While this one-second adjustment might seem insignificant, it led to widespread chaos across the tech industry.

Several high-profile companies experienced system outages and performance issues:

  • Reddit saw its servers become overloaded, causing the site to crash and become inaccessible to users for more than an hour.
  • LinkedIn, Yelp, and Mozilla reported similar issues, with servers experiencing high CPU usage and services becoming sluggish or unresponsive.
  • Airline reservations were thrown into disarray when the Amadeus booking system failed, leading to flight delays and frustrated travelers worldwide.
The root cause? Systems weren't properly configured to handle the extra second, leading to errors in time calculations, database inconsistencies, and software crashes. This incident underscored the critical importance of meticulous timestamp management.
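One mitigation that grew out of incidents like this is the "leap smear": rather than inserting a discrete 61st second, the extra second is spread gradually over a window so that no timestamp ever repeats or jumps. Here is a minimal Python sketch of the idea, assuming a 24-hour linear smear window (an illustrative choice, not any particular provider's policy):

```python
from datetime import datetime, timedelta, timezone

# The 24-hour window and linear schedule are illustrative assumptions,
# not any vendor's actual smear policy.
LEAP = datetime(2012, 7, 1, 0, 0, 0, tzinfo=timezone.utc)  # end of June 30, 2012 (UTC)
WINDOW = timedelta(hours=24)

def smeared_offset(now: datetime) -> float:
    """Fraction of the leap second absorbed by time `now`, from 0.0 to 1.0."""
    start = LEAP - WINDOW
    if now <= start:
        return 0.0
    if now >= LEAP:
        return 1.0
    return (now - start) / WINDOW  # timedelta / timedelta -> float

# Halfway through the window, smeared clocks run half a second behind UTC:
print(smeared_offset(LEAP - timedelta(hours=12)))  # 0.5
```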

But timestamps are just one example. Other "small" issues can also escalate into major problems:

  • Data Type Inconsistencies: Mixing data types—like combining integers and strings—can result in calculation errors or system crashes.
  • Character Encoding Mismatches: Misinterpreting character sets can corrupt text data, leading to misunderstandings in user communication or even data loss.
  • Floating-Point Precision Errors: Minor rounding errors in calculations can accumulate over time, resulting in significant financial discrepancies (see the sketch after this list).
When left unchecked, these minor glitches can lead to inaccurate data insights, poor decision-making, and a degraded user experience, all of which can cost companies millions and tarnish their reputation.
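The floating-point item is easy to demonstrate in a few lines of Python (the amounts are made up): summing one cent a million times with binary floats drifts from the exact total, while the decimal module gets it right:

```python
from decimal import Decimal

# Summing one cent a million times: binary floats drift, Decimal stays exact.
total_float = sum(0.01 for _ in range(1_000_000))
total_decimal = sum(Decimal("0.01") for _ in range(1_000_000))

print(total_float)    # 10000.000000001885 or similar (drift varies by accumulation order)
print(total_decimal)  # 10000.00
```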

Tracking timestamp anomalies to ensure quality at scale

Given the potential impact of these seemingly small issues, it's essential to have tools that detect and address them proactively. Validio's data quality and observability platform is designed to identify these hidden challenges before they escalate, providing real-time monitoring and alerts for anomalies in your data.

Some of the ways Validio users keep their timestamps in check include:

  • Monitor Timestamp Differences: Continuously track the intervals between timestamps or timestamp columns to detect anomalies in event sequences, such as unexpected delays or out-of-order events, which may indicate processing issues or time synchronization problems (a minimal sketch follows this list).
  • Ensure Data Freshness: Monitor data freshness to detect delays or gaps in data updates, maintaining up-to-date information for decision-making and user interactions.
  • Detect Schema Changes or Data Types in Time Fields: Automatically monitor for changes in database schemas related to timestamp data, helping you manage data type inconsistencies and prevent errors due to unexpected alterations in how time information is stored.
By proactively addressing these areas, you reduce the risk of errors impacting your operations and reputation.
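To give a feel for the first item on that list, here is a hand-rolled pandas sketch (illustrative only, and not Validio's API) that computes the gaps between consecutive event timestamps and flags outliers with a simple median-absolute-deviation rule:

```python
import pandas as pd

# Illustrative data: regular ~5-second events with one suspicious gap.
events = pd.DataFrame({
    "event_time": pd.to_datetime([
        "2024-10-11 10:00:00",
        "2024-10-11 10:00:05",
        "2024-10-11 10:00:09",
        "2024-10-11 10:07:00",  # a 411-second gap
        "2024-10-11 10:07:04",
    ])
})

# Gaps between consecutive events, in seconds. Negative values would mean
# out-of-order events, which the same check surfaces for free.
gaps = events["event_time"].diff().dt.total_seconds().dropna()

# Flag gaps more than 3 median absolute deviations from the median.
median = gaps.median()
mad = (gaps - median).abs().median()
anomalies = gaps[(gaps - median).abs() > 3 * max(mad, 1e-9)]
print(anomalies)  # only the 411-second gap is flagged
```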

Final thoughts: Keep the wells filled with water

We’ve looked at some of the ways timestamps can cause significant disruption, but they are only one of an almost endless number of ways that data issues can appear. Don’t underestimate the challenge of maintaining high-quality data. Equip your data teams with the tools they need to stay on top of it all.

Keep your wells filled.

Learn more about how data teams use Validio for proactive data quality monitoring