
How to Set Backup Schedules by Tier and Data Volatility

by Lauren Ballejos, IT Editorial Expert

Key points

  • A tiered backup strategy is essential, classifying data by volatility and business impact to determine appropriate backup frequency.
  • Defining Recovery Point Objectives (RPO) for each tier based on data change rates ensures critical data can be restored with minimal loss.
  • Implementing immutable or air-gapped backups is mandatory to protect data from corruption, malware, or accidental deletion.
  • Regularly validating backups through scheduled restore tests is the only way to guarantee data can be successfully recovered.
  • Backup schedules and procedures must be periodically reviewed and optimized using insights from restoration tests and logs.
  • The chosen backup tools must support the required data types and locations, from servers and endpoints to SaaS platforms like Microsoft 365.

IT administrators and managed service providers (MSPs) must follow backup schedule best practices: failing to capture critical business data and communications in backups can bring business operations to an immediate halt and raise serious questions from stakeholders about the effectiveness and competence of your IT team.

This practical guide explains how to set backup schedules and how to plan and implement a tiered data backup strategy, including how to decide on tiers and schedules, when to use immutable and offsite backups, and why validating your backups matters.

What is the best schedule for backups?

The best schedule for backups depends entirely on how your business operates. Some critical data is volatile and changes quickly, requiring frequent backups, while other important data may change only periodically (making it wasteful to capture too frequently). Some data, while significant in volume and regularly updated, can be recreated if lost, lowering the priority of backing it up.

Each ‘workload’ in your business (which could be a manual process like updating spreadsheets, or automated like collecting analytics data) will differ, and may produce different kinds of data. A tiered approach to data protection ensures that all valuable data is regularly captured so that the latest version is always protected, while optimizing for cost by copying less volatile data less frequently. To assess these factors and decide on tiers and the workloads assigned to them, you’ll need:

  • Business impact analysis with target recovery time objective (RTO) and recovery point objective (RPO) per workload
  • Current inventory of workloads and the change rates of the data they produce
  • Backup storage methods that support immutability (such as Amazon S3) or air-gapping (magnetic tape, optical discs, hard disks)
  • Test environment or time window for periodic restore drills
  • A shared IT documentation platform for schedules, recovery procedures, exceptions, and test evidence

Step 1: Know your data, assess volatility, and define tiers

You must fully understand all of your data so you can classify it based on how often it changes. This is known as volatility: for example, your e-commerce database may change by the second as sales are made, while shipping records are updated only daily. You must also factor in how important specific data is, which depends on your business environment: critical data might be anything whose permanent loss would halt the business or create regulatory exposure due to data retention requirements.

Start with four tiers based on business impact and regulatory needs. Each tier should correspond to a typical change rate and an acceptable RPO for the data it covers. You can also decide whether each tier uses incremental or full backups, depending on the data covered and storage cost factors.

After establishing a simple baseline per tier, you can optimize them. For example, you may start with tiers like:

  • Tier 1/critical services: Hourly or near-continuous, plus nightly full backups
  • Tier 2/business apps and file shares: Every 2–4 hours plus nightly backups
  • Tier 3/lower-change workloads: Twice daily or nightly backups
  • Tier 4/archives: Weekly or monthly copy jobs

As part of your IT documentation, keep details of each tier, its predicted change rate, RPO/RTO values, and leave space for implementation details to be added later. You should also record how these implementations help you reach restoration targets.
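
To make this concrete, here is a minimal Python sketch of how the baseline tiers above might be recorded as structured data in your documentation tooling. The BackupTier class, the example RPO/RTO values, the cron schedules, and the suggest_tier helper are illustrative assumptions, not prescribed values:

```python
from dataclasses import dataclass

@dataclass
class BackupTier:
    """One documented backup tier and its recovery targets."""
    name: str
    rpo_hours: float   # maximum acceptable data loss, in hours
    rto_hours: float   # maximum acceptable time to restore, in hours
    schedule: str      # cron expression for the tier's backup jobs
    backup_type: str   # "incremental" or "full"

# Baseline tiers mirroring the examples above; tune the values to your
# own business impact analysis.
TIERS = {
    1: BackupTier("Critical services", 1, 4, "0 * * * *", "incremental"),
    2: BackupTier("Business apps and file shares", 4, 8, "0 */4 * * *", "incremental"),
    3: BackupTier("Lower-change workloads", 24, 24, "0 1 * * *", "full"),
    4: BackupTier("Archives", 168, 72, "0 2 * * 0", "full"),
}

def suggest_tier(change_interval_hours: float) -> int:
    """Suggest the cheapest tier whose RPO never loses more than one
    update cycle of the workload's data."""
    for tier_id in sorted(TIERS, reverse=True):  # cheapest tier first
        if TIERS[tier_id].rpo_hours <= change_interval_hours:
            return tier_id
    return 1  # changes faster than any tier's RPO: treat as critical

print(suggest_tier(24))  # data that changes daily -> Tier 3
```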

Step 2: Decide on tools and assign jobs to backup tiers

Next, you need to find tools that can back up the data you identified above. Data stored in different locations may require different tools:

  • File, web, and email servers can mirror their files (for frequently changing content) and/or have snapshots taken for full system recovery.
  • SaaS platforms like Microsoft 365 and Google Workspace require their own specialized backup solutions.
  • Cloud platforms like AWS provide environment-specific backup tools.
  • Databases may benefit from platform-specific solutions, especially if managed rather than self-hosted.
  • Containerized workloads require backup solutions specific to how they are built and deployed, and to where their data is stored.

Once tools are decided, you can create backup jobs within them that cover the required data and are configured to match the tiers decided above.
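
As a sketch of this mapping, the following hypothetical inventory assigns workloads to the tiers defined in Step 1 (reusing the TIERS table from the earlier sketch) and expands them into per-tool job definitions. All workload names and tool labels here are placeholders:

```python
# Hypothetical workload inventory; names and tool labels are placeholders.
WORKLOADS = [
    {"name": "ecommerce-db",   "tool": "db_snapshot", "tier": 1},
    {"name": "file-share",     "tool": "file_mirror", "tier": 2},
    {"name": "m365-mailboxes", "tool": "saas_backup", "tier": 2},
    {"name": "shipping-logs",  "tool": "file_mirror", "tier": 3},
]

def build_jobs(workloads, tiers):
    """Expand the inventory into per-tool job definitions, inheriting
    each job's schedule and backup type from its tier."""
    return [
        {
            "job": f"{w['name']}-tier{w['tier']}",
            "tool": w["tool"],
            "schedule": tiers[w["tier"]].schedule,
            "backup_type": tiers[w["tier"]].backup_type,
        }
        for w in workloads
    ]
```

Calling build_jobs(WORKLOADS, TIERS) yields one job per workload with its schedule and backup type inherited from the tier, so updating a tier's RPO automatically keeps every job in that tier consistent.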

Step 3: Create immutable and offsite backups

When implementing your chosen backup tools, ensure that there are fast restore paths (for example, quickly recovering a file for a user), granular recovery, and the ability to perform a full restore in the event of disaster. Each tier or tool may require different recovery methods to rebuild a full working infrastructure with all pertinent data.

Immutable or air-gapped backups are a must-have for every backup strategy. Immutable backups cannot be modified once created, preventing them from being corrupted by malware or user error. Similarly, air-gapped backups (for example, stored on a hard drive that is then disconnected and stored off-site as part of a 3-2-1 backup system) are out of reach of malware.
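
For example, if your immutable target is Amazon S3, Object Lock can enforce a retention period at upload time. This minimal sketch uses boto3 and assumes a bucket that was created with Object Lock enabled; the bucket, key, and retention values are placeholders:

```python
import datetime
import boto3  # AWS SDK for Python; assumes credentials are configured

s3 = boto3.client("s3")

def upload_immutable(bucket: str, key: str, data: bytes, retain_days: int = 30) -> None:
    """Upload a backup object with S3 Object Lock retention, so it cannot
    be overwritten or deleted until the retention date passes. The bucket
    must have been created with Object Lock enabled."""
    retain_until = (datetime.datetime.now(datetime.timezone.utc)
                    + datetime.timedelta(days=retain_days))
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",  # even admins cannot shorten retention
        ObjectLockRetainUntilDate=retain_until,
    )
```

COMPLIANCE mode is the stricter of the two Object Lock modes: unlike GOVERNANCE mode, no user can remove or shorten the retention period, which is the behavior you want for ransomware-resistant copies.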

Step 4: Validate backups with restore tests

Regularly validate your backups with restore tests, on a schedule that ensures data in every tier is tested. If data cannot be restored, the backup is useless, so check all backup media and restore processes and quickly fix any issues you identify to prevent data loss. Document these tests, including total restore times and the costs incurred.
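
Part of a restore drill can be automated. The sketch below assumes a file-level test restore into a scratch directory and compares SHA-256 checksums against the source; note that it verifies file contents only, not permissions or other metadata:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths of files that are missing or differ after a
    test restore; an empty list means the drill passed."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        restored = restored_dir / src.relative_to(source_dir)
        if not restored.is_file() or sha256(src) != sha256(restored):
            failures.append(str(src.relative_to(source_dir)))
    return failures
```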

Step 5: Review and optimize backup tiers and procedures

Use the results of backup restoration tests to:

  • Tighten schedules to meet RPOs, so that when a real restore is required, nothing is missed.
  • Perform periodic reviews of simulated and real restoration events to identify improvements, then update your tiers, configurations, and documentation to reflect any changes made.

Logs and test results are a valuable resource for IT administrators and MSPs, providing insights you can use to improve service quality.
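
As one way to turn drill logs into action, this sketch assumes a simple CSV of restore test results (columns tier, workload, restore_hours, all hypothetical) and compares the worst measured restore time per tier against the RTO targets from the Step 1 sketch:

```python
import csv

def rto_report(log_path: str, tiers) -> None:
    """Compare the worst measured restore time per tier (from drill logs)
    against that tier's RTO target and flag tiers needing review.
    Expects a CSV with columns: tier, workload, restore_hours."""
    worst: dict[int, float] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tier = int(row["tier"])
            worst[tier] = max(worst.get(tier, 0.0), float(row["restore_hours"]))
    for tier_id, measured in sorted(worst.items()):
        target = tiers[tier_id].rto_hours
        status = "OK" if measured <= target else "REVIEW: exceeds RTO target"
        print(f"Tier {tier_id}: worst restore {measured:.1f}h "
              f"(target {target:.1f}h) -> {status}")
```

Tiers flagged for review are candidates for tighter schedules, different backup types, or faster restore paths, and the updated values then flow back into your tier documentation.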

NinjaOne helps you create and implement tiered backup plans that are optimized for your organization

No backup plan suits every business: you are responsible for your backups, and must tailor your tiers, schedules, and backup solutions to meet your unique business needs. NinjaOne provides a comprehensive backup solution that covers servers and endpoints, even remotely. NinjaOne can also back up SaaS platforms, including Microsoft 365 and Google Workspace, helping keep all critical data and communications in your control.

These tools, combined with scripting and automation that can also integrate with cloud platforms like AWS, let you schedule backups, automate data verification and generate reports, and store all of this in centrally available documentation for your team and stakeholders.

Implement a tiered backup strategy for predictable recovery

A theoretical backup schedule offers no real protection, because its value is proven only during a restoration event.

By defining your tiers based on data volatility and business impact, enforcing immutable copies, and rigorously tuning cadence with restore tests, you transform your plan into a reliable, auditable process.

This evidence-based approach gives IT teams the confidence that when disaster strikes, recovery will be predictable, swift, and complete.

FAQs

Should I use incremental or full backups for each tier?

While the article suggests tiers can use either incremental or full backups, a common best practice is to combine frequent incremental backups (to capture changes) with periodic full backups (for reliable, standalone recovery points), especially for Tiers 1 and 2.

How do regulatory requirements affect my backup tiers?

Data governed by regulations often has mandated retention periods and specific protection requirements, which may force you to create a dedicated tier with its own schedule and immutable storage, even if the data rarely changes.

How do I determine RTO and RPO for each workload?

Start by interviewing department heads to identify which workloads would cause the greatest financial, operational, or legal impact if lost, and quantify how long the business can tolerate downtime (RTO) and how much data loss is acceptable (RPO) for each.

Can I adapt the four-tier framework to my own needs?

It is expected that you will adapt the tier framework; you can create sub-tiers or define a new tier altogether based on a unique combination of volatility, restore priority, and cost, ensuring your documentation clearly outlines the rationale.

What should I look for in a backup tool beyond data compatibility?

Beyond compatibility, prioritize tools that can centralize management and reporting, offer APIs for automation, and support your chosen immutable storage targets (like S3) to avoid managing multiple, disjointed systems.
