Frequently Asked Questions

Concepts

What's the difference between a Validation Rule and a Command Parser?

A Validation Rule defines what you want to check and how to compare the results (e.g., "Check that BGP neighbors remain unchanged using exact match").

A Command Parser defines how to collect the data for a rule on a specific platform (e.g., "On Cisco IOS, run show ip bgp summary and parse it with TEXTFSM").

A single Validation Rule can have multiple Command Parsers — one per platform — so the same check works across different device types.

Do I need a Validation Rule Group?

No, Validation Rule Groups are optional. They're a convenience for bundling related rules together so you can run them all in a single job. If you're only running one rule at a time or selecting rules individually, you don't need a group.

What's the difference between a Snapshot and a Command Output?

A Snapshot is the full point-in-time collection — one named container that holds outputs for all devices and rules from a single Take Snapshot job run.

A Command Output is a single device's output for a single validation rule within a snapshot. If you take a snapshot of 5 devices with 3 rules each, you get one Snapshot containing 15 Command Outputs.

What's the difference between the Pre/Post columns and the Pre Snapshot/Post Snapshot columns on Validation Results?

The Pre and Post columns link to the specific Command Output for that one device and validation rule.

The Pre Snapshot and Post Snapshot columns link to the full Snapshot, which contains all the Command Outputs for every device and rule collected in that run.

Both are useful — use the Command Output links to see the specific extracted data, and the Snapshot links to see the broader collection context.

Setup

Why do I need a Command Parser per platform?

Different platforms use different commands and output formats for the same information. For example, show ip bgp summary on Cisco IOS produces very different output than show bgp summary on Juniper. Each platform needs its own Command Parser so the app knows which command to run and how to interpret the output on that device type.

How does the app know which Command Parser to use for each device?

The app matches the device's Platform (from the Nautobot device record) to the Platform assigned to the Command Parser. If a device's platform doesn't have a matching Command Parser for the selected rule, that device/rule combination is skipped during Take Snapshot.

My device doesn't have a Platform assigned — what happens?

Devices without a Platform assigned are skipped entirely during Take Snapshot. You'll see a warning in the job log. Make sure every device you want to check has a Platform set in Nautobot before running the job.
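The matching and skip behavior described in the two answers above can be sketched roughly as follows. This is an illustration only, assuming simple dict/list shapes; the function and field names are invented and are not the app's actual internals:

```python
def select_parser(device, rule, parsers):
    """Return the Command Parser for this device/rule pair, or None to skip.

    `device` is a dict with "name" and "platform"; `parsers` is a list of
    dicts with "rule" and "platform" keys. All names here are hypothetical.
    """
    if device.get("platform") is None:
        # Mirrors the documented behavior: no Platform means the whole
        # device is skipped, with a warning in the job log.
        print(f"warning: {device['name']} has no Platform assigned; skipping device")
        return None
    for parser in parsers:
        if parser["rule"] == rule and parser["platform"] == device["platform"]:
            return parser
    # Platform is set, but no matching parser exists for this rule:
    # the device/rule combination is skipped.
    print(f"warning: no Command Parser for rule {rule!r} on platform "
          f"{device['platform']!r}; skipping")
    return None
```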

Do I have to create Command Parsers manually?

You can create them manually through the UI, or you can define them in YAML files in a Git repository and have Nautobot sync them automatically. See the Datasource Reference for details.

Rule Types

Which rule type should I use?
  • EXACT_MATCH — Check that nothing changed at all (most common)
  • TOLERANCE — Allow small numeric differences (e.g., CPU, prefix counts)
  • PARAMETER_MATCH — Verify a specific field has a specific expected value
  • OPERATOR — Check a value against a threshold or allowed list
  • REGEX — Validate a value matches a pattern

See the Rule Types Reference for detailed examples of each.
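As a rough illustration of the semantics of each rule type (this is not the app's actual implementation, and the option names passed in are made up for the sketch):

```python
import re

def evaluate(rule_type, pre, post, **opts):
    """Illustrative pass/fail logic for each rule type.

    `pre`/`post` are the extracted values; `opts` stands in for
    hypothetical rule options (tolerance, parameter, operator, pattern).
    """
    if rule_type == "EXACT_MATCH":
        # Fails on any difference at all.
        return pre == post
    if rule_type == "TOLERANCE":
        # Numeric values may drift by up to the allowed absolute amount.
        return abs(post - pre) <= opts["tolerance"]
    if rule_type == "PARAMETER_MATCH":
        # A specific field must hold a specific expected value.
        return post.get(opts["parameter"]) == opts["expected"]
    if rule_type == "OPERATOR":
        # Check the value against a threshold or allowed list.
        return opts["operator"](post)
    if rule_type == "REGEX":
        # The value must match a pattern.
        return re.search(opts["pattern"], post) is not None
    raise ValueError(f"unknown rule type: {rule_type}")
```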

How do I handle data that changes every collection (like counters)?

You have a few options:

  • Use JMESPath in the Command Parser's Path field to extract only the fields you care about, ignoring the volatile ones
  • For EXACT_MATCH rules, use the Exclude field to drop specific keys from the comparison
  • Use a TOLERANCE rule if the value should stay within a certain range

What's the difference between EXACT_MATCH and TOLERANCE?

EXACT_MATCH fails if anything is different between the "pre" and "post" data. TOLERANCE allows numeric values to vary by a specified absolute amount — useful for values that naturally fluctuate (BGP prefix counts, CPU usage, counters).
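A minimal per-field sketch of a tolerance comparison, assuming both snapshots extract the same numeric keys (the field names are invented):

```python
def within_tolerance(pre, post, tolerance):
    """Pass if every key's pre/post delta is within the absolute tolerance."""
    return all(abs(post[k] - pre[k]) <= tolerance for k in pre)

pre = {"prefixes_received": 500000, "prefixes_sent": 120}
post = {"prefixes_received": 500085, "prefixes_sent": 121}
```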

Do PARAMETER_MATCH, OPERATOR, and REGEX rules need a "pre" snapshot?

Yes, the Compare Snapshots job requires both a "pre" and "post" snapshot to run. However, for these three rule types only the "post" snapshot's data is actually evaluated against your rule options — the "pre" snapshot's data is not used in the comparison.

Running Jobs

Where do I run the Take Snapshot and Compare Snapshots jobs?

You can run both jobs from two places:

  • From the list pages: Click the Take Snapshot button on the Snapshots list page, or the Compare Snapshots button on the Validation Results list page
  • From the Jobs page: Navigate to Jobs > find the job by name and run it

Can I schedule snapshots to run automatically?

Yes. Since both jobs are standard Nautobot Jobs, you can use Nautobot's built-in scheduling to run them on a recurring basis (e.g., daily compliance checks). See the Nautobot Jobs documentation for scheduling details.

Can I re-run a comparison on the same two snapshots?

No — each unique pair of (pre Command Output, post Command Output) can only have one Validation Result. If you try to compare the same snapshot pair again, the job will log a warning and skip the pairs that already have results.

What happens to old snapshots and command outputs?

The app includes a Clean Command Output job that deletes command outputs older than a specified retention period (default 30 days). Run it periodically to manage database size. Snapshots themselves are not automatically deleted.
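The retention cutoff works roughly like this sketch (the data shapes and function name are illustrative, not the job's actual code):

```python
from datetime import datetime, timedelta, timezone

def expired_outputs(outputs, retention_days=30, now=None):
    """Return the names of command outputs older than the retention period.

    `outputs` is a list of (name, created_at) pairs with timezone-aware
    timestamps; anything created before `now - retention_days` is expired.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created_at in outputs if created_at < cutoff]
```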

Troubleshooting

Why does my Validation Result show "Fail" when nothing changed?

Check if the data you're collecting includes fields that change naturally (timestamps, counters, uptime, etc.). Use JMESPath in the Command Parser's Path field to extract only the stable fields you want to compare, or use the Exclude field for EXACT_MATCH rules.
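The effect of excluding a volatile key can be demonstrated with a small sketch (the field names are invented, and this is not the app's actual exclusion code):

```python
def exclude_keys(data, exclude):
    """Drop the given keys from every record before an exact-match compare."""
    return [{k: v for k, v in record.items() if k not in exclude}
            for record in data]

pre = [{"neighbor": "10.0.0.1", "state": "Established", "uptime": "4d3h"}]
post = [{"neighbor": "10.0.0.1", "state": "Established", "uptime": "4d5h"}]

# Raw compare fails because uptime changed; after excluding "uptime",
# only the stable fields remain and the compare passes.
```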

Why don't I see results for some of my devices after running Compare Snapshots?

Most likely the device's platform didn't have a matching Command Parser for one or more of the selected rules. Check the Take Snapshot job log — device/rule combinations that are skipped due to missing Command Parsers are logged as warnings.

How do I test a rule before running it in production?
  1. Test your JMESPath expression using the JMESPath online tester with sample device output
  2. Run Take Snapshot against a small set of test devices to verify the collected data looks correct
  3. Make a known change on one device and run a second snapshot
  4. Run Compare Snapshots to confirm the rule catches what you expect and doesn't produce false positives

Where can I get help?

For support with the app, please open a ticket in the Network to Code customer portal.