pdb - How to Debug Your Code Like a Pro (2023-02-01)

Raise your hand if you still remember the first time you ever used print() for debugging your code! Perhaps it’s still the case today? Stare at the traceback, find the faulty line number, and insert a print statement just above it hoping to shed some light on the error. Although that’s a simple method, it has never been a very efficient one: the print() statement has to be moved to the next line… and the next one… and the next one… with no option to move around the code in an interactive way or play around with the imported libraries or functions. What about flooding your code in frustration with thousands of prints? There must be a better way to do it, right?

Fortunately, the community has come to our rescue with an amazing library called pdb — The Python Debugger. While you can use pdb as a regular library where you pass arguments—for example, look at pdb.post_mortem()—we are mainly interested in the interactive debugging mode.
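As a quick sketch of that non-interactive entry point: pdb.post_mortem() takes a traceback object and opens the debugger at the failing frame. The call itself is commented out below so the snippet runs unattended; uncomment it to actually drop into the debugger.

```python
import pdb
import sys

def buggy():
    return 1 / 0  # deliberately raises ZeroDivisionError

try:
    buggy()
except ZeroDivisionError:
    # sys.exc_info() returns (type, value, traceback) for the active exception.
    exc_type, exc_value, tb = sys.exc_info()
    # pdb.post_mortem(tb)  # would open an interactive debugger at the failing frame
```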

Let’s take a basic example using the NTC open source library—jdiff:

from jdiff import CheckType, extract_data_from_json

def pre_post_change_result(reference, comparison):
    """Evaluate pre and post network change."""    
    path = "result[*].interfaces.*.[$name$,interfaceStatus]"
    reference_value = extract_data_from_json(reference, path)
    comparison_value = extract_data_from_json(comparison, path)

    my_check = CheckType.create(check_type="exact")
    return my_check.evaluate(reference_value, comparison_value)

if __name__ == "__main__":
    reference = {
        "result": [
            {
                "interfaces": {
                    "Management1": {
                        "name": "Management1",
                        "interfaceStatus": "connected",
                    }
                }
            }
        ]
    }
    comparison = {
        "result": [
            {
                "interfaces": {
                    "Management1": {
                        "name": "Management1",
                        "interfaceStatus": "down",
                    }
                }
            }
        ]
    }
    pre_post_change_result(reference, comparison)

When I run the above code, however, I get a NotImplementedError:

Traceback (most recent call last):
  File "/Users/olivierif/Desktop/", line 38, in <module>
  File "/Users/olivierif/Desktop/", line 8, in pre_post_change_result
    my_check = CheckType.create(check_type="exact")
  File "/usr/local/lib/python3.10/site-packages/jdiff/", line 29, in create
    raise NotImplementedError

Let’s see how we can debug the above code using pdb. My favorite way is to insert a breakpoint() line in the code, enter debug mode, and move around from there.

New in version 3.7: The built-in breakpoint(), when called with defaults, can be used instead of import pdb; pdb.set_trace().

def pre_post_change_result(reference, comparison):
    """Evaluate pre and post network change."""
    breakpoint()
    path = "result[*].interfaces.*.[$name$,interfaceStatus]"
    reference_value = extract_data_from_json(reference, path)
    comparison_value = extract_data_from_json(comparison, path)
    my_check = CheckType.create(check_type="exact")

    return my_check.evaluate(reference_value, comparison_value)

As soon as I run the code, execution pauses and I am dropped into the Python interpreter at the point where the breakpoint() line was added. As we can see from the output below, pdb prints the code filename and directory path, plus the line (and line number) just below breakpoint(). I can now move around the code and start debugging…

> /Users/olivierif/Desktop/
-> path = "result[*].interfaces.*.[$name$,interfaceStatus]"
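A side note on the built-in: since Python 3.7, breakpoint() simply calls sys.breakpointhook(), which defaults to pdb.set_trace but can be redirected or disabled entirely (set the environment variable PYTHONBREAKPOINT=0 to turn all breakpoints off). A quick illustration of the hook mechanism, unrelated to jdiff:

```python
import sys

calls = []
# Redirect breakpoint() to a simple recording function instead of pdb.
sys.breakpointhook = lambda *args, **kwargs: calls.append("hit")

breakpoint()  # runs our hook; no debugger starts

# Restore the default pdb-backed hook afterwards.
sys.breakpointhook = sys.__breakpointhook__
```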

Let’s move closer to the line number returned by the traceback. Typing n (next) moves pdb to the next line, line number 7.

(Pdb) n
> /Users/olivierif/Desktop/
-> reference_value = extract_data_from_json(reference, path)

What if we want to print, for example, one of the function arguments or a variable? Just type the argument or variable name… Be aware, though, that execution must already have passed the line where your variable is defined: pdb only knows about the code that has already run.

(Pdb) reference
{'result': [{'interfaces': {'Management1': {'name': 'Management1', 'interfaceStatus': 'connected'}}}]}
(Pdb) my_check
*** NameError: name 'my_check' is not defined

Let’s now use j to jump to the faulty code line. Before doing that, let’s see where we are in the code with l (list).

(Pdb) l
  3     def pre_post_change_result(reference, comparison):
  4         """Evaluate pre and post network change."""
  5         breakpoint()
  6         path = "result[*].interfaces.*.[$name$,interfaceStatus]"
  7  ->     reference_value = extract_data_from_json(reference, path)
  8         comparison_value = extract_data_from_json(comparison, path)
  9         my_check = CheckType.create(check_type="exact")
 11         return my_check.evaluate(reference_value, comparison_value)
 (Pdb) j 9
> /Users/olivierif/Desktop/
-> my_check = CheckType.create(check_type="exact")

Note that from line 7 I was able to move directly to line 9 with j 9, where 9 is the line number that I want pdb to jump to.

Now the cool bit: in the code above, I am using the create method to build my check type. If you remember the traceback, that was the line that gave me the error. While I am in the pdb terminal I can s (step) into that method and move around it:

(Pdb) s
> /usr/local/lib/python3.10/site-packages/jdiff/
-> @staticmethod
(Pdb) l
  7     # pylint: disable=arguments-differ
  8     class CheckType(ABC):
  9         """Check Type Base Abstract Class."""
 11  ->     @staticmethod
 12         def create(check_type: str):
 13             """Factory pattern to get the appropriate CheckType implementation.
 15             Args:
 16                 check_type: String to define the type of check.
(Pdb) n
> /usr/local/lib/python3.10/site-packages/jdiff/
-> if check_type == "exact_match":

Wait… what was the argument passed to this method? Can’t really remember. Let’s type a (args).

(Pdb) a
check_type = 'exact'

…here we are! The method accepts the exact_match string as check type, not exact!

Good, let’s now move pdb until we hit a return or raise line (using r) so we can see our NotImplementedError line.

(Pdb) r
> /usr/local/lib/python3.10/site-packages/jdiff/>None
-> raise NotImplementedError
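To see why an unrecognized string ends in NotImplementedError, here is a minimal sketch of the factory pattern that a create() method like jdiff’s typically uses (hypothetical, simplified code; not jdiff’s actual implementation):

```python
class ExactMatchCheck:
    """Stand-in for an exact-match check type implementation."""

def create(check_type: str):
    # Factory: map the incoming string to a concrete check implementation.
    if check_type == "exact_match":
        return ExactMatchCheck()
    # Any unknown string falls through to the error we saw in the traceback.
    raise NotImplementedError
```

Passing "exact" never matches a known key, so the function falls through to the raise.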

As you can see, using pdb is a far more efficient way to debug code. There are tons of useful functions available in interactive mode, and you can also use pdb to add useful verbosity to your code. I do invite you to spend some time with the docs and play with it. Once you get acquainted with the library, debugging your code will be far less frustrating.


Federico Olivieri
Reviewing Code More Effectively (2023-01-13)

I believe that it’s important not only to do code reviews when developing software, but also to really understand why we do code reviews, how to make code reviews useful and effective, and what to look for in a code review.

Why Do We Do Code Reviews?

Fundamentally, a code review serves to answer three questions, loosely in descending order of importance:

  1. Is a proposed change to the code a net improvement to the codebase as a whole?
  2. How might the proposed change be improved upon?
  3. What can the reviewer(s) (more broadly, the team) learn from the proposed change?

I want to especially draw your attention to the nuances of that first question. Code is never perfected, and expecting perfection in a code review can be counterproductive and frustrating to the contributor. The goal of a code review is not perfect code, but better code—the code can always be further improved in the future if it proves necessary to do so. Keeping that in mind can make for more efficient reviewing.

How Do We Make Code Reviews Effective?

As a maintainer of a software project, you should prioritize setting contributors up for successful code reviews. The details may vary depending on the size of the project and its pool of potential contributors, but ideas that generally apply include:

  • Labeling and categorizing open issues (bugs, feature requests, etc.) helpfully, especially in terms of their complexity or any specialized expertise that might be needed to tackle a specific issue. (Many open-source projects use a label such as help wanted or good first issue as a way to highlight issues that would be well-suited for a new contributor to take on, as one example.)
  • Documenting expectations for code contributions clearly, through tools such as GitHub’s “pull request templates”, as well as by providing well-written developer documentation in general.
  • Automating any part of the project’s requirements that can be effectively automated, including unit and regression tests, but also extending to tools such as linters, spell-checkers, and code autoformatters.

As a contributor to a software project, key points to keep in mind include:

  • Solve one problem at a time—the smaller and more self-contained a code review is, the more easily and effectively it can be reviewed.
  • Provide a thorough explanation of the reasons behind the proposed code change—both as code comments and as the “description” or “summary” attached to the code review request, which can and should include screenshots, example payloads, and so forth.
  • Provide testing to demonstrate that the change does what it sets out to do. (Ideally automated, but even a well-documented manual test is far better than nothing!)
  • Approach the code review as a learning experience, and take feedback with an open mind.

As a reviewer of a code review, you should:

  • Approach the code review as both a teaching experience (sharing your hard-won expertise with the current code) and a learning experience.
  • Provide feedback politely and without ego (no matter how tempting it may be to regard your own existing code as impossible to improve upon!).
  • Link to relevant documentation and best practices to clarify and support any feedback you provide.

What Should We Look For in a Code Review?

I like to think of different approaches to a code review as a series of distinct frames of mind, or “hats” that I might “wear”. You can also think of “wearing a hat” as assuming a different persona as a reviewer, then focusing on areas that are important to that persona. In practice, there are no firm dividing lines between these, and I’ll often “wear” many “hats” at once as I’m doing a code review, but it can be a useful checklist to keep in mind for thoroughness.

The hats that I’ll discuss here are:

  • Beginner
  • User
  • Developer
  • Tester
  • Attacker
  • Maintainer

Hat of the Beginner

This involves an approach often labeled as “beginner’s mind” in other contexts. Fundamentally, the goal is to approach the code without preconceptions or assumptions, being unafraid to ask questions and seek clarification. The key skill here is curiosity. For example, you might ask:

  • Is the code understandable and well-documented?
  • Does this code actually do what the function name, comments, docstring, and so forth imply that it should do?
  • What might happen if this conditional logic check evaluates as False rather than True?
  • What might happen if a user provides “weird” or “invalid” inputs?
  • All in all, does the code change “make sense”?

Hat of the User

When wearing this hat, you focus on the experience of the user of this software. This could be a human user, but could also be another piece of software that interacts with this project via an API of some sort. Example questions to ask as a user might include:

  • Is the UI or API sensible, predictable, usable?
  • Is the operation of the software appropriately observable (via logging, metrics, telemetry, and so forth)?
  • Does the proposed code change introduce breaking changes to the existing experience of the user?
  • Does the proposed code change follow the principle of least surprise?
  • Is the code change clearly and correctly documented at all appropriate levels?

Hat of the Developer

This hat is what many of us may think of first when we think of approaches to code review, and it absolutely is a useful and important one at that. This approach focuses heavily on the details of the code and implementation, asking questions like:

  • Can this code be understood and maintained by developers other than the original author?
  • Is it well-designed, useful, reusable, and appropriately abstracted and structured?
  • Does it avoid unnecessary complexity?
  • Does it avoid presenting multiple ways to achieve the same end result?
  • Is it DRY?
  • Does it include changes that aren’t actually needed at this time (YAGNI)?
  • Does it match the idioms and style of the existing code?
  • Is there a standard or “off the shelf” solution that could be used to solve this particular problem instead of writing new code?

Hat of the Tester

This hat is my personal favorite, as I started my career as a software tester. The tester’s goal is to think of what might go wrong, as well as what needs to go right. You might ask:

  • Does this change meet all appropriate requirements (explicitly stated as well as implicit ones)?
  • Is the logic correct, and is it testable to demonstrate correctness?
  • Are the provided tests correct, useful, and thorough?
  • Does the code expect (and appropriately handle) unexpected inputs, events, and data?
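As a concrete (and entirely hypothetical) illustration of that last question, tests should exercise the unexpected path as deliberately as the happy path:

```python
def safe_div(a, b):
    """Divide a by b, returning None instead of raising on a zero divisor."""
    if b == 0:
        return None
    return a / b

def test_happy_path():
    assert safe_div(6, 3) == 2

def test_unexpected_input():
    # The "what could go wrong" case the tester's hat looks for.
    assert safe_div(1, 0) is None
```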

… there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. – Sir Tony Hoare

Hat of the Attacker

This is another fun hat (or maybe I just like finding problems?). The attacker’s hat is closely related to the tester’s hat, but takes on a more devious frame of mind. Here you should ask questions like:

  • Does the code show adequate performance under usual (and unusual) conditions?
  • Can I make the code crash or throw an exception?
  • Can I make the code access something I shouldn’t be able to?
  • Can I make the software break, corrupt or delete data, or otherwise do the unexpected?

Hat of the Maintainer

This is the big-picture counterpart to the detail-focused developer’s hat. Here you should be asking questions like:

  • If certain issues or nitpicks keep coming up in review after review, is there something we should automate in the future?
  • Does the code change fit into the broader context of the project?
  • Are any new dependencies being added to the project? And if so, are they acceptable (actively maintained, appropriately licensed, and so forth)?


I hope that you have learned something new from this overview of code reviewing approaches and best practices. While many of us may take more pleasure from writing our own code than reviewing that of others, it’s always worth remembering that a code review represents both a learning and a teaching opportunity for everyone involved, and that ultimately, the goal of every code review is to help make a better software project. I urge you to make code reviews a priority!


Glenn Matthews
Why You Should Invest in Training - Part 2 - Enabling Your Team as a Network Engineering Manager (2023-01-10)

In part 1 of this series, we covered why you should invest in training from the perspective of a Network Engineer. In part 2, we will discuss the benefits of investing in training for your team as a Network Engineering Manager.

As a Network Engineering Manager during the age of the great resignation, are you finding it difficult to retain your employees (not to mention also trying to find ways to gain greater insight and network control and stay innovative all at the same time)? The industry is growing, and there are more and more demands on managers to keep the company ship steering toward its goal.

One way to check several of these management boxes is to invest in your employees’ career development. Since the pandemic began in 2020, research has shown that companies can combat the effects of this resignation movement with a culture of learning. This decision will, in turn, ensure success for your other management goals. This is not only a move that will benefit the employees, but it’s also a way to achieve the business goals of the company, by training for improved job performance.

With network automation and other new technologies changing the network industry, there is an impending need to train top talent with the skills they need to meet the future of the workforce. And research also shows that employers that invest in their employees’ career development keep their employees longer. Those employees see their company’s dedication to their own upward mobility and will choose to stay at that company rather than search for other positions at other companies (that ultimately WILL invest in them). According to performance-management platform 15Five’s 2022 Workplace Report, nearly half of American workers say clear career growth is one of the most important factors for them to remain at a company. More than three-quarters (76%) of employees say they work harder for an employer that shows it cares about their growth as a professional than one that doesn’t.

Now that you are ready to start investing in your employees with training, where does one start with team training and enablement these days in the network engineering/automation space? In an industry that is constantly evolving, with the myriad of offerings available, from completely online and self-paced, to onsite, in-person options, it is hard to know which will be the most effective for your team. We have established that employers need to focus on upskilling their company’s talent, but they must also do this in ways that are manageable and fiscally feasible, while also making sure that the training program supports the company’s goals. Strategic alignment is key to developing training that will improve job performance (which, in turn, will benefit the employees).

Here are a few pieces of advice to get started:

1. Have a plan and set measurable goals.

What does “done” look like after the training is over? Have a plan and set measurable, attainable goals for your engineers. Ask yourself what specific job tasks related to network automation will each engineer do on a daily/weekly basis following the training program? Then, before the start of training, make sure those engineers know what they need to do in their jobs once they are done with the training. When employees know the bigger picture, and they see how the training will directly relate to their own job performance, they will be more invested in the learning opportunities provided.

2. Administer thorough diagnostics and assessments to truly understand where your engineers rank before and after training.

Before any training is delivered, the organization should administer an assessment that provides leaders with an understanding of where their employees rank in network automation technologies, knowledge, and skills. This will help uncover additional knowledge gaps, better evaluate student comprehension of the course material, and track engineers’ skills progress over time. At NTC, we start our diagnostics with a self-ranked assessment by the engineers, ranking their knowledge and skills from 1 (No Knowledge) to 4 (Advanced / Subject Matter Expert). The organization should do an assessment before and after the training to better understand and track progress in job performance. The assessment not only provides meaningful data for you as a manager but also provides valuable information to the training entity or company to better understand which training programs (which technologies and skills) each of your engineers should start with and which training type or medium would best fit the needs of those teams (live, self-paced, or a hybrid approach).

In addition to the self-ranked assessment, it is important to include an evaluation of the training from the engineers where they have to apply their knowledge and skills, like an exam, a graded lab challenge, or if it fits with the program, a culminating hackathon—where they can apply their training in practical application. Not only does this give the managers additional data, but it allows for greater knowledge retention with the employees, especially if there are supplementary resources available for them to refer back to, like short on-demand videos, or a knowledge base, to cement the learned concepts. Practice makes perfect.

3. Allow time for learning during work hours.

Managers must ensure there is time during the workday for training, learning, and practicing skills. It is not feasible to set the precedent that your employees learn and practice network automation while also managing their regular network engineering workload—at least not at the start of their automation journey. How can you as a manager ensure there are dedicated times where they can attend a virtual workshop, complete a challenge assignment, watch self-paced modules, or attend a five-day training and also complete their job tasks? Remember, with network automation training specifically, this training investment will result in more workplace efficiency and less human error, along with other gains, so the benefits of scheduling this time in the short term will pay off in the long run.

At NTC, our Network Automation Academy provides a hybrid approach to learning for the busy network engineer in a delivery format that allows for greater retention of skills—skills that will improve job performance.

We take the time to strategically align the training program to your team’s needs. We will hold interviews with your team(s) and uncover gaps in knowledge that the right type of enablement will rectify. For example, in our Strategic Architecture and Design Analysis process with our Professional Services offering, we evaluate where enablement would support and reinforce the use of new technologies and workflows and use that enablement to increase the adoption of network automation at your company.

We offer flexible/remote learning; our experienced instructors can go onsite to one location to teach our formal training, or stay virtual if your employees are scattered across the globe. Either way, students will receive our signature 50% lecture/50% lab format. With self-paced learning options and graded challenge assignments, the engineers won’t quickly forget what they learned in that formal course. We also started a Training Credits program in 2022, allowing managers to send their engineers to our public training courses in waves so they aren’t all away from their job tasks at the same time.

Lastly, for the busy engineering team that is unable to attend even a three- to five-day network automation course, NTC Academy will build custom self-paced learning modules diving deep into your company’s unique automation environment to educate all levels of automation users (users, contributors, developers) on the specific information they need to know to succeed in your organization. Complete with knowledge quizzes and challenge assignments to measure learning and comprehension success, these modules provide that sought-after flexibility to fit into the days of busy engineer workloads. With a live guided discussion held every other week, your employees will still have the opportunity for those instructor touchpoints to keep them moving forward on their automation journey.

If you’d like to learn more about the NTC Academy and how we can skyrocket your company’s network automation journey through enablement and adoption, please visit or email us at


Elizabeth Yackley
Nautobot and Device Lifecycle - Nautobot LCM Application - Part 3 (2023-01-03)

This is Part 3 of an ongoing series about using Nautobot and Nautobot’s Lifecycle Management Application to help you with your device hardware and software planning. We will be looking at v1.0.2, which is the latest version at the time of writing this blog.

If you haven’t read Part 1 or Part 2, please give them a quick glance. These parts examine the basic building blocks that Nautobot’s Lifecycle Management Application uses to manage the device and software objects in Nautobot.

In this part we will dive into how Nautobot can help you by looking at various aspects of the Lifecycle Management Application.

What Does the Application Provide?

Throughout my time as a network engineer, device/software lifecycle was a major financial and time-consuming task. Usually my time was spent combing through our source of record, parsing out the devices of a certain hardware model or filtering for devices running a certain software version, to see what needed to be replaced or upgraded. If not that, it would be a spreadsheet or an email listing hardware that we simply needed to upgrade or update.

I would have loved to have had a centralized place that had all the software/hardware end-of-life (EoL) dates along with various other EoL data. During that time I had to go to Cisco’s website or Juniper’s website to gather all the information I needed to get EoL data for the devices/software I was working with.

This application helps to manage lifecycle-related data, such as end-of-life dates, viable software versions, and maintenance contract information. All of this helps to provide valuable lifecycle planning, including maintenance events and hardware events.

Getting Started with DLM Application

First, you will need to add the application to your Nautobot instance. There is a guide here.

Once you have the application installed, you will see that you now have a drop-down menu that you can click on, and it should look like the one below.

DLM Dropdown

There are various items that make up the application, as you can see. So let’s dive into some of these!

Hardware Notices

Hardware notices are a great way to have a centralized location for EoL data for specific hardware. Let’s look at example hardware notices from You can assign device model objects or inventory item objects to these hardware notices either through the GUI or through the ORM using something such as a Nautobot job or Nautobot’s API.

Hardware Notice

  • Devices - This will autopopulate once you assign device model objects or inventory item objects to the hardware notice. If you click on the individual device, you will be forwarded to the device’s summary page (required when creating notice).
  • Device Type - If a device type object is assigned to the hardware notice, you will see that here. If you click on the device type, you will be directed to the summary page of that type; and from there you can click on instances to view all objects associated to the device type.
  • Inventory Item - If an inventory item object is assigned to the hardware notice, you will see that here.
  • End of Sale - Date of the vendor’s end of sale. When using an automated approach, you need to populate with “YYYY-MM-DD” format (required when creating notice).
  • End of Support - Date of the vendor’s end of support. When using an automated approach, you need to populate with “YYYY-MM-DD” format (required when creating notice).
  • End of Software Releases - Date of the vendor’s end of software releases for the hardware. When using an automated approach, you need to populate with “YYYY-MM-DD” format.
  • End of Security Patches - Date of the vendor’s end of security patches for the hardware. When using an automated approach, you need to populate with “YYYY-MM-DD” format.
  • Documentation URL - Vendor’s URL or in-house URL that engineers can read over about the hardware dates and information.
  • Comments - You can fill in any comments regarding the hardware notice that can be filtered by using a GraphQL query or other filter in Nautobot’s ORM.
  • Tags - You can assign tags that you have created to tie in other objects, if needed.

Once you have assigned the hardware notice to either a device model or inventory item, you will see on the device’s page the linked hardware notice.

Hardware Notice Assignment

You will also see the hardware notice tied to the device type object:

Hardware Notice Device Type

Software (Notices)

Software List

The same goes for software notices. These are a great way to have a centralized location for EoL data for specific software. Let’s look at example software from You can assign software Nautobot objects to devices either through the GUI or through the ORM using something such as a Nautobot job.

Let’s look at what the software object has to offer.

Software Notice

  • Device Platform - The Nautobot platform object that is attached to the software (required when creating).
  • Software Version - This is the version of the software (required when creating).
  • Release Date - Date of the vendor’s software release. When using an automated approach, you need to populate with “YYYY-MM-DD” format.
  • End of Support - Date that the vendor’s support for the software will end. When using an automated approach, you need to populate with “YYYY-MM-DD” format.
  • Documentation URL - Vendor’s URL or in-house URL that engineers can read over about the hardware dates and information.
  • Long-Term Support - You can pick (True) or (False) regarding the software having long-term support in the network.
  • Pre Release - If the software is in testing or evaluation you can state that here.
  • Running on Devices - You will need to assign the software to devices when creating the software object. This is easier using Nautobot’s ORM in, for instance, a Nautobot job, which I will be talking about in Part 4.
  • Running on Inventory Items - You will need to assign the software to inventory items (if any) when creating the software object.
  • Corresponding CVEs - Any CVEs that vendor has can be listed here when you create the software object.
  • Tags - You can assign tags that you have created to tie in other objects, if needed.
  • Alias - This is not shown in the above screenshot, but you can add this when creating the software object. Some vendors use the same version name across platforms; so when you are using the ORM to update software information, you will need to filter on the Alias, which should be unique. For example: Cisco IOS - 12.2(33)SXI14.

Once you have assigned the software object to either devices or inventory items, you will see the linked software object (notice) on the device’s page. We will explain validated software further down in the blog.

Software Assignment

Software Images

Software Images

Software Images is a location where you can store various attributes for the software images you are currently using in the network. A software image object can be used through Nautobot’s API or ORM when you want to build jobs around automating software upgrades or validations.

Software Images Detail

Once you select your software image, you will be sent to all the attributes tied to that software image.

  • Software Version - This is the Nautobot Software object that will be tied to the software image (required when creating).
  • Image File Name - Name of the software image (required when creating).
  • Download URL - Vendor’s URL or in-house URL where the software file can be downloaded. Only HTTP and HTTPS are currently supported, as of 1.4.0.
  • Image File Checksum - Checksum of the file to use as validation when the file is downloaded.
  • Default Image - You can set this as a default image to be used on, say, specific hardware model types.
  • Tags - You can assign tags that you have created to tie in other objects, if needed.
  • Device assignments - Devices that are assigned this software image. This will be done when the software image object is created. (At least one assignment is needed.)
  • Inventory Items assignments - Inventory items that are assigned this software image. This will be done when the software image object is created. (At least one assignment is needed.)
  • Object Tags assignments - Object tags that are assigned this software image. This will be done when the software image object is created. (At least one assignment is needed.)
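The Image File Checksum attribute can be put to work in an upgrade job: hash the downloaded file and compare it against the value stored in Nautobot before staging the image. A minimal sketch (the helper name is invented, and the hash algorithm is an assumption — use whichever your vendor publishes):

```python
import hashlib


def verify_image(path, expected_checksum, algorithm="md5"):
    """Return True if the file at `path` matches the checksum stored in Nautobot."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as image:
        # Hash in chunks so large .bin images do not need to fit in memory.
        for chunk in iter(lambda: image.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_checksum.lower()
```

A job would typically abort the upgrade (and log a failure) whenever this check returns False.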

Now you can see that the software object has its software image tab populated.

Updated Software Object

Validated Software

Validated Software List

Validated software is software that has been vetted by your teams and approved for use in the network. Validated software entries are also used in the lifecycle job to compare the software versions running on devices against the validated software that should be on each device. If a software entry is not flagged as the Preferred Version, it will not be targeted by the software validation job.

Validated Software

  • Software Version - This is the Nautobot Software object that will be tied to the software image (required when creating).
  • Valid Start - The date that the software is valid for the network (required when creating).
  • Valid End - Date when the software should be looked at to be replaced. This can be left blank and updated later.
  • Valid Now - This is updated automatically: it is set when the server date falls between the Valid Start and Valid End dates.
  • Preferred Version - This flag will be set so that when the software validation job is run, it will compare the software on device/inventory item to this specific software.
  • Tags - You can assign tags that you have created to tie in other objects, if needed.
  • Device assignments - Devices, Device Types, or Device Roles that are assigned this validated software. (At least one assignment is needed.)
  • Inventory Items assignments - Inventory items that are assigned this validated software. (At least one assignment is needed.)
  • Object Tags assignments - Object tags that are assigned this validated software. (At least one assignment is needed.)

A good way to tie devices to validated software is to use tags, in case you do not want to assign each device individually.

For one use case, I created the tag cisco_c6500_access, and every device that had that tag was meant to be on that validated software.

More about the validated software lifecycle job will appear in Part 4.

CVEs (Common Vulnerabilities and Exposures)

CVEs are notices of vulnerabilities and exposures in the hardware or software you run in your network. A CVE is a potential security issue and should be assigned the highest urgency.

CVE List

  • Name - Name of the corresponding CVE (required when creating).
  • Publish Date - The date that the CVE was published by the vendor (required when creating).
  • Link - URL to the Vendor’s website regarding the CVE (required when creating).
  • Status - Is the CVE active or not?
  • Description - Description of the CVE.
  • Severity - The current severity of the CVE, so that the teams can focus on higher-priority CVEs.
  • CVSS Base Score - This will be given by the vendor in regard to the severity of the vulnerabilities or exposures.
  • CVSSv2 Score - The severity score expressed under the CVSSv2 standard.
  • CVSSv3 Score - The severity score expressed under the CVSSv3 standard.
  • Fix - Any information on fixes for the software/hardware affected.
  • Comments - Comments on what can be done, work-arounds, or next steps regarding software/hardware items that are affected by the CVE.
  • Affected Softwares - This is not shown in the screenshot above, but when creating the CVE you can assign it to specific software entries. Inventory Items and Device Types will be added in a later release.
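Since the Severity and CVSS scores exist precisely so teams can triage, here is a small sketch of sorting CVE records into a work queue (the field name follows the CVSS Base Score attribute above; the records are invented):

```python
def prioritize_cves(cves):
    """Order CVE records so the highest CVSS base score is worked first."""
    return sorted(cves, key=lambda cve: cve.get("cvss_base_score", 0.0), reverse=True)


cve_queue = prioritize_cves(
    [
        {"name": "CVE-2022-1111", "cvss_base_score": 4.3},
        {"name": "CVE-2022-3580", "cvss_base_score": 7.5},
        {"name": "CVE-2022-2222"},  # unscored CVEs sink to the bottom
    ]
)
```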



Contracts

This is a centralized location for all the maintenance contracts you have with various vendors. It includes start and end dates, along with cost and support levels.

As you can see, there is quite a bit of information that you can assign to the contracts. You can assign devices to this contract when it is created, or you can create tags, as before, to link them.



Reports

The Device Lifecycle Application offers two different reports at the time of this writing.

  • Device Software Validation - This page will give you a graphical summary of the device validation job that Nautobot has. This job runs against all the devices and compares them to the validated software that is assigned to each device or device type.
  • Inventory Item Software Validation - This page will give you a graphical summary of the inventory item validation job that Nautobot has. This report runs against all the inventory items and compares them to the validated software that is assigned to each inventory item.

Reports Page

The job that runs this is located in the Jobs tab, as shown below:

Validation Job

GraphQL Examples

I wanted to give a GraphQL example that looks at device software and validated software, since these were learning pain points I went through myself.

At the bottom of the Nautobot GUI there is a GraphQL link where you can test out queries. Here is a sample query that will get you quite a bit of information; there are many more parameters you can add if you want to gather more.

query {
  devices {
    name
    rel_device_soft {
      version
      end_of_support
      documentation_url
      rel_soft_cve {
        name
      }
    }
    device_software_validation {
      software {
        version
        software_images {
          image_file_name
          download_url
        }
      }
      is_validated
    }
  }
}
With the query above, you will get a list of all the devices with the parameters queried. Fields that have no data are simply returned empty.

{
  "data": {
    "devices": [
      {
        "name": "ams01-dist-01",
        "rel_device_soft": {
          "version": "12.2(33)SXI14",
          "end_of_support": "2017-08-31",
          "documentation_url": "",
          "rel_soft_cve": [
            {
              "name": "CVE-2022-3580"
            }
          ]
        },
        "device_software_validation": {
          "software": {
            "version": "12.2(33)SXI14",
            "software_images": [
              {
                "image_file_name": "cat6k_caa-universalk9.12.2.33.SPA.bin",
                "download_url": ""
              }
            ]
          },
          "is_validated": true
        }
      }
    ]
  }
}

Nautobot API Examples

Nautobot’s API can be leveraged to gather Device Lifecycle Application data.

Each of the endpoints below can be sent a GET, PUT, PATCH, or DELETE request. You can also append /{id}/ to any endpoint URI to address a specific object.


Example API Call

Using cURL:

curl -X 'GET' \
  '' \
  -H 'accept: application/json'

Using Python Requests:
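A sketch equivalent to the cURL call above (the URL and token are placeholders, and the plugin endpoint path is indicative — adapt it to your instance):

```python
NAUTOBOT_URL = "https://nautobot.example.com"  # placeholder
TOKEN = "abc123"  # placeholder API token


def software_images_url(base_url, object_id=None):
    """Build the software-images endpoint; append /{id}/ to target one object."""
    url = f"{base_url.rstrip('/')}/api/plugins/nautobot-device-lifecycle-mgmt/software-images/"
    return f"{url}{object_id}/" if object_id else url


def get_software_images(base_url=NAUTOBOT_URL, token=TOKEN):
    import requests  # imported here so the URL helper can be used on its own

    response = requests.get(
        software_images_url(base_url),
        headers={"accept": "application/json", "Authorization": f"Token {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```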


{
  "count": 1,
  "next": null,
  "previous": null,
  "results": [
    {
      "display": "cat6k_caa-universalk9.12.2.33.SPA.bin",
      "id": "26c6c38a-102f-4ccc-8ace-308148f06de6",
      "url": "",
      "image_file_name": "cat6k_caa-universalk9.12.2.33.SPA.bin",
      "software": {
        "display": "Cisco IOS - 12.2(33)SXI14",
        "id": "cfe75355-e5b5-4d67-a4dd-a6fcf59e9cd8",
        "url": "",
        "device_platform": "df107d2a-b84f-4a30-9e48-093e9ad9a550",
        "version": "12.2(33)SXI14",
        "end_of_support": "2017-08-31"
      },
      "device_types": [],
      "inventory_items": [],
      "object_tags": [],
      "download_url": "",
      "image_file_checksum": "5de9de43040184c7de2de60456027f8c",
      "default_image": false,
      "custom_fields": {},
      "tags": [],
      "created": "2022-11-15",
      "last_updated": "2022-11-15T18:09:03.760005Z"
    }
  ]
}

What’s Next?

In the coming months I will be creating a specific blog post on each of the concepts mentioned below.


Zack Tobar
Managing Your Nautobot Environment with Poetry
2022-12-23

As we’ve written previously, Poetry is the preferred method of managing Python projects, using the PEP 621 approach of storing project metadata in the pyproject.toml file. The intention behind the PEP 621 format is to keep project metadata and related dependency management concise and contained in a single file. As Nautobot and all of its Apps are Django-based Python applications, Poetry is a perfect fit for managing your Nautobot environment. This article explains the various options Poetry provides for making managing and developing with Nautobot easier.

Managing Dependencies

The structure of your pyproject.toml file has been described in many other articles, so I won’t get into too much detail here. Below, I’ve provided an example of a new pyproject.toml for a Nautobot home lab that I’d like to manage using Poetry.

[tool.poetry]
name = "nautobot_homelab"
version = "0.1.0"
description = "Nautobot Home Lab Environment"
authors = ["Network to Code, LLC <>"]

[tool.poetry.dependencies]
python = "3.10"
nautobot = "1.5.5"
nautobot-capacity-metrics = "^2.0.0"
nautobot-homelab-plugin = {path = "plugins/homelab_plugin", develop = true}
nautobot-ssot = {git = "", branch = "develop"}

[tool.poetry.dev-dependencies]
bandit = "*"
black = "*"
django-debug-toolbar = "*"
django-extensions = "*"
invoke = "*"
ipython = "*"
pydocstyle = "*"
pylint = "*"
pylint-django = "*"
pytest = "*"
requests_mock = "*"
yamllint = "*"
toml = "*"

As you can see, I’ve defined the versions to be used as Python 3.10 and Nautobot 1.5.5 for the environment. I’ve also included the Capacity Metrics App to enable use with my lab telemetry stack, an App called nautobot-homelab-plugin being locally developed, and finally the Single Source of Truth framework for use with the locally developed App. You’ll notice that those last two are defined using more than just the desired version. They were added into the Poetry environment using the local directory of the plugin being worked on or referencing a Git repository and branch where the code resides. The final group of dependencies are noted as dev-dependencies as they should only be used for development environments. This is where you’d put any packages that you wish to use while developing your Apps, such as code linters.

Local Path

The plugin added to the project via a local path was added by issuing the command poetry add --editable ./plugins/homelab_plugin at the command line. This works as long as Poetry finds another pyproject.toml file for that project in the specified folder. If found, it will include all documented dependencies when generating the project lockfile. This is extremely helpful when you are working with a local development environment and need to view your changes quickly. The --editable flag denotes that the path should be loaded in develop mode, so changes are dynamically loaded. This means that as you make changes to your App while developing it, you don’t have to rebuild the entire Python package for it to function. This makes it much easier and quicker to iterate on your App, as changes should be immediately reflected in your environment.

Git Repository

If the code for your App resides in a Git repository, it’s typically best to just reference the repository and branch where it’s found as opposed to cloning it locally. This is done by issuing the command poetry add git+ at the command line. Using this method allows for you to retain the version control inherent to Git while still developing your App and testing it in your environment. This is especially handy when you’re working on a patch for some open-source project like the Infoblox SSoT App. As you don’t have direct access to the code, you would need to fork the repository and point to that for your environment. This enables you to test your fixes directly with your data and Nautobot before submitting a Pull Request back to the original repository for the fixes.

Local Development

Once you’ve determined all of the appropriate dependencies for your Nautobot environment, you should execute poetry lock to generate the project lockfile. If you wish to use a local development environment, your next step is to issue poetry install to install all of those dependencies into the project virtual environment. This includes Nautobot and all of the dependencies you’ve defined in the pyproject.toml file. You will still be required to stand up either a Postgres or MySQL database and a Redis server for full functionality. This can be quickly and easily accomplished using Docker containers. Putting your secrets in a creds.env file and all other environment variables in your development.env file while using the following Docker Compose file will enable local development with your Poetry environment:

version: "3.8"
services:
  postgres:
    image: "postgres:13-alpine"
    env_file:
      - "development.env"
      - "creds.env"
    ports:
      - "5432:5432"
    volumes:
      # - "./nautobot.sql:/tmp/nautobot.sql"
      - "postgres_data:/var/lib/postgresql/data"
  redis:
    image: "redis:6-alpine"
    command:
      - "sh"
      - "-c"  # this is to evaluate the $NAUTOBOT_REDIS_PASSWORD from the env
      - "redis-server --appendonly yes --requirepass $$NAUTOBOT_REDIS_PASSWORD"
    env_file:
      - "development.env"
      - "creds.env"
    ports:
      - "6379:6379"
volumes:
  postgres_data: {}

Docker Development

If you are working with Nautobot in a container-based environment, such as part of a Kubernetes or Nomad cluster, it might make sense to have your entire environment inside Docker containers instead of having Nautobot in your Poetry environment. However, you can still utilize the Poetry lockfile generated from the earlier step to create your Nautobot containers. By passing the desired Python and Nautobot versions to the Dockerfile below, you can generate a development container with Nautobot and your App installed.

# Build args must be declared before FROM to be usable in it.
ARG NAUTOBOT_VERSION
ARG PYTHON_VER
# Base image name assumed here; adjust to your registry if needed.
FROM ghcr.io/nautobot/nautobot:${NAUTOBOT_VERSION}-py${PYTHON_VER} as nautobot-base

# Root is assumed to be needed for apt-get; the upstream image runs as a non-root user.
USER 0

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get autoremove -y && \
    apt-get clean all && \
    rm -rf /var/lib/apt/lists/* && \
    pip --no-cache-dir install --upgrade pip wheel

CMD ["nautobot-server", "runserver", "0.0.0.0:8080", "--insecure"]

FROM nautobot-base as nautobot-dev

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get autoremove -y && \
    apt-get clean all && \
    rm -rf /var/lib/apt/lists/*

COPY ./pyproject.toml ./poetry.lock /source/
COPY ./plugins /source/plugins

# Install the Nautobot project to include Nautobot
RUN cd /source && \
    poetry install --no-interaction --no-ansi && \
    mkdir /tmp/dist && \
    poetry export --without-hashes -o /tmp/dist/requirements.txt

# -------------------------------------------------------------------------------------
# Install all included plugins
# -------------------------------------------------------------------------------------
RUN for plugin in /source/plugins/*; do \
        cd $plugin && \
        poetry build && \
        cp dist/*.whl /tmp/dist; \
    done

COPY ./jobs /opt/nautobot/jobs
COPY ./nautobot_config.py /opt/nautobot/nautobot_config.py

WORKDIR /source

Having a self-contained environment for developing your App can be extremely helpful in ensuring that all variables are accounted for, can be easily reproduced by others, and will not impact the system you’re developing on. It also makes it easy to create a production environment by copying the appropriate files from your development container into the production container. This can also be utilized in a CI/CD pipeline for automated testing of your application.


Today we’ve gone over how you can use Poetry to manage your Nautobot environment. We’ve shown how you can have the Apps included in your environment by referencing a Git repository or simply referencing a local directory where the App resides. We’ve also seen how the environment that Poetry creates can be used for developing Nautobot Apps.


Justin Drew
Simplify Network Automation with Nautobot Jobs
2022-12-20

Struggling to keep your script sprawl under control? You’re not alone. At some point, every organization on their network automation journey ends up with script sprawl. In an enterprise environment, however, being able to maintain control over these scripts and execute them using self-service is essential for empowering teams to take advantage of network automation.

During one of our recent webinars, Network to Code’s John Anderson, Principal Consultant, and Jeremy White, Principal Developer Advocate, discussed self-service and API access to Python scripts with Nautobot Jobs.

Read on to learn more about the challenges of script sprawl in network automation and how to overcome them using Nautobot Jobs.

The Challenges of Script Sprawl

After discovering the benefits of network automation, many network team members start building scripts to help them perform their daily tasks. However, these scripts are often executed locally within an individual team member’s own environment, which is also known as robotic desktop automation (RDA).

“When we’re operating in that sort of environment, it can be challenging to share the work that engineers are doing amongst one another,” stated Anderson. “It can be difficult to share if we don’t have the proper governance controls in place.”

Many network teams use Git repositories to track and share automation scripts, but this still poses problems for organizations. Engineers need to clone these Git repositories down to their local environments and ensure they’re using the right branch or script version. This approach lacks operational control and has limited consumability at an organizational level.

“We have the skill set within the team that we’re building some tooling and Python scripting, but we lack the auditability and permissions necessary in enterprise environments,” Anderson explained. “It’s helpful for individual team members, but we want to be able to scale this to the whole organization.”

The question is: how can organizations begin operationalizing these scripts so that other team members can take advantage of them as well? Overcoming this script sprawl requires centralizing governance while still providing self-service capabilities across distributed teams.

Automating Scripts with Nautobot

Nautobot is a source of truth and network automation platform that offers features for documenting and tracking network data. But the platform goes beyond data management with Nautobot Jobs, which provide a way to execute custom logic on demand.

By moving automation scripts that are distributed throughout the network and across teams to Nautobot Jobs, organizations can centralize all of this code within Nautobot. These automation scripts can then be easily executed using the Nautobot UI or API by various team members.

There are three primary ways to migrate Jobs into Nautobot:

  • Upload scripts that were developed locally
  • Sync scripts from existing Git repositories
  • Use scripts provided by Nautobot Apps

“As we migrate these scripts into Nautobot Jobs, we gain operational control and the ability to scale this to the whole team,” explained Anderson. “Nautobot is so much more than just a data platform; it’s truly a network automation platform. We can treat Nautobot as the nucleus of automation.”

Diving Deeper into Nautobot Jobs

Nautobot Jobs are Python scripts that take user input from an automatically generated form before running. That means script authors can write a Job script and define the necessary input without manually building the dynamic form using HTML or CSS. Once the script is created, users can easily consume it using the dynamic form within the Nautobot UI.

“As Job authors, we focus on the actual business logic that’s important for manipulating data or talking to network devices,” Anderson said. “By utilizing Nautobot as the platform in which we operate those scripts, we get a bunch of enterprise-grade features as well.”

By using Nautobot Jobs for automated scripts, organizations can take advantage of a number of enterprise-grade features for better collaboration across teams:

  • Enterprise access and governance control options to improve security
  • A distributed Python execution environment to run scripts at scale
  • Integrated access to network data within Nautobot to ensure consistency

In addition to the self-service UI, Nautobot Jobs are accessible via API and webhooks. The REST API wrapper is a powerful way for engineers to access previously siloed automation scripts. Webhooks also enable event-based automation, where Python scripts are executed when certain criteria have been met.

Improving Collaboration and Contribution Across Teams

As you can see, Nautobot Jobs make it much easier to create and execute automation scripts in a distributed environment. It’s easy to convert scripts into Nautobot Jobs that can be consumed via API or self-service UI forms.

There’s also a Git integration to streamline version control and enable engineers to manage the governance of the code itself across one or more repositories. Engineers can maintain this code and conduct peer reviews using a traditional development workflow. This improves collaboration and makes it easier for engineers to contribute to network automation efforts.

Want to learn more about simplifying network automation with Nautobot Jobs? Watch the full webinar here.

-Tim S

Tim Schreyack
Network Automation with Nautobot and IP Fabric
2022-12-15

Looking to build a comprehensive network automation solution?

You’ve come to the right place.

During one of our recent webinars, Paddy Kelly, Managing Consultant at Network to Code, and Daren Fulwell, Network Automation Evangelist at IP Fabric, had a discussion about network automation and single sources of truth.

Their conversation covered the key components of an effective network automation architecture, overviews of the Nautobot and IP Fabric platforms, and the benefits of two Nautobot IP Fabric integrations. This blog will provide the highlights of their conversation.

Top Components of Network Automation Architecture

Network to Code (NTC) has been building and refining network automation solutions for almost a decade.

Through this experience, NTC has found that an effective network automation solution has the following key components:

  1. User Interactions: Determines how users interact with the network automation solution, including dashboards, portals, and ChatOps.
  2. Orchestration: Defines how tasks in the automation engine are connected and coordinated. This is the bridge between the automation engine and the observability stack.
  3. Observability and Analytics: Exposes a continuous flow of rich data from the network, such as the observed state and how the network is actually operating.
  4. Automation Engines: Executes all the tasks that change the state of the network, from rendering configurations and provisioning network components to maintaining compliance.
  5. Source of Truth: Stores the data that defines the intended state of the network. It should contain all the systems and databases that act as the authority for their domain.

Nautobot Platform

For those who aren’t already aware, Nautobot is an extensible and flexible Network Source of Truth and Network Automation Platform that can power any network automation architecture.

“The primary use case of Nautobot is a flexible source of truth for networking,” explained Kelly. “The aim is to act as an authority for network data and document the network with the source of truth.”

Along with centralizing network data into a single source of truth, Nautobot provides an extensible plugin system with REST APIs, GraphQL, Git, webhooks, Job Hooks, and more. This enables Nautobot to integrate with a wide variety of systems to synchronize network data both into and out of the platform.

The Nautobot platform also supports creating and deploying custom apps to handle unique network automation use cases. In fact, there’s now a growing app ecosystem for Nautobot, which includes the IP Fabric Single Source of Truth app and IP Fabric ChatOps app.

IP Fabric Automated Assurance Platform

IP Fabric is an automated assurance platform that gives organizations end-to-end visibility into their network within a single platform. This includes an inventory of the entire network, the observed state of the network, and the topology of how individual network components are interconnected.

“The role of IP Fabric is network assurance,” stated Fulwell. “This is basically making sure that the network does what you intend it to. Not just one part of the network, but the whole thing.”

Many companies rely on documentation, tribal knowledge, or guesswork to manage their network, but this fragmented approach is inefficient and time-consuming. IP Fabric automatically discovers this data and makes a comprehensive view of the entire network available to users.

Diving Deeper into Single Source of Truth (SSoT)

Organizations have many different systems that manage their network data, such as IPAM (IP Address Management) tools, configuration tools, monitoring platforms, and more. Data ownership and governance across these different platforms is complex, but ensuring the flow of data between them is crucial to support network automation.

“It can be difficult to maintain this network data because duplication can happen across systems and then you don’t know which one to trust,” Paddy said. “An aggregation layer can really help in managing that data and providing a unified API.”

An aggregation layer acts as a single source of truth (SSoT) that can streamline network automation. Nautobot offers an SSoT framework that has existing integrations for many popular networks and provides the APIs necessary to build custom integrations with other systems. This enables data synchronization into and out of Nautobot.

For example, the IP Fabric Single Source of Truth app can pull the network observability data from the IP Fabric Assurance Platform into Nautobot. By synchronizing data between Nautobot and IP Fabric, organizations can begin building an SSoT that provides a comprehensive picture of their network.

Nautobot IP Fabric ChatOps

Nautobot IP Fabric ChatOps is another app in the ecosystem that leverages the ChatOps framework from Nautobot.

The ChatOps framework is a highly extensible, multi-platform chatbot that supports Slack, Microsoft Teams, Webex Teams, and many other popular communication platforms. ChatOps allows users to easily query and receive data from Nautobot and the other systems integrated with it.

In short, the Nautobot ChatOps app enables real-time access to data in IP Fabric’s Assurance Platform, significantly reducing the time it takes to resolve production issues by network and security operations teams. ChatOps, therefore, takes network data management a step further by making the SSoT more accessible to the broader organization.

“The chatbot exposes self-service capabilities to other teams so they can access the network and see data within the network,” concluded Paddy. “To be able to see comprehensive network data and have it available within a simple chat command is really powerful.”

Want to learn more about building an effective network automation solution with Nautobot and IP Fabric? Watch the full webinar here.


Tim Schreyack
Network Automation Architecture - An Example
2022-12-13

In the previous blogs of this series about Network Automation Architecture, 1 and 2, we presented the key architectural components and their respective functions. Now it’s time to apply that architecture to a real scenario: firewall rule automation in a hybrid network environment.

This is not the last blog of this series; more detailed posts on each of the primary components will come later. But before they do, we want to bring the architecture to life by showing how to map a network automation solution onto it.

The Example: Firewall Rule Automation in a Hybrid Environment

Firewall rule automation is a common network operation task that usually gets complicated due to the number of potential requestors and applications, and the variety of different platforms involved.

To illustrate it, we will use a simple scenario with only two firewall platforms: one running on-premises and another running in a cloud provider. Each environment has different configuration processes, and until both are updated and in sync, the network communication between the source and target application will not succeed.

From a user perspective, we want to offer a simple interface where the user can define the required communication; from the information provided, the firewall services will be updated to allow the network flow between the desired applications. (Notice that we are not talking about IPs; we keep this more general, with an application composed of multiple IP endpoints and different network service ports.)

Hopefully this scenario looks familiar to you. Now it’s time to start defining the automation solution that could implement it. The first step in automating a process or workflow is always to understand it, and that is exactly where we start next.

Describe the Manual Operations Workflow

Even though describing an operation that your network team performs frequently may seem a trivial exercise, we recommend taking your time and doing it, collecting information from different perspectives: not only from the network operators performing the firewall changes (two people usually do the same task in slightly different ways), but also from the other people involved in the process (the requestors, and also other teams, such as the security policy team that defines the security policy).

You can check other NTC blogs covering this topic in detail.

At this stage, automation is not the main topic. We should focus first on capturing, step-by-step, what happens from the workflow kick-off until it is completed.

In our example, a simplified workflow could look like this:

  1. An Application Owner notifies us about the source and destination of the flow.
  2. A network operator checks if it matches the security policy rules.
  3. If the request is accepted, the operator runs some traceroutes to determine which firewall devices must be updated.
  4. The operator connects to the different firewalls to update the rules according to the communication flow requested.
  5. The operator notifies the Application Owner that the configuration is ready and that verification can proceed.

Disclaimer: do not take this as an exhaustive analysis. In this example we use the minimum information needed to illustrate the point; a real example would contain many more details.

With this draft of the manual operational steps to perform the network task, we can start thinking about what an automated version could look like.

Translate the Workflow Steps to Automated Tasks

Once the manual workflow steps are defined, it’s time to translate these steps into automated tasks, understanding what is required and adding some improvements along the way.

The next figure shows a candidate automated workflow to solve the firewall rule automation operation described.

You will notice that some steps are taken directly from the manual workflow and defined as automated tasks, such as “Application Owner requests access from App A to App B”. However, even though not described here in detail (for brevity), each one of these steps comes with specific requirements about data management.

Every automated workflow requires structured data that a machine can understand. So, at the workflow entry point we have to define clearly the minimum information required. In our example, to keep things simple, we assume that an application name (source and destination) will suffice; the actual IPs and services will be resolved elsewhere, making the user experience simpler.

Simply by automating the same steps, we already gain some benefits. Automation enforces data normalization and validation, which minimizes potential misunderstandings and copy-and-paste errors. We also get consistency out of the box: every operation behaves the same way, depending not on an individual operator’s criteria but on those of the whole team that defined the automation solution.
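As a toy illustration of that normalization and validation at the entry point, the request could be modeled like this (all names are invented, and in practice the application inventory would come from the Source of Truth):

```python
from dataclasses import dataclass

# Hypothetical application inventory; in practice this comes from the Source of Truth.
KNOWN_APPS = {"app-a", "app-b", "billing"}


@dataclass(frozen=True)
class FlowRequest:
    source_app: str
    destination_app: str


def validate_request(request):
    """Return a list of validation errors; an empty list means the request is accepted."""
    errors = []
    for label, app in (("source", request.source_app), ("destination", request.destination_app)):
        if app.strip().lower() not in KNOWN_APPS:
            errors.append(f"unknown {label} application: {app}")
    if request.source_app.strip().lower() == request.destination_app.strip().lower():
        errors.append("source and destination must differ")
    return errors
```

Note how normalizing case and whitespace up front removes exactly the kind of copy-and-paste variation that trips up manual workflows.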

But once automation is in place, we can introduce advanced steps that were not possible, or were harder, in the manual workflow. For instance, we can execute pre- and post-validation of the firewall changes and get feedback about the change we are deploying.

When a workflow is automated, it does not have to be automated 100 percent. In some cases, adding a manual judgment step can make sense, especially during the adoption phase.

Once we have described and understood what is needed in each of the automated workflow steps, it is time to use the network automation architecture.

Map the Automated Tasks to Architecture Components

At this point, we know what we require and expect from each of the steps that the workflow needs to implement. By using the network automation architecture, we can group some of these tasks into the same functional blocks, giving us better insight when determining which tools best solve the requirements.

In the next figure you can observe how the workflow tasks are mapped to the different architecture components:

  • User Interaction: contains the first and the last steps of the workflow. Initiating it, providing the required data, and getting the confirmation that the request has been executed.
  • Source of Truth: processes and ingests the data from the user and, using other data that defines the network intent, is able to validate whether it matches the security policy and determine which firewalls are in the network path between the two applications. Note that, implicitly, this SoT converts the application names into the network data we require: IP addresses and service ports.
  • Orchestration: allows triggering the automation engine process in different ways.
  • Automation Engine: here is where the magic happens. It converts the intent from the Source of Truth into real configuration artifacts that can be pre-validated, and finally deployed to the different network elements.
  • Telemetry and Observability: after the network state is updated, this component collects the data that provides evidence that the change has succeeded, or not, and sends a user notification accordingly.

Even though this is a really simplified explanation of the mapping process, it should give you an initial understanding of how the network automation architecture can be used.

Choose the Tools to Implement Each Component’s Tasks

Finally, it’s time to figure out which tools are best suited to the needs of each architectural component. Keep in mind that, in reality, this process is usually strongly influenced by the current tech stack or by other automation projects that can add constraints to the decision.

In the next figure, we can see a selection of open-source tooling that could meet the requirements of the automation workflow presented in this example.

Disclaimer: To illustrate this example we have selected some open-source tools. But there are many other options, both open source and from vendors, that are equally valid ways to solve it.

  • User Interaction: Mattermost is an instant messaging application to allow user interaction, and Grafana offers a dashboard for visualization of the flow statistics.
  • Source of Truth: Consul provides a dynamic mapping of application names to IP addresses. Git contains the Jinja templates to create the CLI configuration artifacts for the on-premises firewall. And Nautobot, with the Firewall Models extension, offers the abstraction of firewall rules for both environments, along with the definition of the on-premises and cloud network inventory and CMDB.
  • Orchestration: AWX will start the workflow execution via a manual trigger, or from a webhook triggered from Nautobot when a new firewall rule is created, updated, or deleted.
  • Automation Engine: Batfish provides pre-validation of network configurations. Then Ansible will be used to configure the on-premises network firewall, rendering the configuration from the Source of Truth, and Terraform will also use the Source of Truth intent to provision the cloud firewall services.
  • Telemetry and Observability: Telegraf collector will get network data metrics from the firewall services, and will store them in Prometheus for later consumption from other automation processes or from Grafana.

And this is just the beginning of the game! Once all these tools are in place, you can look to reuse them for other workflows, or take advantage of functionality they provide that was not in the initial set of requirements. Now that they are broken out by component, you can also replace tools that are no longer the right fit, or add new ones to complement them when new requirements appear.

What’s Next

Hopefully you liked this blog and got a better understanding of how the network automation architecture can help you approach building network automation solutions.

But fasten your seat belts, because the meat and potatoes comes now! The next blogs will cover each of the architecture components in more detail, describing the features and challenges that must be taken into account.


Christian Adell
What Is gRPC? - Part 2 (2022-12-08)

This blog will build on top of what was discussed in Part 1 of this series. If you have not read it, I highly recommend checking it out here. In it, I discuss Protocol Buffers, which are an integral part of gRPC. In this blog post, we’ll build a simple gRPC client/server setup so we can actually see the definition files in action. In order to get to a fully working gRPC client/server, we need to take the following steps:

  • Create a service in our Protocol Buffer
  • Create a request method within our service
  • Create a response in our Protocol Buffer

Extending Our Protocol Buffer Definition File

Let’s go over the three additions we need to make to our Protocol Buffer definition file (the service, request, and response portions).

Adding the Service

Currently, our Protocol Buffer definition should look like this:

syntax = "proto2";

package tutorial;

message Interface {
  required string name = 1;
  optional string description = 2;
  optional int32 speed = 3;
}
Let’s add the service block to our definition file.

service InterfaceService {
}

This line defines a service, named InterfaceService, which our gRPC server will be offering. Within this service block, we can add the methods that the gRPC client can call.

Adding the Request and Response Methods

Before we add the request and response methods, I need to discuss the different types of messages gRPC services can handle. Four basic implementations of gRPC request and response methods can be used:

  1. Unary - This is similar to REST. The client sends a request, and the server sends a single response message.
  2. Server Streaming - The client sends a message, and the server responds with a stream of messages.
  3. Client Streaming - The client sends a stream of messages, and the server responds with a single message.
  4. Bidirectional Streaming - Both the client and server send streams of messages.

Each method has its pros and cons. For the sake of keeping this blog short, I’ll be implementing a unary request/response gRPC service.

You can read official documentation on gRPC message types here.

Let’s add a unary request/response type to our InterfaceService in our definition file.

service InterfaceService {
  rpc StandardizeInterfaceDescription(Interface) returns (Interface) {}
}

We are defining an RPC method named StandardizeInterfaceDescription that takes in data adhering to the Interface message type we defined in the first blog post. We also define that the method will return data adhering to that same Interface message type.

You can have a gRPC function take in and return different message types.

Now, our Protocol Buffer definition file titled interface.proto should look like this.

syntax = "proto2";

package tutorial;

message Interface {
  required string name = 1;
  optional string description = 2;
  optional int32 speed = 3;
}

service InterfaceService {
  rpc StandardizeInterfaceDescription(Interface) returns (Interface) {}
}

Re-creating Our Python gRPC Files

Now that we have our updated interface.proto definition file, we need to recompile to update our auto-generated gRPC Python code. Here we will be using the grpcio-tools library rather than the Protocol Buffer compiler we used in the first blog post. The grpcio-tools library is the more all-encompassing tool compared to the Protocol Buffer compiler. To install the grpcio-tools library, run the command pip install grpcio-tools. Then, making sure you are in the same directory as the interface.proto file, run the command python -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. interface.proto.

The -I designates the proto_path or, in other words, the directory in which to look for your proto definition file and its imports. The last argument is the name of your proto file.

After running this command, you should have two new files, named interface_pb2.py and interface_pb2_grpc.py. The first is the protobuf class code, and the second is the gRPC code related to our service. Here is a snapshot of the current directory structure:


Creating the gRPC Server

Now let’s create the code needed for our gRPC server. Create a new file at the root of the directory we are working in and name it grpc_server.py. Copy the below code snippet into that file:

from concurrent import futures

import grpc
import interface_pb2
import interface_pb2_grpc

class InterfaceGrpc(interface_pb2_grpc.InterfaceServiceServicer):
    def StandardizeInterfaceDescription(self, request, context):
        standard_description = request.description.upper()
        return interface_pb2.Interface(
            name=request.name, description=standard_description, speed=request.speed
        )

def server():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=2))
    interface_pb2_grpc.add_InterfaceServiceServicer_to_server(InterfaceGrpc(), server)
    server.add_insecure_port("localhost:5001")
    print("Starting gRPC Server")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    server()
Let’s take a quick look at this file. At the top we are importing a number of things. First, from the concurrent library, we are importing futures. This just allows us to asynchrously execute callables. We also import the grpc library. Lastly, we import the two files we created earlier.
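
As a standalone illustration (independent of gRPC), this is what concurrent.futures gives us on its own: a pool that executes callables asynchronously, which is the same primitive grpc.server() uses for its worker threads. The greet function below is just a stand-in for real request handling:

```python
from concurrent import futures


def greet(name):
    # A trivial callable standing in for real request-handling work.
    return f"Hello, {name}"


with futures.ThreadPoolExecutor(max_workers=2) as pool:
    # submit() schedules the callable and returns a Future immediately;
    # result() blocks until that Future has completed.
    submitted = [pool.submit(greet, name) for name in ("client-1", "client-2")]
    results = [future.result() for future in submitted]

print(results)
```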

At the bottom of the file, our entry point into this file is the server() function, which does four main things:

  1. Creates a gRPC server from the grpc library
  2. Registers our Interface Service to the newly created gRPC server
  3. Adds a port that the gRPC server will listen on
  4. Starts the server

Near the top of the file, we are extending the InterfaceServiceServicer class from our interface_pb2_grpc file. Within this class is where we define the logic for the functions we created stubs for in our .proto file. Functions created in this class take two required arguments other than self, listed below:

  • request - This is the Interface message type sent into our gRPC function.
  • context - This is a grpc.ServicerContext object providing RPC-specific information, such as timeout state and metadata.

The next few lines take the data passed in via the request argument, capitalize the description, create an Interface message, and send it back to the client. That’s the gRPC server code. Let’s quickly put together a new file for our gRPC client.

Creating the gRPC Client

Creating the gRPC client is pretty straightforward. Copy and paste the below code snippet into a file called grpc_client.py.

import grpc
import interface_pb2
import interface_pb2_grpc

def client():
    channel = grpc.insecure_channel("localhost:5001")
    grpc_stub = interface_pb2_grpc.InterfaceServiceStub(channel)
    interface = interface_pb2.Interface(
        name="GigabitEthernet0/1", description="Port to DMZ firewall", speed=20
    )
    retrieved_interface = grpc_stub.StandardizeInterfaceDescription(interface)
    print(retrieved_interface)

if __name__ == "__main__":
    client()

The client function does the following:

  1. Creates an insecure channel with a gRPC server running on your localhost on port 5001.
  2. Using the channel created in step 1, creates a client stub for the InterfaceService service we defined in our .proto file.
  3. Creates an Interface message object to pass into the gRPC call.
  4. Calls the StandardizeInterfaceDescription function call and passes in the interface object from step 3.
  5. Lastly, prints out what was received from the gRPC call.

Running the Client and Server Together

Now, let’s run the client and server together so we can see this code in action! First, open a terminal in the directory where grpc_server.py lives and run python3 grpc_server.py. You should have no prompt and get the line Starting gRPC Server.

Open another terminal in the same location and run python3 grpc_client.py:

~/repo/Sandbox/blog_grpc ❯ python3 grpc_client.py

name: "GigabitEthernet0/1"
description: "PORT TO DMZ FIREWALL"
speed: 20

You can run the client side over and over again. The server will stay up and respond to however many requests it receives.

If everything was successful, you will get the response shown above. As you can see, it capitalized our entire description just like we wanted. I would definitely suggest playing around with this simple implementation of a gRPC server and client. You can even start to include non-unary functions in your gRPC server to explore more in-depth.


In this blog post, we updated our existing .proto definition file with our InterfaceService and added a StandardizeInterfaceDescription function within that service. We also used the grpc_tools module to generate the code needed to create our own gRPC server. Lastly, we created a small gRPC client to show our gRPC server in action. Hopefully, you now have a deeper understanding of what gRPC is and how it works. Initially, I wanted to explore gRPC in the networking world in this blog post. However, I thought it important to continue to look at gRPC a little more in depth. In Part 3 of this series, I will discuss where gRPC is within the networking world. We will review the more established names, such as Cisco, Arista, and Juniper, and look at how they are using gRPC and how they are enabling network engineers to use gRPC for their automation.


Adam Byczkowski
Network Automation Architecture - The Components (2022-12-06)

In the first blog of the Network Automation Architecture blog series, we exposed our motivations and briefly introduced the six components that compose it. This blog goes a bit deeper, providing more details for a better understanding of each component's scope.

As a refresh, this was the diagram describing the network automation architecture proposed:

This blog should give you a good overview of the architecture as a whole, to start understanding the role of each of its components. However, it’s not in the scope of this blog to go deep on any of them; that will happen in the next blogs of this series.

A similar architecture could also be used to automate other types of IT infrastructure. What makes the proposal “special” is the focus on networking. So, it does make sense to start first with the component that influences the rest, the Network Infrastructure.

Network Infrastructure

During the last few years, the “network” has evolved from being physical network devices to also including network virtualized functions, network controllers, and abstract network services provided by Cloud platforms. All these types of network infrastructure are targets for automation, each one with their own characteristics.

Along with the different network platforms, new ways for interaction have been added on top of the traditional SNMP and CLI interfaces. New interfaces, such as Linux API/Shell, NETCONF, RESTCONF, gNMI, or popular REST APIs, are now common ways to interact with network infrastructure. This makes automation integration much easier, and more capable.
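
To give a feel for one of these programmatic interfaces, here is a sketch of how a RESTCONF call is typically shaped. The device hostname and YANG path below are hypothetical, and we only construct the request object here, nothing is sent on the wire:

```python
from urllib import request

# Hypothetical RESTCONF endpoint following the ietf-interfaces YANG model.
# A real call would also carry authentication and would actually be sent.
url = "https://device.example.com/restconf/data/ietf-interfaces:interfaces"
req = request.Request(url, headers={"Accept": "application/yang-data+json"})

print(req.full_url)
print(req.get_header("Accept"))
```

The point is simply that the device exposes structured, model-driven data over HTTP, which is far easier for automation to consume than screen-scraped CLI output.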

It is not in the scope of this blog series to go deep on the details of each network type, but it’s important to keep in mind that the network automation architecture scope includes ALL of them, and we should take their features and limitations into account when determining the right tooling in each component.

There are some books available that will help you to learn about this topic, such as Network Programmability with YANG by Pearson, Network Programmability and Automation Fundamentals by Cisco Press, or Network Programmability and Automation (2nd edition) by O’Reilly.

Once we know what we want to automate, let’s move to how we interact with the network automation solutions.

User Interactions

In the end, a human (user) will interact with the automation solution. We call these entry points User Interactions, and they can take different forms (as different personas will use them). There are multiple options in this space, for instance:

  • Web Portals
  • Dashboards
  • Command Line Interface (CLI)
  • Instant Messaging systems

Each of them covers different use cases, so it is critical to identify who will use the solution in order to choose the proper user interaction, the how. For instance, a network engineer would be happy with a CLI-based tool, but an end user would require a web portal.
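
As a minimal sketch of the CLI flavor, an entry point for a workflow like the firewall-access example could look like this; the flag names are illustrative assumptions:

```python
import argparse

# Hypothetical flags; a real tool would add validation, help text,
# and subcommands for the different workflows it exposes.
parser = argparse.ArgumentParser(
    description="Request network access between two applications"
)
parser.add_argument("--source-app", required=True)
parser.add_argument("--destination-app", required=True)

# Parse a sample invocation instead of sys.argv so the sketch is self-contained.
args = parser.parse_args(["--source-app", "App A", "--destination-app", "App B"])
print(f"{args.source_app} -> {args.destination_app}")
```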

Next, we will discuss the brain of the network automation architecture, the Source of Truth.

Source of Truth

In short, the Source of Truth is where we store, and expose, all the data that defines the intended state of the network. Does it sound obvious? It may be the case if you come from an infrastructure automation background, but it is a radical change to traditional network operations.

Traditionally, networks have been designed in generic diagrams within manually generated documents. This reference is then interpreted by a network engineer into a device’s running configuration, which keeps evolving over time as new designs, features, or changes are added.

The Source of Truth concept is common in any infrastructure automation environment, being the reference point when evaluating whether the operational state matches the desired one. It can take multiple implementations, from centralized to distributed, but it’s always built on top of the concept of a System of Record: the data source that owns a particular piece of data.

The SoT contains different data types in the networking space, as represented in the following diagram.
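
As a toy illustration of the SoT acting as the reference point, here is a sketch that compares intended state against observed operational state; the data shapes are invented for the example:

```python
# Intended state (from the SoT) vs. observed operational state.
intended = {"GigabitEthernet0/1": {"status": "up", "description": "uplink"}}
observed = {"GigabitEthernet0/1": {"status": "down", "description": "uplink"}}

drift = {}
for interface, attrs in intended.items():
    actual = observed.get(interface, {})
    # Keep only the attributes where reality disagrees with intent.
    mismatches = {
        key: {"intended": value, "observed": actual.get(key)}
        for key, value in attrs.items()
        if actual.get(key) != value
    }
    if mismatches:
        drift[interface] = mismatches

print(drift)
```

A real solution would pull both sides from live systems, but the comparison logic is the essence of "evaluating if the operational state is matching the desired one".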

After this brief introduction to the SoT, now it’s time to move to the next component, the Orchestration.


Orchestration

When a network automation solution encompasses more than one task (quite common), there is a need to concatenate multiple tasks. This could range from simple chaining to a complex combination of steps that depend on multiple interactions. Orchestration is what connects the dots in a network automation solution, and it can be implemented in multiple ways: human interactions, task schedulers, programmatic triggers, or listening to an event in a pub/sub paradigm.

One popular implementation paradigm is event-driven network automation, where the automation engine tasks are triggered by events generated internally or externally. This approach offers a scalable and flexible way to interact with multiple automation components.
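
A minimal sketch of the pub/sub idea, with invented event names and handlers; real deployments would use a message broker rather than an in-process queue:

```python
import queue

# Events arrive on a queue; a dispatcher maps event types to tasks.
events = queue.Queue()
audit_log = []


def remediate_interface(event):
    # Stand-in for an automation engine task.
    audit_log.append(f"remediating {event['interface']}")


handlers = {"interface_down": remediate_interface}

# Something (telemetry, a webhook, a schedule) publishes an event.
events.put({"type": "interface_down", "interface": "Ethernet1"})

# The orchestrator consumes events and triggers the matching task.
while not events.empty():
    event = events.get()
    handler = handlers.get(event["type"])
    if handler:
        handler(event)

print(audit_log)
```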

Next, we introduce what people usually associate with network automation solutions: interacting with the network infrastructure via programmatic access, the Automation Engine.

Automation Engine

This component contains all the tasks that interact with the network (via its available interfaces)—usually, to change the state of the network via configuration management processes. It’s important to understand this component not as an isolated one, but as the final executor of the outcome of the other components.

What we define as our intended state in the Source of Truth is what the automation engine will use to create the configuration artifacts necessary to move the network from its current state to the intended state. A common use case is using Jinja templates with CLI commands, where the data variables are rendered for a specific network device.
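
The rendering idea in miniature: real solutions typically use Jinja2, but to keep this sketch dependency-free we use the stdlib string.Template. The principle is the same either way: SoT data plus a template produces a configuration artifact. The interface data below is invented for the example:

```python
from string import Template

# A Jinja-like template for a fragment of device configuration.
template = Template("interface $name\n description $description\n no shutdown")

# Data variables that would come from the Source of Truth.
sot_data = {"name": "GigabitEthernet0/1", "description": "uplink to core"}

config = template.substitute(sot_data)
print(config)
```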

Finally, it’s time to move to the last one, Telemetry and Observability. There, instead of changing the state of the network as we did with the Automation Engine, we will be retrieving and storing the actual operational state.

Telemetry and Observability

Observability goes further than the traditional network monitoring approach: from knowing that something is wrong to understanding why it is wrong. It supersedes traditional monitoring, embracing data-model-driven streaming telemetry to improve operations via better visibility. It starts by collecting operational state from the network and then making it available (with enrichment) to other components within the architecture, for instance to implement closed-loop automation, or simply to expose the information to the user via dashboards or alerts.

It encompasses different types of collected data: metrics, logs, and network flows. The data comes from different interfaces through a data ingestion pipeline. Before storing it, we enrich it with metadata, using context from the Source of Truth, to improve its consumption via User Interactions or to trigger a remediation task via the Automation Engine when an anomaly between the expected and actual state is detected.
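
A small sketch of the enrichment step, with invented metric and metadata fields: a collected metric is tagged with SoT context before being stored, so later consumers can filter by site or device role:

```python
# A raw metric as a collector might emit it (fields are invented).
metric = {"device": "fw1", "name": "session_count", "value": 5231}

# Contextual metadata that would come from the Source of Truth.
sot_metadata = {"fw1": {"site": "nyc", "role": "edge-firewall"}}

# Enrich the metric with the device's metadata before storage.
enriched = {**metric, **sot_metadata.get(metric["device"], {})}
print(enriched)
```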

There is already a blog series about the Telemetry Stack proposed by Network to Code, but an architecture-focused blog on this topic is coming.

Other Considerations

As we already introduced in the first blog, all the components of this architecture are implicitly leveraging software architecture best practices.

For the sake of brevity and cleanliness, the architecture focuses only on the intra-system communications, and not on the multiple potential inter-system communications, that are obviously part of any real implementation. All these API integrations are depicted as the lines between the components in the architecture.

Following the same brevity principle, other considerations such as CI/CD pipelines, scalability, and security constraints are not included in this analysis, but they should be taken into account according to each environment. Only some special integrations will be highlighted, for example a CI pipeline leveraging the Automation Engine for rendering configuration artifacts.

Next Steps

This blog went deeper into introducing the network automation architecture that Network to Code uses. In the coming blogs in this series we will deep dive to offer a more detailed description of each component.

We know it’s not the sole reference available (for instance, a few weeks ago an informative RFC 9315 on Intent-Based Networking was published by the IETF), but it really helps us build effective network automation solutions and improve their understandability, maintainability, and extensibility.

The next blog in this series will describe an actual use case that will leverage this architecture, adopting a network automation mindset.


Christian Adell