Hackathon & Hacktoberfest 2022

Network to Code is proud to announce our first community Hackathon and participation in Hacktoberfest! As with all things we do, we are focusing on empowering the community. And what better way to empower someone than giving them the chance to have fun in open source and hack away at some code? NTC will have a team of advisors standing by to help answer questions, unblock issues, and provide feedback in real time. Now let’s get into the details for the events!

Where to Start

Qualifying projects will be any repo in the Network to Code and Nautobot GitHub organizations. NTC maintainers and leads will take time between now and October 1st to review the backlog of issues for their repos and start adding labels like status: accepted, good first issue, and hacktoberfest. These will be great starting points for ideas on what to hack away at. But what if there’s something you have an idea for and it’s not an issue? Please feel free to add it! What if you have an idea for a new project extending one of the projects NTC maintains (like a new plugin or application)? Please feel free to start a discussion on the project you are extending and mention it in the #hacktoberfest-2022 channel in the NTC community Slack workspace. Do you work best on your own? Perfect! We are excited for your participation! What about forming a team? The more the merrier for the Hackathon! For Hacktoberfest, prizes will be based on the contributions each individual participant makes.

Hackathon

The Hackathon event will be October 14th & 15th. During this time NTC advisors will be dedicating time to help with issues and collaborate with participants in the #hacktoberfest-2022 channel in the NTC community Slack workspace. Times and availability of NTC advisors will be announced later via the Slack channel. With limited time, make sure to brainstorm on ideas that will not only be fun to implement but are also achievable. The Hackathon will have 1st, 2nd, and 3rd place winners that will be selected by NTC.

Hacktoberfest

Hacktoberfest will run for the whole month of October. There’s no need to sign up to participate in Hacktoberfest; all you need to do is submit pull request(s) for a qualifying repo. The NTC advisors will be monitoring the #hacktoberfest-2022 channel and trying to answer questions in a timely fashion. A few of the prize categories will be Most Diverse Contributions (quantity of repos contributed to) and Most Bugs Squashed (pull requests opened and approved).

Presentations & Winners

Presentation submissions for the Hackathon will be open from October 17th until the 24th. Make sure to sing the praises of the work you completed. Once presentation submissions are closed, NTC will begin reviewing and will announce winners on November 4th. Submissions will be shared once the winners have been announced. Hackathon submissions require an associated short video so we can see your work in action! Just add the link to your video in the spreadsheet!

Prizes

Prizes will consist of NTC & Nautobot swag including shirt(s), stickers, mugs, and more. Final prizes will be added to this blog over the next week.

There will be different prizes for each of the following groups.

  • Hacktoberfest
    • New Contributors and all qualifying video submissions
    • Most Diverse Contributions (qty of repos)
    • Most lines of code changed (across repos)
    • Most bugs closed (PRs opened and approved)
    • Most docs updated (PRs opened and approved)
  • Hackathon
    • 1st Place
    • 2nd Place
    • 3rd Place
    • Anyone who submits a video!

Rules

  • Sign up here
  • Contributions are for projects in the Network to Code and Nautobot GitHub organizations.
  • Links to applicable contributions (pull requests, issues, etc.)
    • Pull Requests should be considered “ready to merge”
      • Should adhere to style guide
      • Tests should be passing
      • After contributions are submitted, we will do our best to perform peer reviews in a timely fashion.
      • Feedback from pull request reviews does not have to be resolved for the contribution to be considered, but it must be resolved before the PR can be merged.
  • Be respectful and adhere to our Code of Conduct

Resources

A key item for ensuring success in a Hackathon is having as many resources at your disposal as possible. The resources below are just the tip of the iceberg when it comes to information that can be helpful during the Hackathon. Their use is by no means required, but it is highly recommended.

Phone a Friend

NTC has always had a strong focus on giving back to the community and staying true to our roots. We are not only hosting a public hackathon but will also have advisors from NTC standing by specifically for the October 14th and 15th event to help via Slack and Zoom breakout rooms. In the overarching Hacktoberfest event NTC will be engaged in discussions in the community Slack and is happy to provide guidance.

Zoom link and timing will be posted as pinned links in the Slack channel prior to the Hackathon. If you have any questions, jump into Slack!

Summary

We are all excited for the opportunity to work with the community and see the awesome work you will do!! Please do not forget to sign up!!!

~ Jeremy

Introduction to Python Classes - Part 2

Last week we started a series on Python classes, and this week we’re continuing to talk about class design. We’re going to look at what it means when underscores (_s) appear at the beginning and/or end of a name in Python, and when you might want to use them or avoid using them in your own classes. We’ll be building on the example class introduced last week (below). First, a quick note on terminology: in Python parlance, “dunder” is short for “double underscore”, and it can refer to something with two underscores in front, like __method, or two underscores in front and behind, like __init__ (so one would usually pronounce __init__ “dunder init”).

# ip_man.py
import os

import requests


class IpManager:
    """Class to assign IP prefixes in Nautobot via REST API"""

    def __init__(self):
        self.base_url = "https://demo.nautobot.com/api"
        _token = self._get_token()
        self.headers = {
            "Accept": "application/json",
            "Authorization": f"Token {_token}",
        }

    @staticmethod
    def _get_token():
        """Method to retrieve Nautobot authentication token"""
        return os.environ["NAUTOBOT_TOKEN"]

    def get_prefix(self, prefix_filter):
        """Method to retrieve a prefix from Nautobot

        Args:
            prefix_filter (dict): Dictionary supporting a Nautobot filter

        Returns:
            obj: Requests object containing Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/"
        response = requests.get(url=url, headers=self.headers, params=prefix_filter)
        return response

    def new_prefix(self, parent_prefix_id, new_prefix):
        """Method to add a new prefix within a parent prefix to Nautobot

        Args:
            parent_prefix_id (str): UUID identifying a parent prefix
            new_prefix (dict): Dictionary defining new prefix

        Returns:
            obj: Requests object containing new Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/{parent_prefix_id}/available-prefixes/"
        body = new_prefix
        response = requests.post(url=url, headers=self.headers, json=body)
        return response

    def get_available_ips(self, prefix_id):
        """Method to retrieve unused available IP addresses within a prefix

        Args:
            prefix_id (str): UUID identifying a prefix

        Returns:
            obj: Request object containing list of available IP addresses within Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/{prefix_id}/available-ips/"
        response = requests.get(url=url, headers=self.headers)
        return response

    def delete_prefix(self, prefix_id):
        """Method to delete a Nautobot prefix
        Args:
            prefix_id (str): UUID identifying a prefix

        Returns:
            None

        """
        url = f"{self.base_url}/ipam/prefixes/{prefix_id}/"
        response = requests.delete(url=url, headers=self.headers)
        return response

One Trailing Underscore

One of the most notoriously difficult parts of programming is naming things. I often find myself wanting to name a variable id, hash, pass, or any number of words that are sadly already taken by Python builtins and the standard library. Python gives you the tools of your own demise here: it is possible to shadow (take the name of) builtins. Consider the following code:

>>> s = "Hello World"
>>> print(s)
Hello World
>>> print(id(s))
4421427760
>>> id = 5
>>> print(id(s))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable

While id is a built-in part of Python, there’s nothing stopping me from using id as a variable name. If I do this, and then later in my code I use “id”, I might now be referring to the wrong “id” (depending on the context), and I’ve made my code at least difficult to understand, if not actually buggy. Adding an underscore to the ends of variable names is a common convention to solve this problem, so id would become id_, for example.
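
In a fresh session, here’s a quick sketch of that convention (the printed id value is illustrative and will differ on your machine):

>>> s = "Hello World"
>>> id_ = 5          # trailing underscore avoids shadowing the builtin
>>> print(id(s))     # the builtin id() is still callable
4421427760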

One Leading Underscore

A leading underscore is a design choice that says to users, “Don’t touch this, please.” Methods, variables, etc. with one leading underscore are not considered part of a class or module’s public API. This is a PEP-8 naming convention. In our example class, we named _get_token with a leading underscore. By choosing to start the name with an underscore, we signaled to consumers of our class that they should probably not call this method directly. And that’s appropriate in the case of IpManager, because if someone is importing this class and using it in their code, they’re probably not interested in the functionality that this method provides, but in the other parts of IpManager that provide useful features. Note that this is just a convention; nothing in Python forces methods or variables with one leading underscore to actually be “private,” but users who access these internals directly should be aware that the results may be unpredictable.

Two Leading Underscores

Two leading underscores is a design choice that says to users, “Don’t touch this, please. I really mean it.” It invokes name mangling. These methods and variables can still be accessed by outside code, but the outside code has to work slightly harder. For example:

>>> class A:
...     def __method(self):
...         """A "private" method that we don't want others to call directly."""
...         print("I am a method in A")
...     def method(self):
...         """A "public" method that relies on a private method."""
...         self.__method()
...
>>> obj = A()
>>> obj.method()
I am a method in A
>>> obj.__method()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute '__method'
>>> obj._A__method()
I am a method in A

Notice that while code in the A class itself can call self.__method(), code outside of the class has to use the mangled name obj._A__method in order to access that method directly. Continuing with this example class, if we wrote

>>> class B(A):
...     pass
...
>>> b = B()

then there would be no b.__method, even though b is an instance of B which inherits from A, and therefore has all of A’s methods by default. But b does have _A__method, which is what name mangling buys you. So as a design pattern, this can be used to hide methods, even from inheritor classes.
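
Continuing the session above, we can see both behaviors directly:

>>> b.__method()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'B' object has no attribute '__method'
>>> b._A__method()
I am a method in A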

Two Leading and Two Trailing Underscores

And finally, two underscores in front and two behind indicates a “magic” method or variable. These are usually things that Python objects get for free, but you can reference them (or override them) directly if you need to. Examples include comparison operators like __eq__, and even certain globals like __name__ (of if __name__ == "__main__": fame).

Consider the contents of the IpManager class (above). We defined six methods, including __init__, but if we look at the contents of the class once it’s loaded into the interpreter, there’s much more there:

>>> from ip_man import IpManager
>>> dir(IpManager)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_get_token', 'delete_prefix', 'get_available_ips', 'get_prefix', 'new_prefix']

These magic methods were all applied to our class automatically by Python. The reason for this is that a lot of basic functionality under the hood in Python depends on objects having these methods. For example, the __str__ magic method is what the interpreter checks when it wants to convert an object to a string:

>>> ip_mgr = IpManager()
>>> str(ip_mgr)
'<ip_man.IpManager object at 0x10de09cf0>'
>>> ip_mgr.__str__()
'<ip_man.IpManager object at 0x10de09cf0>'

And other common operations like equality are implemented the same way:

>>> ip_mgr == 1
False
>>> ip_mgr == ip_mgr
True
>>> ip_mgr.__eq__(ip_mgr)
True

Designing a Class

So let’s take what we’ve learned and build a new class! Say we need to write a program to do some logic involving network prefixes. We might decide to write an object-oriented class to represent the concept of a prefix, so that we can then work with a bunch of these objects to figure out things like: Where are they being used in our infrastructure? Do they overlap? Are we aware of prefixes that still need to be imported to Nautobot? etc. The prefix objects we retrieve from Nautobot using IpManager each have an ID and a representation of the prefix in CIDR notation, as well as other properties. For now, our prefix class will keep track of just those two attributes, but we can expand it in the future to cover more attributes as our needs evolve:

class Prefix:
    """Class to represent network prefixes"""

    def __init__(self, id, prefix, **kwargs):
        """Accepts Nautobot prefix dictionaries as input"""
        self.id = id
        self.prefix = prefix

    def __str__(self):
        """Represents this object as a string for pretty-printing"""
        return f"<{self.__class__.__name__} {self.prefix}>"

    def __repr__(self):
        """Represents this object as a string for use in the REPL"""
        return f"<{self.__class__.__name__} (id={self.id}, prefix={self.prefix})>"

    def __eq__(self, other):
        """Used to determine if two objects of this class are equal to each other"""
        return bool(other.id == self.id)

    def __hash__(self):
        """Used to calculate a unique hash for this object, in case we ever want to use it in a dictionary or a set"""
        return hash(self.id)

>>> ip_mgr = IpManager()
>>> r = ip_mgr.get_prefix({"prefix":"10.0.0.0/8"})
>>> nautobot_prefix_data = r.json()["results"]
>>> prefix_objects = [Prefix(**item) for item in nautobot_prefix_data]
>>> prefix_objects
[<Prefix (id=08dabdef-26f1-4389-a9d7-4126da74f4ec, prefix=10.0.0.0/8)>, <Prefix (id=b4e452a2-25b2-4f26-beb5-dbe2bc2af868, prefix=10.0.0.0/8)>]
>>> print(prefix_objects[0])
<Prefix 10.0.0.0/8>

Here we’ve implemented several custom magic methods. There are many more, but there’s no need to override them all if we don’t need them yet. Use of magic methods where appropriate can be a fantastic design pattern, because it lets us implement features in a way that users of our code will be able to take advantage of with little to no special knowledge of our code. For example, if we provided a string representation of a Prefix via a method called Prefix.to_string, that would be great, but by using Prefix.__str__ we allow people to simply call str(prefix) or print(prefix) and have it work as expected.
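
And because we implemented __eq__ and __hash__, Prefix objects already work in sets and as dictionary keys with no extra effort; continuing the session above, duplicates collapse automatically:

>>> len(prefix_objects + prefix_objects)
4
>>> len(set(prefix_objects + prefix_objects))
2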

Note also that even though id in the __init__ method shadows the builtin id, I chose to leave it there instead of renaming it to id_ because in this case, we want to accept prefix config from Nautobot, and it uses id, not id_, so I made a design choice to keep the class easy to use. If I were going to send this code through a linter, I might have to add a comment explaining that this isn’t a problem in this case. But if at some point in the future I wrote some code further down in the same module that needed to use the builtin id, I’d be setting myself up for failure.
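
For example, with pylint that comment might look like the following (a hypothetical sketch; the exact rule name depends on your linter):

    def __init__(self, id, prefix, **kwargs):  # pylint: disable=redefined-builtin
        """Accepts Nautobot prefix dictionaries as input"""
        self.id = id
        self.prefix = prefix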

Conclusion

I hope this clears up some of the confusion around the special meaning of underscores on the edges of Python names. They can be used to avoid name clashes, indicate private methods and variables, and implement magic methods. Magic methods can be incredibly powerful and should be considered when designing any class. The Prefix class above was just an example, but if you need a class like this, consider writing a more robust version and contributing it to Netutils so the whole community can benefit!

This series continues next week with a discussion of packaging — until then!

-Micah

Network Configuration Templating with Ansible - Part 2

In Part 1 of this series on configuration templating, we looked at how to take some basic configurations and extract values that can be made into variables based on different data points, such as location or device type. Now that you have a foundation for extracting data from configurations in order to create a list of configuration variables, how do you use this information to generate configurations? The next step is to use these variables to programmatically generate the corresponding configuration files. To do this, we use a templating language called Jinja2.

Jinja2

Jinja2 is a way to take template files (.j2 extension) based on the original text of a file and perform replacements of sections, lines, or even individual characters within the configuration based on a set of structured data (variables). To distinguish plain text from variable sections in the configuration, Jinja2 uses curly braces with percent signs to “codify” sections of the text, and double curly braces to denote variables to inject into the text.

Template Files

For example, if we look at a stripped-down version of the YAML variables from the first example in part 1 of this blog series (variables.yaml), and create a Jinja2 template file called template.j2 as follows:

# variables.yaml
ntp:
  servers:
    - ip: "1.1.1.1"
    - ip: "1.0.0.1"
    - ip: "8.8.8.8"
# template.j2
hostname {{ inventory_hostname }}
{% for server in ntp["servers"] %}
ntp server {{ server["ip"] }}
{% endfor %}

Running this template through the Jinja2 engine would yield the following text:

# result.cfg
hostname router1
ntp server 1.1.1.1
ntp server 1.0.0.1
ntp server 8.8.8.8

You may be wondering where router1 came from in the resulting configuration. inventory_hostname is a built-in Ansible variable that references the hostname of the device (from the Ansible inventory) that is currently being worked on. See Special Variables - Ansible for more information.

We can see the utilization of a for statement in {% for server in ntp["servers"] %} that will loop through all the server objects in the YAML file, fill in the {{ server["ip"] }} variable, and generate a complete line for each of the server IP addresses in the YAML data. If you are familiar with Ansible and Python, the variable syntax will look similar when working with lists and dictionaries in Jinja2. Also, note that code sections have both an opening and closing set of braces and percent signs: {% for x in y %} and {% endfor %}. The text and variables inside these two statements are what will get acted upon by the Jinja2 engine. By carefully placing these, you can be very specific about which portions of the config get templated versus just being passed through the engine verbatim.

Placement and Spacing Are Important

If we change the template.j2 file (same YAML file) to look like the following example instead, there will be a completely different result. In some configurations, the config syntax puts all the server IPs on the same line. Note that Jinja2 is very particular about spacing and indentation. Spacing and indentation will be the same as laid out in the Jinja2 template file. (Notice the space after {{ server["ip"] }} to get spaces between the IPs.)

# template.j2
ntp server {% for server in ntp["servers"] %}{{ server["ip"] }} {% endfor %}
# result.cfg
ntp server 1.1.1.1 1.0.0.1 8.8.8.8

So, placement of the code blocks can be very flexible and allow for just about any combination of raw text and structured data to be combined.
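
If you want to experiment with a template before wiring it into Ansible, the following is a minimal sketch that renders template.j2 with the jinja2 and PyYAML libraries directly. Note that inventory_hostname is hardcoded here because it normally comes from Ansible’s inventory, and trim_blocks is enabled to match the default behavior of the ansible.builtin.template module:

# render.py
import yaml
from jinja2 import Environment, FileSystemLoader

# Load the structured data (variables) from the YAML file
with open("variables.yaml") as f:
    data = yaml.safe_load(f)

# trim_blocks removes the newline after each {% ... %} tag, as Ansible does by default
env = Environment(loader=FileSystemLoader("."), trim_blocks=True)
template = env.get_template("template.j2")

# inventory_hostname is normally supplied by Ansible; hardcoded for this sketch
print(template.render(inventory_hostname="router1", **data))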

Playbook

Now that we understand how to work with the structured data/variables, and how to build the Jinja2 template files, we can write an Ansible playbook to generate the configuration snippet. We assume Ansible is already installed on your machine for this.

For this demo, it’s assumed that your file structure is flat (no folders) with all files in the same folder, with the exception of the configs folder, which is where the generated configurations will be placed by Ansible. We will use the same template.j2 file that was used in the beginning of this post to generate NTP configurations for three routers.

File Structure

(base) {} ansible tree
.
├── configs
├── inventory
├── playbook.yaml
├── template.j2
└── variables.yaml
# inventory
router1
router2
router3
# playbook.yaml
- name: Template Generation Playbook
  hosts: all
  gather_facts: false
  vars_files:
    - ./variables.yaml

  tasks:
    - name: Generate template
      ansible.builtin.template:
        src: ./template.j2
        dest: ./configs/{{ inventory_hostname }}.cfg
      delegate_to: localhost

We’ll run the playbook with the command ansible-playbook -i inventory playbook.yaml, and we should see three new files in the configs folder. When not connecting to devices, it is important to use the delegate_to option; otherwise Ansible will try to SSH to the devices in your inventory and attempt to do the templating there. This normally doesn’t work for network devices, so we have the Ansible control host generate the template files itself.

Playbook output:

(base) {} ansible ansible-playbook -i inventory playbook.yaml

PLAY [Template Generation Playbook] ***********************************************************************************************************************

TASK [Generate template] ***********************************************************************************************************************
changed: [router1 -> localhost]
changed: [router3 -> localhost]
changed: [router2 -> localhost]

PLAY RECAP ***********************************************************************************************************************
router1                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router2                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router3                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

We now see three new files in the configs folder:

(base) {} ansible tree
.
├── configs
│   ├── router1.cfg
│   ├── router2.cfg
│   └── router3.cfg
├── inventory
├── playbook.yaml
├── template.j2
└── variables.yaml

If we open up one of the .cfg files, we’ll see the contents are all the same, aside from the hostname, which is specific to the device. This is because we used the inventory_hostname variable in the Jinja2 template.

# router1.cfg
hostname router1
ntp server 1.1.1.1
ntp server 1.0.0.1
ntp server 8.8.8.8
# router2.cfg
hostname router2
ntp server 1.1.1.1
ntp server 1.0.0.1
ntp server 8.8.8.8
# router3.cfg
hostname router3
ntp server 1.1.1.1
ntp server 1.0.0.1
ntp server 8.8.8.8

It is possible to do even more complex variable replacements when variable inheritance/hierarchy is used, which will be discussed later in this series. For now, you should be mostly comfortable with generating basic templates and external variable files. This method can also be extended with Ansible fact-gathering tasks or TextFSM parsing to gather “live” values from devices to further enrich templates.

Wrap-up

In Part 3 of this series, we will be covering macros and other Jinja2 functions, and how to use them in your templates.

In Part 4 of this series, we will cover advanced templating with variable inheritance. This is how you can assign different values to variables based on a set of predefined criteria, such as location, device type, or device function.

Note

There are also a couple of pieces of software for running these templating tests without writing code. This way it’s possible to test even before a decision is made on how the templating will actually be run (using Python, Ansible, etc.). The first one is j2live.ttl255.com, written by our very own @progala! Also noteworthy: TD4A - GitHub or TD4A - Online.

-Zach

Guest Blog: Network Automation with Nautobot & IP Fabric

At Cisco Live US 2022, in glittering Las Vegas, IP Fabric had the pleasure of hosting Network to Code at our booth to showcase the integration of Nautobot ChatOps with our network assurance capability, with the result of making invaluable network insights easily accessible to the teams that need them.

Christian Adell from Network to Code Presents Nautobot

We’re bringing this showcase to your screens with a brand-new webinar at the end of September, getting into the details of just how this integration can help you create a holistic network automation solution. Read on for a preview of what you can expect in the webinar, hosted by Paddy Kelly, Managing Consultant at Network to Code, and Daren Fulwell, Product Evangelist at IP Fabric.

What is your observed network state, and how can you get an accurate representation of it?

Your observed network state is the very real inventory, configuration, forwarding behavior, and topology of your network at a particular point in time. In most cases, we track this information manually, in spreadsheets, Word documents, and Visio diagrams, with data gathered by hand or using custom scripts. So we need to trust that every time a change happens in the network, the responsible person updates the documentation immediately and to a consistent level of detail.

That’s no way to prepare to make confident decisions about an enterprise network.

With IP Fabric’s automated network assurance platform, network snapshots taken on your schedule or on-demand ensure that you always have an accurate, up-to-date visualization of your actual network state as it is. This allows you to answer the question “what changed?” easily from day to day. With Nautobot ChatOps, you can ensure you have access to this information through the chat platform of your preference (e.g., MS Teams, Slack, Webex, and Mattermost) making the wealth of knowledge contained in your network snapshot readily available to all teams who need it.

What about intended network state?

With your observed network state taken care of, you know where you are coming from. Next up, you need to know where you are going. This is your intended network state: a set of business outcomes translated into an ideal network state that would support them, captured within a network source of truth that you can measure your actual network state against. Having these two elements in place means that you have a goal to meet and a benchmark of where you are, so that each decision about your network can be made with the motivation to move your network state closer to your goal.

In the upcoming webinar, our presenters will show how Nautobot and IP Fabric integrate to support the pursuit of a single, aligned source of truth. The webinar will showcase and discuss the Nautobot Single Source of Truth application that allows users to synchronize their data between IP Fabric and Nautobot—in the direction that makes sense for their business.

How can this data help your entire organization?

Security operations teams, cloud teams, or even leadership often need answers about the state of the network, which can materially affect their interests. Without a self-service way of accessing this network data, they must go through the network team, pulling focus and taking time away from projects and workflows. Not to mention that in our continuously globalized work environments, this significantly hampers asynchronous work.

And how does that collaboration work?

If you could ask your network anything, what would be your first question? That answer likely depends on what the latest incoming high-priority trouble ticket was. Where do you find a particular PC? Can it reach the application the user needs to access for their day-to-day work? Are there any issues with routing protocols stopping it from talking to the services it needs?

Do you need to identify the location of a host on your network and detail how it is connected? Use the /ipfabric find-host command along with an IP or MAC address to get host information and outline key components of the host’s entry point onto the network.

Assessing what devices are reaching end of life for network refresh planning? Get an accurate and up-to-date network inventory using /ipfabric get-inventory and filter inventory assets based on site, model, vendor, or platform.

This is just a glimpse into the possibilities – you can visit the Nautobot ChatOps plugin repository to suggest more commands, create a feature request or discussion, or even open a Pull Request!

Inject automation for a new way of operating the network.

With IP Fabric and Nautobot ChatOps, you can get the answer to all this and more – all within your preferred chat platform.

The Nautobot ChatOps Framework provides a way to efficiently communicate and collaborate with Operational Support Systems and IT tools. IP Fabric supports complex multi-vendor network discovery and gives full visibility into inventory, config, topology, and behavior. Whoever needs these answers can easily get them without reliance on your network team. Democratizing important network data by making it easy to access can improve efficiency and foster harmony across your organization.

Join us on September 29th, where we’ll demonstrate exactly how to make both the SSoT and ChatOps Nautobot integrations work for your network environment. You can join the webinar here!

-Daren

Introduction to Python Classes - Part 1

In this blog series we are going to provide an introduction to Python classes. The first blog will look at what classes are and how we can create and use a custom class. The second blog will explain the Python approach to private methods and variables, including the use of underscores when naming class attributes and methods. The final blog describes creating Python packages to help organize our code and promote reuse.

Python Classes/Objects

In Python you will interact with objects in almost all code you develop. A Python object is a collection (or encapsulation) of related variables (known as attributes) and functions (methods). The attributes define the object’s state, and we use the methods to alter that state. If you are familiar with Python, you have already worked with Python objects, for example ‘str’, ‘int’, ‘list’, ‘dict’. Every object has a type, and we use Python classes to create new objects. For example, list() in Python is a class. And when we create a new list, in Python terminology we would say we are creating an instance (object) of the list class.

Let’s look at an example.

>>> routers = ['R1', 'R2', 'R3']
>>> type(routers)
<class 'list'>
>>> 

Here we have created a new list called routers and can see that its type is class ‘list’. In other words, ‘routers’ is an instance (object) of the ‘list’ class.

If we want to see all the methods of the list class, we can use the dir() function.

>>> dir(routers)
['__add__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
>>> 

You will notice there are many methods associated with a list. The methods with two leading and two trailing underscores are known as special/magic methods. The second blog in this series will discuss these in further detail. For now it is important to know these methods have special meaning internal to a class and are not intended to be called directly. This leaves us with a number of ‘normal’ methods that you may be familiar with that allow us to alter our object’s state (i.e., the elements of the list).

For example, to add an item to our list, we can call the ‘append’ method:

>>> routers.append('R4')
>>> routers
['R1', 'R2', 'R3', 'R4']
>>> 

Or to delete an element from our list, we can call ‘pop’:

>>> routers.pop(1)
'R2'
>>> routers
['R1', 'R3', 'R4']
>>> 

We have seen an example of creating objects using a standard library class, but what if we want to create our own custom objects? First, we must define a class using the ‘class’ statement. The class is a template (or blueprint) for creating new objects, and it is here that we define the attributes and methods of our new class.

Advantages of Classes

Before we take a look at creating our own custom class, it is worthwhile listing the primary advantages of doing so.

  1. Classes provide a mechanism for grouping related variables and functions, especially useful when you have a requirement to manage and act upon data (state).
  2. Grouping of related functions helps promote modularity, readability, and reuse.
  3. Further promoting code reuse is the ability to use inheritance, whereby a new class can inherit the properties of an existing class, an important feature of Object Oriented Programming.
  4. When using inheritance, you can override any inherited method by defining a method with the same name (known as method overriding). This allows you to reuse code while also customizing methods to suit your own specific needs (see the short sketch after this list).
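
A small illustration of points 3 and 4, using hypothetical classes:

class Router:
    """Base class providing a default implementation."""

    def vendor(self):
        return "generic"


class CiscoRouter(Router):
    """Inherits everything from Router and overrides the vendor method."""

    def vendor(self):
        return "cisco"

>>> CiscoRouter().vendor()
'cisco'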

How to Define a Class

To create our own custom object, we use the ‘class’ statement. For this example, we are going to create a class to reserve IP prefixes in Nautobot using the Nautobot REST API.

Our use case for this new Python class is to automate the reservation of prefixes to be used for later deployment on network devices. In our example we have assigned a parent prefix to each site for point-to-point connectivity (e.g., network device interconnects). Each child prefix in this parent has a prefix length of /31, allowing for two usable IP addresses. In order to assign an unused /31 point-to-point prefix, our class must support the following methods:

  1. Locate the parent prefix with options to filter by tenant, site, role (e.g., ‘point-to-point’), status, etc.
  2. Create a new unassigned prefix within the parent prefix container.
  3. List all available IP addresses within a child prefix.
  4. A delete option to support the decommissioning of prefixes.

With these objectives in mind, let’s take a look at our new class.

As a best practice, docstrings are used on our class and methods. The docstring can be accessed via the ‘__doc__’ special attribute, for example, print(IpManager.get_prefix.__doc__).

import os

import requests


class IpManager:
    """Class to assign IP prefixes in Nautobot via REST API"""

    def __init__(self):
        self.base_url = "https://demo.nautobot.com/api"
        _token = self._get_token()
        self.headers = {
            "Accept": "application/json",
            "Authorization": f"Token {_token}",
        }
        
    @staticmethod
    def _get_token():
        """Method to retrieve Nautobot authentication token"""
        return os.environ["NAUTOBOT_TOKEN"]

    def get_prefix(self, prefix_filter):
        """Method to retrieve a prefix from Nautobot

        Args:
            prefix_filter (dict): Dictionary supporting a Nautobot filter

        Returns:
            obj: Requests object containing Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/"
        response = requests.get(url=url, headers=self.headers, params=prefix_filter)
        return response

    def new_prefix(self, parent_prefix_id, new_prefix):
        """Method to add a new prefix within a parent prefix to Nautobot

        Args:
            parent_prefix_id (str): UUID identifying a parent prefix
            new_prefix (dict): Dictionary defining new prefix

        Returns:
            obj: Requests object containing new Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/{parent_prefix_id}/available-prefixes/"
        body = new_prefix
        response = requests.post(url=url, headers=self.headers, json=body)
        return response

    def get_available_ips(self, prefix_id):
        """Method to retrieve unused available IP addresses within a prefix

        Args:
            prefix_id (str): UUID identifying a prefix

        Returns:
            obj: Request object containing list of available IP addresses within Nautobot prefix

        """
        url = f"{self.base_url}/ipam/prefixes/{prefix_id}/available-ips/"
        response = requests.get(url=url, headers=self.headers)
        return response

    def delete_prefix(self, prefix_id):
        """Method to delete a Nautobot prefix
        Args:
            prefix_id (str): UUID identifying a prefix

        Returns:
            None

        """
        url = f"{self.base_url}/ipam/prefixes/{prefix_id}/"
        response = requests.delete(url=url, headers=self.headers)
        return response

As you can see, we have a number of methods defined to meet our objectives. Class methods are defined using the ‘def’ statement as we would when defining Python functions. You may have also noticed each method has a first argument of ‘self’, and we have defined an interesting-looking method called ‘__init__’. Let’s take a closer look at both of these.

The ‘self’ Argument

The ‘self’ argument of class methods has special meaning and, when used, is always the first argument. Class methods operate on an instance of the class. To enable this, it is necessary to pass the instance as an argument to each method. This is the purpose of the ‘self’ argument. We could give this argument any arbitrary name, but it is advisable to stick to the convention of using ‘self’. Having passed ‘self’ as an argument, each method has access to all of the instance’s attributes. An example can be seen in our code, where ‘self.headers’ is declared in __init__ and used in other methods.

A class method that omits the ‘self’ argument (and is marked with the @staticmethod decorator, as in our code) is known as a static method. A static method is loosely coupled to the class and does not require access to the class instance. In our code, ‘_get_token()’ is an example of a static method.

The __init__ Method

def __init__(self):
        self.base_url = "https://demo.nautobot.com/api"
        _token = self._get_token()
        self.headers = {
            "Accept": "application/json",
            "Authorization": f"Token {_token}",
        }

__init__ is an example of a special/magic method and has special meaning. The double underscore is called a “dunder” and will be covered in the second post of this series. Whenever we instantiate an object, the code in the dunder init method is executed to initialize the state of the new object. In our IpManager class, the dunder init method sets the base_url, retrieves the authentication token, and sets the HTTP headers.

As a simple example, we can add a print statement to a class to demonstrate the execution of dunder init code on instantiation:

>>> class SimpleClass():
...     def __init__(self):
...         print("Class Instance created")
... 
>>> a = SimpleClass()
Class Instance created
>>> 

Variables

You will notice two of the dunder init variables are prefixed with self and one is not. A Python class can have class, instance, or local variables. The difference between each type is the namespace they operate in.

  • A class variable will persist across all instances of the class.
  • An instance variable is prefixed ‘self.’ and is significant to each instance of the class independently.
  • A local variable (for example _token) is significant only within the function in which it is declared (see the short sketch after this list).
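
A short hypothetical sketch of the three variable types:

class Example:
    platform = "ios"  # class variable: persists across all instances

    def __init__(self, hostname):
        _normalized = hostname.lower()  # local variable: only exists inside this method
        self.hostname = _normalized  # instance variable: independent per instance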

Using Our New Class

We will now demonstrate use of our new class using the Python REPL.

Import and Instantiate the Class

First we must import our class from our newly created module (ip_man.py) and create a new instance. We use the dir() function on our object to list its attributes and methods as created in our class above.


>>> from ip_man import IpManager
>>> ip_mgr = IpManager()
>>> dir(ip_mgr)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_get_token', 'base_url', 'delete_prefix', 'get_available_ips', 'get_prefix', 'headers', 'new_prefix']
>>> 

For this exercise, we are using the demo instance of Nautobot found at https://demo.nautobot.com. The goal is to assign a new /31 point-to-point prefix in the ‘ATL01’ site of the ‘Nautobot Airports’ tenant. The container prefix 10.0.192.0/18 is assigned to this site for point-to-point prefixes. From the output below we can see the most recent assignment is 10.0.192.34/31.

Nautobot Prefixes

Retrieve the Parent Prefix

To create a new prefix, we need the prefix_id of the parent prefix. For this we will create a dictionary defining the filters necessary to retrieve the parent prefix. A status code of ‘200’ signifies our API call was successful.

>>> prefix_details = {
...     "tenant": "nautobot-airports",
...     "site": "atl01",
...     "role": "point-to-point",
...     "status": "container",
... }
>>> prefix = ip_mgr.get_prefix(prefix_details)
>>> prefix.status_code
200
>>> prefix.json()['results'][0]['display']
'10.0.192.0/18'
>>> 

Note the ‘results’ object is a list. It is possible Nautobot will return multiple prefixes matching the filters, each a separate element in the list. In such a scenario additional logic is required to determine which parent prefix has availability to add a new prefix.
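
For example, a hypothetical guard (variable names are illustrative) before using the result:

>>> results = prefix.json()["results"]
>>> if len(results) != 1:
...     raise ValueError(f"Expected one matching parent prefix, got {len(results)}")
...
>>> parent_prefix_id = results[0]["id"]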

Create a New Prefix

We now call the new_prefix method to create a new prefix. As per our class code, this method uses Nautobot’s available-prefixes endpoint to create a new prefix within the parent. A ‘201’ status code signifies a new prefix has been successfully created. As seen from the REPL output and Nautobot, the next available prefix 10.0.192.36/31 was created.

>>> new_prefix_details = {
...     "prefix_length": 31,
...     "tenant": prefix.json()["results"][0]["tenant"]["id"],
...     "site": prefix.json()["results"][0]["site"]["id"],
...     "role": prefix.json()["results"][0]["role"]["id"],
...     "is_pool": True,
...     "status": "p2p",
... }
>>> new_prefix = ip_mgr.new_prefix(prefix.json()["results"][0]["id"], new_prefix_details)
>>> new_prefix.status_code
201
>>> new_prefix.json()['display']
'10.0.192.36/31'
>>> 

New Prefix

List Available IP Addresses in New Prefix

Using the ‘get_available_ips’ method, we can list the IP addresses available in our new prefix for assignment to network devices. As expected, we have two IP addresses available in the /31 subnet.

>>> available_ips = ip_mgr.get_available_ips(new_prefix.json()["id"])
>>> available_ips.status_code
200
>>> available_ips.json()
[{'family': 4, 'address': '10.0.192.36/31', 'vrf': None}, {'family': 4, 'address': '10.0.192.37/31', 'vrf': None}]
>>>

Tidy Up - Delete a Prefix

In the future if you want to decommission a prefix, you must first retrieve the prefix_id before calling the delete_prefix method. Let’s return Nautobot to the state we found it in by deleting our newly created prefix via the ‘delete_prefix’ method. A status code of ‘204’ confirms the deletion was successful.

>>> prefix_details = {"prefix": "10.0.192.36/31"}
>>> prefix = ip_mgr.get_prefix(prefix_details)
>>> del_response = ip_mgr.delete_prefix(prefix.json()["results"][0]["id"])
>>> del_response.status_code
204
>>>

Conclusion

I hope you found this introduction to Python classes helpful. If you are interested in building a module to interact with Nautobot via REST API, next steps to improve the module could include assigning IP addresses within a prefix to network device interfaces within Nautobot.

Having created this module, we can reuse the code in our Python projects. Examples include assigning prefixes as part of new branch deployment or adding/modifying infrastructure in a Data Center which may require loopback, point-to-point, and server prefixes. Stay tuned for the next blog post in the series, where we describe in more depth the use and meaning of underscores in variable and method naming.

-Nicholas

Nautobot Application: BGP Models

We are happy to announce the release of a new application for Nautobot. With this application, it’s now possible to model your ASNs and BGP Peerings (internal and external) within Nautobot!

This is the first application of the Network Data Models family, which gave us a great opportunity to test some new capabilities of the application framework introduced by Nautobot. Data modeling is an interesting exercise, and with BGP being a complex ecosystem, this has been an interesting project. This blog will present the application and some of the design principles that we had in mind when it was developed.

New Routing Menu

The development of this application was initially sponsored by the Riot Direct team at Riot Games. Thanks to them for contributing it back to the community.

Overview

This application adds the following new data models into Nautobot:

  • BGP Routing Instance: device-specific BGP process
  • Autonomous System: network-wide description of a BGP autonomous system (AS)
  • Peer Group Template: network-wide template for Peer Group objects
  • Peer Group: device-specific configuration for a group of functionally related BGP peers
  • Address Family: device-specific configuration of a BGP address family (AFI-SAFI)
  • Peering and Peer Endpoints: a BGP Peering is represented by a Peering object and two endpoints, each representing the configuration of one side of the BGP peering. A Peer Endpoint must be associated with a BGP Routing Instance.
  • Peering Role: describes the valid options for PeerGroup, PeerGroupTemplate, and/or Peering roles

With these new models, it’s now possible to populate the Source of Truth (SoT) with any BGP peerings, internal or external, regardless of whether both endpoints are fully defined in the Source of Truth.

The minimum requirement to define a BGP peering is two IP addresses and one or two autonomous systems (one ASN for iBGP, two ASNs for eBGP).

Peering

Autonomous Systems

Peer Endpoint

Peer Group

Peering Roles

Installing the Application

The application is available as a Python package in PyPI and can be installed atop an existing Nautobot installation using pip:

$ pip3 install nautobot-bgp-models

This application is compatible with Nautobot 1.3.0 and higher.

Once installed, the application needs to be enabled in the nautobot_config.py file:

# nautobot_config.py
PLUGINS = [
    # ...,
    "nautobot_bgp_models",
]

Design Principles

BGP is a protocol with a long and rich history of implementations. Because we understood the existing limitations of data modeling relevant to this protocol, we had to find the right solutions, balancing innovation and improvement. In this section we explain our approach to the BGP data models.

Network View and Relationship First

One of the advantages of a Source of Truth is that it captures how all objects are related to each other and then exposes those relationships via the UI and API, making it easy for users to consume that information.

Instead of modeling a BGP session from a device point of view with a local IP address and a remote IP address, we chose to model a BGP peering as a relationship between two endpoints. This way, each endpoint has a complete understanding of what is connected on the other side, and information won’t be duplicated when a session between two devices exists in the SoT.

This design also accounts for external peering sessions where the remote device is not present in Nautobot, as is often the case when you are peering with a transit provider.

Start Simple

For the first version we decided to focus on the main building blocks that compose a BGP peering. Over time the BGP application will evolve to support more information: routing policy, community, etc. Before increasing the complexity we’d love to see how our customers and the community leverage the application.

Inheritance

Many Border Gateway Protocol implementations are based on the concept of inheritance. It’s possible to centralize almost all information into a Peer Group Template model, and all BGP endpoints associated with this Peer Group Template will inherit all of its attributes.

The concept is very applicable to automation, and we wanted to have a similar concept in the SoT. As such, we implemented an inheritance system between some models:

  • A PeerGroup inherits from PeerGroupTemplate.
  • A PeerEndpoint inherits from PeerGroup, PeerGroupTemplate, BGPRoutingInstance.

As an example, a PeerEndpoint associated with a PeerGroup will automatically inherit attributes of the PeerGroup that haven’t been defined at the PeerEndpoint level. If an attribute is defined on both, the value defined on the PeerEndpoint will be used.

(*) Refer to the application documentation for all details about the implemented inheritance pattern.

The inherited values will be automatically displayed in the UI and can be retrieved from the REST API with the additional ?include_inherited=true parameter.

Peer Endpoint with Inheritance
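
As a rough sketch of what that request could look like with the requests library (the endpoint path, UUID, and token below are illustrative assumptions; only the include_inherited parameter comes from the application’s documentation):

import requests

# Illustrative URL and token; check the application's API docs for the exact route
url = "https://demo.nautobot.com/api/plugins/bgp/peer-endpoints/<uuid>/"
headers = {"Accept": "application/json", "Authorization": "Token <token>"}
response = requests.get(url, headers=headers, params={"include_inherited": "true"})
print(response.json())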

Extra Attributes

Extra attributes allow you to describe the models provided by the application with additional information. We made a design decision to allow application users to abstract their configuration parameters and store contextual information in this special field. What makes it very special is the support for inheritance. Extra attributes are not only inherited but also intelligently deep-merged, thus allowing for inheriting and overriding attributes from related objects.

Integration with the Core Data Model

With Nautobot, one of our goals is to make it easy to extend the data model of the Source of Truth, not only by making it easy to introduce new models but also by allowing applications to extend the core data model. In multiple places, the BGP application is leveraging existing Core Data models.

Extensibility

We designed the BGP models to provide a sane baseline that will fit most of the use cases, and we encourage everyone to leverage all the extensibility features provided by Nautobot to store and organize the additional information that you need under each model or capture any relationship that is important for your organization.

All models introduced by this application support the same extensibility features supported by Nautobot, which include:

  • Custom fields
  • Custom links
  • Relationships
  • Change logging
  • Custom data validation logic
  • Webhooks in addition to the REST API and GraphQL.

An example can be seen in the Nautobot Sandbox where a relationship between a circuit and a BGP session was added to track the association between a BGP session and a given circuit.

Next Steps

More information on this application can be found at Nautobot BGP Plugin. You can also get a hands-on feel by visiting the public Nautobot Sandbox.

As usual, we would like to hear your feedback. Feel free to reach out to us on Network to Code’s Slack Channel!

-Damien & Marek

Network Configuration Templating with Ansible - Part 1

When discussing network automation with our customers, one of the main concerns that comes up is the ability to audit their device configurations. This becomes especially important during the last quarter of the year, as many corporations are going through their yearly audit to obtain their required approvals for PCI or other compliance standards. Our solution for that is to use the Golden Configuration application for Nautobot, but it’s also entirely possible to use simple Ansible playbooks to perform the audit. This blog series will go over the essential pieces to understanding network configuration templating using Ansible, but the same process can easily be translated for use with Nautobot.

To start templating your configuration you must identify the feature that you wish to audit. Whether it be your DNS or NTP settings, it’s usually easier to start with small parts of the configuration before moving on to the more complicated parts, such as routing or interfaces. With a feature selected, you can start reviewing the device configurations to create your templates. For this article, we’ll use NTP configuration from an IOS router as the chosen feature:

ntp server 1.1.1.1 prefer
ntp server 1.0.0.1
ntp server 8.8.8.8
clock timezone GMT 0
clock summer-time CET recurring

After you’ve identified the portions of the configuration that you wish to template for the feature, the next step is to review the configuration snippet(s) and identify the variables relevant to the configuration feature. Specifically, you want to extract only the non-platform-specific variables, as the platform-specific syntax should be part of your template with the variables abstracted away for use across platforms. Using the example above, we can extract the following bits of information:

  • three NTP server hosts
    • 1.1.1.1
    • 1.0.0.1
    • 8.8.8.8
  • preferred NTP server
    • 1.1.1.1 is preferred
  • time zone and offset
    • GMT
    • 0
  • daylight saving timezone
    • CET

With these variables identified, the next step is to define a schema for these variables to be stored in. For Ansible this is typically a YAML file used as host or group vars. As YAML is limited in the types of data it can represent (typically lists and key/value pairs), it’s best to design the structure around that limitation. With the example above, we’d want to have a list of the NTP servers as one item with a key noting which is preferred, the timezone with offset, and the daylight saving timezone. One potential schema is shown below:

---
# file: group_vars/all.yml
ntp:
  servers:
    - ip: "1.1.1.1"
      prefer: true
    - ip: "1.0.0.1"
      prefer: false
    - ip: "8.8.8.8"
      prefer: false
  timezone:
    zone: "GMT"
    offset: 0
    dst: "CET"

Defining this structure is important, as it will need to be flexible enough to cover data for all platforms while also being simple enough that your templates don’t become complicated. You’ll want to ensure that all other variables for this feature follow the same structure to ensure compatibility with the Jinja2 templates you’ll be creating in future parts of this series. It’s possible to utilize something like the Schema Enforcer framework to enforce your schemas against newly added data. This gives you a level of trust that the data provided to the templates is in the right format.
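
As a rough illustration of what such enforcement looks like, here is a minimal sketch that validates the NTP variables with the jsonschema library directly (the schema definition itself is an assumption based on the structure above; Schema Enforcer would wrap a similar definition):

import yaml
from jsonschema import validate

# A schema describing the ntp variable structure defined above
schema = {
    "type": "object",
    "properties": {
        "ntp": {
            "type": "object",
            "properties": {
                "servers": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "ip": {"type": "string"},
                            "prefer": {"type": "boolean"},
                        },
                        "required": ["ip"],
                    },
                },
            },
            "required": ["servers"],
        },
    },
    "required": ["ntp"],
}

with open("group_vars/all.yml") as f:
    validate(instance=yaml.safe_load(f), schema=schema)  # raises ValidationError on bad data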

The next step, once the variables have been defined and you’ve determined a structure for them, is to understand where they belong within your network configuration hierarchy. This means that you need to understand in which circumstances these values are considered valid. Are they globally applicable to all devices or only to a particular region? This will define whether you place the variables in a device-specific variable file or a group-specific one, and, if in a group, which group. This is especially important, as where you place the variables will define which devices inherit them and will use them when it comes time to generate configurations. For this article, we’ll assume that these are global variables and would be placed in the all group vars file. With this in mind, you’ll want to start building your inventory with those variable files. Following the Ansible Best Practices, it’s recommended to have a directory layout like so:

inventory.yml
group_vars/
    all.yml
    routers.yml
    switches.yml
host_vars/
    jcy-rtr-01.infra.ntc.com.yml
    jcy-rtr-02.infra.ntc.com.yml

This should allow for a clear and quick understanding of where the variables sit in relation to your network fleet, which will become increasingly important as you build out more templates and add variables. With your inventory structure built out, you can validate that the variables are assigned to your devices as expected with ansible-inventory -i inventory.yml --list, which will return the variables assigned to each device like so:

{
    "_meta": {
        "hostvars": {
            "jcy-rtr-01.infra.ntc.com": {
                "ntp": {
                    "servers": [
                        {
                            "ip": "1.1.1.1",
                            "prefer": true
                        },
                        {
                            "ip": "1.0.0.1",
                            "prefer": false
                        },
                        {
                            "ip": "8.8.8.8",
                            "prefer": false
                        }
                    ],
                    "timezone": {
                        "dst": "CET",
                        "offset": 0,
                        "zone": "GMT"
                    }
                }
            },
            "jcy-rtr-02.infra.ntc.com": {
                "ntp": {
                    "servers": [
                        {
                            "ip": "1.1.1.1",
                            "prefer": true
                        },
                        {
                            "ip": "1.0.0.1",
                            "prefer": false
                        },
                        {
                            "ip": "8.8.8.8",
                            "prefer": false
                        }
                    ],
                    "timezone": {
                        "dst": "CET",
                        "offset": 0,
                        "zone": "GMT"
                    }
                }
            }
        }
    },
    "all": {
        "children": [
            "routers",
            "ungrouped"
        ]
    },
    "routers": {
        "hosts": [
            "jcy-rtr-01.infra.ntc.com",
            "jcy-rtr-02.infra.ntc.com"
        ]
    }
}

This allows you to validate and ensure that the variables you’ve created are being assigned where you expect.
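
If you only need to check a single device rather than the full listing, the same command can be scoped to one host, which is handy as the inventory grows:

ansible-inventory -i inventory.yml --host jcy-rtr-01.infra.ntc.com

This returns just that host’s resolved variables. In the next part of this series we’ll dive into how to craft a configuration template using the Jinja2 templating engine.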

-Justin

]]>
Justin Drew
Introduction to Work Intake - Part 12022-09-08T00:00:00+00:002022-09-08T00:00:00+00:00https://blog.networktocode.com/post/work-intake-part1Work. The never-ending laundry list of ‘To-Do’ items that greets us every day when we log on to our computer. Tasks seem to multiply in request queues faster than a soaking wet gremlin, and the flood of new requests easily eclipses the small stream of items actually being completed.

We all have requests sitting in our work queues, aging for days, weeks, or months past their submission date. So how do we begin to make sense of which items can or should be done now? Is it better to tackle the quick wins by going after the low-hanging fruit? Or is it better to go after the larger corporate wins that may be more time-consuming but have the potential for a greater impact? More importantly, how would we address unplanned activities (aka fire drills) or ever find the time to get to those bothersome back-burner activities? These are some of the questions we will uncover as part of this blog series on work intake.

What Is Work Intake?

Work intake is a way of gathering requirement details to begin organizing, classifying, and prioritizing work efforts to truly understand what a customer is requesting. Over time, employing work intake strategies will help your customers formulate what they’re really asking for in a more consistent manner that will provide valuable downstream details to the engineers doing the requested work.

By employing work intake, we are gathering details, estimations, and business requirements to provide an effective strategy for making decisions that produce positive outcomes for the organization. Be careful, though, as an abundance of data or asking too many questions does not always translate to more effective results. Inference drawn from the data becomes more valuable only when it improves our understanding or our outcomes. Therefore, asking the right questions will help lead us to better decisions.

Where to Begin?

In order to see the complete picture of a request, we need to lay out all of the pieces of the puzzle. While no two requests are the same in nature, the questions asked of the customer should follow the same consistent and methodical process.

Throughout the series, we’ll use three separate and unique request examples to provide context around each work intake topic. These request examples, which are common requests across network services organizations, are:

  • Upgrading new infrastructure at a remote branch location
  • Migrating from SNMPv2 to SNMPv3
  • Automated provisioning of Data Center access ports

Starting Point

Critical details are typically freshest in a requester’s mind at the moment they submit a request. Employing a classroom technique called anchor charts with the 5 W’s + H (Who, What, When, Where, Why, and How) will help requesters visualize their request and lead to key insights, which will be explored more in our next blog.

Taking the original use cases listed earlier, and applying the 5 W’s + H, would begin to provide necessary content to understand the requester’s ask. Here are some examples:

Upgrading new infrastructure at a remote branch location

  • Who: Remote branch users
  • What: Site refresh replacing legacy firewall, switch, and AP
  • When: October 31st
  • Where: Burbank remote location
  • Why: Legacy equipment is EoL (end of life) and is no longer supported
  • How: Replacement to include updates in SoT (Source of Truth) and monitoring

Migrating from SNMPv2 to SNMPv3

  • Who: Monitoring Team
  • What: Remediate 100+ network devices
  • When: Risk closure by Nov. 1st per Security Team
  • Where: All locations (35 sites)
  • Why: New security standard due to an internal audit
  • How: Device configurations moved to the new standard

Automated provisioning of Data Center access ports

  • Who: Network Implementation Team
  • What: Provide deployment of ports for new server build-outs
  • When: Servers to arrive Oct. 1st
  • Where: Brownfield DC
  • Why: Implementation team is
  • How: Automation to deploy


(Figure: 5 W’s + H example anchor chart)

Does the 5 W’s + H anchor chart above provide valuable information for these use cases? Yes, it most certainly does. However, there are still meaningful questions and analysis needed in order to produce tangible artifacts for the engineering teams to process these requests. We’ll continue to delve deeper into this work intake analysis in future blogs, so stay tuned.

What’s Next?

Throughout the next parts of the work intake series, we’ll continue to expand on our three examples above to shed light on their complexities, dependencies, and outcomes. We’ll also discuss potential risks and rewards (business value) along with acceptance criteria. Lastly, we’ll formulate assumptions and prioritizations as we tie everything together into working artifacts so our downstream engineers can hit the ground running. As always, if you have any questions or comments, we’re here to help! Come join us on the Network to Code Slack.

-Kyle

]]>
Kyle Kenkel
Developing Batfish - Converting Config Text into Structured Data (Part 3)2022-09-07T00:00:00+00:002022-09-07T00:00:00+00:00https://blog.networktocode.com/post/batfish-development-part3This is part 3 of a blog series to help you learn how to contribute to Batfish.

The previous posts in this series:

In this post I will be covering how to take the parsed data and convert that “text” data into a vendor-specific (VS) datamodel. I will also demonstrate how to take the extraction test we created in part 2 and extend it to test the extraction logic.

Basic Steps

  1. Create or Enhance a Datamodel
  2. Extract Text to Datamodel
  3. Add Extraction Testing

What Is the Vendor-Specific Datamodel?

The title of this blog post is Converting Config Text into Structured Data. Throughout this post I will be talking about the vendor-specific (VS) datamodel, which is the schema for that structured data. Modeling data is complicated; fortunately, the maturity of the Batfish project means an extensive number of datamodels already exist in the source code, which helps when enhancing the datamodel I need in order to extract the route target (RT) data for EVPN/VxLAN.

The VS datamodel is used to map/model a feature based on how a specific vendor has implemented a technology. These datamodels tend to mirror how that vendor’s configuration stanzas are organized for that technology.

As far as terminology, within Batfish I’ve noticed the names datamodel and representation are used somewhat freely and interchangeably. I will stick to datamodel throughout the blog post to avoid confusion.

Create or Enhance a Datamodel

As I finished up part 2 of this blog series, we had updated the parsing tree to support the new vrf-target commands. We added simple parsing Testconfig files to ensure that ANTLR could successfully parse the new commands. In this post I will build upon that work, starting with extending the switch-options datamodel to support the features we added parsing for. To rehash, the commands we added parsing for are below:

set switch-options vrf-target target:65320:7999999
set switch-options vrf-target auto
set switch-options vrf-target import target:65320:7999999
set switch-options vrf-target export target:65320:7999999

The current switch-options model consists of:

public class SwitchOptions implements Serializable {

  private String _vtepSourceInterface;
  private RouteDistinguisher _routeDistinguisher;

  public String getVtepSourceInterface() {
    return _vtepSourceInterface;
  }

  public RouteDistinguisher getRouteDistinguisher() {
    return _routeDistinguisher;
  }

  public void setVtepSourceInterface(String vtepSourceInterface) {
    _vtepSourceInterface = vtepSourceInterface;
  }

  public void setRouteDistinguisher(RouteDistinguisher routeDistinguisher) {
    _routeDistinguisher = routeDistinguisher;
  }
}

This file is located in the representation directory.

The datamodel describes what Batfish supports within the Junos switch-options configuration stanza. I need to extend it to add support for vrf-target. To do this, I need to define the type of the data and create getters and setters.

The next step is to identify how to use this data and the best way to represent it. The easiest of these is auto: the command will either be present in the configuration or it won’t. If we parse the configuration and encounter the ANTLR token for auto, we can record that in the datamodel; otherwise we would expect to see one of the other commands. The other commands would be of type ExtendedCommunity, which is already defined as part of the Batfish vendor-independent datamodel.

In this command stanza either the auto keyword can be used OR a community can be provided. For this I will reuse a representation, ExtendedCommunityOrAuto, which has already been created for this exact scenario in the Cisco NX-OS representations.

Enhance the Datamodel

Before I can extract the text data from the parsing tree and apply it to a model, the datamodel must be updated to support the additional feature set. For this example I will be adding support for vrf-target and the three different options that are possible. The result of the update is shown below:

public class SwitchOptions implements Serializable {

  private String _vtepSourceInterface;
  private RouteDistinguisher _routeDistinguisher;
  private ExtendedCommunityOrAuto _vrfTargetCommunityorAuto;
  private ExtendedCommunity _vrfTargetImport;
  private ExtendedCommunity _vrfTargetExport;

  public String getVtepSourceInterface() {
    return _vtepSourceInterface;
  }

  public RouteDistinguisher getRouteDistinguisher() {
    return _routeDistinguisher;
  }

  public ExtendedCommunityOrAuto getVrfTargetCommunityorAuto() {
    return _vrfTargetCommunityorAuto;
  }

  public ExtendedCommunity getVrfTargetImport() {
    return _vrfTargetImport;
  }

  public ExtendedCommunity getVrfTargetExport() {
    return _vrfTargetExport;
  }

  public void setVtepSourceInterface(String vtepSourceInterface) {
    _vtepSourceInterface = vtepSourceInterface;
  }

  public void setRouteDistinguisher(RouteDistinguisher routeDistinguisher) {
    _routeDistinguisher = routeDistinguisher;
  }

  public void setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto vrfTargetCommunityorAuto) {
    _vrfTargetCommunityorAuto = vrfTargetCommunityorAuto;
  }

  public void setVrfTargetImport(ExtendedCommunity vrfTargetImport) {
    _vrfTargetImport = vrfTargetImport;
  }

  public void setVrfTargetExport(ExtendedCommunity vrfTargetExport) {
    _vrfTargetExport = vrfTargetExport;
  }
}

In this example I have added a getter and a setter for each new dataset. This gives me the ability to extract the data from the configuration and populate the switch-options vendor-specific object. One important thing to notice is the use of ExtendedCommunityOrAuto. This did not exist in the Junos representation; since it already existed for Cisco NX-OS, I used the same representation code.

This representation is shown below:

public final class ExtendedCommunityOrAuto implements Serializable {

  private static final ExtendedCommunityOrAuto AUTO = new ExtendedCommunityOrAuto(null);

  public static ExtendedCommunityOrAuto auto() {
    return AUTO;
  }

  public static ExtendedCommunityOrAuto of(@Nonnull ExtendedCommunity extendedCommunity) {
    return new ExtendedCommunityOrAuto(extendedCommunity);
  }

  public boolean isAuto() {
    return _extendedCommunity == null;
  }

  @Nullable
  public ExtendedCommunity getExtendedCommunity() {
    return _extendedCommunity;
  }

  //////////////////////////////////////////
  ///// Private implementation details /////
  //////////////////////////////////////////

  private ExtendedCommunityOrAuto(@Nullable ExtendedCommunity ec) {
    _extendedCommunity = ec;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    } else if (!(o instanceof ExtendedCommunityOrAuto)) {
      return false;
    }
    ExtendedCommunityOrAuto that = (ExtendedCommunityOrAuto) o;
    return Objects.equals(_extendedCommunity, that._extendedCommunity);
  }

  @Override
  public int hashCode() {
    return Objects.hashCode(_extendedCommunity);
  }

  @Nullable private final ExtendedCommunity _extendedCommunity;
}

This allows the VS model to have a single field: setting it to auto or to a specific community implicitly clears the other option, because both possibilities are represented by that one field.
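
As a quick illustration (a sketch of usage, not code from the Batfish source), the two states of that field look like this:

// Representing "vrf-target auto":
ExtendedCommunityOrAuto auto = ExtendedCommunityOrAuto.auto();
// auto.isAuto() == true, auto.getExtendedCommunity() == null

// Representing "vrf-target target:65320:7999999":
ExtendedCommunityOrAuto community =
    ExtendedCommunityOrAuto.of(ExtendedCommunity.parse("target:65320:7999999"));
// community.isAuto() == false, community.getExtendedCommunity() holds the parsed target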

Extract Text to Datamodel

In this section I will explain how to extract data from the parsing tree and assign it to the vendor-specific datamodel. This work is completed within the ConfigurationBuilder.java file.

ConfigurationBuilder.java is located in the grammar directory.

Note: For hierarchical configurations (Junos OS and PanOS) this logic lives in ConfigurationBuilder.java. For most other vendors it’s actually in <vendor>ControlPlaneExtractor.java (CPE), which you will find within the grammar directory mentioned above.

The first extraction I’m going to focus on is the vrf-target auto command. In order to extract this command I need to create a Java method that takes the parser context as an input, and I will extract and analyze the data in order to assign it to the datamodel I enhanced earlier.

The first step is to import the parsing tree context.

import org.batfish.grammar.flatjuniper.FlatJuniperParser.Sovt_autoContext;

Next we can create an enter or an exit rule to extract and assign the data.

@Override
public void exitSovt_auto(Sovt_autoContext ctx) {
  if (ctx.getText() != null) {
    _currentLogicalSystem.getOrInitSwitchOptions().setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto.auto());
  }
}

In this method I am accessing the Sovt_autoContext from the parser. If the ctx variable’s getText() method does not return null, I assign the value of VrfTargetCommunityorAuto in the switch-options model to auto, meaning that feature is turned on.

This is something that confused me when I was initially learning how the conversions worked. I had to sit back and remember that in cases like set switch-options vrf-target auto, it will either exist in the configuration or it won’t; therefore, the parsing context would be null when it does not exist in the configuration.

It is also worth mentioning that this is an exit rule, which is the most common. If some processing is needed (e.g., set variable values) before the child rules are processed, an enter rule can be used.

To expand on an enter rule, imagine a similar configuration stanza in Junos, which is set protocols evpn vni-options vni 11009 vrf-target target:65320:11009. In this case I’d need to set a variable for the VNI that is being configured so that I can reference it later when I need to assign the route target for the VNI. This is an example where an enter rule could be used to assign the VNI as a variable that the child rules can use.
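
A sketch of what such an enter rule might look like is below. Note that the context class, rule, and field names here are hypothetical illustrations, not actual Batfish code:

@Override
public void enterEv_vni_options(Ev_vni_optionsContext ctx) {
  // Hypothetical: remember which VNI is being configured so that child
  // exit rules (e.g., for the VNI's vrf-target) can reference it later.
  _currentVni = Integer.parseInt(ctx.vni_number().getText());
}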

These concepts are followed in a similar manner for each extraction you need. To keep this post as terse as possible, I will not cover every extraction for these commands; however, below is an example of the extraction created for the set switch-options vrf-target target:65320:7999999 command.

The interesting data from this command is the route target community. In order to extract that, I have the following method:

import org.batfish.grammar.flatjuniper.FlatJuniperParser.Sovt_communityContext;

@Override
public void exitSovt_community(Sovt_communityContext ctx) {
  if (ctx.extended_community() != null) {
    _currentLogicalSystem
        .getOrInitSwitchOptions()
        .setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto.of(ExtendedCommunity.parse(ctx.extended_community().getText())));
  }
}

First I validate that the extended community in the context is not null. Then I set the vrf-target community to the value that was parsed. One thing to notice in the code snippet above: since my datamodel defines VrfTargetCommunityorAuto as an ExtendedCommunityOrAuto, I parse the getText() value into an ExtendedCommunity and wrap it with ExtendedCommunityOrAuto.of(). For the remaining commands the extraction methods are very similar, so I will not be showing the two conversions for import and export targets.
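
For illustration only, the import variant might look something like the sketch below. The parser context name here is hypothetical (it depends on the rule names defined in the grammar), but the setter is the one added to the datamodel above:

@Override
public void exitSovt_import(Sovt_importContext ctx) {
  // Hypothetical context name; the pattern mirrors the community rule above.
  if (ctx.extended_community() != null) {
    _currentLogicalSystem
        .getOrInitSwitchOptions()
        .setVrfTargetImport(ExtendedCommunity.parse(ctx.extended_community().getText()));
  }
}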

Add Extraction Testing

Now that I have the conversions written, I need to update the tests that I wrote in part 2 of this blog series. The test I created to validate the parsing of the Testconfig file is shown below:

@Test
public void testSwitchOptionsVrfTargetAutoExtraction() {
  parseJuniperConfig("juniper-so-vrf-target-auto");
}

Now I must extend this test to cover the extraction of the vrf-target auto configuration. The test as shown above simply validates that the configuration line can be parsed by ANTLR; it does not exercise the code snippets we wrote in the previous section that take the “text” data and save it to the datamodel. The test I want to write validates that the context extraction is working, asserting that when the command is found the value is set to auto.

@Test
public void testSwitchOptionsVrfTargetAutoExtraction() {
  JuniperConfiguration juniperConfiguration = parseJuniperConfig("juniper-so-vrf-target-auto");
  ExtendedCommunityOrAuto targetOrAuto = juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetCommunityorAuto();
  assertThat(ExtendedCommunityOrAuto.auto(), equalTo(targetOrAuto));
  assertThat(true, equalTo(targetOrAuto.isAuto()));
  assertThat(juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetExport(), nullValue());
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetImport(), nullValue());
}

In order to test the conversion, I’m using the same function and just extending it to pull data out of the parsed Juniper configuration. For this Testconfig I only have the set switch-options vrf-target auto command. As seen in the extraction test, I’m asserting that isAuto is true, and that the value of targetOrAuto is ExtendedCommunityOrAuto.auto(). The remaining options are not located in that Testconfig file, and therefore I am asserting their values are null.

Since I also created and explained the vrfTargetCommunity extraction, the test for it is shown below:

@Test
public void testSwitchOptionsVrfTargetTargetExtraction() {
  JuniperConfiguration juniperConfiguration = parseJuniperConfig("juniper-so-vrf-target-target");
  ExtendedCommunityOrAuto extcomm = juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetCommunityorAuto();
  assertThat(ExtendedCommunity.parse("target:65320:7999999"), equalTo(extcomm.getExtendedCommunity()));
  assertThat(false, equalTo(extcomm.isAuto()));
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetImport(), nullValue());
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetExport(), nullValue());
}

The logic I’m using is very similar. In this case I’m testing that the extracted ExtendedCommunity matches what I have in the Testconfig file, but I’m also validating that the rest of the switch-options that do not exist in the Testconfig files are null. For the remaining import and export rules, I created similar tests to validate the extraction of those ExtendedCommunity values.
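
As a sketch of what one of those remaining tests might look like (the Testconfig name here is assumed), an import-target test could follow the same shape:

@Test
public void testSwitchOptionsVrfTargetImportExtraction() {
  // "juniper-so-vrf-target-import" is an assumed Testconfig file name.
  JuniperConfiguration juniperConfiguration = parseJuniperConfig("juniper-so-vrf-target-import");
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetImport(),
      equalTo(ExtendedCommunity.parse("target:65320:7999999")));
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetExport(), nullValue());
}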

Note: Batfish developers tend to use Hamcrest Matchers in their tests; they almost never use assertTrue/assertNull. Often it’s assertThat(getFoo(), nullValue()). Hamcrest Matchers tend to do a better job of explaining mismatches than plain JUnit assertions (e.g., assertThat(someList(), hasSize(5)) will produce a much more helpful failure message than assertTrue(someList().size() == 5)).

Summary

In this post I provided more details on what a vendor-specific datamodel is and how it fits within the Batfish application. I identified that the switch-options datamodel/representation needed to be extended to support the new variables I required. Next, I wrote and explained the code to extract the “text” data and assign it to the datamodel. And finally, I showed how to write extraction tests to validate that the extractions work as intended.

What’s Next?

The last post in the series will be coming soon.

  • Developing Batfish - Converting Vendor-Specific to Vendor-Independent (Part 4)

-Jeff

]]>
Jeff Kala
Why You Should Invest in Training - Part 1 - As a Network Engineer2022-09-02T00:00:00+00:002022-09-02T00:00:00+00:00https://blog.networktocode.com/post/why_invest_in_training-part1As a network engineer, what is it that you want most out of your job? Maybe you want more engagement, or security, or higher pay. With half of employees surveyed saying they are not engaged[1], the average half-life of a skill estimated to be only 4 years[2], and wage growth stagnating[3], where does this leave the individual contributor looking for more?

The key to improving almost every aspect of your job is training - and not just any training: training that directly improves job performance. Instead of looking at skills training as a stepping stone, what if we look at it as a journey? You might be starting out lacking the knowledge necessary to obtain your goal, and you might run into difficulties along the way; but instead of becoming discouraged and giving up, reframe your goal as a journey and you can be confident in the steps you have taken.

Let’s look at the practical application of using training as part of the career journey. Riley is a network automation engineer and has been at her company long enough that everything feels comfortable, if a bit mundane. She sits squarely in the middle of the talent pool: capable, but not excelling. She knows that upskilling or re-skilling could help her, but with “nearly three out of four respondents worldwide say[ing] they aren’t equipped with the resources needed to learn the digital skills they need to succeed in the current and future workforce”[4], and with 68% feeling intimidated when they need to learn how to use a new technology[5], it is no surprise that Riley feels stuck. Here’s where the journey mindset with a focus on training helps someone like Riley.

Every journey needs a plan and preparation, and this is the point where NTC’s Network Automation Academy training becomes essential. The world of network engineering is moving at a breakneck pace. Network automation is the way of the future, and we work with customers across the globe who are investing time and money into these efforts to lower opex, improve site reliability, increase business agility, and more, because they see the long-term benefits. Companies are looking for a workforce of engineers who support these same efforts. Companies are looking for more than self-led training certificates; they want employees who can learn new skills, especially in network automation, and implement those skills.

Here Are the Reasons NTC Academy Can Help You:


50% Lab Time Every Single Day.

If you have gone through any online training recently, you probably noticed something missing or minimized: hands-on learning and instant instructor feedback. At Network to Code, all courses and workshops have 50% lab time, ensuring that you get the necessary time to practice the skills and hit the ground running as soon as you complete the course.

You’re Joining a Collective.

Another component missing from most technical training is a sense of community, especially with today’s hybrid and remote workplaces. Learning new skills can be daunting, but when you choose Network to Code courses, you are also joining our thriving online community. Our public Slack channel has over 23,000 members and is a perfect example of a vibrant and inviting environment to ask for help or offer advice. We also regularly publish blogs ranging from general to technical topics to help you keep up-to-date with the ever-evolving world of automation.

You’re Learning from the Very Best.

To highlight the quality of our teaching for a moment: Network to Code was not only built on training, but our courses are developed by engineers and content creators who have been helping companies automate their networks for over eight years. These are people who have been in the field, utilizing the practical application of automation on a daily basis. Who better to learn from than the experts?

We Respect Your Time and Schedule.

We also realize that flexibility in training is essential. That’s why Network Automation Academy offers virtual courses as well as in-person offerings with experienced instructors well versed in the subject matter. Our instructors use proven methods of engagement for different learning styles and focus on the individual (we cap all of our courses at fifteen people). And for even more flexibility for smaller network teams that cannot take everyone out for a week of training, we offer a training credits program where credits can be applied to courses in our public schedule.

Network to Code is inviting you to join us on an automation journey fueled by training. No matter what stage you are at, from beginner to expert, we have the tools needed to galvanize you to be more efficient, gain security in your role, and position yourself for higher pay. Our community is here to support you and provide personalized help when you need it. There has never been a better time to start learning, and we hope you join us on this journey.

-Grant

[1] Gallup: gallup.com/workplace/236366/right-culture-not-employee-satisfaction.aspx
[2] Pluralsight: pluralsight.com/resource-center/state-of-upskilling-2022
[3] Pew Research: pewresearch.org/fact-tank/2018/08/07/for-most-us-workers-real-wages-have-barely-budged-for-decades/
[4] Salesforce: salesforce.com/news/stories/salesforce-digital-skills-index-details-major-gaps-across-19-countries/
[5] UI Path: ir.uipath.com/news/detail/30/study-finds-nearly-50-of-businesses-around-the-world-will

]]>
Grant Paige