Ten Things Customers Think About When Considering Network Automation
2022-05-17

Being on the sales side of the business here at Network to Code puts me in a unique space when it comes to understanding why customers initially begin to think about moving forward with network automation and, ultimately, what has kept them (in their minds) from making the jump. This post will discuss some of the most common thoughts that I’ve come across.

The Top Ten Things Customers Think About When Considering Network Automation

Of the hundreds of clients I have spoken to regarding starting their network automation journey, here are the top ten topics (in no particular order) that seem to be a high priority:

  • My network engineers are not programmers
  • We have too much work to do to start yet another project
  • We do everything internally, and we don’t need assistance
  • The knowledge of what has been built will leave with your team
  • It is too hard to work across silos in our organization
  • I am not interested in eliminating jobs by creating automation
  • We cannot afford the tools needed to move forward with automation
  • Our environment and workflows are too complex to automate
  • There are security and outage risks
  • I am in! . . . But I am struggling to get buy-in from my leadership team

I am not here to say that all the above are not valid points when considering changing how your organization deploys, manages, and consumes the network, because they are indeed valid. I am here to have an open discussion about reframing the thoughts around these “blockers.” Stay with me now! I know blockers is a traditionally sales-y word, but in this context and list, I am truly talking about mental blockers that I see leaders struggle to wrap their minds around.

Let’s begin the conversation:

My network engineers are not programmers

You know your team’s current skill set better than anyone else. You also know what you hired your team to do, and my guess is that it was not to perform high-volume/low-value tasks all day. You hire your engineers to do high-level technical work, and your engineers likely came to work for your organization with the promise of opportunities for skill and knowledge advancement. Am I getting warmer?

While your network engineers are not programmers today, what if you opened the door for them to grow their skill set with automation? With the right training courses and partners, your engineers will be able to practice what they have learned in real-time and become network automation engineers before your eyes. Allowing your team to use their knowledge and experience as traditional network engineers paired with new automation skills delivers value back to the business.

We have too much work to do to start yet another project

Starting down the network automation journey should not be considered just another project to accomplish in the year. For example, this journey differs from just a hardware refresh because it shifts how the organization consumes and supports the network. It is important to understand that fact when considering “when is a good time.”

While this shift will not happen overnight, when done with intent, an automated environment will cut down the time to deliver on those traditional annual projects. When you think about the time cost to start this journey and compare it to the time cost of not starting, the decision is simple: starting somewhere is always better than not starting at all.

We do everything internally and do not need assistance

Doing work internally is great if you have the time and resources to do so. However, many clients come to me with a similar mentality and ultimately choose Network to Code as their partner due to resource constraints. I don’t need to tell you that the business is often asking IT teams to do more with less in today’s world.

The knowledge of what has been built will leave with your team

Our work is not done here at Network to Code until a full knowledge transfer of what has been built is complete. On top of this, our goal is never to develop anything in a black box. Instead, we maintain full transparency and work alongside your team to deliver workflow automation to your environment. For example, we often offer “office hours” with our network automation experts where your team can jump on a call and work through questions or gaps in understanding as we progress through the project. This is just one way we prove that we are here to truly be a partner in the automation journey and not just a vendor.

It is too hard to work across silos in our organization

You do not necessarily need to work across the silos of your organization to start your network automation journey. Starting small with the tools and workflows your team is responsible for is a great way to show value to the other silos of your organization. As they see the good work you are able to do, the walls between the silos will begin to come down, and you will be tackling automating workflows across silos in no time!

I am not interested in eliminating jobs by creating automation

Automation is not here to take jobs away from your team. On the contrary, you will likely see employee satisfaction and retention increase when introducing automation into the environment! How? you may ask. What if your employees could apply high-level thinking to engineering work that challenges them? On top of being challenged in their role, what if they no longer needed to do middle-of-the-night maintenance work because the automation is programmed to handle this? They may never leave!

While developing an automated infrastructure will not replace your employees’ jobs, it will change the way they do their jobs. Therefore, it will be important that they receive the proper training to perform their new tasks. For example, rather than managing the network from the CLI, it will now be from the automation environment.

We cannot afford the tools needed to move forward with automation

When building an automation platform, certain components are essential, including a well-understood source of truth and orchestration tools. But that does not mean you need to go back to leadership and ask for budget dollars for yet another tool! The open-source community has come a long way in making these enterprise-grade tools available to anyone, anywhere, for free. Besides avoiding additional costs, open-source tools often allow for more flexibility in building a solution fit for your organization’s unique needs. Additionally, any good workflow automation should lean into the existing tools in your environment and only augment where absolutely necessary. This practice allows for less technical debt and lets you use tools that your engineers and internal customers are already comfortable with.

Our environment and workflows are too complex to automate - it’s too big of an undertaking

Network automation is a journey (I think I have mentioned this a time or two), and it’s important to not try to “boil the ocean”. Start small! Just as you would with any significant effort in your life, you make a plan and start with small pieces until you are where you want to be. With automation, let’s make a flexible roadmap and then start with the low-hanging fruit to deliver those quick wins back to the business. Once the company and engineers reap the benefits of automation, they will be hungry to keep going!

There are security and outage risks

Security and uptime are of the utmost importance to just about every organization. I am here to tell you that automation is not the enemy of security or uptime! Rather, automation can improve upon both.

When you automate specific workflows, you can often remove the margin for human error that exists in any work we do, especially when maintenance is scheduled in the early morning hours. Machines do not skip steps when tired, and they do precisely as they are programmed. This means pre- and post-checks, as one example, happen exactly as intended, every single time. This is music to the security team’s ears!

I am in! . . . But I am struggling to get buy-in from my leadership team

Let us help! Let’s present the network automation journey in a way that will resonate most with your leadership team. Do they care about lowering operational expenses, improving security posture, shortening time to delivery, or reducing response time to incidents? Network automation has benefits in each of these spaces that can be highlighted clearly using real-world examples from our experience with other customers in this space.

Final Thoughts

Understand that you are not alone in having hesitations around network automation. Hopefully, this article addresses some of those hesitations you may have if you are considering starting your network automation journey.

Thank you!

-Alexis Preese

Palo Alto Panorama ChatOps with Nautobot
2022-05-12

Here at Network to Code, we are continually developing new ChatOps integrations for the underlying Nautobot ChatOps framework. We have recently released a new ChatOps integration for Palo Alto Panorama systems. This ChatOps application is used to interact with the Palo Alto Panorama system and comes prepackaged with various chat commands. You can now get specific information or run advanced ACL checks on Panorama using your existing ChatOps service, including Slack, Microsoft Teams, Webex, and Mattermost.

For installation steps, refer to its README. To install the underlying Nautobot ChatOps framework, refer to the documentation found here.

Commands

The Nautobot ChatOps Panorama app extends the capabilities of the Nautobot ChatOps framework by adding a new chat command: /panorama. As of version 1.1.0 (the current version as of this writing), there are seven subcommands available to use, with example invocations shown after the list. They are:

  • capture-traffic
  • export-device-rules
  • get-device-rules
  • get-version
  • install-software
  • upload-software
  • validate-rule-exists
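Each subcommand is invoked in your chat client as an argument to /panorama; commands that need more context (a device, interface, or rule details) will prompt for it interactively. A few illustrative invocations, using names from the list above:

/panorama get-version
/panorama get-device-rules
/panorama validate-rule-exists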

Panorama Commands

Capture Traffic

The capture-traffic subcommand will prompt the user to choose the interesting traffic that needs to be captured and the device name and interface to run the capture on. It will then gather the necessary information from Panorama and run the capture directly on the firewall. Then it will export the packet capture directly to the user via the ChatOps client as a .pcap file capable of being opened in Wireshark.

This is by far my favorite command available, as I’ve spent way too long trying to set up packet captures on firewalls over the years! One caveat to this command is that in order to use it, Nautobot requires access to both Panorama and the management IP address of the Palo Alto device it’s running a capture on.

Export Device Rules

The export-device-rules subcommand will prompt the user to select a Palo Alto firewall, then generate a list of firewall rules on it and output it in chat in a CSV format.

Get Device Rules

The get-device-rules subcommand is similar to the previous command, in that it will prompt the user to select a Palo Alto firewall, then generate a list of firewall rules on it and output them to the chat client in an easy-to-read format.

Get Version

The get-version subcommand is one of the simplest commands available. It will simply return the current version of the Panorama system configured. It does not require any additional input or device selection.

Install Software

The install-software subcommand allows you to install a new OS version on a Palo Alto firewall that has been previously uploaded to it. As with any commands that make changes to a device, we recommend testing this on a lab or other non-production system first!

Upload Software

The upload-software subcommand allows you to upload a specific PAN-OS version to a Palo Alto firewall. This can be used prior to running the install-software command mentioned above.

Validate Rule Exists

The validate-rule-exists subcommand is another one of my favorites. It prompts the user to select a firewall device, as well as source and destination traffic information to check. It will then check the firewall rules to see whether there is a matching rule for this traffic. If found, it will return the results to the user. This can be very handy to quickly see whether a new rule being requested is already in place, helping prevent duplicate rule creations.

Keep It Going

These commands handle only a subset of the information that can be gathered by the Panorama chatbot. You can contribute more commands with minimal Python code! Because the Nautobot ChatOps plugin lowers the barrier of entry by already handling the interaction between Nautobot and chat applications like Mattermost, Microsoft Teams, Slack, and Webex, creating new commands is extremely easy. We encourage you to create your own commands by building on top of existing commands and plugins that we at NTC have created—or even create your own command to interact with something you use on a daily basis.

We also encourage you to share any feedback, feature requests, or bug reports you may have in the GitHub repo for the app.

-Matt

Django & JavaScript - Part 4 Vue.js & Axios2022-05-10T00:00:00+00:002022-05-10T00:00:00+00:00https://blog.networktocode.com/post/intro-to-js-with-django-part-4This is the fourth post in a multipart series that explores web front-end technologies and how they can be used with Django. You can find the previous post here: Django & JavaScript - Part 3 JavaScript & jQuery.

To be able to do React JS & Vue.js justice, I have decided to break them into separate posts with this post focusing on Vue.js and using Axios for API calls.

Requirements

  1. Single process to host the UI & API
  2. Django + Django REST framework
  3. Overarching web framework from day 1
  4. Intuitive UI/UX that can be dynamically updated without loading a new page

DRF API Class

This API class will be used for all examples using the API, in which DRF handles all PUT/PATCH/POST/GET/DELETE operations.

class UserViewSet(ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    filterset_class = UserFilterSet
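The UserSerializer, UserFilterSet, and URL routing referenced above are not shown in the post. Below is a minimal sketch of what the serializer and routing might look like; the field names and the /api/users/users/ prefix are assumptions inferred from the examples that follow, not the actual implementation.

# serializers.py (hypothetical; assumes a custom user model exposing name/email fields)
from django.contrib.auth import get_user_model
from rest_framework import serializers

User = get_user_model()

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ["id", "username", "name", "email", "is_staff", "is_superuser"]

# urls.py (hypothetical; registers the viewset so it is reachable at /api/users/users/<id>/)
from rest_framework import routers

from .views import UserViewSet  # assumed module layout

router = routers.DefaultRouter()
router.register("users/users", UserViewSet)
urlpatterns = router.urls  # assumed to be included under the /api/ prefix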

Vue.js

Vue.js is a JavaScript framework that builds on top of HTML, CSS, and JavaScript to provide a declarative model that helped me to develop the simple examples I am using to compare each framework. Vue.js is a lot more powerful than I will be demonstrating today and has been used to build some very robust single-page and multi-page webpages.

Creating a Vue Object

Since my examples are based on a Python back end and I will not be running node.js, I will be creating all of my Vue.js code as inline JavaScript using <script> tags in my HTML. The code could also easily be served via separate static .js files.

To instantiate my Vue objects for the examples, I will need a few pieces of information. I will need to know what my mount point is for Vue. A mount point is what informs Vue.js what parent element will be in scope of this Vue object. I will be using the ID of the element as my mount point. Next, I will be defining my set of data attributes that I will use when interacting with Vue and the DOM. Because I will be intermingling Django templating and Vue.js templating, I will also need to overload the default delimiters from double curly braces to something that will not conflict. Lastly, I will be defining a set of methods or functions that will be invoked based on triggers in my HTML code.

new Vue({
  el: "#my-dom-element", // Selects ANY DOM element with an ID of my-dom-element to tell Vue the scope of the app
  data: { // Create my initial variables
    var1: "initial value",
    var2: "initial value2"
  },
  delimiters: ["{(", ")}"], // Update my delimiters so the underlying Django process doesn't try to template on call time
  methods: { // Set of functions for interacting with the Vue app
    update_var1: function() {
      this.var1 = "Some new value";
    },
    update_var2: function(event) {
      this.var2 = event.target.value;
    }
  }
});

This initial Vue object will not do much of anything without wiring it up to some HTML and throwing in some trigger events. I will be tying var1 to a click event that will update the innerHTML of its element, and var2 will be updated based on keyup events in an <input> element to replace its respective innerHTML. I am informing the Vue object of the trigger events by specifying v-on:<trigger>="<function to call>". For example, v-on:click="update_var1" in the example below is notifying the Vue object that on clicking the button element I would like to run the update_var1 function that is declared in methods.

<div id="my-dom-element">
  <table>
    <tr>
      <th scope="col">Var 1</th>
      <td>{( var1 )}</td>
    </tr>
    <tr>
      <th scope="col">Var 2</th>
      <td>{( var2 )}</td>
    </tr>
  </table>
  <button v-on:click="update_var1">Update Var 1</button>
  <input placeholder="Type Here for Var 2 Update" type="text" v-on:keyup="update_var2">
</div>

The end result without CSS making it look fancy is the following.

Vue.js Example

Axios

Axios is a JavaScript library used to make promise-based HTTP requests from a browser (or server if using node.js). A JavaScript Promise is a construct with execution code and callbacks that allows asynchronous methods to return values in a way similar to synchronous methods: the promise is to supply the value at some point in the future. A Promise has three states (pending, fulfilled, and rejected), with fulfilled being a successful completion of the execution code.

In Axios, once the Promise is fulfilled, the response is passed to the .then(response) method, which is where we implement some magic. In the event the request has an error, we have the ability to .catch(error) and handle the error appropriately.

In my opinion, Axios has done an elegant job of creating a simple API client that integrates with my Vue.js code flawlessly.

Example 1 - Build DOM from Button Click

Initial Page Without Profile Data

Get User Profile

Page with Profile Data

User Profile

Initial HTML


<div class="container" id="user-profile">
  <div class="col" v-if="name">
    <h1>User Profile</h1>
    <table class="table">
      <tr>
        <th scope="row">Name</th>
        <td>{( name )}</td>
      </tr>
      <tr>
        <th scope="row">Email</th>
        <td>{( email )}</td>
      </tr>
      <tr>
        <th scope="row">Admin Panel Access</th>
        <td>{( has_admin )}</td>
      </tr>
      <tr>
        <th scope="row">Super User</th>
        <td>{( is_superuser )}</td>
      </tr>
    </table>
  </div>
  <div class="col" v-else="name">
    <h1>Waiting to load user profile.</h1>
    <button
      class="btn btn-primary mt-2"
      v-on:click="get_profile('{{ request.user.id }}')">
      Load User Profile
    </button>
  </div>
</div>

Vue.js

new Vue({
  el: '#user-profile',
  delimiters: ["{(", ")}"],
  data: {
    name: "",
    email: "",
    has_admin: "",
    is_superuser: ""
  },
  methods: {
    get_profile: function (user_id){
      axios
        .get("/api/users/users/"+user_id+"/")
        .then(response => {
          this.email = response.data.email;
          if (response.data.is_superuser || response.data.is_staff) {
            this.has_admin = "Enabled";
          } else {
            this.has_admin = "Disabled";
          };
          if (response.data.is_superuser) {
            this.is_superuser = "Enabled";
          } else {
            this.is_superuser = "Disabled";
          };
          this.name = response.data.name;
        });
    }
  }
});

In the first example I am creating a Vue object with a mount point of the <div> that has an ID of user-profile. Within my first nested element I have also introduced if/else Vue statements as attributes of the child <div> elements, v-if="<conditional>"/v-else="<same conditional>". This will translate as: IF the name attribute is truthy (empty string evaluates as false in JavaScript) the table will be visible, ELSE the button to load the profile will be visible.

I have also intermixed Django templating by passing the user ID of the user making the initial HTTP request to load the page into the v-on:click event function call, while the Vue object has its delimiters set to {( <var name> )} to avoid conflicts.

Lastly, I use Axios to perform an HTTP GET to /api/users/users/<user id>/ and use the response data in performing Vue templating. As soon as I set the name attribute, the Vue object will remove the initial <div> with the button element and replace it with a new <div> of the table that I am building. I don’t have to worry about selecting elements to then inject HTML, or changing attributes of the <div>s to hide one and unhide the other. It’s all handled with the Vue object and the three v- attributes inside the HTML elements.

Example 2 - Input Validation

User Does Not Exist

User Does Not Already Exist

User Does Exist

User Does Exist

HTML Form

<form id="create-user" method="POST" action="/create-user/" autocomplete='off'>
  <div class="form-group mb-3">
    <label>Username</label>
    <input type="text"
      v-on:keyup="get_user"
      class="form-control"
      name="user">
    <div class="text-danger mt-2" style="color:red;">{( user_err )}</div>
  </div>
  <button type="submit" class="btn btn-success mt-2" v-if="button_enabled">Create User</button>
</form>

Vue.js

new Vue({
  el: "#create-user",
  delimiters: ["{(", ")}"],
  data: {
    user_err: "",
    button_enabled: false
  },
  methods: {
    get_user: function (event){
      if (event.target.value) {
        axios
          .get("/api/users/users/?username=".concat(event.target.value))
          .then(response => {
            if (response.data.count == 1) {
              this.user_err = "This username already exists";
              this.button_enabled = false;
            } else {
              this.user_err = "";
              this.button_enabled = true;
            }
          })
          .catch(error =>{
            this.button_enabled = false;
            this.user_err = error;
          });
      } else {
        this.button_enabled = false;
        this.user_err = "";
      }
    }
  }
});

In this example I decided to implement error handling, which I did not do in the previous two blog posts. The ease of use and more object-oriented style made me want to demonstrate the rejected state of the Promise. One difference is that I am not mixing templating languages. I still keep the delimiters overloaded, as this page would most likely be processed by some form of render in Django, and I still want to avoid conflicts.

For input validation, if a user backspaces to completely empty out the <input> field, I am resetting the user_err attribute and removing the Create User button. This is meant to prevent unneeded error messages AND remove the user’s ability to click the button IF the user field is empty.

On the Axios call, I implemented similar implied logic as before—that if one user is returned, I have an exact match on the query params and I CANNOT create the user. The difference here is that if this conditional is hit, I not only display the error but I also remove the Create User button to prevent a user from submitting a known invalid user for creation. I have also implemented a catch that will remove the button; and the error will be the error encountered by Axios during the call, resulting in a rejected state of the Promise.

Summary

The further along in this series I get, the more I realize I never gave JavaScript frameworks the credit they deserve; it has always been “eww, JavaScript.” So far, having used a zero-JavaScript solution like HTMX, I am thrilled at the idea of all processing being done server-side. I left last week’s post on jQuery feeling like “heck, it might not be so bad.” BUT this week, as I reflect on jQuery, it feels as though I spent more time than I would like worrying about DOM element selection/manipulation and less time in development. That’s where getting to Vue.js has really stood out to me. Even in the simplistic examples provided, I never felt like I was building selectors to manipulate the DOM or access a value. As someone who is more Python back-end focused, Vue.js felt more native to me compared to my previous interactions in writing JavaScript.

~ Jeremy

What It’s Like Being a Woman Network Engineer
2022-05-05

Our Manager of Training & Enablement, Elizabeth Yackley, sat down with the instructor of our free Women in Tech Network Programming & Automation Bootcamp, Elizabeth Abraham, to discuss Elizabeth A.’s experience as a woman in network engineering, her thoughts on how we can help women succeed in a very male-dominated industry, and what it was like teaching an all-women course this past March.

[Elizabeth Abraham Headshot]

Elizabeth Abraham currently teaches Network Automation with Python & Ansible for Network to Code. She also teaches numerous courses as a Cisco Devnet Professional Program Instructor, and has been recognized by Credly as an Instructor with 1,000 Students Reached and 100 Courses Delivered. She was most recently recognized by Cisco with an Instructor Excellence Award for achieving a 4.8 or above in the annual average “CCSI” score in customer satisfaction surveys in 2021.

An Interview with NTC instructor Elizabeth Abraham

Q: Was this the first time you’ve taught a class of all women engineers? What was that like in relation to all of the previous courses you’ve taught? What were some of the highlights from teaching the course this past month? Was it more meaningful and impactful to teach women in tech like yourself?


A: Yes, it was a pretty unique experience for me as I have never taught an all-women class in the 25 years I have been teaching!

Reflecting back, I thought there was more camaraderie, and I was able to connect better. In addition, the students were very detail-oriented, and they carefully followed instructions throughout the course, especially using the lab guide. That kind of skill really sits well in programming/coding in general, and because of this, I felt the class was especially impactful.

Most in the class were quite interactive and spurred on effective discussions, and encouraged me to dive deeper, so to speak.


Q: Could you take me through your career path (education and jobs/positions) that led you to your career as an instructor in network engineering and automation? What kind of struggles did you face over that time?


A: Well, during my early school years, understanding mathematics came quite naturally; this seemed to serve me well when logical thinking was required. Therefore, I chose to pursue Engineering in Electronics even though it was not considered a women’s line of work back then.

After graduating as an Electronics and Communication engineer (more to do with designing Integrated Chips/ASICs) from India, I moved to the Middle East as my family was there. However, opportunities there were even more limited for women engineers!

In my pursuit of an engineering job, I applied for a position with a technology firm; however, I ended up volunteering to learn and teach Excel, which got me some recognition within the company. (This was in the early 90s, LANs were making inroads into the old legacy stand-alone computer systems.)

At the company, the engineers seemed to have a hard time figuring out these new systems called Novell Netware. I was able to read the documentation and successfully install and troubleshoot quite fast… which started my journey into network engineering, eventually providing Microsoft and Cisco solutions. However, women were not hired, nor was it deemed safe for networking jobs on-site! Hence I moved to teach Novell, Microsoft MCSE, Cisco, etc.

For the last 20+ years, my sole focus has been teaching and implementing Cisco solutions. I have worked on different product lines of Cisco: Route/Switch, Security, VoIP/Collaboration, Datacenter, and finally ventured into Network Automation. My taste for programming/coding seems to be a good skill set for Network Automation.

As for the struggles I’ve experienced, it was a very difficult period, especially during that time and location. IT workplaces were almost exclusively male-oriented. Be it in a class environment or on-site, women were not taken seriously and not expected to be there at all.

Resources were very hard to get to learn networking; expensive hardware and software were just not available outside the production environment! There were no virtual or remote environments to work on.

Women in Tech Class Screenshot
Class screenshot from the March 2022 Women in Tech Network Automation Bootcamp


Q: As we know, there are fewer women in network engineering than in other tech industries. What are the ways you think we can shift those percentages to increase the number of women in this industry?


A: I think this is because of the physical requirements that used to be a main part of the job when women weren’t supposed to take jobs like that: checking cabling, sometimes moving equipment, and unfriendly hours, along with a generally male-dominated environment, have been a deterrent in the past. Things have changed a lot over recent years, though.

Ways I think we can increase the number of women engineers in the industry:

  • Promote a more woman-friendly workspace
  • Same pay scales as men for the same position
  • Acquire an in-depth understanding of TCP/IP and OSI model

And most networks/equipment are becoming very “smart”; therefore, these can be worked upon remotely, and thus there is very little need to be physically at the location or NOC.

Virtual machines for every type of network device enable women to work on them as and when needed and improve their skillset.

The ability to initialize the required virtual machines in a lab environment outside of business hours encourages women to consider this field even more.

And, of course, gaining programming skills, like the skills acquired in our 5-day Bootcamp, is very helpful in pushing women forward into the field of Network Automation.


Q: What advice would you give to women in technology, specifically network engineering, to grow and progress in their careers?


A: These are my main points of advice:

  • Develop programming skills
  • Stay ahead of the latest technologies and have multiple certifications


Q: With all of your past experience, all of the courses and students you’ve taught, what are you the most proud of?


A: To sum up, I think my perseverance and not giving up the career I chose is what I’m most proud of. It was an incredibly tough journey, at least for the first 10 years, with many hurdles along the way. There were a number of times when I thought of changing course because it felt so unfair and discouraging as things were stacked up against me for being a woman, even though I was demonstrably better in my deep level of knowledge of networks and how they function.

The upshot, though, was that I was determined to enhance my knowledge of networks more than anyone would expect! So now, when I am teaching these classes, I can go into very minute details and break it down such that the students benefit immensely within a short duration. For more than a decade now, most students have told me how much they learned and enjoyed the class! We had such great feedback from the all-women class too. The glowing evaluations that I read from the students give me immense satisfaction, and I feel it was all worth it.


NTC will be offering another free Network Automation Bootcamp for women network engineers in early 2023. Please email training@networktocode.com to be added to the waitlist. For more information about our training courses, please visit https://www.networktocode.com/training/.


-Elizabeth Yackley

Django & JavaScript - Part 3 JavaScript & jQuery
2022-05-03

This is the third post in a multipart series that explores web front-end technologies and how they can be used with Django. You can find the previous post here: Django & JavaScript - Part 2 HTMX.

Requirements

  1. Single process to host the UI & API
  2. Django + Django REST framework
  3. Overarching web framework from day 1
  4. Intuitive UI/UX that can be dynamically updated without loading a new page

DRF API Class

This API class will be used for all examples using the API, in which DRF handles all PUT/PATCH/POST/GET/DELETE operations.

class UserViewSet(ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    filterset_class = UserFilterSet

Raw JavaScript + jQuery

By building the site on raw JavaScript enriched by jQuery, I am able to provide a tremendous amount of flexibility, as there are fewer opinionated constraints to be beholden to. This simplistic solution pushes the data processing and rendering to the client side, which is common practice in rich modern webpages. Where issues start to become apparent is as a project scales out. The continual need to add large amounts of JavaScript that is not constrained by an opinionated framework can lead to inconsistencies and tech debt to manage long-term.

Have you ever taken a look at a webpage’s JavaScript and seen $ used preceding a function call or a tuple with a string? There is a high likelihood that this was jQuery and the $ is the jQuery object. An example of invoking a function on the jQuery object would be $.ajax({....}). Whereas using jQuery to access and interact with an HTML element would be using $(<selector>, <content>...) syntax. If I wanted to select all <p> HTML elements and hide them via jQuery, I would use $("p").hide(). For the scope of this blog post we will be taking a simplified approach and focusing on jQuery selectors and the function calls that can be performed on the returned element(s). I will be using a mixture of JavaScript and jQuery in my blog post examples.

jQuery Selectors

jQuery selectors are very much as the name suggests—a method of selecting HTML element(s) via some type of descriptor. Keep in mind ANY function performed against a selector will apply to ALL items selected. This makes for quick updates to large amounts of elements, but you must be sure you are not changing elements that are not meant to be changed.

ID Selector

In HTML it is best practice for the ID attribute of an element to be globally unique within the DOM. To select just the one element by its ID attribute, the selector is written as #<id attribute value>. In the example of <p id="jquery">My fun paragraph</p>, I can access this element via $("#jquery").

Comparison
// jQuery
$("#jquery")

// JavaScript
document.getElementById("jquery");

Element Selector

To select ALL HTML elements of an element type, you pass in a string representation of the element type. For instance, if I wanted to select all paragraph elements, which are represented by <p>, the jQuery selector would be $("p"). And if I wanted to select just the first appearance of a <p> element, I would use the same selector but add :first to the element type: $("p:first").

Comparison
// jQuery
$("p")

// JavaScript
document.getElementsByTagName("p");

Class Selector

Similar to element selectors, class selectors are meant to return all HTML elements, but in this scenario it is any element with a specific class attribute. To represent the class selector, use a . followed by the class name, as such: $(".table").

Comparison
// jQuery
$(".table")

// JavaScript
document.getElementsByClassName("table");

Create Selector

Create selectors are written as the HTML element type in <> symbols and will create a single instance of an HTML element in memory that is not directly applied to the DOM. The new element can then be applied to the DOM by selecting another element and manipulating the HTML of that element.

Comparison
// jQuery
$("tr:last").append( // Selects the last table row
  $("<td>").html("Table Data innerHTML") // Adds a table data to the table row with innerHTML
)

// JavaScript
let tr = document.getElementsByTagName("tr"); // Selects all table rows
let td = document.createElement("td"); // Creates a table data element in memory
td.innerHTML = "Table Data innerHTML"; // Sets the innerHTML of the new element
tr[tr.length - 1].appendChild(td); // Appends the table data to the last table row selected

jQuery Events

Applying a jQuery event to a jQuery selector will make it so that, when the event triggers for that element, the browser will execute the JavaScript defined inside the event. Common event types are click, keyup, submit, ready, and change; however, there are several more.

Paragraph Highlight Example

In this example, every time the mouse cursor enters a <p> HTML element it will change the background of that element to a light gray; and upon leaving, the background color will become unset for that element.

$(document).ready(function(){
  $("p").on({
    mouseenter: function(){
      $(this).css("background-color", "lightgray");
    },
    mouseleave: function(){
      $(this).css("background-color", "unset");
    },
  });
});

Example 1 - Build DOM from Button Click

Initial Page Without Profile Data

Get User Profile

Page with Profile Data

User Profile

Initial HTML


<div class="container">
  <div class="col" id="dom-container">
    <h1 id="container-heading">Waiting to load user profile.</h1>
    <button
      class="btn btn-primary mt-2"
      onclick="load_user_profile(this, '{{ request.user.id }}')">
      Load User Profile
    </button>
  </div>
</div>

JavaScript/jQuery

function build_row(thead, tdata) {
  let row = $("<tr>");
  row.append($("<th>").attr("scope","row").html(thead));
  row.append($("<td>").html(tdata));
  return row
}

function load_user_profile(field, user_id) {
  $.get("/api/users/users/".concat(user_id, "/"), function(data, status) {
    // Delete the `Load User Profile` button element
    field.remove()

    // Change header
    $("#container-heading").html("User Profile");

    // Build table
    let table = $("<table>").addClass("table");
    table.append(build_row("Name", data.display));
    table.append(build_row("Email", data.email));
    table.append(
      build_row(
        "Admin Panel Access",
        (data.is_staff || data.is_superuser) ? "Enabled" : "Disabled"
      )
    );
    table.append(
      build_row(
        "Super User",
        (data.is_superuser) ? "Enabled" : "Disabled"
      )
    );

    // Append table to div
    $("#dom-container").append(table);
  });
}

In this example we have almost the same base HTML as we did for the HTMX post. However, we do not have a User Profile template. Instead we are applying an onclick event to the button and passing in the button element and templating the user ID from Django. On click, I trigger the load_user_profile, and the first task is to remove the button from the DOM with a remove() function call. Next, I access the <h1> element via a jQuery ID Selector and change the innerHTML to User Profile. After changing the <h1>, I start building the table in memory with jQuery Create Selectors, in which the table row creation is wrapped with another function that creates the <tr>, <th>, and <td> elements. Once they are returned, I append them to the table. Lastly, after the table is fully built, I append the table to the <div> with an ID of dom-container. This is a fairly simplistic mix of jQuery and business logic in JavaScript to accomplish the same end result we had working with HTMX.

Example 2 - Input Validation

User Does Not Exist

User Does Not Already Exist

User Does Exist

User Does Exist

HTML Form

<form method="POST" action="/create-user/" autocomplete='off'>
  <div class="form-group mb-3">
    <label>Username</label>
    <input type="text"
      id="check_user"
      class="form-control"
      name="user">
    <div class="text-danger mt-2" style="color:red;" id="user-err"></div>
  </div>
  <button type="submit" class="btn btn-success mt-2">Create User</button>
</form>

JavaScript/jQuery

$("#check_user").keyup(function() {
  $.get("/api/users/users/?username=".concat(this.value), function(data, status){
    let err_div = $("#user-err")
    if (data.count == 1) {
      err_div.html("This username already exists");
    } else if (data.count == 0) {
      err_div.empty();
    };
  });
})

In this example I am using the keyup jQuery event applied to the check_user input to inform the browser to trigger a JavaScript function that calls the underlying Users API with a query param of the username field passed in. This makes the assumption that the query param should return only one instance of the User object when we have an exact match; otherwise, there should be zero instances returned. I could have also performed the selection via an element selector limited by the name attribute, $("input[name='user']"). But this could in theory return more than one element, and when I access a specific element, I prefer to access it via an ID.
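The exact-match behavior assumed above depends on the filterset attached to the viewset. A minimal sketch of what that UserFilterSet might look like, assuming django-filter and the default exact lookup (the actual implementation is not shown in the post), is:

# filtersets.py (hypothetical)
import django_filters
from django.contrib.auth import get_user_model

User = get_user_model()

class UserFilterSet(django_filters.FilterSet):
    class Meta:
        model = User
        # The default lookup is an exact match, so ?username=<value> returns
        # at most one result for a unique username field.
        fields = ["username"]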

Summary

It’s been a moment since I had the opportunity to write jQuery, and I have been surprised by how much I enjoyed writing predominantly jQuery with a small amount of raw JavaScript sprinkled in. For those that know me, you know that although I know some JavaScript, it is not my favorite language to develop in. Maybe I will warm up to JavaScript a little more by the end of this evaluation??? Or I could forever stay in Python development.

~ Jeremy

Django & JavaScript - Part 2 HTMX
2022-04-26

This is the second post in a multi-part series that explores web front-end technologies and how they can be used with Django. You can find the initial post here: Django & JavaScript - Introduction.

Requirements

To hit home, I will be mentioning the requirements to satisfy in each blog post, no different than in the previous post.

  1. Single process to host the UI & API
  2. Django + Django REST framework
  3. Overarching web framework from day 1
  4. Intuitive UI/UX that can be dynamically updated without loading a new page

HTMX

HTMX provides a simplified set of HTML attributes that provide the ability to access modern browser features without having to directly write JavaScript. Behind the scenes, HTMX is a JavaScript-driven package that relies on Ajax for HTTP requests to the underlying host and manipulation of HTML in the DOM. HTMX is built on the concept of server side processing and templating of HTML that is then injected into or replaces HTML elements without writing JavaScript. This provides a lower barrier for entry on developing more responsive modern web experiences, however it’s meant for the server to respond with HTML instead of JSON data. This can have its trade-offs, mainly when taking an approach of every model having a DRF-driven API, in which case there may be additional sets of views to build and manage that support each interactive component as we are performing server side processing of the data. This level of flexibility is a great fit for both single page and multi page applications.

Triggers

By default, triggers are based on the “natural” event for the specific element type. input, textarea, and select elements are triggered from change events; whereas form is triggered on submit, and all others are triggered based off of click events. If there is a need to overload the default behavior, you can set the hx-trigger attribute to some of the following triggers. It is also possible to chain triggers together and to modify a trigger with an event modifier.

  • load - triggered on load
  • revealed - triggered when an element is scrolled into the viewport
  • intersect - fires once when an element first intersects the viewport
  • click - mouse click
  • click[ctrlKey] - a click while the ctrl key is held; the bracketed filter can reference any event property, and conditions can be chained together via &&
  • every 1s - polled interval, must be used with a time declaration
  • keyup - on key up from typing

Additional triggers can be found in HTMX documentation.

Actions

Actions are the HTTP method for the request and are set via the hx-<method> attribute, with the value being the requested URL.

  • hx-get Issues a GET request to the given URL
  • hx-post Issues a POST request to the given URL
  • hx-put Issues a PUT request to the given URL
  • hx-patch Issues a PATCH request to the given URL
  • hx-delete Issues a DELETE request to the given URL

Targets

hx-target identifies the HTML element in which to swap the HTML once the web server responds. The default target behavior is to interact with the HTML element where the action is triggered, but I find most times that is not fit for my purposes unless I change the swapping behavior. To override the default behavior, the target will accept a CSS selector. In the examples below, the selection is based on an element ID, represented by hx-target="#<element ID>".

Swapping

Swapping refers to how the HTML is swapped inside the DOM upon receiving the HTTP response. The default behavior is innerHTML but can be overwritten via the hx-swap attribute.

  • innerHTML the default, puts the content inside the target element
  • outerHTML replaces the entire target element with the returned content
  • afterbegin prepends the content before the first child inside the target
  • beforebegin prepends the content before the target in the target’s parent element
  • beforeend appends the content after the last child inside the target
  • afterend appends the content after the target in the target’s parent element
  • none does not append content from response, but does process the response headers and out of band swaps (see HTMX documentation)

Example 1 - Build DOM from Button Click

Initial Page Without Profile Data

Get User Profile

Page with Profile Data

User Profile

Initial HTML

<div class="container">
  <div class="col" id="dom-container">
    <h1>Waiting to load user profile.</h1>
    <button class="btn btn-primary mt-2"
      hx-get="/user/profile/"
      hx-target="#dom-container">
      Load User Profile
    </button>
  </div>
</div>

User Profile HTML


<h1>User Profile</h1>
<table class="table">
  <tr>
    <th scope="row">Name</th>
    <td>{{ request.user.name }}</td>
  </tr>
  <tr>
    <th scope="row">Email</th>
    <td>{{ request.user.email }}</td>
  </tr>
  <tr>
    <th scope="row">Admin Panel Access</th>
    <td>{% if request.user.is_staff or request.user.is_superuser %}Enabled{% else %}Disabled{% endif %}</td>
  </tr>
  <tr>
    <th scope="row">Super User</th>
    <td>{% if request.user.is_superuser %}Enabled{% else %}Disabled{% endif %}</td>
  </tr>
</table>

Template View

class UserProfileView(TemplateView):
    template_name = "user-profile.html"

In this example we have a simple User Profile page that initially loads without any data. The data will be loaded via the Load User Profile button, which is triggered by a default hx-trigger of click. The action performed is hx-get to /user/profile, which makes a GET call to the URL, and the web server responds with the rendered table. Upon receiving the response, hx-target tells the browser to perform the hx-swap action of swapping the innerHTML of the element with an ID of dom-container.

Example 2 - Input Validation

User Does Not Exist

User Does Not Already Exist

User Does Exist

User Does Exist

HTML Form

<form method="POST" action="/create-user/" autocomplete="off">
  <div class="form-group mb-3">
    <label>Username</label>
    <input type="text"
      hx-trigger="keyup"
      hx-target="#user-err"
      hx-post="/check-user-exists/"
      class="form-control"
      name="user">
    <div class="text-danger mt-2" style="color:red;" id="user-err"></div>
  </div>
  <button type="submit" class="btn btn-success mt-2">Create User</button>
</form>

The name attribute will be used when accessing the field from the form in Django. The hx-trigger event in this case is every keyup event in the input field. The action is hx-post, which performs a POST to /check-user-exists/. On response from the web server, the browser replaces the innerHTML of the div user-err with the response data, using the default innerHTML hx-swap.

Simple Function Based View

def check_username(request):
    username = request.POST.get("user")
    if User.objects.filter(username=username).exists():
        return HttpResponse("This username already exists")
    else:
        return HttpResponse("")

In the first screenshot, the expected behavior is that the user-err div innerHTML is replaced with an empty string because the queryset returned False when applying the .exists() function. In the second, the .exists() function returned True, which replaces the innerHTML with This username already exists.
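For reference, below is a minimal urls.py sketch that would wire up the two endpoints used in these examples; the module paths and route names are hypothetical, chosen only to match the URLs referenced above.

# urls.py (hypothetical)
from django.urls import path

from .views import UserProfileView, check_username

urlpatterns = [
    # Rendered table fragment returned to the hx-get in Example 1
    path("user/profile/", UserProfileView.as_view(), name="user-profile"),
    # Plain-text response consumed by the hx-post in Example 2
    path("check-user-exists/", check_username, name="check-user-exists"),
]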

Summary

HTMX is a very powerful and scalable solution from the front-end perspective that has a very low barrier for entry when it comes to JavaScript development. This post barely scratches the surface when it comes to HTMX but is enough to be dangerous. Of the frameworks I have worked with in my past, it has been a great breath of fresh air as a Python developer to not have to worry about writing a single line of JavaScript.

~ Jeremy

Django & JavaScript - Introduction
2022-04-19

Over the course of my career I have built a number of web portals, most of which have been designed by the seat of my pants and based on a series of decisions that I later wish I had spent more time thinking through. Early stages were always multi page applications where every page transition involved a link to another page or a form post. As things evolved I would start sprinkling in some JavaScript, and later a need to perform an additional call to enrich data post page load would pull in jQuery. Although the end solution of a multi page application enriched by JavaScript + jQuery was not a bad combination, I would always get to a point where I contemplated whether my end goal would have been better served with an overarching framework like Vue.js or React JS. My next project is giving me the opportunity to take some additional time in the design phase to fully evaluate the right tool for the job. Ride along with me in this evaluation, which I will be performing as a four-part series, to find the solution I feel best suits what I am trying to achieve.

Requirements

  1. Single process to host the UI & API
  2. Django + Django REST Framework
  3. Overarching web framework from day 1
  4. Intuitive UI/UX that can be dynamically updated without loading a new page

Multi Page Applications

Multi Page Applications (MPAs) are akin to traditional web applications and are built on a design philosophy where the heavy lifting is performed server side and the client side only has to worry about displaying what is being sent. This was initially the design pattern of choice for earlier web pages, as older browsers had limited support for heavy client side scripting, and this influenced web design for several years. MPAs are still a predominant design pattern but are no longer limited to server side scripting.

Single Page Application

A single page application (SPA) is as the name suggests: the webpage is built off of one underlying page where interactive components are used to build and destroy HTML elements. The initial page is commonly served as a static page or with minimal templating server side, as all the magic happens client side via JavaScript. A slight variation of an SPA is more of a hybrid approach where there are a few purpose pages that are treated as their own apps. An example would be having the main application based on one page and the administrative portal being its own page.

One misconception with this pattern is that it negates the need for URL routing. Although the SPA name suggests having only one page, it is common for how you navigate the application to impact what is displayed, and getting back to that view can be highly frustrating if it can’t be shared via a link or bookmarked. Most solutions have the ability to perform some level of URL routing to inform the client side app what to render but do require some additional effort to accomplish this.

JavaScript Frameworks

In each of the following sections I will be introducing the libraries/frameworks I will be evaluating. The subsequent blog posts will go into more detail on the differences and the underlying rationale for how each stacks up in my view of the success of the project.

Raw JavaScript + jQuery

By building the site based on raw JavaScript enriched by jQuery, I am able to provide a tremendous amount of flexibility due to fewer opinionated constraints to be beholden to. This simplistic solution can take a hybrid approach of data processing and rendering on both client and server side. But issues start to become apparent as a project scales out. The continual need to add large amounts of JavaScript that is not constrained by an opinionated framework can lead to inconsistencies and tech debt to manage long term.

HTMX

HTMX provides a simplified set of HTML attributes that provide the ability to access modern browser features without having to directly write JavaScript. Behind the scenes HTMX is a JavaScript-driven package that relies on Ajax for HTTP requests to the underlying host that is then used for manipulation of HTML in the DOM. HTMX is built on the concept of server side processing and templating of HTML that is then injected into or replaces HTML elements without writing JavaScript. This provides a lower barrier for entry on developing more responsive modern web experiences; however, it is meant for the server to respond with HTML instead of JSON data. This can have its trade-offs, mainly when taking the approach where every model has a DRF driven API. In that case there may be additional sets of views to build and manage that support each interactive component as we are performing server side processing of the data. This level of flexibility is a great fit for both SPAs and MPAs that are managed by teams that are stronger with back-end solutions than front-end frameworks.

Vue.js and React JS

JSX (JavaScript XML) is a JavaScript syntax extension that allows you to embed HTML and CSS into JavaScript. JSX also has templating capabilities which do not conflict with Django templating.

In Vue.js it is common to see NodeJS serving the web front end or another solution hosting the HTML as static files with all templating/rendering being performed client side. This dynamic is a common pattern in SPAs but can also work in MPAs. Having an ORM-driven API like Django/DRF can be very powerful in these scenarios. But why is it not as common to see Django serve both the front and back end of these deployments? The simplistic answer comes down to using each framework to its full potential, the templating languages for Vue.js and Django having overlap, along with additional conditions. One simple way of overcoming the templating overlap is to change the delimiter characters for Vue.js. This simple approach allows both templating languages to exist in harmony (or use JSX instead of Vue.js HTML templating). An additional option is to abandon the use of Django templating in favor of Vue.js, having Django serve static files for the UI and Vue.js call the API. Two common options for performing these API calls to the back end are Axios and the Fetch API.

React JS is similar in vein to Vue.js, where it is common to see SPA front ends built on either framework. One key difference is that React, unlike Vue.js, does not have a native HTML templating language; its primary focus for DOM manipulation is JSX. React also commonly leverages additional libraries for API calls.

I will be evaluating both Vue.js and React, but I’ll be making sure to use separate methodologies for each. For Vue.js I will be using Axios for API calls and exploring HTML templating with both Django and Vue.js. With React, Fetch will be the API client, and I will be leveraging JSX for DOM manipulation.

Stay Tuned

Over the coming weeks expect to see three additional posts where I evaluate each solution and try to keep the same or similar examples to help keep the evaluation an even playing field for my project.

~ Jeremy White

Automation Principles - Data Normalization
2022-04-12 | https://blog.networktocode.com/post/Principle-Series-Data-Normalization

This is part of a series of posts to help establish sound Network Automation Principles.

Providing a common method for various systems to interact is a fairly pervasive idea throughout technology. Within your first days of learning about traditional networking, you will inevitably hear about the OSI model. The concept is that each layer provides an interface from one layer to another. The point is, there must be an agreement between those interfaces.

The same is true with data, which poses a problem within the Network space, as most interfaces to the network are via vendor-specific CLI, API, etc. This is what makes a uniform YANG model via an open modeling standard, such as OpenConfig or the IETF models, so attractive.

The problem with adoption of such a standard is multifaceted. While cynics believe it is a vendor ploy to maintain vendor lock-in, I think it is a bit more nuanced than that. Without spending too much time on the subject, here are some points to consider.

  • Vendor-neutral models are by nature complex, as they should contain the superset of all features
  • Complexity makes it more difficult to use the product
  • Despite the complexity, vendor-neutral models always seem to lack a core feature to any given vendor
  • Vendors have to extend models to support “their differentiators” or features, which is complex and subject to future issues
  • Data does not always map easily from the vendor’s model to any other model
  • All of this makes it complex to actually build out, if these features are not built from within the OS to start with

That’s a huge topic; this was only a 30,000-foot view of some pros and cons, so there is no reason to dive deeper now.

Data Normalization in Computer Science

The construct of agreed-upon interfaces has many names in Computer Science, depending on the context. While not all of them relate to data specifically, the concept remains the same.

  • Interface - as the generic term to define how two systems connect, not to be confused with a “network interface”.
  • API (or Application Programming Interface) - which is not always a REST API, is an agreed upon interface.
  • Contract - as a term to reinforce the idea that there is an agreed-upon standard between two systems.
  • Signature - as a type-enforced definition of a function.

As mentioned, some of these terms are specific to a context, such as signature being more associated with a function, but these are all terms you will hear often to describe the same basic concept.
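As a tiny illustration of that last point, a Python function signature (sketched below with type hints; the function itself is hypothetical) acts as the agreed-upon contract between the caller and the implementation:

# Illustrative only: the names, parameter types, and return type form the contract.
def get_arp_entry_count(arp_table: list[dict], interface: str) -> int:
    """Count ARP entries learned on a given interface."""
    return sum(1 for entry in arp_table if entry["interface"] == interface)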

Data Normalization in NAPALM

NAPALM provides a series of “getters”; these are essentially what a Network Engineer would call “show commands”, returned as normalized, structured data. Let’s observe the following example, taken from the get_arp_table docstring.

Returns a list of dictionaries having the following set of keys:
    * interface (string)
    * mac (string)
    * ip (string)
    * age (float)

Example::

    [
        {
            'interface' : 'MgmtEth0/RSP0/CPU0/0',
            'mac'       : '5C:5E:AB:DA:3C:F0',
            'ip'        : '172.17.17.1',
            'age'       : 1454496274.84
        },
        {
            'interface' : 'MgmtEth0/RSP0/CPU0/0',
            'mac'       : '5C:5E:AB:DA:3C:FF',
            'ip'        : '172.17.17.2',
            'age'       : 1435641582.49
        }
    ]

What you will observe here is that there is no mention of a vendor, and there is seemingly nothing unique about this data to tie it to any single vendor. This allows the developer to make programmatic decisions in a single way, regardless of vendor. The way in which the data is normalized is up to the author of the specific NAPALM driver.

import sys
from napalm import get_network_driver
from my_custom_inventory import get_device_details

# Look up the connection details (network OS, IP, credentials) from a fictional inventory source.
network_os, ip, username, password = get_device_details(sys.argv[1])

# The same getter is available regardless of which NAPALM driver is returned.
driver = get_network_driver(network_os)
with driver(ip, username, password) as device:
    arp_table = device.get_arp_table()

# The normalized keys (interface, mac, ip, age) are identical across vendors.
for arp_entry in arp_table:
    if arp_entry['interface'].startswith("TenGigabitEthernet"):
        print(f"Found 10Gb port {arp_entry['interface']}")

From the above snippet, you can see that regardless of what the fictional function get_device_details returns for a valid network OS, the process remains the same. The hard work of performing the data normalization still has to happen within the respective NAPALM driver. That may mean connecting to the device, running CLI commands, and then parsing; or it could mean making an API call and transposing the data structure from the vendor's format to what NAPALM expects.
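As a hedged sketch of what that transposition can look like inside a driver (the vendor payload and key names below are invented for illustration), the goal is simply to map vendor-specific fields onto the normalized keys documented above:

# Hypothetical vendor API payload; real payloads vary by platform and driver.
vendor_arp_response = {
    "arp_entries": [
        {"port": "MgmtEth0/RSP0/CPU0/0", "hw_addr": "5C:5E:AB:DA:3C:F0",
         "address": "172.17.17.1", "age_sec": "110"},
    ]
}


def normalize_arp_table(payload):
    """Transpose a vendor-specific structure into NAPALM's normalized ARP keys."""
    normalized = []
    for entry in payload["arp_entries"]:
        normalized.append(
            {
                "interface": entry["port"],
                "mac": entry["hw_addr"],
                "ip": entry["address"],
                "age": float(entry["age_sec"]),
            }
        )
    return normalized


print(normalize_arp_table(vendor_arp_response))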

Configuration Data Normalization

Considerations for building out your own normalized data model:

  • Do not follow the vendor’s syntax; this simply pushes the problem along
  • Having thousands of configuration “nerd knobs” requires expert-level understanding of a data model that will likely not be as well documented or understood as the vendor’s CLI. The point is to remove complexity, not shift complexity
  • Express the business intention, not the vendor configuration
  • Abstract the uniqueness of the OS implementation away from the intent
  • Express the greatest amount of configuration state in the least amount of actual data

Generally speaking, when I am building a data model, I try to build it to be normalized. While that is not always achievable on day one (due to lacking a complete understanding of the requirements, or lacking imagination), the thought process is always there. Even if I am dealing with a single vendor, the first question I will ask is “would this work for another vendor?”

Reviewing the following configurations from multiple vendors:

I will pick out what is unique from the configuration, and thus a variable.

Based on observation of the above configurations, the following normalized data structure was created.

bgp:
  asn: 6500
  networks:
    - "1.1.1.0/24"
    - "1.1.2.0/24"
    - "1.1.3.0/24"
  neighbors:
    - description: "NYC-RT02"
      ip: "10.10.10.2"
    - description: "NYC-RT03"
      ip: "10.10.10.3"
    - description: "NYC-RT04"
      ip: "10.10.10.4"

With this data-normalized model in mind, you can quickly see how the below template can be applied.

router bgp {{ bgp['asn'] }}
  address-family ipv4 unicast
{% for net in bgp['networks'] %}
    network {{ net }}
{% endfor %}
{% for neighbor in bgp['neighbors'] %}
  neighbor {{ neighbor['ip'] }} remote-as {{ bgp['asn'] }}
    description {{ neighbor['description'] }}
    address-family ipv4 unicast
{% endfor %}

In this example, the Jinja template provides the glue between a normalized data model and the vendor-specific configuration. Jinja is just used as an example; this could just as easily be converted to an operation with a REST API, NETCONF, or any other vendor's syntax.
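As a small, hedged sketch of that glue in Python (assuming the data model and template above are saved as bgp.yml and bgp.j2; the file names are illustrative), rendering the configuration is only a few lines:

import yaml
from jinja2 import Environment, FileSystemLoader

# Load the normalized data model shown above.
with open("bgp.yml") as fh:
    data = yaml.safe_load(fh)

# Render the vendor-specific template against the normalized model.
env = Environment(loader=FileSystemLoader("."), trim_blocks=True)
config = env.get_template("bgp.j2").render(bgp=data["bgp"])
print(config)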

The Case for Localized Simple Normalized Data Models

Within Network to Code, we have found that simple normalized data models tend to get more traction than more complex ones. While it is clear that each enterprise building its own normalized data model is not exactly efficient (each organization ends up reinventing the wheel), the adoption gained tends to offset that inefficiency.

Perhaps there is room within the community for some improvement here, such as creating a venue to easily publish data models and have others consume them. This can serve as inspiration, a starting point, and a means of comparing different normalized data models without the rigor required for the solidified data models that would come from OpenConfig/IETF.

Data Normalization Enforcement

There is no shortage of tools for enforcing normalized data models. This will be covered in more detail in the Data Model blog, but here are a few:

  • JSON Schema
  • Any relational database
  • Kwalify

You may even find a utility called Schema Enforcer valuable if you’re looking at using JSON Schema for data model enforcement within a CI pipeline. Check out this intro blog if you’re interested.
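As an illustration of that kind of enforcement (a minimal sketch; the schema below is intentionally incomplete and covers only the bgp example from earlier), JSON Schema validation in Python looks roughly like this:

from jsonschema import ValidationError, validate

# Illustrative schema for the normalized bgp model shown earlier.
bgp_schema = {
    "type": "object",
    "required": ["asn", "networks", "neighbors"],
    "properties": {
        "asn": {"type": "integer"},
        "networks": {"type": "array", "items": {"type": "string"}},
        "neighbors": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["ip", "description"],
                "properties": {
                    "ip": {"type": "string"},
                    "description": {"type": "string"},
                },
            },
        },
    },
}

bgp_data = {
    "asn": 6500,
    "networks": ["1.1.1.0/24"],
    "neighbors": [{"ip": "10.10.10.2", "description": "NYC-RT02"}],
}

try:
    validate(instance=bgp_data, schema=bgp_schema)
    print("Data model is valid")
except ValidationError as err:
    print(f"Data model failed validation: {err.message}")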

Conclusion

There are many ways to normalize data, and many times the vendor syntax or output is nearly the same, meaning you may be able to reuse the exact same code from one vendor to another. By creating normalized data models, you can better prepare for future use cases, remove some amount of vendor lock-in, and provide a consistent developer experience across vendors.

Creating normalized data models takes some practice to get right, but it is a skill that can be honed over time and truly provide a richer experience.

-Ken

Nautobot Ansible Variable Management at Scale
2022-04-05 | https://blog.networktocode.com/post/nautobot-ansible-variable-management-at-scale

Ansible’s variable management system is fairly extensible; however, the cost of that extensibility and the associated design choices can cause some inefficiencies when interacting with external systems. Specifically, it can become rather time consuming to gather all of the data a playbook requires at run time. Leveraging Nautobot’s GraphQL support and the Nautobot Ansible collection, we will explore what an optimal solution would be.

A Quick Lesson in Ansible Inventory

Initially, Ansible supported Dynamic Inventory scripts, which would print to the terminal a JSON-serializable structure in a specific format. With the release of Ansible 2.4 came support for Ansible Inventory Plugins, which provide a more object-oriented and Pythonic experience, as well as a separation of the configuration (generally via YAML files) from the inventory itself.

With both of these dynamic inventory types, as well as any static inventory, the inventory must be compiled before the play runs. This means that all inventory and variables are collected before a playbook is run. If your playbook needs to connect to only a single device and needs a single configuration parameter, the entire inventory and all variables must still be compiled, the same as if the playbook had to connect to thousands of devices with dozens of variables each.

This design certainly has its advantages, such as Ansible’s use of the hostvars (not to be confused with host_vars) magic variable. Meaning, even if you need to connect to only a single device, you can still have access to another device’s variables. This allows you to do something like:

  - name: "SET SPINE_INTERFACE BY LOOKING INTO THE SPINES VARIABLE STRUCTURE"
    set_fact:
      spine_interface: "{{ hostvars[inventory_hostname[:5] ~ 'spine01']['interface_mappings'][inventory_hostname] }}"

However, such a requirement is not often needed, and it is perfectly valid to provide an alternative solution without such a feature, as we will explore.

The Speed Issue

It is obvious that the standard design causes a host of speed issues when not all variables are required. Within Nautobot, collecting all of the interfaces and config context for thousands of devices could literally take hours. This is because the queuing mechanism looks something like this:

Normal Ansible Queuing

In this example, it could potentially take hundreds or even thousands of API calls before the first task runs, and all of that data needs to be stored in memory. This is true even if the only data we require actually looks like:

Required Data Needs

GraphQL to the Rescue

Recognizing the speed issues, at Network to Code we have worked with our customers for years on various work-arounds, which was one of the drivers for introducing GraphQL to Nautobot. What we have observed from dozens of engagements with our customers is:

  • Playbooks rarely need access to all data
  • There is generally a single “generate configuration” playbook that does need access to all data
  • There are usually different ways data may need to be requested
  • Managing separate inventories is complicated and leads to issues
  • The primary issue is the way in which data is queued, with Ansible expecting all data to be queued beforehand

With that in mind, we looked to change the way that variables are populated; this is different from saying that we looked to change how the inventory plugin works. The basic premise is to get the bare minimum inventory from the inventory plugin and then populate the data within the play itself. The direct benefit is that the inventory requires far less data (which must be present before any task in the play is run) before starting. We have also distributed the data collection into smaller API calls made while the play is running. Additionally, if we do not need all of the data, we simply do not fetch it at all.

It is in that second step that GraphQL really shines. GraphQL provides a single API that can be called to send only the data that is required. There is a lookup and an Ansible module within Nautobot’s Ansible Collection. This means that we can use a single inventory setup for all of our playbooks and have specific tasks to get the data required for specific playbooks. We can also change the majority of the API calls to happen per device rather than all up front. This has a significant performance impact, as bombarding the server with hundreds or thousands of API calls at once can cause performance issues—not only for the user of Ansible, but potentially deteriorating the performance of the server for everyone else.

GraphQL Two Devices

Even when you do require all of the data, it looks more like this (where time is left-to-right and scaled to your actual needs):

GraphQL All Data

Note: The depicted batch size is equal to the fork size you have chosen. There are also alternative Ansible strategies one can explore outside the scope of this blog.

Note: The API calls shown are not meant to represent the actual amount a production instance may have, but merely to illustrate the point.

Example Playbook and Inventory

So let’s take a look at what such a playbook and inventory may look like.

plugin: networktocode.nautobot.inventory
api_endpoint: "https://demo.nautobot.com"
validate_certs: False

config_context: False
plurals: False
interfaces: False
services: False
racks: False
rack_groups: False

compose:
  device_id: id

group_by:
  - site
  - tenant
  - tag
  - role
  - device_type
  - manufacturer
  - platform
  - region
  - status

A playbook to obtain and populate the data could look like:

---
- name: "TEST NAUTOBOT INVENTORY"
  connection: "local"
  hosts: "all"
  gather_facts: "no"

  tasks:
      - name: "SET FACT FOR QUERY"
        set_fact:
          query_string: |
            query ($device_id: ID!) {
              device(id: $device_id) {
                config_context
                hostname: name
                position
                serial
                primary_ip4 {
                  id
                  primary_ip4_for {
                    id
                    name
                  }
                }
                tenant {
                  name
                }
                tags {
                  name
                  slug
                }
                device_role {
                  name
                }
                platform {
                  name
                  slug
                  manufacturer {
                    name
                  }
                  napalm_driver
                }
                site {
                  name
                  slug
                  vlans {
                    id
                    name
                    vid
                  }
                  vlan_groups {
                    id
                  }
                }
                interfaces {
                  description
                  mac_address
                  enabled
                  name
                  ip_addresses {
                    address
                    tags {
                      id
                    }
                  }
                  connected_circuit_termination {
                    circuit {
                      cid
                      commit_rate
                      provider {
                        name
                      }
                    }
                  }
                  tagged_vlans {
                    id
                  }
                  untagged_vlan {
                    id
                  }
                  cable {
                    termination_a_type
                    status {
                      name
                    }
                    color
                  }
                  tagged_vlans {
                    site {
                      name
                    }
                    id
                  }
                  tags {
                    id
                  }
                }
              }
            }

      - name: "GET DEVICE INFO FROM GRAPHQL"
        networktocode.nautobot.query_graphql:
          url: "{{ nautobot_url }}"
          token: "{{ nautobot_token }}"
          validate_certs: False
          query: "{{ query_string }}"
          update_hostvars: "yes"
          graph_variables:
            device_id: "{{ device_id }}"

The above shows update_hostvars set, which will publish the variables for any playbook task after this point. Within a playbook that starts like the above, you would have access to the data. If the playbook did not have any requirements for the above data, you would simply not include such tasks.

Life without GraphQL

Without GraphQL the same can still be accomplished. In the past at Network to Code, we have used Ansible custom modules. Within the custom module you can populate the ansible_facts key, which will actually update the data associated with a device. So if a custom Ansible module had the below code:

    results = {"ansible_facts": {"ntp": ["1.1.1.1", "2.2.2.2"]}}
    module.exit_json(**results)

you could have access to the data in the playbook as usual, such as:

  - debug: var=ntp
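For context, a complete (but hypothetical) custom module built around that snippet could look like the following; the hard-coded NTP servers are purely illustrative, where a real module would query an external system of record:

#!/usr/bin/python
# Hypothetical library/get_ntp_facts.py: publishes ansible_facts for the current host.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec={}, supports_check_mode=True)
    # A real module would look this data up in an external system of record.
    results = {"ansible_facts": {"ntp": ["1.1.1.1", "2.2.2.2"]}, "changed": False}
    module.exit_json(**results)


if __name__ == "__main__":
    main()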

Inventory Recommendations

As you will notice, the example inventory is minimal. The basic premise is that you should disable any data not required to create groups and, generally speaking, retain only the minimum amount of information required to connect to the device, such as IP address and network OS.

Conclusion

Changing the queuing mechanism has a dramatic effect on overall speed, and Nautobot’s ecosystem was built to take advantage of these capabilities. But that is not the only way to accomplish this, as you could build a custom module as well. When thinking about performance and scalability of the data, you should consider a lightweight inventory and more detailed data gathering at the task level.

-Ken Celenza

Intro to Pandas (Part 3) - Forecasting the Network
2022-03-29 | https://blog.networktocode.com/post/forecasting-the-network

Forecasting is a fascinating concept. Who does not want to know the future? Oracles from ancient times, a multitude of statistical forecasting models, and machine learning prediction algorithms have one thing in common: the thirst to know what is going to happen next. As fascinating as forecasting is, it is not an easy conquest. There are phenomena that can be predicted because we understand what causes them and we have a large amount of historical data. An example is electricity consumption: it exhibits seasonality and predictability. On the other hand, there are phenomena that are difficult to predict, such as market trends that depend on human emotion and unpredictable world events (wars, for example).

Where does the network fall in the spectrum of forecasting ease and accuracy? How easily and effectively can we predict the next outage, a big dip in performance, or an anomaly that may point to an attack? Starting from the assumption that we have a large amount of data (and events mostly depend on machine behavior), the network can be quite predictable. A variety of events, such as outages, are predictable—some planned and some caused by happenstances, such as an overload or human error.

Like any human, the network engineer would like to have an oracle at their disposal to let them know about future occurrences of important events. Deciding on the size and availability of network resources based on forecasting traffic and usage models, knowing how often one should update or reconfigure with minimal disruption, and planning maintenance windows based on traffic patterns are some powerful use cases for a network operator. Hence this blog, which gives the network engineer programmatic tools to automate forecasting of the network with Python Pandas.

Prerequisites

This blog is part of a series. You can read this independently of the series if you are familiar with Pandas and how to use Jupyter notebooks. However, you can start your journey from the beginning, especially if you want to actively read and work out the examples. I recommend starting with Jupyter Notebooks for Development and then Introduction to Pandas for Network Development. You can also read Intro to Pandas (Part 2) - Exploratory data analysis for network traffic; however, that part is not necessary in order to understand forecasting.

What Is Statistical Forecasting?

Statistical forecasting is the act of creating a model to predict future events based on past experience, with a certain degree of uncertainty. In this blog, we will focus on statistical forecasting methods. A variety of machine learning forecasting methods are analyzed in other blogs; however, simple is better, as has been shown by studies over the past 40 years in the M competitions and their analysis. Statistical methods are less computationally complex, and the best machine learning fitting methods are not always optimal for forecasting.

Basic Forecasting Methods

Below is a list of basic forecasting methods and their definitions:

  • Straight line: this is a naive prediction that uses historical figures to predict growth and only applies to an upward trend.
  • Moving averages: one of the most popular methods that takes into account the pattern of data to estimate future values. A well known implementation of moving averages is the Auto Regressive Integrated Moving Average (ARIMA).
  • Linear regression: in this case as well, a straight line is fitted to the data; however this time it can predict upward or downward trends.
  • Multiple linear regression: if we want to use two or more variables to predict the future of another variable, for example use holidays and latency to predict network traffic patterns, multiple linear regression is our friend.

We will review implementations of the two most popular techniques: moving averages and linear regression with Pandas libraries.

How to Implement Forecasting

These basic steps are part of almost every forecasting implementation:

  • Preprocessing: this may include removing NaN values, adding metadata, or splitting your data into two distinct parts: the training data, used to build the model, and the test data, used to validate its predictions. How to split your data is a whole article or two on its own: should you split the data in half, in random chunks, etc.?
  • Pick your poison…ehm…model: this may be the most difficult part, and some Exploratory Data Analysis may be required to pick a good algorithm.
  • Analyze the results: analysis is usually performed visually with graphical methods.
  • Iterate: periodic fine-tuning of the forecasting method may include changing algorithm parameters.

Forecasting the Network Example

Now that we know the basics about the theory of forecasting, let’s implement all the steps and apply moving averages and linear regression to a network dataset.

Dataset

The dataset that we will use is the Network Anomaly Detection Dataset. It includes Simple Network Management Protocol (SNMP) monitoring data. SNMP is the de facto protocol when it comes to telemetry for network appliances, and it can track a variety of interesting data related to machine performance, such as bytes in/out, errors, packets, connection hits, etc.

You will find the code referenced in the examples at the Pandas Blog GitHub repository.

Preprocessing

Preprocessing of the data includes cleaning and adding metadata. We need to add dates to this specific dataset.

We begin with the necessary imports and loading the csv file to a Pandas data frame:

import numpy as np
import pandas as pd

network_data = pd.read_csv("../data/network_data.csv")
network_data.columns

Index(['ifInOctets11', 'ifOutOctets11', 'ifoutDiscards11', 'ifInUcastPkts11',
       'ifInNUcastPkts11', 'ifInDiscards11', 'ifOutUcastPkts11',
       'ifOutNUcastPkts11', 'tcpOutRsts', 'tcpInSegs', 'tcpOutSegs',
       'tcpPassiveOpens', 'tcpRetransSegs', 'tcpCurrEstab', 'tcpEstabResets',
       'tcp?ActiveOpens', 'udpInDatagrams', 'udpOutDatagrams', 'udpInErrors',
       'udpNoPorts', 'ipInReceives', 'ipInDelivers', 'ipOutRequests',
       'ipOutDiscards', 'ipInDiscards', 'ipForwDatagrams', 'ipOutNoRoutes',
       'ipInAddrErrors', 'icmpInMsgs', 'icmpInDestUnreachs', 'icmpOutMsgs',
       'icmpOutDestUnreachs', 'icmpInEchos', 'icmpOutEchoReps', 'class'],
      dtype='object')

The table column titles printed above include characteristic SNMP data (such as TCP active open connections, input/output packets, and UDP input/output datagrams) that offer a descriptive picture of performance status and potential anomalies in network traffic. After this we can add a date column or any other useful metadata. Let’s keep it simple here and add dates spaced evenly by day, one for each row of the data (using the length of the ipForwDatagrams column):

dates = pd.date_range('2022-03-01', periods=len(network_data["ipForwDatagrams"]))

We are ready to review the fun part of forecasting, by implementing Moving Average.

Moving Average

Pandas has a handy function called rolling that can shift through a window of data points and perform a function on them, such as an average or a min/max function. Think of it as a sliding window for data frames, where the slide is always of size 1 and the window size is the first parameter of the rolling function. For example, if we set this parameter to 5 and the function to average, we will calculate 6 averages in a dataset with 10 data points (the first 4 positions do not yet have a full window). This example is illustrated in the following figure, where the first three average calculations are marked.
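A quick, illustrative way to see those mechanics is to apply rolling to a tiny Series of ten values:

import pandas as pd

# Ten data points with a window of five: the first four positions have no full
# window yet (NaN), which leaves six computed averages.
demo = pd.Series(range(10))
print(demo.rolling(5).mean())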

How does this fit with forecasting? We can use historic data (the last 5 data points in the above example) to predict the future! Every new average from this rolling function gives a trend for what is coming next. Let’s make this concrete with an example.

First we create a new data frame that includes our metadata dates and the value we want to predict, ipForwDatagrams:

df = pd.DataFrame(data=zip(dates, network_data["ipForwDatagrams"]), columns=['Date', 'ipForwDatagrams'])
df.head()

Date ipForwDatagrams
0 2022-03-01 59244345
1 2022-03-02 59387381
2 2022-03-03 59498140
3 2022-03-04 59581345
4 2022-03-05 59664453

Then we use the rolling average. We apply it to the IP forwarded datagrams column, ipForwDatagrams, to calculate a rolling average over a window of 1,000 data points. This way we use historic data to create a trend line, a.k.a. forecasting!

df["rolling"] = df["ipForwDatagrams"].rolling(1000, center=True).mean()

Finally, we will visualize the predictions:

# Plotting the effect of a rolling average
import matplotlib.pyplot as plt
plt.plot(df['Date'], df['ipForwDatagrams'])
plt.plot(df['Date'], df['rolling'])
plt.title('Data With Rolling Average')

plt.show()

The orange line represents our moving average prediction, and it seems to be doing pretty well. You may notice that it does not follow the spikes in the data; it is much smoother. If you experiment with the granularity, i.e., a rolling window smaller than 1,000, you will see an improvement in predictions at the cost of additional computation.

Linear Regression

Linear regression fits a linear function to a set of random data points. This is achieved by searching over possible values of the variables a and b that define the line function y = a * x + b. The line that minimizes the distance from the dataset’s data points is the result of the linear regression model.

Let’s see if we can calculate a linear regression predictor for our SNMP dataset. In this case, we will not use time series data; we will consider the relationship, and as a consequence the predictability, of a variable using another. The variable that we consider as a known, or historic data, is the TCP input segments tcpInSegs. The variable that we are aiming to predict is the output segments, tcpOutSegs. Linear Regression is implemented by linear_model in the sklearn library, a powerful tool for data science modeling. We set the x var to tcpInSegs column from the SNMP dataset and the y var to tcpOutSegs. Our goal is to define the function y = a * x + b, specifically a and b constants, to determine a line that predicts the trend of output segments when we know the input segments:

from sklearn import linear_model
import matplotlib.pyplot as plt

x = pd.DataFrame(network_data['tcpInSegs'])
y = pd.DataFrame(network_data['tcpOutSegs'])
regr = linear_model.LinearRegression()
regr.fit(x, y)

The most important part of the above code is the use of the linear_model.LinearRegression() function, which does its magic behind the scenes and returns a regr object. This object gives us a fitted function of the a and b variables that can be used to forecast the number of TCP out segments based on the number of TCP in segments. If you do not believe me, here is the plotted result:

plt.scatter(x, y,  color='black')
plt.plot(x, regr.predict(x), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()

The blue line indicates our prediction, and if you ask me, it is pretty good. Now how about trying to predict IP packets received (ipInReceives) from ICMP input messages (icmpInMsgs)? Would we achieve equally good forecasting? Let’s just change the x and y variables and find out:

x = pd.DataFrame(network_data['icmpInMsgs'])
y = pd.DataFrame(network_data['ipInReceives'])
regr = linear_model.LinearRegression()
regr.fit(x, y)

We use the same code as above to generate the plot. This one does not look nearly as accurate. However, the blue line indicates the decreasing trend of IP packets received based on ICMP inputs. That is a good example of where another forecasting algorithm, such as dynamic regression or a nonlinear model, could be used.

Recap

We have reviewed two of the most popular forecasting methodologies, moving averages and linear regression, with Python Pandas. We have noticed the benefits and accuracy of forecasting as well as its weaknesses.

This concludes the Pandas series for Network Automation Engineers. I hope you have enjoyed this as much as I have and have added useful tools to your ever-growing toolbox.

-Xenia
