Out-of-band Management ports on NetApp – e0M vs SP vs Serial (and BMC!)

One of the things I’ve seen new (and sometimes existing) customers to NetApp be most confused about is the various ways of connecting to the system for management.

Over the years, there have been a few different out-of-band management systems (RLM and BMC are the older systems, SP on the newer ones). This post focuses on systems with the Service Processor, or SP, as used in the FAS2200, FAS2500, FAS3100, FAS3200, FAS6100, FAS6200 and FAS8000 families. Let’s start by going through the physical ports on the back of the controller. Where the ports are varies slightly by model, but the icons are consistent.

[Image: NetApp management ports]

A common question is “ok, so the wrench port is e0M, why doesn’t it just say that?”. The short answer is that it isn’t – although you could be forgiven for making that guess. Even NetApp’s label set for Clustered ONTAP includes an e0M cable label, despite their systems not having a specific port labelled e0M. Let’s look at how the ports connect up, from the point of view of an administrator:

[Image: NetApp management block diagram]

From this simplified block diagram, you can see how they all relate. The port on the outside of the box actually connects to a switch inside the box, and that has both ONTAP’s e0M and the Service Processor’s IP interface connected to it. It’s almost literally running Ethernet on the motherboard traces (it’s actually something called RMII, not normal 802.3, but close enough). The internal switch is unmanaged, which is why you can’t do VLANs over that port. To clarify some more – the service processor is an independent CPU, with its own RAM, flash and OS running on it. It talks to ONTAP very closely, obviously, and to sensors throughout the system, but it’s separate to the main kernel running on the x86 CPU that runs ONTAP.
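If you want to see how the SP side of that internal switch is configured, you can check it from within ONTAP itself. A minimal example from the clustered ONTAP CLI (the node name is just a placeholder, and the exact output fields vary a little by release):

::> system service-processor show
::> system service-processor network show -node node1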

On 7-mode systems, e0M is just another interface in ONTAP, but in Clustered ONTAP, it can only be used for management LIFs, not data LIFs (or Cluster LIFs). On the FAS2500 and FAS8000, the wrench port, and therefore e0M, are finally 1G, but on previous systems, it’s only 100M. On 7-mode systems, you have to be careful – you don’t want it on the same subnet as any of your data service IPs, or traffic might go out through it, instead of a 1G or 10G port. To stop this, set “options interface.blocked.mgmt_data_traffic on” for all systems (running ONTAP 8.0.2 or higher), but ideally put it on a different subnet. It’s best practice to have, at the very least, a different OOB subnet to data services.
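As a rough sketch of what that looks like on a 7-mode console (the IP and netmask below are purely illustrative):

filer> options interface.blocked.mgmt_data_traffic on
filer> ifconfig e0M 192.168.100.20 netmask 255.255.255.0

Remember that ifconfig changes on 7-mode don’t survive a reboot unless they’re also reflected in /etc/rc.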

From our diagram again, if you need to do something like monitor boot/shutdown/reboot during an ONTAP upgrade, you can either connect to the Serial Console or the SP IP – the output is the same. I’ve done lots of remote upgrades this way. Once the system is up, and the SP is configured, there’s almost never a need to use the Serial Console again. The SPs don’t talk to each other, so if one node is online and the other is offline, you can’t use the online node to connect to the offline one.
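For example, to watch a node boot from somewhere comfortable, something like this works (logging in to the SP with an ONTAP admin account; the IP is a placeholder):

$ ssh admin@10.0.0.51
SP node1> system console

Ctrl-D drops you back out of the console session to the SP prompt.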

If you’re the type who likes managing your 7-mode NetApp from the command line, you would normally SSH into the e0M IP address, while for Clustered ONTAP, you would normally SSH to the Cluster Management IP. You could go from the SP to the system console, but that is limited to 9600bps over the serial connection, and if you’re looking at a lot of text, or pasting a lot of text, that can be painful. For GUI applications like OnCommand System Manager, you connect to the e0M IP on 7-mode, and the Cluster Management IP on Clustered ONTAP systems.

A final question I’ve heard is “what is that USB port for?”. Officially, for regular users, it’s unsupported. Unofficially, you can use it to charge your iPhone while it’s running in hotspot mode, or to power your Airconsole.

Could this all be made simpler? Well, there are good purposes for all of the different IPs and interfaces you might use, so I’m not 100% convinced it could be. Everything new is complex initially, but once you get a handle on it, it all makes sense. Hope this has helped you!

Edit: 2018-07-11

Since writing this article, we’ve released some new platforms, which enable the USB port while at the boot menu, have faster serial ports, and move from an SP to a BMC. They’re pretty similar, except that the BMC doesn’t tap into a serial link to the ONTAP controller – it relays the console over an internal network – and it no longer shares the wrench port with e0M.

Autosupport stopped working

I installed a Clustered ONTAP system about 4 months ago, and I’ve been working with the customer since then on a staged migration of their several hundred workloads onto it. While doing a regular check-in, I noticed that Autosupport had stopped working on three of their four nodes, despite working when I finished the initial build.
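For what it’s worth, this is roughly what I look at during those check-ins, from the cluster shell (node names are placeholders, and field names can differ slightly between releases):

::> system node autosupport show -node * -fields state,transport,mail-hosts
::> system node autosupport history show -node node1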

Some checking of logs, and within the organization, showed that it had stopped working at the same time that they had changed their mail server IPs. Easy, you think. Maybe I put the IPs into the autosupport setup? Checked that – nope, it points at the hostname. Well, maybe I put in an /etc/hosts entry? (system services hosts show) – nope, wasn’t that either. Checked that autosupport’s destinations were configured the same on all four nodes – and they were.

Maybe there’s a firewall issue? Ping from the node management LIF to the SMTP servers all worked. Maybe it’s a specific SMTP block on the firewall? Used systemshell (in diag mode) and tcp_client (note: don’t try this at home..) – that all worked too. I got their firewall and Exchange admins to check their logs for the node management LIFs trying to make connections, and there were no attempts other than my tcp_client ones. Finally, I ran pktt on all interfaces, filtering on the mailhost IPs, and found no attempts to send out of e0M (home of the node-mgmt LIF) – only out of one of the data LIFs. NetApp KB 3012724 talks about which LIFs autosupport uses, and has this to say on the topic:

Clustered Data ONTAP 8.2.x:

  • AutoSupport is delivered from the node-mgmt LIF per node.

Looking through the autosupport history, the attempts were failing, and the last error recorded was “FTP: weird server reply”. Uhh.. the transport can only be http, https or smtp. Why is it mentioning FTP?

NetApp KB 201727 shows how to access the debug logs for autosupport. I did that and saw the error message “421 Service Unavailable”. Remember the FTP error? Well, that, dear readers, is because your NetApp, at its heart, is a big FreeBSD box, and it uses curl to send autosupport emails. When curl gets a “421 Service Unavailable” response from the mail server, it reports it as a weird server reply – and that error string carries an “FTP:” prefix for historical reasons, even when the transfer isn’t FTP at all.

Looking at the pktt logs closer, it’s because the autosupport email is going out of one of the data LIFs for an SVM on the host. Why would you suddenly decide to do that?! Well, let’s look at KB 3012724 again..

By default, routes for the node mgmt LIF have a lower (more preferred) metric than routes of data LIFs. However, the metric is used as a tie-breaker. The more-specific route to the destination will always be picked regardless of the metric.

..

Case 3 – The node-mgmt LIF and data LIFs on different subnets, destination is on the same subnet as the data LIFs. The implicit subnet route of the data LIFs (which isn’t seen in ngsh) will be the most specific route to the destination, and will therefore be the selected route. A data LIF will be used.

So, despite the earlier assurance that autosupport uses the node-mgmt LIF, the actual story is somewhat more complicated: it uses the node-mgmt LIF, unless it likes another one better. As for why only one of the four nodes still worked? That node didn’t have any SVM LIFs on the same subnet as the mail servers, so it didn’t try using them to send the ASUP email.

So what do you do? You can either create individual host routes (/32) in the routing group for the node admin SVMs, or create a subnet route in there to prevent it occurring if the IPs change again. I also found (as, it seems, did another poster on NetApp Communities) that setting the metric lower didn’t solve the problem – you had to set the metric to “1”.
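As a rough sketch of the host-route version on clustered ONTAP 8.2 (the vserver name, routing group name and addresses are all placeholders – check yours with the show command first):

::> network routing-groups show -vserver node1
::> network routing-groups route create -vserver node1 -routing-group n10.10.10.0/24 -destination 192.0.2.25/32 -gateway 10.10.10.1 -metric 1
::> network routing-groups route show -vserver node1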

Going forward, part of my system installation will always include a route for the mail server in the routing group that the node management SVM uses.

Go away and I will replace you with a very small shell script..

I did a SAN to NAS migration for a client recently, and had to unmount all of their datastores.

This little one-liner for the ESXi shell lists all SAN volumes, then generates a script to get rid of them..

# esxcfg-scsidevs -m | sed -e 's/:1 //g' | awk '{ printf("esxcli storage filesystem unmount -l %s;\nsleep 1;\nesxcli storage core device set --state=off -d %s;\n",$4,$1);}' > /tmp/unmountluns.sh

Review the output for sanity, and run.
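Assuming the generated script looks sane, running it is just a matter of feeding it back to the shell:

# sh /tmp/unmountluns.sh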

Setting clock from CLI is not allowed in this VDC

If you’re trying to set the time on a brand new, out-of-the-box Cisco Nexus 5500 and you get the message “Setting clock from CLI is not allowed in this VDC.”, it’s because the clock protocol is set to ntp, even though you didn’t configure NTP. Go into config and type “clock protocol none”, and then it will let you set the time.

Then, when you’ve finished the config, set up NTP!
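As a quick sketch of the whole sequence on NX-OS (the timestamp and NTP server address are examples only, and the exact clock set format can vary slightly between releases):

switch# configure terminal
switch(config)# clock protocol none
switch(config)# exit
switch# clock set 14:30:00 11 July 2018
switch# configure terminal
switch(config)# ntp server 192.0.2.10 use-vrf management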

And while you’re at it, this page from Cisco is awesome for troubleshooting VPC

Sometimes you can’t get to here from there.

Sometimes things look impossible. Like screwing in a screw with a handle directly above it (seriously, if I ever meet the person who designed this..)

[Photo: the screw in question, captioned “If I ever meet the person responsible for this, I’m punching them in the face”]

But there’s always a way around things. In this case, I used my fingers to tighten it into place.

Or this screw, which was cross-threaded and wouldn’t come out. It didn’t stand up to a pair of channel locks. Sometimes you have to take the hard way.

And sometimes? What you don’t know is a blessing. I don’t have any photos of this one, unfortunately. But let’s say there was a two-storey building, and on the second storey was a server room with two 50U racks inside it. You would think – ok, I need to add another one, and these two obviously got in here somehow. You enlist some burly gentlemen to help you move the rack up the stairs, and find problem 1 – it doesn’t fit up the back stairs. They take it back down the back stairs, and around to the front. Problem 2 – it doesn’t fit through the front door standing up, so you lay it down and move it into the corridor in front of the server room. Problem 3 – there are fire sprinklers in the middle of the ceiling, so you can’t stand it back up.

I scratched my head for a while, and then started removing bits of the drop ceiling until I found a section big enough to stand it up without any sprinkler pipes under it. I stood it up, then went to move it into the server room. Same issue – but I found another part of the drop ceiling without pipes to angle it down again and fit it through the lower server room door. And with that, it was done and in place. I went back to the company’s office and asked them how they got the original racks in there.

The older and wiser sysadmin, who I hadn’t been working with on this project, answered sagely: “we built the room around them”. Sometimes not knowing is the solution to your problem. I doubt I would have even tried if I knew that…