10fb does not support flow control autoneg - VMware ESXi 6.5

I recently used the HPE custom ISO image downloaded from the VMware website for my HPE Gen9 server, on which I wanted to install VMware ESXi 6.5. Everything went as expected and I was able to get it installed and running. This server had a dual-port 10Gb Intel network adapter added on as a card, as you can see below.

[Image: 1.JPG]

After a while, I started getting loads of messages in the VMkernel log at /var/log/vmkernel.log, and the message read:

10fb does not support flow control autoneg

After looking around, I figured this might be an issue with the native driver (ixgben) for this card, which was at version 1.4.1.

[Image: 2.JPG]

I then proceeded to download a different version of the driver for this card to see if that would fix the problem. I found ixgbe version 4.5.3 on the VMware website and downloaded that. Click on this LINK.

This version is compatible with ESXi 6.5, so don't worry. After it was downloaded, I copied it to a local datastore on the host, SSH'ed in as root, and installed the VIB using the command below.

esxcli software vib install -v {Complete path to the VIB file}
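For example, if the file were copied to a datastore named datastore1, the command would look something like the line below (the datastore name and VIB file name here are made-up placeholders, so substitute your own; note that -v requires the full absolute path):

esxcli software vib install -v /vmfs/volumes/datastore1/net-ixgbe-4.5.3.vib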

Reboot the host.

After the host comes back up, enable the new driver and disable the old one:

esxcli system module set -e=true -m=ixgbe     (enables the new ixgbe driver)
esxcli system module set -e=false -m=ixgben   (disables the old ixgben driver)
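If you want to confirm the module states before rebooting, esxcli can list each module along with its loaded and enabled flags, and the ESXi shell includes grep for filtering:

esxcli system module list | grep ixgbe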

After this has been completed, you can reboot the host and then check the vmkernel log again using the command below to see if the alert comes back, and hopefully it should not!

tail -f /var/log/vmkernel.log

You can check the driver on your vmnic adapter as well by running the command below, and you should get output like the screenshot, showing the updated driver version.

esxcli network nic get -n vmnic6

[Image: 3.JPG]
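As a side note, if you want to see the driver in use for every adapter at once instead of querying one vmnic at a time, the NIC list command shows a Driver column for all of them:

esxcli network nic list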

-Ali Hassan

RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

So I was trying to set up an NFS server on my Linux box running CentOS 7, and after successfully installing the NFS utilities and exporting a share called /nfsshare, I was trying to mount it on my client box. However, when I ran the command showmount -e 10.100.76.4, which should list the exports on the NFS server, I got the error message below.

clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

Errno 113 (No route to host) means the client could not reach the portmapper at all, which pointed to a firewall issue rather than an NFS misconfiguration. So I had to open some services on my NFS server's firewall, and after that everything was fixed. On my NFS server I ran:

firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
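To double-check that the rules took effect, you can list the active services on the NFS server and then retry from the client; the /mnt mount point below is just an example path:

firewall-cmd --list-services                 (on the NFS server)
showmount -e 10.100.76.4                     (on the client)
mount -t nfs 10.100.76.4:/nfsshare /mnt      (on the client)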


Incorrect disk usage information on a Windows Server

I have a Windows Server 2008 machine, and I ran into a situation where one of my disks was showing incorrect used-space information, and that was driving me crazy!

As you can see below, my drive was showing 81.6 GB utilized, but in reality I was only using 8 GB of it. So where was the extra usage coming from?!

[Image: obackups.JPG]

I checked my shadow copy settings because sometimes that is the culprit, but I had none configured.

[Image: shadow.JPG]

I also ran tools like SpaceMonger, which I use to identify space issues, and that did not give me anything. After a while, one of my team members pointed me to check the hidden and system files, and when I checked the size of my hidden Recycle Bin, I found it was eating up all that extra space! There is also a quick command-line check after the screenshot below.

[Image: recycle.JPG]
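If you prefer the command line over a GUI tool, you can size the hidden recycle bin folder from an elevated command prompt; the drive letter E: below is just a placeholder, so substitute your own:

dir /a /s E:\$Recycle.Bin

The /a switch includes hidden and system files and /s recurses into subfolders, so the totals at the end show how much space the folder is really consuming.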

How to remove conflicting VIBs during an ESXi host upgrade

So you want to upgrade your ESXi version from 5.5 to 6.5 using the host upgrade option in the vSphere client, which is what I was trying to perform. I have a Cisco UCS server, so I downloaded the custom Cisco image from the VMware website to make life a little easier during the upgrade.

So you create a new baseline for your Cisco UCS image and attach it to the host (I am assuming you know how to create a baseline and attach it to a host), and within Update Manager you click the "Scan for Updates" button and select "Upgrades".

[Image: 4.JPG]

You will notice, after the scan completes, that the status details in the summary box below say there are a few conflicting VIBs that need to be removed in order for the upgrade to proceed.

[Image: 1.JPG]

Here is what you do to remove those VIBs so that you can proceed with the upgrade (there is also a dry-run tip after the steps).

  • Enable SSH on the desired host from the security profile of the host
  • Use PuTTY and SSH to the host using the root credentials
  • Once at the console, type the command --> esxcli software vib list
[Image: 2.JPG]
  • Match the conflicting VIB name that you encountered earlier to this list and note down the name (in my case it was net-mst)
  • Then, at the SSH console, type the command --> esxcli software vib remove -n=net-mst
[Image: 3.JPG]
  • Once the VIB has been removed, it will prompt you for a reboot, so go ahead and reboot your host
  • Once the host is back up and connected in your vCenter, go into Update Manager and scan for updates again; this time, you should see the image as Non-Compliant. Now you can proceed with remediation of the host to apply this image.
[Image: 5.JPG]
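One extra tip: if you want to be cautious, esxcli supports a dry run for VIB removal that reports what it would do without committing the change, and you can grep the VIB list instead of scanning it by eye:

esxcli software vib list | grep net-mst
esxcli software vib remove --dry-run -n=net-mst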

Can I cancel a VMware snapshot stuck at 99%?!

This is probably the trickiest decision to make: you have clicked the action to delete a snapshot on a virtual machine, and it has been stuck at 99% for a very long time. You want to cancel the task, but you are not sure if it's safe?!

I have had to do this a couple of times and, thankfully, everything went back to normal. So the situation is: you have a snapshot stuck at 99%, you gather up the courage to cancel it, it gets stuck again, and you are just waiting! First, I recommend reading the blog below and doing a quick check on whether you need to perform a hard kill at all. (Read this.) It's a pretty well-written blog, and in case you do want to go ahead, which I did on a couple of occasions, just enable SSH on your host and type in the command:

services.sh restart

(Please note that this will not affect your running virtual machines and is completely safe. You might see the host get disconnected from vCenter for a bit, but it will come back, so don't worry.)
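If you would rather not bounce every service on the host, a lighter-weight variant that many admins use is to restart just the management agents, which also clears stuck tasks:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart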

Once done, the snapshot task will be cancelled, and then you may be prompted to consolidate your disks, which can be slow as well, but please don't cancel the consolidation at this point.
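While you wait, you can keep an eye on what the host is actually doing from the same SSH session; vim-cmd can list your VMs, show a VM's snapshot tree, and print its summary (the VM ID 42 below is just a placeholder taken from the first command's output):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.get 42
vim-cmd vmsvc/get.summary 42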

I know it's not recommended to cancel a snapshot, but sometimes you have to take a leap of faith!