Configuring iSCSI Target using vSAN 6.5

Introduction to iSCSI Target using vSAN 6.5

With the VMware vSphere 6.5 release, vSAN 6.5 extended workload support to physical servers through the iSCSI Target Service. iSCSI targets on vSAN are managed the same way as other objects, with Storage Policy Based Management. vSAN functionality such as deduplication, compression, mirroring, and erasure coding (RAID-1, RAID-5, RAID-6) can be used with the iSCSI target service, and CHAP and Mutual CHAP authentication are supported. By leveraging vSAN, physical servers and clustered applications gain its simplicity, centralized management and monitoring, and high availability. Each LUN is represented by an individual .vmdk file backed by a vSAN object.

There are no floating VIPs; iSCSI uses VMkernel ports, and all hosts should have the same vmk NICs configured for iSCSI. iSCSI works in an active/passive architecture, with a target being active on a single host at a time. An initial connection can be made to any host, and iSCSI redirects are then used to send client traffic to the host and VMkernel port that owns the target.

Once we have vSAN configured correctly, the next step is to enable the iSCSI Target Service.

Enable the iSCSI Target Service

  1. Log in to the VMware vSphere Web Client and enable the iSCSI Target Service.


Now that we have the iSCSI Target Service successfully enabled, the next step is to create an iSCSI target.
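
If you prefer to double-check from the command line, vSAN 6.5 also adds an esxcli vsan iscsi namespace on the hosts. The sketch below is a small Python wrapper (usable from the ESXi shell's Python interpreter) that prints the service status and the configured targets; the exact sub-command names are assumptions based on that namespace, so confirm them with esxcli vsan iscsi --help on your build.

```python
# Hedged sketch: verify the vSAN iSCSI Target Service from an ESXi host shell.
# Assumption: the `esxcli vsan iscsi` namespace introduced with vSAN 6.5;
# confirm the exact sub-commands with `esxcli vsan iscsi --help`.
import subprocess

def run_esxcli(*args):
    """Run an esxcli command and return its stdout as text."""
    return subprocess.check_output(["esxcli"] + list(args),
                                   universal_newlines=True)

if __name__ == "__main__":
    # Is the iSCSI target service enabled for this host's vSAN cluster?
    print(run_esxcli("vsan", "iscsi", "status", "get"))
    # Which targets currently exist?
    print(run_esxcli("vsan", "iscsi", "target", "list"))
```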


As we have the iSCSI target ready, the iSCSI initiators on the physical workloads can be configured to access the vSAN iSCSI target.
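
On a Windows physical server, for example, the built-in iscsicli utility (or the equivalent PowerShell cmdlets) can point the Microsoft iSCSI initiator at any vSAN host's iSCSI VMkernel address, and the redirect mechanism described earlier moves the session to the target's owner. The sketch below is a hedged Python wrapper around iscsicli; the portal IP and target IQN are hypothetical placeholders.

```python
# Hedged sketch: connect a Windows iSCSI initiator to a vSAN iSCSI target
# using the built-in iscsicli utility. The portal address and target IQN
# below are placeholders for illustration only.
import subprocess

VSAN_PORTAL_IP = "192.168.109.50"                          # any vSAN host's iSCSI VMkernel IP (example)
TARGET_IQN = "iqn.1998-01.com.vmware:example-vsan-target"  # hypothetical target IQN

def iscsicli(*args):
    """Run an iscsicli command and return its output."""
    return subprocess.check_output(["iscsicli"] + list(args),
                                   universal_newlines=True)

if __name__ == "__main__":
    # Add the vSAN host as a discovery portal, list what was discovered,
    # then log in to the target (vSAN redirects the session to the owning host).
    print(iscsicli("QAddTargetPortal", VSAN_PORTAL_IP))
    print(iscsicli("ListTargets"))
    print(iscsicli("QLoginTarget", TARGET_IQN))
```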


vSAN has always been known for simplicity, and that is how simply you can configure an iSCSI target in vSAN 6.5 and present it as an iSCSI LUN to a Windows Server 2012 machine.

VMware vSAN Network Design Considerations

VMware Virtual SAN is a distributed shared storage solution that enables rapid provisioning of storage from within VMware vCenter. Because Virtual SAN is distributed shared storage, it depends heavily on a correctly configured network for virtual machine I/O and for communication between Virtual SAN cluster nodes. Since the majority of virtual machine I/O travels over the network due to the distributed storage architecture, a high-performing and highly available network configuration is critical to a successful Virtual SAN deployment.

In this post we will cover a few important points that need to be considered from a network perspective before a VMware vSAN deployment.

Supported Network Interface Cards

In a VMware Virtual SAN hybrid configuration, Virtual SAN supports both 1 Gb and 10 Gb network interface cards. If a 1 Gb NIC is installed on the ESXi host, VMware requires that this NIC be dedicated solely to Virtual SAN traffic. If a 10 Gb NIC is used, it can be shared with other network traffic types; it is advisable to implement QoS using Network I/O Control to prevent one traffic type from claiming all the bandwidth. Because of the potential for increased network traffic between hosts to achieve higher throughput, VMware supports only 10 Gb network interface cards for Virtual SAN all-flash configurations, and these can be shared with other traffic types.

Teaming Network Interface Cards

Virtual SAN supports Route based on IP hash load balancing, but cannot guarantee a performance improvement for all configurations. IP hash provides load balancing when Virtual SAN traffic is one of many traffic types sharing the uplinks; by design, Virtual SAN traffic itself is not load balanced across teamed network interface cards. NIC teaming for VMware Virtual SAN traffic is therefore primarily a way of making the Virtual SAN network highly available, where a standby adapter takes over communication if the primary adapter fails.

Jumbo Frame Support

VMware Virtual SAN supports jumbo frames. Although jumbo frames can reduce CPU utilization and improve throughput, VMware recommends configuring them only if the network infrastructure already supports them end to end. Because vSphere already uses TCP segmentation offload (TSO) and large receive offload (LRO), jumbo frames provide limited additional CPU and performance benefits for Virtual SAN. The biggest gains from jumbo frames are typically seen in all-flash configurations.
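
If you want to confirm that jumbo frames actually work end to end before enabling them for Virtual SAN, the quickest check is a do-not-fragment vmkping between hosts. The sketch below is a minimal Python wrapper (for the Python interpreter available on an ESXi host) around that test; the peer address and vmk interface are example values, and the flags follow vmkping's ping-style options, so confirm them with vmkping -h on your build.

```python
# Minimal sketch: validate end-to-end jumbo frame support between two vSAN
# hosts by sending do-not-fragment pings from an ESXi shell with vmkping.
# 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000-byte frames.
# The peer address and vmk interface below are example values.
import subprocess

PEER_VSAN_VMK_IP = "172.16.10.12"   # vSAN VMkernel IP of another host (example)
VSAN_VMK_INTERFACE = "vmk2"         # local vSAN VMkernel port (example)

def jumbo_frames_ok(peer_ip, interface):
    """Return True if 9000-byte frames pass without fragmentation."""
    cmd = ["vmkping", "-I", interface, "-d", "-s", "8972", "-c", "3", peer_ip]
    return subprocess.call(cmd) == 0

if __name__ == "__main__":
    if jumbo_frames_ok(PEER_VSAN_VMK_IP, VSAN_VMK_INTERFACE):
        print("Jumbo frames work end to end on the vSAN network.")
    else:
        print("Fragmentation or drops detected; check MTU on vmk ports, vSwitches and physical switches.")
```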

Multicast Requirement

Multicast forwarding is a one-to-many or many-to-many distribution of network traffic. Rather than using the network address of the intended recipient for its destination address, multicast uses a special destination address to logically identify a group of receivers.

One of the requirements for Virtual SAN is to allow multicast traffic on the Virtual SAN network between the ESXi hosts participating in the Virtual SAN cluster. Multicast is used for discovering ESXi hosts and keeping track of changes within the Virtual SAN cluster. Before deploying VMware Virtual SAN, it is important to test the multicast performance of the switches being used; ensure a high-quality enterprise switch is used for Virtual SAN multicast traffic. The Virtual SAN health service can also be leveraged to test multicast performance.
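
Beyond the health service, a quick switch-level sanity check can be done with a small multicast listener. The sketch below is a standard-library Python example you can run on a test machine in the vSAN VLAN; the group and port are the commonly documented vSAN defaults and are assumptions for your environment, so verify the actual addresses in use before drawing conclusions.

```python
# Minimal sketch: a multicast listener you can run on a test machine in the
# vSAN VLAN to confirm the switches actually forward multicast traffic.
# The group/port default to vSAN's documented master group and are assumptions;
# verify the addresses in use in your environment.
import socket
import struct

GROUP = "224.1.2.3"   # assumed vSAN master multicast group
PORT = 12345          # assumed default port

def listen(group, port, timeout=30):
    """Join the multicast group and report any datagram seen within the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    try:
        data, sender = sock.recvfrom(4096)
        print("Received %d bytes from %s" % (len(data), sender[0]))
    except socket.timeout:
        print("No multicast traffic seen; check IGMP snooping/querier configuration.")
    finally:
        sock.close()

if __name__ == "__main__":
    listen(GROUP, PORT)
```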

Summary of network design considerations

  • Virtual SAN hybrid configurations support 1 Gb and 10 Gb networks.
  • Virtual SAN all-flash configurations support only 10 Gb networks.
  • Consider implementing QoS for Virtual SAN traffic using NIOC.
  • Consider jumbo frames for Virtual SAN traffic if they are already configured in the network infrastructure.
  • Consider NIC teaming for availability/redundancy of Virtual SAN traffic.
  • Multicast must be configured and functional between all hosts.

I hope this is informative for you. Thanks for reading, be social and share it on social media if you feel it is worth sharing. Happy Learning … 🙂

Configuring Windows Server 2016 as iSCSI Server

In this post I’m going to show the steps to install and configure an iSCSI server on Windows Server 2016. iSCSI (Internet Small Computer System Interface) allows SCSI commands to be sent over a LAN or WAN. iSCSI devices are disks, tapes, CDs, and other storage devices on another networked computer that you can connect to. When accessing storage devices over iSCSI, the client is referred to as the iSCSI initiator and the storage device as the iSCSI target.

  Step 1: Configuring Windows Server 2016 as an iSCSI Server

The first thing required to configure Windows Server 2016 as an iSCSI server is to install the iSCSI Target Server role. Open the Add Roles and Features Wizard and choose iSCSI Target Server from the list of roles under File and Storage Services. Click Install to proceed.

Do not choose any option from the feature list. Click Next on the remaining screens to finish the iSCSI Target Server role installation.

After successful installation of the iSCSI Target Server role, open Server Manager and click File and Storage Services.


Click on iSCSI. To share storage, the first thing to do is create an iSCSI LUN. An iSCSI virtual disk is backed by a VHD. Click “To create an iSCSI virtual disk, start the New iSCSI Virtual Disk Wizard”.

Select the server and the volume, then click Next.

Specify the iSCSI virtual disk name and click Next.


Provide the size of the virtual disk and choose between Fixed size, Dynamically expanding, and Differencing, depending on your organization’s requirements. Choose New iSCSI Target and specify the target name. Next, we need to choose the access servers that will be accessing this iSCSI server. Click Add.

Before adding a connecting iSCSI initiator to the list, configure the iSCSI initiators to connect to this iSCSI server.

Click Add iSCSI initiator. You can see all the configured iSCSI initiators connecting to this iSCSI server. Click OK to proceed.

Click Add to add more iSCSI initiators to the list of access servers. As we don’t have CHAP authentication configured, click Next to proceed. Review the settings and click Create to finish the setup.
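
The same workflow the wizard walks through can also be scripted. The sketch below is a hedged Python wrapper that drives the Windows Server iSCSI Target PowerShell cmdlets (New-IscsiVirtualDisk, New-IscsiServerTarget, Add-IscsiVirtualDiskTargetMapping); the path, size, target name, and initiator IQN are placeholder values for illustration only.

```python
# Hedged sketch: script the New iSCSI Virtual Disk / Target workflow by calling
# the Windows Server iSCSI Target cmdlets from Python. All names, paths and the
# initiator IQN are placeholders; adjust them to your environment.
import subprocess

VHDX_PATH = r"C:\iSCSIVirtualDisks\lun01.vhdx"                     # example path
TARGET_NAME = "FileCluster-Target"                                 # example target name
INITIATOR_IQN = "iqn.1991-05.com.microsoft:client1.example.local"  # example initiator

def powershell(command):
    """Run a PowerShell command string and return its output."""
    return subprocess.check_output(
        ["powershell.exe", "-NoProfile", "-Command", command],
        universal_newlines=True)

if __name__ == "__main__":
    # 1. Create the VHDX-backed iSCSI virtual disk (the LUN).
    powershell('New-IscsiVirtualDisk -Path "%s" -SizeBytes 20GB' % VHDX_PATH)
    # 2. Create the target and allow the initiator by IQN (no CHAP, as above).
    powershell('New-IscsiServerTarget -TargetName "%s" -InitiatorIds "IQN:%s"'
               % (TARGET_NAME, INITIATOR_IQN))
    # 3. Map the virtual disk to the target so the initiator sees it as a LUN.
    powershell('Add-IscsiVirtualDiskTargetMapping -TargetName "%s" -Path "%s"'
               % (TARGET_NAME, VHDX_PATH))
```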


In this post, we covered the steps to configure Windows Server 2016 as an iSCSI server. I hope this is informative for you. Thanks for reading! Be social and share it on social media if you feel it is worth sharing.

Health and Performance Monitoring Service in VSAN 6.2

As VSAN administrators, we generally use VSAN Observer while troubleshooting VSAN performance issues. VSAN Observer was first introduced in 5.5 U1 GA and is part of the Ruby vSphere Console (RVC) tool, an interactive command-line shell for vSphere management. VSAN Observer allows you to dig deep into VSAN’s performance and see IOPS, latency, outstanding I/Os, congestion, and a lot more. The information is provided at different layers in the Virtual SAN stack to help troubleshoot storage performance. Even though VSAN Observer is a powerful tool, it has some drawbacks.

Drawbacks of VSAN Observer:

  • VSAN Observer only provides real-time view of the system. It does not provide any historic performance data.
  • Not integrated with vSphere Web Client
  • VSAN Observer is a complex tool
  • Impacts vCenter Server as the tool is launched via RVC on vCenter Server.

To overcome these drawbacks, VSAN 6.2 introduced the Performance Service to provide a detailed understanding of VSAN performance. The Virtual SAN performance service monitors the performance of your Virtual SAN environment and helps investigate potential problems. The performance service collects and analyzes performance statistics and displays the data in a graphical format. You can use the performance charts to manage your workload and determine the root cause of problems. You can view detailed performance statistics for the cluster, and for each host, disk group, and disk in the Virtual SAN cluster. You can also view performance charts for virtual machines and virtual disks.

The VSAN performance service is disabled by default. Turn on the Virtual SAN performance service to monitor the performance of Virtual SAN clusters, hosts, disks, and VMs. When you turn on the performance service, Virtual SAN places a Stats database object in the datastore to collect statistical data. The Stats database is a namespace object in the cluster’s Virtual SAN datastore. Performance and health monitoring of VSAN components can be performed on the cluster as well as on individual hosts.

Procedure to Enable VSAN Performance Service

  • Log in to the vSphere Web Client and navigate to the Virtual SAN cluster.
  • Click the Manage tab and click Settings.
  • Select Health and Performance, then click Edit to edit the performance service settings.

Select the Turn On Virtual SAN performance service check box. Select a storage policy for the Stats database object and click OK.

You can see that the Health and Performance service is enabled, with a status of Healthy and compliant.

Performance and health monitoring of VSAN components can be performed on the cluster as well as on individual hosts. Select one of the following views:

  • Virtual SAN – Virtual Machine Consumption: Virtual SAN displays performance charts for clients running on the cluster or host, including IOPS, throughput, latency, congestion, and outstanding I/Os.
  • Virtual SAN – Backend: Virtual SAN displays performance charts for the cluster back-end operations, including IOPS, throughput, latency, congestion, and outstanding I/Os.


Click on Ask VMware to learn more about a counter. Once you click Ask VMware, it will open a Knowledge Base article related to that counter.


This concludes the configuration of the Health and Performance service in VSAN 6.2. I hope this is informative for you. Thanks for reading! Be social and share it on social media if you feel it is worth sharing.

Deploying EMC vVNX Community Edition

Virtual VNX (vVNX) is a software stack that provides many VNX features. The vVNX Community Edition is a freely downloadable virtual storage appliance (VSA) that can be deployed onto ESXi 5.x or 6.x servers to run a software-defined unified VNX array. Once installed, you can leverage the vVNX vApp to provide storage services and apply VMware-based availability and protection tools to maintain it. It delivers unified block and file data services on general-purpose server hardware, converting the server’s internal storage into a rich, shared storage environment with advanced data services.

Environmental requirements:

  • VMware infrastructure: VMware vCenter and ESXi Server, release 5.5 or later
  • Network infrastructure: 2x 1 GbE OR 2x 10 GbE
  • Battery-backed Hardware RAID controller required (512MB NV Cache recommended)

Virtual appliance configuration options:

  • 2 vCPUs at 2GHz+ and 12 GB RAM
  • Up to 4 TB Storage

During deployment of vVNX, the deployment wizard will create three disks. Do not modify these existing disks; for capacity, you should add additional disks. Do not add any additional disks until the appliance has booted up completely for the first time. The first boot of the vVNX appliance takes a long time; it took around 35 minutes in my lab. Subsequent boots will not take as long.

Deployment process

Log in to the vSphere Web Client and choose the host or cluster on which you want to deploy the OVA.


Accept the extra configuration options and click Next.


Accept the license agreement and click NEXT.


Choose the disk format. It is recommended to use thick provisioning; as I am running the appliance in a lab, I configured the disk format as thin.


Choose the appropriate port group.


Provide the management interface IP Address.


Select Power on the VM and click Finish. Do not add any disks until the appliance has booted up completely once.


Log in to EMC Unisphere by accessing the management IP address in a web browser. Log in using the default username/password, i.e. admin / Password123#.


Once logged in, you will see the configuration wizard for post-deployment configuration.


Accept the License agreement and click on NEXT.


Change the admin password.


The wizard will give you the System UUID, which is required for registering the product.


Log in to the EMC portal to download the license file. Provide the System UUID to generate the license file.


Download the license file.


Import the license file in the appliance.


Choose the license file and click Finish.


Click Next after importing the license to the appliance.


Provide the DNS Server IP address.


Provide the NTP Server IP Address and click Next.


Now it is required to create a storage pool. I have added an additional disk to the appliance. Click on Create Storage Pools.


Give a name and description for the storage pool.

Click on the highlighted icon to choose whether you want to use the storage tier for capacity or performance.

Click Next.

Define the capability profile. This is required if you want to use the storage tier for VMware VVols-based storage provisioning.


Add the additional Tags if needed.


Click Finish to create the storage pool.
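
Once the pool is created, you can also confirm it outside the GUI. The sketch below is a hedged Python example against the Unity-family REST API exposed by the vVNX management server; the endpoint and field names are assumptions based on that API family, so verify them against the REST API reference for your appliance version.

```python
# Hedged sketch: list storage pools on the vVNX appliance over its REST API.
# Endpoint and field names are assumptions from the Unity-family REST API;
# replace the management address and admin password with your own values.
import requests

MGMT_URL = "https://<vvnx-mgmt-ip>"          # management address of the appliance (placeholder)
AUTH = ("admin", "<admin-password>")         # the password set in the initial wizard (placeholder)

def list_pools():
    """Return (name, total bytes, free bytes) for each configured pool."""
    resp = requests.get(
        MGMT_URL + "/api/types/pool/instances",
        params={"fields": "name,sizeTotal,sizeFree"},
        headers={"X-EMC-REST-CLIENT": "true"},
        auth=AUTH,
        verify=False)                         # lab appliance uses a self-signed certificate
    resp.raise_for_status()
    return [(e["content"]["name"], e["content"]["sizeTotal"], e["content"]["sizeFree"])
            for e in resp.json()["entries"]]

if __name__ == "__main__":
    for name, total, free in list_pools():
        print("%s: %.1f GB total, %.1f GB free" % (name, total / 1e9, free / 1e9))
```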


Next, you can configure an iSCSI network interface to make the device accessible to iSCSI clients. Provide the networking details for the iSCSI interface.


Next, we can configure the appliance as a NAS server. Click on the highlighted icon to configure the appliance as a NAS server.


Type in the server name and choose the storage pool to be made available to NAS clients.


Choose the interface and provide the network details for the interface.

Choose appropriate sharing protocols.


Configure the directory service if needed.

Enable DNS for the NAS server.

Click Finish to configure NAS Server.


Click Next to finish the configuration wizard.


In this post we covered the process to deploy the vVNX appliance and configure it as an iSCSI and NAS server. I hope this is informative for you. Thanks for reading! Be social and share it on social media if you feel it is worth sharing.

Configuring SANsymphony-V10 Storage

While working on preparing a lab for Site Recovery Manager, I was looking for a storage solution supported by VMware Site Recovery Manager. SANsymphony-V10 is the 10th-generation software-defined storage solution from DataCore for centralized SANs and cloud storage. The SANsymphony-V software runs on standard x86 servers, providing one set of common storage services across all storage devices.

DataCore’s SANsymphony-V offers seamless integration with:

  • VMware vSphere® 5
  • VMware vSphere® Storage APIs for Array Integration (VAAI)
  • VMware vCenter™ Server
  • VMware vCenter Site Recovery Manager™ (SRM)

http://www.datacore.com/products/technical-information/sansymphony-v-prerequisites.

Lab Environment

  • SANSymphony Server – 192.168.109.235
  • vCenter Server – 192.168.109.200
  • ESX Host – 192.168.109.7

In this article I will be configuring a virtual disk in SANsymphony-V10, which can be used to create a VMFS datastore or serve as a LUN for an RDM on an ESXi host.

Log in to the SANsymphony server using the management console.

Click on the DataCore Server panel.


Double-click on the SANsymphony server. Under Physical Disks you can see the list of all physical disks attached to the server.


Click on Disk Pools and click on Create Disk Pool to create a new disk pool.


Disk pools are where physical storage devices, from JBOD enclosures to intelligent storage arrays, are centrally pooled and managed. Assign any name to the new disk pool and select the physical disks to be part of the disk pool. Click Create to create the disk pool.


Next, create a virtual disk using the new disk pool.


A virtual disk is a storage resource made to look and behave like a complete disk volume when made available, or “served”, to a host. The host discovers a virtual disk in the same manner as locally attached physical disks, and it can be partitioned, assigned drive letters, and used like any standard storage. Virtual disks can be created in logical sizes up to a maximum of 1 petabyte (PB), although the amount of space allocated from the pool reflects only the amount of physical space that has actually been used by the host.


Choose the Disk Pool Name and the DataCore Server.


Click Finish to create the virtual disk.


Click on Hosts to register an individual ESXi host or the vCenter Server. If you register VMware vCenter, it will show all the hosts connected to vCenter.


The next step is to configure the virtual disk to be served to the ESXi hosts.


Right-click on the virtual disk and click Serve to Hosts.


Select the host and click on Next.


Click on Next.


Click on Finish to complete the configuration.


SANsymphony-V software may be installed on hosts (such as application servers, hypervisors, or virtual machines) to bring advanced SAN features closer to running applications. A hypervisor host should be identified for each virtual machine running SANsymphony-V software. This hypervisor host setting is intended for use with Virtual SAN configurations and allows virtual machines running SANsymphony-V software to create virtual disks from local storage resources and serve them directly to any of the hypervisor hosts in the group to use as storage.

To set the hypervisor host:

  1. Open the DataCore Server Details page for the virtual machine running SANsymphony-V software.
  2. In the Settings tab, select the Hypervisor host from the drop-down box or select None to remove the hypervisor host.
  3. Click Apply.


The next step is to add the IP address of the SANsymphony server to the list of iSCSI targets on the ESXi host.
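
This can be done from the host's storage adapter settings (Dynamic Discovery on the software iSCSI adapter) or scripted. The sketch below is a minimal pyVmomi example that adds the SANsymphony server as a Send Targets address on the host's software iSCSI adapter and then rescans; the vCenter credentials shown are placeholders, and the IP addresses are the lab values from above.

```python
# Minimal pyVmomi sketch: add the SANsymphony server as a dynamic discovery
# target on the ESXi host's software iSCSI adapter, then rescan for the LUN.
# IP addresses are the lab values from this post; credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "192.168.109.200"
ESX_HOST = "192.168.109.7"
SANSYMPHONY_IP = "192.168.109.235"

def main():
    ctx = ssl._create_unverified_context()      # lab only: skip certificate checks
    si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                      pwd="<password>", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == ESX_HOST)
        storage = host.configManager.storageSystem

        # Find the software iSCSI adapter (e.g. vmhba64).
        sw_iscsi = next(hba for hba in storage.storageDeviceInfo.hostBusAdapter
                        if isinstance(hba, vim.host.InternetScsiHba))

        # Add the SANsymphony server under Dynamic Discovery and rescan.
        target = vim.host.InternetScsiHba.SendTarget(address=SANSYMPHONY_IP, port=3260)
        storage.AddInternetScsiSendTargets(iScsiHbaDevice=sw_iscsi.device,
                                           targets=[target])
        storage.RescanAllHba()
        storage.RescanVmfs()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```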


You can see the disk after rescan.


Now this disk can be used to create a VMFS datastore or be provisioned as an RDM to any virtual machine.