- Power off the virtual machine.
- Unregister the virtual machine. Right-click the virtual machine in the Inventory and choose Remove from Inventory.
- Connect to the host:
- For ESX, use an SSH client. For more information, see Connecting to an ESX host using an SSH client (1019852).
- For ESXi, use Tech Support Mode. For more information, see Tech Support Mode for Emergency Support (1003677) or Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910).
- Change directory to the folder where the virtual machine resides:
cd /vmfs/volumes/datastore_name/virtual_machine_folder/
- Edit the virtual machine's configuration (.vmx) file with a text editor.
- Add this line:
sched.swap.dir = "/vmfs/volumes/datastore/"
- Register the virtual machine again. For more information, see Registering or adding a virtual machine to the inventory (1006160).
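For reference, the same power-off / unregister / edit / register cycle can be driven entirely from the host shell with vim-cmd. This is a minimal sketch; the Vmid, VM, and datastore names below are placeholders for illustration:

vim-cmd vmsvc/getallvms                              # note the Vmid of the VM
vim-cmd vmsvc/power.off 42                           # power it off (assuming Vmid 42)
vim-cmd vmsvc/unregister 42                          # remove it from the inventory
vi /vmfs/volumes/datastore1/myvm/myvm.vmx            # add: sched.swap.dir = "/vmfs/volumes/swap_ds/"
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx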
*********************************************************************************
Just relocating the swap files to a different LUN does not, by itself, cause performance issues.
Another issue could be slower vMotion if the swap files are on local storage rather than shared storage, because the swap file then has to be recreated on, and its swapped pages copied to, the destination host during the migration.
Ques: From an architecture view, if we have all the swap space set up on a single LUN for the production environment, would we need a corresponding LUN of the same size set up for the DR site? Or would it be better just to have the VM take care of the swap at the DR site?
Ans: I would not suggest dedicating a LUN to swap files at the DR site, because that storage space would sit idle for most of its life. Storage being expensive, you might not want to keep it idle. Instead, let the VMs boot with the swap file stored alongside the virtual machine files.
However, if you ever get into a failover situation and run VMs from the DR site, then at failback the swap files would be replicated back to the primary site, recreating the very situation you have in hand today. Still, the cost of replicating this during failback might be much less than the cost of idle storage of this capacity.
Choose what suits your environment best.
– As you might be aware, Storage vMotion creates a temporary second .vswp file on the source, so how did you size the .vswp datastore to accommodate Storage vMotion?
– In the scenario you shared, how did you address the memory-overhead swap file (the "VMX swap file") created with each VM from ESXi 5.0 onward?
– If I assume the business-critical VMs in your SRM scenario do not undergo frequent power cycles, and the solution is architected correctly with memory reservations configured, the .vswp file will be static, with no changes to storage blocks after the first full copy. Would you still recommend a dedicated .vswp datastore?
There is also a chance of the .vswp storage blocks changing. The .vswp file is not a static file, although it may seem so (because its size on the datastore is fixed); it is used for swapping, and it is not guaranteed that the same memory pages are swapped out every time.
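For context on that size calculation: the .vswp file is created at the size of the VM's configured memory minus its memory reservation, which is why a full reservation shrinks it to essentially nothing. As an illustrative sketch (the values are hypothetical, and sched.mem.min is the per-VM reservation in MB), the relevant .vmx entries for an 8 GB VM with a 2 GB reservation would look like:

memsize = "8192"
sched.mem.min = "2048"

With these settings, the .vswp created at power-on would be 8192 - 2048 = 6144 MB, i.e. 6 GB.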
Now, the SRM and swap file connection: with SRM your VM is going to get a reboot at the DR site anyway, so the .vswp file is an additional load that you have to carry even for the initial copy. As we know, the .vswp file is created when the VM is powered on and deleted when the VM is powered off.
So with SRM it is good to store the .vswp files on a datastore other than the ones participating in the protection group.
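One quick sanity check after power-on is to confirm where the swap file actually landed (the datastore and VM names here are illustrative):

ls -lh /vmfs/volumes/swap_ds/myvm/                   # the .vswp should show up here
ls /vmfs/volumes/replicated_ds/myvm/ | grep vswp     # and should not appear in the replicated folder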
Local swap files on SSD are a good design consideration and can give better performance if swapping does occur.
As we all know, when we look at interoperability with Storage vMotion and Storage DRS:
“Due to some specific and limited cases where recoverability can be compromised during storage movement, Site Recovery Manager 5.0.1 is not supported for use with Storage vMotion (SVmotion) and is not supported for use with the Storage Distributed Resource Scheduler (SDRS) including the use of datastore clusters. If you use SVMotion to move a protected virtual machine from a datastore that is in a protection group to a datastore that is not protected, you must manually reconfigure the protection of that virtual machine.”
The above extract from the SRM release notes clearly shows that it is not a great idea to use Storage vMotion for VMs protected by SRM, since it would result in reconfiguring the protection group, or a resync of the entire data in the case of vSphere Replication.
To your second point: if you need to do a Storage vMotion, you need extra space on the datastore, because Storage vMotion creates a new swap file for the duration of the migration. Since you would not perform a Storage vMotion on all the VMs at the same time, 20% headroom on the swap file LUN should be enough for any environment.
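As a back-of-the-envelope check of that headroom figure, using the 50-VM / 4 GB example from the next point (a minimal shell sketch; the numbers are illustrative):

# 50 VMs x 4 GB of unreserved memory each = 200 GB of .vswp files,
# plus 20% headroom for the temporary second swap file during Storage vMotion
echo $(( 50 * 4 * 120 / 100 ))    # prints 240, i.e. size the swap LUN at ~240 GB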
To your third point: I agree that your super-critical machines would not swap out much; however, the files will still be created. So if you are protecting around 50 machines with 4 GB of swap each, you have to replicate around 200 GB of data that is of no use. Companies can burst bandwidth to do so, but it comes at a cost.
Why not plan future ESX hosts with at least 64 GB of SSD for swap? I think this solves two issues:
1. improves VM performance when there is swap-out.
2. improves vMotion if the swap is on a local DS.
I understand this is not for all environments, but it can be used for most of them, and it is justifiable from the cost factor as well.
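If you go down that route, the same sched.swap.dir setting from the procedure at the top can point a VM's swap at the host-local SSD datastore (the datastore name below is hypothetical):

sched.swap.dir = "/vmfs/volumes/local_ssd_ds/"

Keep in mind, though, the point made earlier in the thread: when the swap file is not on shared storage, it has to be recreated on the destination host during a vMotion, so weigh that against the SSD performance benefit.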
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf
Alright, this discussion is heating up and we have got some great comments. Adding a couple from @amitchell01:
Hi, nice post. I do agree that swap file replication is not a good idea. However, my query is: if we have the swap file on a different datastore, won't the protection group complain about part of the VM not being replicated, and would you need to power off the VM every time you want to add it to the protection group? Please suggest if there is any option to avoid that.