How do I clear a virtual machine's "inaccessible" status in VMware vCenter?
The virtualization server rebooted, and some of the virtual machines now show the status "inaccessible". How do I clear it?
Sometimes, after a failure, some VMs in the VMware VirtualCenter inventory tree are displayed in gray italics and marked as "inaccessible", i.e. they are unavailable (orphaned). In particular, this status appears on powered-off machines when the connection between the ESXi hosts and the datastore is lost. Virtual machines that were powered on at the moment of the failure resume running on their own once the error is fixed. Powered-off VMs, however, become orphaned and have to be reconnected, that is, registered in VirtualCenter by hand.
Adding (registering) an orphaned virtual machine in VMware VirtualCenter
To add a virtual machine to vCenter using the VI Client:
— on the Summary tab of the VMware ESX Server host, right-click the datastore and select Browse Datastore;
— open the folder of the virtual machine and locate its .vmx configuration file;
— right-click the .vmx file and select Add to Inventory;
— specify the name and location of the virtual machine, the host or cluster, and the resource pool, then click Finish.
The virtual machine will appear in VMware VirtualCenter.
Registration is also needed after a failure during virtual machine migration: the VM's files have physically moved, in whole or in part, but the machine is not present on any host. In that case, transfer the missing files to the new datastore manually (using WinSCP or FastSCP) and register the machine on the new host.
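If the GUI is unavailable, the same registration can be done from the ESXi shell. This is only a minimal sketch; the datastore and folder names below are placeholders:
# Check which VMs the host already has registered
vim-cmd vmsvc/getallvms
# Register the orphaned VM by pointing at its .vmx file (path is an example)
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx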
Lost an NFS datastore
On my VMs the datastore shows as inaccessible. These VMs have no files on this datastore; as a matter of fact, the datastore is completely empty, no files at all.
I tried to unmount the datastore, but it fails with "in use" errors.
I shut down the VMs that had this inaccessible datastore; it made no difference.
How do I get rid of this datastore?
Many VMs had a CD-ROM mounted to ISO files that were on the bad datastore.
Once I changed the CD-ROM to a host device, the inaccessible datastore was no longer there.
That left the two VMs that still had NFS01.
Once I added the updated NFS01 back and migrated those two VMs to the NFS01 datastore, each VM showed only one datastore.
This has been resolved.
Since there are no error details, here are two KBs that might help, if you haven't tried them already.
esxcli storage nfs list
Volume Name  Host       Share           Accessible  Mounted  Read-Only  isPE   Hardware Acceleration
-----------  ---------  --------------  ----------  -------  ---------  -----  ---------------------
NFS06        10.2.8.50  /volume1/NFS06  true        true     false      false  Not Supported
NFS05        10.2.8.47  /volume1/NFS05  true        true     false      false  Not Supported
NFS04        10.2.8.46  /volume1/NFS04  true        true     false      false  Not Supported
NFS02        10.2.8.51  /volume1/NFS02  true        true     false      false  Not Supported
NFS03        10.2.8.48  /volume1/NFS03  true        true     false      false  Not Supported
NFS01        10.2.8.13  /volume1/NFS01  false       true     false      false  Unknown
I need to remove the last one in the list (NFS01).
What error are you receiving when you try to remove it?
I was able to remove the datastore from each ESXi host using this.
Now none of the ESXi hosts have that NFS datastore.
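The command itself was not quoted above; as a hedged reference, removing a stale NFS mount from a host is usually done from the ESXi shell like this (the volume name NFS01 is taken from the listing above):
# Remove the NFS mount from this host's configuration
esxcli storage nfs remove -v NFS01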
My problem is that it still appears in the datastore cluster.
When I go to Datastore Clusters and select my datastore cluster, I still see NFS01.
How do I remove it from the datastore cluster?
When I select Unmount Datastore, I get an empty list, which means the ESXi hosts no longer have the datastore.
I found that several VMs had a CD-ROM mounted that was pointing to the datastore.
After I changed the CD-ROM to a host device, the NFS01 datastore disappeared from them.
Only two VMs still show this datastore.
Once I figure out how to remove the reference from those VMs, I will be able to move forward with getting my VMs back online.
Any ideas on how to remove the datastore at the VM level?
What part of the VM files could possibly be on that datastore?
Seems like you’ve sorted ISOs, but do those VMs have snapshots?
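One way to see which VM files still reference the datastore is to search the configuration and snapshot metadata files from an ESXi shell. A minimal sketch, assuming the standard /vmfs/volumes layout (the path pattern and datastore name are examples):
# List .vmx files that still mention the old datastore by name
grep -l "NFS01" /vmfs/volumes/*/*/*.vmx
# Snapshot metadata (.vmsd) can hold stale references too
grep -l "NFS01" /vmfs/volumes/*/*/*.vmsd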
That datastore was corrupted; I had to rebuild the volume on the NAS device, which wiped all the files. The hard drives were fine, I just lost the volume.
I had to use the replica VMs from my Veeam B&R to bring the VMs up. The replicas are stored on the internal drives of the ESXi host as a safety feature. Good thing.
They have replicas from when they get backed up, and those files are on the internal disks.
Those two VMs are the only ones that still have that datastore.
Recovering "inaccessible" VMs after a SAN outage
In short, I am looking for help recovering "inaccessible" VMs.
My operating theory is that during maintenance on the UPS connected to our SAN rack, I accidentally tripped the power on one of three Fibre Channel switches. When this happened, roughly 30 VMs were in vMotion through that switch, and as a result of the power loss several datastores hosted on the SAN were disconnected and listed as inaccessible. The datastores did not reconnect to the affected hosts as expected (FC connections are redundant; there was always a path from host to storage array). The roughly 30 VMs previously mentioned were unresponsive. Attempting to open VMRC for any of the affected VMs failed with the error: "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period." These VMs reported vmtools not running and 0 bytes of allocated storage, but with a static CPU load (seemingly stuck at whatever the last recorded load was). No commands to the affected VMs completed successfully; most timed out or continued to run indefinitely.
I migrated off unaffected VMs to unaffected hosts. This process completed without issue. Afterwards, all affected hosts were restarted and successfully reconnected to the datastores. At this point, I expected the inaccessible VMs would be identified on the datastores and return to normal operation. This did not happen, and the VMs are still listed as inaccessible.
Brief Version/Config Info:
ESXi 5.5.0 update 3 (VMKernel Release Build 3248547)
vCenter Server Appliance 5.5.0.30500 Build 4180648
All hosts are part of a single cluster which has vSphere DRS enabled (automated)
All datastores are part of a single datastore cluster with Storage DRS enabled (automated) and Storage I/O Control, VMFS5
Thank you for taking the time out of your day to read this. I was fortunate that the failed VMs were not critical, but one of them is important and I would like to understand why I cannot restore it from the datastore.
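Before removing and re-registering the VMs, it may be worth asking each host to reload its stale inventory entries, which often clears an invalid/inaccessible state without a reboot. A minimal sketch from the ESXi shell (the vmid 42 is an example):
# Find the vmid of the affected VM
vim-cmd vmsvc/getallvms
# Ask hostd to re-read that VM's configuration from disk
vim-cmd vmsvc/reload 42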
I run VMware on 3 hosts, with local storage on each.
A VM went "offline" last evening; opening the console showed a blank screen. I opted to reset it through the console, but the operation got to 95% and then stalled. I've seen articles discussing this. All options at that point were greyed out, and it took about 45 minutes for the operation to time out.
Restarting the services through SSH took a long time on HPHelper, but everything else went through smoothly.
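For reference, the standard ways to restart the ESXi management agents over SSH (these commands are not quoted from the post):
# Restart all management agents, including hostd and vpxa
services.sh restart
# Or restart the key agents individually
/etc/init.d/hostd restart
/etc/init.d/vpxa restart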
Attempting to browse the datastore just spins and searches. Through SSH it will show the files within, but states they are inaccessible.
The server shows as overall healthy, and I was advised to reboot the host to resolve the issue, but I am hesitant to do that with other critical VMs on the host. I am opting to live-migrate them to a different host and then reboot, just in case there is a larger issue at play here.
While this is going on, my question is: what would cause a datastore to go blank/inaccessible like this?
My infrastructure has not been touched in a year and a half as far as major changes go. Once it was set up and running smoothly, I have not modified it.
This is also one of those times where I feel a SAN would benefit me greatly versus local storage: instead of potentially being a 10-minute fix, it's a 24-hour fix.
The major reason for a datastore to become inaccessible is a loss of connectivity. This is not uncommon with network-attached storage and even SAN, and is why multipathing is so important; a quick way to check path health is shown below.
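As an aside, path redundancy and path state can be checked from the ESXi shell with standard esxcli commands (not quoted from the original thread):
# Show every storage path the host sees, with its state
esxcli storage core path list
# Per-device summary, including the path selection policy
esxcli storage nmp device list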
Since your storage is local, though, I would suspect that either the firmware on your disk controller has gradually gone out of date as your servers were patched and other components of the system were updated, or the general stability of the host has been compromised by running for an extended period of time without a restart.
ESXi is not Windows, and I have seen ESXi servers running reliably for way more than a year since they were last booted, but still, I am most comfortable rebooting my hosts as often as available patches provide an opportunity.
I suspect restarting your server will fix the issues you’re seeing browsing the local datastore. I’d be interested in hearing either way.
Datastores inaccessible after every reboot (ESXi 5.5, Adaptec 51245)
I have an ESXi 5.5 server with an Adaptec 51245 controller; it worked without problems for about two years.
Now a problem has appeared: after a reboot there is no access to any of the virtual machines, and each time I have to re-add the datastores manually through vSphere Configuration/Storage.
The latest controller driver is installed: scsi-aacraid 5.0.5.1.7.29100-1OEM.500.0.0.472560 Adaptec_Inc CommunitySupported 2015-01-30.
All arrays are in excellent condition.
Things look rather "interesting" on your system.
1. The local controller is being named vmnic rather than vmhba:
2015-05-18T02:50:08.073Z cpu3:33329)VMK_PCI: 395: Device 0000:02:00.0 name: vmnic0
2015-05-18T02:50:08.073Z cpu3:33329)DMA: 612: DMA Engine 'vmnic0' created using mapper 'DMANull'.
2015-05-18T02:50:08.073Z cpu3:33329)ScsiScan: 976: Path 'vmnic0:C0:T0:L0': Vendor: 'Adaptec ' Model: 'ESXi ' Rev: 'V1.0'
2015-05-18T02:50:08.073Z cpu3:33329)ScsiScan: 979: Path 'vmnic0:C0:T0:L0': Type: 0x0, ANSI rev: 2, TPGS: 0 (none)
2015-05-18T02:50:08.074Z cpu3:33329)ScsiUid: 273: Path 'vmnic0:C0:T0:L0' does not support VPD Device Id page.
2015-05-18T02:50:08.074Z cpu3:33329)ScsiScan: 1105: Path 'vmnic0:C0:T0:L0' : No standard UID: Failure. ANSI version 'SCSI-2' (0x2).
2. The datastores are not being mounted because they are detected as snapshots; see VMware KB: vSphere handling of LUNs detected as snapshot LUNs:
2015-05-18T02:50:25.383Z cpu1:33173)LVM: 8349: Device mpx.vmnic0:C0:T0:L0:3 detected to be a snapshot:
2015-05-18T02:50:25.383Z cpu1:33173)LVM: 8356: queried disk ID:
2015-05-18T02:50:25.383Z cpu1:33173)LVM: 8363: on-disk disk ID:
2015-05-18T02:50:25.391Z cpu1:33173)LVM: 8349: Device mpx.vmnic0:C0:T2:L0:1 detected to be a snapshot:
2015-05-18T02:50:25.391Z cpu1:33173)LVM: 8356: queried disk ID:
2015-05-18T02:50:25.391Z cpu1:33173)LVM: 8363: on-disk disk ID:
2015-05-18T02:50:25.395Z cpu1:33173)LVM: 8349: Device mpx.vmnic0:C0:T1:L0:1 detected to be a snapshot:
2015-05-18T02:50:25.395Z cpu1:33173)LVM: 8356: queried disk ID:
2015-05-18T02:50:25.395Z cpu1:33173)LVM: 8363: on-disk disk ID:
2015-05-18T02:50:25.401Z cpu1:33173)LVM: 8349: Device mpx.vmnic0:C0:T3:L0:1 detected to be a snapshot:
2015-05-18T02:50:25.401Z cpu1:33173)LVM: 8356: queried disk ID:
2015-05-18T02:50:25.401Z cpu1:33173)LVM: 8363: on-disk disk ID:
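If the snapshot detection is indeed the cause, the affected volumes can be listed and persistently mounted with their existing signature from the ESXi shell. A minimal sketch; the volume label below is a placeholder:
# List VMFS volumes the host has flagged as snapshots/replicas
esxcli storage vmfs snapshot list
# Mount one persistently, keeping its original signature
# ("datastore1" stands in for a label from the list above)
esxcli storage vmfs snapshot mount -l datastore1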
Besides the driver, I would also advise updating the Adaptec firmware.
Do you have any screenshots? A vmkernel log collected after a reboot?



