
4300u, VT-d, and Hyper-V

Brad

New Member
I have the 4300U in my SP2, so this isn't a 4200-vs-4300 thread. I'm just wondering: even though the CPU is VT-d capable, is it actually exposed in the BIOS/firmware, and does it need to be enabled? I'm using Hyper-V for virtualization and wondering whether the 4300 is using the extra virtualization features. I know VMware Workstation is a type-2 hypervisor, so it likely doesn't make use of VT-d in any case. Would any of the other hypervisors make use of it?
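Is there even a way to check from within Windows? The closest thing I've found is scraping systeminfo, which as far as I can tell only reports VT-x/SLAT status and whether virtualization is enabled in firmware, not VT-d itself. A rough Python sketch:

    # Print the virtualization-related lines from systeminfo.
    # Caveat: this reflects VT-x/SLAT and firmware state, not VT-d.
    import subprocess

    out = subprocess.check_output("systeminfo", universal_newlines=True)
    keys = ("Hyper-V Requirements",
            "VM Monitor Mode Extensions",
            "Virtualization Enabled In Firmware",
            "Second Level Address Translation",
            "Data Execution Prevention")
    for line in out.splitlines():
        if any(k in line for k in keys):
            print(line.strip())

(Once Hyper-V is actually installed, systeminfo just reports that a hypervisor has been detected instead.)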
 

jrapdx

Member
I have the 4300U in my SP2, so this isn't a 4200-vs-4300 thread. I'm just wondering: even though the CPU is VT-d capable, is it actually exposed in the BIOS/firmware, and does it need to be enabled? I'm using Hyper-V for virtualization and wondering whether the 4300 is using the extra virtualization features. I know VMware Workstation is a type-2 hypervisor, so it likely doesn't make use of VT-d in any case. Would any of the other hypervisors make use of it?

I've been using Hyper-V on my SP2 since December 2013. Actually, the first SP2 was "killed" by the infamous December 2013 firmware incident, and its replacement had the 4300U CPU.

A few months ago there was some discussion here about hardware VT-d. As I remember, Hyper-V did not use that capability on the SP2, though I don't recall the technical reasons. In any case, it's unclear to me how much benefit it would provide even if Windows could make use of it on the SP2.

Just curious--what's your use case for Hyper-V? Mine is (currently) running FreeBSD in a VM, which in turn runs a web/app server under development. Quite conveniently, this lets a Windows browser access the VM's server in exactly the same way as a remote host on the Internet. VM performance is adequate as long as it's limited to command-line programs; attempting a GUI in the VM slows things to a crawl.
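To make that concrete, the VM side is conceptually nothing more than this minimal Python stand-in (the real app server and the port number are different, of course); binding to 0.0.0.0 is what lets the Windows browser reach it like any remote host:

    # Minimal stand-in for the app server running inside the VM.
    # Listening on 0.0.0.0 exposes it on the VM's NIC, so the host's
    # browser can hit http://<vm-ip>:8080/ like any remote site.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"served from inside the VM\n")

    HTTPServer(("0.0.0.0", 8080), Hello).serve_forever()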
 

GreyFox7

Super Moderator
Staff member
Intel® Virtualization Technology for Directed I/O provides VMM software with the following capabilities:
  • Improve reliability and security through device isolation using hardware-assisted remapping
  • Improve I/O performance and availability by direct assignment of devices
Is your use case targeted at security or performance? I'd think the latter is a practical impossibility here, as there are no redundant resources available to assign a device directly to a VM.

OK, you could put a VHD on a USB-attached disk, at which point that VM is the only one using the disk, but put another VHD on it and you can no longer dedicate the resource.

In theory you could plug two USB NICs into a USB hub and dedicate one to each VM, but they would still share the same USB connection; bonding would actually be better for performance in that case. Moreover, my understanding is that the NIC itself needs to support this capability, and that's unlikely for any NIC in this scenario.

Do we know how Hyper-V would dedicate these resources? I've seen some discussion that only disks are supported, and it certainly gets more complicated when the disk is USB-attached.
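For what it's worth, the mechanism I believe Hyper-V has for dedicating a whole disk is the pass-through disk: take the disk offline on the host, then attach it to the VM directly. Something like this on a server SKU (the VM name and disk number are made up, and whether client Hyper-V allows it I haven't verified):

    # Hypothetical sketch: give a VM exclusive use of physical disk 2.
    # "DevVM" and the disk number are placeholders.
    import subprocess

    ps = """
    Set-Disk -Number 2 -IsOffline $true
    Add-VMHardDiskDrive -VMName DevVM -ControllerType SCSI -DiskNumber 2
    """
    subprocess.check_call(["powershell.exe", "-NoProfile", "-Command", ps])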

I'm not sure how it could happen realistically... but I'm open to the idea.
Additionally, it would seem that with SSD-class disk speeds, the memory, the CPU, or CPU throttling would become the limiting factor.

I did a little testing on a setup (not yet implemented) with 2 SSDs and 2-4 USB NICs on a laptop with two USB 3.0 ports. It did not exceed the bandwidth of the two USB 3.0 ports, but I'm afraid it would exceed the capacity of a single port. I think you could do one SSD and two NICs, but you're going to get hot in there, and that will ultimately limit your performance.
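The test itself doesn't need to be fancy; timing a big sequential write per disk is enough to see whether a single USB 3.0 port becomes the bottleneck. Something like this (path and sizes are placeholders):

    # Crude sequential-write throughput check for one disk.
    import os, time

    PATH = r"E:\throughput.bin"   # file on the USB-attached SSD (placeholder)
    CHUNK = 4 * 1024 * 1024       # 4 MiB per write
    TOTAL = 1024 ** 3             # 1 GiB total

    buf = os.urandom(CHUNK)
    start = time.time()
    with open(PATH, "wb", buffering=0) as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        os.fsync(f.fileno())
    print("%.0f MB/s" % (TOTAL / (time.time() - start) / 1e6))
    os.remove(PATH)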
 

Brad

New Member
Thanks for the reply. Sounds like it's not a huge benefit, if any. Likely more of a supply decision between Microsoft and Intel.

My use case is purely development. Given the environment configurations I need for different projects, it's more convenient to spin up a VM per project. I use VMware at work on a Precision M4700, but it's a tank to carry around. I'm more familiar with VMware, but on the Surface Pro, Hyper-V seems to squeak out a bit more performance.
 

GreyFox7

Super Moderator
Staff member
It's probably more complex for Intel to exclude it from the package than to leave it in, but there's little practical use for it with a U-class processor.
 

jrapdx

Member
Intel® Virtualization Technology for Directed I/O provides VMM software with the following capabilities:
...
Is your use case targeted at security or performance? I'd think the latter is a practical impossibility here, as there are no redundant resources available to assign a device directly to a VM.
As I recall, client Hyper-V, as on the SP2 under W8.1, can't directly access physical disks and is limited to VHDs.
...
In theory you could plug two USB NICs into a USB hub and dedicate one to each VM, but they would still share the same USB connection; bonding would actually be better for performance in that case. Moreover, my understanding is that the NIC itself needs to support this capability, and that's unlikely for any NIC in this scenario.
Of course, a VM can access an external USB NIC. In fact, I use this just to have wireless internet access for both the Win8.1 host and the VM; it seems one can't have the internal wireless NIC attached to the VM and available on the host at the same time. I suppose if there were two VMs running and two external NICs, each VM could have its own NIC attached. But you're probably right: I/O throughput may not increase, and the VMs could even see somewhat reduced I/O performance.
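Incidentally, it's easy to check which physical adapter each virtual switch is bound to; a quick sketch calling PowerShell from Python:

    # List each Hyper-V virtual switch and the physical adapter behind it.
    # An "External" switch bound to the USB NIC is what lets the host and
    # the VM share that adapter.
    import subprocess

    ps = ("Get-VMSwitch | "
          "Format-Table Name, SwitchType, NetAdapterInterfaceDescription")
    print(subprocess.check_output(
        ["powershell.exe", "-NoProfile", "-Command", ps],
        universal_newlines=True))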
Do we know how Hyper-V would dedicate these resources? I've seen some discussion that only disks are supported, and it certainly gets more complicated when the disk is USB-attached.

I'm not sure how it could happen realistically... but I'm open to the idea.
Additionally, it would seem that with SSD-class disk speeds, the memory, the CPU, or CPU throttling would become the limiting factor.
Indeed, performance isn't improved, but IMO that's not the main motivation for such a setup (multiple disks, NICs). It's only worthwhile if there's a need to access the resource at all, even with performance no better or even worse.
I did a little testing on a setup (not yet implemented) with 2 SSDs and 2-4 USB NICs on a laptop with two USB 3.0 ports. It did not exceed the bandwidth of the two USB 3.0 ports, but I'm afraid it would exceed the capacity of a single port. I think you could do one SSD and two NICs, but you're going to get hot in there, and that will ultimately limit your performance.
Haven't noticed unusual thermal issues with multiple attached devices, but I/O performance limits are about as I'd expect considering the constraints intrinsic to the tablet form factor.
 
VT-d is completely useless on the Surface Pro unless you want to pass through PCI devices. "Passthrough" here means _complete_ passthrough, i.e. the virtual machine gains access to the PCI card _instead_ of the host.

Passthrough of PCI devices isn't even implemented in the client editions of Hyper-V. You would need to install a server edition of Windows, or one of the pricey editions of VMware.

Besides, the only PCI devices on the Surface Pro are the following (a sketch for enumerating them yourself follows the list):
- The USB controller. Useless to pass through, because passing through individual USB devices is virtually as efficient, and since the keyboard etc. are connected via USB, you'd lose them on the host.
- The sound controller. Also useless, because doing it the normal way is easier and performs just as well.
- The SATA controller. Completely useless because the host loses access to it.
- The VGA controller (aka the Intel GPU). This is slightly more interesting, but passthrough of VGA devices is currently mostly a research topic. Using Xen, it would let you e.g. boot Windows as a guest under a Linux host that has "relinquished" the graphics card. There have also been some articles about guests doing accelerated OpenCL this way.
- Nothing else. Not even the network card is PCI. Individual USB devices are also not PCI (they are virtualized in a different way that does not require VT-d -- the way client Hyper-V, VirtualBox, VMware, etc. do it).
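Enumerating them on your own unit is easy, since the DeviceID of every PCI device starts with "PCI\". A rough Python sketch, going through WMI via PowerShell:

    # List every PnP device whose DeviceID marks it as PCI.
    import subprocess

    ps = ("Get-WmiObject Win32_PnPEntity | "
          "Where-Object { $_.DeviceID -like 'PCI*' } | "
          "Format-Table Name, DeviceID -AutoSize")
    print(subprocess.check_output(
        ["powershell.exe", "-NoProfile", "-Command", ps],
        universal_newlines=True))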

Basically don't get the 4300U because of VT-d. You will _never_ use VT-d.
 