Dynamic Memory – Not Your Father’s Memory Overcommit: Part 3
“Memory Overcommit? Isn’t that a VMware capability?”
Before we talk about it, let’s take a look at the word “Overcommit” (from The Free Dictionary):
o·ver·com·mit (ō′vər-kə-mĭt′) v. o·ver·com·mit·ted, o·ver·com·mit·ting, o·ver·com·mits v.tr.
- To bind or obligate (oneself, for example) beyond the capacity for realization.
- To allocate or apportion (money, goods, or resources) in amounts incapable of replacement.
- To be or become overcommitted.
That phrase “beyond the capacity for realization” is important. To overcommit memory is to promise more memory than the host physically has.
“Is that a good thing?”
It can be. If packing more virtual machines onto a physical host matters more to you than getting decent performance out of those virtual machines, then higher consolidation ratios through overcommit can make sense.
“Dynamic Memory” is Microsoft’s solution (in Hyper-V) to a similar problem. But Microsoft does not overcommit. Instead, Hyper-V allocates memory to, and reclaims memory from, the VMs sharing a virtualization host based on each VM’s actual memory demand.
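The arithmetic behind the distinction can be sketched as follows. The host size, VM counts, and working-set numbers below are all hypothetical, chosen only to illustrate the two models:

```python
# Hypothetical numbers illustrating overcommit vs. demand-based
# (dynamic) allocation on a single virtualization host.

PHYSICAL_MEMORY_GB = 32  # assumed host capacity

# Overcommit: every VM is *promised* its full assigned amount,
# and the sum of those promises may exceed physical memory.
vm_assigned_gb = [8, 8, 8, 8, 8]  # 5 VMs x 8 GB = 40 GB promised
overcommit_ratio = sum(vm_assigned_gb) / PHYSICAL_MEMORY_GB
print(f"Promised {sum(vm_assigned_gb)} GB on a {PHYSICAL_MEMORY_GB} GB host "
      f"(overcommit ratio {overcommit_ratio:.2f})")

# Demand-based allocation: each VM holds only what it currently
# needs, so the total handed out stays within physical capacity.
vm_demand_gb = [3, 5, 2, 6, 4]  # current working sets (hypothetical)
allocated = sum(vm_demand_gb)
assert allocated <= PHYSICAL_MEMORY_GB
print(f"Dynamically allocated {allocated} GB of {PHYSICAL_MEMORY_GB} GB")
```

With these made-up numbers, the overcommit model promises 1.25× the host’s memory, while the demand-based model never hands out more than the host has.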
Today in Part 3, my friend Dan Stolts expands on this definition and these technologies for us.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)