
Civilization V on SP2

Korlon

Member
I've heard such great things about Civilization V, and today I finally installed it on my SP2 with 8GB of RAM, but I have run into some issues.
First, the game doesn't run full screen. I've heard about scaling to 100%, but how do I do that?
Second, after playing three or four turns I get an error box popping up saying I'm running out of memory, even though Task Manager shows I'm only at 41%.

I've read that many are playing the game on the SP2, so any suggestions on how to fix these issues?
 

Philtastic

Active Member
Some games, for whatever reason, do not automatically scale to fullscreen when running at lower than 1920x1080 (e.g., Civ V and Saints Row 4). You can fix this by setting the screen resolution to your SP2's native 1920x1080. Civ V is such a lightweight game that it will run quite happily at medium-ish settings at that resolution.
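
If you'd rather not click through the Screen Resolution dialog every time, here's a minimal sketch that switches the desktop to native 1920x1080 before you launch the game. It assumes the pywin32 package (pip install pywin32); that's my assumption, nothing the game itself requires:

# Minimal sketch: force the desktop to the SP2's native 1920x1080 before
# launching Civ V, using the pywin32 package (an assumption on my part).
import win32api
import win32con

def set_resolution(width=1920, height=1080):
    # Start from the current display mode so the other fields stay valid.
    devmode = win32api.EnumDisplaySettings(None, win32con.ENUM_CURRENT_SETTINGS)
    devmode.PelsWidth = width
    devmode.PelsHeight = height
    devmode.Fields = win32con.DM_PELSWIDTH | win32con.DM_PELSHEIGHT
    # Flag 0 = apply now; returns DISP_CHANGE_SUCCESSFUL (0) on success.
    result = win32api.ChangeDisplaySettings(devmode, 0)
    if result != win32con.DISP_CHANGE_SUCCESSFUL:
        raise RuntimeError(f"ChangeDisplaySettings failed with code {result}")

if __name__ == "__main__":
    set_resolution()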

Concerning the memory issue, just ignore it. I found that I would only get that message when I was experimenting with disabling the pagefile to spare my solid-state drive, until I read more about it and saw mostly recommendations to leave it on no matter how much RAM you have.
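
If you want to see the numbers behind that popup, here's a small sketch (Windows only, standard-library ctypes) that reads the counters via the documented GlobalMemoryStatusEx call. As far as I know, the 41% you saw is the physical-RAM load; the out-of-memory box is about the commit limit (roughly RAM plus pagefile), which is exactly what shrinks when the pagefile is off:

# Read the memory counters via the Win32 GlobalMemoryStatusEx call.
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gib = 1024 ** 3
print(f"physical RAM in use:          {status.dwMemoryLoad}%")
print(f"commit limit (RAM+pagefile):  {status.ullTotalPageFile / gib:.1f} GiB")
print(f"commit still available:       {status.ullAvailPageFile / gib:.1f} GiB")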
 

Korlon

Member
Thanks Phil,

Unfortunately there is another problem I can't work around. When I start a game, the damned map keeps scrolling up and to the left. I move elsewhere on the map and the same thing happens. Do you know how to fix this issue?

I thought it could be the Wacom digitizer, so I stopped the service and re-ran the game, but I had the same problem.
 

kozak79

Active Member
Korlon said:
Unfortunately there is another problem I can't work around. When I start a game, the damned map keeps scrolling up and to the left. I move elsewhere on the map and the same thing happens. Do you know how to fix this issue?
I thought it could be the Wacom digitizer, so I stopped the service and re-ran the game, but I had the same problem.

You might be having the same problem that Starcraft II has. Right-click on the executable and go to the Compatibility tab. Set compatibility to Windows XP SP3 and check "Disable display scaling on high DPI settings". As for resolution, try running the game at 1920x1080. If that's too slow, and at a lower resolution you still can't get it full screen, then set the Windows desktop resolution to 1280x720 before you start the game, start the game, and set it to that same resolution. After you are done, you can change the desktop resolution back to 1920x1080.
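
Those Compatibility-tab checkboxes are just per-user registry values under the AppCompatFlags key, so if you end up doing this for several games, a sketch like this does the same thing. The Civ V path below is a placeholder; point it at your own executable:

# Minimal sketch: script the same settings the Compatibility tab writes.
import winreg

EXE = r"C:\Games\Civilization V\CivilizationV.exe"  # placeholder path
LAYERS = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, LAYERS) as key:
    # "WINXPSP3" = Windows XP SP3 compatibility mode,
    # "HIGHDPIAWARE" = "Disable display scaling on high DPI settings".
    winreg.SetValueEx(key, EXE, 0, winreg.REG_SZ, "WINXPSP3 HIGHDPIAWARE")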
 

Philtastic

Active Member
What mode are you starting the game in? I had problems using the touchscreen-enabled version until I used the normal DX11 version to set the proper resolution and then restarted in touchscreen mode.
 

GoodBytes

Well-Known Member
Philtastic said:
Concerning the memory issue, just ignore it. I found that I would only get that message when I was experimenting with disabling the pagefile to spare my solid-state drive, until I read more about it and saw mostly recommendations to leave it on no matter how much RAM you have.
Yes, you are correct. The pagefile should not, in fact never, be disabled. It does more than just extend your RAM; that is only one of its functions. It also provides the ability to defragment open space.

PageFile
The reason why you never want to disable the pagefile, AND want it to be AT LEAST the size of your RAM, is simple:
Your RAM needs to defragment itself. But when we talk about fragmentation here, we are not talking about data being split up, but rather about empty space. Let me explain.

A running program is called a process. When a process runs, it consumes a certain amount of memory in your RAM. That allocation always stays in one block and can't be cut into pieces like a file on your HDD. So, in a way, it never fragments. Great! Also, for many complicated reasons that I won't cover, the block of memory that a process occupies cannot be moved while the program is running. OK, fine.
But this means that the RAM fragments itself anyway. Wait, what? Yup!

It fragments its empty space. Basically, as you open and close programs, you end up with many separate blocks of empty space rather than one contiguous block.
Let's have a look at what happens (in a simple way):
- Process A consumes 10% of your RAM.
- Process B consumes 40% of your RAM.
- Process C consumes 5% of your RAM.
- Process D consumes 30% of your RAM.
- Process E consumes 5% of your RAM.
- 10% Free space after Process E to the end of your memory.
Total: 90% used, 10% free.
Now you close Process C. Since nothing can move, you now have a 5% free-space hole in your RAM; in total, you have 15% free space.
Now you open Process F, which consumes 15% of your RAM. Well, you are out of RAM: you have the space, but it doesn't fit into one block. Remember, nothing can move. Your free space is fragmented. So you get a low-memory error despite having plenty of free space.
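
If you want to play with this, here is a toy simulation of exactly that scenario: a first-fit allocator over 100 units of "RAM" where live blocks can never move. All the names and numbers are just the example above:

# Toy simulation: closing C leaves 15 units free in total, but no single
# hole is big enough for F, so the allocation fails.
def first_fit(blocks, size, name, total=100):
    """blocks: list of (start, size, name); allocate into the first hole."""
    pos = 0
    for start, length, _ in sorted(blocks):
        if start - pos >= size:               # hole before this block fits
            blocks.append((pos, size, name))
            return True
        pos = start + length
    if total - pos >= size:                   # hole at the end fits
        blocks.append((pos, size, name))
        return True
    return False                              # enough space, but no hole

ram = []
for name, size in [("A", 10), ("B", 40), ("C", 5), ("D", 30), ("E", 5)]:
    first_fit(ram, size, name)

ram = [b for b in ram if b[2] != "C"]         # close Process C
print(first_fit(ram, 15, "F"))                # False: 15 free, split 5 + 10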

What normally happens in such a situation is that Windows (or whatever OS you use) flushes the RAM clean and copies the pagefile back into the RAM in an organized (defragmented) way.

Your pagefile is a backup of your RAM. Everything you put in your RAM is duplicated in your pagefile to make this process as fast as possible (otherwise it would need to copy the RAM to your HDD or SSD, flush it, and then copy the pagefile back to the RAM; we know how long transferring several GB of data takes, it sure isn't instant, especially on an HDD).

But why the backup? "I can wait!" you say.
Well, it's because anything that isn't in your RAM can't be processed by the CPU. So your system would temporarily lock up while your RAM is emptied.

Now, Windows (well, Vista and up) is smart. To hide this, it first prioritizes key components of the OS related to the user interface and your interaction with the system, and transfers those onto your RAM first. This makes the process invisible to you, as it's just a few MB; everything else follows, driven by some fancy algorithms based on what you are currently using, plus other guesswork. Pretty smart.

In the old, old days, back when we had 128/256MB of RAM with XP or older, you might recall that your system, after starting a program, would just become unresponsive for a moment (temporarily freeze) and then come back to normal. When that occurred, your system was doing exactly this.

So, lesson: people who tell you to disable the pagefile (without a specific reason for it) when you're not running a legacy OS simply don't know what they are talking about.

This advice was actually seen as a good one (in the sense of acceptable) back in the XP days, because XP dumped everything it could to the pagefile in order to maximize empty space in your RAM. That was critical back then, as RAM was limited for what we were doing with our computers. Sadly, XP doesn't auto-adapt: if you have a high capacity of RAM (1GB or more), it still acts as if you had little memory and dumps everything to the pagefile. So, to get a more responsive experience, people disabled the pagefile, at the risk of hitting exactly what you experienced.
Since Vista, this is not the case. Vista and up prioritize RAM first; once the RAM is filled, the pagefile is used to extend it. That is another reason why Vista's performance was terrible on systems that didn't have 2GB or more of RAM, which is what most people had when they put Vista on their system (1GB, or 2GB, with 2GB being the bare minimum; you needed 4GB for a smooth experience, not to mention 256MB of dedicated GPU memory (as system RAM was too slow, unless the OEM was too cheap and used standard DDR1 or DDR2 for the graphics card instead of GDDR3) and Pixel Shader 2.0, but that's a different topic).


---------


SSD durability concern

That being said, you should not worry about the life of the SSD in your Surface Pro.
First of all, SSDs have improved in durability CONSIDERABLY, like it's not even funny, since their early days. Durability became a top priority for all manufacturers of SSDs, controllers, and chips, all working together. Now, on the consumer market, you have different types of SSDs. TLC (so far, the Samsung 840 and EVO series are the only SSDs on the market that use these chips): these are the worst chips you can get in terms of durability. That doesn't mean they are bad at all, but if you trash them with writes, really trash them with writes beyond normal system operation, I would not trust one after 3 years. They are designed for standard office or media-center computers; if you are a programmer, running a server, doing video encoding, or things like that, they are not a good fit. In any case, the Surface Pro 2 doesn't have one (only Samsung uses these chips, and it doesn't have a Samsung SSD), and even then: at the 3-year mark, will you still be keeping your Surface Pro 2?

Then you have the polar opposite, which I believe is what the Surface Pro 2 has, judging by its performance: MLC synchronous NAND chips. They are the fastest SSDs around and they are tanks in durability (they are also used in the PCI-E SSDs). You can identify them by their 5-year warranty at retail. You can trash them like there's no tomorrow and they should pass 9 years with no trouble. You will definitely not still have your Surface Pro 2 by then.

Finally, you have the in-between model: MLC asynchronous chips, which make up the rest of the SSDs and last somewhere between the two above. They will safely last 5 years if not more; by then you will surely have another Surface Pro.

The above, of course, assumes no manufacturing defects, and the times are estimates. We don't have a time machine, and I don't think you want to wait 9 years to find out for sure.
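
For a rough sense of where numbers like that come from, here's a back-of-envelope sketch. Every figure in it is made up but typical (P/E cycle counts, write amplification, daily writes); none of them come from Microsoft or any drive vendor:

# Back-of-envelope endurance estimate with made-up but typical numbers:
# lifetime ~ capacity * P/E cycles / (write amplification * daily writes).
def lifetime_years(capacity_gb, pe_cycles, write_amp, gb_written_per_day):
    total_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_writes_gb / gb_written_per_day / 365

# Hypothetical 128 GB drive, 20 GB of host writes per day, WA of 2:
print(f"TLC (~1,000 cycles): {lifetime_years(128, 1000, 2, 20):.0f} years")
print(f"MLC (~3,000 cycles): {lifetime_years(128, 3000, 2, 20):.0f} years")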
 

GoodBytes

Well-Known Member
Quoting another member:
No offense, but your description of how memory is managed and why we have pagefiles is basically totally wrong. If you're interested in actually understanding how this is managed, a good start would be:

Virtual memory - Wikipedia, the free encyclopedia
No, it is correct.

What you are showing is virtual memory... I am talking about the pagefile, the other aspect of it (in a simple, brief manner; I skipped details on purpose, it's already a wall of text). If you want to read more, get a book on operating system design, which covers this fully; there are usually one or two chapters just on this topic.
 
The other member replied:
Yes, I know a lot about OS design; nobody does base/bounds-style process memory management anymore. There is no external fragmentation of process memory in modern systems, and therefore no need to defragment process memory (from the OS's point of view; your process can still have internal fragmentation, but that's your problem). Virtual memory is the mechanism that solves the external fragmentation problem. The pagefile is only an extension of physical memory; think of it as a VERY slow cache for main memory.
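
To make the virtual-memory point concrete, here's a toy page-table sketch: a contiguous virtual range can be backed by whatever physical frames happen to be free, so the scattered holes from the earlier "Process F" example stop mattering. The frame numbers below are just that example's percentages:

# Toy page table: virtual pages map to any free physical frames, so
# Process F's contiguous 15-page range fits into the 5 + 10 holes.
free_frames = list(range(50, 55)) + list(range(90, 100))   # the two holes

def map_pages(num_pages, free):
    """Build a page table: virtual page number -> any free physical frame."""
    if num_pages > len(free):
        raise MemoryError("not enough physical frames")
    return {vpage: free.pop() for vpage in range(num_pages)}

page_table = map_pages(15, free_frames)   # Process F fits after all
print(page_table)                         # contiguous pages, scattered frames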
 

GoodBytes

Well-Known Member
I stand corrected... but nonetheless, the pagefile should not be disabled, as the system is still prone to free-space fragmentation without it, as explained correctly above.
Basically, the only thing that was wrong is the "defragmentation" process.
 
The other member replied:
Well... not exactly. :)

Modern OSs implement process-memory caching policies that attempt to keep the current "working set" in physical memory while evicting less-used code/data to the pagefile. This allows the sum of all process memory to be larger than physical RAM while maintaining reasonable performance. In other words, many programs allocate a lot of memory while only using a small subset of it most of the time. Windows, Linux, macOS, etc. all have their own heuristics for tuning their eviction policies.
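
You can see that allocate-a-lot, touch-a-little pattern directly with VirtualAlloc; here's a sketch through ctypes (Windows only, and the 1 GiB figure is just for illustration):

# Reserving address space costs almost nothing; only pages actually
# committed and touched count against RAM and the pagefile.
import ctypes

MEM_RESERVE, MEM_COMMIT, PAGE_READWRITE = 0x2000, 0x1000, 0x04
kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_ulong, ctypes.c_ulong]

GIB = 1024 ** 3
# Reserve 1 GiB of address space: no RAM or pagefile is consumed yet.
base = kernel32.VirtualAlloc(None, GIB, MEM_RESERVE, PAGE_READWRITE)
assert base, "reserve failed"

# Commit and touch only the first 4 KiB page: this is the working set.
page = kernel32.VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE)
ctypes.memset(page, 0, 4096)
print(f"reserved {GIB} bytes at {base:#x}, committed and touched 4096")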

Pagefiles generally don't exhibit external fragmentation either, because they are a fixed size and allocated contiguously on the fast part of the disk, but there are a lot of caveats to this... too much to get into. :)

I know what you're talking about with XP, but from what I remember, people turned off their pagefile to minimize the general overhead of pagefile management (it takes a long time to push little-used code/data to disk, and a long time to read that data back when it's needed). The trade-off is that everything has to fit into physical memory or you're done. So at the end of the day it was a classic time/space trade-off, and people tuning for best performance in a specific low-memory case, rather than for the general case, chose to give up space.
 