Most of the time, if a C program is using an unusually large amount of memory, there is a memory leak. Unless you have a specific reason to expect a very large data set in memory, you should first try to find the leak.
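A common way to track down a leak is to run the program under a memory checker such as Valgrind. This is only a sketch; my_program and its input file are placeholder names for your own executable and arguments.

    valgrind --leak-check=full ./my_program input.dat

Blocks reported as "definitely lost" in the leak summary are memory your program allocated and never freed.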

In most cases there are techniques you can use to optimize your code and reduce its memory footprint. You might also need to change how you organize your computation: for example, slice your data set and run more, smaller jobs (see the sketch below). This gives you more redundancy and a better chance of completing your work, simply because you lose far less if a single job is killed.
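Under Condor, one way to split work like this is to queue several jobs from a single submit file and hand each one a different chunk of the data via $(Process). This is only a sketch under assumed names: my_program and the chunk_N.dat files stand in for your own executable and pre-split input files.

    universe   = vanilla
    executable = my_program
    arguments  = chunk_$(Process).dat
    output     = job_$(Process).out
    error      = job_$(Process).err
    log        = jobs.log
    queue 10

Here $(Process) expands to 0 through 9, so each of the ten jobs works on a different slice of the data.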

It is not unreasonable to need more than a couple of GB of memory these days, especially if you are using Matlab or another product that adds a fair amount of overhead. In those cases, run a test job to determine the size of your memory footprint. After the job completes, use the condor_history -long command to view the job's ClassAd attributes and look for ImageSize, the maximum observed image size reported in KB. Use this as a guide for setting the request_memory attribute in your command file; note that request_memory is normally interpreted in MB, so convert the ImageSize value accordingly. This attribute tells Condor that your job may use up to the specified amount of memory, and Condor uses it when matching your job to resources so that your job will not run on a machine with too little memory.
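As a concrete sketch, suppose your test job ran as cluster 1234 (a placeholder ID). You could look up its observed size and then request somewhat more than that in the command file; the numbers below are purely illustrative.

    condor_history -long 1234 | grep ImageSize
    ImageSize = 1850000

    # in the command file: 1850000 KB is about 1807 MB, so request 2 GB for headroom
    request_memory = 2048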