Wikipedia:Reference desk/Archives/Computing/2014 September 27

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 27

Extracting a statement of recent activity from PayPal.

I have a merchant account with PayPal and I need to write software to use their REST API to go in once per day and extract a list of payments made into that account. I can see a bunch of stuff out there on how to do this - but it looks like a steep learning curve.

Does anyone know of example code (preferably in PHP or C/C++, but I'll take what I can get) to grab account statements or to make specific requests for payment transactions? I really don't need to learn all of the details, I just need to get one, very specific, job done using it.

Surfing around to find this myself, it seems like I have to learn way more than I have time for, just to ask the right questions and understand the answers!

TIA SteveBaker (talk) 15:07, 27 September 2014 (UTC)[reply]

You could go to the general help section of PayPal, which does seem to have a search area that provides a great deal of information: https://www.paypal.com/us/webapps/helpcenter/helphub/home/ It also looks as though you could chat with someone in the tech area, which could give you a better breakdown of the information. When in doubt, I google; here is a sample of what I found: https://developer.paypal.com/docs/classic/paypal-payments-standard/integration-guide/autobill_buttons/ Hope this helps. Mnwildcat71 (talk) 15:42, 3 October 2014 (UTC) — Preceding unsigned comment added by Mnwildcat71 (talk | contribs) 15:39, 3 October 2014 (UTC)[reply]
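
For the REST side specifically, the usual flow is two HTTPS calls: POST your client ID and secret to the OAuth2 token endpoint, then call the payment-listing endpoint with the returned bearer token. Below is a rough C++/libcurl sketch of that flow; the sandbox host and the /v1/oauth2/token and /v1/payments/payment paths are taken from PayPal's REST documentation of the time and should be double-checked against the current docs, and the credentials and token are placeholders.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the response body into a std::string.
static size_t collect(char *data, size_t size, size_t nmemb, void *userp){
	static_cast<std::string *>(userp)->append(data, size * nmemb);
	return size * nmemb;
}

// Perform one HTTPS request and return the body (empty string on failure).
static std::string request(const std::string &url,
			   const std::string &userpwd,		// "clientid:secret", or "" for none
			   const std::string &postFields,	// "" means GET
			   const std::string &bearerToken)	// "" means no Authorization header
{
	std::string body;
	CURL *curl = curl_easy_init();
	if (!curl) return body;

	struct curl_slist *headers = nullptr;
	curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
	curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
	curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
	if (!userpwd.empty())
		curl_easy_setopt(curl, CURLOPT_USERPWD, userpwd.c_str());
	if (!postFields.empty())
		curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postFields.c_str());
	if (!bearerToken.empty()){
		headers = curl_slist_append(headers, ("Authorization: Bearer " + bearerToken).c_str());
		headers = curl_slist_append(headers, "Content-Type: application/json");
		curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
	}
	if (curl_easy_perform(curl) != CURLE_OK) body.clear();
	curl_slist_free_all(headers);
	curl_easy_cleanup(curl);
	return body;
}

int main(){
	curl_global_init(CURL_GLOBAL_DEFAULT);

	// Step 1: exchange the client ID/secret (placeholders) for an OAuth2 access token.
	std::string tokenJson = request(
		"https://api.sandbox.paypal.com/v1/oauth2/token",
		"YOUR_CLIENT_ID:YOUR_CLIENT_SECRET",
		"grant_type=client_credentials", "");
	std::cout << "Token response: " << tokenJson << "\n";

	// Step 2: pull the access_token field out of that JSON (use a real JSON parser),
	// then list recent payments.
	std::string accessToken = "PASTE_ACCESS_TOKEN_HERE";	// parsed from tokenJson in real code
	std::string payments = request(
		"https://api.sandbox.paypal.com/v1/payments/payment"
		"?count=20&start_time=2014-09-26T00:00:00Z",
		"", "", accessToken);
	std::cout << "Payments: " << payments << "\n";

	curl_global_cleanup();
	return 0;
}

Something like this could run from a daily cron job, with the JSON responses parsed by a library such as jsoncpp.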

Is this a memory leak?

I have a class that looks a little like:

// Class example
#include <memory>
#include <vector>

typedef struct{
	float a, b, c;
} thirdLevel;

typedef struct{
	thirdLevel thirdLevels[3];
} secondLevel;

typedef struct{
	int anInteger;
	std::vector<secondLevel> secondLevels;
} firstLevel;

class className{
	public:
		className();
	private:
		std::shared_ptr<firstLevel> structChain;
};

className::className(){
	structChain.reset(new firstLevel);
}

And I use it like this:

std::list<className> instanceList;		//Create a container for the instances
instanceList.push_back(className());		//Create an instance

According to top, the program starts out using about 1.1% of my memory. If I create a batch of, say, 30 className instances, it goes up to 1.3%. If I then delete them with several calls to "instanceList.pop_back()", the memory usage never goes back down. However, when I repeatedly create an instance and then delete it before creating another (so that there are never more than two instances at one time), the memory usage doesn't seem to go above 1.1%.

I assume that this means that no memory leak is occurring and that the memory is just kept for further use by the program rather than being returned to the OS. Is that correct? If that's indeed the case, is there something that can be done to free it for the OS? SphericalShape (talk) 16:00, 27 September 2014 (UTC)[reply]

That's correct, and there may be nothing you can do about it. The allocator probably won't bother to release such small amounts of RAM back to the system no matter what you do. If you allocate thousands of objects, you probably will see a decrease in RAM usage when you free them. -- BenRG (talk) 17:23, 27 September 2014 (UTC)[reply]
Unless I'm quite mistaken, or things have evolved a lot in recent years, UNIX processes obtain heap space via brk() and sbrk(). This is then partitioned and handed to the user code via malloc() (or new in sissy code). In principle, the malloc library could return this space, but only if no single part of it is still used. In practice, this is so rare that it almost never happens, and many (most?) malloc libraries simply never bother to return that memory. The use case would be a process that uses a lot of memory at one time, and then much less later - not impossible, but unlikely. Most long-lived processes will either find a steady state or keep using more memory. And reserved space that is not actually used will mostly get swapped out anyway, so there is no performance hit. For very particular memory usage patterns, the user process can always use mmap() and handle it manually. --Stephan Schulz (talk) 06:44, 28 September 2014 (UTC)[reply]
This page suggests that glibc uses both sbrk and mmap and can release memory to the system through both, though obviously only if special conditions are met. This particular program should meet those conditions since it frees everything. -- BenRG (talk) 07:54, 28 September 2014 (UTC)[reply]
That is a good reference, and it got me to the mallopt man page. In its default configuration, glibc malloc() will use mmap() only for individual blocks greater than 128 KiB. I don't think the OP's program meets this condition. Similarly, glibc will return memory from the top of the heap only if more than M_TRIM_THRESHOLD bytes are free there - this threshold also defaults to 128 KiB. It might be interesting to add #include <malloc.h> and call mallopt(M_MMAP_THRESHOLD, 4); mallopt(M_TRIM_THRESHOLD, 4); in the program to see if that changes the behaviour. --Stephan Schulz (talk) 08:37, 28 September 2014 (UTC)[reply]
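
For anyone wanting to try that, here is a minimal, self-contained sketch of the experiment (assuming glibc; the struct is just a simplified stand-in for the firstLevel chain above, and malloc_trim() is an additional, explicit way of handing free heap space back to the kernel):

#include <malloc.h>	// mallopt(), malloc_trim() -- glibc-specific
#include <cstdio>	// getchar()
#include <list>
#include <memory>
#include <vector>

// Simplified stand-in for the firstLevel chain in the question above.
struct firstLevel {
	int anInteger;
	std::vector<int> secondLevels;
};

int main(){
	// Ask glibc to satisfy even tiny allocations via mmap() and to trim
	// the top of the heap aggressively (both thresholds are in bytes).
	mallopt(M_MMAP_THRESHOLD, 4);
	mallopt(M_TRIM_THRESHOLD, 4);

	{
		std::list<std::shared_ptr<firstLevel>> instances;
		for (int i = 0; i < 30; ++i)
			instances.push_back(std::make_shared<firstLevel>());
	}	// everything allocated above is freed here

	// Optionally hand any remaining free heap space back to the kernel.
	malloc_trim(0);

	getchar();	// pause so the resident size can be checked with top/ps
	return 0;
}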
Thanks for the information! I just began learning C++ and wanted to be sure I wasn't creating leaky code from the start. It's not a problem for me if it doesn't release the memory. I tried setting the malloc options to a few multiples of 4 and 8 just for fun -- they all seemed to release the memory, but when I was using values below 96, the reported memory usage was higher than it normally is (6.8% with the options at 4, 2.0% at 64, 1.1% at 96). SphericalShape (talk) 10:52, 28 September 2014 (UTC)[reply]
Memory allocated with mmap() has to be aligned on page boundaries. So on a normal Linux system (and many other systems) every malloc() that is satisfied with mmap() will consume a multiple of 4 KiB. I suspect that your biggest data type is (with memory-management overhead) 90-ish bytes, so when you go below the 96-byte threshold, you suddenly use 4096 bytes where 96 would suffice. And as you lower the threshold, this affects more and more of your data types. --Stephan Schulz (talk) 11:31, 28 September 2014 (UTC)[reply]
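
One way to watch that effect directly (a glibc-specific sketch; mallinfo()'s hblkhd field counts the bytes currently held in mmap()ed blocks):

#include <malloc.h>	// mallopt(), mallinfo() -- glibc-specific
#include <cstdio>
#include <cstdlib>

int main(){
	mallopt(M_MMAP_THRESHOLD, 4);	// force even tiny requests through mmap()

	struct mallinfo before = mallinfo();
	void *p = malloc(96);		// ask for 96 bytes...
	struct mallinfo after = mallinfo();

	// ...but mmap() hands out whole pages, so the mmap()ed footprint
	// typically grows by a full page (4096 bytes on most Linux systems).
	printf("mmap()ed bytes grew by %d for a 96-byte request\n",
		after.hblkhd - before.hblkhd);

	free(p);
	return 0;
}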

MMIX

I am confused by the section of MMIX relating to the local register stack. It seems to be suggesting that a context switch saves registers on "the stack". I thought the registers were the stack. Using R0 through R255 as names for the physical registers in the CPU, initially I thought $0 was R0, and so on up to (say) $5 being R5. Then after a context switch I thought rO was used so that $0 is now R6, and if you refer to $5 you get R11. Am I wrong? The paragraph isn't clear. -- SGBailey (talk) 21:47, 27 September 2014 (UTC)[reply]

According to http://mmix.cs.hm.edu/doc/mmix-doc.pdf, §42, there are 256, 512, or 1024 registers in the internal register stack, so you are basically correct except that the registers may go up to R1023 in your notation. I improved the wording in the article. (Incidentally, "context switch" typically means a switch between threads or coroutines, not a subroutine call or return.) -- BenRG (talk) 23:49, 27 September 2014 (UTC)[reply]
The document you link to is not the same as the published Fascicle, which I presume to be canon. The book explicitly says there are 2^8 general registers, covering both the local and the global ones. I find §1.4.1' on subroutines confusingly written; I think DK talks about the complex stuff before he details the basic stuff. Anyway, my impression is that (somehow) the subroutine call does a context switch creating a new $0, and I think that initially at least the new set of local registers has only one register allocated, regardless of how many the caller had. I'm changing your 1024 in MMIX to 256. -- SGBailey (talk) 21:12, 28 September 2014 (UTC)[reply]
Is this the Fascicle you're talking about? It discusses an internal register ring (which is what I should have called it, not a stack) on page 79. The size of the ring is called ρ. It doesn't say that ρ = 256, only that ρ is a power of 2 and ρ ≥ 256. The registers making up the ring are called l[0], ..., l[ρ-1]. If $0 through $5 are initially mapped to l[0] through l[5] and you execute a push(5), $0 through $5 will then point to l[6] through l[11]. The global registers are not stored in l. This is basically what the other source said except that this one doesn't require ρ ≤ 1024.
It's not clear to me whether an MMIX implementation is required to have this register ring at all, or whether it could just store all of the local registers in RAM and treat $n (n < rL) as a memory reference M8[rO+8n]. I don't think Knuth could have intended this Fascicle to be the primary specification of MMIX, as it's far too vague about such things. In any case, you can imagine for most purposes that the local registers are stored in RAM, except that they can be aggressively cached and that cache needn't be consistent with the cache used by ordinary load and store instructions. -- BenRG (talk) 19:09, 29 September 2014 (UTC)[reply]
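To make the indexing concrete, here is a toy C++ model of that ring. It illustrates only the arithmetic: the real PUSHJ/POP also record how many registers were hidden and spill to memory via rS when the ring wraps, and ρ is fixed at 256 purely for the example.

#include <array>
#include <cstdint>
#include <cstdio>

// Toy model of the local-register ring: $i is just l[(alpha + i) mod rho].
constexpr unsigned RHO = 256;			// ring size: a power of two, >= 256

struct RegisterRing {
	std::array<std::uint64_t, RHO> l{};	// the physical ring l[0..rho-1]
	unsigned alpha = 0;			// where the current frame's $0 lives

	std::uint64_t &local(unsigned i){	// the current frame's $i
		return l[(alpha + i) % RHO];
	}
	void push(unsigned x){			// old $0..$x hidden; new $0 = old $(x+1)
		alpha = (alpha + x + 1) % RHO;
	}
	void pop(unsigned x){			// undo a push(x)
		alpha = (alpha + RHO - (x + 1)) % RHO;
	}
};

int main(){
	RegisterRing r;
	for (unsigned i = 0; i <= 5; ++i)
		r.local(i) = 100 + i;		// $0..$5 live in l[0]..l[5]

	r.push(5);				// as in the push(5) example above
	std::printf("new $0 sits in l[%u]\n", r.alpha);		// prints 6
	r.local(0) = 42;			// writing the new $0 leaves the caller's values alone

	r.pop(5);				// return to the caller's frame
	std::printf("old $3 is still %llu\n",
		(unsigned long long)r.local(3));			// prints 103
	return 0;
}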