Wikipedia:Reference desk/Archives/Computing/2014 September 10



September 10

Editing a TTF file with Python

Hi there,
I would like to know which module I need to edit TTF files, and how to install it on Windows 7.
Thank you. — Preceding unsigned comment added by Exx8 (talk · contribs) 19:10, 10 September 2014 (UTC)

FontForge can be run as a library within Python. I have no idea if it works on Windows, but I don't see why not. --Mark viking (talk) 19:26, 10 September 2014 (UTC)
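For illustration, a minimal sketch of what such a script might look like, assuming the fontforge module can be imported by your Python interpreter (the file and font names here are hypothetical):

    import fontforge

    # Open an existing TrueType font (hypothetical file name)
    font = fontforge.open("myfont.ttf")

    # Example edit: change the font's internal name
    font.fontname = "MyFontEdited"

    # Write the modified font out as a new TTF
    font.generate("myfont-edited.ttf")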
Well, actually I have FontForge and Python running on my computer, but how should I install the module? Is it in the installed software folder, or should I download it from somewhere? Exx8 (talk)
It is probably best to check out the fontforge documentation on this, e.g., [1] and [2]. --Mark viking (talk) 23:39, 10 September 2014 (UTC)
I read those pages: are there precompiled binaries that suit Windows? I mean, I use Windows 7, not Linux. Exx8 (talk) 00:10, 11 September 2014 (UTC)

many small files

Hello. I run a 32-bit virtual Linux box with a 7.6G disk. The other day I got "no space left on device" errors when trying to create a small text file. du -h showed that the disk was only 55% full. Deleting a half-gigabyte file did not resolve the issue. On the disk were maybe half a million files, each about 100 bytes long. But when I deleted a few thousand of these 100-byte files, I could again create files with no problem. I thought that maybe I was running out of inodes, but that doesn't seem realistic. Or maybe each 100-byte file actually occupied more space on my disk. Can anyone suggest what was going on? Thanks, Robinh (talk) 21:27, 10 September 2014 (UTC)

Naively, a filesystem allocates a whole number of blocks for each file, so even a one-byte file occupies a whole block. Different filesystems use different block sizes (or can be configured to use different sizes), but 4 Kbytes is the most common (e.g. tmpfs and the extfs family). Do a du -h * and you'll likely see something like:
   4.0K    foo
   4.0K    bar
even though foo and bar are a few bytes each. As you've noticed, if you have lots of small files this makes for very inefficient use of space - and it might be slow too (depending on the access pattern), as reading each file means reading a distinct block (again, usually 4K) from the disk. So some filesystems have special small-file support, where small files don't get allocated a whole block of their own - they're instead stored together, or appended to the inode that describes them. I think (too late tonight for experiments for me) that filesystems which do something like this include btrfs, zfs, xfs, reiserfs, and ntfs. A side note: it seems the way ecryptfs is set up on many systems (e.g. on Ubuntu), by default it uses 12K blocks. ext4 has a feature called inline-data, which allows small files up to 60 bytes to be stored after the inode in the same disk block (but my very brief late-night experiment just now suggests the one ext4 volume I have doesn't do this, and I can't find a tune2fs/mkfs.ext4 option to enable it). -- Finlay McWalterTalk 22:43, 10 September 2014 (UTC)
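One way to see this from Python is to compare a file's logical size with the space actually allocated for it (a sketch, assuming a Unix-like system, where os.stat reports st_blocks in 512-byte units; the file name is hypothetical):

    import os

    # Create a 100-byte file, then compare its logical size with
    # the space the filesystem actually allocated for it.
    with open("tiny.txt", "w") as f:
        f.write("x" * 100)

    st = os.stat("tiny.txt")
    print("logical size:  ", st.st_size, "bytes")          # 100
    print("allocated size:", st.st_blocks * 512, "bytes")  # typically 4096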
If someone wants to experiment with this while I'm asleep, I wonder if a mkfs.ext4 -T small might implicitly enable inline-data? -- Finlay McWalterTalk 22:46, 10 September 2014 (UTC)
Hmm, but du should report accurate usage, and you said you were at 55%. So perhaps the block size thing is a red herring, and instead your /tmp (assuming it's a tmpfs or is otherwise size-limited) was full, and the program you were using to create the file was erroring because it couldn't make a journal file there. -- Finlay McWalterTalk 22:49, 10 September 2014 (UTC)
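If you want to rule out a full /tmp, a quick check is available from Python 3.3+ (note that shutil.disk_usage reports bytes only, not inodes):

    import shutil

    # Total/used/free bytes for the filesystem holding /tmp
    usage = shutil.disk_usage("/tmp")
    print("total:", usage.total, "used:", usage.used, "free:", usage.free)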

(OP) Thanks Finlay. I made a mistake above: when I said "du -h reports 55% disk usage", that should have been "df -h reports 55% disk usage". But if I'm interpreting correctly, a 100-byte file actually occupies 4 kilobytes of disk space, and so most of the 4K is wasted. Robinh (talk) 23:16, 10 September 2014 (UTC)

df reports the wasted space (slack space) as used. Given that df says the disk isn't full, and deleting a few thousand small files (occupying at most a few tens of megabytes) solved the problem while deleting a half-gigabyte file didn't, I think you ran out of inodes. -- BenRG (talk) 00:07, 11 September 2014 (UTC)
So use df -i to check for that. --65.94.51.64 (talk) 04:09, 11 September 2014 (UTC)
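For anyone who wants to script that check, the same numbers are available from Python via os.statvfs (a sketch; f_files is the filesystem's total inode count and f_ffree the number still free):

    import os

    # Rough equivalent of "df -i" for a single mount point
    vfs = os.statvfs("/")
    total = vfs.f_files            # total inodes on the filesystem
    free = vfs.f_ffree             # inodes still free
    used = total - free
    print("inodes used: %d of %d (%.0f%%)" % (used, total, 100.0 * used / total))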

(OP) I didn't know about df -i. I had run out of inodes. Thanks, everyone! Robinh (talk) 09:21, 11 September 2014 (UTC)

Resolved