Wikipedia:Reference desk/Archives/Computing/2017 November 5

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 5


Text on a computer


Can someone give me a rough idea of how computers are programmed to display text at the lowest level? —Polyknot (talk) 03:12, 5 November 2017 (UTC)[reply]

Our article on text mode has some details. WegianWarrior (talk) 10:32, 5 November 2017 (UTC)[reply]
Even simpler than that is the Hitachi HD44780 LCD controller, which is very widely used, for devices from coffee machines to Arduino and upwards. Andy Dingley (talk) 12:33, 5 November 2017 (UTC)[reply]
Video card and Graphics Device Interface might also be useful. "The lowest level" is not a very well-defined term these days, as there's a great deal of "stuff" going on between the API call that the programmer uses and the actual appearance of text (or other content) on the monitor (or other display device). Tevildo (talk) 12:37, 5 November 2017 (UTC)[reply]
  • That would depend somewhat on what the OP means by "lowest level". I remember old single-board computers and PC text mode, which is what text mode covers. This is "the lowest level" as the simplest sort of display hardware.
Looking at today's computers though (even phones) there are bitmapped graphic displays everywhere and so the question could also be read as "How does my current device show text?", which would be more about "video card" and topics like bitmap fonts vs. TrueType. A far higher level of technology, yet the "low level" within a contemporary device. Andy Dingley (talk) 12:59, 5 November 2017 (UTC)[reply]
You might like this Steam Driven Poetry Machine where he explains how it all works too, :) Dmcq (talk) 14:13, 5 November 2017 (UTC)[reply]

I mean, does someone have to design a font for each character, pixel by pixel, and then turn that into code? Hope that clarifies what I'm asking. —Polyknot (talk) 16:13, 5 November 2017 (UTC)[reply]

No, they're not even described by pixels nowadays, but by curves and hints like "this line has to be the same width as that line". See [1] for a description of designing a typeface. Dmcq (talk) 16:48, 5 November 2017 (UTC)[reply]
What about on the command line, in DOS etc.? Was it done pixel by pixel? —Polyknot (talk) 18:37, 5 November 2017 (UTC)[reply]
Yes. In the old days, the characters would be stored in ROM (or generated directly by the hardware logic) as raster fonts. Text mode (already linked) is the relevant article for this application. Tevildo (talk) 19:46, 5 November 2017 (UTC)[reply]
So let’s say, for argument's sake, that I was building a new OS in C. I can’t just use printf; somewhere I have to define how the characters are to be drawn, or access them from ROM as mentioned? — Preceding unsigned comment added by Polyknot (talkcontribs) 21:27, 5 November 2017 (UTC)[reply]
Yes, you need to know how to tell the video card to display text where you want it. For a VGA-compatible card in text mode, you just need to call INT 10h and the hardware does the rest. For more modern cards and for graphic modes, you'll need to write a suitable driver to control it. Tevildo (talk) 22:19, 5 November 2017 (UTC)[reply]
  • If you did this, say 30 years ago on a PC clone, you would still be able to use the INT 10H BIOS services. A section of memory would be allocated as the screen text buffer. Writing ASCII codes into the bytes of this array would display text mode characters on the screen - done for you by the video card, without needing the processor to do much more, or anything to be coded in your app or OS.
There was no need to define the character glyphs for this, as they'd already been loaded by the video board makers. With their limited (typically 8×8) dot resolution there was little scope for differences between them, so there might be the odd difference, but not much. ASCII fits into 7 bits, so there was also room for another 128 characters of a high-range character set above this, including the IBM PC's novel and distinctive box-drawing characters.
About this time, I was trying to write an analogue data capture system needing a real-time multi-channel colour display. I used 24 channels, each with three vertical bar graphs (a graph and a set of upper/lower marks). As the PCs of the day weren't quick enough to do this in a bitmapped graphic mode (which would have needed the processor to draw each new dot), I did it in text mode. Redefining 24 of the high-range characters as bar-graph characters (a bar that was 1⁄8, 2⁄8, … 7⁄8, 8⁄8 full, and two other sets for the tick marks) meant that I could now draw pixel-level graphics (at least graphics of bar graphs) very fast, at text-mode speeds, rather than at slow bitmapped-graphic speeds. Andy Dingley (talk) 00:23, 6 November 2017 (UTC)[reply]
Describing the fonts as an array of dots was the least of the problems on old computers. Just reading a keyboard was far more bother: coping with the order of the letters, the various modes, key bounce, setting LEDs, and multi-key characters. Dmcq (talk) 13:43, 6 November 2017 (UTC)[reply]
Besides the already mentioned text mode, it might be an idea to study the Color Graphics Adapter (CGA). Computers with only 1 kB of RAM, like the ZX81, had their ZX81 character set predefined in ROM; the ROM contains the whole firmware, and the bitmap font definition of the charset is just a part of it. The display section of RAM could only store character numbers. Generating the TV picture meant reading the character numbers from the display section of RAM, pointing to the ROM address where that character's bitmap was defined, and shifting the 8 bits out into the video signal. As only 128 characters were defined, the most significant bit simply told the hardware to invert the font data, producing inverted characters. The CPC464 came with 64 kB of RAM and an ASCII-aligned character set, not compatible with the ZX81's. The video section of RAM was located in the last quarter (= 16 kB) of the RAM. Depending on the graphics mode, the character data had to be converted for the RAMDAC, which only pointed into the colour palette; 2 to 16 colours were displayable at once. Mode 2: one background colour (= 0), one printing colour (= 1), 80 characters per line. Mode 1: 4 colours, 40 characters per line. Mode 0: 16 colours, 20 characters per line. Each mode had 25 text lines. This is similar to the PC's CGA adapter. The RAM stored exactly the bitmap to be displayed on the screen: printing a character meant the CPC copied the character's bitmap font into video memory. Like the C64, all these home computers had only 8×8-pixel bitmap fonts. The CPC464 could not read text back directly from the screen, because it had been printed as bitmap graphics; recognising a character from the screen was the reason for that awkward keyboard key labelled "COPY" next to the cursor keys. The CPC464 and its successors offered the ability to customise the font. To copy the fixed font into RAM, the command SYMBOL AFTER n was used; 0…255 was valid for n, and characters above n were copied to, and thereafter taken from, RAM.
The command SYMBOL char_number,1st-Bitmap-Byte,2nd-Bitmap-Byte,3rd-Bitmap-Byte,4th-Bitmap-Byte,5th-Bitmap-Byte,6th-Bitmap-Byte,7th-Bitmap-Byte,8th-Bitmap-Byte customised a character's font. Note that the rightmost bit and the last byte are the space to the neighbouring character. This is what the article text mode is talking about. --Hans Haase (有问题吗) 20:07, 6 November 2017 (UTC)[reply]
See p.181, 274, 299 --Hans Haase (有问题吗) 23:48, 6 November 2017 (UTC)[reply]

Cryptographic key vs. encryption key


Hi. I was wondering if there is a difference between a cryptographic key and an encryption key. A question, put forth in the chatroom of Stack Overflow and later, in Super User, describes this situation:

An editor working on a computing article replaces all instances of "encryption key" with "cryptographic key", with the rationale that a cryptographic key can also be used for decryption. After replacing many, he/she runs into the phrase "the first unique encryption key", whereupon he/she reverses all changes of this type. Ouch!

But why?

As far as I can tell, they are synonyms, and although the metonymy should have discouraged him/her from making the change in the first place, why suddenly go through the tedious task of reversing it? Must "encryption keys" be unique and "cryptographic keys" non-unique?

5.219.19.143 (talk) 14:34, 5 November 2017 (UTC)[reply]

In symmetric-key cryptography, a key is a key. One key does all functions.
In asymmetric or public-key cryptography, the functions of encryption and decryption are separated into a "key pair" of two related keys (which are both still "cryptographic keys"). The encryption and decryption keys are now distinct: you can only use each of them for that one part of the operation, so turning a message from plaintext to ciphertext and back again to plaintext will need both. Because the encryption key cannot decrypt, it's possible to publicise it (so that anyone can send you an encrypted message, which only you can read).
In some (but not all) systems, a key pair can also be used in reverse: i.e. a decryption key for messages from A→B could be used as an encryption key for messages from B→A, and A's original encryption key would be able to decrypt them. Andy Dingley (talk) 14:47, 5 November 2017 (UTC)[reply]
I hear you loud and clear. But I don't see how it makes any difference in this case. If what you say was the factor, the editor would have reversed the decision upon reaching "decryption key", not "first unique encryption key". Or at least would have tried the opposite, i.e. changing "cryptographic key" into "encryption key". 5.219.19.143 (talk) 17:04, 5 November 2017 (UTC)[reply]
It is done purely for decency's sake. It is a joke intended to say editors have no idea what the hell they are editing. FleetCommand (Speak your mind!) 13:09, 8 November 2017 (UTC)[reply]
"Cryptographic key" is less specific than "encryption key". The former may refer to a key used in authentication (e.g. a MAC key, a signing key, a signature verification key). "Encryption key" often refers to a key in a symmetric-key cipher, whether the particular operation concerned is encryption or decryption. Sometimes "cipher key" is used with the same meaning, but "encryption key" is common and well-accepted usage. "Encryption key" is also used in the context of public key cryptography to refer to the public key (in a key pair) used in encryption. --98.115.54.114 (talk) 01:58, 9 November 2017 (UTC)[reply]

Font creation (re-creating an existing font)


I'm looking for a font creation tool where I could insert letters from scans of a 70s book series to electronically re-create that font for non-commercial, electronic use at home. Basically, I wanna scan the book covers, cut out and crop to the individual letters with an image editor (Paint Shop Pro, in my case), then insert the individual scanned and cropped letters into a fontmaker where I can easily manipulate kerning between letters, and finally export a font file or format that I can use to type with both in OpenOffice and especially Corel Paint Shop Pro X2.

Eventually, the font would have to consist of three "styles" in the end: One for the "white" version of the title font (that could be the final electronic font's "regular" version), one for the "dark" version of the title font (which could be on "bold"), and a light one for the writer's name placed over the book's title (which could be on "italic").

Are there any free and easy-to-use tools you could recommend? Note: I'm not on a smartphone, I need an online or desktop tool for Windows 10. --79.242.202.112 (talk) 17:31, 5 November 2017 (UTC)[reply]

I think you can use one of these tools. Ruslik_Zero 18:50, 5 November 2017 (UTC)[reply]
Bear in mind that typefaces ('font' refers to one of the various sizes, weights etc. of a typeface) used for printing were/are designed and created by individual type designers and may well still be the intellectual property of their designer or of an organisation that has acquired those rights, so copying and manipulating them might lead to legal problems, whether or not you intend any commercial exploitation (I Am Not A Lawyer).
Similarly, distinctive lettering designed for a particular set of book covers (and, for example, LP covers) will usually have been designed by an artist for that purpose, and that artist or publisher (or their heirs) will likely still own the rights to it. {The poster formerly known as 87.81.230.195} 90.200.138.27 (talk) 00:20, 6 November 2017 (UTC)[reply]
I'm pretty certain that the legal situation when I'm putting together a font this way to home-print it on a poster and hang it on my wall at home is the same as just scanning a cover in the first place. It's when I start publicly distributing a TTF or OTF font file (such as on the web) that things get iffy. --79.242.202.112 (talk) 02:02, 6 November 2017 (UTC)[reply]
Well, your certainty is not proof of what's legal, and we aren't allowed to advise here on what is. However, you may find Intellectual property protection of typefaces interesting. --69.159.60.147 (talk) 07:47, 6 November 2017 (UTC)[reply]
Thank you for that link. Via the interwiki links there, I've found that my country didn't even have a copyright law for typefaces up until 1973. The 1973 law, which is the only one that applies to typefaces that exist only in print, states that a typeface is protected for only 10 years from its creation/first publication, and at the end of those ten years, you can pay to have it protected for another 15 years at most. Since the typeface has been in use since at least 1970, it's definitely been fully legal to do this with this font since 1995, and even to publicly share the resulting electronic font online. It would be different if I were copying an existing electronic font, as my country classifies those as "computer programs", which have far heavier copyright protection. --2003:71:4E07:BB22:C4C6:C0DB:6D12:9E8 (talk) 13:55, 7 November 2017 (UTC)[reply]
'Oh! let us never, never doubt; What nobody is sure about!' - Hilaire Belloc on the Microbe. Dmcq (talk) 11:49, 7 November 2017 (UTC)[reply]
You don't create fonts by scanning them. They aren't little photos. They are a set of vectors. You will have to scan the photo, then trace it in a vector editor, and then save those vectors as a single letter in your new font set. Further - it isn't really that easy. What you actually do is create glyphs and combine those to create letters. Then, you have to add hinting to ensure that important parts of the letters don't vanish when the font size is reduced. Yes - in the very old days, fonts were raster graphics. They aren't anymore. 209.149.113.5 (talk) 13:43, 7 November 2017 (UTC)[reply]
I know that fonts are not rasters but vectors. I've had a few looks at Glyphr Studio by now. Looks like I'll scan and crop the letters with a transparency channel in PSP, output them as PNG, use a batch converter to convert them to SVG, and then load every single SVG glyph into Glyphr Studio. --2003:71:4E07:BB22:C4C6:C0DB:6D12:9E8 (talk) 13:55, 7 November 2017 (UTC)[reply]