Wikipedia:Reference desk/Archives/Computing/2011 March 16

Computing desk
< March 15 << Feb | March | Apr >> March 17 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 16

question about LCD screen backlighting

I fixed a friend's laptop the other day - it needed the display inverter and backlight replaced, not too difficult a task. It got me wondering, though, why LCD screens don't have differences in illumination between the top and bottom. I mean, the CCFL backlight is a long tube that runs along the bottom of the display, but the top of the display is not noticeably darker than the bottom, and there's nothing obvious in the construction of the display that would create an even distribution of light. What am I missing? --Ludwigs2 01:00, 16 March 2011 (UTC)[reply]

The back of the "display sandwich" is a reflector, then there's the backlight, then one or several layers of diffuser film which distribute the light around so it doesn't show the pattern of the backlight elements. This blurb talks about that diffuser (in rather general terms) and has a schematic diagram of the whole sandwich. -- Finlay McWalterTalk 01:11, 16 March 2011 (UTC)[reply]
That Makrolon stuff the Bayer thing talks about is basically polycarbonate; it seems it's competing against poly(methyl methacrylate) sheets to do the same job. It seems both PC and PMMA are also used as optical diffusers for lighting fixtures. Bayer sells sheets as thin as 1.5 mm, but I've not found specifically how thick the sheets used as a screen diffuser are. -- Finlay McWalterTalk 01:22, 16 March 2011 (UTC)[reply]
Backlight#Backlight diffusers talks a bit about the pattern of bumps in the diffuser, but it isn't very detailed and isn't well sourced. -- Finlay McWalterTalk 01:39, 16 March 2011 (UTC)[reply]
Well, the laptop LCDs are a little different - just one light at the bottom rather than a series of lights across the back - but that may have something to do with the way the lamp is recessed into the frame (giving a relatively narrow band of light pointed almost straight up - the diffusers even out what small top/bottom differences are left). Interesting reads, though. Thanks. --Ludwigs2 04:56, 16 March 2011 (UTC)[reply]

Input-dependent flickering

I wrote this trivial AWT program to demonstrate flickering and tearing:

public class Flicker extends java.awt.Frame {
  public void paint(java.awt.Graphics g) {
    // Draw 900 overlapping 10x10 ovals across the frame; enough work to make the flicker visible.
    for (int i = 0; i < 900; ++i) g.drawOval(i/3, i%30*10, 10, 10);
  }
  public static void main(String[] args) {
    final Flicker f = new Flicker();
    f.setSize(300, 300);
    f.setVisible(true);
    // Ask for a repaint every 20 ms, forever.
    while (true) {
      try { Thread.sleep(20); } catch (InterruptedException e) {}
      f.repaint();
    }
  }
}

I'm running it with OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1~10.04.1) (on X11, of course). The flicker is obvious and not mysterious, and I'm not asking how to get rid of it. What I'd like to understand is this phenomenon: any stream of input (waving the mouse over the frame or, when it has focus, holding a key down) greatly reduces the flicker. What is the mechanism of this effect? --Tardis (talk) 01:37, 16 March 2011 (UTC)[reply]

The flicker is because you are not using double buffering. In Java Swing, this is implemented by default (see this Swing vs. AWT Painting comparison and consider upgrading your Frame object to a Swing JFrame). To implement double-buffering manually in AWT, you paint into an off-screen image and then copy the finished image to the on-screen Graphics in a single call. Here's an example set of applets, with code: "DoubleBuffering" at RealApplets.com. Nimur (talk) 01:53, 16 March 2011 (UTC)[reply]
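For reference, here is a minimal sketch (not from the thread) of the manual approach Nimur describes, reusing the drawing loop from the Flicker program above; the class name BufferedFlicker and the buffer field are illustrative, not anything from the original posts. Overriding update() skips AWT's default clear-to-background, and paint() renders everything into an off-screen image before copying it to the screen in one call.

public class BufferedFlicker extends java.awt.Frame {
  private java.awt.Image buffer;  // off-screen image, created lazily at the frame's size

  public void update(java.awt.Graphics g) {
    paint(g);  // skip the default background clear, which causes much of the flash
  }

  public void paint(java.awt.Graphics g) {
    if (buffer == null) buffer = createImage(getWidth(), getHeight());
    java.awt.Graphics bg = buffer.getGraphics();
    bg.setColor(java.awt.Color.WHITE);   // wipe the previous frame off-screen
    bg.fillRect(0, 0, getWidth(), getHeight());
    bg.setColor(java.awt.Color.BLACK);
    for (int i = 0; i < 900; ++i) bg.drawOval(i/3, i%30*10, 10, 10);
    bg.dispose();
    g.drawImage(buffer, 0, 0, null);     // copy the finished frame in a single blit
  }

  public static void main(String[] args) {
    final BufferedFlicker f = new BufferedFlicker();
    f.setSize(300, 300);
    f.setVisible(true);
    while (true) {
      try { Thread.sleep(20); } catch (InterruptedException e) {}
      f.repaint();
    }
  }
}
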
That call to repaint() (which is java.awt.Component.repaint()) adds a repaint-requested event to the Window's event queue, to be serviced later in the corresponding thread. That queue is also the recipient of mouse and keyboard events; moving the mouse in particular adds these at quite a rate. The real "cause" is, I think, the event coalescence mechanism. It wraps the dequeue from that event queue, and if it sees more than one adjacent repaint-request, it elides them together into a single event (as there's no point in asking the system to repaint the same object twice with no gap between). So consider the two scenarios:
  1. (no mouse events). Every 20ms a repaint-request is added to the queue (which is empty). The event handler thread is called, dequeues the only event, and calls your paint method. So your paint method should be called, pretty much like clockwork, every 20ms
  2. (mouse events too). The same repaint-request is added every 20ms. But (particularly when your rather involved paint() method is running) a bunch of mouse events are also added to the queue. If they can't all be serviced in time, a slight backlog of repaint-request events builds up. When the coalescer dequeues these, it elides them together.
So my thinking is that in the latter case there are fewer calls to paint and so less flicker. If you want to test this, put in a (synchronised) counter that counts each call to paint(), and occasionally (in your main while loop) retrieve that, zero it, and print it. If the above logic is correct, you should see fewer paint events being processed when you're injecting the mouse events. It might also be informative to measure how long your paint() handler's call takes (with System.nanoTime()). I'm off to bed now, but let me know how you get on, if you choose to do this. -- Finlay McWalterTalk 02:04, 16 March 2011 (UTC)[reply]
I think you may be onto something with the thread timing, but unfortunately I had already ruled out the number of paints changing. (I should have mentioned that, but that was days ago, so it didn't occur to me.) I used this in paint():
if(last==0) last=m();
if(++paints%100==0) {
  final long now=m();
  System.out.println("100 paints in "+(now-last)+"ms");
  last=now;
}
with the obvious definitions for last, paints, and m(). I get numbers around 2025-2030 whether or not I'm interacting with the window. I added code to measure the average duration of the paint, and got about 540 μs with no interaction and 460 μs with it (but the variability is pretty large). I don't know that that's a big enough change to have the effect I see on the flicker, nor do I know why the painting would go faster with input. --Tardis (talk) 03:55, 16 March 2011 (UTC)[reply]
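(A minimal sketch of how that per-call duration could be taken with System.nanoTime(), as Finlay suggested; this is not the code Tardis actually ran, and the field names are made up. Both updates happen on the AWT event-dispatch thread, so no extra synchronisation is needed.)

// hypothetical fields on the Flicker frame
private long totalNanos, paints;

public void paint(java.awt.Graphics g) {
  final long start = System.nanoTime();
  for (int i = 0; i < 900; ++i) g.drawOval(i/3, i%30*10, 10, 10);
  totalNanos += System.nanoTime() - start;  // time actually spent drawing
  if (++paints % 100 == 0)
    System.out.println("average paint: " + (totalNanos / paints / 1000) + " us");
}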

personal cloud

How difficult is it to combine personal computers in a home network to pool internal memory, processors and cores, and disk drives so that they function together as a single computer? — Preceding unsigned comment added by DeeperQA (talkcontribs) 02:10, 16 March 2011 (UTC)[reply]

Sharing a memory-address-space across multiple machines is very difficult - even in professional/industrial/research settings, that is a hard task. But you can run a parallelism framework like OpenMPI or Hadoop to spread different types of computational tasks to multiple machines with very little effort. If you just want to aggregate your disks, a network file system like Samba or SSHFS can be set up easily. Nimur (talk) 02:25, 16 March 2011 (UTC)[reply]
I'm a bit confused about your question; do you mean that you want to virtualize the CPU and RAM resources of multiple computers into a cloud, or do you in fact want to combine multiple computers into a cache-coherent machine running a single-system image? If you want to virtualize your resources, and run the necessary instances of virtualized resources, you could achieve this at no cost with Ubuntu Enterprise Cloud. If OTOH you want to combine multiple computers into a single cache-coherent machine, the computers would generally have to have been designed for the purpose - you can't achieve this with commodity home PCs. See the Cray CX1000-S for an example of a computer designed for this purpose, and particularly the CX1000-SC which can be combined with up to three others to form a larger symmetric multi-processing computer. Rocketshiporion 04:30, 16 March 2011 (UTC)[reply]
Take a look at the memory hierarchy article, particularly the traditional pyramid diagram. Storage that's over-the-network is sadly absent from the diagram as we have it, but it occupies a spot below hard drives because it's slow. Because schlepping data over the network is slow and hard drives are cheap, local storage is always preferable. (Of course, you may still want to store information "in the cloud" in order to have it available everywhere, or to keep it in sync between various places, or just because it means that you don't have to set up the infrastructure yourself.) So virtualizing storage isn't useful for performance reasons.
CPUs, on the other hand, can be shared, but, because of network slowness, it's very hard, and often not useful at all. There's a lot of latency over the network (heck, the comparatively tiny latency inside a computer is a problem), so in order to use CPU power from different machines for a problem, the problem has to be embarrassingly parallel -- that is, it has to be possible to break it up into chunks, and have each node process each chunk independently for seconds (at a minimum) at a time. It turns out that there aren't a lot of tasks people use computers for that are worth breaking up into pieces like that. (Exceptions include scientific computing projects like protein folding -- in some sense, World Community Grid is the world's largest supercomputer.) Paul (Stansifer) 04:47, 16 March 2011 (UTC)[reply]

It is technically complicated, but it has been done: see openMOSIX. It's basically an extension of Linux that runs on multiple computers simultaneously, and does not require any changes to the Linux software that you want to run. Unfortunately openMOSIX is no longer maintained, but it should still work, and the LinuxPMI project is picking up the maintenance work. 130.188.8.9 (talk) 09:04, 18 March 2011 (UTC)[reply]

Cray CX2

Hello Everyone,

  I've recently read (somewhere on Yahoo!, I'm not sure where) that the Cray CX1 is going to be replaced by the Cray CX2. However, googling "Cray CX2" and Cray + CX2 did not turn up anything, and there's no information AFAIK on the Cray website itself. Does anyone here know anything about the Cray CX2, especially product specifications?

  Thanks to all RefDesk volunteers! Rocketshiporion 08:59, 16 March 2011 (UTC)[reply]

I unfortunately did not make it to SC'10 last November, but I read their presentation briefs and abstracts. Cray did announce some stuff at the Disruptive Technologies forum. In November they were calling it "XMT", which is on their website; this system looks suspiciously like the SGI Altix stuff. Not being a Cray programmer, I hadn't heard of the XMT before (though it might have been a 2009 release). I never heard anything about a CX2... Nimur (talk) 18:03, 16 March 2011 (UTC)[reply]
As the CX1 is a deskside cluster, I would expect the CX2 to also be a deskside machine. The XMT is a multi-cabinet supercomputer, and AFAIK it's been around since early 2010. But thanks nonetheless, Nimur. Rocketshiporion 00:44, 18 March 2011 (UTC)[reply]

Historical webpage

Can I look up a webpage as it once was anywhere? Kittybrewster 09:54, 16 March 2011 (UTC)[reply]

Archive.org's Wayback Machine. Shadowjams (talk) 10:05, 16 March 2011 (UTC)[reply]
You can only look up a webpage's previous versions if somebody saved a copy of it. Common places to check for old versions of a page: the original website (in an "archive" section); the Archive.org group's website; the caches or page-histories at Google, Bing, or other search-engines; or your own computer, if you manually or automatically saved a backup. Nimur (talk) 17:59, 16 March 2011 (UTC)[reply]

What are "Transparent Huge Pages", and should we have an article

The article "Linux 2.6.38 Boosts Performance in Internetnews.com" says: "There are many performance enhancements that went into 2.6.38, Transparent Hugepages is one of those noticeable features,".

The article goes on to say that part of the improvement comes from allocating memory in larger page chunks. What is THP, and should we have an article on it? -- Q Chris (talk) 10:24, 16 March 2011 (UTC)[reply]

This is some nitty-gritty stuff. First, let's make sure you understand what paging means in this context - it's a memory (RAM) optimization performed by the Linux kernel. (In case you're lost on terminology - it has nothing to do with web pages, nor with visual opacity/transparency, at all.) Now, HugePage is a Linux implementation for certain types of processors (especially ia64 64-bit computers) that allows page sizes as large as 16 megabytes. This is something which has apparently been around for a long time, but was sort of a kluge - it required careful programmer management to prevent certain memory-allocation and freeing errors. In this recent post to Kernel.org, transparent hugepage core, Andrea Arcangeli (a Red Hat kernel engineer) proposes a new "transparent" mode that allows Linux kernel 2.6.37 to use hugepage mode without the kluge. See the post for a point-by-point overview of what "transparency" means in this context - mostly it's about handling large memory pages with the same treatment as if they were small memory pages. It seems that this feature-set/bug-fix has been implemented in kernel 2.6.38. I really doubt we need a Wikipedia article on this topic - it's something best left to the Kernel.org documentation team, because it's so specific and technical. See the HugePage proposal. Nimur (talk) 18:12, 16 March 2011 (UTC)[reply]
(ec) See Page (computer memory)#Huge pages. 75.57.242.120 (talk) 21:04, 16 March 2011 (UTC)[reply]
Which mentions they are also supported to a greater level in newer x86-64 processors (both from AMD and Intel). That makes sense, since I wonder how many people would bother with IA-64 at this stage; Red Hat, for example, is abandoning it. Nil Einne (talk) 23:01, 16 March 2011 (UTC)[reply]

Indian Highways Map in localised language

I wish to get this map in Tamil. Is there any software to generate maps in our own language? -- Mahir78 (talk) 12:06, 16 March 2011 (UTC)[reply]

You might be better off asking at the wp:Village Pump. However, the map is a .png rendered from a .svg file. SVG files can be edited in a standard text editor; make sure that the file is saved in UTF-8 afterwards. Programs like Inkscape will convert the file back to .png; those on the Village Pump will point you in the right direction. CS Miller (talk) 14:07, 16 March 2011 (UTC)[reply]
Here's the original map in SVG format (direct-link to SVG source). It is fairly straightforward to change the names in that file; then you can export the cropped PNG file again. Nimur (talk) 17:57, 16 March 2011 (UTC)[reply]
Thanks for the information. Actually, I was looking for the SVG. I have now installed Inkscape, a very useful tool, and am learning it. I translated some parts of the map into Tamil; it's here. I wish to translate it into other languages using this tool. -- Mahir78 (talk) 07:40, 20 March 2011 (UTC)[reply]

Image splitting

Hello. I need software that can automatically cut a given image into 32x32-pixel tiles. I've been trying to put one together myself in Java, but I ran out of luck the very second image handling kicked in. But there must be some free program that can already do this. Anyone know? 88.112.51.212 (talk) 20:12, 16 March 2011 (UTC)[reply]

ImageMagick will do this: convert -crop 32x32 in.jpg out_%d.jpg -- Finlay McWalterTalk 22:59, 16 March 2011 (UTC)[reply]
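Since the question mentions getting stuck on image handling in Java, here is a minimal sketch of that route using only the standard javax.imageio and java.awt.image classes. It assumes, like the ImageMagick command above, that 32x32 means 32x32-pixel tiles; the file names are placeholders, and edge tiles smaller than 32 pixels are simply skipped.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Tiles {
  public static void main(String[] args) throws Exception {
    BufferedImage src = ImageIO.read(new File("in.png"));  // placeholder input file
    int n = 0;
    for (int y = 0; y + 32 <= src.getHeight(); y += 32) {
      for (int x = 0; x + 32 <= src.getWidth(); x += 32) {
        // getSubimage is a view onto the parent image's data, so write each tile out immediately.
        ImageIO.write(src.getSubimage(x, y, 32, 32), "png", new File("out_" + (n++) + ".png"));
      }
    }
  }
}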

Sims 2 Open for Business: how to install

"Let's just say i know someone who downloaded sims 2 open for buissness from the internet and aready had double deluxe he had a mds and a mdf file but we cant figure out how to install it so how do we get to play it"! Please no downloading is wrong crap —Preceding unsigned comment added by 174.124.236.187 (talk) 20:22, 16 March 2011 (UTC)[reply]

Here is the official Sims Help webpage. Nimur (talk) 20:42, 16 March 2011 (UTC)[reply]
Call Electronic Arts tech support and ask them for installation help. Comet Tuttle (talk) 16:27, 17 March 2011 (UTC)[reply]
You need to have Sims 2 before you install Open For Business. Do you have the original version of Sims 2 installed?[1] --Colapeninsula (talk) 10:42, 18 March 2011 (UTC)[reply]

LyX to TeX bib problem

I have been asked to use LyX instead of standard LaTeX for a paper. No problem. I wrote up the paper in LyX and it nicely used the bib file that I made. Now, I want to export it to LaTeX for my own use (and I don't have LyX on my computer). I have the bib file, all images, and the tex file. When I pdflatex the tex file, it completely ignores the bib file. When I make my own LaTeX files, this is not a problem. Is there some trick in LyX to make the exported LaTeX file automatically recognize the bib file? -- kainaw 20:47, 16 March 2011 (UTC)[reply]

I found a solution. You have to latex it, then bibtex it, then latex it twice. Then, you can produce a pdf. It works some of the time, but not all of the time. I'd prefer a proper solution. -- kainaw 21:28, 16 March 2011 (UTC)[reply]
Well, you have to do the same thing with hand-written LaTeX, so I'm not sure how much better a solution you are likely to find. Looie496 (talk) 21:32, 16 March 2011 (UTC)[reply]
Yeah, it's wacky, but that's what you're supposed to do. (I suspect that the reason that it doesn't do the re-running for you is that it should be possible to create a file where intra-document references affect where the page breaks fall, which in turn affect the typeset size of those same references, so you can re-run it forever and still keep getting "there were undefined references". That's not a terrifically good reason, though.) You say it still doesn't work; what's the failure mode? Paul (Stansifer) 22:12, 16 March 2011 (UTC)[reply]
I opened the source, deleted everything that said "lyx specific" and now it works just fine. I never noticed that I had to run latex over and over because I use pdflatex, which apparently does that internally. -- kainaw 00:35, 17 March 2011 (UTC)[reply]
I found the problem. There was a missing comma in the bib file. Fixed the error and the tex/bib file compiles without a problem. -- kainaw 12:22, 17 March 2011 (UTC)[reply]
In case you are interested, there are various bits of software that will do all the necessary latexing/bibtexing automatically. There's a Perl script called latexmk [2] to do this from the command-line, and several IDEs, like Kile and TeXnicCenter, which have lots of other handy features. 130.88.134.227 (talk) 14:58, 18 March 2011 (UTC)[reply]