Wikipedia:Reference desk/Archives/Computing/2013 November 28

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 28

Size of electronics

These days, all our electronics are getting thinner and lighter, but at what cost are manufacturers doing this? Is it performance? I don't think it's cost, as high-end professional electronic items still tend to be larger and chunkier, as well as more expensive. 82.40.46.182 (talk) 00:24, 28 November 2013 (UTC)[reply]

Actually, performance tends to go up as size goes down, since the electrons don't have as far to travel, so they get there sooner. Reliability can be a problem, though. NASA tends to use older, bulkier electronics, so a stray particle won't destroy them. Here on Earth, we have the atmosphere to protect us from that, but subtle things like temperature and humidity changes can still cause problems. StuRat (talk) 00:38, 28 November 2013 (UTC)[reply]
I suspect that batteries are the things that suffer the most. Making super-thin, lightweight batteries is problematic - and all of the advances that could be used to extend the life of the battery are instead going to making them thinner and more lightweight. SteveBaker (talk) 14:40, 28 November 2013 (UTC)[reply]
At the cost of the poor designers' hair is the main thing. Smaller is in general better, but there are all sorts of problems: the wavelength of light is far larger than some features, cosmic rays and radioactivity can cause problems, and things don't work the same if you just halve the sizes. For instance, if you halve all the dimensions of a wire, its diameter as well as its length, the resistance doubles, which means you generate double the heat in an eighth of the volume. Luckily it's not as bad as that: the voltage required can go down, one can thicken the metal, the heat spreads around more easily since the surface is two-dimensional, and there always seems to be some amazing new property of matter turning up that can be exploited to make things smaller again. Dmcq (talk) 16:14, 28 November 2013 (UTC)[reply]
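To make the wire arithmetic above concrete, here is the scaling worked out from the standard resistance formula (a sketch assuming a uniform wire carrying a fixed current):

\[
R = \frac{\rho L}{A}, \qquad
R' = \frac{\rho\,(L/2)}{A/4} = 2R, \qquad
V' = \frac{L}{2} \cdot \frac{A}{4} = \frac{V}{8},
\]

so the dissipated power \(P' = I^2 R' = 2P\) is generated in one eighth of the original volume: a sixteen-fold rise in power density.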

Connect a laptop to an HDTV

Thinking of getting a flatscreen HDTV to hook up to my laptop, mostly so I can watch streaming Netflix movies on a bigger screen. I have a Toshiba Tecra A-7 (bought 2006 or 2007). I checked the archives but couldn't find anything that would apply to my case. My laptop doesn't have an HDMI port as far as I can tell, but it does have an S-video output. Do I need some extra hardware, is there a cable that would do the trick, or is the solution, if there is one, completely different? Grateful for any helpful input. Thank you. 71.101.136.168 (talk) 02:17, 28 November 2013 (UTC)[reply]

You will need a converter: http://www.ebay.com/bhp/s-video-to-hdmi-adapter

S-video? Does your laptop even have VGA output? 140.0.229.39 (talk) 02:56, 28 November 2013 (UTC)[reply]

Mmh, yes, of course it has a VGA output too. Is there a way to connect it to a TV through that, or do I still need the converter mentioned above? 71.101.136.168 (talk) 03:07, 28 November 2013 (UTC)[reply]
Also, as a side note: with that converter I won't get HD, I guess, unlike on my laptop? Unless Netflix (Silverlight) tricks me into thinking I have HD. It's hard to see a difference on such a small screen, so I don't know for sure what I get. 71.101.136.168 (talk) 04:46, 28 November 2013 (UTC)[reply]
The best resolution from that laptop was 1280 x 800 (WXGA), so it will display "720p" at the low end of "HD". Dbfirs 13:05, 28 November 2013 (UTC)[reply]
Honestly, I'd splurge the $35 to get a "Chromecast" contraption from Google. It plugs into the HDMI port on your TV and lets you transmit any browser window to your TV - or have it play movies for you directly after you launch from the laptop. Because it connects directly to the TV, your laptop's resolution becomes irrelevant. SteveBaker (talk) 14:36, 28 November 2013 (UTC)[reply]
  • Thank you for all the answers. Very helpful. Chromecast sounds great, but I checked and XP is not supported. Bummer. I'll have to go with one of those converters. 71.101.136.168 (talk) 18:34, 28 November 2013 (UTC)[reply]
  • Update: Just ordered an HDTV with VGA and S-video input [1]. Guess that'll do the trick ;) . Thanks again for your help. 71.101.136.168 (talk) 23:57, 28 November 2013 (UTC)[reply]

Dual Monitor Setup with Different Size Monitors

I have a dual monitor setup with two different size monitors. The issue I am having is that when I use remote desktop to connect to my server on the left monitor (the larger one) and maximize, I don't have to scroll right/left and up/down. Then when I move it over to the right monitor, I have to scroll right/left and up/down to display everything on the screen. Is this because I am using two monitors of different resolutions? Are there any software fixes for the issue? — Preceding unsigned comment added by 2605:E000:5FC0:1E:15CB:3E18:656C:8F1E (talk) 03:23, 28 November 2013 (UTC)[reply]

Nice question. I too am wondering about that setup. Using several PCs with one screen, keyboard and mouse is easily done, but what about the other way around? 71.101.136.168 (talk) 03:42, 28 November 2013 (UTC)[reply]
Yes, it is because you are using two different resolutions. The only fix I can imagine is to use the same res on both. StuRat (talk) 05:50, 28 November 2013 (UTC)[reply]

Shuffle a deck of cards in Java

If I'm working with a deck of cards that is held in an array, can I use Collections.shuffle to shuffle the deck? It says that it's used on lists but I have an array. Dismas|(talk) 06:29, 28 November 2013 (UTC)[reply]

I think you might just implement a shuffling routine yourself as a method for the array. The algorithm is described in the doc you linked:
This implementation traverses the list backwards, from the last element up to the second, repeatedly swapping a randomly selected element into the "current position". Elements are randomly selected from the portion of the list that runs from the first element to the current position, inclusive.
that is, going with pos from N−1 down to 0, pick a random number r such that 0 ≤ r ≤ pos, and swap the array elements at pos and r. Done.
You might also use Arrays.asList, which gives a List-like interface to an array. :) --CiaPan (talk) 06:51, 28 November 2013 (UTC)[reply]
Dismas, you can convert your array into an ArrayList, using Arrays.asList. Now you have a Collection object that you can safely pass to the Collections.shuffle function. Nimur (talk) 10:32, 28 November 2013 (UTC)[reply]

Sorry if this is obvious but how do I get my ArrayList back into an Array then? I'm new to this if you hadn't figured that out yet. Dismas|(talk) 12:51, 28 November 2013 (UTC)[reply]

You don't. You don't need to. It is still there as an array; just use it. Assuming 'tab' is your array, do
Collections.shuffle( Arrays.asList( tab ) );
then access 'tab[]' items as before. --CiaPan (talk) 14:15, 28 November 2013 (UTC)[reply]
The asList method does NOT convert your array into a list, contrary to what Nimur says. It only adds a list-like interface to the array; the array itself is not transformed into any new data structure. --CiaPan (talk) 14:20, 28 November 2013 (UTC)[reply]
That is a very good point, CiaPan: in recent Java incarnations like Java 6 and Java 7, the asList method is an efficient (cheap-to-call) bridge into the Collections framework that still uses the original array to back the data structure. When I first began programming in Java 1.2, our code pre-dated the Java Collections framework, so my memory of the implementation details is rusty. A lot changed in Java 1.4, and it's been fairly stable since then. You should definitely verify with the documentation that corresponds to your current Java JDK. Nimur (talk) 16:11, 30 November 2013 (UTC)[reply]
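To make the asList route concrete, here is a minimal, self-contained sketch (the String[] deck is a stand-in, since the original poster's Card class isn't shown):

  import java.util.Arrays;
  import java.util.Collections;

  public class DeckDemo {
      public static void main(String[] args) {
          // Stand-in deck; in the real program this would be a Card[] array.
          String[] deck = { "AS", "KS", "QS", "JS", "TS" };

          // Arrays.asList returns a fixed-size List view backed by the array,
          // so shuffling the view reorders the array itself, in place.
          Collections.shuffle(Arrays.asList(deck));

          System.out.println(Arrays.toString(deck));  // the array is now shuffled
      }
  }

One caveat: this works only for arrays of objects. For an array of primitives such as int[], Arrays.asList would produce a one-element List&lt;int[]&gt; rather than a list of the numbers.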
Oh good grief! Just write the shuffle code for an array - it's an utterly trivial algorithm and it generates a "perfect" shuffle (my Java is a little rusty...but I think this is right):
  Random rand = new Random() ;
  for ( int i = 0 ; i < 52 ; i++ )  // For every card in the deck
  {
    int pos = rand.nextInt(52);  // Pick a place for this card to go.
    Card tmp = cards[i] ; cards[i] = cards[pos] ; cards[pos] = tmp ;  // Swap the i'th card with the one in some random place. 
  }
By the time you've figured out how to convert one data structure into another (and back again)... you could have written this trivial piece of code ten times over! Then consider the wasted CPU time to convert back and forth, and that shuffling a list is inherently more code than shuffling an array.
SteveBaker (talk) 14:31, 28 November 2013 (UTC)[reply]
That's the broken "permute-with-all" variation of the Knuth shuffle. (There is a reason to use the standard libraries!) --Tardis (talk) 15:34, 28 November 2013 (UTC)[reply]
How is it 'broken'? AndyTheGrump (talk)
There is an explanation in Knuth shuffle - but for 52 cards, the deviation from statistical perfection is utterly trivial - there is essentially no conceivable application of card shuffling where the difference matters. But yeah, it's a little bit off. However, the lack of "true" randomness in the underlying random number generator by far exceeds the error from using the above code. SteveBaker (talk) 03:15, 29 November 2013 (UTC)[reply]
Well, that would depend on what you want it for, I suppose. If I've understood the explanation in the article, the problem with this algo is that the permutation it gives you is always a cycle of all the cards, which is true for only 1 in 52 of the possible permutations. That's not what I would call a small error. To me, a small error would be, say, one that gave some permutations with probability of 1.01 times the theoretical value, and some others with 0.99 times the theoretical value. But this one gives 52x for some permutations, and 0x for others.
Now, it might be true that, for most games, you won't really notice whether the permutation is a cycle or not, because it doesn't much affect your chance of getting a royal flush or a 2NT opener. But I can imagine that, say, there might be solitaires where it would be key, where, I don't know, you can win if the permutation decomposes into cycles of odd length, and not otherwise. ---Trovatore (talk) 03:39, 29 November 2013 (UTC)[reply]
You're looking at the wrong implementation error. The relevant part is the second paragraph of the "implementation errors" section, not the first. Steve is right that the difference is not huge, and every combination is possible. MChesterMC (talk) 11:20, 29 November 2013 (UTC)[reply]
Ah, my bad. Didn't think about it all that carefully. --Trovatore (talk) 21:05, 29 November 2013 (UTC)[reply]
It does bring up an interesting question, though. Just how far off is this algo? The argument that it can't be perfect, simply because n! does not divide n^n, is easy to follow, but is there any equally clear way to say something about what the errors are? For example, is the identity permutation more likely or less likely to show up than its theoretical value? --Trovatore (talk) 01:19, 30 November 2013 (UTC)[reply]
Thanks, Steve, but I'm not as concerned with being able to write the for loop as I am in building experience with various libraries and the API. And the CPU cycles don't bother me. I'm aware of the trade offs between the two methods. Thanks again for your suggestion but it's just not the direction I wanted to go.
Thanks to CiaPan for helping to explain the implementation of the commands. Dismas|(talk) 00:34, 29 November 2013 (UTC)[reply]


OK, I have a little something to report on the question of how far off the "broken" implementation is. I wrote a little program that tries all n^n possibilities and keeps track of how many times each possible permutation gets hit.

Of course I couldn't go up to n=52. The program took more than an hour for n=8, and that was several hundred times what it took for n=7, so while there's no doubt some serious low-hanging fruit in terms of making it run faster, we're not going to get past 9 or 10 with this approach.

Still, the results are intriguing, and if they generalize (and aren't just an artifact of some bug in my code), then the implementation actually seems to be rather badly broken:

  n          2      3      4      5      6      7      8       9       10
  Mean hits  2.000  4.500  10.67  26.04  64.80  163.4  416.1  1067.6  2755.7
  Std dev    0.000  0.500  1.841  5.803  17.25  49.83  141.5  397.98  1111.3
  C.o.v.     0.000  0.111  0.173  0.223  0.266  0.305  0.340  0.373   0.403

So, subject to the above caveats, it kind of looks like the ratio of the standard deviation (note: s.d., not variance) to the mean is, if anything, going up, not down, as n gets larger. Presumably it will level off somewhere (WAG: e^−1), but this suggests that a substantial fraction of permutations may have probabilities differing by at least a factor of 1.3 from their theoretical values. That's pretty bad, at that level of analysis.

It still might be true that Steve is right that it doesn't make much practical difference for actual games; I don't know. (Or, it could be that I have a bug.) --Trovatore (talk) 04:09, 2 December 2013 (UTC)[reply]

Update: Turned out it's hugely faster to work a little harder on figuring out a code for a permutation rather than just dumping them into an unordered_map (though, to be fair, I was using a really dumb hash function, so the STL might be more competitive if I worked harder on that). With that change, I was able to get n=9 without too much pain. I've added the extra column above. (My WAG was wrong.) --Trovatore (talk) 07:24, 2 December 2013 (UTC)[reply]

New update: Added a column for n=10. That was painful — my laptop kept threatening to reach 96 Celsius and shut down. I had to nursemaid it by hitting Ctrl-Z every now and then to let the fans catch up. Any further work at higher n along these lines, at least on my computer, will have to be Monte Carlo, which complicates interpretation. But anyway, it looks pretty clear that this is a bad way of doing a random shuffle. (Again, unless I have a bug, and I concede that I don't know whether there's any actual game for which this shuffle would be a problem.) --Trovatore (talk) 09:01, 2 December 2013 (UTC)[reply]
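For anyone who wants to reproduce the census, here is a minimal Java sketch of the same idea (my own reconstruction; the program described above was C++): it decodes each of the n^n possible sequences of "random" picks, runs the broken loop with exactly those picks, and tallies the resulting permutations.

  import java.util.Arrays;
  import java.util.HashMap;
  import java.util.Map;

  public class BrokenShuffleCensus {
      public static void main(String[] args) {
          int n = 4;  // keep n small: the outer loop runs n^n times
          long total = 1;
          for (int i = 0; i < n; i++) total *= n;

          Map<String, Long> hits = new HashMap<>();
          for (long code = 0; code < total; code++) {
              int[] a = new int[n];
              for (int i = 0; i < n; i++) a[i] = i;
              // Decode 'code' as a sequence of n picks in [0, n) and run
              // the permute-with-all loop using exactly those picks.
              long c = code;
              for (int i = 0; i < n; i++) {
                  int pos = (int) (c % n);
                  c /= n;
                  int tmp = a[i]; a[i] = a[pos]; a[pos] = tmp;
              }
              hits.merge(Arrays.toString(a), 1L, Long::sum);
          }
          // A fair shuffle would give every permutation n^n / n! hits.
          hits.forEach((perm, k) -> System.out.println(perm + " : " + k));
      }
  }

With n = 3 it should print six permutations with visibly unequal counts (fours and fives) summing to 27, matching the "Mean hits 4.500" entry in the table above.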


There's a more serious issue here. When you do a deck shuffle with this code, the final state of the deck is determined by the original random seed. (Remember, each seed is used to generate the next seed.) That random seed is 32 bits, or possibly 64 bits.
A deck of cards contains 226 bits of data.
Believe it or not, the majority of deck configurations can never be generated by the code that Steve posted.
You'll need more than 226 bits of true entropy to properly shuffle a deck. APL (talk) 19:33, 2 December 2013 (UTC)[reply]
Oh, of course, it's discussed here in the article : Fisher–Yates_shuffle#Pseudorandom_generators:_problems_involving_state_space.2C_seeding.2C_and_usage
APL (talk) 19:37, 2 December 2013 (UTC)[reply]
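The 226-bit figure is just log2(52!) rounded up, which is easy to verify; a quick sketch:

  public class DeckEntropy {
      public static void main(String[] args) {
          double bits = 0;
          for (int k = 2; k <= 52; k++) {
              bits += Math.log(k) / Math.log(2);  // accumulate log2(52!)
          }
          System.out.println(bits);  // about 225.58, so ~226 bits per full shuffle
      }
  }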
I don't know anything about the Java Random class; it may be that it only has a 32-bit or 64-bit state, but that would be pretty silly. That made sense a long time ago, when 32K was a lot of RAM (maybe all you had), but not now. I was assuming that it was a modern stateful RNG and treating that as a black box. I really think that's a less serious issue, not more serious. --Trovatore (talk) 20:43, 2 December 2013 (UTC)[reply]
It's not really about the internal state; it's about the starting condition, which is defined by the seed.
Java's Random is seeded by a long (64 bits).
Worse, C's rand is seeded by an int, which is 32 bits on many systems!
APL (talk) 21:31, 2 December 2013 (UTC)[reply]
Well, these are details of the RNG implementation; I've written RNGs myself and dealt with that sort of thing, but I'm not interested in it here. You can certainly set up the system to get lots of true-random bits for the seed, if you're interested in doing that. Look into RNGs with a fat state, like Tausworthe or the Mersenne Twister, and consider seeding them from an entropy pool. But this is pretty much orthogonal to the issue at hand, which is how bad this version of the Knuth shuffle is intrinsically.
As a practical matter, I'm pretty confident that, even if the supply of seeds is small, any decent RNG will give a decent statistical sample of the space for most purposes. For example, I would be shocked if it systematically preferred even permutations over odd (or vice versa), when using the correct implementation. For the errors caused by this permute-with-everything implementation, I'm not confident of that at all. The statistics make me think that some property of certain permutations is systematically preferred by that implementation, and I would conjecture that it's a property that has some conceptual understandability beyond the bare statement that it is preferred by the implementation. That's just intuition for now; I haven't found out what the property is. --Trovatore (talk) 21:49, 2 December 2013 (UTC)[reply]

(unindent) There are clearly two very separate issues here:

  1. Is the random number generator sufficiently random?
  2. Is the algorithm introducing some kind of statistical anomaly?

The first is all about the quality of that Random class in Java. I suspect it's probably very good...but the algorithm doesn't depend on a particular random number generator - if you need a better one, you can write one and use it. That's really not the question here.
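For example (a sketch of my own, not from the thread): Collections.shuffle has an overload that accepts any java.util.Random, so a stronger generator such as java.security.SecureRandom drops straight in without touching the algorithm.

  import java.security.SecureRandom;
  import java.util.Arrays;
  import java.util.Collections;

  public class SecureShuffle {
      public static void main(String[] args) {
          String[] deck = { "AS", "KS", "QS", "JS", "TS" };  // stand-in deck
          // SecureRandom extends Random, so it works as the randomness source.
          Collections.shuffle(Arrays.asList(deck), new SecureRandom());
          System.out.println(Arrays.toString(deck));
      }
  }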

The second issue is the critical one. And I agree that the algorithm that I presented isn't as perfect as I first thought. (It is, however, plenty good for practical card-shuffling purposes because no matter how bad it is, it's not remotely as bad as a human doing the job with a real deck of cards!)

To understand *WHY* it's flawed is a bit subtle. Our article points out that the number of possible permutations of the cards (52!, or 52 factorial)

80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000

...does not divide exactly into 52^52

252,480,111,352,266,526,589,168,512,857,250,850,253,488,701,621,266,216,973,895,038,073,875,801,560,933,776,490,496

...which is the number of possible ways that the algorithm can have moved the cards around. If the algorithm were perfect, then each pattern of cards would come about in exactly 52^52/52! different ways through the software. That number is:

3,130,248,245,973,452,170.5993100879

And what's significant about that number is that it's not an integer, which means that some patterns MUST come up more frequently than others - which is the final proof that the algorithm is statistically imperfect even if the random generator that drives it is perfect.
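A quick BigInteger sketch makes the divisibility claim (and the long figures quoted above) easy to check:

  import java.math.BigInteger;

  public class ShuffleCounts {
      public static void main(String[] args) {
          BigInteger factorial = BigInteger.ONE;
          for (int k = 2; k <= 52; k++) {
              factorial = factorial.multiply(BigInteger.valueOf(k));  // 52!
          }
          BigInteger paths = BigInteger.valueOf(52).pow(52);  // 52^52 execution paths

          System.out.println("52!   = " + factorial);
          System.out.println("52^52 = " + paths);
          // A non-zero remainder shows that 52! does not divide 52^52 evenly.
          System.out.println("rem   = " + paths.mod(factorial));
      }
  }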

Unfortunately, that tells us nothing about HOW imperfect it is. The best-case scenario would be if one card pattern showed up about 0.00000000000000001% more often than any of the others...which would be entirely irrelevant for any reasonable purpose and way less bad than the statistical flaws inherent in any known software random number generator. But it might be worse than that...maybe MUCH worse.

The tricky part is understanding just how bad that statistical unfairness really is... particularly in the context of a particular kind of card game. For example, in the game of Contract Bridge, it doesn't matter what order each player gets their cards in - only which cards end up with which people. In the child's game of "Snap", it doesn't matter what suits the cards are.

However, our OP's question wasn't about finding a halfway decent shuffling algorithm - but actually about moving one data structure into another (and back again) in Java. So I guess we've already gone much deeper than is needed to answer that.

SteveBaker (talk) 01:39, 3 December 2013 (UTC)[reply]

Well, OK, but you don't seem to have taken into account that I have done a good deal of work characterizing how imperfect it is. Granted, I didn't get anywhere close to n=52, but there are clear trends, and they are definitely not in the direction of the permutations being distributed as evenly as possible given that 52! doesn't divide 52^52. --Trovatore (talk) 01:50, 3 December 2013 (UTC)[reply]
Interestingly, the algorithm used in Java's Collections.shuffle differs only slightly from mine:
 Random rand = new Random() ;
 for ( int i = 51 ; i > 0 ; i-- )  // For every card in the deck except the first
 {
   int pos = rand.nextInt(i+1);  // Pick a place for this card to go.
   Card tmp = cards[i] ; cards[i] = cards[pos] ; cards[pos] = tmp ;  // Swap the i'th card with the one in some random place. 
 }
That tiny change results in only 52 × 51 × … × 2 = 52! equally likely execution paths - and so long as we believe that every possible card ordering is reachable, it is perfect (assuming the random number generator is also perfect). SteveBaker (talk) 15:44, 3 December 2013 (UTC)[reply]
So, no comment on the statistics, then? Really I thought they were fairly surprising. They show not merely that the permutations are not distributed as evenly as possible; they aren't even distributed "randomly", so to speak. That is, if you threw an n!-sided die n^n times, and kept track of the permutation coded by one throw of the die (that's why it has n! faces), then you would get a MUCH more even distribution than this one.
That's why I think it's likely to be some property that we can find. I have the data (I'll share them with you, Steve, if you like, or anyone else who asks) for n=2 through 10, and it should be possible to analyze them to see (for example) whether the position of 0 in the permutation is, on the average, where we expect it to be. Or whether there's an inordinate fraction of odd permutations. Or lots of other possibilities. --Trovatore (talk) 19:54, 3 December 2013 (UTC)[reply]
Trovatore, I would be interested to follow up on your investigation, as I have frequent needs for cryptographically secure, alias-free shuffling algorithms (both for practical and non-practical purposes). It is always good to get a refresher on pathological ways that random sequence generators can alias. Maybe later this week you can share your method and data. In the meantime, I'll spend some time thinking about the problem, and ways to safely and sanely experiment with larger n.
In practice, I usually use the Mersenne twister, or an srand-like function that gets seeded with hardware entropy; but I'm sure my random numbers are only as good as the weakest post-processing that I perform on them. I don't frequently shuffle cards, but I do schedule tasks, allocate resources, and so on - so this might be a fun experiment. Nimur (talk) 04:30, 4 December 2013 (UTC)[reply]
OK, here's the link to my code. I've never tried to share code this way before; I don't know if you can just copy/paste without getting rid of some screwy characters or something, but it's short enough that even completely retyping it wouldn't be too bad.
I've done a little analysis — it looks like the number that winds up in the 0 position is not at all evenly distributed. For n=10, there's exactly a 10% chance that 0 winds up there, which is exactly right, but the chance that 1 winds up there is almost 13%. Then it drops off steadily, with 4 having a 10.44% chance and 5 a 9.78% chance, and winding up with 9 having a 7.75% chance. I thought about this a bit and it kind of makes sense to me; I'll post the explanation later unless someone anticipates me.
So it looks like part of the "favored property" is just having certain cards wind up in the first (perhaps early in general?) position(s). --Trovatore (talk) 10:13, 4 December 2013 (UTC)[reply]
"It compiles and runs," ($ c++ -o out.exe main.cpp permutation.cpp ). I will spend some more time reading code and investigating results. If this question ends up archived... I know how to find your talk-page; I don't expect I can spend a lot of time on "fun programming" today, but I'll report back whenever something neat comes up. Nimur (talk) 15:45, 4 December 2013 (UTC)[reply]
I went ahead and uploaded the data for n=10 (the only one that doesn't run in seconds) to the same folder -- look for out10new.bz2. The format is the same as the output from the program you have. --Trovatore (talk) 19:48, 4 December 2013 (UTC)[reply]
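A Monte Carlo version of that first-position tally is easy to run without the exhaustive census; here is a sketch of my own (not the linked code), reusing the broken loop from earlier in the thread:

  import java.util.Random;

  public class FirstPositionBias {
      public static void main(String[] args) {
          int n = 10;
          int trials = 10_000_000;
          long[] firstValue = new long[n];
          Random rand = new Random();

          for (int t = 0; t < trials; t++) {
              int[] a = new int[n];
              for (int i = 0; i < n; i++) a[i] = i;
              for (int i = 0; i < n; i++) {       // the "permute-with-all" loop
                  int pos = rand.nextInt(n);
                  int tmp = a[i]; a[i] = a[pos]; a[pos] = tmp;
              }
              firstValue[a[0]]++;                 // which value ended up first?
          }
          for (int v = 0; v < n; v++) {
              System.out.printf("value %d at position 0: %.4f%n",
                                v, firstValue[v] / (double) trials);
          }
      }
  }

Up to sampling noise, the frequencies should match the figures reported above: roughly 13% for 1, 10% for 0, tailing off toward 8% for 9.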

TrueCrypt Pre-Boot Authentication - can it be temporarily disabled?

Hi, I'm looking for a way to temporarily disable the built-in Pre-Boot Authentication feature of TrueCrypt, so that security updates / software installations that require a reboot don't require the physical presence of somebody to type in the password. Does anybody know how that could be accomplished, preferably purely in software or with the use of a regular USB storage medium (no special "token generator")? I'm looking for a way that would have it bypass the authentication either "just once", for a pre-defined, selectable number of reboot cycles, or for as long as the USB storage medium is plugged in.

Kind Regards, 149.172.200.27 (talk) 12:44, 28 November 2013 (UTC)[reply]

As I'm sure you know, the password is required to actually derive the key that the system then uses to decrypt/encrypt your data on the fly. I guess it might be possible to modify the preboot code to read your passphrase/key from somewhere and then boot as normal, but I don't know of anyone who's done that, and it has the problem of storing your passphrase. It might also be possible to use a secure token / smartcard, but I don't know if that works with preboot/system encryption. It should be obvious that there's no simple "don't require a password this time" option: unlike your Windows password, which provides software-controlled access control, full disk encryption requires the key and (hopefully) never stores it on the disk, because doing so would defeat the entire purpose. Shadowjams (talk) 02:33, 30 November 2013 (UTC)[reply]
It wouldn't defeat the purpose if it were done temporarily for the reason given above, would it? AFAIK, commercial encryption software vendors offer such a feature; I was just hoping it is available with a free and open-source solution like TrueCrypt, too. -- 188.105.122.86 (talk) 23:46, 1 December 2013 (UTC)[reply]
If you ever write the key (or anything that can derive it) to the disk, you've given up on every major advantage that full disk encryption offers. I would be very interested in how the commercial vendors offer this feature. Shadowjams (talk) 05:06, 4 December 2013 (UTC)[reply]
I should point out that you imply "temporary" storage to the disk. That has all sorts of caveats with it, and SSD drives make it even more complicated. The short version is: if it ever gets written to the disk, it's no good. If this is the kind of security you require, then basic access-control security from the OS should be enough. Shadowjams (talk) 05:09, 4 December 2013 (UTC)[reply]

How does an SQL JOIN work with a WHERE clause in the background to make a query more efficient?

I know how using a WHERE clause works in the end product. But recently at work I ran a query and it ran out of spool space. I was eventually able to get it to run by removing a couple of the joined tables. I want to know more about writing efficient queries. I know the basic process of writing queries in general, but I have several specific questions on this topic. I would love a webpage on the subject if you know of one; I have googled for such a webpage many times and been unsuccessful. I may not know enough to use the right words.

I have heard that when you do a JOIN it creates the Cartesian product of the tables. If I write a query of the form SELECT stuff FROM somewhere WHERE conditions, do the WHERE conditions make the Cartesian product smaller? That is, are the filters applied to the various tables before the Cartesian product is formed, or does it create the Cartesian product first and then go through and limit the rows? This would make a huge difference in how much computing power is needed, and I can't find the answer anywhere. If it does create the Cartesian product first, is there a way to make it more efficient?

I once read that if you move the WHERE conditions into the JOIN clause, it can make the query more efficient. Is that true? For example, instead of putting a.number > 50000 in a WHERE clause, you would put that same condition in the join, i.e., a JOIN b ON (a.ID = b.ID AND a.number > 50000). Does this make it more efficient?

Also, if you do a bunch of JOINs in one query, does it create a huge Cartesian product of all tables all at once? Or does it do the first two, then limit, then the result with the next table, then that result with the next table, etc.?

Any references or answers would be greatly appreciated. Thanks. I would love references more than anything probably so I can learn more and more.

StatisticsMan (talk) 15:33, 28 November 2013 (UTC)[reply]

If you want to know how a specific complex query of your own is actually evaluated, most DB engines have a mode where they show the basic operations they perform to satisfy a query. In MySQL, and I think Postgres, it's called EXPLAIN; DB2 has an explain utility, and Microsoft SQL Server has SHOWPLAN. Some suites also have a tool that helps you design queries better by looking at actual queries on representative data - DB2 has the Design Advisor, for example. -- Finlay McWalterTalk 15:41, 28 November 2013 (UTC)[reply]
How a query is evaluated can also depend on statistics gathered about the tables - for instance, how many records there are in each table, and whether a field has only a few values or mostly unique values. The same query may be evaluated in quite different ways as the database changes. It is even possible that a small trial is run to evaluate some statistics before settling on the final scheme. In fact, I see Wikipedia has an article, Query optimization, about all this. Dmcq (talk) 15:53, 28 November 2013 (UTC)[reply]

MSDN (Microsoft Developer Network) has groups/forums on SQL Server. You can post there because, as far as I remember, anyone can post even without being a subscriber. http://social.msdn.microsoft.com/Forums/sqlserver/en-US/home?category=sqlserver&filter=alltypes&sort=lastpostdesc — Preceding unsigned comment added by 2601:7:7680:626:86D:846D:AD0B:96BD (talk) 00:45, 29 November 2013 (UTC)[reply]

How do I mount my SD card's 2nd partition on my Samsung Galaxy Victory? (TWRP doesn't let me select "Mount SD Card.")

I can't use Link2SD to transfer movable apps to the SD card unless I "mount" the 2nd partition.

TWRP won't let me select the option "Mount SD Card," so I figure TWRP isn't the right app to mount the 2nd partition from. What will let me?

Once I successfully mount the 2nd partition, I'll get 4 choices: ext2, ext3, ext4, and FAT32/FAT16. I'm trending toward ext4 - so is ext4 correct?

I've been wanting to transfer many movable apps to the SD card because I have anemic internal storage space (<5 GB). That's why I partitioned my 32 GB SD card. I still have yet to figure out how to "mount" said partition, and what file system to choose, before I move the apps. Thanks. --Let Us Update Wikipedia: Dusty Articles 17:30, 28 November 2013 (UTC)[reply]

Unwanted pop-ups II

I posted recently on the subject of my having these unnerving popups. They still bother me. This is what I did this morning: when the popups started, I opened the Task Manager and looked at the processes running. One of them caught my attention: FlashUtil64_11_9_900_152_ActiveX.exe. I noticed that when I close the browser (Internet Explorer) with the popups on it, the process disappears from the Task Manager. I then traced it to this directory: Windows\System32\Macromed\Flash. This subdirectory has a DLL, an EXE and a log file listing numerous downloads. I don't even remember if I ever allowed so many downloads.

Can I delete the whole directory? Is it the one that is the troublemaker?

Thanks, 2601:7:7680:626:BD56:6784:7D52:E377 (talk) 18:07, 28 November 2013 (UTC)Alex[reply]

Well, Flash animation is used on lots of internet games and such, so you probably want to keep that. But, you could uninstall it and reinstall it, to hopefully leave out the malware. StuRat (talk) 11:59, 29 November 2013 (UTC)[reply]

OK, I just checked. This Flash thing is NOT in "Uninstall Programs." What can I do? I don't play any video games. Can I just delete the whole directory, perhaps? Thanks 2601:7:7680:626:D430:E4A9:403A:140 (talk) 01:40, 30 November 2013 (UTC)[reply]

It turned out it was Adobe Flash Player. I uninstalled it, and it seems those nasty popups are gone now. 2601:7:7680:626:D430:E4A9:403A:140 (talk) 02:26, 30 November 2013 (UTC)[reply]

Ok, but you might soon run into something that says it needs to install the Flash player to run. At that point you can reinstall it or just skip running that particular application. (Note that Flash itself isn't the problem, just some malware that inserted itself into the Flash directory.) StuRat (talk) 06:28, 30 November 2013 (UTC)[reply]

Sure, you are correct. I wonder how they do it and whether there is a law to bring them to justice. I hope it is not Adobe themselves that do it but some people who are familiar with the application, perhaps former employees who have the source code. 2601:7:7680:626:3542:4C7:B0A6:21C0 (talk) 17:08, 30 November 2013 (UTC)[reply]

It probably isn't either: the Adobe Flash Player has many security holes that clever programmers can exploit. This is primarily because the Flash Player needs to be able to execute code on your system to work: Windows does not have very good access control, so with programs like Flash, there's a conflict between granting enough access to be useful and preventing others from exploiting this access to do things they shouldn't. OldTimeNESter (talk) 18:12, 2 December 2013 (UTC)[reply]
My rule of thumb is, if it tells you that "Flash is required", it's not worth looking into, anyway. Flash is highly unsafe and as video formats go, not any better than competing formats like MP4. Adobe are not exactly trying to cut down on the obfuscation – they keep pushing new features before last year's features are debugged, and web developers who know this but still use Flash are, IMO, best avoided.
If you can live without the interactive features of Flash, you can try the same. - ¡Ouch! (hurt me / more pain) 13:07, 4 December 2013 (UTC)[reply]