
Wikipedia:Reference desk/Archives/Computing/2011 June 3

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 3


128 bit gaming


I've seen video games progress from 8 bit -> 16 bit -> 32 bit -> 64 bit. Why is it that the PS3 and XBOX360 remained at 64 bit, and the Wii regressed to 32 bit, instead of any of them moving to 128 bit and (later) 256 bit? Ballchef (talk) 02:58, 3 June 2011 (UTC)[reply]

I suspect that the answer is that the benefits aren't worth the effort, or the hardware costs. Extra bits only really amount to extra precision in games - i.e. you can model things more accurately. What is needed most is usually to model things faster - so you can model more of them. AndyTheGrump (talk) 03:09, 3 June 2011 (UTC)[reply]
I see! And by "more accurate" do you mean, for example, more lifelike animation? That would explain why the Wii has gone back to 32 bit. Ballchef (talk) 03:29, 3 June 2011 (UTC)[reply]
The old practice of doubling the advertised number of "bits" each generation was purely a marketing gimmick. There was no aspect of the hardware that these numbers described, although there probably was always some N-bit bus somewhere that they could point to in order to justify their claim. If they wanted to call current consoles 256-bit or 1024-bit, they could find a way to justify that. But I suppose they decided that the large numbers were starting to look silly. I think that a few consoles were advertised as 128-bit. -- BenRG (talk) 04:30, 3 June 2011 (UTC)[reply]
I have to disagree with your claim "There was no aspect of the hardware that these numbers described". According to orders of magnitude (data) the numbers referred to the "word-size" (instruction length) of the console. While I don't know what this means, it seems to have a purpose. Also, you missed my point - why do they not continue to grow the "word-size" of these machines? If Andy the Grump is referring to what I think he is, then they should, so that the images and video of games look more realistic. Ballchef (talk) 07:31, 3 June 2011 (UTC)[reply]
The word size of processors has not increased in recent consoles because it isn't necessary to do so. Regarding the relation between word size and realistic 3D graphics, 32-bit floating-point numbers are adequate for modeling 3D geometry so accurately that whatever distortion results from the limited precision is essentially imperceptible — in my (very limited) experience, lack of accuracy only causes 3D geometry to do weird things (such as two objects separate from each other joining or overlapping when moved close enough to each other) when very fine scales are used. In games, such scenarios never happen because of what games are; they are not real-world simulations, and they don't let the player end up in such situations.
Regarding the word size and instruction length, our article happens to be incorrect about this issue; the PlayStation 2 processor had 32-bit instructions. There is no relation between the size of words (quantities of bits that the processor can process as one piece of data) and the length of instructions. For example, the x86-64 processors in PCs have instructions that have a variable length, 8 to 120 bits IIRC, but the processors only operate on words up to 64 bits in length. Rilak (talk) 08:06, 3 June 2011 (UTC)[reply]
That can't be right, because instructions on Intel chips fall exactly at byte boundaries. It must go up to 128 bits. i kan reed (talk) 13:08, 3 June 2011 (UTC)[reply]
15 x 8 = 120. AndyTheGrump (talk)
I can't help if I can't do elementary math anymore. i kan reed (talk) 14:02, 3 June 2011 (UTC)[reply]
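Rilak's point about single precision being good enough for game geometry is easy to see numerically. Below is a minimal Python sketch (purely an illustration; it is not anything a console actually runs) that rounds ordinary 64-bit Python floats through IEEE-754 32-bit floats with struct and shows where the resolution runs out:

    import struct

    def as_float32(x):
        # round x to the nearest IEEE-754 single-precision value
        return struct.unpack('f', struct.pack('f', x))[0]

    # 2**24 + 1 is the first whole number a 32-bit float cannot represent:
    print(as_float32(16777216.0) == as_float32(16777217.0))     # True

    # a 1 mm offset applied 100 km from the origin vanishes entirely:
    print(as_float32(100000.0 + 0.001) - as_float32(100000.0))  # 0.0

For a game world measured in metres and a few kilometres across, the corresponding error is well under a millimetre, far below anything visible on screen.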
First, like BenRG says, those numbers don't always mean anything. For example, the Dreamcast was frequently advertised as a "128-bit gaming system", but it used a CPU with a 32-bit word size.
Secondly, CPU word size isn't really an indication of CPU power. It primarily has to do with precision, and with how much RAM the CPU can use effectively. There are a few other things a larger word size is better for, but in general it's not really a big deal compared to other aspects of the CPU.
Really, word size is a boring technical detail that advertisers and marketing people latched onto for a while because it was an easy-to-understand number that kept going up. APL (talk) 17:37, 3 June 2011 (UTC)[reply]
Word size is a boring technical detail, but it may have important ramifications to other things like maximum sustainable frame-rate, or maximum number of renderable polygons per second, which are more intuitive specs for a gamer to understand. The hard part of marketing digital electronic computer hardware is trying to figure out which of the six billion transistors are worth advertising: every part of a modern computer's hardware is used for something, and different architectures differentiate themselves in different ways. In previous generations, processor word-size was a good "umbrella" for a set of additional peripheral features, sort of like describing an automobile only by the number of cylinders in its engine. Technically, you could put a 12-cylinder engine on a tin can, but nobody sells high-end systems with low-end peripherals. Similarly, a 64-bit system "implies" that a set of additional hardware, including SIMD, multimedia extensions, advanced DMAs, and other boring technical details will also be included; consequently, the game features will be less limited by hardware.
Anyway - specifically regarding 128-bits: you might find 128-bit buses all over your system; but this is a feature that is opaque to most game programmers, who tend to reside in application space nowadays. As hardware has become more sophisticated, games programmers spend less time tuning to hardware details, relegating this difficult task to hardware driver programmers and operating systems programmers. I think you will be hard-pressed to find any modern game-programmer who writes to the bare metal of their machine, without an operating system to hide the number of bits and other boring technical details from them. In the old days, when Nintendo programmers found ways to slam AI code into audio-processor ROMs and similar such trickery, it really mattered to performance if the system audio bus was 4, 8, or 16-bits; but these sorts of hardware hacks are essentially nonexistent on today's platforms. Nimur (talk) 18:15, 3 June 2011 (UTC)[reply]
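To put rough numbers on the "how much RAM" point made above: the word size is essentially the size of a native pointer and integer, which caps the flat address space a single program can use. A small Python sketch (the output depends on whether the interpreter itself is a 32-bit or 64-bit build):

    import struct, sys

    pointer_bits = struct.calcsize('P') * 8     # 32 on a 32-bit build, 64 on a 64-bit one
    print(pointer_bits, "bit pointers")
    print("largest native integer:", sys.maxsize)                  # 2**31 - 1 versus 2**63 - 1
    print("flat address space:", 2**pointer_bits // 2**30, "GiB")  # 4 GiB versus about 17 billion GiB

The 4 GiB figure on a 32-bit build is the familiar ceiling on directly addressable memory.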

(homework question, got it resolved by my prof, thanks!)

Multicolumn text flowing multipage sliding


Dear All,

Hi, I need help. I want multicolumn text that flows across multiple sliding pages; it should calculate the total number of columns, the gap, the text size and the image size for different browsers. It should be controlled with Previous Page and Next Page buttons, and at the end of the story the Next Page sliding should stop. Please give me suggestions on how to implement this with CSS, JavaScript, JScript, or anything else...

Best Regards, baski — Preceding unsigned comment added by Indianbaski (talkcontribs) 09:22, 3 June 2011 (UTC)[reply]

More MS Office Problems


Every time I try to open a document by double clicking on it, MS Word opens up as normal and as expected, however, without fail I get a message saying that Word cannot find the file and to make sure I've 'spelt the address properly'. When I then double click on the document again, with Word still open, it opens up fine. I've looked around for info on this, and was able to find someone advising others to 'add [blah blah blah] folder to trusted files' and stuff, which I've done for some folders - but it doesn't work for all the folders inside folders (it says it does, but it doesn't) and I don't like having to use a workaround for something that was not broken a short while ago. Does anyone know what I can do to fix this?

Also, possibly unrelated but, in recent weeks, any Word .doc file has been showing without its default square W icon. I've been getting an icon that looks like it should be .rtf or .xml or something. .docx files, however, show up fine with their default rounded W icon. Can I fix this?

I am not sure if it's relevant, but I have all the updates for Office 2007, plus Acrobat X Pro and TRADOS Studio 2009 & Multi Term addons. --KägeTorä - (影虎) (TALK) 11:48, 3 June 2011 (UTC)[reply]

As for the appearance of .doc files in Windows Explorer, perhaps you installed some software that tells Windows that .doc files belong to it instead of to Word? I use Windows 7, but I think the following will also work on XP and Vista: Right-click one of the .doc files and choose "Properties". On the "General" tab, it should say "Opens with: Microsoft Office Word". If it lists a different app instead, click "Change...", and then the "Browse" button if necessary, to change it back to Word. Comet Tuttle (talk) 23:05, 3 June 2011 (UTC)[reply]
Thanks, but right-clicking on a .doc in Vista doesn't give me anything with tabs. There is a list of options, and one of them is 'Open with...', which then leads to a list of programs, and 'Word' is at the top. I have manually set .doc to open with Word as default since this happened, too, but it makes no difference. Double-clicking on a .doc file opens Word, but I get an error saying Word can't find the file (which also happens with .docx). Incidentally, all of these problems only happen with Word, and no other component of Office 2007. --KägeTorä - (影虎) (TALK) 10:15, 4 June 2011 (UTC)[reply]

common practices request


Hi,

I'm about to join a development "team" that had consisted of a senior developer, who was supposed to take me on for a while until I learned the ropes as a junior developer. Unfortunately, the senior developer has departed prematurely, and I will have no guidance whatsoever. The situation is a common LAMP setup, without a staging server or test server; instead the production process consists of work on two machines: development and testing on the developer's machine, and the production server. I did ask the senior developer about the process he uses on his own machine, and he uses Git for version control on his own machine (and, obviously, none on the production server). So, I hope to do the same thing. My request here is for a detailed description of the common process someone would use to maintain code with Git from their own machine to be then used on a production (single-server) LAMP environment. The code is not complicated, and the Git repository does not have several branches, just a single one. (Sorry if I'm misusing the terminology; I have not used this version-control system yet.) Could you, in particular, walk me through the process I would probably use? (The previous developer tests on his own machine using a VM). Is it something like this:

-> from the Linux shell (bash etc), check out (by hand, typing a checkout command) the files to be edited
-> Edit them in (alphabetically listed -) Emacs or Vim
-> Check them back in
-> ???
-> Test in my VM, which would get the files from the Git Repository?
-> ???
-> ???

Basically, I'm a bit fuzzy on the workflow... However, I am very motivated, and am sure I can figure it out if you help with what you think would be the most common practice for this situation! Thanks so much. Also, the guy had only edited the HTML files by hand - no WYSIWYG layout software or anything like that... would it be normal for a designer (which is more my background) to set one up to somehow use Git as well? How? --86.8.139.65 (talk) 18:29, 3 June 2011 (UTC)[reply]

Using git on a personal machine like this is useful for removing changes. That is the sole reason for it. It isn't for backup - if he loses his machine, he loses the repository. It isn't for multi-user version control. He is the only developer. So, he is using it to make a change, demo the change, and possibly remove the change. He is certainly running a copy of the site on his own computer, which is where he views/tests changes. Once all is good, he just copies the files to the webserver. No need to have it load from git because it always has the latest version once changes are accepted. As for editing by hand, that just means that he knows what he is doing. Using a WYSIWYG editor is painfully tedious if you just want to make a site work. Anecdote: I developed our entire site here by hand. Any changes were implemented very quickly. No pain. No fuss. Then, the order came down from on high that all websites must go through SiteExec. It took weeks to move the existing site to SiteExec. It took weeks to agree that much of the site simply couldn't be implemented through SiteExec. It took weeks more to agree on how to downgrade anything of interest in the site because it was far too painful to make simple changes (such as fixing a typo in a person's name) through SiteExec. Now, the website is broken and outdated and everyone refuses to fix it. It is possible that some day in the distant future we'll be allowed to get rid of the WYSIWYG editor and be able to take control of our site again. -- kainaw 18:42, 3 June 2011 (UTC)[reply]
Thanks for the highly relevant info! --86.8.139.65 (talk) 19:17, 3 June 2011 (UTC)[reply]
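In concrete terms, the single-developer loop kainaw describes can be as small as the sketch below. It assumes git and rsync are installed; the server name and web root (user@prod.example.com, /var/www/html/) are placeholders, not anything from the actual setup:

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Record the change locally so it can be reverted later if it turns out bad.
    run(["git", "add", "-A"])
    run(["git", "commit", "-m", "Describe what changed and why"])

    # 2. After checking the change against the local VM, copy the working tree
    #    to the live server, leaving the .git directory behind.  Drop --delete
    #    if the server holds files (uploads, logs) that are not in the repository.
    run(["rsync", "-av", "--delete", "--exclude=.git",
         "./", "user@prod.example.com:/var/www/html/"])

The same commands typed at a shell work just as well; wrapping them in a script only makes the steps repeatable.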
It's difficult to know what "best practice" is without a better idea of the importance of the server/site to the company (e.g. what happens to the company if the site is down for two days? or if the last week of transactions on the database are lost?). Knowing how important it is, and what steps have been taken to resist loss or downtime due to system crashes, powercuts, fires, etc., will define how paranoid "best practice" needs to be. It sounds like the system justifies the time of a full-time developer; if that's true, it's very difficult to understand why it wouldn't justify a proper staging server and a battery of tests to be done there before changes are pushed to the live server. -- Finlay McWalterTalk 19:36, 3 June 2011 (UTC)[reply]
Thanks, also extremely useful. Without further access to the previous developer on my part, could you provide any other guidance you think would be useful for me? Other than git and emacs, what do you guess the previous developer would have been using? I am kind of in the deep water on this one, as I will basically just receive access (passwords) and then am on my own, despite being only a junior developer. So anything you can throw my way would be extremely useful from this perspective. As for your questions, it is a tiny company on the brink of collapse and I'm working for peanuts with the hope of proving myself. (However, I had thought I could do so with some guidance from the senior developer, who was not paid in full and so has completely stepped down early; basically he will just give me passwords, "handing me the keys")... So, this is obviously not an ideal situation, and much more akin to an internal (intranet) site than a customer-facing one, but shit happens and I'll try my best. I hope you will have more guidance for me! Thanks so much. --86.8.139.65 (talk) 19:55, 3 June 2011 (UTC)[reply]
Incidentally, and this could be superfluous information, but there's nothing wrong with the business model (customers are coming and paying, the phone's ringing); it's just internal organization issues between the founders, all but one of whom have moved on to something else, that have led to this situation. There's nothing wrong with the prospects of the company, it's just that it could really use active development again, which hasn't happened in several months. So, all of the suggested information above about a staging environment is quite helpful (though I might not be the person to do it), but as for putting out the fire right now, I'm really looking for the most practical things I will need to "pick up the reins" so to speak. Thanks for anything like that which you could suggest! By the way, we do have test scripts, so it's not as bad as it could be :) --86.8.139.65 (talk) 19:59, 3 June 2011 (UTC)[reply]
(edit conflict) I see, it's trial-by-fire for you (and it sounds like you're healthily philosophical about it; after all, firefighters only really learn how to be firefighters by fighting real fires - just don't get burned). Don't assume that the departing guy did things in a terribly proper way; I've seen plenty of places where someone would just log in to the hosting machine and edit files with emacs, and if this made the running site go wonky a bit during the process, they didn't care. If I were you, I'd first make sure there are backups and replications going on as best you can, so that if a piece of hardware (like a disk) dies you can get things back - the trouble is that, even though you know that only a properly configured redundant system with tested failover and fallback mechanisms, fault-tolerant RAID arrays and offsite backup storage would really protect you, when something dies it'll be you on the hook to get things working, and you who gets the blame when the site is down for a week while you're building a new server. It's especially stressful as the new guy, because eventually you will break something (or become so paralysed by fear that you'll turn change-averse and won't get anything done). In the long term, a decent web app ends up being built from templates (either official templating-system templates or things cobbled together in php) rather than being hand-written code or the output of web-design programs. So you'd hope that departing-guy was using a template engine of some kind to build the site (even if it's just static content); sooner or later you'll be asked to rearrange the menu on every page (or something that, to management, seems like a trivial request) - if the site has 100 pages then you'll kill to be able to edit just one template and rebuild. As to workflow - I've heard of places that do something like this:
  • You pull the tree on a developer's machine, implement and test changes, and then commit them back to the tree with a tag
  • On a staging server (I guess a staging VM for you; you want it as close to the production environment as you can) you pull that tag and run acceptance and module tests, as appropriate. If the site we're talking about is really an app (and not just some static pages) then you'd really hope the guy has written a bunch of tests for it (and if he hasn't, maybe it's a valuable thing for you to do). In a sensible organisation, where the app is reasonably mission-critical, you often get a sign-off process at this point, where a manager or another developer approves the patch (and that approval gets recorded in some document somewhere, so if things go wrong then the blame is carried by everyone involved, not just some lowly frightened junior developer pushing his first patch to a site he barely knows).
  • Only then, on the live server, do you pull the tag. Depending on your app, this itself can be a scary proposition, as you need to think about what happens to the traffic that's on the site while you're making that patch (which, if it's more than one file, isn't entirely instantaneous). It's common for sites to have two or more web-facing servers, and when they push a change they set the load-balancing system to send no new requests to that machine; once all its current business is done and no live transactions are outstanding on it, you push the changeset (confident that there won't be "skew"). Then the load-balancer sends traffic to that one and you repeat for each customer-facing server (often with a delay, so you can back off if a change makes a server overload or crash).
If you don't have a load-balancer and multiple servers (which it sounds like you don't) then you either need to briefly take the site down to make a patch safely (which may be okay, depending on your business) or you need to find out how to do an atomic-update on a live site - how to do this depends on the site runtime. Some runtimes (e.g. many Java servlet containers) do it automatically - they can keep both the old and new versions around, and only purge the old when all the traffic that was using it is done. Don't assume, of course, that the departing guy thought very hard about this stuff. -- Finlay McWalterTalk 20:25, 3 June 2011 (UTC)[reply]
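For the single-server case, one common way to get the near-atomic switch described above is to check each tagged release out into its own directory and then rename a symlink that the web server's DocumentRoot points at. A rough Python sketch; the tag, repository URL and /var/www paths are illustrative placeholders only:

    import subprocess

    TAG = "release-2011-06-03"
    REPO = "ssh://user@devbox/home/user/site.git"
    RELEASE_DIR = "/var/www/releases/" + TAG

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # On the developer machine: mark the tested revision and publish it.
    run(["git", "tag", TAG])
    run(["git", "push", REPO, "master", "--tags"])

    # On the staging VM, and only then on the live server: fetch exactly that
    # tag into a fresh directory of its own...
    run(["git", "clone", "--branch", TAG, "--depth", "1", REPO, RELEASE_DIR])

    # ...then swap the symlink the DocumentRoot follows.  The final rename
    # (GNU coreutils' mv -T) is a single atomic operation, so a request sees
    # either the old tree or the new one, never a half-copied mixture.
    run(["ln", "-sfn", RELEASE_DIR, "/var/www/current.new"])
    run(["mv", "-T", "/var/www/current.new", "/var/www/current"])

Requests that have already opened their files keep reading the old tree; requests arriving after the rename follow the new one, so the swap does not require taking the site down.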

User DMCA report system


Does anybody know if there is some sort of DMCA reporting system for users/fans?

The thing is, the DMCA demands that copyright infringements be reported only by the copyright owner or an authorized person. This in turn makes it impossible to report even the most blatant copyright infringements on a third-party website (e.g. at a filehosting service).

If there were some centralized website where users can report copyright infringements to the respective owners, that would go a long way towards fighting copyright infringement, much of which today takes place via filehosting services and online forums.

Registering at that supposed central DMCA reporting website would preferably be organized such that each user's account is tied to their real-life identity, to prevent abusive flooding of the report system. This could be achieved e.g. via unique email-addresses (from official bodies like universities etc.).

In my imagination, that reporting system would feature a very simple interface with just a couple of fields to enter the name of the work and artist, and the url(s) where the copyright infringement takes place (which may involve several places, e.g. a file at one or more filehosting services plus meta-search engine entries pointing to that file).

I'm pretty sure that for any given work, there would be at least one dedicated fan somewhere in the world, eagerly willing to scour the depths of the internet on behalf of their idol(s). So it would probably work even without any monetary incentive.

Does anybody happen to know if such a thing was ever tried or talked about? I believe it does make sense: even if it could not fight all copyright infringement, it may help to cut down on a lot of it, esp. at the most easily accessible places (Flash streaming websites, filehosting services, forums).

Alternatively, I'd be glad if anyone could enlighten me about possible flaws and/or improvements to this idea. --195.14.222.40 (talk) 18:36, 3 June 2011 (UTC)[reply]

Firstly, a DMCA claim is a legal assertion (made under penalty of perjury) that a given work is being distributed outwith its lawful licence. Only the copyright owner, or their agents, knows for sure what is or isn't being used unlawfully. Secondly, this is a civil dispute, and general principles of civil law limit actions to those who have "standing" in the matter; courts aren't interested in random filings from any old person, and third-party DMCA claims are equally meaningless. And thirdly, in making a DMCA claim a party accepts certain liabilities, particularly if their claim turns out to be false - those parties to whom the claim is sent may be able to recover the costs involved in servicing the claim (which might include expensive legal bills). -- Finlay McWalterTalk 19:26, 3 June 2011 (UTC)[reply]
(edit conflict) Um, maybe I wasn't precise enough. My idea involves a website operated by a conglomerate of voluntarily participating copyright owners and/or legal representatives, who would examine the filed infringement reports and then decide to respond themselves. Any report would be forwarded to the respective work's owner, and leave everything else up to them.
It would simply be a quick, easy, streamlined and, most importantly, centralized way for independent users to bring copyright infringements to the respective owner's attention.
Again: No reports would be filed to third party websites, let alone courts. It would be a friendly heads-up along the lines of "Hey guys, methinks that's a copyright infringement of one of your works over there at that url". And if a user files too many faulty reports, that user could simply be banned. --195.14.222.40 (talk) 19:38, 3 June 2011 (UTC)[reply]
It's perfectly reasonable, however, for the fans of celebrity X to report what they imagine are copyvios of X's stuff to X or X's agents. Armed with that, X can file a DMCA claim if they wish. It wouldn't be a bad idea to set up a little company that allowed fans to report such claims (for subscribing celebrities) back to the corresponding agent. None of that involves legal filings made by anyone other than X and their representatives, so there's no problem with it. -- Finlay McWalterTalk 19:28, 3 June 2011 (UTC)[reply]
Yes, that's my exact idea. So something like that doesn't already exist? Isn't that kinda weird? It seems like such a simple idea, which just occurred to me while doing the dishes. I can't be the first one to come up with something like this. --195.14.222.40 (talk) 19:40, 3 June 2011 (UTC)[reply]
It reminds me of the Business Software Alliance. --Mr.98 (talk) 19:50, 3 June 2011 (UTC)[reply]
There are already several companies that conduct automatic scans of video networks, file sharing networks, and the like, and are sometimes empowered to act as agents of copyright owners (and so to issue DMCA or other proceedings). They're also the ones that collect info for litigation against individual file-sharers, and sometimes they contaminate searches or corrupt file-sharing networks with defective content or wonky peers. It'd be they, rather than X, who would likely run (or buy) your crowdsourced service (and only if it could outperform their army of automated agents). The downside of that is that such chaps have gained a reputation for heavyhandedness (in some cases bordering on malfeasance) leading to them keeping as low a profile as they can. Crowdsourcing works when the well-intentioned outnumber and out-effort those opposed to the effort by a wide margin. I don't think your model has been tried on a large scale, and like all untried business models no-one can say it'll work, or that it definitely won't, until it has been tried. -- Finlay McWalterTalk 19:52, 3 June 2011 (UTC)[reply]
Well, those companies are not very successful. Copyright infringements are omnipresent throughout the web. I defy you to name any remotely popular movie or music album which I or anyone couldn't find inside of five seconds, ready to download at several megabytes per second. I wouldn't have formed the idea of such a crowdsourced system if the companies dedicated to that task had any actual success at fighting copyright infringement. --195.14.222.40 (talk) 20:33, 3 June 2011 (UTC)[reply]
As you say, the trouble isn't finding the stuff. The bottleneck, and the expense, is the legal backend. -- Finlay McWalterTalk 20:39, 3 June 2011 (UTC)[reply]
Is there currently a bottleneck? Doesn't feel like it, tbh.
At any rate, thanks for your insightful replies! Since I don't know the first thing about setting up a business or a website, I guess I'll have to leave it to someone else... :) --195.14.222.40 (talk) 20:41, 3 June 2011 (UTC)[reply]
The point Finlay is trying to make is that finding infringement is trivial. Google works pretty dang well for that. Therefore the "difficult" parts probably lie elsewhere: issues with actual implementation, jurisdiction, legal fees, and so on. --Mr.98 (talk) 20:46, 3 June 2011 (UTC)[reply]
Ah, ok. I was making a point about the inefficiency of those companies dedicated to fighting copyright infringement.
As to Finlay McWalter's point, I'm simply talking about a frontend for volunteers to report infringements. Everything else would be up to the copyright owners themselves. So I don't actually see where such a bottleneck or problem would come in play, except for setting up that frontend reporting website and convincing the large copyright holders (record companies, film studios etc) to join in (for free, no less, since my idea is not to make money but rather the question why such a system has not long since been established). --195.14.222.40 (talk) 20:54, 3 June 2011 (UTC)[reply]
(edit conflict) The trouble, from the copyright enforcer's perspective, is that the violators aren't large, readily-sued companies with stable operations. They're tens of thousands of virtually judgment-proof tweenagers and their ordinary parents. DMCA filings are mostly ineffective, as filing one takes days and all you've done is prevent a single person (one of those tens of thousands) from sharing. That's why copyright owners have worked so hard to get laws adopted (in places like France and the UK) which put more of the burden on ISPs, and which aim to disconnect persistent file sharers, or criminalise them. I doubt these will work either. -- Finlay McWalterTalk 20:57, 3 June 2011 (UTC)[reply]

the violators aren't large, readily-sued companies with stable operations -- Well, like I said above, my reporting system would be most suited to targeting infringements hosted on forums, filehosting services and streaming websites (I believe that these are particularly harmful due to their ease of use).

DMCA filings are mostly ineffective -- I don't believe they are. Look at how long copyright infringements are online. Files uploaded to some filehosting service are oftentimes up for years without the copyright holder (or anyone working on their behalf) ever catching on. In such cases, many downloads could be prevented. Needless to say, my system couldn't root out infringement. But it might make it a tad harder and more uncomfortable to share and especially to find and download files. What makes the current situation so incredible is that anyone can download or stream material. Web-savvy people will of course always find ways to share. But the masses need an easy avenue, and my system could very well work as a roadblock for many less apt web users, by reducing response times of the copyright holders (and by alerting them to many infringements in the first place). --195.14.222.40 (talk) 21:18, 3 June 2011 (UTC)[reply]

Oh boy. A system that encourages people to snitch out their friends, neighbors, and employers! That's never brought out the worst of humanity!
Besides, in all seriousness, I'm not sure it would help nearly as much as you think. The biggest offenders here are well known. Fox doesn't need your help to know that there are Simpsons clips on YouTube, or that dozens of "transcripts" of every episode are available just by googling for them. APL (talk) 21:51, 3 June 2011 (UTC)[reply]
A system that encourages people to snitch out their friends, neighbors, and employers -- Why are you putting a bad spin on combatting illegal activities? Anyway, my idea is mostly about reporting files hosted illegally on the web. If it happens to have been uploaded by one of my --in that case-- ex-friends, too bad for them. I don't share this very childish "us vs them" mentality.
doesn't need your help -- Hm, that's what I was wondering about. Apparently, they do need help (which makes me wonder how serious the big copyright holders actually are about fighting infringement). --87.78.136.233 (talk) 22:30, 3 June 2011 (UTC)[reply]
The issues regarding copyright laws in the United States are quite complicated. You might check out Lawrence Lessig's Free Culture for some great examples of that. It's a serious work written by a highly-respected law professor. It's available for free at his website. There are a lot of people who believe that the current state of US copyright law is an aberration brought about by excessive legislative pandering to a few major media companies. Whether you agree with it or not, you should be aware that a significant number of people online — including many who are very tech savvy — would see your site as a place for "snitching". --Mr.98 (talk) 23:01, 3 June 2011 (UTC)[reply]
Oh sure. As a practical matter, you might as well put up a giant sign that says "All hacker groups welcome to attack this site!". APL (talk) 08:18, 4 June 2011 (UTC)[reply]
Copyright violation is a civil matter between the copyright holder and the violator. I strongly question the motives of any 3rd party who wants to get involved.
People don't snitch because they're interested in Truth, Justice, and the American Way. They do it either because they want to get someone they are already angry at, because they want some reward or favoritism from the authority they're snitching to, or because they get a sick thrill from pretending to be an authority figure. In my opinion, any of those three makes you a bad person. APL (talk) 08:18, 4 June 2011 (UTC)[reply]

You could certainly set up the site as described — there's no bottleneck there. Whether people would want to participate, I have no idea. But I wouldn't expect it to make any difference in actual prosecution of copyright infringements. Again, the issue here is not that the companies cannot find instances of infringement. If you and I can do it within seconds of a Google search, so can they. If you can use a Torrent website, so can they. The issue isn't that the companies don't know that there are instances of infringement out there. --Mr.98 (talk) 21:52, 3 June 2011 (UTC)[reply]

But then why on god's green earth don't they move much quicker? I mean, there's even a "report copyright violation" button at some meta search engines. Why don't they do that? Maybe that's what I actually can't wrap my head around about this whole thing. What the heck are they doing all day, complaining about dropping revenues, when they could be systematically searching the internet and demanding removal of illegally hosted copies of their material? --87.78.136.233 (talk) 22:30, 3 June 2011 (UTC)[reply]
The main problem is that copyright infringement, even though it is easily locatable on the Internet, occurs on such a massive scale that it would require a very, very large staff to even feel like you're maybe making an impact on it. This is true with or without your reporting mechanism — as you know, even without the reporting mechanism, it would take a copyright owner only a few seconds to start finding dozens or hundreds of infringements. Many companies publicize e-mail addresses to which you can send reports of copyright infringement, but I'm 0 for 1 with reports to those companies: Once I was irritated at a brazen Craigslist posting in which a guy was selling DVDs full of PSP games for US$40. I was so annoyed that I e-mailed Sony's "stop piracy" e-mail address to report the post. Their response? Two weeks later I got an e-mail from a Sony person who said they had just investigated and found that the posting had expired, so they were closing the case. Bravo, Sony! And this was from one of the RIAA members, who are more aggressive against piracy than most other companies. Comet Tuttle (talk) 22:53, 3 June 2011 (UTC)[reply]
And having it centralized would still not cut out the need for a staff to vet, file letters, etc. (And again, all of this is jurisdiction-specific anyway. The DMCA only applies to content hosted on American servers.) --Mr.98 (talk) 23:01, 3 June 2011 (UTC)[reply]
  • Thank you all for your helpful responses! 98, I'll look into Lawrence Lessig's book, thanks for the pointer. Comet Tuttle, yeah, I've had similar experiences, not with Sony but other major players -- most of whom never bothered to respond or take any action. --87.78.136.233 (talk) 23:08, 3 June 2011 (UTC)[reply]

Why does Internet Explorer always say, "Internet Explorer cannot display the webpage"?


Whenever I want to go to Facebook, Google, or Microsoft.com, Internet Explorer always says "Internet Explorer cannot display the webpage", with Diagnose Connection Problems and More Information at the bottom. It constantly says that.

I have AT&T dial-up internet because I live in the country. Nothing's wrong with the modem, or anything. I think it's just the AT&T dial-up internet connection that's causing this.

I also have Internet Explorer 8. I'm right now downloading Internet Explorer 9. I don't know if this will help.

I've tried diagnosing connection problems, refreshing the page, and everything else.

What is wrong???

Please help me :/

thank you =) — Preceding unsigned comment added by ShadowAngel1994 (talkcontribs) 22:55, 3 June 2011 (UTC)[reply]

To clarify, have you tried using Mozilla Firefox or some other web browser? Do your other Internet-using applications work, such as Skype, web browsers, or Internet games? Comet Tuttle (talk) 23:06, 3 June 2011 (UTC)[reply]
I'd try making sure you don't have any proxies filled in. To do this go into the browser options (Press Alt-T and then Options whilst running Internet Explorer) and then select the connection tab. From here click LAN settings. Normally you would only have "Automatically detect settings" ticked, but to be safe I'd just try unticking everything and then pressing OK. You shouldn't need to restart your browser, but I would just in case and see if it has made a difference.  ZX81  talk 00:04, 4 June 2011 (UTC)[reply]
I have not once had the pleasure of using Internet Explorer 9, as every time it starts up I get this bag of b*ll*cks - which is why I use Firefox, Opera, or Chrome. Unless you absolutely need to use Internet Explorer, why not give another browser a go? --KägeTorä - (影虎) (TALK) 10:23, 4 June 2011 (UTC)[reply]
You appear to have a lot of toolbars installed. I would start IE in safe mode and disable them, along with any unnecessary plugins, and in the future be more careful about what I install, since installing random junk is a good way to cause problems in more than just IE. Nil Einne (talk) 19:32, 4 June 2011 (UTC)[reply]
I have done that and this is what I get, every time. Oh, and that is a grand total of two toolbars added on by software when I had no choice (one even came with the brand of PC), and one - SnagIt - which I chose to add. I am just as careful as I should be. --KägeTorä - (影虎) (TALK) 02:36, 5 June 2011 (UTC)[reply]
I'd recommend starting a new question about your computer (as this is someone else's post), but obviously that isn't a problem that affects everyone. However, I suspect the answers you would receive would probably be (after the above-mentioned uninstalling of toolbars or trying safe mode) to try a system restore to when it worked and, failing that, reinstall the whole computer (which isn't handy, but would fix it).  ZX81  talk 18:51, 5 June 2011 (UTC)[reply]
I'm actually not bothered about fixing this problem, as I never use IE and have no intention of using it. On a side note, though, this problem has been around since I installed IE9, which, incidentally, came along with all of the updates to Vista (i.e. all updates, from factory-release right up to now), which I received within a day and a half of reinstalling Windows only a few weeks ago, so I really doubt a fresh reinstall of Windows will help at all. --KägeTorä - (影虎) (TALK) 13:40, 6 June 2011 (UTC)[reply]
I don't actually understand why you'd install a toolbar in a browser you never intend to run, but to each their own, I guess. Nevertheless, in future I would recommend disabling, or better yet uninstalling, any toolbar you don't want to run before installing more toolbars you do want, no matter who originally installed the first toolbar. BTW, in this day and age I would consider any software which forces you to install a toolbar you don't want to be rather poor practice since, and this is coming from someone who installs all sorts of junk, it's been many years since I've seen a program like that.
P.S. I may have misunderstood the SnagIt part. If you were saying you wanted the software but not the IE toolbar, then good news: the current (10.0.1.58) version, or the trial at least, does not force you to install a toolbar (well, I didn't finish installing, but you can deselect the IE addon, which I guess is the toolbar, along with whatever else you don't want). I'm actually surprised they ever did, since they don't look super dodgy. I wonder when they were so dodgy; I tried 9.1.3.19 as well and it similarly does not appear to force you to install the toolbar.
Nil Einne (talk) 14:02, 8 June 2011 (UTC)[reply]

cable modem dropping and wired/wireless router


I have a cable modem with a wired/wireless router. There are four computers wired to the router and we have four wireless devices. I've been having a lot of problems with the cable modem dropping - a couple of the lights will go out but it will come back in 1-2 minutes. The cable company says that I am getting a strong signal and thinks that the router is "pushing off" the modem. The signal goes to the modem before it goes to the router, so I don't see how the router could be causing the modem to drop. Is that possible/likely? Bubba73 You talkin' to me? 23:22, 3 June 2011 (UTC)[reply]

It is possible. It's important to note that the actual Internet service passes through the modem to the router. I've seen cases where the MTUs were set too high on a router and caused the modem to kick offline every so often. That being said, cable plant problems are sometimes hard to figure out and narrow down - the signal may look fine at a glance from their office, but there may still be problems with the service (a lot of T3/T4 timeouts on the modem, high forward error corrections, low SNR on the cable node, et cetera). Avicennasis @ 01:04, 2 Sivan 5771 / 4 June 2011 (UTC)
About 9-14 days ago I had this problem badly. Then one morning it was totally down for a while, but when it came back the problem was much better. Then today I was dropped probably nearly 20 times by 3 PM. The cable company said to bypass the router and try it. I had it that way for a few hours and saw it drop only once. Then I put the router back in and it dropped twice in short order, but I haven't seen it drop since. Bubba73 You talkin' to me? 02:03, 4 June 2011 (UTC)[reply]
(It dropped again.) Can a normal user change the MTUs? Bubba73 You talkin' to me? 02:19, 4 June 2011 (UTC)[reply]
On most recent routers, yes. There should be some way to change them - you'd have to look at the support section in the manual/manufacturer's website to see how, since it's different for all of them. It should be pretty easy once you find the instructions. Avicennasis @ 03:44, 2 Sivan 5771 / 4 June 2011 (UTC)
How do I know what they should be set at? Bubba73 You talkin' to me? 04:29, 4 June 2011 (UTC)[reply]
Usually either the ISP or the router OEM has a suggestion. Failing that, I usually recommend 1500, and if it keeps dropping, lower it by 100 each time. If you get to 900 and it's still dropping, then that's not your problem, and you can change it back to the default and call your ISP again. Avicennasis @ 05:05, 2 Sivan 5771 / 4 June 2011 (UTC)
Thanks, I'll work on that tonight. I've been dropped seven times in less than 8 hours today, and I have been away from the computer most of the time. Bubba73 You talkin' to me? 20:15, 4 June 2011 (UTC)[reply]
It was set to 1500. I lowered it to 1400 and updated the router firmware and rebooted it while I was at it. I've been dropped twice since the change to 1400. Bubba73 You talkin' to me? 20:47, 4 June 2011 (UTC)[reply]

Setting it to 1400 didn't fix it, but setting it to 1300 may have. The modem log shows that it had to reinitialize the MAC 60 times between 14:14 and 21:17, but none in the 2 hours and 13 minutes since then. (I'm not sure when I set it to 1300, but it was about then.) Bubba73 You talkin' to me? 03:31, 5 June 2011 (UTC)[reply]

Actually most of the critical errors in the modem log are of two types:

  • No Ranging Response received - T3 time-out (US 8)
  • Unicast Ranging Received Abort Response - Re-initializing MAC

There has been only one in the last 14 hours, one of the T3 time-outs. Bubba73 You talkin' to me? 15:19, 5 June 2011 (UTC)[reply]

Well, today it has dropped a lot of times, so I'm trying 1200. Bubba73 You talkin' to me? 23:34, 5 June 2011 (UTC)[reply]
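Rather than stepping the router's MTU down blindly, a common way to find a workable value is to probe with don't-fragment pings and then add back the 28 bytes of IP and ICMP header. A rough Python sketch, assuming the Windows ping flags and English-language output; the target host is only a placeholder:

    import subprocess

    HOST = "8.8.8.8"                   # any host that reliably answers pings

    def fits(payload):
        # True if a ping with this payload size gets through without fragmenting.
        result = subprocess.run(
            ["ping", "-f", "-l", str(payload), "-n", "1", HOST],
            capture_output=True, text=True)
        return (result.returncode == 0
                and "Packet needs to be fragmented" not in result.stdout)

    payload = 1472                     # 1472 + 28 bytes of headers = the usual 1500
    while payload > 500 and not fits(payload):
        payload -= 10
    print("Try an MTU of about", payload + 28)

On Linux the equivalent probe is ping -M do -s <size>. Once pings of a given size pass cleanly, that size plus 28 is a reasonable MTU to try on the router.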

Does it need to be 64-bit to use both cores?


I'm using 32-bit software on 32-bit Windows on a processor with two cores but a program of interest is using only one of those cores. Does it need to be 64-bit to use both cores? — Preceding unsigned comment added by 129.215.47.59 (talk) 23:46, 3 June 2011 (UTC)[reply]

No, and I'm afraid there's nothing you can do; the software needs to be specifically written/changed so that it actually uses both cores.  ZX81  talk 00:01, 4 June 2011 (UTC)[reply]
To clarify, those are two separate issues. A program must be multi-threaded to use more than one core. Whether or not it is multi-threaded in any useful way is unrelated to whether it's a 32- or 64-bit program. APL (talk) 00:14, 4 June 2011 (UTC)[reply]
To clarify even further, "...there's nothing you can do..." unless you are a software programmer who has access to the program's source-code, and can meaningfully find some exploitable parallelism in the features you're using. Nimur (talk) 00:25, 4 June 2011 (UTC)[reply]
And even further, you won't have luck running a 64-bit program on a 32-bit machine regardless of how many cores it has. -- kainaw 00:44, 4 June 2011 (UTC)[reply]
Two 32-bit cores = 64 bits!  :-) Bubba73 You talkin' to me? 02:46, 4 June 2011 (UTC)[reply]
...And by the same logic, my new i7 950 64-bit Hyperthreaded Quad-core PC is 64 x 2 x 4 = 512-bit architecture! ;-) Nope, as has already been pointed out, it don't work like that, though running the same (single-thread 32-bit) application on an operating system that understands multiple cores may help a little, by running background tasks on the other core. AndyTheGrump (talk) 03:05, 4 June 2011 (UTC)[reply]
I have heard people say that two 2 GHz = 4 GHz. Bubba73 You talkin' to me? 04:47, 4 June 2011 (UTC)[reply]
Those people are wrong. Having two 2GHz cores is, at best, like having two parts of the program each run separately on a 2GHz processor. In practice, contention for resources always makes the situation slower than that. Also see our articles megahertz myth and our messy article clock rate. Comet Tuttle (talk) 18:17, 4 June 2011 (UTC)[reply]
Even if you're a software engineer, you might not be able to do anything. Making use of multiple cores is still an ongoing research problem. Paul (Stansifer) 08:18, 4 June 2011 (UTC)[reply]
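As a concrete illustration of what "written to use more than one core" means, here is a minimal Python sketch. The work being split (summing squares) is just a stand-in for whatever a real program does, and the split into exactly two pieces mirrors the dual-core machine in the question:

    from multiprocessing import Pool

    def sum_of_squares(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        N = 10_000_000

        # Serial version: one core does all the work, however many cores exist.
        serial = sum_of_squares((0, N))

        # Parallel version: it can occupy both cores only because the problem
        # was explicitly split into two independent halves.
        with Pool(processes=2) as pool:
            parts = pool.map(sum_of_squares, [(0, N // 2), (N // 2, N)])

        print(serial == sum(parts))    # True: same answer, different core usage

Nothing about 32-bit versus 64-bit enters into it; what matters is whether the work was divided up at all.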