
Wikipedia:Reference desk/Archives/Computing/2014 August 8

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 8


Using school Wifi vs broadcasting own SSID?


I am on a US university's Wifi network where I have to enter my student ID and password in order to log on. A friend is suggesting that the Wifi will be more secure if we use our own WPA2-protected router, plugged via Ethernet into the school's dorm network, and broadcast our own SSID. Is the latter more secure?

Acceptable (talk) 03:32, 8 August 2014 (UTC)[reply]

More secure against what? In theory, an access point YOU control plugged into the LAN port would be more secure than an AP someone else controls, but ultimately you are ending up on the same network, which is only as secure as its weakest point. Using your own router to connect to the network isn't practically going to protect you in any meaningful way I can think of. Probably running a decent firewall and AV on your own computer would be more beneficial in either case. Vespine (talk) 04:07, 8 August 2014 (UTC)[reply]
Is it more secure against hackers potentially intercepting your traffic? For example, if I upload stuff onto Google Drive or Dropbox, would having my own SSID broadcast point be more secure against potential interception? Like you know how they tell you never to log into your email on a public Wifi hotspot, such as a Starbucks? Does the same precaution apply when using a university's wifi network, where you have to enter a student ID and password to log on? I'm thinking the same precautions should not apply here, because Starbucks is an unsecured wifi hotspot (you don't have to enter a password to log onto the wifi), whereas here I have to enter a student ID and my password. Acceptable (talk) 13:58, 8 August 2014 (UTC)[reply]
It may be more secure between your device and the point where it hits your campus's wired network. It will not be any more secure from there onwards - someone could plug a computer into a network socket in a neighbouring room which is wired back to the same switch, and could possibly use a variety of tricks (such as ARP spoofing or MAC flooding) to make that switch broadcast your data on their port.
If everything transferred over it is encrypted (properly) anyway, then it won't make any difference if someone intercepts it. You may wish to check out HTTPS Everywhere. davidprior t/c 18:19, 8 August 2014 (UTC)[reply]

RGB color model and color names

I don't know this topic very well and my knowledge is only based on some articles I have read. So (0, 255, 0) is a quite light green tone called lime. Why is the model still called RGB (red, green, blue) and not (red, lime, blue)? I know there is this clash between HTML and X11 color names. Can you explain the history of the use of these two sets? The article doesn't describe it in detail. This is pretty confusing and also affects Wikipedia articles. The infobox in the article Green shows lime (0, 255, 0), not the darker variation (0, 128, 0), which most people perceive as the standard "green". In addition to this, the infobox in the article Lime (color) shows (191, 255, 0), probably because it refers to some other model. You have to scroll down to see the web color "lime". Same for Orange (colour), which shows (255, 127, 0). If you click through to another Wikipedia in a different language, other values are shown. Finally I found out that the web color is (255, 165, 0). Through the Simple English article Orange, I also found out that (255, 127, 0) is the Color Wheel Orange, yet another model? Shouldn't we agree on one model on the English Wikipedia? --2.246.13.110 (talk) 04:18, 8 August 2014 (UTC)[reply]
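To see the name clash concretely, here is a minimal Python sketch (the triplets are the standard X11 and HTML/CSS values; everything else is illustrative):

    # The same color *names* map to different RGB triplets in the X11
    # palette versus the HTML/CSS (web) palette.
    X11 = {"green": (0, 255, 0)}                       # X11 "green" is full-intensity green
    WEB = {"green": (0, 128, 0), "lime": (0, 255, 0)}  # web "lime" reuses X11's "green" value

    def to_hex(rgb):
        """Render an (r, g, b) triplet in the #rrggbb notation used on the web."""
        return "#{:02x}{:02x}{:02x}".format(*rgb)

    print("X11 green:", to_hex(X11["green"]))   # #00ff00
    print("web green:", to_hex(WEB["green"]))   # #008000
    print("web lime: ", to_hex(WEB["lime"]))    # #00ff00

So an article that cites "green" is ambiguous until you know which palette it means, which is exactly the confusion described above.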

See color theory, color model, color space, color perception. Also search the ref desk archives for "color"; this sort of thing comes up a lot. My perspective is that the physics of color is fairly straightforward: it's just about the spectrum of light, and how much energy is at each frequency. However, once we get into production of color and human perception, there are many different systems and words, because there are many different uses and goals. Color psychology has a section on culture, which explains a bit about how different people have different words for the 'same' colors, or how the same name is associated with different colors. I haven't read it, but this book also looks like a good reference for this sort of thing: Basic Color Terms: Their Universality and Evolution
I personally don't agree that we need a standard model for en.wikipedia; each article can and should use whatever terms are most appropriate for the topic. E.g. it would be wrong in my opinion to treat Prussian blue as an RGB triplet. There is a good reason that the image there is a photograph of a paint smear, and not a computer-generated signal. SemanticMantis (talk) 17:13, 8 August 2014 (UTC)[reply]

How can I get a 128x128 transparent bitmap?


It can have, like, one black pixel if necessary, but I want to know how I could get ahold of a 128x128 transparent bitmap. I don't have Photoshop or any image editing software except MS Paint. 98.27.241.101 (talk) 07:04, 8 August 2014 (UTC)[reply]

Why not treat yourself to some free image editing software such as GIMP? --TrogWoolley (talk) 13:15, 8 August 2014 (UTC)[reply]
What are you trying to do, create an invisible icon? Anyway, if by "bitmap" you mean a file in .bmp format, then I agree that installing GIMP is a reasonable solution. Looie496 (talk) 13:41, 8 August 2014 (UTC)[reply]
My reading of bitmap is that it can be generally used as a term for a raster graphic, compared to a vector graphic, but of course also often means the specific .bmp format. Anyway, yes the Gimp is the obvious free choice, but it may take more than 5 minutes for a new user to figure out how to do it. SemanticMantis (talk) 17:18, 8 August 2014 (UTC)[reply]
I'm making my own Half-Life mod, and I'm hoping to use a model from a Western-themed mod where the player character carries scorpions as a weapon. However, the Western character uses a "bare flesh-colored hands" texture, whereas Gordon's black gloves were made by being textureless. Since I don't know how to delete the flesh texture, I was hoping to replace it with an invisible texture in the hopes that would solve the problem. Also, I read online that GIMP doesn't support transparent bitmaps, only the workaround with pink squares, and I don't think Half-Life honors the pink squares as transparent. Can someone direct me to a tutorial explaining how to do this in GIMP? 98.27.241.101 (talk) 19:05, 8 August 2014 (UTC)[reply]
Pretty sure someone gave you bad info. The key term is alpha channel. Have a look here [1] for official documentation, and here [2] for a more detailed tutorial. The basic idea is that you'll create a 128x128 black square, then set it to 100% transparency. If your mod system will accept it, you might get better results using .png, rather than .bmp. SemanticMantis (talk) 21:58, 8 August 2014 (UTC)[reply]
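If you'd rather script it than click through the GIMP dialogs, here is a minimal sketch using the Pillow library (assuming you can install Python and run "pip install Pillow"; the filenames are placeholders):

    # Create a 128x128 fully transparent image.
    from PIL import Image

    # "RGBA" mode carries an alpha channel; (0, 0, 0, 0) is black at 0% opacity.
    img = Image.new("RGBA", (128, 128), (0, 0, 0, 0))

    # Optionally add the single opaque black pixel mentioned above.
    img.putpixel((0, 0), (0, 0, 0, 255))

    img.save("transparent.png")    # PNG preserves the alpha channel reliably
    # img.save("transparent.bmp")  # BMP alpha support varies by reader; test in-engine

Whether the Half-Life engine actually honors the alpha depends on the texture format its tools expect, so test the PNG route first if the mod tools accept it.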

Where do hackers start out?


I'm pretty sure nobody is born knowing about technology, so how do future 1337 h4xx0rz start learning about it? I mean a reasonably smart person who can use a computer well enough for the everyday things it is marketed to do, but knows nothing about it beyond that. How do you get from there to writing software, or cracking websites, or building a Star Wars laser system out of eBay bits to shoot down mosquitoes in your back garden? 79.3.107.253 (talk) 08:18, 8 August 2014 (UTC)[reply]

Many hackers and even many regular everyday programmers are autodidacts. They find something that interests them and they learn about it. They find something that they want to do with their computer and learn what they need to do it. I did this with my first web site back in the mid-90s. Learning these things is relatively easy to do these days with the advent of online learning and open courseware. But people could still do it back in the days of computer clubs. Dismas|(talk) 08:38, 8 August 2014 (UTC)[reply]
Sure, but even then, they gotta start somewhere. Just wanted to get a few ideas of what that might be, so I could take a look at them myself. Or is it really a case of you only know it when you see it? 79.3.107.253 (talk) 08:45, 8 August 2014 (UTC)[reply]
I'm a bit unclear on what you're looking for. What ideas are you looking at?
Taking a stab at what you're looking for here... Let's say that you wanted to build a web page. A simple one could be put together with just rudimentary HTML which you can learn from something like w3schools.com. Or you could buy a book on it. There are many out there. O'Reilly publishes quite a few and is probably the leader in the field of computer how-to books. If you wanted to learn about other programming, maybe Linux, or such, then a Raspberry Pi might be the most affordable way to do so. All it would take is access to a keyboard, mouse, and monitor to plug into a US$35 RasPi. From either of these, one could build in the direction they wanted to go. Dismas|(talk) 09:16, 8 August 2014 (UTC)[reply]
The mosquito laser is what prompted me to ask this question. I'd like to know how someone might get from knowing nothing about technology or engineering, to the point where building that in their garage was a serious possibility. The Raspberry Pi looks pretty cool too though. Time for me to figure out what I could try with one as a total noob :) 79.3.107.253 (talk) 11:46, 8 August 2014 (UTC)[reply]
You could start by following the Wikilinks in that article, if the idea of killing mosquitoes with lasers interests you. The key bits are in the lead (laser > LED > semiconductor > electrical conductivity > ?). Then follow those out to the sources cited. By now, you'll have plenty of terms to research and shop for, and gradually more knowledge. Through trial and error (no speedy montages in real life, sadly), you either eventually build that deathray or find something else to do. InedibleHulk (talk) 11:57, 8 August 2014 (UTC)[reply]
  • Everybody follows a unique path, but here's a sort of typical story: (1) Start by installing Linux, which provides a good programming environment without having to pay anything; (2) Read some basic stuff about programming, and then type in hello.c -- everybody's first program -- and compile it; (3) Experiment with programs of steadily increasing complexity; (4) Set up a web page and fool around with the code that runs it; (5) Set up some shell scripts to do cool things; (6) Get involved with some open source software project that does something you're interested in; etc. Looie496 (talk) 13:36, 8 August 2014 (UTC)[reply]
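For the flavor of steps (2) and (3), here is the same ritual sketched in Python instead of C (a deliberately trivial example; hello.c itself is a few lines longer and needs a compiler):

    # Step (2): the smallest possible program -- its only job is to prove
    # that you can write, run, and see output from code on your machine.
    print("hello, world")

    # Step (3): the same idea, one notch up in complexity.
    for name in ("world", "Linux", "reference desk"):
        print("hello, " + name)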
I would start earlier than that: learn math. Learn lots of math. Get really good at math. When you think you've learned "enough" math, study math for five or six more years.
Why!?
Let's talk about real technology problems that we face in Year 2014. Especially if you are interested in physics, electronics, or computing, you will never face a problem caused by moving parts. Every technology problem you encounter will have no moving parts; its root-cause will be invisible; and you will need to use your brain to fix it. You will need to learn to think abstractly - but not in the way that artists or painters of drug-trips think. You will need to think abstractly but subject to rigorous formal rules about structure and process. This is the way that mathematicians think.
When you learn the earliest types of algebra - like the way our teachers taught us long division in 2nd or 3rd grade - you are learning more about computers than your young brain can even comprehend. You are learning problem-solving-by-procedure. As you get older, and you study harder problems (like solving for the roots of a parabola), you're reemphasizing that structured procedural thought-process. Finally, you'll get to the point where you can really get creative - you'll know how to solve for the roots of a parabola by the quadratic formula and by successive approximation - but what other ways can you find those answers? Presto, you have just come up with a method to build a machine to do ____.
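To make that concrete, here is a sketch of both methods applied to the same parabola (the polynomial is an arbitrary example with roots at 2 and 3):

    import math

    # f(x) = x^2 - 5x + 6, which factors as (x - 2)(x - 3).
    a, b, c = 1.0, -5.0, 6.0
    f = lambda x: a * x * x + b * x + c

    # Method 1: the quadratic formula -- a structured procedure, exact answer.
    disc = math.sqrt(b * b - 4 * a * c)
    print("formula:", (-b - disc) / (2 * a), (-b + disc) / (2 * a))

    # Method 2: successive approximation by bisection -- repeatedly halve an
    # interval across which f changes sign until it pins down a root.
    def bisect(f, lo, hi, steps=50):
        for _ in range(steps):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    print("bisection:", bisect(f, 1.5, 2.5), bisect(f, 2.5, 3.5))

Both print the same roots; the point is that one is a closed-form recipe and the other is an iterative search, and recognizing when each applies is the kind of thinking being described.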
For any sufficiently complex problem δ, you'll need a sufficient quantity of creativity ε and a sufficient search-space of tool functions ζ{...} that spans (or intersects) the solution-space. You will need a meta-method to efficiently inject δ onto the appropriate tool-function... and presto, your solution is discovered. The actual construction of the machine naturally follows. If your method is sound, and your problem is relevant, the physical construction of that machine is so incredibly easy that somebody will build it. If you're the one who figured it out, and your solution method was actually in the critical path, rest assured that our economic system will find ways to reward you proportionally.
I find it unfortunate that so many people get "wow"ed by shiny metal and flashing blinking lights and visions of giant monochromatic text consoles. That sort of fiction has been portrayed by movies and internet clickbaiters, and it's put forward as "hackery." That's exactly what it is! Just like a screen writer who regurgitates tired storylines and silly tropes - we call him a hack - because his work is not very good, and its only selling point is that it looks almost like other work that actually has merit.
Nimur (talk) 17:11, 8 August 2014 (UTC)[reply]
"Every technology problem you encounter will have no moving parts" [citation needed]. I suppose you may personally have never had a flat bicycle tire, or a faulty doorknob, etc. But for most of us, moving parts still exist and still occasionally fail. Also, OP specifically talks about a robotics example, so at least some familiarity with how physical things can be fitted together is required for that. I mean, sure, learning more math will help OP in many endeavors, but it's not as though s/he should not start playing with linux or an arduino until after studying math for several years ;) SemanticMantis (talk) 17:24, 8 August 2014 (UTC)[reply]
I do not consider doorknobs or bicycles to be at the forefront of current technology.
Speaking as an individual owner-operator of a few robots of various forms, the mechanical parts of the robot are not very interesting: they are commodity hardware and their availability and quality are pretty much dictated by materials-cost. There have not been significant technology changes in the gear or the DC motor since probably the 1940s. Nearly all current problems in robotics are issues of algorithms in multidimensional control theory and statistical signal processing. There are an immense number of special cases: machine vision, bipedal robot stability, force feedback, cooperative robotics, sensor fusion... the list goes on and on. But if you believe that you'll solve those kinds of issues by welding a steel bar to an aluminum frame, you're actually about eighty years behind the state of the art. The mechanical parts issues are solved problems, and there are experts who know how to solve those problems with "cookbook recipe" solutions. Nimur (talk) 17:32, 8 August 2014 (UTC)[reply]
Two of the absolute best software developers I've ever known both had history degrees from the University of Chicago. So I guess that would be my recommendation. Just kidding, although the thing about the two developers is true, and I mention it to emphasize what others have said: there is no one correct path. Some things I would recommend to get started; some of these are fairly old, but they are classics and IMO every software developer should still read them: The Mythical Man-Month by Fred Brooks, No Silver Bullet also by Fred Brooks, Extreme Programming Explained by Kent Beck, and The Spiral Model by Barry Boehm. I would say Java is the best language to start with, because it's the state of the market and also a true programming language, unlike some of the scripting languages. You can download the Eclipse Java IDE from IBM for free. It's also used a lot and a great environment to get familiar with. On a more theoretical level, the book Elements of the Theory of Computation by Lewis and Papadimitriou is great. Probably best to take a class that covers those kinds of issues. It also depends on whether you want a career in Information Technology (i.e., to progress to manager and above) or if (and nothing wrong with this) you just want to be a really outstanding developer. If you want to progress, my recommendation is to keep in mind that while technology is cool, it's the human issues - defining requirements, usability, i.e., not just building a technical solution but making sure that the solution you are building actually is the right thing for the business - that are critical. Also, just as an editorial aside, I have no use for people who are dogmatic about this or that technology and say things like "Microsoft sucks" or "PHP is an affront to humanity". The critical thing is matching the appropriate technology to the specific business problem and business environment. Religious arguments about some technology being the answer to everything, or the devil that can never be used, are childish and pointless. --MadScientistX11 (talk) 17:56, 8 August 2014 (UTC)[reply]
User:MadScientistX11, IBM? They were there at the beginning but saying it's "from IBM" isn't so accurate anymore. Dismas|(talk) 19:16, 8 August 2014 (UTC)[reply]
Going along with Nimur, I would say that you should start trying to solve problems that are interesting to you. Don't worry if they've been solved before (they have, usually for better and cheaper). The key is not just to learn the math, programming, etc., but to learn how to apply it to new situations that you encounter: ones that aren't in the textbook. For example, I was interested in building a device to flip a coin so that it would land on the same side every time. I remembered my basic physics equations for 2-D motion, so I used them to calculate how high the coin would travel for a given initial velocity. I then thought about how to impart that velocity: I thought about using a spring, but since that would be difficult to calibrate, I used an air compressor instead. Then, I had to relate the pressure applied by the compressor to the height of the coin's flight, which led me to a much clearer understanding of the definition of pressure as force per unit area. I never did get it to work (largely because trying to impart the correct rotation was beyond my capability), but I learned a great deal, and I had a lot of fun trying! OldTimeNESter (talk) 14:09, 13 August 2014 (UTC)[reply]
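The arithmetic in that story is simple enough to sketch (all the numbers below are illustrative stand-ins, not values from the actual device):

    import math

    g = 9.81               # gravitational acceleration, m/s^2

    # Vertical 2-D motion: a coin launched straight up at speed v rises to
    # h = v^2 / (2g), so the speed needed for a given height is v = sqrt(2*g*h).
    target_height = 0.30   # meters (illustrative)
    v = math.sqrt(2 * g * target_height)
    print("launch speed for %.2f m: %.2f m/s" % (target_height, v))

    # Pressure as force per unit area: gauge pressure P acting on the coin's
    # face of area A pushes with force F = P * A.
    coin_radius = 0.012    # meters, roughly a US quarter (illustrative)
    area = math.pi * coin_radius ** 2
    pressure = 20000       # pascals (illustrative)
    print("force on coin: %.2f N" % (pressure * area))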

Why do I have stripes on my screen? Why don't screenshots ever capture them?


The stripes remind me of a vampire - just as a vampire isn't seen in mirrors, the stripes are never seen in my screenshots.

That's why I had to take an external picture with my cameraphone.

http://imgur.com/GKRkyxU

Explain these stripes - why do I see them, but the screenshots don't have them? How do I fix these stripes and make them disappear? Thanks. --Shultz the Editor (talk) 08:18, 8 August 2014 (UTC)[reply]

How do you stop them from showing up? Stop taking pictures with your camera phone. They're caused by the refresh of the monitor. Let's see if we have an article, refresh rate... We do! Dismas|(talk) 08:30, 8 August 2014 (UTC)[reply]
I have to disagree with Dismas - those are not caused by the refresh rate, but are likely due to a faulty screen or electronics. They don't show up on a screenshot since the operating system didn't put them there - a screen grab only captures what the OS thinks should be there.
You probably need someone skilled in computer repair to assess it in order to figure out how to fix it. It could be as simple as reseating a loose connector, or you may need major replacement of parts. WegianWarrior (talk) 08:45, 8 August 2014 (UTC)[reply]
Wegian, what may cause those faults in the first place? --Shultz the Editor (talk) 08:58, 8 August 2014 (UTC)[reply]
Are we talking about the same stripes? I took the colored vertical stripes on the right to be part of the wallpaper. And I thought the top of the screen, where I see a black horizontal area, is what Shultz is referring to. Dismas|(talk) 09:10, 8 August 2014 (UTC)[reply]
The two vertical stripes are what I'm assuming the OP is referring to; one of my co-workers had a similar issue (one stripe, on the left side of the screen). In his case he needed to replace the graphics adaptor - but it could also be a loose connector, a bad chipset or a broken screen. Impossible to say for a layman, doubly so from a photo. Again, I recommend talking to a specialist who can look at it in person. WegianWarrior (talk) 09:19, 8 August 2014 (UTC)[reply]
Looking at it again, I think you're right. If it were the refresh rate, then the vertical lines wouldn't show up in the top third of the screen. Dismas|(talk) 09:35, 8 August 2014 (UTC)[reply]
Can you press on the screen or twist the screen frame and make the lines go away or change color? If so, then it is usually a problem with the connection to the LCD panel. You can open the frame and check the connections. Sometimes just loosening the screws will fix the problem, or sometimes putting a cardboard shim in the frame. There are a number of YouTube videos on this. --  Gadget850 talk 10:51, 8 August 2014 (UTC)[reply]
I had a very similar (but worse looking) problem once with a MacBook Pro, and I agree with Wegian's diagnosis. I was able to fix the problem by re-seating a loose ribbon cable connecting to the display screen. I then bought a tight plastic clamshell case for the laptop to keep it cinched up tight, and it has not come loose again in the following ~3 years. Anyway, something you could definitely try yourself before spending money on it. Just google /[your make/model year] graphics adapter connection/, and with a little luck you'll find a description of how to get to it. SemanticMantis (talk) 17:04, 8 August 2014 (UTC)[reply]
In my experience, problems like this tend to worsen over time. So, my suggestion is to have an external monitor and cable ready, in case the display goes out entirely. (Test it beforehand, so it's all set to use.) If the laptop is old, you might want to replace it, and just use the old one as a PC (hooked up to the external monitor) after that. StuRat (talk) 00:40, 9 August 2014 (UTC)[reply]
A screenshot captures what your computer sends to the monitor. Thus a discrepancy between a screenshot and what you see implies a flaw in the monitor itself. —Tamfang (talk) 02:21, 10 August 2014 (UTC)[reply]
Well, it captures the contents of the primary frame buffer. The problem could be with the video card's output circuitry. -- BenRG (talk) 05:22, 11 August 2014 (UTC)[reply]
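A quick way to see that boundary for yourself: a software screenshot is read back from inside the machine, so it never passes through the video cable, panel, or output circuitry where this kind of fault lives. A sketch using Pillow's ImageGrab (Windows and macOS; assumes "pip install Pillow"):

    from PIL import ImageGrab

    # Grabs the image the OS composited for the display -- upstream of the
    # cable, the panel, and (usually) the card's output stage.
    shot = ImageGrab.grab()
    shot.save("screenshot.png")
    print(shot.size)   # matches the desktop resolution, stripes or not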
I had similar stripes on my laptop before it died. Bubba73 You talkin' to me? 06:17, 11 August 2014 (UTC)[reply]
Google Maps shareable link

Google Maps seems to have radically changed, and I can't find the handy button that used to give you a shareable link to your current view. —Steve Summit (talk) 14:44, 8 August 2014 (UTC)[reply]

As you move to a new place (either by scrolling, or by double-clicking a point on the map) it updates the URL in the browser's bar (and zooming out changes the z parameter in the URL too). So the URL that's shown is already the link to here. I think the old Google Maps had the button because not all browsers let them change the URL in the bar without forcing a reload. 87.114.184.180 (talk) 15:44, 8 August 2014 (UTC)[reply]
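For illustration, the new-style URL packs the whole view state into the path, so building a link by hand is a one-liner (a sketch; the "@lat,lng,zoom" segment is an observed convention of the current site, not a documented API):

    # Construct a shareable new-style Google Maps URL by hand.
    def maps_link(lat, lng, zoom):
        return "https://www.google.com/maps/@{:.6f},{:.6f},{}z".format(lat, lng, zoom)

    print(maps_link(51.477928, -0.001545, 15))
    # -> https://www.google.com/maps/@51.477928,-0.001545,15z (Greenwich Observatory)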
The technical wherewithal to do so is provided by HTML5's pushState call, as illustrated in this example. The "new" Google Maps needs an HTML5-capable browser; a browser which doesn't support this will still use the old interface, which needs the "link to here" button. 87.114.184.180 (talk) 16:30, 8 August 2014 (UTC)[reply]
Thanks for those explanations. I noticed that the browser bar URL was changing as I panned and zoomed, but I wasn't sure I could trust it. —Steve Summit (talk) 17:07, 8 August 2014 (UTC)[reply]