Wikipedia:Reference desk/Archives/Computing/2011 October 4

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 4

Output smartphone to monitor

Most smartphone owners want the ability to output their device's display to a monitor while maintaining their input capabilities, essentially allowing the user to "dock" with a monitor. Which smartphones allow you to do this in the easiest manner, and what's holding up wireless smartphone-to-monitor docking? Viriditas (talk) 03:24, 4 October 2011 (UTC)

There are a small number of Android phones that have an HDMI output, and many modern monitors and TVs will take an HDMI input. This is typically noted in Comparison_of_Android_devices, but be careful: some of them will only output HDMI in certain situations, and some of them will blank the output if you're watching a copy-protected movie. So don't base buying decisions on that list alone.
I'm not 100% sure that "most smartphone owners" want this functionality. In my experience most people couldn't care less about it. It's mostly just us developers who find it handy. APL (talk) 09:56, 4 October 2011 (UTC)
I'm fully aware of the Android functionality, but the HDMI is useless. Nobody wants that. People want wireless smartphone-to-monitor docking. The cloud computing model implies that devices will be used to view, not store, data, which is fine when we're mobile; but when we get into a car, we want our phone to connect to the speakerphone, and when we get home, it would be convenient to dock the phone to a large monitor if needed. If you're at all familiar with human-computer interaction, then you know that people interact with technology in the same way that they have relationships with their jackets, briefcases, purses, and other possessions they lug around. People don't want multiple devices; they want one universal device that can be integrated into their daily life and interact wirelessly with other devices and technology. With the cloud computing model, this means smartphones become primarily viewing devices, with the data living in the clouds. This implies that the ultimate working and entertainment environment would be able to connect with the device to enlarge the viewing experience. Wireless smartphone-to-monitor docking is exactly what users want because it gives them the ultimate mobile experience. In the corporate world, this means you can go from cubicle to office to meeting room to the car and just about anywhere without having to use anything but your smartphone to facilitate data viewing. Viriditas (talk) 02:12, 5 October 2011 (UTC)
Ah! You say "This implies that the ultimate working and entertainment environment would be able to connect with the device to enlarge the viewing experience". This may be the source of the misunderstanding. It doesn't imply any such thing. What it implies is that the ultimate working and entertainment environment would also connect to the cloud. There's no need for a complicated intermediary step. All that would do is introduce complications and potential incompatibilities. (And headaches for the non-nerds in the crowd!)
The goal in cloud computing is for every electronic device you own to have instant, up-to-date access to all your stuff, your contacts, your photographs, your documents, etc. All without any direct connection between any of the devices individually.
So long as the cloud remains online, this greatly simplifies life for users, especially non-nerd users. Instead of having to remember and manage which device connects to which, all devices will magically have what you need at all times. APL (talk) 20:17, 5 October 2011 (UTC)
Thanks, but there's no misunderstanding on my end. I know very well how cloud computing works. What I don't know is why smartphone users can't wirelessly connect their phones to any monitor as a viewing device. The working environment I am talking about is not connected to the cloud at all; it is just for viewing. Surely you've seen Minority Report or the Iron Man films, which show how you can view data with multiple viewing devices? In this case, I'm talking about a basic viewing environment, such as a monitor or television. Ideally, there would be a touchscreen involved or some kind of gestural input. The point is, your smartphone would allow you to connect to it wirelessly. This has nothing to do with the cloud. Viriditas (talk) 22:27, 5 October 2011 (UTC)
I admit that would be neat. And in fact I've worked in environments where I've got half a dozen computers around me all working together on the same thing, and it's super fun. However, what most people would do with such a system could be achieved just as easily with cloud-based systems.
As a computer programmer, I've often been disappointed that everyone now owns these remarkably multi-purpose machines, but for the most part they all do the same dozen things with them. For the vast majority of people, computers are for documents, email, web, mp3s, and games, maybe with the occasional music or video editing. The kind of sci-fi setup you're describing wouldn't do anything useful for any of that that couldn't be achieved better and easier with a cloud solution.
Anyway, right now I'm viewing my Google calendar in three different ways on three different devices; any change I make to one will be instantly reflected on the others. Besides the fact that none of them are hologram projectors, I'm not sure how this is different from Tony Stark (AKA Iron Man) viewing a document on three different computers at once. You don't know that his computers weren't connected to an in-house cloud server. In fact, that would probably be a very sensible way to run a multipurpose inventor's workshop like the one he seemed to have. APL (talk) 00:16, 6 October 2011 (UTC)
Unfortunately, the desktop metaphor needs to be retired. Running an extended desktop is interesting, as it allows you to widen your perspective, but its time is over. I completely disagree that computers are for documents, email, web, mp3, and games as you say, and that's the kind of stale thinking that has held progress back on human computer interaction for so long. What I'm describing is not science fiction in terms of fundamentals; I can connect a laptop to a wireless monitor right now. So, why can't I connect a smartphone? You're still talking in terms of devices, when I'm talking about viewing and manipulating information in any environment, regardless of location. For example, let's say you're walking out of the office and your wife calls you on your smartphone. "Honey, meet me at the new Chinese restaurant on 3rd and Main," she says as you're getting into your car. Well, because she called you from that location and her phone was location enabled, that call has a location tag embedded in the log, telling you where to go, flagged as a destination. But you don't even look at it. As you get into your car, your in-dash speakerphone and GPS recognizes your phone automatically and pulls up the flagged location from your call log and announces that you'll be going to the Chinese restaurant as soon as you put your seatbelt on. You don't know where it is, but the GPS knows, and that's all that matters. This is the fluid, pervasive environment I'm talking about. It doesn't matter what device you are using or what you are using to view the information with, because the smartphone talks easily with every appliance it comes into contact with. It's all about allowing this rich environment to flourish, not hold it back. Allowing smartphones to wireless connect to any viewing device or appliance is a step in this direction. So anyway, you get into your car and it takes you to the restaurant. You see your wife sitting at a table near the front next to a clear window screen above the table. As you sit down, the screen recognizes your phone and asks you, by voice, if you would like to connect. You say "yes", and your wife asks how the new architectural design for the Smith account is going. You respond with, "let me show you", and you say "open house view, show Smith account", and the screen above the table comes into action. This is what I'm talking about. It isn't about the hardware, and isn't about the device. It's is and always has been about the data. And even if the data is coming from the cloud to your phone, you will still want to view it externally for interaction or presentation. Viriditas (talk) 05:12, 6 October 2011 (UTC)[reply]
No, that wouldn't work at all. I don't have a wife. APL (talk) 10:15, 6 October 2011 (UTC)
Inability to empathize with an archetypal example used for illustration, noted. Viriditas (talk)
Lots of Android phones have HDMI out these days, but hardly any have 1080 lines as opposed to 720. I'm going to hold out for 1080. 71.141.89.0 (talk) 11:11, 4 October 2011 (UTC)
Agree with APL's second paragraph. Even with tablets it doesn't seem that important to most people. Nil Einne (talk) 12:51, 4 October 2011 (UTC)
I'm afraid you're wrong. Try to pay attention to how people use technology. Viriditas (talk)
I do, which is how I know you're wrong; in fact you seem to have completely misunderstood what cloud computing is about. (What you're suggesting seems to be the pre-cloud computing model, where the phone was supposed to be the central computer because it had all the content.) In the cloud computing model the phone is just one way to access the content and apps; that's in fact one of the key points. No one wants to need their phone to view stuff on the monitor, particularly not when their phone's battery life means they get perhaps 2 hours of viewing if they're lucky, or they need to connect their mobile to a power source. (Even worse, of course, if you're using some kind of wireless 'docking', which implies the phone not only needs to pointlessly retrieve the content from the cloud when the monitor could do that itself, but probably has to re-encode it to serve to the monitor. I'm presuming you do appreciate that uncompressed 1080p50 needs about 2.5 Gbit/s of bandwidth, so serving it uncompressed wirelessly, as HDMI and every other physical interconnection method does, is unrealistic; you'd need to compress it in real time and worry about the sync, latency and power issues that entails.)
They'd much rather their monitor had direct access to the content and apps via the cloud, with the phone completely uninvolved, or at most used to control the monitor if that is needed, probably with a well-designed interface intended for the purpose rather than simply a duplication of what's on the monitor. In the corporate world, this means you go to another office or a meeting room and your content is already there, whether or not you have your phone, whether it's on, has battery life or whatever; that's one of the key points of the cloud computing model. (Good way to impress your boss, NOT: "Sorry, but we need to end this meeting; my phone's battery is running low, and even though my content is all on this fancy cloud you're paying for, I still can't show it to you without my phone." Even better if your phone automatically displays a text message and the whole room sees the private text message from your partner.) You may use your phone to control the content, but maybe not; it depends entirely on the meeting room and what you're doing. Again, even if you do use your phone to control the content, you will often prefer a well-designed interface rather than one simply duplicating what you're presenting; for example, it means you can fix problems without having to show the whole room.
Of course, as always, if you have any actual evidence for your claim, you're welcome to present it. But it seems it may help if you follow your own advice; the best evidence, of course, is to ask any non-geek with a smartphone (and most geeks as well), and they'd think your suggestion is strange or frankly dumb. (Given how much success they've had, looking at Apple is likely a decent bet, and their cloud computing model is definitely not one where the phone is the centre of everything, but rather one like I suggested, where the phone is just one of the devices you can use to access your content and apps, with the content and apps ideally automatically and always being there for any device.) Perhaps thinking about what you're suggesting would also help. Games and apps are among the hot features for phones, but trying to control most of these using your touch-screen phone while viewing a monitor doesn't work very well.
Browsing the internet using your phone works slightly better, but will still often be limiting; many would much rather have a keyboard and mouse and ditch the phone completely. Ditto for composing documents, etc. Really, it's primarily listening to and viewing media (audio, photos and videos, personal and public) where people may want their phones involved when they are using a monitor, although again they'll often prefer a suitable interface intended for control rather than a simple duplication of what's on the monitor. Nil Einne (talk) 10:55, 5 October 2011 (UTC)
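The 2.5 Gbit/s figure above is easy to sanity-check. A minimal sketch, assuming 24 bits per pixel and ignoring HDMI blanking overhead (which pushes the real link rate even higher):

  # Raw bit rate of uncompressed 1080p50 video
  width, height = 1920, 1080     # pixels per frame
  fps = 50                       # frames per second for 1080p50
  bits_per_pixel = 24            # 8 bits each for red, green, blue
  bps = width * height * fps * bits_per_pixel
  print(bps / 1e9)               # ~2.49 Gbit/s, matching the ~2.5 Gbit/s quoted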
Hilarious! You didn't understand a word of what I said above. Reading comprehension is important. I said that people want to use their phones to connect wirelessly to monitors. This does not mean using the phone interface at all. It means using the phone as a universal viewing device to facilitate wireless viewing on a larger monitor and accessing your data and apps in the clouds. In the scenario I presented, you would presumably have the phone near you or on your body, but you would not be physically using it at all. This means monitors, of whatever kind, become the interface, and the phone isn't used at all. I said this quite clearly above. And yes, this is what non-geeks want, as I said above, based on human-computer interaction habits. The current environment for information device design benefits the manufacturer, not the user. If you're working on a project on your mobile, you should be able to sit down at your desk and continue that project using the same workspace without any change. And, if you are away from your desk, you should be able to walk up to any monitor and wirelessly connect to that same workspace using your phone as the intermediary viewing device, while it is in your back pocket. That doesn't mean you would be viewing your phone's screen. So, back to my original question: where are the wireless viewing devices that allow you to connect to your phone? Viriditas (talk) 19:40, 5 October 2011 (UTC)
Viriditas, where the heck do you live? Mars? This question and others illustrate that you have a very strange understanding of consumer desires and motivations. I'm pretty sure, as I sit here right now drinking an energy drink from a clear frosty mug just like I always do, that I've never met the hypothetical "average consumer" you keep describing. Please take a moment to consider the possibility that you are describing yourself, and not the mass of consumers.
To answer this thread, I'm afraid that Nil Einne is mostly right. While we computer nerds tend to prefer powerful self-contained machines with a zillion attachments, the general public is quickly moving towards thin clients and cloud computing. The functionality you desire is being implemented through a completely different mechanism than the one you're imagining. I won't say that "cloud computing" is entirely "finished", but when it is, everything you can do on your phone will be available on everything else you own, from your laptop, to your TV, to your phone, to your tablet. Not because it's been implemented in all those different places, but because you're not actually doing it on those devices; you're doing it on the cloud, and the devices are just your 'window' to the cloud.
I'm sure you don't believe me, because you've decided that not only are you an average consumer, but that the only way of satisfying your desires is the exact technical solution that you've imagined.
Finally, please consider the possibility that someone has actually read and understood your post, and still disagrees with your fundamental premise. APL (talk) 20:06, 5 October 2011 (UTC)
I'm afraid you are only reading what you want, and nobody has answered my question regarding why it is that a smartphone can't connect wirelessly to a monitor. I'm not talking about myself at all but how people use computers and how we are in the midst of a transition from a desktop to a pervasive paradigm, with smartphones representing one step in that change. Nil Einne's ridiculous misrepresentation of my comments shows that he actually agrees with what I've said. I'm not talking about power users who require a desktop for processing power, so what computer nerds prefer has no bearing on this discussion. What I'm talking about is not just using your smartphone as a window to the cloud (as I've already previously described) but being able to access any monitor, television, or view screen with that smartphone, in effect using the smartphone as a viewing device to connect me to the cloud. It's an easy question, but why it is nobody can address it is fascinating. Contrary to what you and others continue to claim, the marketplace recognizes that consumers want to do this with their computers, and they have offered several products to do just that, including Intel Wireless Display and Warpia Easy Dock as only two examples. There is essentially no difference in wanting to do this with your smartphone. Easy question yet no easy answer. Awaiting your next misrepresentation... Viriditas (talk) 22:22, 5 October 2011 (UTC)[reply]
The basic answer is that device-to-device communication is a pain, requires that every device in the chain honor the same standards, and is unnecessary in a world of thin-clients. Not to be insulting, but that kind of connectivity would probably be considered "old fashioned" in the same sense that flying cars and jet packs are old fashioned visions of the future.
Direct device-to-device communication and control produces unnecessary dependencies and configuration hassles. The precise way you're describing it is also a ridiculous bandwidth waste, though admittedly that could be addressed with only minor changes to your vision.
(Laptop docking stations are a special case. They're typically laptop specific, and should really be viewed as two halves of a single device. Anyway, it's been years since I've seen anyone use one, despite the huge increase in laptop sales.)
As an example: right now, my favorite new productivity tool is "Simplenote", a cloud-based note-taking app. There are web/PC/Mac/Android/iPhone clients for it. I can start typing a note on my desktop, then pick up my tablet, press one button, and instantly pick up where I left off. I could even type alternate letters on different devices if I felt like being goofy. Meanwhile, at any moment, if I took out my phone, it would also have an up-to-the-second version of that same note. All of this is achieved without, at any time, my computer becoming a "viewing device" for my tablet or my tablet becoming a "viewing device" for my computer. (Also on my desk: a clear frosty mug of iced tea. The mug that previously held the energy drink is now washed and back in my freezer.)
I haven't tried it, but I'm sure I could use the web-based Simplenote client on my PlayStation, and if I was into that sort of thing (documents, on a TV?) I could get one of those set-top Android boxes.
Admittedly, this doesn't have the slick coolness of the scene in Avatar where a document slides off a computer monitor and onto a tablet, but in general it's immensely more useful, and easier for end-users. (I don't have to worry about the compatibility, configuration, or security of a computer->tablet link-up, because no such link-up exists.)
So, the answer to your question is that such a system could be created without too much trouble, but no one is going to buy a new phone and a new TV and a new computer monitor, and then fiddle with configuration and security, when that way of doing things literally gives no increase in functionality over a cloud-based approach that will largely work with their current equipment.
Of course, all of this depends on trusting the cloud itself, which is a legitimate point of contention, but as far as I can understand, that's not your point, so I won't address it. APL (talk) 00:16, 6 October 2011 (UTC)
You don't need a new TV and a new computer monitor. Please try to think outside the box for once. Apple TV is a network appliance and, for the most part, has some of the same or similar capabilities to what I am talking about, such as letting a monitor view data on multiple devices. Again, this discussion is not about cloud computing. It's about being able to view your data (on the cloud or not) and interact with your phone on any screen anywhere at any time. A wireless virtual laser keyboard, a finger mouse, and any number of accessories that you could probably carry on a keychain would make this work. As you are no doubt aware, Microsoft Surface (the table computer) already has this functionality, so it already exists. Viriditas (talk) 04:52, 6 October 2011 (UTC)
If you insist on your old fashioned master/slave view of how technology would work, I can see why that would make sense to you. But it's out-dated. Try to think outside of your box, instead of stubbornly holding onto the future we were promised in the '90s. Every example you've given so far would work better with a cloud/thin-client model. (Especially once you grant that you're going to buy a separate computer to plug your TV into! At that point, why use the damn phone at all? There's nothing in my phone that I can't access from elsewhere.)
I could start doing work on my phone, stop mid-sentence, throw my phone into a wood-chipper, and continue the work on my TV seconds later without losing a single keystroke of work.
Even your wife story above could be much more easily accomplished with a cloud-based solution. There's no need to transfer the GPS fix from the phone to the carputer, because the carputer would be connected to your Google cloud and would get the data the same instant your phone gets it. The table computer wouldn't get the architectural drawings from your phone, it'd get them from your cloud once it's identified you. Admittedly, the phone could be used as an authentication device, but that's a nicety. It'd work just as well if you took a second to type in your email/password, or if you scanned a QR code you keep on your key-fob, or whatever. You don't need to wait for the future to do this. Find yourself an internet cafe, invite your wife, order a couple of coffees, and look at some architectural drawings.
That's the wonderful thing about cloud computing. The data is completely separated from the devices it's on. The idea that you're going to physically carry your data around with you inside of a small plastic and metal box is outdated and old-fashioned. Stop obsessing about the specific piece of hardware. Just trust that wherever you go, your data will be there. Your car, your phone, your computer, the computer at the cafe, the computer at the office, your tablet, your TV, whatever. Doesn't matter. Did your wife say "Hey honey, use your small plastic box to control this larger plastic box"? No! Of course not! She just wants to see the architectural drawings.
You keep saying "This has nothing to do with the cloud", but that's like asking why you can't buy good buggy whips anymore and insisting that your question has nothing to do with automobiles.
(And yes, I'm aware of Microsoft's $10,000 coffee table. I've used them, and their competitors, and frankly haven't been impressed by any of them. It'd be a neat way to play "Monopoly", though.) APL (talk) 10:15, 6 October 2011 (UTC)
This may surprise you, but it's 2011, the year Google Wallet came out for use on your smartphone. Ever heard of near field communication? Next time I see you lugging your desktop and television through the checkstand, I'll shed a tear in your honor. Seriously, talk about not getting it. Keep truckin' with your "desktop" way of thinking, old man. Cloud computing has absolutely nothing to do with what I am talking about, as you have been informed over and over again. Viriditas (talk) 11:13, 6 October 2011 (UTC)
Uh, Google Wallet is an example of cloud computing. The money is not literally stored on the phone, neither is the cash register being used to "view" data on the phone. Try to keep up.
Point is this: next time the two of us bring dates to an internet cafe and the two lovely ladies ask to see our 'architectural drawings', you're going to launch into a diatribe about how The Man won't give you the technology you need to connect your phone to the computers in the cafe, etc, etc, etc.
Meanwhile, I'll spend five seconds bringing up the drawings without ever touching my phone. APL (talk) 11:27, 6 October 2011 (UTC)
(Incidentally, thanks for reminding me what year it is. Reading your posts makes me think I've somehow time-warped back to about 1995. Did you just find a copy of "The Road Ahead", or something? Because I have to tell you, that book, while visionary, is badly out of date. APL (talk) 11:35, 6 October 2011 (UTC))
Er, why would I need to connect my smartphone to a computer? Seriously, you've lost the plot completely. 1996 called, they want their brick back. Viriditas (talk) 11:43, 6 October 2011 (UTC)
A rose by any other name.
You can make it in any shape you want, but it's still a computer. Even your phone is a computer. Sorry that not all computers are shaped like the ones in your sci-fi-inspired imagination. But while you're bitching about what kind of computers you have available to you, I'll be successfully completing every single one of the use-cases you've described here. APL (talk) 11:47, 6 October 2011 (UTC)
You've lost the plot completely. This isn't about connecting a smartphone to a computer or using the clouds for access and storage. This is about viewing and interfacing with data from a screen/monitor/television/surface. For example, I can connect a laptop wirelessly to a monitor. I should be able to do the same with a smartphone. Why can't I? Viriditas (talk) 11:54, 6 October 2011 (UTC)
I dunno. Maybe there's some obscure phone that does this, but like we said at the beginning, very few people want to do that because it's completely unnecessary. (And a serious battery-suck.) Some phones have docking stations that can be connected to monitors via HDMI, just like I said at the very beginning; however, like we've said a million times before, that kind of thing tends to be vendor-specific, which isn't very useful in the general case. (Like at the Chinese restaurant where your wife wants to see architectural drawings.) That's why people have moved away from that master/slave metaphor (just like people are moving away from the desktop metaphor) to a new paradigm where all the displays are smart devices. (Which I won't describe again.) APL (talk) 12:10, 6 October 2011 (UTC)
It's not unnecessary at all; it is the future. Viriditas (talk) 12:11, 6 October 2011 (UTC)
You seem to be imagining a future where data is carried around in small plastic boxes.
That's come and gone. What the industry is striving for now (and don't argue with me, because I can't change it) is a world where "devices" and "viewing technology" (as you put it) are one and the same. You'll probably get your coffee table display soon enough, but it won't be a dumb computer monitor; it'll be a "device" that doesn't need your phone at all. (But works absolutely seamlessly with it, through the cloud.) APL (talk) 12:36, 6 October 2011 (UTC)
I'm afraid you've stubbornly stuck to a misinterpretation of something I've never said or implied. This has nothing to do with data; I've already said, several times, that the smartphone in question isn't used for its data but to connect with viewing devices. The cloud delivers the data to the device, but that is completely irrelevant to this discussion. Not sure why that is so hard for you to understand. Viriditas (talk) 13:11, 6 October 2011 (UTC)
I understood what you said earlier, and it's clear you were wrong. There's no reason for the phone to be involved, period. The phone is just one device to view content. The phone doesn't need to dock with the monitor; the monitor itself retrieves the content or apps from the cloud. There is no reason why the phone should be involved; that's the whole point of the cloud, and keeping the phone involved is just silly and misses the point. It's possible the phone could be involved as a user identification and geolocation device (to the monitors), but that's about all, and most people are going to want that to be optional.
Also, for someone who complains about people missing their points, you seem to have missed mine. What I was saying is that a lot of the time people do not want to see their phone's screen on another monitor, not that people are still going to be viewing their phone screen despite the monitor. They may want certain apps and certain content to show on the monitor, but they would often want to be able to use their phones for other purposes, and they definitely don't want that private text message from their partner showing up while they are giving a presentation because the monitor is just showing what's on their phone. So no, the monitor doesn't just become a viewport for the phone.
BTW, right now plenty of people do go between computers in offices, or even go home to work, and keep their workspaces. (For security and related reasons, they do usually need to do a short login process.) They don't usually use these same workspaces on their phones because, again, a workspace optimised for a larger monitor doesn't work well on a phone.
And a lot of the time, the interfaces they would want on a big monitor vs a small mobile are quite different. (In fact, Apple has again demonstrated this quite successfully; it's generally accepted that one of the reasons they were so successful was that they recognised that a touch-screen smartphone's and a tablet's interfaces need to be specialised, and not the same as what you use for an older computer or, for that matter, a large monitor.) And I didn't say anything about needing a desktop for processing power, quite the contrary. And again, you have completely ignored the power issues which I raised earlier.
And in the same vein, I did explain (and APL hinted at) why it isn't easy for a phone to connect wirelessly to a monitor. If you still don't understand that 2.5 Gbit/s is a lot of bandwidth even for a fairly near-field wireless communication device with likely hefty power and space requirements, or that real-time compression of video for transmission to a monitor (which will be needed if you want a more reasonable-bandwidth solution) brings all the latency, sync and power issues I mentioned, that's hardly my fault. It's easy to find info on this if you're interested; e.g. [1] mentions a WirelessHD chip which uses between 1.3 and 2 watts.
Anyway, as with APL, it appears you are once again, as you have done before on the RD, convinced you are right, and no amount of reasoning or examples from multiple people is going to convince you otherwise, so this will be my last comment.
Nil Einne (talk) 07:40, 6 October 2011 (UTC)
Forgive me, but I am not in the slightest bit convinced by your continual misrepresentation and misreading of everything I've ever written. You may have an issue with reading comprehension, in which case, thanks for your comments and good luck to you. Viriditas (talk) 08:37, 6 October 2011 (UTC)
Anyway, the simple answer to your question is that few devices interconnect in the way you're imagining, because the problems those sorts of interconnects solve can also be solved by pervasive cloud computing, and that's what everyone's doing nowadays. Every single potential use-case you've described (including the movie scenes) works the same, if not better, with a cloud solution, so there is little to no demand for the type of thing you're asking for.
There just isn't a need for two separate ways of doing the same things. APL (talk) 11:09, 6 October 2011 (UTC)
Actually, smartphones are being used, not desktops and not appliances. My concern is with connecting smartphones to viewing devices. Cloud computing has nothing whatsoever to do with this topic. You appear to be obsessed with pushing the antiquated "desktop" paradigm, when it is more than likely that wireless viewing stations/screens will replace them and are replacing them as smartphones become the universal device. All we need is a way to view and manipulate the data in the clouds using our smartphones as the wireless interface. The technology is already here. Viriditas (talk) 11:15, 6 October 2011 (UTC)
I haven't mentioned "desktop" once. In fact, I'm not sure if you've noticed, but the desktop metaphor is in the process of being deprecated as people move to web-based solutions. (The same cloud based solutions I've been yammering on about.)
You keep saying this : "All we need is a way to view and manipulate the data in the clouds using our smartphones as the wireless interface. "
Why? Why involve a specific plastic box in the process?
The whole thrust here is to separate data and hardware.
I've got a tablet sitting here. It's lovely. Why would I want to use my phone to manipulate it? Why not manipulate it directly, let it access the data directly, and leave the phone out of the loop? Same with my TV, my computer, my other tablet, my PDA, my Pandora, etc. Why should my phone hold any special place of pride in this pantheon of diverse ways of viewing the data? APL (talk) 11:27, 6 October 2011 (UTC)
You evidently must live in a basement and not get out very much. When you open the door and go outside, there's this thing that's called a "smartphone" that keeps you connected with the rest of the world. Sorry, I don't have time to explain it to you. Viriditas (talk) 11:44, 6 October 2011 (UTC)
Actually, this apartment is a good foot below grade, so I guess you've got me pegged.
Seriously though, the future you want is here. All these smart devices work together to do exactly what you want. You just need to accept that the "connection" you're looking for is called "cloud computing". Then it all works. Just like you've been asking all along. Really. Free your mind, and join the 21st century. You don't need a desktop computer at all. Not even a little. Get yourself an Android TV box, an Android car-computer, and a nice, large tablet, and they will all work together exactly as you've described. (If it helps, you can call everything except your phone a "Display Device".) Only the technical detail of how they're connected will differ, but you don't care about technical details; you've said so yourself. APL (talk) 11:58, 6 October 2011 (UTC)
Sorry, but you're not getting it and I don't think you ever will. The future of the computer isn't the desktop or a tablet. It's a device that disappears completely but allows you to interact with your environment. When you are truly mobile, you aren't carrying a large desktop around with you, nor are you carrying a television box, a car computer or a "large" tablet, as you put it. You're carrying a small, thin, tiny smartphone that is virtually undetectable on your body, and you're interacting with the world around you as if the hardware were truly transparent and the data ubiquitous. This is why, in the interim, the smartphone is the dominant device when you're mobile, more so when it's voice-driven and location-aware. And users want to be able to go from environment to environment, from office to car to restaurant as I explained in my example above, without doing anything with a device. Wireless augmented reality devices can also feed off the smartphone and display data, using wearable computing clothes, eyeglasses and sunglasses, and even shoes. At the end of the day, however, it is the viewing technology that becomes important, not the device. The device essentially disappears. Viriditas (talk) 12:08, 6 October 2011 (UTC)
Why do you need to carry the phone? (Besides actually making phone calls, of course.) APL (talk) 12:11, 6 October 2011 (UTC)
Data collection, data sharing, and communicating with viewing devices. Imagine a third-grade field trip. A teacher carries a smartphone in her pocket. Her class of 5 kids each wears wireless, transparent glasses upon which are projected augmented-reality labels. The teacher pulls the smartphone out, turns on a wireless hotspot, adjusts the AR teaching app and sets the display to "botany". The botanical scavenger hunt and quiz begins. Wherever the kids look, little, unobtrusive botanical names pop up. If a kid runs up to a shrub and stares at it for longer than two seconds, another menu comes up, allowing them to drill down for further info, a phylogenetic tree, etc. It also records which kid has studied which plant and allows them to take a quiz in the field, the results of which are recorded, sent back to the teacher's smartphone, uploaded to the school server and assigned a grade. By the time they get home, their parents have already received their report card. Viriditas (talk) 13:09, 6 October 2011 (UTC)

Regenerative Keyboarding

Why does no one appear to have conceived of regenerative keyboarding, especially for laptops? When the energy from typing on keys gets recaptured, it goes back to the battery to recharge it (or speeds along the recharging if it's plugged in).

How would a laptop keyboard be made to recapture the energy given by the fingers clacking on the keys and give it to the battery, and why haven't such keyboards been made yet? What kinds of challenges would need to be overcome to make them? --70.179.176.157 (talk) 06:57, 4 October 2011 (UTC)

The amount of energy you could retrieve from keypresses would be so small it's difficult to see how that could power anything, let alone a laptop.--Shantavira|feed me 07:20, 4 October 2011 (UTC)
Lots of people have conceived of it. Googling generating energy from keyboard returns a lot of discussion. Here is a NY Times article about Compaq patenting a method to extend battery life. But as already mentioned, you're not going to be able to power your laptop solely by keystrokes, not unless its power requirements are dramatically reduced. --Colapeninsula (talk) 10:03, 4 October 2011 (UTC)
You'd need to know a few things first:
1. How much energy could be captured from one keypress (this probably varies, depending on how stiff you want the keys to be)
2. How many keypresses per hour people make on average (this probably varies a lot with the type of use; browsing the internet uses very few, for example)
3. How much energy per hour the computer requires (it would be great to correlate this with the different types of use)
The odds are that 3 dwarfs the product of 1 and 2, in the way that adding a bike pedal to a car for recharging the battery would not get you very much actual energy. The amount of watts generated by human power is generally pretty low compared to our standard level of energy consumption, as anyone who has slaved over various exercise equipment that tells you such things would know. You could try to decrease 3 by making the computer extremely energy efficient, but even then, it seems unlikely to me that key pressing would be more advantageous than, say, cranking a handle (per the OLPC XO-1), which is probably a more efficient way to transfer force than moving keys a 10th of an inch. --Mr.98 (talk) 14:05, 4 October 2011 (UTC)
On a stationary handbike, I can burn calories at a rate of roughly 30 or 40W, but it's tiring, and I probably couldn't keep going at that rate for more than 15 minutes. If the device could 100% efficiently capture that energy (which it can't), it could just about power my laptop. Typing probably takes a couple orders of magnitude less power, and people don't type continuously, so there just isn't enough energy to make an appreciable difference, even if efficiency weren't a problem. Paul (Stansifer) 14:27, 4 October 2011 (UTC)
From an article in Journal of Dynamic Systems, Measurement, and Control (DOI: 10.1115/1.1902823), I get these numbers as reasonable values:
  • key displacement: 4 mm
  • typing rate: 90 words per minute, or 450 key presses per minute
  • force for the space bar: 0.8 N
  • force for normal keys: 0.4 N
  • laptop power consumption: between 20 and 90 W (from Wikipedia)
Taking 0.5 N as the average force, a typing rate of 450 keystrokes per minute, and power consumption of 30 W, the results of one hour of typing would be:
  • Energy produced: 54 J
  • Energy used: 108,000 J
DS Belgium (talk) 01:19, 6 October 2011 (UTC)
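A minimal sketch of that arithmetic (energy per keypress = force × key travel), using only the figures quoted above:

  # Energy balance for one hour of typing, from the numbers above
  force = 0.5                          # N, average key force
  travel = 0.004                       # m, key displacement (4 mm)
  presses = 450 * 60                   # keypresses in one hour
  produced = force * travel * presses  # 0.002 J per press -> 54 J
  used = 30 * 3600                     # 30 W for 3600 s -> 108,000 J
  print(produced, used, used / produced)  # 54.0 108000 2000.0

So the laptop consumes about 2,000 times what typing could return, even assuming 100% conversion efficiency.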
Good work on that math. I can't help but point out that another way of looking at the data is that this would only work if you could learn to type at 180,000 WPM. APL (talk) 10:01, 6 October 2011 (UTC)

Kinetically-regenerative phones

When users place phones in their pockets, the movements from walking and so on involve energy. Couldn't a kinetic energy-recapturing mechanism be miniaturized enough to be placed in phones so that their batteries last longer (or even get recharged with more juice) thanks to this? What engineering challenges would need to be overcome in order for this regenerative system to work? --70.179.176.157 (talk) 06:57, 4 October 2011 (UTC)

Yes, you are referring to a motion-powered charger. These are still in their infancy, and the amount of energy they capture is very small, but if applied over a sustained period it could be useful. However, you would have to do an awful lot of walking to recharge a battery.--Shantavira|feed me 07:31, 4 October 2011 (UTC)
For one, I think phones would have to consume a lot less energy for it to be worthwhile. Even self-winding watches will run down if you don't lead an active life, and watches take very little energy to run. APL (talk) 09:51, 4 October 2011 (UTC)
While I don't have any actual power numbers, this sort of thing is probably possible with phone technology available today: you could probably add motion charging to a proper cell phone like a Nokia 1200 to bring standby life to three weeks or a month. Switch to an e-ink screen like the Motorola FONE's to save even more power, and use the latest battery technology, and maybe you'd never have to recharge if you lived an active enough life. But then you'd be looking at a price point similar to or in excess of that of a lot of flashy smartphones, for something that is just an honest phone. The market for such phones among people who are actually two weeks away from a wall jack or genset just isn't that great. Nevard (talk) 23:20, 4 October 2011 (UTC)
Note that this is two weeks away from a wall jack only if you don't use the phone. I find it hard to believe that this sort of thing would really compensate for the amount of energy it takes to actually make calls (which seems to be the majority of the drain on these phones). --Mr.98 (talk) 20:12, 5 October 2011 (UTC)
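To put rough numbers on that, here is an illustrative sketch; the harvester and phone figures below are assumptions chosen for the example, not measurements:

  # Illustrative energy budget for a motion-charging phone (all figures assumed)
  harvester_watts = 0.001            # ~1 mW average while walking (optimistic assumption)
  walking_hours = 2                  # hours walked per day (assumption)
  harvested = harvester_watts * walking_hours * 3600  # ~7.2 J per day
  battery_joules = 1.0 * 3.7 * 3600  # 1000 mAh at 3.7 V is roughly 13,300 J
  call_watts = 1.0                   # rough radio draw during a call (assumption)
  print(harvested / call_watts)      # ~7 seconds of talk time per day of walking
  print(battery_joules / harvested)  # ~1,850 days of walking for a full charge

Even under these generous assumptions, a full day's walking buys seconds of talk time, which is why such chargers only make sense for very low-power phones.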

BlackBerry not updating Facebook

My BlackBerry Bold has very suddenly stopped updating Facebook via the 'Social feeds' app – it says it's done it. It updates Twitter fine. But the Facebook posts just don't appear. I've tried logging out and in again. No luck. Can anyone advise...? ╟─TreasuryTagvoice vote─╢ 10:30, 4 October 2011 (UTC)

And the relevance this has to Wikipedia is...? (FWIW, the iPhone app isn't updating either) The Rambling Man (talk) 12:41, 4 October 2011 (UTC)
It's a computing-related question - seems reasonable enough to me. AndrewWTaylor (talk) 12:48, 4 October 2011 (UTC)
Indeed ... this is the Reference Desk, not the Help Desk - questions posted here don't have to be about Wikipedia. Gandalf61 (talk) 12:49, 4 October 2011 (UTC)
Working on iPhone again. What a useful thread! The Rambling Man (talk) 13:54, 4 October 2011 (UTC)
meta discussion moved to WT:RD
The following discussion has been closed. Please do not modify it.
Question: is this just a chat-board that has no relevance to Wikipedia at all, other than dubious links to, say, Facebook (although not in this case)? Presumably the aim here would be to link responses to Wikipedia articles? So questions like "my app doesn't work" surely don't belong here? The Rambling Man (talk) 16:48, 4 October 2011 (UTC)
There is a link to the Ref Desk Guidelines in the header. Beyond that, any discussion should take place on the Talk page. --LarryMac | Talk 17:05, 4 October 2011 (UTC)[reply]
Gotcha. So whenever an app fails to work on a Blackberry or an iPhone we should entertain a thread here. That doesn't happen too often, after all.... The Rambling Man (talk) 17:08, 4 October 2011 (UTC)
What's the matter with you? This is the computing RefDesk. Furthermore, you've been told something that you're experienced enough to know anyway: that if you wish to discuss the functions and concept of the RefDesk, WT:RD is ready and waiting. This isn't the place to do it. ╟─TreasuryTagActing Returning Officer─╢ 17:15, 4 October 2011 (UTC)

Nothing, just surprised we waste resources here by answering "why doesn't my app work?" questions which clearly have nothing to do with improving Wikipedia. A total waste of time. The Rambling Man (talk) 17:17, 4 October 2011 (UTC)

You want me to say it again? I'll say it again. You've been told something that you're experienced enough to know anyway: that if you wish to discuss the functions and concept of the RefDesk, WT:RD is ready and waiting. This isn't the place to do it. ╟─TreasuryTagCANUKUS─╢ 17:18, 4 October 2011 (UTC)
Done already. Surprised someone with your "experience" needs to ask why a particular "app" stops working for a few hours. On the other hand, not that surprised. The Rambling Man (talk) 17:20, 4 October 2011 (UTC)
I don't have to justify myself to you, particularly over such a trivial non-issue. If you aren't interested in providing helpful responses then the RefDesk probably isn't the best environment for you. ╟─TreasuryTagCounsellor of State─╢ 17:24, 4 October 2011 (UTC)
Ironic. The Rambling Man (talk) 17:30, 4 October 2011 (UTC)
I wasn't "hiding my behaviour" (per your helpful edit summary), I was doing as you asked, and moving to the talk page. The Rambling Man (talk) 17:37, 4 October 2011 (UTC)

Signal boost for Samsung Intensity

I want to know if there's any way I can boost my cell signal for my Samsung Intensity 2. I do NOT want to buy anything to do this. I have seen people boosting cell signals with a wire stuck in the antenna jack, but my phone does not have one. I want to know if there is some way to boost my signal strength without an antenna jack. (I am somewhat okay with opening it up to do this. I unscrewed it and opened it up today but wasn't able to tell which part was the antenna.) — Preceding unsigned comment added by 99.89.176.228 (talk) 21:40, 4 October 2011 (UTC)

Since nobody else has answered, I'll suggest that as far as I know it might be possible to improve reception by lengthening the antenna (some people suggest simply inserting a USB cable in the USB socket), but not the transmission strength as this is controlled automatically. (Transmission might even be adversely affected by doing this.) If you Google "how to boost cell signal" you will find plenty of tips (some of which might void your warranty).--Shantavira|feed me 16:15, 5 October 2011 (UTC)
Another possibility is to make your reception/transmission more directional versus omni-directional. Try this article, Cantenna, and point it at your nearest cell tower. Can't guarantee it will work at whatever frequency your phone uses. See also Unwired Forum, section 4.3, Antenna for a few ideas about reflectors. - 220.101 talk\Contribs 17:41, 5 October 2011 (UTC)[reply]
Lots of ideas here. Cell phone antenna (the ScienceRef desk might have been a better venue for this question?) - 220.101 talk\Contribs 18:54, 5 October 2011 (UTC)[reply]

WiFi USB 5 dBi

If you compare a WiFi USB adapter (5 dBi) to a normal, plain laptop Wi-Fi antenna, how much is the difference in distance? Quest09 (talk) 23:07, 4 October 2011 (UTC)
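As a rough guide: in free space, received power scales with antenna gain and falls off with the square of distance, so each extra 6 dB of gain roughly doubles the usable range. A back-of-the-envelope sketch, assuming a typical built-in laptop antenna of about 2 dBi (an assumption; real indoor propagation varies a lot):

  # Approximate range gain from a higher-gain antenna (free-space model)
  laptop_dbi = 2.0     # assumed gain of a typical built-in laptop antenna
  usb_dbi = 5.0        # gain of the 5 dBi USB adapter
  delta_db = usb_dbi - laptop_dbi
  # Friis equation: received power ~ gain / distance^2,
  # so achievable range scales as 10^(delta_dB / 20)
  range_factor = 10 ** (delta_db / 20)
  print(range_factor)  # ~1.41x the distance, all else being equal

Indoors, walls and multipath dominate, so treat this as an upper bound on the improvement rather than a promise.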