Wikipedia:Reference desk/Archives/Computing/2015 November 25

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 25

How do they do it?

How do programs like Spoon Studio and ThinApp redirect registry and filesystem reads/writes of third-party programs without disassembling and rewriting their source code? — Preceding unsigned comment added by 94.42.140.10 (talk) 01:14, 25 November 2015 (UTC)[reply]

See Hooking for a description of the process and some code examples. Tevildo (talk) 01:33, 25 November 2015 (UTC)[reply]
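As a very rough illustration of the idea (not how ThinApp or Spoon actually implement it; they hook Win32 calls such as CreateFileW and the registry functions inside the target process, not the Python runtime), here is a minimal Python sketch of hooking: the built-in open() is replaced with a wrapper that silently redirects writes into a sandbox directory, so unmodified calling code ends up writing there. The sandbox path is just a made-up placeholder.

```python
import builtins
import os

SANDBOX = "/tmp/sandbox"       # hypothetical redirection target
_real_open = builtins.open     # keep a reference to the original function

def hooked_open(path, mode="r", *args, **kwargs):
    # Redirect any write access into the sandbox instead of the real location.
    if any(flag in mode for flag in ("w", "a", "+", "x")):
        os.makedirs(SANDBOX, exist_ok=True)
        path = os.path.join(SANDBOX, os.path.basename(str(path)))
    return _real_open(path, mode, *args, **kwargs)

builtins.open = hooked_open    # install the hook

# Code that was never modified or recompiled now writes to
# /tmp/sandbox/config.ini instead of ./config.ini.
with open("config.ini", "w") as f:
    f.write("redirected\n")
```

The commercial tools do the same kind of interception at a lower level, which is why the guest program needs no source changes.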

Website downloader

Hello, I'm looking for reliable open-source software similar to this. Can someone help me with this please? Regards. -- Space Ghost (talk) 05:51, 25 November 2015 (UTC)[reply]

cURL is very popular. It is a free software tool and library that can be used as a web crawler. Nimur (talk) 12:06, 25 November 2015 (UTC)[reply]
It may help if you specify what exactly you're looking for that's different from what you already have. You linked to HTTrack, which, as the page you linked to and our article say, is FLOSS released under GPL v3, so open source isn't the issue. Is it that it isn't reliable? If so, why isn't it reliable? Or is it reliable but doesn't have certain features you need? If so, what are these? Nil Einne (talk) 12:42, 25 November 2015 (UTC)[reply]
wget can also copy or download an HTML page or other file without following the links inside. It simply grabs the target, or whatever the web server redirects it to for the requested file. --Hans Haase (有问题吗) 13:15, 25 November 2015 (UTC)[reply]
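If that single-fetch behaviour is all that's wanted, here is a minimal Python sketch (the URL is a placeholder): urlopen follows any HTTP redirect the server issues and saves whatever it finally points at, without touching the links inside the page.

```python
from urllib.request import urlopen

URL = "https://example.com/page.html"   # hypothetical target

# urlopen follows HTTP redirects automatically, so we end up with
# whatever the server finally serves for the requested file.
with urlopen(URL) as response:
    html = response.read()

with open("page.html", "wb") as f:
    f.write(html)
```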
Nil Einne, Nimur, wget: Someone gave me the link and told me that the software downloads the whole website, and that after downloading you can also follow its links and everything without using the internet dongle. I'm looking for something similar. I went to the website, but the information was confusing, and I was put off after reading the following, i.e., This free software program is not guaranteed, and is provided "as is". I thought I'd ask you guys if you could help me with this or something similar... What do you guys suggest? I'm happy as long as I can download and move through the links without an internet connection. -- Space Ghost (talk) 18:44, 25 November 2015 (UTC)[reply]
What you read is standard terminology. Pretty much all FLOSS is not going to be guaranteed in any way. The only way to get some sort of guarantee would be to pay for a support contract. Although it isn't really down to the FLOSS part: closed-source software that's free will generally likewise come with no guarantees. Heck, even a lot of commercial proprietary software actually has little guarantee if you read the EULA. Nil Einne (talk) 18:54, 25 November 2015 (UTC)[reply]
"As is, no guarantee" doesn't mean it won't work, just that you can't sue the author for damages if it doesn't work. No one could afford to make free software if they couldn't disclaim liability in this way.
I think you will find any website mirroring program difficult to use because it's inherently a difficult problem (impossible in general). HTTrack is probably a good choice because it's widely used, which means you'll have an easier time getting help (but not from me, as I've never used it). -- BenRG (talk) 18:51, 26 November 2015 (UTC)[reply]
I tried a website; it chewed through my data (145 MB) and was still downloading. I really wasn't expecting it to be more than 100 MB. -- Space Ghost (talk) 19:50, 26 November 2015 (UTC)[reply]
Did you look at what it was downloading? It may have spread to pages outside of that site (since most sites link to other sites, and there's not necessarily any clear way to tell what a site's boundaries are unless you tell it), or it may have been fetching an infinite number of useless autogenerated pages (such as blank pages 3, 4, 5... of a list of items that ended on page 2), or it may have fetched multiple copies of the same pages under different aliases (if the site includes a random ID in URLs, for example). If it's a popular website, someone else may have solved these problems and made a working HTTrack configuration that you could use. -- BenRG (talk) 06:54, 27 November 2015 (UTC)[reply]
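To make those scope problems concrete, here is a hedged Python sketch of the kind of limits a mirroring tool applies (the start URL and page cap are made-up placeholders): it stays on one host, keeps a visited set so the same URL isn't fetched twice, and stops after a fixed number of pages so autogenerated link farms can't run away with your data allowance.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

START = "https://example.com/"   # hypothetical site to mirror
MAX_PAGES = 50                   # hard cap against runaway downloads

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

host = urlparse(START).netloc
visited = set()
queue = [START]

while queue and len(visited) < MAX_PAGES:
    url = queue.pop(0)
    if url in visited:
        continue                          # skip aliases already fetched
    visited.add(url)
    try:
        with urlopen(url) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except OSError:
        continue                          # unreachable page, move on
    parser = LinkParser()
    parser.feed(page)
    for link in parser.links:
        absolute = urljoin(url, link)
        if urlparse(absolute).netloc == host:   # stay on the same site
            queue.append(absolute)
    print("fetched", url)
```

Even this toy version shows that the hard part is deciding what not to download; tools like HTTrack just expose many more knobs for those decisions.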
No, but you are right... I guess I have to stick with the basics... -- Space Ghost (talk) 20:53, 27 November 2015 (UTC)[reply]

Thanks all -- Space Ghost (talk) 20:53, 27 November 2015 (UTC)[reply]

Could a bottom-up parser use leftmost derivation in reverse to parse a string?

I would like to know whether a bottom-up parser succeeds only if it parses in the reverse of the rightmost derivation. Isn't it possible for this to happen in the reverse of the leftmost derivation? Is there any reason that makes a bottom-up parser parse only in the reverse of the rightmost derivation? JUSTIN JOHNS (talk) 09:11, 25 November 2015 (UTC)[reply]

A bottom-up parser succeeds iff there's any derivation (i.e., iff the sentence is grammatical), and among the possible derivations it finds the rightmost one. This is just a side effect of how the parser works; it isn't a limitation (unless you wanted some other derivation, I guess).
As an example, if the grammar is S→AB, A→pq, B→rs, then when parsing the only grammatical sentence, "pqrs", the bottom-up parser first collapses pq to A, then rs to B, then AB to S. That's the reverse of the rightmost derivation, which is S→AB then B→rs then A→pq. A bottom-up parser that read the sentence from right to left would produce (the reverse of) the leftmost derivation. -- BenRG (talk) 21:57, 25 November 2015 (UTC)[reply]
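A hedged sketch of that example as a tiny shift-reduce (i.e. bottom-up) parser in Python; the grammar and input are the ones above, and the list of reductions it records comes out in exactly the reverse of the rightmost derivation:

```python
# Grammar: S -> A B,  A -> p q,  B -> r s
RULES = [("A", ["p", "q"]), ("B", ["r", "s"]), ("S", ["A", "B"])]

def parse(tokens):
    stack, reductions = [], []
    tokens = list(tokens)
    while True:
        # Reduce greedily whenever the top of the stack matches a rule body.
        reduced = True
        while reduced:
            reduced = False
            for head, body in RULES:
                if len(stack) >= len(body) and stack[-len(body):] == body:
                    del stack[-len(body):]
                    stack.append(head)
                    reductions.append(f"{head} -> {' '.join(body)}")
                    reduced = True
        if not tokens:
            break
        stack.append(tokens.pop(0))   # shift the next input symbol
    return stack, reductions

stack, reductions = parse("pqrs")
print(stack)        # ['S'] -- the whole sentence reduced to the start symbol
print(reductions)   # ['A -> p q', 'B -> r s', 'S -> A B']
# Read backwards, that is the rightmost derivation:
# S -> A B, then B -> r s, then A -> p q.
```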

Yeah, thanks. I just wanted to know whether bottom-up parsing can work in the reverse of the leftmost derivation? It's possible. I really couldn't get why a bottom-up parser always parses in the reverse of the rightmost derivation. It might be because it reads the string from right to left, isn't it? JUSTIN JOHNS (talk) 06:23, 26 November 2015 (UTC)[reply]

Well, a derivation is (by definition) top-down. The bottom-up parser does things from the bottom up, so it can only produce a derivation in reverse. It reads the string from left to right, and does reductions from left to right, so when you reverse the order (as you must to get a derivation), the expansions happen from right to left. -- BenRG (talk) 18:57, 26 November 2015 (UTC)[reply]

Global publication alert

I have been trying to find out if there is such a thing as a global publication alert service/site - i.e. something that monitors the output of all indexed journals in a given field (or across all fields) and pings you if a publication matching your criteria is published. In my case, I just have the author's name and the likely keywords, but no publication info other than that it will be out somewhere, sometime next year (by which time I will have forgotten to go look for it). Individual publishers (e.g. Elsevier) have that service, but is there a cross-publisher scraper-type one?-- Elmidae 12:37, 25 November 2015 (UTC)[reply]

Google Scholar has an Alerts feature. As I understand it, you can use the same syntax as for normal Scholar queries, and new publications are matched against this. --Stephan Schulz (talk) 12:47, 25 November 2015 (UTC)[reply]
...you know, Scholar was the first place I looked, and that completely escaped my attention. Durrr. Thank you! -- Elmidae 12:54, 25 November 2015 (UTC)[reply]
You might try using the search operator site:www.example.com alongside your keywords, to restrict the alert to a single site. Some pages get retrieved for indexing a hundred times per day, some only every few weeks. --Hans Haase (有问题吗) 14:59, 25 November 2015 (UTC)[reply]
Pubmed has good alerts too, that's what I use. It sends me a weekly digest of my chosen terms. Fgf10 (talk) 20:16, 25 November 2015 (UTC)[reply]