comp.lang.ruby

newbie to Ruby

Charles Pareto

8/27/2007 3:29:00 AM

Hi,
I was reading through Learning Ruby and was trying to get the example
on page 119, which scrapes Google, to work. But when I run it, nothing
happens. Any help would be appreciated. Thanks.

require 'open-uri'

url = "http://www.google.com/search?q=...

open(url) do |page|
  page_content = page.read
  links = page_content.scan(/<a class=l.*?href=\"(.*?)\"/).flatten
  links.each { |link| puts link }
end

5 Answers

Mark Gallop

8/27/2007 4:01:00 AM

Hi Charles,

Charles Pareto wrote:
> links = page_content.scan(/<a class=l.*?href=\"(.*?)\"/).flatten
>
I don't think that regular expression (regexp) works. Maybe Google has
changed their markup since the book was written. I think it now puts
"href" before "class".

If you work out the correct regexp, let us know.
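
Something like this might be closer, assuming the results now look
like <a href="..." class=l ...> (I haven't checked the actual markup):

links = page_content.scan(/<a href=\"(.*?)\" class=l/).flatten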

Cheers,
Mark

Dan Zwell

8/27/2007 4:26:00 AM

Mark Gallop wrote:
> Charles Pareto wrote:
>> links = page_content.scan(/<a class=l.*?href=\"(.*?)\"/).flatten
>>
> I don't think that regular expression (regexp) works. Maybe Google has
> changed their markup since the book was written. I think it now puts
> "href" before "class".

As Mark said, Google changed their code somewhat. If you work out the
correct regular expression and it still seems to give erratic results,
here is a hint: the naive solution uses ".*?" in a certain place, but
that will still match too many results. Try [^"]*? instead, because you
probably don't want to match quotes. (I just tried this, and that was
the problem I encountered.)
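
Here's a contrived example of what I mean (the markup is made up, but
the effect is real):

html = '<a href="http://ads.example/" rel=nofollow>Ad</a>' \
       '<a href="http://example.com/" class=l>Result</a>'

# Lazy ".*?" happily crosses the first closing quote and the tag
# boundary on its way to the nearest " class=l":
p html.scan(/<a href=\"(.*?)\" class=l/).flatten
# => one bogus "link" spanning both anchors

# "[^\"]*?" can never step over a quote, so only real class=l links match:
p html.scan(/<a href=\"([^\"]*?)\" class=l/).flatten
# => ["http://example.com/"]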

By the way, a robust regex to match all HTML links looks kind of nasty,
but perhaps you should try writing one; it's a good exercise. (Of
course, that's not what you want for this: you want to match only the
links with class=l.)
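
For the curious, a reasonably general pattern might start out like
this (still not bulletproof; HTML isn't a regular language, so no
regex will catch everything):

# matches href="..." or href='...' in any attribute position
any_href = /<a\b[^>]*\bhref\s*=\s*["']([^"']*)["']/i
links = page_content.scan(any_href).flatten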

Regards,
Dan

John Joyce

8/27/2007 5:13:00 AM

As those guys said, Google probably changed their code since the book
was written.
That's not to prevent web scraping; it's just that web sites are
pretty transitory. They change all the time and very easily. This
makes sophisticated web scraping a moving target.

Jaime Iniesta

8/27/2007 6:22:00 PM

2007/8/27, John Joyce <dangerwillrobinsondanger@gmail.com>:
> That's not to prevent web scraping; it's just that web sites are
> pretty transitory. They change all the time and very easily. This
> makes sophisticated web scraping a moving target.

Yes, web scraping using just open-uri and regular expressions is
pretty low-level.

Try Hpricot or scRUBYt for higher-level, more flexible scraping.
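
For instance, with Hpricot the scrape becomes a CSS-selector lookup
instead of a regexp. A rough sketch (the query URL is just an example,
and "a.l" assumes Google still tags result links with class=l):

require 'open-uri'
require 'hpricot'

url = "http://www.google.com/search?q=ruby"
doc = Hpricot(open(url))

# "a.l" selects every <a> with class=l, no matter where the
# attribute sits inside the tag
(doc/"a.l").each { |link| puts link['href'] }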

--
Jaime Iniesta
http://jaimei... - http://... - http://freelance...

Charles Pareto

8/29/2007 10:15:00 PM

Jaime Iniesta wrote:
> Yes, web scraping using just open-uri and regular expressions is
> pretty low-level.
>
> Try Hpricot or scRUBYt for higher-level, more flexible scraping.

So I tried out what everyone said, and I got it to work. Here is
what I did.

page_content.scan(/<a href=\"([^"]*?)\" class=l[^"]*?/).flatten
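
In context, the whole script now reads like this (the query URL below
is just a placeholder; the trailing [^"]*? above is harmless, since
nothing follows it in the pattern):

require 'open-uri'

url = "http://www.google.com/search?q=ruby"   # placeholder query

open(url) do |page|
  page_content = page.read
  # href comes first now, and [^"] keeps the capture inside one attribute
  links = page_content.scan(/<a href=\"([^\"]*?)\" class=l/).flatten
  links.each { |link| puts link }
end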