Parsing Newb Help

Hey guys,

I’m pretty new to Ruby, and programming in general, and am having
massive trouble parsing some HTML pages I scraped from Yellow Pages.

So far, I’ve been using the link below as my template

I am trying to compile a list of restaurants in San Francisco, with the
price, ambiance and neighbourhood attributes. I want to import this list
into Excel. Does anyone have an idea of how to adapt the script in the
template for YP?

I have successfully scraped the source code, but when it comes to
parsing, I’m having trouble inputting the right parameters.

Any help would be appreciated!

Benedict W. wrote in post #1074721:

I have successfully scraped the source code, but when it comes to
parsing, I’m having trouble inputting the right parameters.

What have you tried?

7stud – wrote in post #1074725:

Benedict W. wrote in post #1074721:

I have successfully scraped the source code, but when it comes to
parsing, I’m having trouble inputting the right parameters.

What have you tried?

require 'open-uri'

# Yellow Pages list URL ("Best 30 Restaurants in San Francisco, CA with
# Reviews"); the page number is appended below.
BASE_LIST_URL = '...'

LAST_PAGE_NUMBER  = 157
LIST_PAGES_SUBDIR = 'yp-list-pages'

Dir.mkdir(LIST_PAGES_SUBDIR) unless File.exists?(LIST_PAGES_SUBDIR)

for page_number in 1..LAST_PAGE_NUMBER
  page = open("#{BASE_LIST_URL}#{page_number}")

  file = File.open("#{LIST_PAGES_SUBDIR}/yp-list-page-#{page_number}.html", 'w')
  file.write(page.read)
  file.close

  puts "Copied page #{page_number}"
  sleep 4
end

This copied all of the web pages onto my hard drive in .html format.
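A quick sanity check that all 157 pages really landed on disk (just a sketch reusing the constants from the script above):

# Verify that every list page was saved and is non-empty
# (same constants as in the download script above).
LIST_PAGES_SUBDIR = 'yp-list-pages'
LAST_PAGE_NUMBER  = 157

missing = (1..LAST_PAGE_NUMBER).reject do |n|
  path = "#{LIST_PAGES_SUBDIR}/yp-list-page-#{n}.html"
  File.exist?(path) && File.size(path) > 0
end

if missing.empty?
  puts "All #{LAST_PAGE_NUMBER} pages saved."
else
  puts "Missing or empty pages: #{missing.join(', ')}"
end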

Then I downloaded and installed the Nokogiri gem.

Next lines of code:
require 'rubygems'
require 'nokogiri'
require 'open-uri'

# Page 1 of the same Yellow Pages list ("Best 30 Restaurants in San
# Francisco, CA with Reviews").
url = '...'

page = Nokogiri::HTML(open(url))

links = page.css('a')
puts links.length

(this printed out the number 982)

Then typed:

hrefs = links.map { |link| link['href'] }

doc_hrefs = hrefs.select { |href|
  href.match('title') != nil
}
doc_hrefs = doc_hrefs.uniq

After this point, I got kind of lost.
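Roughly the direction I was trying to head in next, in case it helps: loop over the saved list pages, parse each one with Nokogiri, and collect the title/href of every link that has a title attribute (whether the title attribute actually holds the restaurant name on the YP pages is just my guess):

require 'rubygems'
require 'nokogiri'

LIST_PAGES_SUBDIR = 'yp-list-pages'

name_link_pairs = []

Dir.glob("#{LIST_PAGES_SUBDIR}/yp-list-page-*.html").each do |path|
  page = Nokogiri::HTML(File.read(path))

  # Grab every anchor that carries a title attribute; whether that
  # attribute really is the restaurant name is an assumption.
  page.css('a[title]').each do |link|
    name_link_pairs << [link['title'], link['href']]
  end
end

name_link_pairs.uniq!
puts "Collected #{name_link_pairs.length} name/link pairs"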

On Wed, Sep 5, 2012 at 3:40 AM, 7stud – [email protected] wrote:

Getting Started with Nokogiri

More at http://www.w3schools.com/xpath/ (XPath Tutorial)

Parsing HTML requires a good understanding of HTML structure (parents,
children, siblings, etc.) and of CSS (classes, IDs, etc.). As a beginner
it is better to take baby steps rather than jump in at the deep end of
the pool, so this project may be too hard for you.

When using Firefox there are some useful extensions for XPath testing,
namely:
https://code.google.com/p/xpathchecker/
Firefinder (needs Firebug)
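XPath expressions can also be tried out directly from IRB against one of the saved pages; a minimal sketch (the file name and the expression are only placeholders):

require 'nokogiri'

# Experiment with an XPath expression against a locally saved page.
doc = Nokogiri::HTML(File.read('yp-list-pages/yp-list-page-1.html'))

doc.xpath('//h3/a').take(5).each do |node|
  puts node.text
end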

Kind regards

robert

I’m not at all clear what the specific things are that you want to
extract from the website.

In any case, you need to click on View/Source in your browser and
examine the raw HTML to figure out what tags you need to extract
(or attributes of the tags) and how to identify them. You can examine
the web page in your browser, then use Find or Search to locate the
same text in the raw HTML.
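Once you have located a tag, Nokogiri lets you read both its text and its attributes with []. A small sketch using markup like the example further down (the class name comes from that example, not necessarily from the live pages):

require 'nokogiri'

# Reading the text and attributes of a located element.
# The markup mirrors the example below; real YP pages may differ.
doc = Nokogiri::HTML(<<SNIPPET)
<h3 class="title fn org">
  <a href="http://example" title="Some Restaurant">Some Restaurant</a>
</h3>
SNIPPET

a_tag = doc.at_xpath('//h3[@class="title fn org"]/a')
puts a_tag.text       # link text
puts a_tag['href']    # href attribute
puts a_tag['title']   # title attribute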

Then read some basic XPath tutorials (the w3schools link above is a good
starting point).

Here is an example of how to get the names of the restaurants:

require 'nokogiri'

#require 'open-uri'
#doc = Nokogiri::HTML(open("http://www.threescompany.com/"))

html = <<MY_HTML
Stuff

<h3 class="title fn org">
  <a href="http://blah_blah_blah"
     class="no-tracks url "
     rel="nofollow"
     title="Fishermen's Grotto">Fishermen's Grotto</a>
</h3>

blah blah blah

<h3 class="title fn org">
  <a href="http://blah_blah"
     rel="nofollow"
     title="Marnee Thai Restaurant">Marnee Thai Restaurant</a>
</h3>
MY_HTML

doc = Nokogiri::HTML(html)

doc.xpath('//h3[@class="title fn org"]/a[1]').each do |a_tag|
  puts a_tag.text
end

--output:--
Fishermen's Grotto
Marnee Thai Restaurant
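Since the end goal is a list that opens in Excel, the same kind of lookup can feed a CSV file. A sketch under the assumption that price, ambiance and neighbourhood sit somewhere inside each listing's container; every selector below except the h3 one is a placeholder to be replaced after checking the raw HTML:

require 'nokogiri'
require 'csv'

# Write one CSV row per restaurant so the result opens directly in Excel.
# The .price-range, .ambiance and .neighborhood class names are placeholders;
# read the real ones off the YP pages via View/Source.
CSV.open('sf-restaurants.csv', 'w') do |csv|
  csv << ['name', 'price', 'ambiance', 'neighbourhood']

  Dir.glob('yp-list-pages/yp-list-page-*.html').each do |path|
    doc = Nokogiri::HTML(File.read(path))

    doc.xpath('//h3[@class="title fn org"]/a[1]').each do |a_tag|
      listing = a_tag.ancestors('div').first   # hypothetical listing container

      price         = listing && listing.at_css('.price-range')
      ambiance      = listing && listing.at_css('.ambiance')
      neighbourhood = listing && listing.at_css('.neighborhood')

      csv << [a_tag.text,
              price         && price.text.strip,
              ambiance      && ambiance.text.strip,
              neighbourhood && neighbourhood.text.strip]
    end
  end
end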

Parsing HTML requires a good understanding of HTML structure (parents,
children, siblings, etc.) as well as CSS (classes, IDs, etc.). As a
beginner it is better to take baby steps rather than jump in at the deep
end of the pool, so this project may be too hard for you.