Ruby on Rails is a framework for building and serving web apps -- basically the exact opposite of what you want to scrape with.
Generally speaking, you want scrapers to either run as their own daemon/service or be dispatched tasks from a message queue. Requesting web pages is a blocking operation, and scraping often creates additional tasks (i.e., a scraped page yields more pages to scrape), so those are things to consider as well. Scraping is often best implemented in an event-driven, asynchronous framework.
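To illustrate the "derived task" pattern, here's a minimal sketch using Python's asyncio with a shared work queue. The `fetch()` function and the URLs are hypothetical stand-ins for real HTTP requests; the point is that workers pull URLs off a queue and push newly discovered URLs back onto it:

```python
import asyncio

# Sketch: an async worker pool where scraping one page can enqueue
# more pages to scrape. fetch() is a fake stand-in for network I/O.

async def fetch(url):
    await asyncio.sleep(0)  # placeholder for an actual HTTP request
    # pretend the start page links to two more pages
    return ["/page/1", "/page/2"] if url == "/start" else []

async def worker(queue, seen, results):
    while True:
        url = await queue.get()
        if url not in seen:
            seen.add(url)
            results.append(url)
            for link in await fetch(url):
                queue.put_nowait(link)  # derived pages go back on the queue
        queue.task_done()

async def crawl(start_url, concurrency=3):
    queue, seen, results = asyncio.Queue(), set(), []
    queue.put_nowait(start_url)
    workers = [asyncio.create_task(worker(queue, seen, results))
               for _ in range(concurrency)]
    await queue.join()  # wait until every enqueued URL has been handled
    for w in workers:
        w.cancel()
    return results

print(sorted(asyncio.run(crawl("/start"))))  # → ['/page/1', '/page/2', '/start']
```

In a real system the queue would live outside the process (Redis, RabbitMQ, etc.) so you can scale workers independently, but the dispatch loop looks the same.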
If you can start from scratch, I'd probably do everything in Erlang (or Node.js).
If you want to stay in Ruby, you should look at Redis+Resque (https://github.com/resque/resque).
In Python, you can do some decent scraping with Redis+Celery for task management. You can also do everything in Twisted Python; I've done both with great results. I'm admittedly biased toward Python, but it has the BeautifulSoup library for parsing and navigating HTML documents -- and that makes pulling data out of scraped pages way, way easier.
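To give a feel for what that looks like, here's a small BeautifulSoup example. The HTML snippet and the `product`/`price` class names are made up for illustration:

```python
from bs4 import BeautifulSoup

# A made-up fragment standing in for a scraped page.
html = """<html><body>
<div class="product"><h2>Widget</h2><span class="price">$9.99</span></div>
<div class="product"><h2>Gadget</h2><span class="price">$19.99</span></div>
</body></html>"""

soup = BeautifulSoup(html, "html.parser")

# find_all / CSS-class filtering does the heavy lifting; no regexes needed.
products = [
    (div.h2.get_text(), div.find("span", class_="price").get_text())
    for div in soup.find_all("div", class_="product")
]
print(products)  # → [('Widget', '$9.99'), ('Gadget', '$19.99')]
```

Compare that to hand-rolling regexes over raw markup, and the appeal is obvious.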
If you can avoid scraping altogether, I'd suggest doing so. Companies like Embedly (http://embed.ly/) offer APIs that give you most of the data you'd get from scraping, with a lot less work.