OCR processing millions of images on Amazon’s EC2

Note: I wrote this post nearly two years ago and recently discovered it in my drafts; some of the info is outdated.

Recently, I was tasked with running OCR on a huge set of images (3.4 million). I'm going to share some brief details on how we processed these images in about a week.

Initially, we uploaded all of the images to S3 from a colocated server of ours using s3sync. This took a long while (~1.5 TB of data).

Once the images were all stored in S3, I retrieved all of the metadata and stored it in a MySQL database running on a small EC2 instance. This host became the queue manager.
Since some images were in non-English languages, I went through and recorded the language in the database for any image that wasn't in English.
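A minimal sketch of the kind of queue table this describes, using SQLite in place of MySQL for illustration. The table and column names here are assumptions, not the original schema:

```python
import sqlite3

# In-memory SQLite stands in for the MySQL database on the queue manager.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE image_queue (
        id        INTEGER PRIMARY KEY,
        s3_key    TEXT NOT NULL,      -- location of the image in S3
        language  TEXT DEFAULT NULL,  -- set only for non-English images
        processed INTEGER DEFAULT 0,  -- 0 = pending, 1 = done
        ocr_text  TEXT                -- filled in once OCR completes
    )
    """
)
conn.execute("INSERT INTO image_queue (s3_key) VALUES ('images/0001.tif')")
conn.execute(
    "INSERT INTO image_queue (s3_key, language) VALUES ('images/0002.tif', 'deu')"
)
conn.commit()

# The queue manager hands out the next unprocessed image.
row = conn.execute(
    "SELECT id, s3_key, language FROM image_queue "
    "WHERE processed = 0 ORDER BY id LIMIT 1"
).fetchone()
```

The `processed` flag plus an `ORDER BY id LIMIT 1` select is the simplest possible queue; a real deployment would also need to mark rows "in progress" so two workers don't grab the same image.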

I wrote a simple Perl script which would:

  • retrieve the next image to be processed from the queue manager
  • retrieve the corresponding image from S3
  • run OCR on the image (with language option if it wasn’t in English)
  • store the OCR output in the database on the queue manager and mark the image as processed
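The loop above can be sketched roughly as follows; `fetch_next_job`, `download_image`, `run_ocr`, and `store_result` are hypothetical stand-ins for the queue-manager, S3, and Tesseract calls, stubbed out here so the loop is self-contained (Python shown in place of the original Perl):

```python
# Fake in-memory queue and result store, standing in for the MySQL
# database on the queue manager.
jobs = [
    {"id": 1, "s3_key": "images/0001.tif", "language": None},
    {"id": 2, "s3_key": "images/0002.tif", "language": "deu"},
]
results = {}

def fetch_next_job():
    """Ask the queue manager for the next unprocessed image."""
    return jobs.pop(0) if jobs else None

def download_image(s3_key):
    """Stub for fetching the image bytes from S3."""
    return b"fake image bytes for " + s3_key.encode()

def run_ocr(image_bytes, language=None):
    """Stub for running Tesseract; defaults to English."""
    lang = language or "eng"
    return f"<text extracted with lang={lang}>"

def store_result(job_id, text):
    """Write the OCR output back and mark the image as processed."""
    results[job_id] = text

def worker():
    while True:
        job = fetch_next_job()
        if job is None:
            break
        image = download_image(job["s3_key"])
        text = run_ocr(image, job["language"])
        store_result(job["id"], text)

worker()
```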

For cost reasons (and because the output quality was adequate) we used Tesseract to process the images. It did a good job (depending on the image quality) and handled foreign languages very well.
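Tesseract's command line takes an image, an output base name, and an optional `-l` language flag. A small helper for building that invocation might look like this (the function name is mine, not from the original script):

```python
def tesseract_command(image_path, output_base, language=None):
    """Build the argument list for a Tesseract CLI invocation.

    Tesseract's basic usage is: tesseract imagename outputbase [-l lang].
    When no language is given, Tesseract defaults to English.
    """
    cmd = ["tesseract", image_path, output_base]
    if language:
        cmd += ["-l", language]
    return cmd
```

The resulting list can be handed to something like `subprocess.run`; Tesseract writes its text output to `output_base` plus a `.txt` extension.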

To ensure we were getting the most bang for our buck, I whipped up a hack-of-a-script to keep at least 8 OCR processes running on each server. The processing instances were High-CPU Extra Large instances.

From there, I dumped the contents of the database and handed them off to our indexing expert. Some of the content is currently posted on www.worldvitalrecords.com and the rest is in the works.

Lessons learned (things I would have done differently)

  • Find a way to send the hard drives which contained the images to Amazon, instead of uploading the entire 1.5TB from our datacenter. I’ve heard rumors of them doing this for large datasets, but have not verified.
  • Create a better script to manage running jobs (I'd probably use a multi-threaded Perl script)
  • Start processing images as soon as they were successfully uploaded. For simplicity’s sake, I uploaded all of the images, then processed them all at the same time.
  • Get increased allocations for resources ahead of time. I started out with a 100-instance limit on EC2 and quickly saturated it. Around the middle of the week, I was finally able to get that limit raised to 200 instances.
  • I’d consider using Amazon’s SimpleDB and Simple Queue Service in lieu of the MySQL database I used for the queue manager and for storing the OCR output.


  1. Hello!

    I am investigating a similar project where I want to install Tesseract on Amazon and run images through it. Would you be willing to provide Tesseract installation instruction and/or the script you used to process?

    Any help is appreciated!


    1. Hi Stacey,
      Since I was using Ubuntu on the EC2 instances, installing tesseract was as simple as ‘apt-get install tesseract-ocr’
      I think I have the scripts that I used for all of the job “crunching” around somewhere, but would probably rewrite them if I were assigned such a project again.
      One thing I would do differently is use Amazon’s SQS (which wasn’t available at the time) instead of a MySQL database to manage the queue.
      If you’d like to discuss it further, feel free to email me (alt@ the domain of this blog). I’m also available for consulting if you need ongoing help with your project.

Comments are closed.