Short version of the situation: I have an old site I frequent for user-written stories. The site is ancient (think early 2000s) and has terrible tools for sorting and searching the stories. Half the time, stories disappear from author profiles. There are thousands of stories and you can only sort by top, new, and 30-day top.
I’m in the process of programming a scraper tool so I can archive the stories and give myself a library to better find forgotten stories on the site. I’ll be storing tags, dates, authors, etc., as well as the full body of the text.
Concerning the data: there are a few thousand stories (ASCII only) with various data points for each, and the bodies of many stories run several pages long.
Currently, I’m using Python to compile the data and would like to know what storage solution is ideal for my situation. I have a little familiarity with SQL, JSON, and YAML, but not enough to know what might be best. I’m also open to any other solutions that work well with Python.
Definitely SQLite. Easily accessible from Python, very fast, universally supported, no complicated setup, and everything is stored in a single file.
It even has a number of good GUI frontends. There’s really no reason to look any further for a project like this.
One concern I’m seeing in other comments is that I may have more data than SQLite is ideal for. I have thousands of stories (my estimate is between 10 and 40 thousand), and many of them can be several pages long.
Ha no. SQLite can easily handle tens of GB of data. It’s not even going to notice a few thousand text files.
The initial import can be sped up by wrapping it in a transaction, but since it’s a one-time thing and you have such a small dataset, it probably doesn’t matter.
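If you do want that, a minimal sketch of wrapping the import in a single transaction could look like this (the table and column names are just placeholders):

```python
# Minimal sketch: batch the initial import inside one transaction.
# "stories", "title", and "body" are placeholder names.
import sqlite3

con = sqlite3.connect("archive.db")
con.execute("CREATE TABLE IF NOT EXISTS stories (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

rows = [("Title one", "Body one..."), ("Title two", "Body two...")]  # your scraped data

with con:  # opens a transaction; commits on success, rolls back on error
    con.executemany("INSERT INTO stories (title, body) VALUES (?, ?)", rows)

con.close()
```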
That’s good to know.
I would separate concerns. For the scraping, I would dump the data as JSON onto disk. I would give some thought to the folder structure, whether as individual files or as bigger files with one JSON document per line for grouping. If the website has a good URL structure, the path could be reused to give you human-readable author and/or story-ID identifiers in the folder and file names.
Storing JSON as text is simple. Depending on the amount, plain text is wasteful, and simple text compression could significantly reduce storage size. For text-only stories it’s unlikely to matter much, though, and not compressing keeps the scraping process simpler and makes it easier to validate the completeness of the scraped data.
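As a rough sketch of what I mean, assuming each story comes out of the scraper as a dict with "author" and "id" keys (the key names and folder layout are my assumptions):

```python
# Rough sketch: one JSON file per story in per-author folders,
# with optional gzip compression. Layout and key names are assumptions.
import gzip
import json
from pathlib import Path

def save_story(story: dict, root: Path, compress: bool = False) -> None:
    filename = f"{story['id']}.json.gz" if compress else f"{story['id']}.json"
    path = root / story["author"] / filename
    path.parent.mkdir(parents=True, exist_ok=True)
    if compress:
        with gzip.open(path, "wt", encoding="utf-8") as f:
            json.dump(story, f, ensure_ascii=False)
    else:
        path.write_text(json.dumps(story, ensure_ascii=False), encoding="utf-8")

save_story({"id": 123, "author": "someauthor", "title": "...", "body": "..."}, Path("scraped"))
```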
I would then keep this raw data separate from any modifications or prototyping I do later, whether that’s extending the data or building the presentation/interface on top.
After reading some of the other comments, I’m definitely going to separate the systems. I’ll use something like JSON or YAML as the output for the raw scraped data, and some sort of database for the final program.
Python’s sqlite3 module for the metadata; SQLite also has full-text search these days (the FTS5 extension), which can probably handle a few thousand stories. For a bigger collection like AO3, try solr.apache.org or Elasticsearch, etc.
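Something like this, assuming your SQLite build includes the FTS5 extension (most modern Python builds do); the table and column names are placeholders:

```python
# Sketch: full-text search over story text with SQLite's FTS5 extension.
import sqlite3

con = sqlite3.connect("archive.db")
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS story_fts USING fts5(title, body)")
con.execute("INSERT INTO story_fts (title, body) VALUES (?, ?)",
            ("A Forgotten Tale", "Once upon a time..."))
con.commit()

# MATCH runs the full-text query; bm25() ranks the best matches first
for (title,) in con.execute(
        "SELECT title FROM story_fts WHERE story_fts MATCH ? ORDER BY bm25(story_fts)",
        ("forgotten",)):
    print(title)
con.close()
```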
If scraping is reliable, I’d just use the classic Python pickle or json.dump.
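For example (the file name and story structure are just placeholders):

```python
# Minimal sketch: persist the scraped stories with pickle.
import pickle

stories = [{"id": 1, "title": "Example", "body": "..."}]  # scraper output

with open("stories.pickle", "wb") as f:
    pickle.dump(stories, f)

with open("stories.pickle", "rb") as f:
    restored = pickle.load(f)
```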
For a few thousand stories I would just use a SQLite DB…
3 tables:
- Story with fields: Id, title, text
- Meta with fields: Id, story-id, subject, contents
- Tags with fields: Id, story-id, tag
Use SQL joins for sorting etc.
SQLite is easily converted to other formats if you later decide to use a more complex solution.
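As a sketch, that schema could look like this in Python’s sqlite3 (field names follow the list above, with hyphens turned into underscores since unquoted SQL identifiers can’t contain them; storing the date as a meta row is just an assumption for the example):

```python
# Sketch of the three-table schema described above, plus a join for sorting.
import sqlite3

con = sqlite3.connect("archive.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS story (
    id    INTEGER PRIMARY KEY,
    title TEXT,
    text  TEXT
);
CREATE TABLE IF NOT EXISTS meta (
    id       INTEGER PRIMARY KEY,
    story_id INTEGER REFERENCES story(id),
    subject  TEXT,
    contents TEXT
);
CREATE TABLE IF NOT EXISTS tags (
    id       INTEGER PRIMARY KEY,
    story_id INTEGER REFERENCES story(id),
    tag      TEXT
);
""")

# Example join: titles with their dates, newest first, assuming dates
# are stored as meta rows with subject = 'date'
for title, date in con.execute("""
    SELECT story.title, meta.contents
    FROM story JOIN meta ON meta.story_id = story.id
    WHERE meta.subject = 'date'
    ORDER BY meta.contents DESC
"""):
    print(date, title)
con.close()
```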
I do a lot of web services and I’m a big fan of SQL, but I wouldn’t use a SQL database for this myself. Something like MongoDB or Cassandra would probably serve you better (depending on whether you prefer a REST interface to your data or something more conventional). You’ve got a very flat structure except for tags.
Tags are the one feature that might make me choose SQL, due to the many-to-many relationship.
I’m not sure what role you think YAML would play. You could store each story as YAML, but then you’d have to parse basically everything to filter and sort. The story should just be a massive text field, and the metadata goes into its own fields. Tags might be comma-delimited, or in SQL you could normalize them so that you have three tables: stories, tags, and a junction table that is basically just (StoryId, TagId).
I’d at least first try a non-relational database structure, because filtering and sorting by tag might still be fast enough. If it’s too slow, then you could go SQL, but I’d aim for the less complex solution.
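If you do end up going SQL, that normalized layout might look roughly like this (table and column names are guesses from the description above):

```python
# Sketch: stories, tags, and a junction table for the many-to-many link.
import sqlite3

con = sqlite3.connect("archive.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS stories (id INTEGER PRIMARY KEY, title TEXT, text TEXT);
CREATE TABLE IF NOT EXISTS tags    (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE IF NOT EXISTS story_tag (
    story_id INTEGER REFERENCES stories(id),
    tag_id   INTEGER REFERENCES tags(id),
    PRIMARY KEY (story_id, tag_id)
);
""")

# Filter stories by tag with two joins across the junction table;
# 'fantasy' is just an example tag
for (title,) in con.execute("""
    SELECT stories.title
    FROM stories
    JOIN story_tag ON story_tag.story_id = stories.id
    JOIN tags      ON tags.id = story_tag.tag_id
    WHERE tags.name = ?
""", ("fantasy",)):
    print(title)
con.close()
```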
A few keywords in there I’ll have to look up, but I get the majority of it.
Yeah, I’m not too sure yet how complex the tags will be in the end. They are basically genres at the start, but I may make them more complex as I go.
After reading some of the other comments, I doubt I’ll use YAML as the main storage method. I do like the idea of using YAML for the scraper output, though. It would give me a nice way to organize the data elements for each story in a way that can be easily read when needed.