So, as we discussed during the meeting, I have offloaded the data (for stg) older than half a year to another database. This is how I did it (it probably could have been done more efficiently, but hey, this worked, and I'm not a postgres expert...):
$ pg_dump -Fc resultsdb_stg > resultsdb_stg.dump # dump the resultsdb_stg database to a file
$ createdb -T template0 resultsdb_stg_archive # create a new empty database called resultsdb_stg_archive
$ pg_restore -d resultsdb_stg_archive resultsdb_stg.dump # load the data from the dump into the resultsdb_stg_archive db
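(On the "could have been done more efficiently" note: the dump/restore pair could apparently be replaced by cloning the database directly through the template mechanism, as long as nothing is connected to resultsdb_stg at that moment. I haven't tried it here, just a sketch:)
$ createdb -T resultsdb_stg resultsdb_stg_archive # hypothetical alternative: clone resultsdb_stg directly (needs no open connections to it)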
$ psql resultsdb_stg_archive
=# -- Get the newest result we want to keep in archives
=# select id, job_id from result where submit_time<'2016-06-01' order by submit_time desc limit 1;
id | job_id
---------+--------
7857664 | 308901
=# -- Since jobs can contain multiple results, select the first result with the 'next' job_id. This could also be done as
=# -- 'select id, job_id from result where job_id = 308902 order by id limit 1;', but 'job_id > 308901' automagically handles a possible hole in the job_id sequence.
=# select id, job_id from result where job_id > 308901 order by id limit 1;
id | job_id
---------+--------
7857665 | 308902
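(Before deleting anything, a sanity check one could add at this point, purely hypothetical, is to look at the two rows around the boundary and confirm the id / job_id / submit_time split looks sane:)
=# -- hypothetical sanity check, not part of the steps I ran: inspect the two boundary rows
=# select id, job_id, submit_time from result where id in (7857664, 7857665);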
=# -- delete all the result_data, results, and jobs, starting from what we got in the previous query
=# delete from result_data where result_id >= 7857665;
=# delete from result where id >= 7857665;
=# delete from job where id >= 308902;
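(If you want a safety net for a step like this, the deletes could be wrapped in a transaction so the result can be eyeballed before it becomes permanent; again just a sketch, not what I actually ran:)
=# -- hypothetical safety net: same deletes as above, but inside a transaction
=# begin;
=# delete from result_data where result_id >= 7857665;
=# delete from result where id >= 7857665;
=# delete from job where id >= 308902;
=# select count(*) from result; -- sanity-check the remaining row count before making it permanent
=# commit; -- or 'rollback;' if the numbers look wrong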
$ psql resultsdb_stg
=# -- since the db's were 'cloned' at the beginning, delete the inverse set of the data we deleted in the archive
=# delete from result_data where result_id < 7857665;
=# delete from result where id < 7857665;
=# delete from job where id < 308902;
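One more thing worth mentioning, though it's not something I did above: a plain DELETE doesn't give the disk space back to the OS, so after dropping this many rows it's probably worth running VACUUM FULL on the affected tables in both databases (it rewrites each table and holds an exclusive lock on it while doing so, so it should happen at a quiet moment):
=# -- hypothetical follow-up, not part of the steps above: reclaim the space freed by the deletes
=# vacuum full analyze result_data;
=# vacuum full analyze result;
=# vacuum full analyze job;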