On Tue, Nov 10, 2020 at 7:26 PM Kevin Fenzi <kevin@scrye.com> wrote:
> Has there been any thought about using fedora-messaging to update the
> cache? ie, sync, then listen for messages and update as you go and only
> need to do the full sync on startup?

Mhm, we've thought about this. We've omitted it for now to keep oraculum's architecture from getting too complex in the beginning. The effort/gain ratio didn't seem to favor using messages yet, but we'll definitely look into it again. It would require some lower-level changes in oraculum, though, which we'd prefer to do in the future rather than right away.

Looking at the data we process, package version fetching is the slowest by a huge margin. Bugs, package PRs, and last builds from Koji (only for FTBFS) follow. These are also the only data we fetch per package; everything else is fetched just once (or once per Fedora release, e.g. Koschei, Bodhi) and only calculated on a per-active-user basis.

Also, different tasks stress different resources. Fetching bugs, PRs, and package versions stresses basically only the network, while processing Bodhi, Koschei, and Health Check data stresses the CPU and RAM the most.

I am not sure which data sources produce messages; it seems it'd make the most sense to leverage them for package versions, pull requests, and bugs.
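
For the record, here's roughly the shape a consumer could take. This is only a minimal sketch assuming the fedora-messaging Python API; the topic check, body layout, and the refresh helper are illustrative assumptions, not actual oraculum code:

from fedora_messaging import api, config

def refresh_package_version(body):
    # Hypothetical stand-in: in oraculum this would update the cached
    # version data for the affected package instead of printing.
    print("would refresh version cache for:", body.get("project", {}).get("name"))

def on_message(message):
    # Route each bus message to the matching cache refresher.
    # The topic suffix and body layout are assumptions about what Anitya publishes.
    if message.topic.endswith("anitya.project.version.update"):
        refresh_package_version(message.body)

config.conf.setup_logging()
api.consume(on_message)  # blocks, invoking on_message for each delivery

The appeal is that the full sync would only be needed on startup (or after a long disconnect), with the cache kept warm by messages in between.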
 
> In fedora infrastructure we use an external database server (non
> openshift). This allows us to do backups nicely and lets apps avoid
> needing to manage their own db.

Damn, I should have known better after the testdays migration. Hopefully pingou won't notice this gap in my knowledge :)
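
For anyone else wiring this up: pointing the app at an external database is usually just a connection-string change. A minimal sketch, assuming SQLAlchemy-style configuration; the environment variable name and the fallback URL are made-up examples:

import os

# Read the database location from the environment so the same code can
# target a local DB on the VPS or an external server from OpenShift.
# DATABASE_URL and the fallback below are made-up examples.
SQLALCHEMY_DATABASE_URI = os.environ.get(
    "DATABASE_URL",
    "postgresql://oraculum:secret@db.example.org:5432/oraculum",
)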

 
> Do you do ssl termination in the nginx pod?
> In fedora infra our proxies do the ssl termination (This allows us to
> keep wildcard certs only on proxies).

In the current deployment (which is just a "normal" VPS), we're using nginx because otherwise we'd be exposing gunicorn directly to the outside. In an OpenShift deployment, we can omit the nginx pod and expose gunicorn to the OpenShift route just fine, without nginx in between.
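
Dropping nginx really is that simple on our side. A minimal gunicorn.conf.py sketch for that setup (the port and worker count are illustrative assumptions, not our production values):

# gunicorn.conf.py -- gunicorn reads this Python file at startup
bind = "0.0.0.0:8080"  # the OpenShift service/route forwards straight to this port
workers = 4            # tune to the pod's CPU allocation

An OpenShift route can also handle TLS termination at the edge, which matches keeping the certs out of the application pods.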

 
> Well, right now... let's see...
> We have 5 compute nodes:
>
> NAME                                 CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
> os-node01.iad2.fedoraproject.org     693m         17%       8404Mi          35%
> os-node02.iad2.fedoraproject.org     381m         9%        16628Mi         69%
> os-node03.iad2.fedoraproject.org     2291m        57%       20539Mi         86%
> os-node04.iad2.fedoraproject.org     387m         9%        16014Mi         67%
> os-node05.iad2.fedoraproject.org     278m         6%        7764Mi          32%
>
> We can add more if needed.
> We might get more of a sense of the needs by deploying this in staging...
 
Currently, it runs on a 4 GB RAM VPS (including the OS, DB, Redis, and basically everything). We expect it might need a bit more once we announce it to the world through Fedora Magazine and other channels and more packagers learn about it (which we want to do once we're a bit more stable on the server side of things). In the last 14 days, roughly 100 packagers used it at least once.
 
> > Resource-wise it's not a small application (at least from my perspective
> > :D), but we believe it's a great-value application which saves time for
> > Red Hat and community package maintainers.

> Yeah, it's pretty awesome. ;)

Thanks :)