EnjoyTheNoise posted:
To start - I have absolutely no clue what tech stack you use at Dynasty or what infrastructure you have. I also suspect you've already tried this, but for areas where content doesn't change very frequently, like pairings and authors, maybe you could just cache those requests? (Assuming they're mostly GETs, that shouldn't be very tricky, I think.)
Thank you for the suggestion! Part of the motivation for the database changes, and some possible future changes, is exactly that: more aggressive caching. Reducing the load of individual requests, and reducing their cost after first access (with more caching), are, I imagine, the easiest ways of alleviating some of our fringe performance issues. I don't know all the details front-to-back, but I think we're on the same track here!
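For anyone curious what "just cache those requests" can look like in practice, here's a minimal sketch of per-endpoint TTL caching for rarely-changing data. Everything here (`ttl_cache`, `get_author`, the 300-second TTL) is made up for illustration, not Dynasty's actual code:

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Cache a function's result for a fixed time-to-live.

    A rough sketch of caching GET handlers for data that rarely
    changes (e.g. pairings, authors)."""
    def decorator(fn):
        cache = {}  # args -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: skip the expensive lookup
            value = fn(*args)
            cache[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=300)
def get_author(author_id):
    # Stand-in for a real database query.
    return {"id": author_id, "name": f"Author {author_id}"}
```

In a real deployment you'd more likely cache at the HTTP layer (reverse proxy or `Cache-Control` headers) or in a shared store, since an in-process dict isn't shared across workers, but the idea is the same: serve repeat reads without touching the database.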
I can imagine you also have to deal with bits of tech debt before moving forward, so best of luck with the whole process. :D
When I had to deal with caching, I usually opted for a managed Redis instance, or, if cost mattered more, a self-hosted Redis (usually as a Docker container). An option I used in my Bachelor's thesis to improve GET performance was splitting the data into read/write models and storing the read model in something like Elasticsearch (in my case it was Azure Search, but they work similarly for that use case), though that was aaages ago.
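The Redis approach above is usually the cache-aside pattern: check the cache, fall back to the source of truth, then populate the cache with a TTL. A sketch, with made-up names (`CacheAside`, `FakeRedis`, the loader); in production `client` would be a `redis.Redis` instance:

```python
import json

class CacheAside:
    """Cache-aside read path.

    `client` is any object with Redis-like get/setex; `loader` is the
    source of truth (e.g. a database query). Both are assumptions for
    this sketch, not a real API."""
    def __init__(self, client, loader, ttl=600):
        self.client = client
        self.loader = loader
        self.ttl = ttl

    def get(self, key):
        cached = self.client.get(key)
        if cached is not None:
            return json.loads(cached)      # cache hit
        value = self.loader(key)           # cache miss: hit the database
        self.client.setex(key, self.ttl, json.dumps(value))
        return value

class FakeRedis:
    """Minimal in-memory stand-in for redis.Redis (TTL is ignored here)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def setex(self, key, ttl, value):
        self.store[key] = value
```

The TTL is the main knob: stale-but-fast for slowly-changing data like authors, shorter (or explicit invalidation on write) for anything users expect to update immediately.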
But those are just my 2 cents on the topic; I guess your team will know better what fits your needs (inb4 you already have all this and I'm just making a fool of myself XD).
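The read/write split mentioned above can be sketched without any real search engine: writes go to normalized tables, and a projection step denormalizes them into search-friendly documents. The dict-backed `read_index` stands in for Elasticsearch/Azure Search, and all the names and sample rows are invented for illustration:

```python
# Write model: normalized, the source of truth (stand-in for SQL tables).
write_model = {
    "authors": {1: {"name": "Alice"}},
    "works": {10: {"title": "Example", "author_id": 1}},
}

# Read model: denormalized documents (stand-in for a search index).
read_index = {}  # work_id -> document

def project_work(work_id):
    """Rebuild the read-model document for one work after a write."""
    work = write_model["works"][work_id]
    author = write_model["authors"][work["author_id"]]
    read_index[work_id] = {
        "title": work["title"],
        "author_name": author["name"],  # denormalized: no join at read time
    }

project_work(10)
```

The trade-off is eventual consistency: reads are join-free and fast, but every write path has to remember to re-run the projection (or publish an event that does).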
last edited at Feb 4, 2022 12:36PM