We tend to
This will pressure the design in certain directions.
Quality attributes:
Typical requests:
IO -> Decode -> Compute -> Archive -> Encode -> IO
But concurrency was handled with a thread-pool-per-request model, which brings database contention and race conditions.
Solution: a pipelined architecture, not coarse-grained parallelism.
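A minimal sketch of the pipelined shape (stage names, types, and queue sizes are illustrative, not from the talk): each stage owns its own thread and hands work to the next stage through a bounded queue, instead of one pooled thread carrying a request end to end.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: Decode and Compute as pipeline stages joined by bounded queues.
// The remaining stages (Archive -> Encode -> IO) would follow the same pattern.
public class PipelineSketch {

    public static void main(String[] args) {
        BlockingQueue<byte[]> decoded  = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> computed = new ArrayBlockingQueue<>(1024);

        // Decode stage: reads raw input and pushes decoded work downstream.
        Thread decode = new Thread(() -> {
            try {
                while (true) {
                    byte[] raw = readFromIo();   // placeholder IO source
                    decoded.put(raw);            // blocks when the queue is full: natural back-pressure
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Compute stage: the only stage touching the business state,
        // so there is no lock or database contention between request threads.
        Thread compute = new Thread(() -> {
            try {
                while (true) {
                    byte[] work = decoded.take();
                    computed.put(process(work));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        decode.start();
        compute.start();
    }

    private static byte[] readFromIo() { return new byte[0]; }      // placeholder
    private static String process(byte[] work) { return "result"; } // placeholder
}
```

Because each piece of state is owned by exactly one stage, the contention and race conditions of the thread-pool model are much easier to avoid.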
Feature Envy is what usually causes messes in codebases. Put fields in the classes they belong to; make things cohesive instead of introducing coupling. That alone can improve performance by around 30%, just from making things cohesive.
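A small, hypothetical before/after (the Order and InvoicePrinter names are made up) showing the Feature Envy move: the calculation migrates into the class that owns the fields it reads.

```java
// Before (Feature Envy): the calculation lives far from the data and
// reaches across through a chain of getters:
//
//   class InvoicePrinter {
//       double total(Order o) {
//           return o.getQuantity() * o.getUnitPrice() * (1.0 - o.getDiscount());
//       }
//   }
//
// After: move the calculation next to the fields it uses.
public class Order {
    private final int quantity;
    private final double unitPrice;
    private final double discount;

    public Order(int quantity, double unitPrice, double discount) {
        this.quantity = quantity;
        this.unitPrice = unitPrice;
        this.discount = discount;
    }

    // Cohesive: callers ask the object that owns the data, and the fields
    // the calculation touches are laid out together.
    public double total() {
        return quantity * unitPrice * (1.0 - discount);
    }

    public static void main(String[] args) {
        System.out.println(new Order(4, 2.5, 0.25).total()); // 7.5
    }
}
```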
Synchronous communication is the crystal meth of distributed computing.
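Read as: prefer handing work off asynchronously over blocking one service on another. A rough sketch of the shape, using an in-process queue purely as a stand-in for whatever messaging transport is actually in use (the names here are hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: the point is the shape of the call, not the library.
public class AsyncHandoff {

    private final BlockingQueue<String> outbox = new LinkedBlockingQueue<>();

    // Synchronous style (the "crystal meth"): block the caller on a remote call.
    //   InventoryResult r = inventoryService.reserve(orderId);   // hypothetical remote call
    //
    // Asynchronous style: record the fact and return; a separate consumer
    // delivers it, so a slow or dead downstream service cannot stall us.
    public void placeOrder(String orderId) {
        outbox.offer("OrderPlaced:" + orderId);
    }

    public static void main(String[] args) {
        new AsyncHandoff().placeOrder("order-42");
    }
}
```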
Monitoring and telemetry tend to drive performance and good observability, especially for microservices. You have to build the following in from the very start.
Averages are bad for telemetry: they hide the outliers and the errors that are going on. Use percentile distributions instead.
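A toy illustration with made-up latencies of why the mean misleads: 5 requests in 100 take two seconds, yet the average still looks tolerable while p99 exposes the tail.

```java
import java.util.Arrays;

// Made-up numbers: 95 requests at 20 ms and 5 requests at 2000 ms.
public class PercentilesVsAverage {

    // Nearest-rank percentile over a sorted sample.
    static double percentile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        double[] latenciesMs = new double[100];
        Arrays.fill(latenciesMs, 0, 95, 20.0);
        Arrays.fill(latenciesMs, 95, 100, 2000.0);
        Arrays.sort(latenciesMs);

        double mean = Arrays.stream(latenciesMs).average().orElse(0);
        System.out.printf("mean = %.0f ms%n", mean);                        // 119 ms -- looks tolerable
        System.out.printf("p50  = %.0f ms%n", percentile(latenciesMs, 50)); // 20 ms
        System.out.printf("p99  = %.0f ms%n", percentile(latenciesMs, 99)); // 2000 ms -- what 1 user in 100 actually saw
    }
}
```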
What skills should we be practising every day? Focus on building good systems that are cohesive and decoupled, instead of just chasing the new fads.