Belgrade, December 5, 16:00 (GMT +1)

The clone war: replicating data while maintaining performance and consistency

Migrating from a monolithic application to a microservices architecture often forces data to be distributed across separate data stores to reduce coupling between components and improve fault tolerance. In the process, it may turn out that some of the data must be accessed from several components of the system. Although there are many approaches to solving this issue, most of them come down to a trade-off between data consistency and overall system performance.

Andrew will share how his team separated data models between system components in a setup with a database of tens of terabytes, a client workload of thousands of requests per second, and tens of millions of messages processed asynchronously each day. He will also explain how asynchronous replication helped and why the standard approach to Event Sourcing was rejected when implementing it. Finally, Andrew will offer recommendations on keeping a distributed cache consistent and handling sudden component failures.