I have worked with DDD for many years, and learned not to share databases and to use anti-corruption layers between systems and services. So at first I was very reluctant to connect Kafka directly to the databases. Wouldn’t that recreate the very monolith we have been working so hard to get rid of?
Consider a scenario where we use a source connector to connect to one database, and then add a consumer that uses a sink connector to update another database. After some time, we want to change the schema of the source database. How do we do that without an anti-corruption layer protecting the rest of the flow from the change?
Connectors can be likened to mappers or anti-corruption layers themselves: they can transform data before it is produced or consumed, and multiple connectors can be configured to handle different data formats, as illustrated in the example below:
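As a sketch of what such a mapping can look like (the connector class, connection details, and field names here are only illustrative), a JDBC source connector can apply a built-in single message transform such as ReplaceField to rename an internal column before it ever reaches the topic, so the internal naming never leaks to consumers:

```json
{
  "name": "orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://orders-db:5432/orders",
    "table.whitelist": "orders",
    "topic.prefix": "orders-",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "transforms": "rename",
    "transforms.rename.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.rename.renames": "cust_ref:customerId"
  }
}
```

A second connector could expose the same table in another format or with another converter for consumers with different needs.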
Use Versioning for Managing Breaking Changes
Let’s say the whole flow is using messages of version 2:

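As a rough sketch of that setup (all names here are hypothetical), the connector that feeds the flow can put the version into the topic name, for example through a v2 topic prefix:

```json
{
  "name": "orders-source-v2",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://orders-db:27017",
    "database": "shop",
    "collection": "orders",
    "topic.prefix": "v2"
  }
}
```

Every consumer, including any sink connector at the other end, then subscribes to the version 2 topic.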
When the decision is made to upgrade MongoDB to version 3, a new connector is introduced alongside the old one, which is kept running. This gives all consumers the time they need to migrate to version 3:

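Continuing the hypothetical sketch above, the version 3 connector is simply registered next to the old one and publishes to version 3 topics, while the version 2 connector keeps running (if the source structure has already changed, the old connector can use transforms to keep producing the version 2 shape):

```json
{
  "name": "orders-source-v3",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://orders-db:27017",
    "database": "shop",
    "collection": "orders",
    "topic.prefix": "v3"
  }
}
```

Consumers then move from the v2 topics to the v3 topics at their own pace.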
Once version 2 is no longer used by anyone, it can be safely removed:

While this method simplifies the management of breaking changes, it can lead to substantial data duplication, especially with large data volumes and consumers that stay on older versions for a long time. In many such cases, a better alternative is to use an Avro (or Protobuf) schema, which allows messages with different schema versions to coexist within the same topic.
Avro Schemas
Changing data structures is inherently complex and demands coordination across multiple teams. Schema mapping, for example with Avro or Protobuf, makes such changes easier to handle.
Avro schemas are stored in the Kafka Schema Registry, and they can manage schema changes in a way that keeps producers and consumers independent of each other. If a new property is added, a default value can be defined so that data from producers not yet on that schema version can still be read. If a property is removed, a default value can be set for it to support consumers still relying on that value. Of course, there are limitations that make this a bit harder in the real world than it sounds here.
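As a small, hypothetical example, version 3 of an Order schema could add a currency field with a default. Consumers on the new schema can still read version 2 records (the default fills the gap), and consumers on the old schema simply ignore the new field:

```json
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.orders",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "amount", "type": "double" },
    { "name": "currency", "type": "string", "default": "EUR" }
  ]
}
```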
Explore different compatibility types as defined in the links below:

Read more here: Testing Schema Compatibility, and here: https://docs.confluent.io/1.0/avro.html.
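For reference (the subject name is up to you and made up here), the compatibility type is set per subject in the Schema Registry by sending a PUT request to /config/&lt;subject&gt; with a body like the one below, where BACKWARD can be replaced with, for example, FORWARD, FULL, or one of the *_TRANSITIVE variants:

```json
{ "compatibility": "BACKWARD" }
```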
Conclusion
- Database connectors can act as anti-corruption layers or mappers, protecting the inner structure of the database.
- While versioning offers a means to navigate breaking changes, it comes with the drawback of data duplication until all consumers align with the latest versions.
- Avro can address a significant portion (70-80%) of change scenarios, but for cases involving complete structural overhauls, relying solely on schema mapping might prove insufficient.
- Implementing fundamental structural changes still necessitates meticulous planning and active coordination between teams, a topic explored in my article Kafka – Breaking Contracts.