NOTE: The above is the set of components at all levels and aspects of the system. Changing an administrator costs just as much as changing a piece of code, and the cost works the same way!
The one that minimizes TCoA across the lifetime of the application (not just temporarily).
Mathematically speaking, we need to solve for the global minimum.
(thank you)
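If you want it written down: here is one way to phrase the optimization problem. The notation below is my own shorthand (nothing standard), just a sketch of what "minimize TCoA over the lifetime" could mean formally.

```latex
% My own shorthand, not a standard formulation:
%   Arch             = a candidate architecture (a decomposition into components alpha, beta, ...)
%   T                = lifetime of the application
%   X_alpha(t)       = cost at time t of understanding / changing component alpha (complexity-driven)
%   C(alpha beta, t) = cost at time t that a change in alpha imposes on beta (coupling-driven)
\mathrm{Arch}^{*}
  \;=\; \operatorname*{arg\,min}_{\mathrm{Arch}} \; \mathrm{TCoA}(\mathrm{Arch}),
\qquad
\mathrm{TCoA}(\mathrm{Arch})
  \;=\; \int_{0}^{T} \Big( \sum_{\alpha} X_{\alpha}(t) \;+\; \sum_{\alpha \neq \beta} C(\alpha\beta,\, t) \Big)\, dt
```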
| Rule | Why |
|---|---|
| Slow changing subsystems | Minimize complexity without affecting coupling cost |
| Platform concerns | Minimize complexity without affecting coupling cost |
| Considering all components at all levels | Don't ignore effects of hidden coupling |
| Grouping around consistency boundaries | Minimize very strong coupling - C(αβ) |
| Isolate workflows | Minimize complexity without affecting coupling |
1. Anyway... Problems:
   - Looots of theory
   - Lots of advice on what _not_ to do
   - Some advice on how to gather info
   - Info about patterns (PoEA)
   - Info on the effects of good and bad architecture, but no info on how to get there
   - Some info on what "good architecture" looks like, but VEEERY context specific, without defining the context. ARGHH!!
1. I don't teach my children by telling them what NOT to do! I wouldn't have a house by now. Why is architecture like that?? I don't want to bankrupt a few companies before I learn!!
1. The above combines with the fact that architecture decisions are hard to change to create a nightmare scenario.
It took me years to get here. I postulate that all other factors and effects are an outcome of these two things applying at different levels and parts of the system. I'd love to hear challenges to this, as it is my basic premise. As an example, feedback in systems thinking is an outcome of system behaviour (inherent) and coupling. Complexity also affects feedback a bit, but more indirectly than coupling does. Another example is connascence (a measure indicating that if one thing changes then another thing has to change, and vice versa). In my model this is just one form of coupling.
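To make the connascence example concrete, here is a tiny Python sketch (the function and names are made up, purely for illustration):

```python
# Hypothetical illustration of connascence (of position) as a form of coupling:
# if create_user ever reorders its parameters, every positional call site must
# change with it, i.e. the two pieces of code are coupled.

def create_user(name: str, email: str, is_admin: bool) -> dict:
    return {"name": name, "email": email, "is_admin": is_admin}

# Positional call: connascence of position. Reordering the parameters silently
# breaks this caller, so caller and callee have to change together.
user = create_user("Ada", "ada@example.com", False)

# Keyword call: weaker connascence (of name only). A reorder no longer forces a
# change here; only a rename does. Same behaviour, less coupling.
user = create_user(name="Ada", email="ada@example.com", is_admin=False)
```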
Feedback loops are how systems interact. Systems at all levels, simultaneously.
- Classes \ Functions \ components
- Services
- Sub-systems
- Software systems and external systems
- Users
- Developers
- Business people and SMEs
- And competitors!

A small side note: feedback loops really define the behaviour of a system. I am NOT sure about this, but I believe that complexity manifests at the system level as another form of feedback, at a different level than the system exhibiting the complexity. Think of a system that has complexity and has users. Let's think of a warehouse system. It works and it has users. Complexity manifests in two types of feedback:
- Complexity in usage scenarios: users bypass parts of it and do stuff in different ways, or stuff just takes longer to do. This is feedback, in the form of problems, to the owners of the goods being stored, shipped and restocked.
- Complexity in the software itself: making changes takes very long. This is a feedback loop of a system involving the developers.

Sociotechnical systems are called that because they involve the people using them _and_ the people maintaining \ working on those systems. And it doesn't stop there. There are competitors to think of, which are another form of feedback loop. I can probably spend a whole day discussing this alone, but let's just say that if you go too deep your mind may be blown.
ARGH! Again, very useful stuff, but not practical: no positive actions.
My opinion: a huge contributing factor to the above is that software engineering arose as a means to automate other stuff and not as an engineering discipline in its own right. There was a push towards more and more software as the value of automation became more and more evident, with little to no time to do any scientific research behind it. Even to this day it's a half-baked field of engineering, with a lot of people claiming you can't be a software _engineer_, and I can't fault them (even if I disagree) because research has still not caught up.
I believe the above, but I want to emphasize that there is tremendous value in the existing knowledge base. That's how we (humans) are meant to learn: building on top of knowledge of others.
Wrong theories may have value. Similarly, I don't believe what I have here is perfect, but I hope it's useful. And I'll explain.
Control theory is certainly related to what I do here. However, tying the two together properly is a lot of work. I mention it because it may make some things clearer if you are already familiar with it, and I am more than happy to get contributions from people if this sparks something.
NOTE:
- There are different types of coupling
- VERY important: we have lots of control over coupling
- Put differently: A is coupled to B if A influences B. The reverse may not be true.
- We can control it quite a lot. Not its existence, but whether it affects one system and not the other. For example, we can't control that A will be coupled to B, but we can control the direction. We can also control whether they are not directly coupled, but instead both indirectly coupled to a more stable thing (see the sketch below).
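Here is a minimal sketch of that last point, i.e. controlling the direction of coupling by pointing both sides at a more stable thing (all names are made up for illustration):

```python
# Sketch: instead of OrderService depending directly on a concrete SmtpMailer,
# both depend on a small, stable abstraction. Coupling still exists, but now it
# points at something that rarely changes, and its direction is under our control.
from typing import Protocol


class Notifier(Protocol):              # the stable thing both sides point at
    def send(self, recipient: str, message: str) -> None: ...


class SmtpMailer:                      # volatile detail; free to change or be replaced
    def send(self, recipient: str, message: str) -> None:
        print(f"SMTP -> {recipient}: {message}")


class OrderService:                    # coupled only to Notifier, not to SmtpMailer
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, customer: str) -> None:
        self._notifier.send(customer, "Order received")


OrderService(SmtpMailer()).place_order("ada@example.com")
```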
- _Generally_ speaking, we can manage it using encapsulation and information hiding
- We can control some of it, but there is inherent complexity
NOTE: they are not entirely independent! However, the relationship is such that it still allows the use of automatic control theory. Actually, forcefully lowering coupling can enforce a minimum complexity on a system. That doesn't mean we can calculate complexity from coupling, though, so we're safe to continue and assume they're orthogonal for the purposes of the following.
- A single method: no coupling (it is one thing) but insane local complexity
- The opposite extreme (everything split into tiny pieces): (almost certainly) insane amounts of coupling, high global complexity, super-small local complexity
In a theoretical world, the best architecture would only require us to think of complexity, with the effort being linear in the size of the system (which would be a _relatively_ fixed cost on top of any complexity to understand).
- A is the cost to make code changes to component α
- Here volatility becomes important, because:
  1. We have more control over coupling than over complexity. We can get coupling down to zero, for example.
  1. The cost is proportional to the change of all dependent components (which can be exponential).
- A coupled mess that never changes doesn't have a lifetime cost, other than the fixed cost to build it once
- Something that is FULLY decoupled has zero cost due to coupling
- Something that is very coupled will have a high lifetime cost
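A toy model of this, with made-up numbers and component names, just to show the shape of the argument (only direct dependents are counted here; transitive ripples make it worse):

```python
# Toy lifetime-cost model: coupling only costs you when things actually change,
# and each change also touches every component that depends on the changed one.

# component -> set of components that depend on it (i.e. are coupled to it)
dependents = {
    "pricing":   {"checkout", "invoicing"},
    "checkout":  {"ui"},
    "invoicing": set(),
    "ui":        set(),
}

changes_per_year = {"pricing": 12, "checkout": 4, "invoicing": 1, "ui": 20}
local_change_cost = 1.0   # assumed cost to change one component in isolation
ripple_cost = 0.5         # assumed extra cost per dependent touched by a change
years = 5

def lifetime_cost() -> float:
    total = 0.0
    for component, rate in changes_per_year.items():
        per_change = local_change_cost + ripple_cost * len(dependents[component])
        total += rate * per_change * years
    return total

print(f"Lifetime cost over {years} years: {lifetime_cost():.1f}")
# Set all rates to 0: the coupled mess that never changes costs nothing beyond building it.
# Empty all dependent sets: the fully decoupled system pays only its local change costs.
```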
The minimum complexity is the inherent complexity + the complexity due to coupling \ dependencies
- Complexity is only internal to the component
- Complexity is _far_ less dependent on the rate of change of the system
- Coupling is directly proportional to a sum of rates of change

I will not postulate that this is more or less important. I just find it interesting.
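In symbols, my own shorthand for the two notes above (restating, not deriving anything):

```latex
% Complexity of a component alpha has a floor (inherent + coupling-induced),
% while its coupling cost scales with how often the things it depends on change
% (r_beta = rate of change of dependency beta). Notation is mine.
\mathrm{Complexity}_{\alpha} \;\ge\; \mathrm{Inherent}_{\alpha} + \mathrm{CouplingInduced}_{\alpha}
\qquad
\mathrm{CouplingCost}_{\alpha} \;\propto\; \sum_{\beta \in \mathrm{deps}(\alpha)} r_{\beta}
```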
VERY important to note that this applies to all levels
- I use big picture event storming to begin with, but anything will work for this as long as you can visualize actions happening and systemic dependencies
- Wardley mapping for value and industry trends
- User story mapping

Strategic DDD is here
Strategic DDD is here too
Tactical DDD is here
Slow changing subsystems are:
- Generic functionality (notifications, payments, if these are not your value proposition \ competitive advantage)
- Stuff that is or will be commodity for you (infra)

NOTE: These do not need to be different executables \ services. They just need to be encapsulated so that we can decrease complexity by reducing cognitive load \ using information hiding
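A minimal sketch of what "encapsulated, but not necessarily a separate service" can look like (the notifications facade and all names are made up):

```python
# A slow-changing, generic subsystem living in the same executable but behind a
# deliberately narrow facade. Callers see one function; everything else is hidden,
# so the cognitive load it adds to the rest of the system stays small.

def notify(recipient: str, template: str, **context: str) -> None:
    """The only entry point the rest of the codebase is allowed to use."""
    body = _render(template, context)
    _deliver(recipient, body)

# "Private" details: free to change (swap providers, add retries, batch sends)
# without any other component even noticing.
def _render(template: str, context: dict) -> str:
    return template.format(**context)

def _deliver(recipient: str, body: str) -> None:
    print(f"to {recipient}: {body}")

notify("ada@example.com", "Hello {name}, your order has shipped.", name="Ada")
```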
Point 1 is about correctness. It is important to make sure you make consistent changes _only when you really need it_. Having said that, if you need consistent changes and you decouple anyway, this will introduce accidental complexity.
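A small sketch of grouping around a consistency boundary (Order and its fields are invented for the example):

```python
# The order total and its line items must change together, so they live behind one
# object that applies the change in one place, instead of being split across
# decoupled components that would then need extra machinery to stay in sync.
from dataclasses import dataclass, field


@dataclass
class Order:
    lines: list[tuple[str, int, float]] = field(default_factory=list)  # (sku, qty, unit price)
    total: float = 0.0

    def add_line(self, sku: str, qty: int, unit_price: float) -> None:
        # Both facts change in the same call: they can never disagree.
        self.lines.append((sku, qty, unit_price))
        self.total += qty * unit_price


order = Order()
order.add_line("BOOK-42", 2, 9.99)
print(order.total)  # 19.98
```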
I call these platform concerns, but they may not be infrastructure related. These can be stuff like authorization, authentication, logging etc. BUT they can also be repeated patterns you want to encapsulate for the sake of conceptual integrity.
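One way this can look in code, a hedged sketch with invented names (the point is simply that the repeated pattern exists exactly once):

```python
# A "platform concern" captured once as a decorator, so the same authorization +
# logging pattern is not re-implemented slightly differently in every handler.
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("platform")


def audited(required_role: str):
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user: dict, *args, **kwargs):
            if required_role not in user.get("roles", ()):
                raise PermissionError(f"{user['name']} lacks role {required_role!r}")
            log.info("%s invoked %s", user["name"], handler.__name__)
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator


@audited(required_role="admin")
def delete_project(user: dict, project_id: str) -> str:
    return f"project {project_id} deleted"


print(delete_project({"name": "Ada", "roles": ["admin"]}, "p-1"))
```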