DistributedMethodCallError

The belief that having methods and functions communicate across a network is somehow better than having them communicate within a single process on a single machine.

Handle this error by politely throwing a verbal exception: inquire as to what, exactly, is better. Then explain why the answers you get back are the wrong answers.

Here are response templates for the three main areas in which a distributed architecture does not make X better:

| If X is one of… | Response |
|---|---|
| Separation of Concerns, Coupling, Cohesion or similar | But X is not primarily about deployment scenarios, so distributing your deployment does not improve X. |
| Reliability, Performance, Robustness or similar | But as you'll know from the Fallacies of Distributed Computing, if not from bitter experience, distributed computing makes things harder, not better. |
| Deployability, continuous deployment or integration | But deploying to multiple hosts is harder, not easier, than deploying to a single host. |

Yes, there are problems for which distributed computing appears to be part of a solution. Redundancy as a reliability tactic appears to push you towards distributed computing. So does horizontal scaling as a performance or capacity tactic. But these things are done extremely well at the infrastructure level by a black box: a load balancer. Notice that load balancing does not force a decision on whether each instance of your application-or-service is deployed entirely on a single box or is distributed.
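As a toy illustration of that "black box" point, here is a minimal round-robin sketch in Python (the hostnames and the helper are invented for the example): the balancer only chooses which instance handles a request; it neither knows nor cares whether each instance is a single-process monolith or is itself distributed.

```python
from itertools import cycle

# Three identical instances of the whole application, one per host.
# The hostnames are made up; in practice this job is done by nginx,
# HAProxy, a cloud load balancer, etc.
instances = cycle([
    "https://app-1.internal:8080",
    "https://app-2.internal:8080",
    "https://app-3.internal:8080",
])

def route(request_path: str) -> str:
    """Pick the next instance round-robin and return the URL to forward to."""
    return f"{next(instances)}{request_path}"

print(route("/orders/42"))  # forwarded to app-1
print(route("/orders/43"))  # forwarded to app-2
```

Nothing in this sketch depends on what is inside each instance, which is exactly why redundancy and horizontal scaling do not, by themselves, argue for distributing the internals of your application.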

So if you think that microservices or any other form of distributed deployment addresses issues such as dependency management, coupling, cohesion, continuous deployment, or avoiding domino failures, then may I put it to you that you have confused, not separated, your concerns. In 4+1 terms, you may be confounding your physical and process models (i.e. the runtime models) with your logical & development ('coding-time') models. As Simon Brown pithily put it, "if you can't build a structured monolith, what makes you think microservices are the answer?".

PS

Yesterday I read a blog post about 'Monolithic' architecture which said – with pictures and everything – that if your problem is how to carry on coding effectively (adding new features, fixing bugs, resolving technical debt, etc.) as the size and complexity of the code base increases, then the solution is a distributed deployment architecture with synchronous calls over HTTP using XML or JSON.

I could weep. You manage your codebase by managing your codebase! Not by making your runtime deployment and threading models 50 times more complicated. This is what I mean by confounding the logical & development models with the process & deployment models.

PPS

If you're not familiar with the 4+1 architecture views:

  1. The Logical view describes your classes' behaviour and relationships.
  2. The Development view describes how the software is organised and subdivided into e.g. modules, components, and layers, and how these are turned into deployable artefacts.
  3. The Process view describes what threading, processes, and distributed communication you use.
  4. The Physical view (though I'd rather call it the Deployment view) describes which machines run which code in the running system.

The '+1' is the use cases, or user stories.

When I first saw 4+1 I only really 'got' the logical view. As years passed, I realised that this reflected a lack of experience on my part. When you first do distributed or asynchronous computing, you begin to see why you'd want a process view and a physical (or deployment) view.

4+1 is quite long in the tooth and has evolved in use. There are other sets of viewpoints: Rozanski & Woods' seven viewpoints show the benefit of a decade's more experience. You may think seven is a lot, but for a small or simple system some of them need only be a sentence or two.

Estimates and NoEstimates

We had a debate & discussion at XP-Man on NoEstimates, for which I did some notes. Reading the NoEstimates stuff, I was most attracted to the sense of "let's not be satisfied with second rate" and of a thirst for continuous improvement.
I was left with the sense (possibly because I already believed it) that there are contexts in which NoEstimates works and contexts in which it doesn't. But I was very glad to be provoked to ask, in each case, "What value, if any, is our estimating and planning effort adding?" and "Isn't there a better way to deliver that value?"

What is an Estimate?

An estimate for a project is (1) a list of things to work on; (2) a cost-range for those things; and (3) a list of risks, that is, (3a) dependencies that 1 & 2 rely on, and (3b) things that might cause significant change.

An estimate or plan for a sprint is (1) a list of things to work on, (2) a "cost" (e.g. story points) for those things, (3) a list of things we are uncertain about, and (4) a list of things we need to get help with.
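Purely as a restatement of the project-estimate definition above, here is its shape sketched as a data structure (the class and field names are mine, not any standard; the sprint version would differ only in swapping the cost-range for story points and the risks for open questions):

```python
from dataclasses import dataclass, field

@dataclass
class CostRange:
    low: float   # e.g. person-days or currency
    high: float

@dataclass
class ProjectEstimate:
    things_to_work_on: list[str]                            # (1) the list of things to work on
    cost: CostRange                                         # (2) a cost-range for those things
    dependencies: list[str] = field(default_factory=list)   # (3a) what 1 & 2 rely on
    change_risks: list[str] = field(default_factory=list)   # (3b) what might cause significant change
```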

The value of a project estimate is to feed into (1) a go/no-go decision and (2) seeing things we want to see sooner rather than later (e.g. should we hire more people, do we need help from specific third parties, is releasing in time for Christmas possible?).

The value of a sprint estimate is to see things we need to ask for in advance (i.e. external help or resources); to give everyone a sense of confidence about what we're doing; and to fail faster, that is, to see sooner what we can't achieve.

Money doesn’t make the world go round

"Who are we kidding? Apps are built to earn money. They are a product created to generate profit, and there are a number of ways we can get them to do just that. The simplest, and therefore most common method is ..."

So says a typical email in my inbox. But. It ain't true. Apps are also built because people love to build things. Many, many apps – and websites, and Real Stuff – are built because people love to build things, and if they didn't have to earn a living they'd still be building things. For some people, earning money is the pesky thing getting in the way of building the apps they'd love to build.

Should a Software Architect Code? is the Wrong Question

It's a long-running argument in software architecture and I've seen some quite emotional comments on it. But it's the wrong argument. The right question is surely one about ratios:

"What ratio of non-coding architects to coders will work best for us?"

A ballpark answer to this question would follow logically from calculating how many full-time developers' worth of work generates enough non-coding-but-still-requires-deep-or-broad-technical-understanding-and-vision work to add up to one full-time job.

I suggest, based mostly on e-commerce & similar SMEs, something like:
(1) 25 developers' worth of coding results in 1 full-time non-coding-architect's worth of work.
(2) Every team needs at least one coding-architect-or-architecturally-competent-developer per 5 developers.

Which gets me to a ratio of

1:5:25

for

non-coding-architect : coding-architect-or-lead-dev : developer

Architecture features plenty of non-coding work. Dealing with people, plans, business change, technical design, design decisions, frameworks, principles, patterns, catalogues, quality attributes... There's more than enough of it if you have 25 coders' worth of development going on.

So in a workplace with 100 developers, you may want a chief architect, 3 or 4 non-coding architects and 20 coding-architects-or-architecturally-competent-developers. But if you are chief architect or even CTO of a company with 15 developers, you probably still code.
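For what it's worth, the arithmetic behind that worked example can be written down as a tiny helper (the function and its names are mine; treat the output as a starting point for discussion, not a formula):

```python
def suggested_architects(developers: int) -> dict[str, int]:
    """Rough head-counts implied by the 1:5:25 rule of thumb above."""
    return {
        "non_coding_architects": developers // 25,            # roughly 1 per 25 developers
        "coding_architects_or_lead_devs": developers // 5,    # at least 1 per 5 developers
    }

print(suggested_architects(100))  # {'non_coding_architects': 4, 'coding_architects_or_lead_devs': 20}
print(suggested_architects(15))   # {'non_coding_architects': 0, 'coding_architects_or_lead_devs': 3}
```

At 15 developers the non-coding count rounds down to zero, which is the "you probably still code" case.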

There's no right answer to the question of how much coding a coding-architect-or-architecturally-competent-developer does. It can vary from nearly nothing to 95% as projects progress. Perhaps 50% is a good average?

Discussions

There have been not one but two still-live threads on this in LinkedIn software groups since 2012:
LinkedIn - 97 Things Every Software Architect Should Know - Should architects continue coding ... ?
LinkedIn - IASA - Should software architects code?
Anthony Langsworth's 2012 post