Metadata driven development (part 2): A thought on software development as it could be…

As IT-minded individuals, we live in an ever-changing world.  When it comes to software development, we’ve come from programming as “ON ERROR GOTO 52000” (does that even have a name?), to procedural programming, to object-oriented programming, to the beautiful and complex SOAs that we produce today.

I often lie awake at night wondering what the next step is.  What’s the next industrial revolution for developers?  What’s the next evolutionary leap that we’ll take?

After an introductory post about Metadata driven development (MDD), I continue my random thoughts in this post…

Bang for the buck?

Our customers need software because computers are faster than humans, and digital data takes up less storage (and overhead) than paper.  For the customer, a well-written application increases his/her productivity.  Whether or not to purchase an application, sometimes tailored to fit, is a very basic equation where the bang gets compared to the buck.  In other words, how much does the application really cost, compared to the time and money it will save?

For any software project, a company owner asks him/herself a couple of questions.

  • Is it effective? Does the application solve my business problem / increase my productivity?
  • Is it scalable, will it sustain increased use and piles of historical data?
  • Is it maintainable? What’s the cost of new features? Will it grow (or shrink) along with my company?
  • Is it stable? How will a software bug affect my business process?  What will the financial repercussions be?  Will it ever crash?  What happens to my business when its up-time is less than 100%?
  • Is it realistic?  When will it be delivered and ready to use?  When will it be available, configured to my likings, with my data, connected to my customers, with my employees trained and using it like they never used before…

Unfortunately, in software development, the truth is quite painful…

  • Creating software is a slow and complex process, and thus expensive.
  • Technical and architectural choices are often based on experience, not on the customer’s needs or situation.
  • Software comes with a high cost of maintainability.  Adding new features has an increasing cost, and there’s no such thing as a bug-free application.
  • Software is not maneuverable when it comes to upgrading it to a new architectural pattern, or a new technology.
  • Software has a short lifetime.  To quote Juval Lowy: after a certain period of time, there’s nothing left but to take it outside and shoot it in the head.

Why we keep failing.

In our defense, writing software is an extremely complex process with many weaknesses.

On the one hand, there’s the organisational aspect.  There are a lot of people involved before the customer’s idea is put into code and then delivered.  Let’s not even begin to discuss Agile, Lean, Waterfall or other software development methodologies.

On the other hand, there are just so many decisions to be made, including (but not limited to):

  • Business logic
    • Entities, domain objects, validation rules, and how they should all play together, …
    • Business processes, workflows, …
    • Users, rights, roles, …
    • How screens / wizards / … should look and what information they should contain
  • Architectural decisions
    • Modularity
    • Transaction script / CQRS / complex SOA with process, entity, capability services, …
    • Transactionality
    • MVVM, MVC, …
  • Technical decisions
    • User experience
    • Silverlight / HTML / WPF / WP7, …
    • Frameworks, such as Entlib
  • Persistence decisions
  • Testability decisions
  • Integration decisions
    • With 3rd party software
    • With older/newer versions of the software / modules
  • Deployment decisions
  • Customization decisions
    • How the UX of one user compares to another based on level of experience with the application
    • How the UX of one user compares to another based on personality
    • How the UX of one user compares to another based on use of the application
  • Monitoring
    • Monitoring and running diagnostics on deployed applications

My drawing skills maxed out...

In most applications, each decision we take influences the others.  On top of that, we implement our code based on the decisions we make, and if for any reason (a bug, use of the system growing beyond the original design, new technologies becoming available, …) we choose to change a decision, a long road of refactoring (or retro-factoring) lies ahead.  Face it: the decisions we make form a tightly woven web that makes up the application…

How we can do better.

Thinking about it, I can see two ways of improving our software development…

Each decision has to be independent (where possible).

Favor composition over inheritance.  Remember that one?  Why?  Because inheritance tightly couples the child class to the base class.  Tight coupling is a bad thing, because (Wikipedia):

  1. A change in one module usually forces a ripple effect of changes in other modules.
  2. Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
  3. A particular module might be harder to reuse and/or test because dependent modules must be included.
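To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical report classes invented for illustration) of how composition keeps a decision swappable where inheritance bakes it in:

```python
# Inheritance: the "output format" decision is baked into the hierarchy.
# Changing the format means touching every subclass.
class PdfReport:
    def render(self, data):
        return f"PDF[{data}]"

class InheritedSalesReport(PdfReport):  # tightly coupled to PDF forever
    pass

# Composition: the report delegates to whatever renderer it is given,
# so the format decision can change without touching the report class.
class PdfRenderer:
    def render(self, data):
        return f"PDF[{data}]"

class HtmlRenderer:
    def render(self, data):
        return f"<html>{data}</html>"

class ComposedSalesReport:
    def __init__(self, renderer):
        self.renderer = renderer  # the decision is injected, not inherited

    def publish(self, data):
        return self.renderer.render(data)

print(ComposedSalesReport(PdfRenderer()).publish("Q1"))   # PDF[Q1]
print(ComposedSalesReport(HtmlRenderer()).publish("Q1"))  # <html>Q1</html>
```

Swapping the renderer is one changed constructor argument; swapping the base class would ripple through every subclass.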

Yet, we continue to tightly couple technological decisions with business logic, architectural decisions, …

A quick sample of what I mean:

<UserControl x:Class="Caliburn.SimpleNavigation.PageOneView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <TextBlock Text="Page One" />
</UserControl>

The technological choice of WPF, entangled with business logic (“how we will represent a PageOneView – whatever that is”), entangled with the architectural choice of applying MVVM.  Ok, maybe not the best sample, but you get what I mean.

Tell me, when we decide to use HTML5 as a UI technology instead of WPF, how can we reuse the business logic of “what data should be shown to the user on a certain screen”?  And how much of that MVVM will still be usable?  What will the cost be to switch from WPF to HTML?

If each of our decisions is independent, where possible, the cost of changing one decision will be minimal, and can more easily be done based on the customer’s (ever-changing) needs.
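As a sketch of what such independence could look like (the screen metadata and renderers below are hypothetical, not any real framework’s API): if “what data should be shown on this screen” lives in metadata, then each UI technology becomes just another interpreter of that metadata, and switching from one to another leaves the business decision untouched.

```python
# The "what should this screen show" decision, captured as metadata.
customer_screen = {
    "title": "Customer",
    "fields": ["name", "email", "phone"],
}

# Two independent technology decisions consume the same metadata.
def render_text(screen, record):
    lines = [screen["title"]]
    lines += [f"{f}: {record[f]}" for f in screen["fields"]]
    return "\n".join(lines)

def render_html(screen, record):
    rows = "".join(f"<li>{f}: {record[f]}</li>" for f in screen["fields"])
    return f"<h1>{screen['title']}</h1><ul>{rows}</ul>"

record = {"name": "Ann", "email": "ann@example.com", "phone": "555-0199"}
print(render_text(customer_screen, record))
print(render_html(customer_screen, record))
```

Dropping WPF for HTML5 here means writing one new interpreter; the screen definition itself is reused as-is.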

Each decision has to be implemented once (where possible).

We know never to copy-paste code when writing an application.  Duplicate code is bad (Wikipedia again):

  • Code bulk affects comprehension: Code duplication frequently creates long, repeated sections of code that differ in only a few lines or characters. The length of such routines can make it difficult to quickly understand them. This is in contrast to the “best practice” of code decomposition.
  • Purpose masking: The repetition of largely identical code sections can conceal how they differ from one another, and therefore, what the specific purpose of each code section is. Often, the only difference is in a parameter value. The best practice in such cases is a reusable subroutine.
  • Update anomalies: Duplicate code contradicts a fundamental principle of database theory that applies here: Avoid redundancy. Non-observance incurs update anomalies, which increase maintenance costs, in that any modification to a redundant piece of code must be made for each duplicate separately. At best, coding and testing time are multiplied by the number of duplications. At worst, some locations may be missed, and for example bugs thought to be fixed may persist in duplicated locations for months or years. The best practice here is a code library.
  • File size: Unless external lossless compression is applied, the file will take up more space on the computer.
Yet, we continue to implement our decisions, our ideas, over and over again.  How many different validators, value converters, management screens, repositories, … do we really need in a single application?  Isn’t one of each enough, each loaded with its own set of Metadata?
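A minimal sketch of what “one validator, loaded with metadata” could look like (the rule names and entities are made up for illustration):

```python
# One generic validator; each entity just supplies its own metadata.
VALIDATION_METADATA = {
    "customer": {
        "name": {"required": True, "max_length": 50},
        "age":  {"min": 0},
    },
}

def validate(entity, record):
    errors = []
    for field, rules in VALIDATION_METADATA[entity].items():
        value = record.get(field)
        if rules.get("required") and not value:
            errors.append(f"{field} is required")
        if value is not None:
            if "max_length" in rules and len(str(value)) > rules["max_length"]:
                errors.append(f"{field} is too long")
            if "min" in rules and value < rules["min"]:
                errors.append(f"{field} is below {rules['min']}")
    return errors

print(validate("customer", {"name": "Ann", "age": 30}))  # []
print(validate("customer", {"age": -1}))                 # ['name is required', 'age is below 0']
```

Adding a new entity, or a new rule to an existing one, is a metadata change, not a new hand-written validator class.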
If each of our decisions is implemented once, where possible, the chance of bugs will substantially drop, time to delivery will shorten, and no retro-factoring will have to be done when we add functionality, improve a technical detail, or re-implement a decision…


If you have an MVVM WPF application and would like to port it to an HTML MVC website, a WP7 app, services on Azure, …, just how much of your application can actually be reused?  Making software is a long and difficult process, and the lifetime of our applications is short compared to the cost.  However, it is possible to break free from the web of decisions that we have to implement, today.
By keeping our business logic in Metadata, implementing each decision as little as possible, and keeping all decisions independent, our software could be sustained over time, from one version to another, from one platform to another, and benefit from technological improvements or simple cosmetic makeovers without much additional effort.
This is my second plea for Metadata driven development; I just feel it’s the next logical evolutionary step in software development.


On a LightSwitch note… Yes, this is still a LightSwitch blog; this post simply explains why I love what the LightSwitch team has done.  What seems at first glance to be a “RAD platform for non-developers” is actually one of the most well-thought-through frameworks I have ever seen.  It’s far from perfect, but it’s a giant step in what I believe to be the right direction…

I love LightSwitch

