As IT-minded individuals, we live in an ever-changing world. When it comes to software development, we've come from programming as "ON ERROR GOTO 52000" (does that even have a name?), through procedural programming and object-oriented programming, to the beautiful and complex SOAs that we produce today.
I often lie awake at night wondering what the next step is. What’s the next industrial revolution for developers? What’s the next evolutionary leap that we’ll take?
After an introductory post about Metadata driven development (MDDD), I continue my random thoughts in this post…
Bang for the buck?
Our customers need software because computers are faster than humans, and digital data takes up less storage (and overhead) than paper. For the customer, a well-written application increases their productivity. Whether or not to purchase an application, sometimes tailored to fit, is a very basic equation where the bang gets compared to the buck. In other words, compared to the time and money it will save them, how much does the application really cost?
For any software project, a company owner asks themselves a couple of questions.
- Is it effective? Does the application solve my business problem / increase my productivity?
- Is it scalable? Will it sustain increased use and piles of historical data?
- Is it maintainable? What's the cost of new features? Will it grow (or shrink) along with the company that uses it?
- Is it stable? How will a software bug affect my business process? What will the financial repercussions be? Will it ever crash? What happens to my business when its uptime is less than 100%?
- Is it realistic? When will it be delivered and ready to use? When will it be available, configured to my liking, with my data, connected to my customers, and with my employees trained and using it like they never have before…
Unfortunately, in software development, the truth is quite painful…
- Creating software is a slow and complex process, and thus expensive.
- Technical and architectural choices are often based on experience, not on the customer’s needs or situation.
- Software comes with a high maintenance cost. Adding new features gets increasingly expensive, and there's no such thing as a bug-free application.
- Software is not maneuverable when it comes to moving to a new architectural pattern or a new technology.
- Software has a short lifetime. To quote Juval Lowy: after a certain period of time, there’s nothing left but to take it outside and shoot it in the head.
Why we keep failing.
In our defense, writing software is an extremely complex process with many weaknesses.
On the one hand, there's the organisational aspect: a lot of people are involved before the customer's idea is put into code and then delivered. Let's not even begin to discuss Agile, Lean, Waterfall or other software development methodologies.
On the other hand, there are just so many decisions to be made, including (but not limited to):
- Business logic
  - Entities, domain objects, validation rules, and how they should all play together, …
  - Business processes, workflows, …
  - Users, rights, roles, …
  - How screens / wizards / … should look and what information they should contain
- Architectural decisions
  - Transaction script / CQRS / complex SOA with process, entity, capability services, …
  - MVVM, MVC, …
- Technical decisions
  - User experience
    - Silverlight / HTML / WPF / WP7, …
  - Frameworks, such as Entlib
  - Persistence decisions
  - Testability decisions
  - Integration decisions
    - With 3rd party software
    - With older/newer versions of the software / modules
  - Deployment decisions
  - Customization decisions
    - How the UX of one user compares to another based on level of experience with the application
    - How the UX of one user compares to another based on personality
    - How the UX of one user compares to another based on use of the application
  - Monitoring and running diagnostics on deployed applications
How we can do better.
Each decision has to be independent (where possible). Tightly coupled systems come with well-known disadvantages (wiki):
- A change in one module usually forces a ripple effect of changes in other modules.
- Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
- A particular module might be harder to reuse and/or test because dependent modules must be included.
Yet, we continue to tightly couple technological decisions with business logic, architectural decisions, …
A quick sample of what I mean:
<UserControl x:Class="Caliburn.SimpleNavigation.PageOneView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    Page One
</UserControl>
The technological choice of WPF, mangled together with business logic ("how we will represent a PageOneView – whatever that is"), mangled together with the architectural choice of applying MVVM. Ok, maybe not the best sample, but you get what I mean.
Tell me: when we decide to use HTML5 as a UI technology instead of WPF, how can we reuse the business logic of "what data should be shown to the user on a certain screen"? How much of that MVVM will still be usable? What will the cost be to switch from WPF to HTML?
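To sketch what "independent decisions" could look like here (a hypothetical example, in TypeScript for brevity; all names are made up): the business decision of what a screen shows is captured as plain, technology-neutral metadata, and each UI technology is just another renderer of that metadata.

```typescript
// Hypothetical sketch: the business decision ("what data should be shown
// to the user on a certain screen") lives in metadata that knows nothing
// about WPF, HTML5, or MVVM.
interface FieldDef { name: string; label: string; }
interface ScreenDef { title: string; fields: FieldDef[]; }

const pageOne: ScreenDef = {
  title: "Page One",
  fields: [
    { name: "customerName", label: "Customer" },
    { name: "orderDate", label: "Order date" },
  ],
};

// Each UI technology is a renderer of that metadata. Switching from WPF
// to HTML5 means swapping this function, not rewriting the screen definition.
function renderAsHtml(screen: ScreenDef): string {
  const rows = screen.fields
    .map(f => `<label>${f.label}</label><input name="${f.name}" />`)
    .join("\n");
  return `<h1>${screen.title}</h1>\n${rows}`;
}

console.log(renderAsHtml(pageOne));
```

The point is not this particular shape of metadata, but that the "what" decision survives a change of the "how" decision untouched.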
If each of our decisions is independent, where possible, the cost of changing one decision will be minimal, and the change can more easily be made based on the customer's (ever-changing) needs.
Each decision has to be implemented once (where possible).
We know never to copy-paste code when writing an application. Duplicate code is bad (wiki again):
- Code bulk affects comprehension: Code duplication frequently creates long, repeated sections of code that differ in only a few lines or characters. The length of such routines can make it difficult to quickly understand them. This is in contrast to the “best practice” of code decomposition.
- Purpose masking: The repetition of largely identical code sections can conceal how they differ from one another, and therefore, what the specific purpose of each code section is. Often, the only difference is in a parameter value. The best practice in such cases is a reusable subroutine.
- Update anomalies: Duplicate code contradicts a fundamental principle of database theory that applies here: Avoid redundancy. Non-observance incurs update anomalies, which increase maintenance costs, in that any modification to a redundant piece of code must be made for each duplicate separately. At best, coding and testing time are multiplied by the number of duplications. At worst, some locations may be missed, and for example bugs thought to be fixed may persist in duplicated locations for months or years. The best practice here is a code library.
- File size: Unless external lossless compression is applied, the file will take up more space on the computer.
On a LightSwitch note… Yes, this is still a LightSwitch blog, and this simply explains why I love what the LightSwitch team has done. What seems to be a "RAD platform for non-developers" at first glance is actually one of the most thought-through frameworks I have ever seen. It's far from perfect, but it's a giant step in what I believe to be the right direction…