The 50% Rule
Over a number of years and many projects, I have come to rely on a 50% rule for new technologies. Simply put, any new project can contain up to 50% new technologies, with the remainder being technologies that are well understood and that have been used before by the majority of the team.
As an example, a number of years ago I started work on a greenfield project for a company that wanted to build some web-based administration tools for their existing (config-file-driven) applications. This was their first foray into Java and web development, and I was given a team consisting of one senior developer and a number of junior developers with basic Java/JSP training plus a bit of Struts 1.x.
After some careful consideration we decided to introduce three major new technologies: Maven for builds, Hibernate for persistence and the Spring Framework as the core application framework. We then decided to stick with proven technologies that the team already had experience with for the remainder of the stack: MySQL as the database and Struts, JSP and JSTL for the presentation layer. Even though there was considerable motivation for using an alternative presentation stack (Tapestry, which I was using at the time, or the early release of what would later become Apache Wicket, to which I was a contributor), the risk of a completely new technology stack, top to bottom, was just too large.
The advantage of taking this mixed approach to technology is that there is a lot less uncertainty on the project and the team is immediately productive using the technologies they already know. The business is therefore more comfortable that progress is being made (rather than the team just playing with 'cool' stuff), and the team has the leeway to learn how to get the most out of the new technologies and to correct any initial mistakes in how they are being used. I just don't believe this would be the case if every technology being used was a new one.
However, sometimes a project is so radically different from anything that has gone before that you just can't find 50% existing technology to bring with you. Thus...
The Incremental Rule
Occasionally, a project departs so much from what has gone before that almost all of the technology must be new. This usually means technology that has not been used on a previous project, may not be familiar to the team and/or may never before have been put into production. For these projects, some additional 'up-front' work is essential in order to de-risk the new technologies being used.
By way of an example, I am currently doing some initial exploration into an idea for a product for subject classification, relationship navigation and searching. This project will need to deal with very large data sets (> 100 million subjects) and to support a high degree of concurrency and scalability. For this reason I'm looking into some specific technology choices for this application: Scala, Akka and Cassandra. Given the move to Scala as the main programming language, I'm also planning on using the Simple Build Tool (SBT) and investigating the Lift web framework. I'm moving my version control from Subversion to Git. Finally, the application will be cloud-based, so I'm looking to automate deployment and scaling on AWS.
All of the above is a massive amount of new technology to take on board. Were I to just jump in and start the project, I can guarantee that I would get into difficulty and end up with either major architectural problems or the need to rewrite large parts of the application as I learn more about each technology. Instead, I've broken the technologies down into a number of smaller 'starter' projects so that I can get familiar with each in isolation and then add more technologies gradually.
Starting out, I've built a couple of simple projects in Scala, using SBT, to get familiar with the language and to improve my functional programming knowledge. Next, I've started to add a web-based UI using the Lift framework. Once I'm happy with these, I'm then going to build something using Scala, Akka and Cassandra. Thus, by the time I actually start building the final solution, the only unknown technology will be automated deployment to AWS (and even that is not a total unknown, as I've manually deployed to AWS before).
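To give a feel for the scale of these 'starter' projects, here is a minimal sketch of the kind of thing I mean, written against the classic Akka actor API (roughly Akka 2.x); the names GreeterActor and StarterApp are purely hypothetical. The point is simply to get one new technology building, running and testable in isolation before it has to carry any real application logic.

    // Minimal Akka 'starter' project: one actor, one message, nothing more.
    import akka.actor.{Actor, ActorSystem, Props}

    // A trivial actor that just reacts to a String message.
    class GreeterActor extends Actor {
      def receive = {
        case name: String => println(s"hello, $name")
      }
    }

    object StarterApp extends App {
      val system = ActorSystem("starter")                            // the actor runtime
      val greeter = system.actorOf(Props[GreeterActor](), "greeter") // create the actor
      greeter ! "world"   // asynchronous, fire-and-forget message send
      Thread.sleep(500)   // crude wait so the message is processed; fine for a throwaway project
      system.terminate()
    }

Once something this small builds and runs under SBT, layering the next technology on top is a much smaller step than taking everything on at once.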
Building something with so much new technology is always a big risk. But by taking an incremental approach to de-risking each technology, I can keep the risk within acceptable levels and ensure that my final solution is not crippled by a lack of knowledge of my chosen technologies.
This then leads on to my final rule...
The Build a 'Strawman' Rule
Regardless of how you select the technologies that will be included in the project, the first iteration should always be to build an end-to-end 'strawman' architecture. This strawman should include all of the selected technologies playing nicely together. Builds should be automated, and initial automated test suites (at all levels) should be up and running. Finally, any automated deployment should be in place. The strawman doesn't need to do much functionally, but if it contains enough processing to allow some basic scalability and performance tests as well, then even better.
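As an illustration, here is a minimal sketch of what one vertical slice of such a strawman might look like; the names (SubjectRepository, SubjectService and so on) are purely hypothetical and the stubs stand in for the real technologies. Each layer does almost nothing, but every layer exists and the whole slice can be exercised end to end by an automated check.

    // Persistence layer stub (the real application would use an actual database here).
    trait SubjectRepository {
      def findName(id: Long): Option[String]
    }

    class InMemorySubjectRepository extends SubjectRepository {
      private val data = Map(1L -> "placeholder-subject")
      def findName(id: Long): Option[String] = data.get(id)
    }

    // Service layer: barely any logic, but the seam is in place.
    class SubjectService(repo: SubjectRepository) {
      def describe(id: Long): String = repo.findName(id).getOrElse("unknown")
    }

    // Presentation layer stand-in: renders a plain string for now.
    class SubjectEndpoint(service: SubjectService) {
      def get(id: Long): String = s"subject: ${service.describe(id)}"
    }

    // A minimal end-to-end check that the layers wire together.
    object StrawmanCheck extends App {
      val endpoint = new SubjectEndpoint(new SubjectService(new InMemorySubjectRepository))
      assert(endpoint.get(1L) == "subject: placeholder-subject")
      println("strawman slice wired end to end")
    }

Replacing the in-memory stub with the real persistence technology, or the string renderer with a real web layer, then becomes an isolated change rather than a big-bang integration late in the project.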
By selecting just the right blend of new and existing technologies, and by spending time de-risking when there are many new technologies, we can ensure that we always start a project confident that we can make the technologies work for us. Then, by starting with an architectural 'strawman', we further ensure that the technologies work together and eliminate the huge integration risk that we would otherwise hit late in the project, when it might be too late to resolve.
The problem I see with rules like this is that there is some kind of urge to have to use something new.
Good, solid applications are not always built on the latest frameworks or languages, but on good design and an understanding of what is needed.
Sometimes it's about "if it ain't broken, then don't try to fix it"
In any case, the criticality of a piece of technology (i.e. the core) should be taken into consideration when deciding if it is one of the new ones to be used.
It also depends on the requirements, which should inevitably drive the technology used, but as we know, in the real world requirements are sometimes shaped by the technology, otherwise you'd end up writing everything bespoke.
Thanks for taking the time to comment. Wise words. Perhaps my term 'rules' is too restrictive and 'guidelines' would be a better alternative.
I also agree that good requirements and design should shape the application, rather than just an urge to utilise the latest technology. I also think that team makeup plays a big part, as some teams are just much more able to integrate new technologies than others.
You mention another very good point, that of team make-up, and it goes beyond a team's ability to integrate new technologies to include what their background is.
This isn't to say, for example, that because a team has most of its experience in mainframes, it must develop with a mainframe mentality. It is more that each project needs to be an evolution, something the team as a whole can be carried forward with.
P.S. Your 'About me' says
ReplyDelete"I'm and agile architect"
I assume you mean
"I'm an agile architect"
:)
Thanks. Corrected.