I have to confess, I'm somewhat of an Apple fanboi. I've been using Apple kit for a long, long time. I started out with the Apple II and was also a user of the early Macs. I also had the misfortune to use Apple kit during the wilderness years of the early/mid-90s. After that I switched to a mix of Linux and Windows.
After Vista rendered all my PCs pretty much useless I returned to the Apple camp with the purchase of a MacPro, followed later by a 13" MacBook. Unfortunately the MacBook had a glass of lemon squash spilled over it (it actually still works, but the battery no longer charges properly), so I recently updated to a 15" MacBook Pro (Intel Core i5, 8GB RAM, 256GB SSD). I have also had an iPhone 3G and have recently upgraded to the iPhone 4.
Sadly, after just 3 months my new MacBook Pro has developed a serious memory-related fault and has had to go in for repair. Hence this post...
Apple kit has always cost that little bit more, but for the most part I've always been happy to pay the premium. I have always found the hardware to be of exceptional quality both in terms of specification and build. I've also always found the software to be that little bit better than anything else.
I really get the impression that this is now changing. Both my MacBooks have been the unibody design (which really beats the tacky plastic cases of other manufacturers) but both have been not quite as good as I hoped. The first has a DVD drive that doesn't always pick up the disk first time, and disks sometimes stick when being inserted. My newer machine has died after just 3 months. Also, both machines have been less stable (i.e. more crashes) than my MacPro has ever been.
Despite what people are saying on the web, I can't fault my iPhone 4. Antenna seems fine to me. However I've certainly found the newer iPhone 3.x and iOS 4 releases to be more buggy than the earlier versions that I had on my 3G.
I'm beginning to wonder if I'm now just paying for the Apple name rather than the extra quality that used to be built in for their premium price.
Also, seven days' lead time for an appointment at a Genius Bar in the London Apple stores! Seriously, they must be joking. My MacBook is in with an authorised service provider because I just refuse to accept that no one at an Apple store can even look at a non-starting MacBook for a week!
I'm still willing to pay my premium for Apple products (for the time being) but I certainly think they need to invest more of their cash mountain back into quality and customer service.
Thursday, 19 August 2010
Thursday, 12 August 2010
Agile done badly.....sucks!
In this article Peter Viscarola describes why he thinks that the Agile software development methodology sucks. In many cases he’s right, the process he describes does suck - but he’s not actually describing a good Agile process. His experience seems very tainted by the bad interpretation and adoption of Agile that I see in a (sadly large) proportion of companies who have jumped on the agile bandwagon.
Let’s look at some of his points and show that they apply to ‘Agile done bad’ as opposed to ‘Agile done well’...
Peter states:
For the life of me, I do not understand why I would ever want to write code for a complex facility before I have had the chance to design that facility and consider how it will interact with everything else in its environment. ...
But Agile is all about “just writing code.” System architecture? Design? In Agile, that's not in the plan. What's in the plan is “code it up and see how it works” and “you can fix it in the next sprint.”
I say:
There’s nothing in Agile that says you have to turn off your brain, stop thinking and just churn out mindless code. Just because Agile means you are working in short increments does NOT mean that you can skip the design stage.
The planning session at the beginning of every sprint should primarily be a design session where the team work out the detail of how they are going to complete the sprint’s work. I’ve even seen planning sessions produce UML diagrams, entity models, storyboards and a myriad of other design detail that the team use for the sprint. They are not just about producing a mindless list of estimated tasks!
Additionally, a good agile team will undertake many short design sessions during the sprint. Each time something new is learnt that might challenge the design identified in the planning meeting (such as a TDD session identifying a better approach), the team should break out around the whiteboard and discuss how this changes the design that the team is working towards.
What about system architecture? Interactions? And so on? Well, your agile team should have a good Technical Architect, and this person should be spending a proportion of their time thinking about these things, ensuring the system architecture is fit for purpose and keeping the bigger picture in their head. They can then feed this into the sprint planning and ongoing design to ensure the work being undertaken fits into the wider solution.
Peter states:
And, that brings me to the second thing that makes Agile suck so badly: The whole “estimation” process. Every time somebody insists that I estimate how long it's going to take me to implement some particular functionality, and in Agile it's not uncommon to have to do these estimates with great precision,...
Why is estimating software development time so hard? Duh! Truthfully, I can’t even estimate with good precision how long it’ll take me to go to the store and get a case of beer.
I say:
I believe this misses the entire point of estimating in an Agile project. We should accept that no estimate can ever be perfect. Even the most skilful estimator can only give a value based on what they currently know. Any unknown or unexpected event that comes along will render the estimate wrong and in need of revision.
So, why bother estimating then? The purpose of estimating individual tasks is not to nail down exactly how long everything will take. Its purpose is to ensure visibility for the current iteration. If things are being completed quicker than expected then the iteration will run out of work, so effort needs to be put into planning what extra work to pull in. If things are coming in above estimate then the team knows that there were more unknowns or unplanned events than expected. This means that they might need to change approach, go back and design a bit more, or drop some of the planned work from the iteration.
Estimates are about transparent monitoring of progress that lets the team adapt their plans. They’re not about accurately defining exactly how long something will take. Sadly I see too many Scrum Masters/Managers breathing down developers' necks asking why they have spent 1 hour more than estimated on their current task - this is just plain wrong!
But even if estimates are only a means of achieving transparency, it's still worth making them as accurate as possible. How do we do this? We eliminate unknowns by using the planning session to design what we are going to do before estimating it. We also have a good BA/TA who has been pre-thinking the big picture, finding the unknowns and turning them into knowns even before we start detailed planning. Then we only have to deal with the unexpected!
Peter states:
And that leads me to the final Agile precept that fries my potato: User Stories. Every time I hear somebody say “I've entered that as a user story” I want to puke. Why? User stories are just that: Stories. They're data. They're not wisdom from the ages. ...
...You code to these particular stories. No, you don't get a chance to think through the overall experience for any user. This is Agile software development. You don't get to think. You're not allowed to design. You're allowed to “get some code working” so you can try things out. And that code just needs to meet the user stories, and pass the tests that were so lovingly crafted and stored with those stories. Anything else? Well, that's for next sprint.
I say:
Any Agile project that just codes user stories purely in isolation from one another or the bigger picture deserves to fail! If this were the Agile process then it would totally suck! Sadly for many Agile teams - and by the looks of it the only ones that Peter has worked on - this is the reality of Agile.
But there is a better (and correct) way. User Stories are a mechanism for describing a particular feature that a user would find valuable. They should not be treated as a silver-bullet recipe where just coding the story will deliver a great sprint and a great product. You still have to DESIGN what you are going to build to deliver a story. The team must consider how the story will be implemented, how it will fit into the architecture and the bigger solution, and what architectural improvements will be needed. User interaction designers still need to consider how this new feature fits into the UI to ensure a seamless, efficient experience, and so on.
Agile is NOT about just taking a User Story and hacking out the minimum code to meet the Conditions of Satisfaction for that one story. It’s about breaking development down into a number of small, more easily managed iterations. Each iteration should deliver one or more User Stories, but in doing so it must also consider the evolving System Architecture, must design each feature thoroughly so that it ‘fits’ properly into the wider system, and must produce documentation of the features that have been built. You still have to think, plan, design, code, test and consider the bigger picture. You just do it in more manageable chunks.
Peter, if all you have done on agile projects is turn off your brain and hack out the minimum code required to get a story to pass, then I’m not surprised you think Agile sucks. But those projects haven’t been following an agile methodology - they (and you) have just been hacking code.
Addendum
Since writing this I've had a good email debate with Peter Viscarola. In particular he gave me more context about the type of projects he was trying to develop using agile. I think the nature of these projects (very complex and detailed kernel driver work) is not a naturally good fit for the agile approach.
Agile is no silver bullet - it's just another tool in your toolkit. You have to select the correct tool for the job and use it in the correct way. Sometimes you don't have a nail to knock in, so picking up the same trusty hammer is not the best solution. However, other times you do have a nail and the trusty hammer is the right tool. I still believe it's wrong to say that agile sucks when it's being used incorrectly or for the wrong purpose.
Agile can work on complex projects such as Peter's. Firstly, though, you have to accept longer sprints (4-6 weeks), as two weeks is just too short to build complex features. Secondly, User Stories are probably not the best way to specify kernel driver features. Finally, projects such as these usually need a couple of technical spikes with senior architects to define the high-level architecture and concepts; full team sprints can then be used to implement those concepts.
Labels:
agile,
estimation,
TDD,
user stories
Wednesday, 11 August 2010
Programming Challenge: BINGO Part 2
In my last post I started looking at generating random Bingo cards as an educational game for my children. I started by generating a random row. In this post I'm going to look at generating an individual card.
If you remember, a standard UK Bingo card has 3 rows and 9 columns. Each row has 5 number cells and 4 blank cells. Each column must have at least one number cell but could have 2 or 3 number cells.
In an imperative programming model there are two common approaches to solving this problem:
- Generate a grid for the entire card then for each cell check if it has to be a number/blank cell (to meet row and column rules) or can be randomly set
- Use the row generating mechanism but supply hints on which cells are safe to generate Numbers/Blanks and which cells are not
For a more functional alternative, I chose simply to reuse the existing row generator: generate the rows for a card, check whether every column contains at least one number, and if not roll back and try again. It turns out that, because the number of individual card permutations is fairly low, on average I manage to generate a card with at least one number in each column within just 3 attempts. Many times it's possible to generate a perfect card on the first attempt; the worst I've seen is 25 attempts.
Quite clearly this is less efficient than generating a perfect card every time. However, the simplified code (i.e. very little conditional logic) makes the solution much easier to reason about for correctness. Like everything in software, it's a tradeoff. For this project high volume and ultimate performance are not essential, so I'm happy to live with the downsides in exchange for the simpler code that results.
Here's what I ended up with for the card generator:
1: class CardGenerator(rowCount: Int, columnCount: Int, slotCount: Int, random: java.util.Random) {
2:
3: private[this] val Retries = 5
4: private[this] val rowGenerator = new RowGenerator(columnCount, slotCount, random)
5:
6: def makeCard(): Card = {
7: var rows = List[Row]()
8: while ( !fullCard(rows) ) rows = addRows(rows)
9: Card(rows)
10: }
11:
12: private[this] def fullCard(rows: List[Row]) = rows.length == rowCount
13:
14: private[this] def addRows(rows: List[Row]): List[Row] =
15: if ( fullCard(rows) ) rollbackIfNotValid(rows) else addNextRow(rows)
16:
17: private[this] def rollbackIfNotValid(rows: List[Row]) =
18: if ( rowsAreValid(rows) ) rows else rows.tail
19:
20: private[this] def rowsAreValid(rows: List[Row]): Boolean = {
21: rows match {
22: case Nil => true
23: case _ if ( rows.filterNot(_.cells == Nil) == Nil ) => true
24: case x if ( rows.map(_.cells.head).contains(CardCell(Item)) ) => rowsAreValid(x.map(row => Row(row.cells.tail)))
25: case _ => false
26: }
27: }
28:
29: private[this] def addNextRow(rows: List[Row]): List[Row] = {
30: for ( i <- 0 until Retries ) {
31: val resultRows = addRows(rowGenerator.makeRow :: rows)
32: if ( fullCard(resultRows) ) return resultRows
33: }
34: if ( rows == Nil ) Nil else rows.tail
35: }
36: }
37:
One interesting observation: I used an imperative loop construct for implementing the retries, as this felt more natural than doing it recursively. One of the strengths of Scala is the ability to mix imperative and functional approaches in this way. Also of note are the map calls on line 24, which I use first to pull out all the cells in the current column for validation, and then to strip the current column from all rows and pass the remaining columns recursively to the same function. Map is very powerful when you know how to use it.
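To show how this fits together, here's a rough usage sketch for a standard UK card (it assumes the Card case class simply exposes its rows in a rows field, which isn't shown above):

// A rough usage sketch: 3 rows, 9 columns, 5 number cells per row.
val generator = new CardGenerator(3, 9, 5, new java.util.Random)
val card = generator.makeCard()

// Sanity check (assumes Card has a 'rows' field): every column of the
// generated card should contain at least one number cell.
val allColumnsHaveNumbers = (0 until 9).forall { column =>
  card.rows.exists(_.cells(column) == CardCell(CellType.Item))
}
println("Every column has a number: " + allColumnsHaveNumbers)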
In my final post on this subject I'll look at using the same approach to generate a full page of cards, and at why this proves not to be so successful.
Labels:
functional programming,
programming,
scala
Friday, 6 August 2010
Programming Challenge: BINGO
Recently we went camping to a site with evening entertainment. One activity they offered was Bingo. This included both adult and children games. We discovered that my five year old loves Bingo and that it's also a great way for her to learn her numbers. So, my wife suggested that it would be great to have a way to play Bingo as a family from time to time.
I like a little programming challenge, so I thought I'd build a little Bingo calling app that we can use at home and when on holiday. However, the first thing I decided we needed was a way to generate Bingo cards to mark the numbers off against. I wanted something flexible so that I can generate 'proper' cards (both UK and US style) plus simpler cards for younger children - perhaps even Picture Bingo cards as well.
I decided to start by trying to generate non-US (UK) style cards as these are by far the most complex. Actually, it turns out they are really complex to generate. The basic rules are as follows:
- Each card is presented as a grid of 3 rows by 9 columns
- Each row has 5 cells containing numbers and 4 cells that are blank (Thus, on each card there are 15 cells with numbers and 12 with blanks)
- Each column must have at least one number cell, but can also have two or three number cells
- The first column can contain only numbers 1-9, second column 10-19, third column 20-29 and so on until the last column which can contain only numbers 80-90
- Cards are presented on a page made up of six separate cards. On each page each of the numbers 1 to 90 must appear ONCE and ONLY ONCE
So, we're looking at an algorithm that generates random rows, combines these into random cards and then combines six of these cards on a single page all while honouring the rules about number of cells per row and column.
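As an aside, the column number ranges in these rules map naturally onto a small helper function. Here's a sketch for later use (columnRange is just a name I've picked; actually filling in the numbers is a task for once the card templates can be generated):

// Sketch: the range of numbers allowed in each column (0-based index).
// Column 0 holds 1-9, column 8 holds 80-90, the rest hold col*10 to col*10+9.
def columnRange(column: Int): Range = column match {
  case 0 => 1 to 9
  case 8 => 80 to 90
  case c => (c * 10) to (c * 10 + 9)
}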
My current language of choice is Scala as I like its power and flexibility. I'm also trying to improve my functional programming skills and this looked like a perfect challenge for adopting the functional approach.
So, I started out with something simple to get my head around the problem. I decided my first goal would be to generate a single row containing the required template pattern of 9 columns, with 5 that will contain numbers and 4 that will be blank (putting in numbers will be a later task once I can generate the card templates). Starting with the test case:
1: class RowGeneratorSpec extends FlatSpec with ShouldMatchers {
2:
3: "A Row Generator" should "generate a row with the specified number of columns" in {
4: val generator = new RowGenerator(9, 5, new java.util.Random)
5: val row = generator.makeRow
6:
7: row.cells.length should be (9)
8: }
9:
10: it should "generate the specified number of slots" in {
11: val generator = new RowGenerator(9, 5, new java.util.Random)
12: val row = generator.makeRow
13:
14: row.cells.filter(_ == CardCell(Item)).length should be (5)
15: }
16:
17: it should "generate the correct number of blanks" in {
18: val generator = new RowGenerator(9, 5, new java.util.Random)
19: val row = generator.makeRow
20:
21: row.cells.filter(_ == CardCell(Blank)).length should be (4)
22: }
23:
24: it should "generate the same row with the same random seed" in {
25: val generator1 = new RowGenerator(9, 5, new java.util.Random(1L))
26: val generator2 = new RowGenerator(9, 5, new java.util.Random(1L))
27:
28: compareCells(generator1.makeRow.cells, generator2.makeRow.cells)
29: }
30:
31: private[this] def compareCells(lhs: List[CardCell], rhs: List[CardCell]): Unit = {
32: if ( !lhs.isEmpty ) {
33: lhs.head should be (rhs.head)
34: compareCells(lhs.tail, rhs.tail)
35: }
36: }
37: }
38:
Next I added some domain objects:
1: object CellType extends Enumeration {
2: type CellType = Value
3: val Blank, Item = Value
4: }
5:
6: case class CardCell(cellType: CellType)
7: case class Row(cells: List[CardCell])
8:
Then came my initial version of the code to generate the row:
1: class RowGenerator(columnCount: Int, slotCount: Int, random: java.util.Random) {
2:
3: require(slotCount <= columnCount)
4:
5: def makeRow(): Row = {
6: val indexes = selectIndexes
7:
8: val cells = Array.ofDim[CardCell](columnCount)
9: for ( index <- 0 until columnCount ) {
10: cells(index) = if ( indexes.contains(index) ) CardCell(Item) else CardCell(Blank)
11: }
12: Row(cells.toList)
13: }
14:
15: private[this] def selectIndexes = {
16: val indexes = scala.collection.mutable.Set[Int]()
17: while ( indexes.size < slotCount ) indexes += random.nextInt(columnCount)
18: indexes
19: }
20: }
21:
This code works fine, but it's pretty imperative in nature. Populating a set of indexes and then setting values into an array is typical of the code that would be written in languages such as Java or C++. I therefore had another go, aiming for a more recursive, functional solution:
1: class RowGenerator(columnCount: Int, slotCount: Int, random: java.util.Random) {
2:
3: require(slotCount <= columnCount)
4:
5: def makeRow(): Row = Row(addToRow(Nil))
6:
7: private[this] def addToRow(cells: List[CardCell]): List[CardCell] = {
8: val slotsFilled = cells.count(_ == CardCell(Item))
9: val cellsRemaining = columnCount - cells.length
10: val slotsRemaining = slotCount - slotsFilled
11:
12: (cellsRemaining, slotsRemaining) match {
13: case (0, _) => cells
14: case (_, 0) => addToRow(CardCell(Blank) :: cells)
15: case (cr, sr) if ( cr == sr ) => addToRow(CardCell(Item) :: cells)
16: case _ => if ( random.nextBoolean ) addToRow(CardCell(Item) :: cells)
17: else addToRow(CardCell(Blank) :: cells)
18: }
19: }
20: }
21:
This new code is much more functional in nature, building the row by prepending cells onto a list. A match on the state of the supplied list triggers either the appropriate return or a recursive call.
The code is fairly simple but also very flexible, allowing me to generate a range of different row configurations in the future. I'm also using an externally supplied Random instance so that I can seed it with a known value and generate the same rows consistently (handy for future automated call-checking code should I decide to add it).
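Something like the following illustrates the idea (the seed value is arbitrary; regenerating rows from a stored seed is just one way such checking could work):

// Two generators seeded with the same value produce identical rows,
// so a row (or card) could later be regenerated from a stored seed.
val rowA = new RowGenerator(9, 5, new java.util.Random(42L)).makeRow
val rowB = new RowGenerator(9, 5, new java.util.Random(42L)).makeRow
assert(rowA == rowB) // Row is a case class, so equality is structural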
I'm happy with this part of the solution. In the next post we'll look into the first of the more complex cases: generating an individual Bingo card that complies with the row and column rules.
Labels:
functional programming,
programming,
scala
Thursday, 5 August 2010
The Value of Hardening
In my last post I looked at projects suffering with large amounts of technical debt and ever slowing velocities. One solution that one of these projects put forward was to undertake one (or perhaps more) hardening sprints. In these sprints, all teams would stop development of new functionality and focus on two things:
- Fixing as many of the outstanding defects as possible
- Addressing and improving the areas of the code that exhibit the largest clusters of defects
Firstly, all code will contain some defects. Even the best code created using TDD principles will still have the occasional defect. These sorts of defects can easily be addressed through the normal sprint mechanism - just get the business to prioritise them correctly along with the user stories and they will get resolved. There's no need for a dedicated sprint to fix these.
Secondly, we look at the defect clusters. The problem comes with trying to fix defect clusters and improve the code in a single hardening sprint. My feeling is that this is just papering over the cracks, which, while perhaps offering some quick wins, fails to address the underlying causes of those cracks.
In most applications, defect clusters occur for one of two key reasons:
- The product has been crammed with too many features without sufficient refactoring of the architecture and code. The net result is that the code is now too complex, poorly structured and difficult to maintain. Adding new features is significantly more likely to break other features that were working.
- An initial rough architecture and code base was rushed through to meet a deadline and has not been sufficiently refactored prior to trying to scale it up - resulting in poor performance and reliability.
In the first case, trying to fix defects doesn't resolve the underlying complexity issues. In fact, fixing one defect is significantly likely to introduce other defects - especially if the automated tests are lacking (common in projects that have reached this state).
Additionally, trying to make smaller fixes and refactorings to this complex code initially results in less stability and higher defect discovery rates. Given that the aim of a hardening sprint is to make the product more stable and bullet-proof, these refactorings and small improvements can actually have a negative effect on quality in the short term.
In the second case, trying to address performance and stability in a single sprint just won't work. Achieving highly reliable and performant code can only come from an ingrained philosophy within the project: all code must be well written, well tested and proven performant as part of its development. Taking some existing code and trying to tweak it until it meets that bar is largely a futile effort.
Given the above, there seems little value in a hardening sprint (or sprints). A much better approach is to plan the technical debt work out as a number of DETAILED stories. These should address reducing the complexity of the architecture and code in a structured way, refactoring in a structured way, improving the thinking about scalability, and improving project focus and processes to increase the quality, reliability and performance of the architecture and code. The business should then be forced to prioritise these above new functionality within the ordinary sprint process.
This approach prevents the short-term, quick-fix mentality of hardening sprints. Such sprints consume large amounts of resource, produce only surface improvements and leave all the fundamental problems intact, while at the same time fooling the business into thinking that things are improving. Never a good combination.
Labels:
agile,
architecture,
performance,
reliability,
scalability,
scrum,
sprint,
TDD,
technical debt
Tuesday, 3 August 2010
Drowning in the Technical Debt Mountain
Over the last few years I’ve worked on a number of Agile projects that have stumbled into problems. Looking back, all of these projects have had similar characteristics. Namely:
- They were all long-term strategic projects vital to the success of the company building them
- They were all long-running projects with at least 18 months development behind them
The problem that each of these projects encountered was that over time it became increasingly difficult and time consuming to add new features to the products. The number of bugs and issues slowly rose, and each one took longer to fix than the last. Early in the project, teams had velocities measured in tens of points, but these gradually reduced until single-digit velocities were the norm. Why does this happen?
Taking a broad look across these projects highlights a number of problems that they all had in common:
Build Up of Technical Debt
Probably the most critical problem was that each of the projects had allowed Technical Debt to build up. They had all started well with good intentions and with emergent architectures. However, in the rush to get new features in, they neglected the continuous architectural improvement needed. Releases went out with rushed development and the intention to ‘go back and fix that’, but the business pressure to add new functionality always won over technical improvement.
Over time the interest on the technical debt builds up and you spend so long servicing it that velocity drops. You then end up in a death-spiral catch-22: you can't add new functionality without extensive time and the introduction of many new defects, but you can't just stop adding functionality and rewrite major portions of a critical product that is already live.
Too Much Focus on Frameworks
Another thing that I noticed about all these projects is that they spent a significant amount of time early on developing ‘frameworks’ and aiming to ‘develop for reuse’. This just wouldn’t happen on a short-duration project.
Unfortunately, building smart solutions in this way just doesn't work in an agile world. Firstly, you just can't fully know all the features the framework will require in advance. You therefore try to preempt future requirements, which at best results in unnecessary work. At worst (and most commonly) you end up building a framework that imposes a way of working you constantly have to shoe-horn future work into.
Asking the business to fund a major rework of a core framework mid or late in the project usually results in some very hard business questions, so development teams limp on with an increasingly complex and not-fit-for-purpose solution - which is ultimately the source of much of the technical debt build up.
Building Depth Before Width
Another problem that all these projects encountered is that they aimed to build one component of the system to completion rather than focusing broadly across the whole architecture. Thus, they spent a lot of time building some really cool features into one part of the product but then had to rush other parts when that time started to run out.
The net result of this approach is that you get architectural complexity building up in one area and insufficient architectural development in others. Ultimately you hit a number of problems such as scalability and reliability issues in the areas where not enough time was spent and maintainability issues in the areas where too much time was spent and too much complexity was added. All of these issues add to the burden of technical debt.
So, how do we avoid these problems happening? There are a number of key things that you need to do:
- Build an Architectural Straw-Man: The first thing that a long-running agile project must do is spend a sprint (or three) building a broad architectural straw-man that addresses the full breadth of the architecture. Focus on the smallest possible number of key features that touch the entire product without adding too much depth.
- Prove the Straw-Man: Make sure that the architectural straw-man is proven in terms of scalability, reliability and performance. It should have plenty to spare in all areas. Ideally tests for these non-functional areas should be automated so that they can be repeated as each new feature is added.
- Focus the Business on Critical Features: Often the business gets focused on trying to build something that is feature complete in one area. Make sure they don’t do this. Instead get them to focus on what is most important across the entire product. Build breadth of functionality rather than depth in just one area. Avoid at all costs release plans that focus on just one component of the solution.
- Don’t Build Frameworks!: There’s absolutely no point in building any frameworks or coding specifically for re-use. It just doesn’t work. Instead, just work on the straw-man and then on adding new features. When a new feature overlaps with something already existing then refactor out the duplication - do nothing more than that! Let the frameworks evolve and emerge through this process of refactoring out duplication, don’t try to preempt what might be needed.
- Continuously Validate the Architecture: Each new feature added to a product might have required some architectural change, might have required refactoring out of duplication or might even have added a new component. The key is that the development of each new feature should include work to validate that the scalability, reliability and performance have not been compromised. There should also be time available to improve the architecture before moving onto the next feature.
By starting with an architecture that is sound and leaving it in a sound state at the end of every feature addition you avoid the build up of technical debt. Many would argue that this slows down delivery of feature releases and the addition of new functionality. This may be true, but for a long-term strategic project the benefits of continued future productivity outweigh this many times over. Also, avoiding building frameworks and minimising initial complexity often saves so much time that initial releases come in quicker anyway.
One question remains: how do we save any of the projects that I looked at that are already drowning under their mountains of technical debt? There's no easy solution to this question - often cancelling the project may be the best answer. However, where that's not feasible, here are some ideas that I'd like to explore further:
- Strip out existing complexity - find the main areas of the project that are too complex and thus have high defect rates and which are difficult to maintain. Remove features and complexity from them until they become simple enough to fix and refactor. While this might not be popular with the business, a product that works and that can be maintained and enhanced is much better than one packed with features but which is non-functional and costs a fortune to maintain.
- Move beyond the existing architecture - accept that what went before is perhaps beyond saving. Start out with a new architectural straw-man that avoids fancy frameworks and is proven scalable, reliable and performant. Build all new functionality against this architecture and gradually migrate existing functionality to it. Prove the architecture and refactor for EVERY feature.
- Write off all existing technical debt - and NEVER let technical debt build up again.
- Re-prioritise - work with the Product Owner to re-prioritise the backlog to focus on building the broadest set of essential features first from now on rather than trying to add every bell and whistle from the start.
- Shoot any developer on the team who still thinks that building re-usable frameworks was a good idea - seriously!
Labels:
agile,
architecture,
performance,
reliability,
scalability,
technical debt