12/08/2014 at 09:52 #5728
In IFPUG’s Beyond MetricViews, Carol Dekkers seeks to marry International Function Point Users Group definitions with equivalent concepts in agile/iterative processes.
Carol Dekkers’ paper does a good job of describing how to overcome the challenges of applying the IFPUG method to measure the size of software delivered by an Agile project, in a way that is consistent with how the same software would be measured if delivered by a waterfall project.
What is also clear from the paper is that this approach cannot be used for estimating effort or monitoring performance at the sprint or iteration levels, since the functionality to be measured as ‘delivered’ at any of these intermediate stages does not correspond to the development effort needed to reach that level. The root cause of these difficulties is that functional sizes measured according to the standard IFPUG rules cannot be aggregated, particularly when enhancements are needed (e.g. re-factoring), from sprints to iterations to projects, to delivered systems.
There is, of course, a simpler alternative for measuring, estimating and controlling Agile projects. See the ‘Guideline for the use of COSMIC FSM to manage Agile projects’ in our Knowledge Base.
12/08/2014 at 11:56 #5730
I can agree quite strongly with Charles. In our agile system, once we define the Generic Software Model, everything is either an add or a modify of a CFP. It works very well for predicting releases, iterations, etc., and is simple enough for every developer in the team to understand and contribute to. Of course there are some wrinkles in places, but the premise still holds very firm.
Truth be told, I tuned out about three quarters of the way through the article, as there were too many mappings and concepts I needed to keep in my head. These things were too complex for my simple CFP mind to comprehend.
I’m not being trite or smart, just giving an honest reaction from an on-the-ground practitioner whose eternal struggle is answering the question “how long will it take?”
dk- [a benchmarkable agilist]
12/08/2014 at 13:16 #5732
Great to hear about practical experience with COSMIC & Agile. There is one topic which I struggle with. Maybe you would like to share your practical experience with this issue (if it is an issue)?
In a nutshell, the issue arises when you apply COSMIC on a per-iteration basis to determine the developed size (as opposed to the product size). In agile, you often split big functional processes into small stories that are easy to handle within one sprint/iteration. This means some of the same data movements are modified in consecutive iterations, inflating the developed size.
But ‘real’ changed requirements also lead to modified data movements. Do you separate these two categories (changed due to incremental delivery vs. changed due to changed customer requirements) and if so, how? When I want the developed size to be an indication of the ‘delivered value,’ I don’t want it to contain artifacts/points which are due to the incremental nature of working in agile, because it would give agile an ‘unfair advantage’ when benchmarked against non-agile projects. What is your take on the matter?
12/08/2014 at 14:05 #5734
We do not have this issue. We only count CFP in terms of the Functional Process size at the top level granularity. This means in practice that should a Functional Process span multiple iterations, then no function points are counted until the Functional Process is complete and these Function Points are then divided over the iterations. Until the Functional Process is in production we ignore the “count” of modifications, from either changing requirements or development process design. Once software is in production, then we track modification counts.
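The allocation policy described above can be sketched in a few lines. This is only an illustration with hypothetical names and numbers, not Denis’s actual tooling: a Functional Process’s CFP is recognised only once the process is complete, and its size is then divided over the iterations it spanned.

```python
# Sketch of the counting policy described above (hypothetical helper).
# A Functional Process's CFP is counted only once it is complete; the
# size is then spread evenly over the iterations it spanned.

def allocate_cfp(process_cfp, iterations):
    """Divide a completed Functional Process's CFP over its iterations."""
    if not iterations:
        raise ValueError("a completed process must span at least one iteration")
    share = process_cfp / len(iterations)
    return {it: share for it in iterations}

# Example: an 8 CFP process developed across iterations 3, 4 and 5
# is credited to each of those iterations in equal shares.
print(allocate_cfp(8, [3, 4, 5]))
```

Until the process completes, nothing is credited to any iteration, which is exactly why intermediate modifications drop out of the count.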
We initially tried measuring developed size, but found we were constantly fighting mixed granularities and focusing on effort rather than size.
My observation is that software development is a stochastic process, and trying to make detailed predictions about things like developed Function Points is tightening the noose around our own necks.
And lastly, in the end, the only thing that matters is the value delivered to the business (which arguably is the number of data movements) not how many pick-ups and set-downs you chose to achieve the outcome.
We have found that adopting this approach provides us with excellent predictability and comparability across projects, teams, time and technologies.
dk- [a benchmarkable agilist]
12/08/2014 at 16:07 #5736
Thank you for your answer. The way you deal with this question (only counting functional processes after they are fully developed) makes a lot of sense. Funnily, this is somewhat in line with my intuition and with Carol’s original suggestions about IFPUG, and apparently a little less in line with what the COSMIC Agile Guideline says in 3.2.1: “Some modifications[…] may also be necessary if a large chunk of required functionality must be spread over several increments or planned to be implemented over more than one iteration. All this is referred to as ‘re-work’.” Maybe this is also a case of different measurement goals and strategies. But I think yours is probably one of the most promising (and apparently proven) ways to deal with this scenario.
I completely agree with your statement, which I read as saying that software metrics have to be used at the correct (read: not too small) scale.
One last aspect which I find interesting: You mention that, while the functionality is first being developed, changed requirements along the way are being ignored.
One of the big advantages of agile development is that you constantly check your work against the possibly shifting requirements of the customer, leading to an end product with a much higher chance of being usable right off the bat. Many non-agile projects, in contrast, deliver products which, while in line with the original requirements, don’t meet the current requirements: maybe some 60 to 90% will be used immediately, and then a bunch of change requests are filed to adapt the system to the current requirements. The way I see it, agile saves a lot of these change requests, because change is built in right from the start.
It’s a bit of a pity that, this way, a non-agile approach would seemingly deliver more business value (measured in function points/data movements): once for delivering the original requirements and then again for the changes. Agile, by following the evolving requirements, gets it right the first time and ends up delivering fewer function points because of that, at least when no function points are counted for changes along the way.
How do you deal with this? Do you measure % of usable functionality delivered?
12/08/2014 at 16:08 #5737
P.S.: It may appear that I’m now arguing in the opposite direction just for argument’s sake. I’m really not; this just seems, to me, to be a “damned if you do, damned if you don’t” situation:
- Count all modifications between iterations and you’ll get too many function points (for changed requirements and for incremental delivery)
- Count no modifications between iterations and you’ll get too few function points (missing changed requirements which would be counted as extra “changes” in waterfall)
- Differentiate between modifications due to changed requirements and modifications due to incremental delivery, and I hope you’ve made no plans for the next month or so.
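To make the three options concrete, here is a small illustrative sketch. The data model and numbers are entirely hypothetical; the point is just that the same set of per-iteration changes yields three different developed sizes depending on the counting policy.

```python
# Hypothetical per-iteration changes to data movements, each tagged with
# the reason it was modified: new functionality, incremental delivery,
# or a genuinely changed customer requirement.
changes = [
    {"movement": "Entry: customer", "kind": "add",    "reason": "new"},
    {"movement": "Write: customer", "kind": "add",    "reason": "new"},
    {"movement": "Write: customer", "kind": "modify", "reason": "incremental"},
    {"movement": "Exit: invoice",   "kind": "modify", "reason": "requirement"},
]

def developed_size(changes, policy):
    if policy == "count_all":        # option 1: too many points
        return len(changes)
    if policy == "count_none":       # option 2: adds only, too few points
        return sum(1 for c in changes if c["kind"] == "add")
    if policy == "differentiate":    # option 3: the labour-intensive one
        return sum(1 for c in changes
                   if c["kind"] == "add" or c["reason"] == "requirement")
    raise ValueError(policy)

for policy in ("count_all", "count_none", "differentiate"):
    print(policy, developed_size(changes, policy))
```

Option 3 gives the “fair” number, but only if someone reliably tags every modification with its reason, which is exactly the month of work hinted at above.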
15/08/2014 at 08:16 #5741
A couple of things.
“The way you deal with this question (only counting functional processes after being fully developed)”
This is not really what I intended to say. We do very much count before we start. In “Corporate Reality”, most Agile teams will have some general target scope to arrive at. So essentially we derive an Iteration 0 Generic Software Model. This GSM is then evolved as we go along and each Functional Process count is not completed until the release of the software.
I haven’t read the Agile guidance for a while, but I recall I didn’t really relate to a lot of what it was talking about. It seemed overly complicated for my needs, and it didn’t seem grounded in enough experience. I could be wrong about that though.
In regards to rework, you are correct in suggesting that it is not my goal to count that. All the work that is within the Delivery System boundary is just that, work. Nor is it my goal to compare Traditional Methods with Modern Methods. My interest in counting modified Function Points is to understand how to predict with them. Sometimes also to understand the change frequency of particular Functional Processes such that we can reason about why the delivery system is failing to find a workable solution.
Thinking about this last night I wonder if the software is the same size regardless of the method used and the fact that Traditional methods require additional modifications till they are fit for purpose should be reflected in the ratio between added/modified. But I think these modifications can only be counted after the software has moved outside the Delivery System boundary.
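That idea of a ratio can be sketched quickly. Again, the function name and figures are hypothetical illustrations, not measured data: if the final product is the same size regardless of method, then the modifications needed after release to make it fit for purpose show up in the added/modified ratio.

```python
# Sketch of the added/modified ratio idea (hypothetical numbers).
# Modifications are counted only after the software has crossed the
# Delivery System boundary into production.

def added_to_modified_ratio(added_cfp, modified_cfp):
    """Higher is better: more of the delivered size was right first time."""
    if modified_cfp == 0:
        return float("inf")
    return added_cfp / modified_cfp

# e.g. a 100 CFP product: one delivery system needed 10 CFP of
# post-release modification, another needed 40.
print(added_to_modified_ratio(100, 10))
print(added_to_modified_ratio(100, 40))
```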
In terms of measuring functionality used, we’ve tried this a couple of times, but found that the delivery system isn’t mature enough to act on this feedback. So we stopped. After all, as you point out, you don’t want to build Function Points if no one is using them.
“end up delivering less function points because of that”
Delivering less software to achieve the same business outcome is the correct perspective. Software is not an asset but a liability. If you can get the same business outcome with the fewest Function Points, then you have a better delivery system, and I feel that should be the goal.
nb. by “Delivery System” I mean the sum of the tools, technologies and people used to make the software.
16/08/2014 at 15:18 #5743
Denis, thanks a lot for your answer. I think we’ve got a really fruitful discussion going.
You stated that you don’t want to compare different methods. Granted, but you also stated earlier that you are comparing projects with each other. That’s what I want as well, but in my case the projects are/have been using different methods.
About the last point:
In my view, software is an investment which incurs cost, but it also has value, and function points are a measure of the size and, ideally, the value of software (they can’t replace a business case, however). In the development scope, developed function points tell you how much software/value a release gives you for your money. So they are part of the output (value) side of development, not the input (cost) side.
If the customer can get the same value for lower cost (quality & schedule assumed constant), this is generally better. But in order for that ‘value’ to really be of value it should meet the current customer’s demands and not require a half dozen changes before it can be used.
Therefore, those changes that often immediately extend a waterfall project can be regarded as not delivering additional value; they are simply additional cost incurred by the change-averse method. They also give those projects an additional ‘unfair boost’ compared to agile projects.
But I can’t change how ISBSG and other databases measure software and I can’t re-size all of our project database. There, change requests count towards function points delivered, fact of life.
So, if I want to be able to compare projects (value=value), and I can’t rectify this problem on the waterfall side, I’ll need to deal with it on the agile side (by identifying requirements changes). That’s, in a nutshell, where I’m coming from. That’s of course, if you accept the premise that software size, a metric for ‘value’, should be independent of the method.