Forum Replies Created

    in reply to: Deep Linking into the Standard #6060
    Denis Krizanovic
    Participant

    Charles,

    I think you want to make it as linkable as possible. In my previous workplace I always wanted to link from the internal wiki into parts of the documents, to help explain the heuristics we had adopted for our context. The more links the merrier. Consider the way Wikipedia creates an anchor for every heading in every article, so you can link deeply into their world.

    regards,
    dk-

    in reply to: Deep Linking into the Standard #6051
    Denis Krizanovic
    Participant

    I'm just thinking of something simple, like an HTML version of the standard: different sections broken up into different pages, with anchors for easy linking. That way, people can promote the standard by linking into it, rather than being asked to download a PDF document with all the “noise” that a standard document must carry.
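    Something like this would do it — a minimal sketch, with made-up section names rather than the standard's actual structure:

        # Split the standard into one HTML page per section, giving every
        # heading an id so it can be deep-linked, Wikipedia-style.
        # Section names below are made up for illustration.

        sections = {
            "measurement-strategy": ["Purpose", "Scope", "Functional Users"],
            "data-movements": ["Entry", "Exit", "Read", "Write"],
        }

        for page, headings in sections.items():
            parts = [f"<h1>{page}</h1>"]
            for heading in headings:
                anchor = heading.lower().replace(" ", "-")
                # e.g. data-movements.html#entry becomes a stable deep link
                parts.append(f'<h2 id="{anchor}">{heading}</h2>')
            with open(f"{page}.html", "w", encoding="utf-8") as f:
                f.write("\n".join(parts))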

    in reply to: COSMIC Data Model #3114
    Denis Krizanovic
    Participant

    Thanks Charles! Some challenging input that is helping me clear things up in my head.

    I found that trying to construct a data model clarified my own thinking about how things are related. So I think getting this out as a teaching tool is a great idea.

    Regarding my unusual interests, you are correct: I am interested in understanding the actual data attributes that are moved in a particular movement. This is because we have found over the last couple of years that a detailed understanding of which data attributes are being moved, and which are not, helps people understand which DG they are “modifying” when working on a system modification.

    In terms of the DG-DA relationship, we have found that in practice we tend towards 1:n. Some of this may be related to trying to use a spreadsheet to model DGs, so I take it on notice that this might be a problem with our model.

    We also struggle significantly with the concept of OOI. So much so that we pretty much ignore the concept, and hence you won't see it in our model.
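    For concreteness, here is roughly the shape of our model — a minimal sketch with hypothetical names, not our actual schema:

        from dataclasses import dataclass, field

        @dataclass
        class DataAttribute:
            name: str

        @dataclass
        class DataGroup:
            name: str
            # 1:n in our practice: each attribute belongs to one group
            attributes: list[DataAttribute] = field(default_factory=list)

        @dataclass
        class DataMovement:
            kind: str                       # "Entry" | "Exit" | "Read" | "Write"
            group: DataGroup
            # the subset of the group's attributes this movement actually moves
            moved: list[DataAttribute] = field(default_factory=list)

        @dataclass
        class FunctionalProcess:
            name: str
            movements: list[DataMovement] = field(default_factory=list)

            @property
            def cfp(self) -> int:
                # COSMIC size: one CFP per data movement
                return len(self.movements)

    Comparing a movement's moved list against its group's full attribute list is exactly what tells a developer which DG a modification touches.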

    As for system context, yes, one day in the future we will expand that. For the moment, I've sneakily packaged it all into one “diagram” attribute, and hope the artifact uploaded there is adequate. I'm just trying to keep the size of this system small. (And yes, the number of data attributes does impact size! Heretical, I know! : ))

    As some background: we use the GSM in our work every day, and every piece of work reacts to the GSM. Every developer interacts with the GSM every day. Hence the need for a tool where we can collaborate on the evolution of the GSM.

    Thanks for the great input,
    denis-

    in reply to: COSMIC Data Model #3111
    Denis Krizanovic
    Participant

    Frank and Ton,

    I have reviewed the Nesma paper (and Ton’s “A Functional Sizing Meta Model” paper too). I would like to test my understanding of the idea.

    It seems to me that the reference model is focused on Functional Processes (and their sub-processes) and modelling them. It seems to imply that a Functional Process contains an attribute that is the sum of its data movements.

    It does not go down to the next level of being explicit about which data attributes intersect with which sub-processes.
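    In code terms, my reading is roughly this — my own sketch with hypothetical names, not the paper's notation:

        from dataclasses import dataclass, field

        @dataclass
        class SubProcess:
            # a single data movement: Entry / Exit / Read / Write
            kind: str
            # what I don't see in the reference model: an explicit link from
            # a sub-process to the data attributes it moves

        @dataclass
        class FunctionalProcess:
            sub_processes: list[SubProcess] = field(default_factory=list)

            @property
            def size(self) -> int:
                # the implied attribute: the sum of the data movements
                return len(self.sub_processes)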

    Is that a fair reading of the situation? I just want to check, as there seem to be some self-referential relationships in there.

    regards,
    dk-

    in reply to: COSMIC Data Model #3110
    Denis Krizanovic
    Participant

    Thanks Frank and Ton.

    I will check out that reference model.

    As for Siesta, I did download it, and I wasn't convinced. I must have spent at least 30 minutes with the tool and I couldn't figure out how to use it. My personal rule is that if I can't work it out in 30 seconds, it's not a good tool. Harsh, I know, but there is so little time in life, and so many tools. I'm thinking that our tool will have a consumer-grade UX, one designed for novices, not experts.

    My goal with this tool is to allow any of my developers, even those who have never really read the COSMIC material, to understand the functional model of the software and be able to size their modifications and additions. In other words, the tool needs to scale across various levels of COSMIC understanding.

    Our tool will end up on GitHub when it's ready, so anyone who wants to can look at it. Bear in mind that my goal is to make this tool usable for our purposes rather than strictly faithful to the model. I know that might make some people a little uncomfortable, but I'm very pragmatic about these things. : )

    I'm still happy to hear feedback on our data model from anyone else out there.

    regards,
    dk-

    Denis Krizanovic
    Participant

    Andreas,

    a couple of things,

    The way you deal with this question (only counting functional processes after being fully developed)

    This is not really what I intended to say. We very much do count before we start. In “Corporate Reality”, most Agile teams will have some general target scope to arrive at, so essentially we derive an Iteration 0 Generic Software Model. This GSM is then evolved as we go along, and each Functional Process count is not finalised until the release of the software.

    I haven’t read the Agile guidance for a while, but I recall I didn’t really relate to a lot of what it was talking about. It seemed overly complicated for my needs, and it didn’t seem grounded in enough experience. I could be wrong about that though.

    Regarding rework, you are correct in suggesting that it is not my goal to count that. All the work within the Delivery System boundary is just that: work. Nor is it my goal to compare Traditional methods with Modern methods. My interest in counting modified Function Points is to understand how to predict with them, and sometimes to understand the change frequency of particular Functional Processes, so that we can reason about why the delivery system is failing to find a workable solution.

    Thinking about this last night, I wonder whether the software is the same size regardless of the method used, and whether the fact that Traditional methods require additional modifications until the software is fit for purpose should be reflected in the added/modified ratio. But I think these modifications can only be counted after the software has moved outside the Delivery System boundary.

    In terms of measuring the functionality actually used, we've tried this a couple of times, but found that the delivery system isn't mature enough to act on this feedback, so we stopped. After all, as you point out, you don't want to build Function Points if no one is using them.

    And lastly,

    end up delivering less function points because of that

    Delivering less software to achieve the same business outcome is the correct perspective. Software is not an asset but a liability. If you can get the same business outcome with fewer Function Points, then you have a better delivery system, and I feel that should be the goal.

    NB: by “Delivery System” I mean the sum of the tools, technologies, and people used to make the software.

    Denis Krizanovic
    Participant

    Andreas,

    We do not have this issue. We only count CFP in terms of Functional Process size at the top level of granularity. In practice this means that, should a Functional Process span multiple iterations, no Function Points are counted until the Functional Process is complete; those Function Points are then divided over the iterations. Until the Functional Process is in production, we ignore the “count” of modifications arising from either changing requirements or development process design. Once the software is in production, we track modification counts.
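    A worked example of the rule, with illustrative numbers only:

        # Illustrative numbers: a Functional Process measured at 6 CFP that
        # spanned iterations 3-5 contributes nothing until it completes; its
        # size is then spread evenly back over the iterations it spanned.

        def spread_cfp(total_cfp, iterations):
            share = total_cfp / len(iterations)
            return {it: share for it in iterations}

        print(spread_cfp(6, [3, 4, 5]))   # {3: 2.0, 4: 2.0, 5: 2.0}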

    We initially struggled with measuring developed size: we were constantly wrestling with mixed granularities and drifting into measuring effort rather than size.

    My observation is that software development is a stochastic process, and as such, trying to predict at a detailed level things like developed Function Points is tightening the noose around our own necks.

    And lastly, in the end the only thing that matters is the value delivered to the business (which, arguably, is the number of data movements), not how many pick-ups and set-downs you chose to achieve the outcome.

    We have found that adopting this approach provides us with excellent predictability and comparability across projects, teams, time and technologies.

    regards,
    dk- [a benchmarkable agilist]

    Denis Krizanovic
    Participant

    I can agree quite strongly with Charles. In our agile system, once we define the Generic Software Model, everything is either an add or a modify of a CFP. It works very well for predicting releases, iterations, and so on, and is simple enough for every developer in the team to understand and contribute to. Of course there are some wrinkles in places, but the premise still holds very firm.
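    Roughly, the bookkeeping looks like this — a minimal sketch with illustrative numbers, not our actual tooling:

        from collections import Counter

        # Illustrative change log: each entry is (kind, CFP) against the GSM.
        changes = [
            ("add", 4),      # new Functional Process, 4 data movements
            ("modify", 2),   # 2 movements of an existing process changed
            ("add", 3),
        ]

        totals = Counter()
        for kind, cfp in changes:
            totals[kind] += cfp

        velocity = 5       # CFP per iteration, from past iterations (illustrative)
        remaining = 20     # CFP still to add/modify in the GSM (illustrative)
        print(totals)                                   # Counter({'add': 7, 'modify': 2})
        print(f"iterations left ~ {remaining / velocity:.1f}")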

    Truth be told, I tuned out about three quarters of the way through the article, as there were too many mappings and concepts I needed to keep in my head. They were too complex for my simple CFP mind to comprehend.

    I'm not being trite or smart, just giving an honest reaction from an on-the-ground practitioner whose eternal struggle is answering the question “how long will it take?”

    regards,
    dk- [a benchmarkable agilist]
