Forum Replies Created

    Charles Symons
    Participant

    Dear Maricela, the COSMIC method, like all functional size measurement methods, cannot account for data manipulation. The COSMIC method assumes that data manipulation is associated with data movements and that the count of data movements is a good-enough measure of the amount of functionality required for any functional process. Most of the time this assumption is valid for the types of software which the method is designed to measure.
    Sometimes you find functional processes which are dominated by data manipulation. Your case where different input data attribute values lead to the need to execute different business rules is a good example where the size measured in CFP may not adequately represent the amount of functionality.
    So you have a choice of ways of dealing with the problem, depending on your circumstances.
    1. If the system being measured is large, ignore this problem. In a large system, some FPs will have a lot of data manipulation and some will have little. As Allan Albrecht said, ‘the Law of Averages’ will apply. The problem of the ‘true’ contribution of this one FP will be minor in comparison to the total functional size; similarly, the effort to solve the problem of the business rules for this one FP will be minor compared with the overall effort to develop the system.
    2. If the FP with the complex rules dominates the software you are trying to measure, or if you are trying to estimate the effort to analyse, develop, test and implement these rules, then separate the task of estimating the effort for the work on the rules from the effort to develop the remainder of the FP, i.e. the 3 CFP for the data movements (see the sketch after this list). This is a perfectly reasonable approach because developing business rules is largely a creative process. It is very difficult to predict how much effort is needed to develop algorithms. Some experienced people may do it quickly; others take more effort or may fail. Anyone can program E = mc² in seconds, but it took a genius to develop this equation. Measuring Einstein’s productivity for developing this equation is not very useful.
    3. You could extend the COSMIC method with your own units of functional size for algorithms of varying complexity (I personally would not recommend this).
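    To make option 2 concrete, here is a minimal sketch in Python (the numbers and the delivery rate are my own invented assumptions, purely to illustrate the arithmetic of keeping the two estimates separate):

        # Hypothetical illustration of option 2: estimate the effort for the data
        # movements from the CFP size, and estimate the effort for the complex
        # business rules separately (e.g. by expert judgement).

        data_movement_cfp = 3        # e.g. the 3 CFP for the data movements of this FP
        hours_per_cfp = 6.0          # assumed historical delivery rate (hours per CFP)
        rules_effort_hours = 80.0    # expert estimate for analysing, coding and testing the rules

        data_movement_effort = data_movement_cfp * hours_per_cfp
        total_effort = data_movement_effort + rules_effort_hours

        print(f"Data-movement effort: {data_movement_effort:.1f} hours")
        print(f"Business-rule effort: {rules_effort_hours:.1f} hours")
        print(f"Total estimated effort for this FP: {total_effort:.1f} hours")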
    All this is discussed in version 4.0.2 of the Measurement Manual, section 4.5.
    I hope this helps
    Charles Symons

    in reply to: Identify data groups in reports #11350
    Charles Symons
    Participant

    Dear Mary, when I checked there was no reply to your questions, so after dinner I came back and wrote a reply, which I have copied below. No surprise – I agree with Arlan Lesterhuis’s solutions. Here is my reply to your same questions:
    Hi Mary, thank you for these good questions. Let’s summarise your situation.
    You have two Applications A and B that communicate with each other. Each App must be measured separately, so has its own ‘measurement scope’.
    App A knows only the interface to App B. App A knows nothing about the functionality inside App B, and vice versa. It follows that each App can have its own ‘internal’ objects of interest. (I don’t think that my conclusion is stated anywhere explicitly in the Measurement Manual, but the situation described above is given in section 4.5.1, case 1b, of the ‘Guideline for Sizing Business Application Software’, v1.3, and in earlier versions.)
    Question 1: App A sends a request for data about an employee to App B via one Exit and receives the result via one Entry. (Each DM moves one DG describing the OOI ‘employee’.) App A does not know that App B had to consult data describing two OOI’s internally in order to generate the Exit to App A. App A is not interested in what happens inside App B, and vice versa.
    Question 2: Yes, count two Reads (obviously) and only one Exit. All the attributes of the DG moved by the Exit describe one OOI, ‘employee’. (All attributes that you list occur once for each occurrence of ‘employee’.) The attributes in the Exit could be named ‘Current Salary’ and ‘Starting Date of the Current Salary’. In contrast, in the persistent data of the ‘Salary History’ record in App B, the attributes could be named ‘Salary at the given Starting Date’ and ‘Starting Date’. The attributes in the persistent ‘Salary History’ record are not the same as the attributes in the Exit. They have different meanings.
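    As an illustration only (the data movement names, and the assumption of a triggering Entry for the enquiry, are mine, not part of your description), counting this functional process of App B could be sketched like this:

        # Minimal sketch of counting the data movements of the functional process
        # discussed in Question 2. Each data movement contributes 1 CFP.
        data_movements = [
            ("E", "Employee salary enquiry"),          # assumed triggering Entry
            ("R", "Employee"),                         # Read of the 'Employee' OOI
            ("R", "Salary History"),                   # Read of the 'Salary History' OOI
            ("X", "Employee current-salary details"),  # the single Exit; all attributes describe 'employee'
        ]

        cfp = len(data_movements)   # 1 CFP per data movement
        print(f"Functional process size: {cfp} CFP")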
    Question 3: Yes, there is only one Exit for the report that you describe. The employee ID (‘CURP’?) is a composite of other data but it is issued as one attribute of one OOI.
    Note that if App B needs employee details from App A in order to compose the CURP, App B will have to obtain these details via an X/E pair to/from App A.
    Question 4: Re rule c) on page 52 of the MM v4.0.2. Rule c) refers to Note 3, which in turn refers to several examples in section 3.5.7, and to section 3.5.11 for some exception examples. I hope these are all clear.
    I hope this is all OK.
    Regards Charles Symons

    in reply to: Enhancement changing navigation #11254
    Charles Symons
    Participant

    Hi Carlos,
    I think you are correct in concluding that a required change to a ‘pure’ menu command should not be counted as a change to a data movement according to the MM v4.0.2 (where a ‘pure’ menu command is one that only assists user navigation to different parts of the software, or that causes only a data entry screen for a functional process to appear without any attributes of the triggering Entry of the process having been entered). Alternatively, if any of the menu commands enable one or more attributes of the triggering Entry of a functional process to be entered, then required changes to these commands may obviously be interpreted as changes to the affected triggering Entries, as per the normal rules.

    However, given that you are engaged in a project where you are required to use COSMIC as the size measurement method, you should be entitled to invent a ‘local extension’ to the method for the case of measuring required changes to ‘pure’ menu commands, a case that is not covered by the standard method.

    One way to deal with this would be to think of the menu navigation system as a separate piece of software of defined scope. (Maybe it would help to think of the menu navigation system as being in a different layer from the one in which the application resides?) Assuming the menus have a tree-like structure, you must then identify the object of interest types of this structure. This ‘structure’ is effectively a ‘data structure’ that is persistently ‘stored’ (either actually hard-coded or actually stored as a maintainable structure) by the menu navigation software. You must then define the functional processes that must be developed to make the required changes to this structure.
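    As a purely illustrative sketch (the structure and all names are my assumptions, not taken from your system), the persistent menu structure could be modelled as a simple tree in which each node is an occurrence of an assumed object of interest ‘Menu item’; a maintenance functional process would then move ‘Menu item’ data groups to change this structure:

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class MenuItem:
            """One occurrence of the assumed object of interest 'Menu item'."""
            name: str
            target: Optional[str] = None    # functional process the item leads to, if any
            children: List["MenuItem"] = field(default_factory=list)

        # Hypothetical menu structure held persistently by the menu navigation software.
        root = MenuItem("Main menu", children=[
            MenuItem("Orders", children=[
                MenuItem("Enter new order", target="FP_enter_order"),
                MenuItem("Enquire on order", target="FP_enquire_order"),
            ]),
            MenuItem("Reports", children=[
                MenuItem("Monthly sales report", target="FP_monthly_sales"),
            ]),
        ])

        # A functional process to maintain this structure would, for example, receive
        # an Entry with the new or changed 'Menu item' data group and Write it.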

    At this point I must stop trying to help, because I have no knowledge of how your menu system actually works or of the degree to which it is independent of the functional processes to which it provides access. Changes to the menu navigation software might also require changes to the application itself.

    I hope this helps.

    Best regards
    Charles

    Charles Symons
    Participant

    Dear Mariem, I agree with Frank’s reply. The functional users of a distributed application (the software being measured) are the same regardless of whether the operating environment is cloud or non-cloud. Charles

    in reply to: Doubt about the contextual diagram #10266
    Charles Symons
    Participant

    Jorge, the short answer is ‘yes’. Context diagrams are useful in the Measurement Strategy phase to help understand the scope and functional users of pieces of software to be measured.

    in reply to: Doubt about the contextual diagram #10260
    Charles Symons
    Participant

    Jorge, on the cosmic-sizing website, there is a ‘Guideline for Measurement Strategy Patterns’ which describes several context diagram patterns that are commonly found for different types of software.
    I do not think that a piece of software can exist that is useful if it does not have complete functional processes, unless it is being developed and is not yet ‘complete’. Almost by definition, if a piece of software has ‘incomplete’ functional processes, it is not yet in a state where it can deliver useful functionality.

    Charles Symons
    Participant

    Dear Yufeng Chen, thank you for your question.
    First, it’s not correct to state ‘In section 4.2.4 Example 5, month/year DOES NOT count as a X, but In this example ( … example deleted … ) the Month/Year is counted as a X.’
    The main example 5 in 4.2.4 (of size 12 CFP) shows six different aggregations of the value of clothes sales (numbered i) to vi) on page 49). The example analysed after the main example is for when the sales figure i) is reported separately.
    For BOTH these examples, it is correct to state that ‘the functional process has an Exit that moves a data group that is keyed on (= is uniquely identified by) the attribute Month/Year’. Consequently, both Exits must have the same frequency of occurrence ‘n’ (the number of months for which sales figures must be output).
    However, the data groups moved by the Exits of these two examples are different as you will see from the tables showing the analysis. In the main example 5, the data group moved is ‘Sales for the whole Product Range in a given month’. (See row 8 of the table.) This data group’s two attributes are ‘Month/Year’ and ‘Sales for the whole Product Range in the given month’. In the other example which reports only the sales figure i), the data group moved has only the one attribute ‘Month/Year’. (See row 2 of its table.)
    In all of the examples 4 and 5, the data groups output in each example have different frequencies of occurrence, so they must be moved by different Exits, as per part 1 of the rule in section 2.6.2.
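    As a small illustrative sketch of the point about example 5 (the data group names are my own, purely for illustration), the two Exits move different data groups even though both are keyed on ‘Month/Year’:

        # The Exit of the main example 5 moves a two-attribute data group.
        exit_main_example_5 = {
            "data_group": "Sales for the whole Product Range in a given month",
            "attributes": ["Month/Year", "Sales for the whole Product Range in the given month"],
        }

        # The Exit of the example that reports only sales figure i) moves a data
        # group with the single attribute 'Month/Year'.
        exit_sales_figure_i = {
            "data_group": "Month of reported sales",   # hypothetical name
            "attributes": ["Month/Year"],
        }

        # Different data groups, so these are different Exits, even though each
        # occurs once per month reported.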
    I hope this helps. Charles Symons

    in reply to: Automated CFP Count #6094
    Charles Symons
    Participant

    Dear Ye Zhao, re tools to automate COSMIC FP sizes from a ‘simulation model’. I am not 100% clear what this means. I assume you wish to measure a size in units of CFP automatically from functional requirements captured in a software tool.
    I just looked through the Knowledge Base on https://www.cosmic-sizing.org. There are several papers – see links below. The list may not be complete.

    COSMIC Rules for Embedded Software Requirements Expressed using Simulink

    Design of a Functional Size Measurement Procedure for Real-Time Embedded Software Requirements Expressed using the Simulink Model

    A Refined FSM Procedure for Real-Time Embedded Software Requirements Expressed using the Simulink Model (slides)

    Fast Functional Size Measurement with Synchronous Languages

    Automating the Measurement of Functional Size of Conceptual Models in an MDA Environment


    In addition, the following paper has been published in ‘Information and Software Technology’
    Title: Automatic COSMIC Function Point Measurement using Requirements Engineering Ontology
    Authors: Selami Bagriyanik (Software Development Department, Turkcell Technology R&D, Maltepe, İstanbul; selami.bagriyanik@turkcell.com.tr) and Adem Karahoca (Software Engineering Department, Faculty of Engineering, Bahçeşehir University, Beşiktaş, İstanbul; adem.karahoca@bahcesehir.edu.tr)

    There is a tool available from Poland to automatically measure CFP sizes of requirements specified as UML Use Cases, as an extension to the Sparx Enterprise Architect CASE tool – but not limited to EA.
    The tool is available under licence from DC300 (English & Polish). http://300dc.pl/oferta/standardy-modelowania/

    I hope this helps. Charles Symons

    in reply to: Measurement Scope #6089
    Charles Symons
    Participant

    Sameer, the rules are very simple:
    – For the software you want to measure, you must define the layer of the architecture in which it is located.
    – Then the scope of the software you want to measure must be wholly confined to that layer (this software may, of course, interact with software in other layers).
    As an example, if you want to measure the three components of an application, seen as ‘view’ b), as in Figure 2.4 of the Measurement Manual, you must measure the sizes of the UI, BR and DS components separately because they are in different layers of the 3-layer architecture.

    Then, if it happens that you want to know the total size of the UI + BR components, that may be calculated, but you cannot simply add the two individual sizes arithmetically. You must eliminate the size contribution of the X/E pairs between the components from the arithmetic sum of the two sizes.
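    As a small worked sketch of this rule (all numbers are invented, purely to show the arithmetic):

        ui_size_cfp = 40   # measured size of the UI component (hypothetical)
        br_size_cfp = 55   # measured size of the BR component (hypothetical)

        # Total CFP counted, across BOTH components, for the X/E pairs that they
        # exchange with each other (hypothetical figure).
        inter_component_xe_cfp = 24

        # The aggregated size is NOT simply ui_size_cfp + br_size_cfp.
        aggregated_ui_br_size = ui_size_cfp + br_size_cfp - inter_component_xe_cfp
        print(f"Aggregated UI + BR size: {aggregated_ui_br_size} CFP")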
    A related rule is that a functional process of, e.g., the UI component being measured cannot extend into the BR layer (it can issue X’s to and receive E’s from the BR layer, but all its processing takes place within the UI layer).
    With these rules, and documenting the layer of any piece of software being measured, future users of the measurement will know how to interpret it and how, if needed, its size can be aggregated with other sizes.
    I hope this is clear

    in reply to: CFP size in case of Third Party Tool #6057
    Charles Symons
    Participant

    Sameer, if I understand your question correctly, for the various objectives it will help to distinguish the following parts of your application.
    a) the functionality you developed in-house (measure size in the normal way)
    b) the functionality of the 3rd-party tool that you had to modify (1. measure the size of the changes you made and 2. measure the size of these functions after they were changed)
    c) the functionality that the tool provided that you did not modify, but that was needed according to the FUR for the application (measure size in the normal way)
    d) the functionality that you got for free but which you did not need.
    FUR means ‘Functional User Requirements’, so I don’t think you should include the size d) of functionality you did not need in a measure that will be used for productivity. The productivity of the team can be measured in two ways: ‘Development productivity’ = size of (a + b1) / E, where E is the team effort; or measure ‘Implementation productivity’ = size of (a + b2 + c) / E. Comparing the two productivity measures should illustrate the productivity gain from a buy-versus-develop decision. The development productivity is likely to be more useful for future effort estimating, if that is another goal.
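    A minimal sketch of these two formulas (all sizes and the effort figure are invented, just to show the arithmetic):

        a  = 120.0   # CFP developed in-house
        b1 = 15.0    # CFP of the changes made to the 3rd-party tool
        b2 = 60.0    # CFP of the changed tool functions, measured after the changes
        c  = 200.0   # CFP of unmodified tool functionality needed by the FUR
        E  = 900.0   # team effort, e.g. in person-hours

        development_productivity = (a + b1) / E         # CFP per person-hour
        implementation_productivity = (a + b2 + c) / E  # CFP per person-hour

        print(f"Development productivity:    {development_productivity:.3f} CFP/hour")
        print(f"Implementation productivity: {implementation_productivity:.3f} CFP/hour")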
    The size of the application is a + b2 + c + d, though if the functionality d is never used, this measure may be misleading for whatever you will be doing with the measures in the future. Remember also that the tool supplier may add to or change the tool. Avoid measures that will be misleading longer-term. I hope this helps

    in reply to: Deep Linking into the Standard #6055
    Charles Symons
    Participant

    Denis, good suggestion which we should be able to implement quickly. I assume we could deep link either to Sections of the Measurement Manual, or to each box containing the Principles of one of the models (Software Context or Generic Software) and to all other boxes containing Definitions, Principles and Rules. My guess is that the latter could be more useful, though if anyone wants to refer to examples, then referring to Sections would be more general. Any thoughts? Charles

    in reply to: Data group definition #5890
    Charles Symons
    Participant

    Hello Sameer, I must admit we had a tendency to make definitions more complicated or academic than necessary. Over time we are simplifying them.
    From the original definition of a ‘Data Group’, we first eliminated ‘non-redundant’. It does not matter for COSMIC sizing if the data group contains redundant data attributes, as long as they all describe the same one object of interest, so including ‘non-redundant’ in the definition was unnecessary.

    In v4.0.1, we still say that the data group must be ‘non-empty’ (i.e. there must be at least one data attribute) and that the data elements must be ‘non-ordered’. The latter term means that it does not matter for COSMIC sizing in which sequence the attributes occur in the group. A consequence is that you cannot claim that two data groups comprising the same attributes are different just because their sequence in the group is different. I don’t think this is likely to happen in practice, but we try to prevent malpractice or mistakes.
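    To illustrate ‘non-ordered’ (a sketch of my own, with hypothetical attribute names, not an example from the Manual), comparing two data groups depends only on the set of attributes, not on their sequence:

        group_1 = ["Customer number", "Customer name", "Credit limit"]
        group_2 = ["Credit limit", "Customer number", "Customer name"]

        # Same attributes in a different sequence: for COSMIC purposes this is
        # the same data group.
        same_data_group = frozenset(group_1) == frozenset(group_2)
        print(same_data_group)   # True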

    Quite recently in Method Update Bulletin 13, we published a new Note to the definition of a data group to prevent further misunderstandings. I hope it is self-explanatory.
    “NOTE: A ‘data group’ does not necessarily mean ‘the set of all data attributes that describe a single object of interest’. The FUR of a piece of software may specify data groups to be formed from any combinations of data attributes that all describe the same object of interest, as needed by different functional processes.”
    I hope this is all OK
    Charles

    in reply to: homologation problems #5864
    Charles Symons
    Participant

    Carlos, thank you for pointing out this inconsistency. I can see that Spanish translation must be difficult. The correct word to use is ‘requirement’ (which is an ‘accepted request’). I will note that we must use only ‘requirement’ in the next update to the Measurement Manual.
    Best regards Charles

    Charles Symons
    Participant

    I think we can agree on the following:

    • The paper by Alain and others describes their findings in a particular company that does not follow Agile best practice as recommended by, amongst others, Mike Cohn. Mike was one of the founders of the Agile movement whose view on best practice is described in his book ‘Agile Estimating and Planning’.
    • Alain’s finding that COSMIC sizes correlate much better with sprint effort than Story Points do, as practised in this company, therefore does not mean that this finding is generally valid for all organisations that follow Agile best practice.
    • Having said that, we all acknowledge that Story Points (however interpreted: as ‘unit-less’, as a measure of the size of a Story, or as an indication of effort for a sprint) are only valid in the context of a particular project team. There is no expectation that Story Points will correlate any better with sprint effort than as described in Alain’s paper, even if project teams do follow Agile best practice.
    • Story Points do not provide any basis for activities such as comparing performance across project teams, contracting with software suppliers, or for estimating the total effort for new projects. Mike is therefore exploring the use of COSMIC in Agile projects, as an objective measure of the size of user stories that can also be used for these other purposes.
    Charles Symons
    Participant

    Ben,
    you may claim, and we may accept, that this organization’s use of SP for estimation is not the proper way to use SP, and that therefore this study is not representative of how accurate SP-based estimation is in organizations generally. (Btw, it would be nice to see hard evidence that SP-based estimating is better than reported in this organization.) But that does not make the findings of the study in this organization ‘not valid’. This paper reports the facts found in a particular organization. That’s all.

    All the findings are ‘valid’, unless you’re suggesting the researchers made mistakes. Equally, it is not unreasonable for us to suggest that the results might be of wider relevance.
