Co-author/project partner: @catjam
Background
This post follows up on @puniaviision’s initial post from December 2020 on how new Index Cooperative products should be proposed, scored, and moved through a multi-stage process for approval by the community.
Goal
With the benefit of hindsight and the experience of launching five products since the Coop’s inception, the Index Cooperative Product Team sought to improve upon the existing product prioritization rubric with a revised framework that:
- is easy to understand and to communicate to Index Cooperative community members,
- better quantifies the risk/reward of launching a new product by incorporating lessons from past product launches, and
- can be reliably utilized by any Product Working Group member to score a proposed Index Cooperative product.
Rationale
Before jumping into the rubric itself, here is the high-level thinking that guided its construction:
- Benefits > Costs. The expected benefits of launching a product receive a higher weight (70%) in the new rubric than the associated risks/costs/commitments (30%). We believe that financial, operational, and technical costs are important considerations but shouldn’t get in the way of an otherwise attractive product aligned with market needs. We believe that Set, Index Cooperative, and methodologist resources can be leveraged to cover those costs.
- Solving problems for customers is paramount. The most important factor in scoring a product’s benefit (with a 60% share of the benefit score) is “Market Opportunity” - the degree to which the product solves problems for many potential customers (i.e., product-market fit). Here we considered factors such as the level of pain posed by the problem, differentiation from existing products in the market, and the expected size of the current market.
- Differentiation is not a major factor (yet). The crypto ETP market is relatively nascent, so the fact a competing product exists should not dissuade us from launching an Index Cooperative version, though it remains a consideration. As a result, we collapsed the “Unique” factor from the old rubric into the “Market Opportunity” factor. We similarly collapsed “Marketing Benefit” from the prior rubric into the “Methodologist Impact” factor, believing that the press/market bump from a product should be a relatively smaller consideration next to the product’s ability to find product-market fit.
- Operational & Technical costs. On the cost front, operational and technical commitments were given higher weight than financial commitments because financial commitments are tougher to quantify beforehand (e.g., it’s hard to know before launching just how much liquidity mining incentives are needed and for how long). By comparison, we can point to products that were significantly more painful due to operational and technical issues.
- Removing “Risk” as a stand-alone factor. The overall goal of the prioritization spreadsheet is to evaluate the risk of launching a new product, so we feel that risk is already well captured across the three cost factors.
- More balanced scoring continuum. All factors will now receive a 1-5 score (1 = low, 5 = high). This prevents a single extreme score (e.g., “Extreme” was worth 16 points in the old rubric) from having an outsized impact on the overall ranking. This was an issue with the previous framework and was the primary reason CGI scored higher than all other products.
New Product Scoring Rubric
The revised rubric below is reflected in a new scoring chart. For reference, here is the link to the old rubric.
Benefits (total 70%)
Market Opportunity - 60%
- Start with a baseline of 1 point. Add an additional point for each of the following (a short scoring sketch follows this list):
- Large Market. The product caters to a large market from a potential AUM standpoint. We used AUM rather than the # of users because a product (e.g., SYI) can appeal to a smaller # of users (i.e., DAO Treasuries in the case of SYI) while still meaningfully attracting AUM. Sizing and defining the expected market can admittedly be difficult and should come with context from the project team. A large market is likely not specific to a particular protocol (e.g., SNX stakers hedging sUSD exposure) and should be reflective of the current market (not a hoped-for future state).
- Differentiation. The product is differentiated in the current market. (i.e., substantial advantages over competitors or competitors do not yet exist)
- Nice-to-have. The product solves a nice-to-have customer need (i.e., it impacts a small % of a user’s target portfolio, or saves the user moderate research or risk).
- Must-have. The product solves a must-have customer need (i.e., it impacts a large % of a user’s target portfolio, saves significant gas fees, or avoids extensive research or risk).
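To make the additive scoring concrete, here is a minimal Python sketch of how a checklist-style factor could be tallied. The same baseline-plus-checklist pattern repeats for the Methodologist Impact, Financial, and Operational factors below (Financial adds two points per item rather than one). The function name, the cap at 5, and the example inputs are illustrative assumptions, not part of the official scoring chart.

```python
def checklist_score(criteria_met: list, points_each: int = 1) -> int:
    """Checklist-style factor score: a baseline of 1 point, plus
    `points_each` points for every criterion met, capped at the 1-5 scale."""
    return min(5, 1 + points_each * sum(bool(c) for c in criteria_met))

# Hypothetical Market Opportunity: large market and must-have need,
# but no clear differentiation and not merely nice-to-have.
market_opportunity = checklist_score([True, False, False, True])
print(market_opportunity)  # -> 3
```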
Revenue Potential - 25%
- As the Index Cooperative rolls out new products and partners with new methodologists, we recognize that new revenue structures will emerge. This rubric therefore avoids an overly prescriptive grading system and instead uses three products we have scored previously as guideposts for revenue potential. When evaluating revenue potential, consider the following factors: the overall streaming fee (in bps), the fee split, whether there are mint/redeem fees (and, if so, the degree of turnover/trading volume), and overall AUM. (A rough revenue arithmetic sketch follows the examples below.)
- 1 pt: BED - 35 bps (50/50 split with 17.5 bps to Index)
- 3 pts: DPI - 95 bps (70/30 split with 66.5 bps to Index)
- 5 pts: FLI - 195 bps to 125 bps (60/40 split with 117 bps to 75 bps to Index) + high turnover + mint/redeem fees
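As a rough illustration of how the streaming fee, fee split, and AUM interact, here is a minimal Python sketch of the annualized streaming-fee revenue accruing to the Coop. The function name and the inputs (which loosely mirror the DPI example above) are hypothetical, and mint/redeem fees and turnover are deliberately left out for simplicity.

```python
def annual_streaming_revenue_to_index(aum_usd: float,
                                      streaming_fee_bps: float,
                                      index_share: float) -> float:
    """Rough annualized streaming-fee revenue kept by the Index Cooperative.

    aum_usd           -- average assets under management, in USD
    streaming_fee_bps -- total streaming fee, in basis points per year
    index_share       -- fraction of the fee kept by the Coop (e.g. 0.70)
    """
    return aum_usd * (streaming_fee_bps / 10_000) * index_share

# Hypothetical: $100M average AUM at 95 bps with a 70/30 split to the Coop.
print(round(annual_streaming_revenue_to_index(100_000_000, 95, 0.70)))  # -> 665000
```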
Methodologist Impact - 15%
- Start with a baseline of 1 point. Add an additional point for each of the following:
- Product Competency. The methodologist understands the target customer / market well and has extensive DeFi experience.
- Methodology. The methodology proposed has undergone rigorous review, revision, and feedback processes.
- Marketing support. The go-to-market strategy is well-defined, and the methodologist will play an active role in promotion. Methodologists should have reach (# of followers) and several community channels. (e.g., Bankless, DeFi Pulse)
- Reputation. The methodologist is a known entity in the crypto space and will be a good long-term partner. (e.g., Synthetix, CoinShares, Bankless, DeFi Pulse)
Costs (total 30%)
Financial Commitments - 20%
- Start with a baseline of 1 point. Add an additional two points for either of the following:
- Requires the Index Cooperative to seed a liquidity pool.
- Requires liquidity mining incentives by the Index Cooperative.
Operational Commitments - 40%
- Start with a baseline of 1 point. Add an additional point for any of the following:
- Manual rebalancing at least monthly. (e.g., DPI or MVI)
- Manual rebalancing more than 1x per month. (e.g., SMI)
- Possibility of extrinsic blow-up. (e.g., FLI liquidation)
- Requires regular liquidity mining operations. (e.g., DPI)
Technical Commitments - 40%
- This section should be scored by the Engineering team (i.e., not Product) and uses a t-shirt-sizing methodology.
- 1 pt - Reuses existing frameworks & infrastructure. (e.g., BTC2x-FLI)
- 3 pts - Requires creating some new frameworks or infrastructure. (e.g., SYI or IP)
- 5 pts - Requires creating completely new frameworks and infrastructure. (e.g., ETH2x-FLI)
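To show how the factor weights roll up, here is one plausible Python sketch combining the six 1-5 factor scores: Benefits are a weighted average of Market Opportunity (60%), Revenue Potential (25%), and Methodologist Impact (15%); Costs are a weighted average of Financial (20%), Operational (40%), and Technical (40%); and the two halves are netted at the 70/30 level, with costs counting against the product. The netting step (subtracting weighted costs) is our own assumption about how the halves combine - the scoring chart remains the source of truth.

```python
# Sub-factor weights from the rubric (each factor is scored 1-5).
BENEFIT_WEIGHTS = {"market_opportunity": 0.60, "revenue_potential": 0.25, "methodologist_impact": 0.15}
COST_WEIGHTS = {"financial": 0.20, "operational": 0.40, "technical": 0.40}

def weighted_average(scores: dict, weights: dict) -> float:
    """Weighted average of factor scores (weights sum to 1.0)."""
    return sum(scores[name] * weight for name, weight in weights.items())

def overall_score(benefit_scores: dict, cost_scores: dict) -> float:
    """Hypothetical roll-up: 70% weight on benefits minus 30% weight on costs."""
    benefit = weighted_average(benefit_scores, BENEFIT_WEIGHTS)  # 1-5
    cost = weighted_average(cost_scores, COST_WEIGHTS)           # 1-5
    return 0.70 * benefit - 0.30 * cost

# Hypothetical product: strong market fit, moderate revenue, light-to-moderate costs.
print(round(overall_score(
    {"market_opportunity": 4, "revenue_potential": 3, "methodologist_impact": 3},
    {"financial": 1, "operational": 2, "technical": 3},
), 2))  # -> 1.86
```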
Scoring Products Going Forward
The process to date has been to score proposed products only once they have successfully passed through DG1. We are proposing that going forward, aspiring methodologists check in with the Product Onboarding Team (which is currently @catjam, @jdcook, and me) to get a quick-and-dirty scoring prior to calling for a DG1 vote. The intent is for this check to help pre-empt concerns/weaknesses with the proposal, since the scoring rubric should reflect the community’s beliefs as to what makes a strong Index Cooperative product.
PS: This framework is a work in progress and very likely won’t anticipate all benefits and costs. Therefore, your continued feedback is much appreciated!
Special thanks to @jdcook, @puniaviision, @anon10525910, and @overanalyser who all provided fantastic feedback.