Thank you for the post, @Monportefeuille.
Before we schedule the DG2 snapshot, we would like to get feedback from the community about the perceived market opportunity for this product. Product-market fit is critical to any product's success, and getting this right supports our products and IC as a whole.
Keep in mind that it's not just time that's dedicated, but also significant capital, to:
- Provide seed liquidity
- Pay out reward incentives (if applicable)
- Pay gas fees for initial composition and rebalance
As PWG reviews various methodologies, we feel that this product, though well thought out, will likely not drive adoption because it does not solve a need in the market.
Before we assign more time and resources, we would like to get feedback from the community, mainly to answer the question: "Would you buy this?"
We would like to keep this post open for one more week and decide next steps based on feedback.
As shared recently with @Cavalier_Eth, I’m more than happy to collect community feedback and ways to improve the narrative / positioning around iRobot.
I don't pretend to hold the answer to the point you're raising; I don't think anyone does, as is often the case with innovative products:
I would like to flag here that doing this kind of survey for iRobot at scale, via the Coop's socials, is something I suggested several months ago, but nothing came of it.
Nevertheless, as far as I'm aware, the Work Team Analysis and the Product Prioritization Score (both issued for iRobot 1.5 months ago) are the two main instruments meant to help PWG and the broader community build an unbiased opinion on a product. Otherwise, what would have been the point of the Product Onboarding Process overhaul?
I hope that I'm not misinterpreting your intention here, since this is a written discussion. But, at the moment, I don't really understand how PWG unilaterally halting the governance process, and posting this single question on the governance forum, is going to provide an accurate indication of what should happen next.
I fear this may set a dangerous precedent for internal methodologists, particularly as it comes several weeks after the project completed all required stages of the DG1-to-DG2 process, and in spite of the methodologist's continuous effort to work to the highest possible standard of transparency and quality over the past year.
You really don’t think the market needs a diversified portfolio, with long-lasting performance, and minimal downside, in a single token? I can’t think of a single person who doesn’t need this. With the right marketing, this could be a core holding for every native and noob alike. And quite possibly the coop’s biggest success, over the long run, as we watch sectors and flavors come and go. If we look at TradFi, the funds most suggested by advisors are not sector specific, or blue chip, like most IC products, but those with the broadest most diversified market exposure. That’s the value prop offered by iRobot, and it’s what the general market is buying. So far iRobot is the only IC product attempting to fill this HUGE need.
And I agree with @Monportefeuille points above regarding the process. Specifically, the onboarding process would require the review team to present “significant blockers that can not be avoided” in order to issue a “No” vote for DG2 at this point. Surely unsustainable product-market fit would qualify as such, but as @Monportefeuille pointed out, how do we reconcile a Market Opportunity score of 3 on the product scoring chart, with “does not solve a need in the market”?
Gotta say, I’m left pretty confused about this reply as I see no precedent for it. I have so many questions about the approach, I can’t imagine how Julien must feel to see a response like this from the Coop.
Fundamentally (and I may have missed something having pulled back from the forum recently) I don’t understand why the community is deciding whether an index has PMF prior to DG2, but after work team analysis is already complete. Is that part of the onboarding process now? Aren’t the experts in the product team supposed to assess this as part of the analysis?
Speaking of which, iROBOT was rated higher than BED (a product that is already live!) in the product scoring table, and higher than YHI, which didn't have to jump through the same hoop prior to DG2. What is the difference here? If the response is that the numbers in that sheet are illustrative, then why have it at all?
If the community feedback is lukewarm or negative, will iROBOT be shelved or scrapped entirely? How do we even judge it, who judges, and what are the pass criteria to move forward?
I think Coop members should put themselves in Julien’s shoes, as he’s worked tirelessly for months on this product, and put his faith in the product onboarding process during that time. If part of that process suddenly changes, then it is no longer credibly neutral. A big reason Coop products do so well is the trust in methodology and things not changing on a whim, we should strive to repeat that in all our interactions. I see no reason why iROBOT shouldn’t proceed to DG2 where the community gets to vote on it anyway.
Since the question is ‘what does the community think?’ I have to say I’ve been looking forward to this product launching. This and $GMI to me both fill the niche of projects I’m interested in allocating to but haven’t had the time or conviction. Index products using a methodology to allocate in this middle ground are something that’s missing in our lineup, and as stated in the GMI proposal:
‘$DEGEN by Indexed Finance – perhaps the closest alternative to GMI – is the most successful non-Coop index in the market.’
So there is room for IC offerings in this space, a sentiment backed up in the original work team analysis.
Hi @Monportefeuille, @DocHabanero – I wanted to respond from the Work Team perspective! I chatted with @jdcook this afternoon about this, and this comment reflects our thinking on the analysis specifically (it is not intended as a comment on the product).
A few pieces of context:
- We made changes to the onboarding process & WTA while iROBOT was in flight. Moving forward, WTA includes the supporting pod so that PWG feedback & crucial perspective is included. We really appreciate Julien’s patience with the process changes!
- Separately, we are open to deprecating the Work Team Analysis or removing the current process if it no longer serves the community.
On this score:
- A Market Opportunity score of 3 feels right for this product: it's differentiated and a nice-to-have, as other posters have pointed out and as the research Julien presented supports. The rubric for nice-to-have specifically reflects the thinking that we should launch as many products as possible to be successful, though this point is being re-evaluated throughout the Coop.
- 1.74 is an OK score: it ranks below all of our live products with the exception of BED, but above products that also received serious consideration, like SMI.
- If PWG or others have supporting data that merits changing the score or redoing the rubric, we'd be happy to do so, especially given the two months that have elapsed since publishing it!
I would reiterate everything mentioned above. Delaying or suspending the standard governance process at this stage seems unwarranted, unless there are factors, technical or otherwise, that make this product untenable, in which case those should be highlighted publicly.
While this could be true, I don’t think it warrants deviation from the standard process. PWG can certainly present the case for why this product wouldn’t be a good fit for the Coop. As @catjam suggested, that can be reflected in the updated score, if those concerns are backed by research/data. At which point the Coop should still follow the process and move to DG2.
Wouldn't the DG2 vote qualify as "feedback from the community"?
I read two interpretations of DocH’s post. If the proposal is that this is an ad hoc decision-gate, then I strongly agree with earlier comments that that would violate the process and be wholly unfair to the project team. If the request is for more community feedback ahead of the scheduled DG2 snapshot vote, then that would seem to be consistent with the process and fair, and something Julien has been asking for consistently albeit much earlier in the process. Let’s take the latter interpretation.
My main concern with iRobot is that the evidence that the strategy would outperform an ‘average’ portfolio is very weak. It claims to have discovered a new source of alpha which would require much more evidence than was presented here. The back-test showed that an ‘average’ portfolio would beat iRobot’s ‘winner’ portfolio ~80 days out of 90. The evidence barrier is high because there’s a lot of independent literature that any momentum in crypto is small and fleeting. Also, there’s a lot of evidence in the equity literature that the Sortino ratio (or Sharpe ratio) does not predict future performance.
On Sortino-optimization, it’s important to note that its inventor has completely disavowed his namesake methodology: “There was a time that I believed the Sortino Ratio was the best way to measure performance. When the evidence began to accumulate in the 1990s that I was wrong, I wrote a paper pointing out the flaw and posted it on my website.”
My second concern, which I don't think has been addressed in the liquidity analysis or WTA, is that the product will have high NAV decay and rebalancing costs due to its very high turnover, mid/small-cap composition, and $10M proposed AUM. As part of the DG2 post, it would be helpful to have a rough estimate of the value decay and rebalancing costs, particularly since the Coop will need to bear the latter.
Index Coop's market survey indicated interest in a "Top 30 Market Cap Index", "S&P 500 equivalent", "Value Index", and "Diverse Blockchain segments", but there was no request for a momentum index or algorithmically traded index. One could argue that the respondents were unaware such a product was even possible and so couldn't ask for it, but that's impossible to disprove, and the other requests showed a high level of sophistication.
As always, I'm glad we can have these discussions in the spirit of shared respect, process focus, and fact-based reasoning. Cheers.
Hi @JosephKnecht!
Thanks for your feedback.
On this aspect:
I think it's important to consider the latest backtest data, which were compiled to answer one of your previous questions and illustrate how iRobot would perform in an "up" market:
Regarding the value decay and rebalancing costs, is this something your profitability tool could recompute from past data? If so, I'm more than happy to get a quick introduction and try to produce this additional info.
Generally speaking, I totally agree with the principles of a fact-based, process-focused discussion and think that any reasoning around the methodology’s performance should revolve around this kind of data analysis.
Wanted to post some comments on this initiative that I previously shared with PWG (separate from the work team analysis).
I’m super supportive of raising the bar on the products we launch and doing more comprehensive analysis on rebalancing costs, target AUM, etc (@JosephKnecht also makes good points here.) I personally think we will launch better products and drive higher success for the Coop when we empower PWG to have authority and ownership over the product roadmap.
That said, I think we can also elevate the approach PWG has taken to this product. While our current token-based voting system is not a good way to determine potential PMF, neither is personal opinion or using internal feedback as a proxy for market research. We have a great team on board and will make sure to bring data and customer & market research to future analysis.
Finally – I’d be remiss not to note that I’ve personally enjoyed working with @Monportefeuille and appreciate his dedication to this product. You’re a valued member of our community regardless of the outcome for iROBOT!
These back-test results are definitely more encouraging, particularly since they cover a down and up cycle. How difficult would it be to generate that over a 1-year period? Statistically, if you wanted to prove the effect was real you could also calculate the SD of returns across the average portfolios.
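To illustrate the statistical check suggested above, here is a rough sketch: draw many random 'average' portfolios, take the standard deviation of their returns, and see how many SDs the 'winner' portfolio sits above the mean. Every number below (the token universe, the return distribution, the winner's return) is made up for illustration and is not taken from the iRobot back-test:

```python
import random
import statistics

random.seed(0)

# Hypothetical per-token 90-day returns for a 30-token universe
# (mean 10%, SD 30% -- illustrative values only).
token_returns = [random.gauss(0.10, 0.30) for _ in range(30)]

# Build many equal-weight 'average' portfolios from random 10-token draws.
samples = []
for _ in range(1_000):
    picks = random.sample(token_returns, 10)
    samples.append(sum(picks) / len(picks))

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

winner_return = 0.25  # hypothetical back-tested 'winner' portfolio return
z_score = (winner_return - mean) / sd  # SDs above the average portfolio

print(f"avg-portfolio mean={mean:.3f}, sd={sd:.3f}, z={z_score:.2f}")
```

A large z-score would suggest the winner's outperformance is unlikely to be random selection noise; a small one would not.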
The ‘product profitability model’ we’re developing with @prairiefi , @Cavalier_Eth , and others (more on that Monday) takes the NAV decay and rebalancing costs as inputs and not outputs. @jackiepoo and @overanalyser are the liquidity experts but here’s a crude estimate.
By eyeball, it looks like iRobot's turnover is ~20%/month (=$2M/month). From the October composition, the trading depth for a typical component is ~$10k @ 50 bps. The NAV decay would therefore be 0.2%/month (=20% x 0.5% x 2) and the rebalancing fees $40k/month (=$2M/$10k x $200/tx). Compare that to the monthly streaming fee income of $12.5k/month (=$10M x 1.5% / 12). One can do the same calculation for 100 bps.

This may underestimate the costs because the more volatile components are likely less liquid. Also, this assumes there's enough volume to arb the price back to baseline.

To reduce the value decay and/or fees I can suggest reducing the AUM, increasing the liquidity weighting, or reducing the turnover, with the understanding that that might reduce the return. Unfortunately, IC's rebalancing algorithm is a black box so it's hard to cost-optimize. It's worth noting that most momentum equity funds under-return precisely because of the high turnover costs. I hope that's useful. Cheers.
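For anyone who wants to play with the assumptions, the back-of-envelope numbers above can be sketched as follows. All inputs are the assumed figures from this post (20% turnover, $10k depth at 50 bps, $200/tx gas, 1.5% streaming fee), not measured data, so treat the outputs as illustrative only:

```python
# Rough sketch of the cost estimate above; every input is an assumption.
aum = 10_000_000        # proposed AUM, $
turnover = 0.20         # assumed monthly turnover (~20%)
depth = 10_000          # assumed trade depth per component at 50 bps slippage, $
slippage = 0.005        # 50 bps
gas_per_tx = 200        # assumed gas cost per rebalancing trade, $
streaming_fee = 0.015   # 1.5% annual streaming fee

monthly_volume = turnover * aum              # $2M traded per month
nav_decay = turnover * slippage * 2          # buy + sell legs -> 0.2%/month
n_trades = monthly_volume / depth            # ~200 trades per rebalance
rebal_fees = n_trades * gas_per_tx           # ~$40k/month in gas
fee_income = aum * streaming_fee / 12        # ~$12.5k/month streaming income

print(f"NAV decay: {nav_decay:.2%}/month")
print(f"Rebalancing fees: ${rebal_fees:,.0f}/month")
print(f"Streaming fee income: ${fee_income:,.0f}/month")
```

Swapping in the 100 bps depth figure, or a lower turnover, only requires changing the corresponding input.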
While I can see value in the wider conversations on methodology robustness, index running cost, etc, as others have pointed out, a sudden stop now against our old (and maybe new) consultation and launch processes feels like a problematic signal to send to methodologists.
If this product is not technically hard to launch and support, I feel we should go to DG2 and potentially launch it and deliver against quite a lot of historical signal to an external partner. It’s hard to be exposed to our changes of process too - we’re a young DAO, we’ll change processes again in the future.
Regardless of what the technical, nuanced answer is re momentum in crypto, market participants might not fully appreciate that. Momentum lives as a real meme in TradFi (sometimes it works as a factor, sometimes it doesn't, just like 'value'), and it's very hard to know which products work until we launch them. I and many others, including the methodologists, thought MVI was a fun, small-allocation 'satellite holding' back in January; now MVI is far more than that. Twitter polls have our followers thinking MVI is the thematic index with the most potential this next year, and I now think there are whole new business models and platforms, larger than DeFi, in MVI's tokens too. At recent rates of growth it'll match DPI one day not too far out.
Product traction is an imprecise science. MVI benefitted from Axie, Illuvium and then… Lord Zuk - none of this was known in Q1. Similarly here, AI meeting crypto is a powerful meeting of trends, a momentum meme (correctly or incorrectly) could develop powerfully in crypto (the pumps are so hard to catch repeatably!) and an index like iRobot might be really well placed to capture attention and TVL.
Thanks a lot for unlocking the vote @Mringz ! Quickly flagging here that some parts of the text on Snapshot still mention the Yield Hunter Index.
Also, I have contacted the POAP team on their Discord to link the event I created with the proposal, hope they can process the request ASAP
The intent was never to change the current process, but to provide PWG feedback and allow the community time to reflect. Any changes to the process will always be communicated and approved by the community.
Quantitative data should have been provided in lieu of 'community feedback'. However, shortly after DG2 is approved, the process of developing the product begins, and at that point costs in time and capital start to accrue.
As we continue to enhance the onboarding process, co-organized market research will be a critical input for determining product success. To @Monportefeuille’s point, he suggested this and we never followed up.
We will keep the community up to date on any concerns earlier in the process and provide objective feedback. This ensures everyone is comfortable and well informed for all community decisions.
Hey @catjam! Sorry for the late reply, and thank you so much for the kind words! Likewise, it's been a pleasure contributing to the revamp of the onboarding process by sharing this first-hand feedback. Beyond this, as I've already shared with other members (and regardless of the outcome of the vote), feeling this energy and growing alongside this talented, focused community really is an incredible experience.
Hey Joseph, thanks a lot for these pointers, and for pushing me to extract the hell out of my modeling and backtesting capacities since the end of September. While low liquidity and high fees remain today's undeniable reality, I'm sure we as a community can work it out with a combination of the levers you describe, plus potentially rebalancing more often, even if at first glance that might seem counter-intuitive.
Nevertheless, I think we need to build with an open mind about what will happen with liquidity and fees in the near future. Even beyond this, I remain convinced that an index built rationally, offering the broadest possible market exposure like iRobot, can benefit a lot of people who are potentially still waiting on the sidelines, or hoping that the system we're building produces radically different types of vehicles compared to the legacy ones we have today.
IIP-071 Launching the Robot Index ($iRobot) did not pass DG2.
Results available here: Snapshot
A public thank you and recognition of the work @Monportefeuille has put into the product development process. Creating a new product from scratch is a difficult task in any field, and herculean in defi. It’s a creative journey that requires vulnerability, humility and persistence.
Despite the product not passing DG2, I'm sure some of the lessons and insights will be used as inputs to a future product. We are all better off for having taken iROBOT through the process, and I suggest a quick retro at the right time.
Hey @Cavalier_Eth, dear owls,
After some much-needed time away to digest and recover from this disappointing result, my first thank-you goes to those of you who supported this project all the way through to DG2, during and after the vote itself.
Taking some time away was the best way to avoid any knee-jerk reaction, and to reflect and learn from an event like this, both as a methodologist and as a human being.
Several of you have indeed reached out to express your support, after nights, weekends, and holidays of work were accumulated on this project for nearly a year (on top of a "normal" occupation and a daily family routine), with the outcome that we all know.
This reminded me why I chose to get involved with the Coop in the first place.
But it also reinforced my motivation to keep contributing and improving, particularly on three points:
- The concept and marketing behind the Robot Index methodology.
- The market research and promotion of further innovative proposals, especially from within the community (I personally find it a great shame that the last two internal proposals failed to pass DG2).
- The transparency of the decision process, so we can keep fighting at the forefront of web3 organizations.
Looking at the vote results with the most constructive approach possible, I would therefore appreciate any feedback from the BD team (cc @BigSky7 / @Mringz) to understand why our main shareholders rejected the Robot Index proposal altogether.
@Cavalier_Eth, now that your non-fungible relationship has been successfully recorded on mainnet, I'm also quite keen to go through a quick retro at the Product Team's convenience.
Let's keep rocking!