I think we discussed that the app will be mostly SFW, at least in the free version. While, for verification purposes, images should indeed be checked against the stated dimensions using comparison pics, filtering by specific penis sizes must be available only in the premium version. As I said, very specific filters, including the NSFW ones, belong in the premium app.
I also agree that price gouging should never be a thing; to an extent, we should turn countering this practice into something that sets us apart from current industry standards.
I also agree on the match feature, which should be separate from the grid one. As I suggested, the app needs both a grid and a match mode. The match feature must allow 10 free matches per day in populated areas and 15-20 in less populated ones (below a 2M-population mark). To enter the matching feature, a verified face pic is mandatory, and AI-generated pics must be detected and excluded.
Grids should include 150 free profiles in very populated areas and 200 in less populated ones. Some cities in the US and Canada are sparsely populated but very sprawled, which is why I propose the 2M-population mark.
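Just to make the tiering concrete, here's a minimal sketch of the quota rules above. The function names, and picking 15 as the low-population match quota (the proposal says 15-20), are my assumptions, not final values:

```python
# Illustrative sketch of the proposed free-tier quotas; names and the
# exact low-population match quota (15 vs 20) are placeholder assumptions.

POPULATION_MARK = 2_000_000  # the proposed 2M-people cutoff

def free_daily_matches(city_population: int) -> int:
    """10 free matches/day in populated areas, 15 (up to 20) otherwise."""
    return 10 if city_population >= POPULATION_MARK else 15

def free_grid_profiles(city_population: int) -> int:
    """150 free grid profiles in very populated areas, 200 otherwise."""
    return 150 if city_population >= POPULATION_MARK else 200
```

A sprawled 1.5M-person metro would thus land in the more generous tier, which is the point of using population rather than density.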
Concerning safety measures, there should definitely be an anti-harassment feature that uses both IP recognition and face detection to prevent users from creating multiple accounts. This closes the loopholes that abusive users exploit to open duplicate accounts. On one hand, IP recognition stops multiple accounts from being spammed or opened from a single IP address. On the other, facial recognition keeps abusive users from reopening accounts: the moment they upload a pic identified either as fake or as belonging to someone with three conduct strikes, the account is immediately blocked.
Equally, there should be a revolutionary safety feature called "anti-sparrassment" (short for spam harassment). Essentially, someone gets blocked from contacting a profile after, let's say, 10 unanswered messages. This is meant as a measure against those who have great difficulty respecting boundaries, as well as actual bots. Getting blocked through this feature doesn't create a strike (that would be too draconian), but extremely persistent behavior, combined with reports, could be grounds for a strike.
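The anti-sparrassment rule reduces to a per-pair counter that a reply resets. A minimal sketch, assuming an in-memory store and the 10-message threshold from above (real storage and the reset-on-reply behavior would need to be decided):

```python
# Sketch of the "anti-sparrassment" rule: after N unanswered messages to
# the same profile, further messages are blocked. The in-memory counter
# and reply-resets-counter behavior are illustrative assumptions.

from collections import defaultdict

UNANSWERED_LIMIT = 10

class SparrassmentGuard:
    def __init__(self, limit: int = UNANSWERED_LIMIT):
        self.limit = limit
        self.unanswered = defaultdict(int)  # (sender, recipient) -> count

    def can_message(self, sender: str, recipient: str) -> bool:
        """Block (no strike) once the unanswered-message limit is hit."""
        return self.unanswered[(sender, recipient)] < self.limit

    def record_message(self, sender: str, recipient: str) -> None:
        self.unanswered[(sender, recipient)] += 1

    def record_reply(self, sender: str, recipient: str) -> None:
        # A reply clears the other side's unanswered count.
        self.unanswered[(recipient, sender)] = 0
```

Note the block is silent and strike-free by design; strikes would only come from the separate report pipeline.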
Any other ideas?