  • MMR Matchmaking Planning Thread. Nerds welcome.

    After studying various games' MMR (Matchmaking Rating) systems for many hours, here are the conclusions that I've come to. I'm writing it as much for myself as for others to understand things, and to find any possible flaws before moving forward. I'm by no means an expert on stats. I got an A in a college statistics class when I was 16 or 17 and that's the absolute limit of my qualifications here. I won't resent any objections or pointing out of flaws, but please keep it civil and on-topic with the idea of moving this forward.

    --

    While many factors can be included, there are two key elements needed for any successful modern MMR system, which I propose we use: the rating, and uncertainty (or its inverse, confidence). For our purposes, we can use a simplification of the first Glicko system, which is itself a more dynamic version of Elo.

    Wondering how Elo is calculated? Here are a few of the more straightforward and less technical explanations I've found:
    https://www.omnicalculator.com/sports/elo (Continue scrolling for explanation)
    https://mattmazzola.medium.com/imple...m-a085f178e065



    Ratings

    Ratings are scaled by a factor that corresponds to standard deviations. In Elo, the scale is 400, representing two standard deviations.

    Elo is used for chess. If one chess player is 1600 and another is 1200, there's a rating difference of 400, meaning the player at 1600 is expected to win (or draw) about 91% of the time against the player at 1200. (For 1400 vs. 1200, the 1400 player is expected to win/draw about 76% of the time.)

    For what we're doing, it helps to have increased granularity. So I propose starting from a rating of 3000. A higher number such as this is what a lot of games are using, and allows for a wider range of ratings. I'd also recommend using a scale of 800, or double Elo's scale, as we're using double the base rating. So a difference of 800 means 91% win probability for the higher-rated player, a difference of 400 means 76% probability, etc.

    One other bonus to using 3000: it might possibly map somewhat cleanly to stars. (MMR * 2 / 1000) So 3000 starts you out at 6 stars. I think we'll see a few players above 5000, though, meaning they might be rated more like 11* or even 12*.
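
    To sanity-check the numbers above, here's a quick Python sketch (illustration only; the function names are mine, and nothing here is taken from the actual bot) of the win-probability scale and the proposed star mapping.

    Code:
    # Proposed constants: base rating 3000, scale 800 (double Elo's 400).
    SCALE = 800

    def win_probability(rating_a: float, rating_b: float) -> float:
        """Standard Elo logistic: chance that A beats B."""
        return 1.0 / (1.0 + 10 ** (-(rating_a - rating_b) / SCALE))

    def stars(mmr: float) -> float:
        """Proposed star mapping: MMR * 2 / 1000."""
        return mmr * 2 / 1000

    print(round(win_probability(3800, 3000), 2))  # diff 800 -> 0.91
    print(round(win_probability(3400, 3000), 2))  # diff 400 -> 0.76
    print(stars(3000), stars(5500))               # 6.0 stars, 11.0 stars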


    Uncertainty

    Uncertainty represents the level to which the system believes a player's rating to be accurate. The system estimates there's about a 95% chance that the player's actual skill falls within this range, centered on their current rating (i.e., the rating plus or minus the uncertainty value). (In combination with a base ratings change factor, this replaces Elo's somewhat arbitrary K-factor, which is simply the maximum number of points a person can win or lose in a given match.)

    Uncertainty usually starts out at around two standard deviations in both directions. I'm proposing to go a bit further than that: about 2.5 standard deviations, or 1000. Starting out slightly more uncertain is just fine. We'll tighten as we go, but start out by using larger rating changes in quals. Starting from 3000, this means our system, with maximum uncertainty of 1000, estimates with about 97% certainty that our player falls within the skill range of 2000 to 4000. This will change as performance is evaluated. The 1.5% best and 1.5% worst of the zone are expected to fall outside this range, basically.

    The idea of uncertainty is to continue to tighten up the range and grow more certain of the rating. So, with a rating of 3000 and an uncertainty value of 500, the system would estimate that the player's actual skill level is somewhere between 2500 and 3500, with (hopefully) 95% confidence. (This would happen if a player consistently won and lost about 50% of the time against teams with an average rating of 3000. The starting rating of 3000 was found to be accurate, and after more games are played, we keep confirming that to be true.)
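
    To put the same idea in code: a tiny sketch (names mine, purely illustrative) of how rating plus uncertainty translates into the band the system believes the player's true skill sits in.

    Code:
    def skill_band(rating: float, uncertainty: float) -> tuple[float, float]:
        """Range the system believes contains the player's actual skill."""
        return (rating - uncertainty, rating + uncertainty)

    print(skill_band(3000, 1000))  # fresh player:      (2000, 4000)
    print(skill_band(3000, 500))   # after some games:  (2500, 3500)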

    Uncertainty narrows the range we believe a player's actual skill to be in. Uncertainty lowers by playing matches. In systems more advanced than the one we're initially planning, it also lowers more when the predicted outcome of the match is accurate. (Even matches where the prediction isn't accurate will still generally lower uncertainty, because we're moving ratings in the proper direction to reflect the outcome of the match.) Changes in rating after each match become smaller over time. It might help to think of ratings changes as moving the center point of the curve, and then tightening the curve up with each iteration/match by lowering uncertainty.


    Ratings changes

    Four main factors influence ratings changes: the difference in rating between the two teams; the uncertainty of each player's rating; a base factor for ratings changes; and, of course, whether the match was a win or a loss for the player.


    Ratings difference and win/loss

    The ratings difference can be represented simply enough using the standard Elo formula. Each team's player ratings are averaged, and the difference between the two team averages is run through the formula.

    Example:

    Code:
    Team A: 3400
    Team B: 3000
    
    Ratings diff (Team A - Team B) = 400
    Scale factor: 800 (our chosen constant, equal to two standard deviations)
    
    probA: The probability that Team A will win. 0.5 (50%) is even odds and represents two teams with exactly the same rating facing off.
    probB: The probability that Team B will win.
    
    probA = 1 / (1 + (10^(-diff / scalefactor)))
    probA = 1 / (1 + (10^(-400 / 800)))
    probA = 1 / (1 + (10^(-0.5)))
    probA = 1 / (1 + (1 / 3.16))
    probA = 1 / (1 + 0.316)
    probA = 0.7597
    
    probB = 1 - probA
    probB = 0.2403
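
    For anyone who wants to poke at the numbers, here's the same calculation as a runnable Python sketch, including the team-averaging step described above. The rosters are made up; they just average out to the example values.

    Code:
    SCALE = 800  # our chosen scale factor

    def team_rating(player_ratings: list[float]) -> float:
        """A team's strength is the average of its players' ratings."""
        return sum(player_ratings) / len(player_ratings)

    def win_probability(rating_a: float, rating_b: float) -> float:
        return 1.0 / (1.0 + 10 ** (-(rating_a - rating_b) / SCALE))

    # Hypothetical rosters averaging to 3400 and 3000.
    team_a = [3600, 3500, 3300, 3200]
    team_b = [3200, 3100, 2900, 2800]

    prob_a = win_probability(team_rating(team_a), team_rating(team_b))
    prob_b = 1 - prob_a
    print(round(prob_a, 4), round(prob_b, 4))  # 0.7597 0.2403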

    Uncertainty

    Uncertainty starts high, and has a big impact on how much each rating changes. The idea is that ratings change more at the start of a player's season, reflecting the system's attempts to rapidly but still effectively place them. This is similar to how the Elo system uses arbitrary K-factors of 40 for new chess players (high uncertainty) and 10 for chess pros (low uncertainty). The main difference between Elo and a modern MMR system is that uncertainty lowers automatically rather than being manually assigned.


    Changing ratings based on both ratings differences and uncertainty

    In standard Elo, how much your rating changes after a match is the K-factor multiplied by the difference between your actual result (1 for a win, 0 for a loss) and your expected score (probA or probB); K represents the maximum amount your rating can change. In TWD, K=50, which is why you see unexpected victories/losses in TWD give/take close to 50 points. TWD uses classic Elo with a high K-factor. (K is the same for everyone in TWD, which is one flaw of the system.)
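
    For reference, a sketch of that classic Elo update with TWD's fixed K of 50 (variable names are mine; the formula is the textbook one):

    Code:
    K = 50  # TWD's fixed K-factor

    def elo_change(expected: float, won: bool) -> float:
        """Classic Elo: change = K * (actual score - expected score)."""
        actual = 1.0 if won else 0.0
        return K * (actual - expected)

    # A heavy underdog (expected ~0) who wins gains nearly the full 50 points;
    # a heavy favorite (expected ~1) who loses drops nearly 50.
    print(round(elo_change(0.05, True), 1))   # 47.5 (gain)
    print(round(elo_change(0.95, False), 1))  # -47.5 (loss)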

    In a simple MMR system, the K-factor is instead determined by uncertainty multiplied by a constant or base factor.


    Base factor

    We need a base factor, then. Starting with Elo's 40 for new players in a starting 1500 system, we can use a value of something above 2 x 40 (perhaps 100-250) in a starting 3000 system using self-adjusting uncertainty. This base should be larger because we'll be reducing uncertainty over time, whereas in Elo, 40 K-factor is used for a player's first 30 matches. So we need to keep the final calculation well above 80 for a good number of matches, at least 20 or so. Quals tend to use very high values to rapidly place a player, even if it may be initially flawed. Note that quals don't tell a player what their MMR is until they've finished because rating fluctuates so much and is highly inaccurate during this time.

    There are a few reasons I selected 1000 as our starting uncertainty value. It's within the range of expected deviation (~2.5 standard deviations), matches what other systems have done in attempting to qualify players quickly, and is a nice round number that's easy to calculate with. For instance, 1000 maps to a normalized uncertainty of 1.0, i.e., full uncertainty in a player's rating.


    Ratings change calculation (WIP)

    Code:
    ratings change = (actual result - expected probability of winning) * (base factor * player's uncertainty)
    Actual result is 1 for a win and 0 for a loss, so the change comes out positive for a win and negative for a loss.
    (Here, again, base factor * uncertainty are effectively replacing K.)
    As uncertainty will be different for different members of a team, players on the same team will have different ratings adjustments.
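
    To tie the pieces together, here's a hedged sketch of the whole update, assuming the base factor lands somewhere in the proposed 100-250 range and that uncertainty is normalized so that 1000 points = 1.0 (as suggested above). Every concrete value here is a placeholder, not a decision.

    Code:
    BASE_FACTOR = 200          # placeholder; somewhere in the proposed 100-250 range
    MAX_UNCERTAINTY = 1000.0   # 1000 rating points = "full" uncertainty of 1.0

    def rating_change(expected: float, player_uncertainty: float, won: bool) -> float:
        """expected = the team's win probability from the Elo formula above.
        Effective K = base factor * normalized uncertainty."""
        k = BASE_FACTOR * (player_uncertainty / MAX_UNCERTAINTY)
        actual = 1.0 if won else 0.0
        return k * (actual - expected)

    # Even matchup (expected 0.5), win:
    print(rating_change(0.5, 1000, True))  # 100.0 for a fresh player in quals
    print(rating_change(0.5, 250, True))   # 25.0 once uncertainty has tightened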


    Further work needed

    The two main questions that still need to be answered to get a solid, working system are linked. One decides the other and vice versa.
    • What is the base factor for ratings changes?
    • How quickly does uncertainty lower based on games played?
      I need to find a good formula for this, maybe a logarithm. We need to have it high enough at the start for quals to rapidly place a player. Then it needs to level off and decrease slowly, especially with many matches played, where it should hardly change at all. Considering Elo's really basic method of making huge changes to K based on differences in play level/number of matches played, it's actually not that important to get this perfect. Elo still works. Because of this, I'm considering just using a lookup table with verified calculations to make sure it works well. If anyone's a math guy and is interested in finding a more elegant solution, let me know. Glicko and Glicko2 offer some options here but are a bit complicated for what we're trying to do. I'm trying to keep it simple in order to get it operating decently from the start. We can then refine with time. This is how pretty much every game has developed their MM algo. (As an aside for the bored, I just learned today that the lead dev on the first Sonic game changed a key element of the game two weeks before the master release was finalized, so that even if the player had one single, solitary ring, hits wouldn't be fatal. Can you imagine how the game would have changed if you had to have 20+ rings to avoid a death? The point is, iteration is key. Everything is a work in progress in gamedev.)
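
    As a starting point for the lookup-table route mentioned above, here's a rough sketch. Every number in the table is a placeholder chosen only to show the shape (fast drop during quals, then a slow level-off), not a verified value.

    Code:
    import bisect

    # (games played threshold, uncertainty) -- placeholder values only.
    UNCERTAINTY_TABLE = [
        (0, 1000), (3, 800), (6, 650), (10, 500),
        (15, 400), (25, 300), (40, 250), (75, 200),
    ]

    def uncertainty_for(games_played: int) -> int:
        """Look up the uncertainty for the highest threshold <= games_played."""
        thresholds = [g for g, _ in UNCERTAINTY_TABLE]
        i = bisect.bisect_right(thresholds, games_played) - 1
        return UNCERTAINTY_TABLE[i][1]

    print(uncertainty_for(0))    # 1000 (quals)
    print(uncertainty_for(12))   # 500
    print(uncertainty_for(100))  # 200 (floor)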


    Stretch goals
    • Adjust the change to uncertainty by how far off the prediction was. If we see a big upset, such as a team winning a game they were predicted to have a 10% chance of winning, we shouldn't be equally as confident of new ratings as in a case when the prediction was accurate.
    • Allow increases of uncertainty in some cases where uncertainty is already low, such as if a player is consistently performing outside of their skill rating (either higher or lower), demonstrating that we may need to allow more substantial ratings changes until they once again level off and we can be certain they're placed where they should be. This is standard in many advanced systems.
    • Increase uncertainty slowly for inactive accounts. (Glicko does this; see the sketch after this list.) The longer a player rusts up, the less reliable past ratings are.
    • Use uncertainty of opponents as a factor in ratings adjustments. If we're uncertain of an opponent's rating, it makes sense to be more cautious about ratings changes. If we're very certain of an opponent's rating, obviously we can be much more certain that we're making a meaningful adjustment in the right direction.
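
    A sketch of the Glicko-style inactivity adjustment mentioned in the list above. The growth constant and the 1000 cap are assumptions on my part, not decided values; the formula itself is the one Glicko uses.

    Code:
    import math

    MAX_UNCERTAINTY = 1000.0
    C = 50.0  # how fast uncertainty grows per inactive rating period (placeholder)

    def inflate_uncertainty(current: float, inactive_periods: int) -> float:
        """Glicko-style: RD' = min(sqrt(RD^2 + c^2 * t), RD_max)."""
        return min(math.sqrt(current ** 2 + (C ** 2) * inactive_periods), MAX_UNCERTAINTY)

    print(round(inflate_uncertainty(250, 0)))   # 250 -- active player, unchanged
    print(round(inflate_uncertainty(250, 20)))  # 335 -- rusty after 20 idle periods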


    That's it for now. Any and all comments and questions appreciated. It's taken a good bit to figure all this out and there are bound to be issues here and there. Getting the base factor and the change in uncertainty over time right is especially important.

    If you're a hardcore nerd who loves this kind of stuff, or if you have statistical experience in general and would like to help make things work, reach out. We have a Slack channel dedicated to busting MMR/Elo for TW wide open and could use your input.
    Last edited by qan; 09-02-2022, 12:34 PM.
    "You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
    -Dostoevsky's Crime and Punishment

  • #2
    I would assume the end goal of using this method would be to select X groups of 10 players from the entire pool available, such that each group of 10 can be split evenly into teams that end up with comparable ELO totals. It could vary from match to match, but it would at least try to pair two evenly matched ELO teams. I also assume this is primarily for wb and jav only; I would think having multiple ships in base would gum this up. Please correct me if I am wrong.

    So one factor that is impossible to really take into consideration when setting up teams is team play-style composition. I would suspect that while ELOs are even, you'll always run into teams with a mix of passive players and aggressive ones. I mentioned in another thread that it would be fun to introduce a 'management league' or something of the like alongside TSL. The only change from what I think you propose to what I would suggest would be to allow two players of comparable ELO from each group of 8-10 to be assigned captain. Then have a !list command that shows your pool of players and their ELO, and allow the caps to choose from that pool. Depending on what the pool of players looks like, it may be somewhat imbalanced and could heavily favor the first pick. One mitigation might be something like a draft order of 1-2-2-1-2-1-2. But that is just speculating; unknown how it would actually shake out.

    I would be interested to see if something like this would make a huge difference or just be so dependent on league knowledge that it would be harmful. To see how this all plays out, I would float the idea of playing back-to-back games with the same group of 10 people. The first game is played with the computer alone selecting each team. Then, right after that game concludes, play another game selecting 2 caps of comparable ELO (unknown how to do that yet) and let them draft their teams from the remaining pool of 8 players available. Take note of the final ELO of each team and let it assign an expected winning percentage. The cap would get a certain number of points for winning that could scale up or down with ELO imbalance. Until we see how this would affect the greater ELO of the players, I wouldn't have this affect anyone's ELO.

    I personally would find this interesting, and it would help players that traditionally don't captain learn some things about setting up complementary lines. But that's just me; if the better players of the zone think this is silly, it wouldn't be worth pursuing. I have no idea how much additional work this would be to set up. Since the group of players and their ELO would already be available, and the outcome wouldn't integrate into the ELO system from the start, it seems like not too bad, but I don't know programming.

    Other than that, thanks for your efforts qan. You're one of the guys that always works to improve the zone.


    1996 Minnesota State Pooping Champion



    • #3
      What is the base factor for ratings changes?

      This does not answer your question, but I recommend 5000 (or 4000) as a starting point.

      How quickly does uncertainty lower based on games played?

      Can uncertainty be situational? I wonder if that would help minimize the margin for error (which is inevitable)

      Whether it's by:
      • Number of games played by the opposing & own team (there can be a minimum of x games so that aliases aren't immediately rewarded, and a player who's played against/with an aliased player isn't severely impacted)
      • Number of games played by the player

      Ty 4 this qan & others
      Last edited by Riverside; 05-02-2022, 02:23 PM.



      • #4
        deleted
        Last edited by saiyan; 10-17-2022, 01:42 PM.



        • #5
          Originally posted by saiyan View Post

          Based on my extensive matchmaking experience in games using an elevated starting point like you have, I think your MMR change should come out to around 25-30 a game in an even matchup (post-calibration). Whatever base factor is required to make that happen should be fine.



          In each case I've seen, your confidence is very high (fixed, reduced K) by the time you've played a dozen post-placement matches. From that point you need to grind out a bunch of games if you were placed far off your true rating. Keep in mind there is always a hard cap on how high you can be placed. You should never come out of the placements at a top tier; those spots need to be earned via grinding.

          These are the 3 variations I've seen:
          • Low confidence throughout placement matches that continues into official ranked play until you start to win roughly 50% of even matchups
          • Low confidence (or very high K value) throughout placements, followed immediately by maximum confidence (or a lower, fixed K value) during official ranked play.
          • No placements, no uncertainty factor, just one fixed K value from beginning to end (lazy, but not necessarily bad)
          I have seen matchmakers do some strange things when you get streaky, such as throwing you into 1 or 2 games way outside of your skill bracket. But I'm not going to delve deep into that as I don't want to make this any more complicated. Nor do I think we have the player count for that. As for an uncertainty algorithm, I can't help you there, but I'll keep my eyes open.

          Ultimately it's going to be hard to go wrong and this will be a success as long as you control smurfing and there is enough activity.

          Other suggestions:
          • If this is how players will be ranked in TSL, consider hiding MMR (at least for everyone outside of the top 10) and then doing a full numeric reveal at the end of the season. For some players, most of their journey will be a straight decline from the starting point until they reach their true rating. It might be disheartening for them to see, especially when some of these individuals (Jessup) somehow believe they should be able to compete for a top spot. I want these people to keep playing, learn how they can create more value for their team, and improve. This isn't TWL or TWDT. You are against yourself.
          • Because of the above, be very generous with rewards to encourage our lesser talented players. Tw kiddies like pixel medals. Maybe different medals for top 1, 3, 5, 10, 25..... 50?
          Really appreciate you trying to make something good happen here.
          The only issue I have with other games' MMR matchmaking experiences is that in many cases, the way they force 50-50 matchups is to put the high-MMR person in with a bunch of scrubs against a team of balanced-MMR players, assuming this is a balanced matchup. It almost never truly is, though. In games where teamwork is essential and hard-carrying is extremely difficult to do (Overwatch 1, for example), this was a huge detriment and ultimately helped kill the game. It also tended to put high-MMR players in each medal bracket against each other to create 50-50 matchups, but what that ended up doing was making it so that instead of just dominating a bunch of gold players and jumping ranks into higher tiers asap, a new player who belongs in a much higher bracket would be stuck playing other players in gold who also belong in a much higher bracket. Smurfs playing smurfs, essentially. They do eventually rank up, but it forces things to take much longer. Instead of dominating people and quickly moving up, they end up being forced into 50% win/loss games for that bracket. If a high-MMR player can't be found for the other team, it would add a bunch of scrubs that belong in a lower bracket but haven't downranked yet as teammates.

          I don't actually think that applies much to TW due to the lack of players, gameplay that allows for far more individual skill to shine through, and a lack of SSR tiers on top of hidden MMR affecting games. I just wonder how badly the matchmaker will stack games against certain players. If a 10* WB is playing, and the other side doesn't have a 10*, will that 10* then be stuck with four 5* players against a bunch of 8*s and 9*s? That's an unwinnable situation for them, all just to achieve 50-50 matchmaking. That game cannot be fun for anyone on either team, really.

          This all said, it will be interesting to find out how things shake out regardless. Maybe things will go much differently than in the above examples.
          RaCka> imagine standing out as a retard on subspace
          RaCka> mad impressive



          • #6
            I'm not good at math or statistics so I'm just going to put some general observations about how we would apply an MMR system here:

            - Require a 24-72-hour signup cooldown to prevent smurfing. If someone wants to sign up during the season, force them to wait out a timer to hopefully discourage smurfing.

            - "Leaver penalties" probably need to be built in. Meaning someone leaving a 5v5 would lose a bunch of points and the losing team would lose less or no points.

            - This is going to be integrated into basing as well, correct? Because of the variability in basing lineups, I think it would be smart to still assign captains who !add. What I'm picturing is: everyone who wants to base does !signup, the bot makes groups of 16 and assigns two caps per group, who alternate adding players from that pool. I do not think letting people pick their own ships will lead to engaging play.
            Vehicle> ?help Will the division's be decided as well today?
            Message has been sent to online moderators
            2:BLeeN> veh yes
            (Overstrand)>no
            2:Vehicle> (Overstrand)>no
            2:BLeeN> ok then no
            :Overstrand:2:Bleen> veh yes
            (Overstrand)>oh...then yes



            • #7
              Originally posted by Exalt View Post

              The only issue I have with other games' MMR matchmaking experiences is that in many cases, the way they force 50-50 matchups is to put the high-MMR person in with a bunch of scrubs against a team of balanced-MMR players, assuming this is a balanced matchup. It almost never truly is, though. In games where teamwork is essential and hard-carrying is extremely difficult to do (Overwatch 1, for example), this was a huge detriment and ultimately helped kill the game. It also tended to put high-MMR players in each medal bracket against each other to create 50-50 matchups, but what that ended up doing was making it so that instead of just dominating a bunch of gold players and jumping ranks into higher tiers asap, a new player who belongs in a much higher bracket would be stuck playing other players in gold who also belong in a much higher bracket. Smurfs playing smurfs, essentially. They do eventually rank up, but it forces things to take much longer. Instead of dominating people and quickly moving up, they end up being forced into 50% win/loss games for that bracket. If a high-MMR player can't be found for the other team, it would add a bunch of scrubs that belong in a lower bracket but haven't downranked yet as teammates.

              I don't actually think that applies much to TW due to the lack of players, gameplay that allows for far more individual skill to shine through, and a lack of SSR tiers on top of hidden MMR affecting games. I just wonder how badly the matchmaker will stack games against certain players. If a 10* WB is playing, and the other side doesn't have a 10*, will that 10* then be stuck with four 5* players against a bunch of 8*s and 9*s? That's an unwinnable situation for them, all just to achieve 50-50 matchmaking. That game cannot be fun for anyone on either team, really.

              This all said, it will be interesting to find out how things shake out regardless. Maybe things will go much differently than in the above examples.
              This is another reason I made the suggestion about trying it with captains. 2 knowledgeable people should be able to pick more even lines within the pool of the 8 remaining players.


              1996 Minnesota State Pooping Champion



              • #8
                Some MMR updates:
                • Bot has been created (forked from TSLBot). Signup, initial and individual league ratings, qualifications, etc., are all looking good.
                • To simplify things, for the initial run, we're going to work on the easier case of just D/J at the moment. BIET needed to pause on the base team choosing algorithm for RL reasons. Base is still happening, but it's much more complicated, so it needs to come later down the line.
                • Still hammering out exact values for base factor and how quickly uncertainty changes based on games played (see above for details), but getting close. Once this is nailed down, the initial formulas will be a go.
                • Hoping to possibly have some kind of very basic alpha version to test ratings changes by next weekend.
                If you'd like to help with the development effort, please ?go mmr and PM !signup to the bot. This doesn't commit you to anything. I just need more records in the database for testing purposes.
                "You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
                -Dostoevsky's Crime and Punishment



                • #9
                  Originally posted by qan View Post
                  If you'd like to help with the development effort, please ?go mmr and PM !signup to the bot. This doesn't commit you to anything. I just need more records in the database for testing purposes.
                  Fuking qan n biet ty 2 u n da teams 4 dis



                  To clarify:

                  If I sign up in ?go mmr, and pm the bot !signup, what happens?

                  Does me doing that give permission to allow past, present, future TWD games to be collected and used to improve the accuracy of the MMR system/bot?

                  Excitid 2 vs this $$$



                  • #10
                    That'd be pretty cool, but no, it would be difficult to make use of that older data unless it was done to make a sort of simulated MMR with your past self vs all other players, adjusting each person's simulated MMR over time. Interesting idea, actually, considering the huge amount of match data we have available. It would even allow simulated MMR to be tracked over time, so you could see who was the very best in every single age (at least from TWD data) ... But as fascinating as that might be, it's a different project altogether.

                    It'll just work as most MMR systems do: getting closer to estimating your current actual skill level as you play more and more games. Nothing fancy, just: did you win, or did you lose? It's an algorithm that takes the human guesswork out, which is often the step (at least without rigorous testing and reiterating) that mucks up just about any kind of ratings system.

                    And what !signup does is just allow me some test data to use to make sure things are working properly. It'll also make it easier to start testing. Everything will be wiped before the real go-live, so no worries.
                    "You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
                    -Dostoevsky's Crime and Punishment



                    • #11
                      Originally posted by qan View Post
                      And what !signup does is just allow me some test data to use to make sure things are working properly. It'll also make it easier to start testing. Everything will be wiped before the real go-live, so no worries.
                      I appreciate the clarification.

                      Signing up sounds like a true 0 commitment case with this ask.

                      Meanwhile, it sounds like the consequence cud b impactful for the Volunteer Dev Staff/teams to help move this project, and other surrounding projects, forward.

                      Will vs !signup upon entry 2 TW, ty qan u fuking beast



                      • #12
                        Originally posted by qan View Post
                        That'd be pretty cool, but no, it would be difficult to make use of that older data unless it was done to make a sort of simulated MMR with your past self vs all other players, adjusting each person's simulated MMR over time. Interesting idea, actually, considering the huge amount of match data we have available. It would even allow simulated MMR to be tracked over time, so you could see who was the very best in every single age (at least from TWD data) ... But as fascinating as that might be, it's a different project altogether.

                        It'll just work as most MMR systems do: getting closer to estimating your current actual skill level as you play more and more games. Nothing fancy, just: did you win, or did you lose? It's an algorithm that takes the human guesswork out, which is often the step (at least without rigorous testing and reiterating) that mucks up just about any kind of ratings system.

                        And what !signup does is just allow me some test data to use to make sure things are working properly. It'll also make it easier to start testing. Everything will be wiped before the real go-live, so no worries.
                        Quick question: is the MMR going to favor wins over record? Will it have a bigger weight for kills or KOs, etc.? I only ask this because the answer can directly impact how someone plays.

                        If I play for the win, record doesn't matter to me in most cases. I'll take risks to keep momentum, give up tons of deaths to get the enemy 9s, set up kills for others, or just die to keep high-death teammates alive. All of these things are huge parts of winning games (especially with teammates that are not top players), yet do not show up on stat sheets for the most part.

                        If I play for record, I can just sit back not giving a fuck and get a better record in a probable loss, unless I'm surrounded by better players who will hold their own regardless of what I do. Since that's rare unless I join a better squad (there are too few already), record gets put on the backburner a lot.

                        Maybe it's being on Rocket for so long or not being a 10* that gives me this perspective versus people on stacked squads, but I literally have to play differently depending on my teammates and what their capabilities are just to win games. I compare it to a PG in basketball being pass-first versus score-first depending on who's around them. This game doesn't record assists, so wins are the only thing that can reflect the difference. Winning more than would otherwise be possible with lesser-skilled players, by doing those untracked things, is difficult, but doable. Hypothetically, MMR should eventually recognize that as well?
                        RaCka> imagine standing out as a retard on subspace
                        RaCka> mad impressive



                        • #13
                          In a true MMR system only winning matters.



                          • #14
                            As Turban says, wins are the only thing that matter, yes. No other data is taken into consideration. A few MMR systems attempt to adjust slightly based on performance, but arguably this is a mistake.

                            If you kill 20 people in battle but your army dies at the end, you didn't win. You're dead. In fact, in some cases it's possible that the reason you were able to amass such an impressive list of victims is simply because your team was supporting you properly, but in your excitement, you didn't do the same for them.

                            Those who will do best in MMR are those who know how to play well as part of a team, and do so as skillfully as possible, bringing wins for their team. Theoretically, you could score no kills at all and still be at the top of the leaderboard. (Extremely unlikely, but perhaps you've developed some nefarious bait scheme that allows your team to pull in the wins.)

                            We'll also be able to run teams of different skill levels against one another and award points fairly, much like how TWD team scores function. Matchmaking is not strictly a necessary component. Queuing with a small group of friends or a full team might also be possible, with work.

                            Just had a side-thought as well: we could look at both casual and comp. Casual is exactly the same but hides MMR score, allowing you to feel a bit freer to experiment while still getting the benefits of an MMR system.
                            "You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
                            -Dostoevsky's Crime and Punishment



                            • #15
                              qan, with the aim to get more !signups in ?go mmr

                              Cud we plz (automatically if possible) allow the players who !signup to be listed on the forums, similar to what has been done with TWDT/TWEL signups?

