Implementing a migration for the rps chain

Published 23/2/2022.

This is a continuation of the previous rps article; to follow along, either go through that one first or acquire v0.1 of the repository. The plan now is to add a map where players can look up the games they are associated with.

Run the chain, start some games and finish a few of them, if you haven't already. Check the current block height with rpsd q block and stop the chain. Next, make a backup of the ~/.rps folder.

Migrations are usually performed by passing a software upgrade proposal, but we are going to take a shortcut and simply register the upgrade in the EndBlock hook. Open app/app.go in your favorite IDE (I use Atom for the obvious reason) and navigate to the EndBlocker function, where you add the following, with the block heights adjusted appropriately:

	if ctx.BlockHeight() == 59340 {
		plan := upgradetypes.Plan{Name: "add-playerMap", Height: 59350}
		if err := app.UpgradeKeeper.ScheduleUpgrade(ctx, plan); err != nil {
			panic(err)
		}
	}

Rebuild rpsd with starport c build and run the chain through rpsd start (or use starport c serve --verbose) until it halts and reports that the upgrade is needed. Ctrl-C the chain and add our map with the following command: starport scaffold map player ongoing:array.uint completed:array.uint --no-message
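
Besides the CLI commands, the scaffold generates a Player type in x/rps/types plus keeper helpers like GetPlayer and SetPlayer that we will rely on below. Stripped of the protobuf plumbing (and any extra fields starport may add), the generated type should look roughly like this:

	// Player as generated in x/rps/types/player.pb.go, trimmed for readability:
	type Player struct {
		Index     string   // the player's address, used as the map key
		Ongoing   []uint64 // ids of matches still being played
		Completed []uint64 // ids of matches that have finished
	}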

Then open x/rps/module.go at line 140, where the RegisterServices function is; within it, add the following:

	if err := cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
		// Perform in-place store migrations from ConsensusVersion 1 to 2.
		// Starport currently initializes the CV to 2 even though the Cosmos docs say it should start at 1.
		// The Cosmos docs also state that there has to be a migration for every preceding version, hence this dummy function.
		return nil
	}); err != nil {
		panic(err)
	}
	if err := cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
		// Perform in-place store migrations from ConsensusVersion 2 to 3.
		playerMap := make(map[string]*types.Player)
		k := am.keeper
		matches := k.GetAllMatch(ctx)
		for _, match := range matches {
			for _, addr := range []string{match.Player1, match.Player2} {
				player, found := playerMap[addr]
				if !found {
					player = &types.Player{
						Index:     addr,
						Ongoing:   []uint64{},
						Completed: []uint64{},
					}
				}
				if match.Winner > 0 {
					player.Completed = append(player.Completed, match.Id)
				} else {
					player.Ongoing = append(player.Ongoing, match.Id)
				}
				playerMap[addr] = player
			}
		}
		// Go map iteration order is random, but each key is written exactly
		// once, so the resulting store contents are deterministic regardless.
		for _, player := range playerMap {
			k.SetPlayer(ctx, *player)
		}
		return nil
	}); err != nil {
		panic(err)
	}

The dummy migration from 1 to 2 is there because for some reason starport initializes modules to version 2; I have opened an issue on this matter. While in the module.go file, modify the ConsensusVersion function to return 3.
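
For reference, after the change the function should look something like this (the receiver may be spelled differently in your scaffolded module.go):

	// ConsensusVersion implements the module.AppModule interface.
	func (AppModule) ConsensusVersion() uint64 { return 3 }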

Go back to the app.go file and search for app.mm.RegisterServices. Notice that the configurator passed to RegisterServices, where the migrations are stored, is itself not saved anywhere. This is a problem, as we need to access those migrations when performing the upgrade (I've submitted an issue on this as well). To remedy the situation, lift the NewConfigurator call out of the parentheses and into a variable we will call appcfg:

	appcfg = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter())
	app.mm.RegisterServices(appcfg)

Then declare it as a global variable, for example on line 102:

var appcfg module.Configurator

Finally, search for the assignment of the app's UpgradeKeeper (app.UpgradeKeeper = upgradekeeper.NewKeeper) and below it insert the following. The handler name must match the plan name we scheduled in EndBlocker; when the upgrade hits, RunMigrations compares the stored version map against each module's ConsensusVersion and runs every registered migration in between:

	app.UpgradeKeeper.SetUpgradeHandler("add-playerMap", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
		return app.mm.RunMigrations(ctx, appcfg, fromVM)
	})

Now the upgrade logic is complete, and we just have to handle new games that are started and finished. Open x/rps/keeper/msg_server_join_queue.go and insert the following in a good place, after the match has been stored (get the id from id := k.AppendMatch(ctx, match)):

		for _, addr := range []string{match.Player1, match.Player2} {
			player, found := k.GetPlayer(ctx, addr)
			if !found {
				player = types.Player{
					Index:     addr,
					Ongoing:   []uint64{},
					Completed: []uint64{},
				}
			}
			player.Ongoing = append(player.Ongoing, id)
			k.SetPlayer(ctx, player)
		}
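
For orientation, here is roughly how it sits in my version of the handler; match, player1 and player2 are assumptions based on the previous article, so your names may differ:

	// Sketch: both players have been matched, persist the match and get its id.
	match := types.Match{Player1: player1, Player2: player2}
	id := k.AppendMatch(ctx, match)
	// -> the bookkeeping loop from above goes here, appending id to both
	//    players' Ongoing lists.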

Next, open x/rps/keeper/msg_server_submit_move.go and find a fitting location for the following; it belongs in the branch where the match is decided, i.e. after match.Winner has been set:

		for _, addr := range []string{match.Player1, match.Player2} {
			player, found := k.GetPlayer(ctx, addr)
			if !found {
				panic("Player not found even though they finished a match.")
			}
			for i, mid := range player.Ongoing {
				if mid == match.Id {
					player.Ongoing = append(player.Ongoing[:i], player.Ongoing[i+1:]...)
					break
				}
			}
			player.Completed = append(player.Completed, match.Id)
			k.SetPlayer(ctx, player)
		}
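
As with the join handler, here is a rough sketch of where it lands; gameOver and winner are hypothetical names standing in for however the previous article decides the match:

	// Sketch: the match has just been decided.
	if gameOver { // hypothetical: both moves are in and a result exists
		match.Winner = winner // hypothetical: the number of the winning player
		k.SetMatch(ctx, match)
		// -> the bookkeeping loop from above goes here, moving match.Id from
		//    each player's Ongoing list to their Completed list.
	}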

Perfect! Now rebuild the chain, launch it with rpsd start, and verify that the map of players got populated with rpsd q rps list-player.

Note that I had some very strange corruption issues when using starport c serve above instead of rpsd start. Debug messages got printed into the exported genesis, and when I restored the ~/.starport/local-chains/rps backup, the chain flat out refused to start even after c serve -r, with errors like panic: parameter UnbondingTime not registered and ERR CONSENSUS FAILURE!!! err="parameter HistoricalEntries not registered". In addition, it didn't execute the migration. The migration went fine when using the daemon directly, but when I afterwards tried to pick up with c serve it could not read the database and on shutdown gave a panic: UnmarshalJSON cannot decode empty bytes. Running the chain with starport after the upgrade corrupted something as well, since rpsd start could not resume afterwards.

Output of starport version:

Starport version:	v0.19.1
Starport build date:	2021-12-18T05:56:36Z
Starport source hash:	-
Your OS:		linux
Your arch:		amd64
Your go version:	go version go1.16.2 linux/amd64

To learn how to build a frontend for the chain, see the next article.