Expanding customer base with redesigned websites using bandit framework
As Covid-19 led to an intensified worldwide lockdown, the population at large struggled to adapt to the new reality. Although a cautious easing of restrictions has begun, many behavioural changes adopted under force majeure will persist as habits. For example, many consumers who reluctantly moved to online purchasing during this period will continue the practice and become more digitally comfortable. Moreover, this new cohort is likely to have revised product preferences owing to the prevailing economic crunch.
Hence, these changed requirements will send individuals who find their existing suppliers inadequate hunting for new vendors. From a commercial perspective, this disruption is both an opportunity for businesses to acquire new clients and a threat to those with low customer satisfaction. Therefore, to navigate this rapidly changing environment intelligently, organisations must adopt innovative strategies to retain and expand their client base during these fluid times.
Whilst consumers navigate the web in search of new suppliers, companies must improve their digital assets to retain and attract wavering customers. As organisations enhance their online presence, they can track progress by monitoring website metrics such as the conversion rate. In its simplest form, the conversion rate is the total number of individuals who buy a product divided by the total number of visitors to the website. This important metric works in conjunction with search engine optimisation (SEO): whereas SEO aims to increase visitors, conversion rate optimisation seeks to persuade the maximum number of those visitors to buy. Hence, during this critical period of consumer upheaval, companies can adopt the conversion rate as a key performance indicator (KPI) that reflects retention as well as new customer acquisition.
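For the technically inclined, the calculation is straightforward; a minimal sketch in Python follows, with purely illustrative figures:

    def conversion_rate(purchasers: int, visitors: int) -> float:
        """Fraction of website visitors who complete a purchase."""
        return purchasers / visitors if visitors else 0.0

    # Example: 150 purchases out of 10,000 visits is a 1.5% conversion rate
    print(conversion_rate(150, 10_000))  # 0.015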
A well-known method for businesses to enhance their conversion rate is to dynamically redesign their websites using an AB testing platform. The AB testing procedure automatically splits incoming visitors into pools, with each group shown a specific instance of a competing design option. For example, suppose we wish to improve the background colour of our website and must choose between two options, red or blue. The AB testing process divides the visitors into two sections and displays the website with the red background to one group, whereas the other is shown an instance with the blue background. The eventual winner of the experiment is the option that achieves the highest conversion rate, and it is subsequently adopted for all customers.
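A minimal simulation of this split-and-compare procedure, in which made-up purchase probabilities stand in for real visitor behaviour:

    import random

    # Hypothetical true conversion rates -- unknown to the experimenter
    TRUE_RATES = {"red": 0.030, "blue": 0.025}

    def ab_test(n_visitors=20_000, variants=("red", "blue")):
        """Split traffic evenly, tally conversions, return the winning variant."""
        shown = {v: 0 for v in variants}
        bought = {v: 0 for v in variants}
        for _ in range(n_visitors):
            v = random.choice(variants)          # each visitor sees one design
            shown[v] += 1
            if random.random() < TRUE_RATES[v]:  # simulated purchase decision
                bought[v] += 1
        return max(variants, key=lambda v: bought[v] / shown[v])

    print(ab_test())  # usually "red", the higher-converting background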
As websites typically consist of several components that require upgrading, the AB testing process can be tediously slow if we target a single element at a time. Although an enhanced version called multivariate AB testing can deal with numerous attributes simultaneously, its application has severe limitations. The essential issue is that the number of splits needed for incoming traffic grows uncontrollably as we increase the number of features within a test. For example, the previous instance had a single attribute (background) with two options, which entailed distributing online visitors into two groups. However, if instead of one change we concurrently targeted two binary variables, the required partitions would increase to four. In general, the number of required traffic divisions equals the total number of possible design combinations. Hence, if we were to simultaneously optimise thirty such binary features, the total number of possible designs, and hence bifurcations, rises sharply to 2^30, or about a billion. Therefore, large-scale multivariate testing is impractical, since dividing visitors into billions of groups is not possible even for busy websites with traffic in the millions.
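The arithmetic behind this explosion is simple: every additional binary feature doubles the number of distinct designs that must be shown.

    # Each binary feature doubles the count of distinct full designs
    options_per_feature = 2
    features = 30
    print(options_per_feature ** features)  # 1073741824 -- over a billion splits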
Interestingly, the complexity of multivariate AB testing can be made tractable if we model it as an instance of the multi-armed bandit problem. The word "bandit" refers to a slot machine (the "one-armed bandit"), evoking a scenario in which a gambler visits a new casino and wishes to maximise earnings. Given some initial amount of money, the gambler must develop a strategy for playing the machines in a way that ensures maximum returns.
As the initial investment is not enough to play all the bandits, the gambler could start by trying out a few arbitrarily selected ones. As more of these short trials are conducted, the bettor gains a deeper understanding of the lay of the land. The garnered insight helps divert the spend effectively in favour of the highest-paying bandits. Additionally, generalised learnings from the trials guide the scouting of promising untested slot machines. For example, if older bandits are observed to be poor performers, then it makes sense to prioritise newly manufactured, untried machines for inclusion in subsequent iterations.
Incidentally, the best approach under these circumstances follows two common idioms. Firstly, the gambler should not put all their eggs in one basket: instead of focusing on a single bandit, it is better to maintain a portfolio of tested as well as untried slot machines as targets for investment. Secondly, the bandits should be played to their strengths: slot machines that provide higher dividends should receive a proportionately larger share of plays in subsequent rounds. Remarkably, this simple strategy broadly captures the founding principles of an ideal winning algorithm.
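One well-known algorithm that embodies both idioms is Thompson sampling; the sketch below plays a set of simulated machines whose payout rates (invented here for illustration) are unknown to the gambler:

    import random

    def thompson_sampling(payout_rates, rounds=10_000):
        """Keep a belief about each machine; play the one whose sampled belief is best."""
        n = len(payout_rates)
        wins, losses = [1] * n, [1] * n  # Beta(1, 1) priors: all machines unknown
        for _ in range(rounds):
            # Sample a plausible payout rate for every machine from its posterior...
            beliefs = [random.betavariate(wins[i], losses[i]) for i in range(n)]
            # ...and play the most promising one, so high earners get played more often
            i = max(range(n), key=lambda k: beliefs[k])
            if random.random() < payout_rates[i]:
                wins[i] += 1
            else:
                losses[i] += 1
        return [wins[i] + losses[i] - 2 for i in range(n)]  # plays per machine

    # Play counts concentrate on the best machine (the 5% payer) over time
    print(thompson_sampling([0.02, 0.05, 0.04]))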
The bandit framework can be mapped to multivariate AB testing to make it manageable. Recall that, to achieve the best results, the gambler had to rely on learnings from repeated short trials, since limited resources did not permit trying out every bandit. An analogous approach is followed here: small-scale AB tests with few partitions are repeatedly undertaken until convergence. The process kickstarts with an initial AB test (trial) of a few randomly selected designs (bandits), and the conversion rates (winnings) for each split are tabulated. During the next AB test (trial), more traffic (resources) is diverted to high-yielding choices, whereas learnings from previous AB tests help generate new designs that continuously replace poorly performing ones. Therefore, by employing the bandit framework within multivariate AB testing, businesses can speedily redesign their websites automatically, without having to exhaustively try out all possibilities.
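A highly simplified sketch of this loop follows; the feature list and the toy conversion measurement are invented stand-ins for a real site and a real traffic slice:

    import random

    # Three illustrative binary features; a real site would have many more
    FEATURES = {"background": ["red", "blue"],
                "font": ["serif", "sans"],
                "button": ["Buy now", "Shop now"]}

    def random_design():
        """Draw a complete design by picking one option per feature."""
        return {f: random.choice(opts) for f, opts in FEATURES.items()}

    def toy_conversion(design):
        """Stand-in for measuring one design's conversion rate on a traffic slice."""
        rate = 0.02 + 0.01 * (design["background"] == "blue") \
                    + 0.005 * (design["button"] == "Buy now")
        return rate + random.gauss(0, 0.002)  # measurement noise

    def bandit_redesign(measure, pool_size=4, rounds=25):
        """Repeated small AB tests: keep strong designs, replace the weakest each round."""
        pool = [random_design() for _ in range(pool_size)]
        best = None
        for _ in range(rounds):
            scored = sorted(pool, key=measure, reverse=True)
            best = scored[0]                        # current front-runner
            pool = scored[:-1] + [random_design()]  # replenish the poorest performer
        return best

    print(bandit_redesign(toy_conversion))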
As consumers undergo transformation due to the grim economic outlook, vendors must, for their own survival, make their digital presence more engaging for visitors. For this purpose, the bandit framework can be integrated with multivariate AB testing to achieve a speedy, scalable redesign of business websites. This amalgamated approach will enable organisations to achieve higher conversion rates, leading to improved customer acquisition and stickiness.
Published in The Express Tribune, June 30th, 2020.