signals of noise

Category: data and analytics

e-Commerce and Memory [3]

This post is part of a series where I develop a concept of adding “memory” to e-commerce sites to improve the experience of customers with the site. [Part 1, Part 2]

While I am at it, discussing the usefulness of adding some memory to e-commerce sites, I thought I might as well take a stab at creating some wireframes for how the feature could be implemented. (The first two sketches were created using Penultimate on my iPad, while the low fidelity wireframe was created with Mockingbird.)

The first sketch is a suggested generic layout for an e-commerce site. Typically, most e-commerce sites display the best selling or new addition sections right at the top, without exception, even if the current user is logged in and the site has information about them. Ideally (caveat: this is my interpretation and obviously must be tested before being accepted in practice), the sections that make the most sense to the user, the personalized sections on recommendations and user-specific activity, should be the ones at the top. This is what I have attempted to do here.

A generic layout of an e-commerce site.

This is the sketch for the specific section in question here. Mostly self-explanatory so I will save myself some typing :) And if it is self-explanatory then I suppose the design works as well!

A sketch of the implementation of "memory" in e-commerce.

And finally the low fidelity wireframe built with Mockingbird.

A low fidelity wireframe of the implementation of "memory" in e-commerce.

How does this look?

e-Commerce and Memory [2]

This post is part of a series where I develop a concept of adding “memory” to e-commerce sites to improve the experience of customers with the site. [Part 1, Part 3]

Adding memory to e-commerce

At its most basic this could be implemented as a list of the last n items that I explored over my last p visits. (n should ideally be a small number and p a reasonable historical period that makes sense for the customer. Going too far back is unlikely to be useful, as the need for the item will probably have expired.) This serves as a simple reminder and a kind of bookmark for easy access to the products.
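To make this concrete, here is a minimal sketch of that basic version. It assumes a view log of (product id, timestamp) pairs in chronological order; the function name and parameters are my own illustration, not any site's actual API.

```python
from datetime import datetime, timedelta

def recently_viewed(view_log, n=5, p_days=30, now=None):
    """Return the last n distinct products viewed in the past p_days.

    view_log is a list of (product_id, viewed_at) tuples, oldest first.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=p_days)
    seen = set()
    result = []
    # Walk the log backwards so the most recently viewed items come first.
    for product_id, viewed_at in reversed(view_log):
        if viewed_at < cutoff:
            break  # log is chronological, so everything earlier is too old
        if product_id not in seen:
            seen.add(product_id)
            result.append(product_id)
        if len(result) == n:
            break
    return result
```

Deduplicating while walking backwards means a product viewed five times still occupies only one slot, keeping the list short and scannable.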

Once past this hurdle there is some really significant value that can be created by making this memory intelligent. There are many possibilities here and I will only be scratching the surface with the ideas that follow. If I had more time, and a mandate to do this on an official basis, I could dream up more ways. So let’s see what can be done.

In the simple solution I suggested going back only a reasonable time in history to get the customer’s products of interest. But that is a bad generalization, especially for high value purchases. When I bought my camera I researched for almost two months, gathering all the necessary information about camera bodies and additional lenses, before deciding what to purchase. So this is something that needs to be kept in mind when showing recently viewed products: there is a trade-off, in choosing which products to show in this list, between time since first view, last viewed date and the relative value of the product.
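One way to express that trade-off is a score per product that decays with time but decays more slowly for high value items. This is only a sketch of the idea; the half-life and value threshold are made-up numbers that any real site would have to tune per category.

```python
from datetime import datetime

def memory_score(first_viewed, last_viewed, price, now=None,
                 half_life_days=14, high_value_threshold=200):
    """Score a product for the 'memory' list.

    Recency of the last view dominates, but high-value products decay
    more slowly because research cycles for them are longer.
    """
    now = now or datetime.now()
    days_since_last = (now - last_viewed).days
    # High-value items get a longer half-life: a camera researched for
    # two months should not fall off the list as fast as a pendrive.
    if price >= high_value_threshold:
        half_life_days *= 4
    recency = 0.5 ** (days_since_last / half_life_days)
    # A small bonus for products the customer has been tracking a while,
    # capped so it never outweighs recency entirely.
    research_span = max((last_viewed - first_viewed).days, 0)
    persistence = min(research_span / 30, 1.0)
    return recency * (1 + 0.5 * persistence)
```

Sorting the candidate products by this score and taking the top n would then replace the purely chronological list from the simple solution.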

I know I am being fairly subjective here, but then this is really a brain dump. So, there you go.

Further signals that can be used to build this list could include “changes” to the product. Of course the product itself doesn’t change, but the information available about the product can easily change and be tracked. Maybe additional reviews have been posted, a friend might have bought it (social integration in e-commerce is another idea ripe in my mind and I will write about it in a few weeks), it may have been the darling of public social conversations recently or it might have won an award. There can be scores of such signals that can be used to optimize the way this information is presented to the customer.

The challenge is obviously to identify the correct signals for this task. Furthermore, some signals are likely to work better with specific classes of products, while for other classes (e.g. a pendrive) there may not be any valuable signal at all. The choices made at this signal identification stage will determine the complexity of implementation. It also has to be ensured that there is always a suitable fallback available in case the customer has not viewed any products recently or has only browsed those classes of products for which no signal is available.
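The per-class signals and the fallback could be wired together roughly like this. Everything here is hypothetical: the signal names, weights and product classes are placeholders for whatever a real signal-identification exercise would produce.

```python
# Hypothetical signal weights per product class; a class missing from the
# table has no useful signals (e.g. a pendrive).
SIGNAL_WEIGHTS = {
    "camera": {"new_reviews": 0.4, "friend_bought": 0.4, "award_won": 0.2},
    "book": {"new_reviews": 0.5, "social_buzz": 0.5},
}

def rank_products(viewed, signals, fallback):
    """Rank recently viewed products by signal score, with a fallback.

    viewed: list of (product_id, product_class) pairs.
    signals: dict mapping product_id -> {signal name: strength in 0..1}.
    fallback: list shown when no viewed product has a usable signal
    (e.g. the site's best sellers).
    """
    scored = []
    for product_id, product_class in viewed:
        weights = SIGNAL_WEIGHTS.get(product_class)
        if not weights:
            continue  # no valuable signal for this class of product
        score = sum(w * signals.get(product_id, {}).get(name, 0)
                    for name, w in weights.items())
        if score > 0:
            scored.append((score, product_id))
    if not scored:
        return fallback  # customer browsed only signal-less classes
    return [pid for _, pid in sorted(scored, reverse=True)]
```

The fallback branch is the important part: the memory section should never render empty just because the customer happened to browse pendrives.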

Leave a comment to let me know what you think of the idea. E-commerce has so much to offer beyond just the recommendations being served up right now. It could really do with some intelligent memory for now. A socially intelligent memory would be even better.

e-Commerce and Memory [1]

This post is part of a series where I develop a concept of adding “memory” to e-commerce sites to improve the experience of customers with the site. [Part 2, Part 3]

Add memory to e-commerce for a better experience.

What do you do when you get to work in the morning? Yes, the first thing is to get the menace of email out of the way. But then there are two other broad categories into which the daily routine can be divided. One is a set of new tasks that need to start or need your attention that day. Two, and perhaps more importantly, is a continuation of what you were doing the previous evening before you left. You need to complete the incomplete tasks at hand. This is where our memory serves us well. We explicitly remember that there are some things we need to complete before proceeding with other things.

How I wish the same were true of web applications too. And especially of e-commerce. It would also help answer that all-important e-commerce question: how do you increase customer loyalty? Let me explain.

When I go over to an e-commerce site to research a product (books, lenses, camera accessories, etc.) I do not necessarily buy it on my first visit. If it is something specific I will read reviews on the site, or in case there are none go off to a search engine to get that information. If I am only certain of the class of product (e.g. a wide angle lens with a Nikon mount) I might compare the product specifications and reviews and then maybe continue the research elsewhere on the web. Fairly straightforward. Nothing that others wouldn’t be doing.

Here is where I feel a little bit of memory would serve both the e-commerce sites and the customers well. On Flipkart, for example, I can see a list of recently viewed products when I am on a product page, but not on the home page when I land there. So if I am returning to the site to do further research or even buy a product, I have to search again or, at best, access it quickly from my wishlist if I have added it there (which is also not very convenient if the list is as long as my Flipkart wishlist).

Not only is this an extra step to get to my goal, it is also a missed opportunity for the site to remind me that I was looking at something the last time I was there that I have not made a decision on yet (as far as they know, since I might have picked it up from somewhere else for whatever reason). So how should this memory thing work for e-commerce?

Just stop here and think about it. I will be back with my thoughts tomorrow in part 2 of this series.

India to Provide Access to Public and Government Data

India to join the open data revolution in July

This portal, which has been modeled on a similar portal that offers public and government data to the outside world, aims to share non-sensitive data from various Indian ministries for general public use in scientific, economic and developmental programmes.

In November last year I had written a post calling on the Government of India to provide access to official data to the public, along with APIs to expose that data to developers willing to create mashups (Open Access Government: via API).

Open access to government data would allow developers to use it to create applications that can then be accessed by us to understand the state of the nation. It could help economists and statisticians predict future trends and understand correlations between seemingly disparate sets of data.

While I am sure no one in the Indian Government actually got the idea from me, it is good to see that something like what I had called for is getting set up. There might not be any APIs available yet (or ever), but at least the data will be out there and will hopefully lead to some interesting insights into what is happening in different departments of the Government.

PS: I am kind of betting there won’t be much about black money there yet.

How Not to Chart: The McKinsey Way

How many new enemies did I make with that statement? The most famous management consulting firm in the world does not know how to draw charts? I must be mad.

But I think not. Charts are a means to present data so that it makes sense very easily. The important word here is not sense, but easily. Anyone can create a data visualization. What is difficult is to create it in a way that is easily understandable to all, not only to the people who created it.

Here is the chart that forced me into thinking. A visualization of data and projections on the growth of digital media in three Asian countries – India, China and Malaysia.

How not to create charts

There are so many numbers that it is really difficult to make sense of any. The legend on the left side indicates the shades of blue that represent each category. But how easy is it to understand that and apply to the Malaysian data?

For India in 2009 you might even think that mobile is 65 because that is the bottom-most stack in the bar. But then you realize that there is a 0 jutting out.

Then there is the table sitting out in the cold giving penetration numbers as a percentage of the total population. But what does each column represent?

After some exploration you will realize that you need to read the years and the country from the graph above!!

The other thing I hate about this representation is the way growth is compared using stacked bars. For example, in the case of China PC penetration drops from 2009 to 2015, but unless those numbers are there you might even think they are almost the same. This kind of growth comparison should always be from the same base; it is much easier for the intended audience to understand trends.

But then maybe McKinsey doesn’t want the intended audience to understand too easily, hmm!

Social Media Metrics: One Size Don’t Fit All

A recent Econsultancy report on usage of social media shows that most companies are looking to invest in this channel as expected. However, the actual amount being spent is still quite moderate to low.

Social media spends

The one concern I have with this report is the size of the businesses surveyed. Almost 80% of the surveyed businesses have an annual turnover of less than £150 million, and we also cannot be sure how high the turnover goes for the other 20%. By this measure I wouldn’t say the spend is too moderate, given the uncertainties of social media measurement, a reason cited for the lack of investment.


The primary reasons for this according to the report are a lack of integration within business and a failure to establish appropriate metrics to measure success.

I feel, to resolve the difficulty many companies are having integrating social media activities with other parts of the business, companies need to establish a social media strategy based on their objectives and in line with determined success measures. It is important to involve the entire company in the strategy to maintain a consistent experience for customers on the social web.

To tackle the measurement issues of social media marketing efforts, here are some metrics that I feel could be tracked. And I am just scratching the surface here. However, the choice of metric has to be clearly dictated by the objectives of the companies in their line of business.

  1. Sales generated through social media campaigns
  2. Leads generated in terms of registrations completed and downloads
  3. Contacts made and information requests in response to campaigns
  4. Referrals and shares beyond the original recipient of the message
  5. Increase in the number of fans/followers
  6. Reduction in call center activity and hence associated costs
  7. Decrease in the number of consumer complaints
  8. Reduced sales cycle

Clearly not all these can be applicable to all efforts. Some are related to awareness campaigns while others are demand generation metrics. Some are sales metrics while still others relate to cost reduction. The choice of metrics thus would depend on the type of effort undertaken. One size doesn’t fit all.

Price Wars, Data and Portability

Data portability may improve user experience and make social networks richer

Yesterday I was reading a nice post on data portability on Mashable. It was a guest post by Elias Bizannes, the chairperson and executive director of the DataPortability Project. Before I share my thoughts here is what data portability means (from the post).

Data Portability can be loosely described as the free flow of people’s personal information across the Internet, within their control. It has now become a standard term in the Internet industry in the context of cloud computing, open standards and privacy.

Examples of data portability include:

  • Being able to import all your social network connections, media and other data to another service with the click of a button.
  • The ability to reuse your health records when visiting different doctors and jurisdictions.
  • Not having to re-enter your credit card information when a service you use changes payment gateways.

You will realize the importance of data portability when you want to share the same data across networks. I face this situation often with this blog. I want to share my posts with friends on Facebook, followers on Twitter and my network on LinkedIn. But there is no easy way to do it.

I use a plugin (WPBook) to share on Facebook via an application. FeedBurner to share on Twitter. And have connected LinkedIn to my Twitter account to pull in all my updates.

Sometimes I want to upload the same photo to Flickr and Facebook but have to do it twice. It is my photo, but I cannot share it across the two networks. These are some of the problems that data portability should be able to solve. The only way this is possible today is through the public APIs that most of these networks expose. But that means we need to depend on apps that use these APIs to create a connection, and it is still difficult to selectively share data.

In an ideal world we would have a dashboard showing all our data in one place and share it across all our networks simultaneously and seamlessly. If we want to delete some information, we do it in one place and it is removed from everywhere.

So why do the networks not have a data portability policy in place today? It can only help them. The data is enriched with metadata not from one network but from all my networks. Win-win. Isn’t it? But data is the currency of social networks. Like cold hard cash is for the industrial economy. Data is the source of revenue. Data is the source of differentiation. At least that is the thinking.

In this way I feel this battle for data is similar to a price war in the offline world. Just as a price war reduces industry profitability (case in point: the Indian mobile telephony industry), this reluctance to share data is preventing social networks from exploiting their true potential.

What are your thoughts? Do you think that a lack of data portability is stifling the innovation potential of social networks? Share any examples, for or against, that you might have.

[image: Flickr/Michiel2005]

Want to Jump on the Measurement Bandwagon?


Do you want to measure everything? Do you want to do it because everyone else is doing it? Or do you know why you want to do it?

Answering the following questions can help you decide:

  • What are the business objectives behind the measurement?
  • What metrics are relevant to the stated business objectives?
  • What are the calls-to-action for customers to generate data for the relevant metrics?
  • What information do users require to respond to those calls-to-action?

[image: Flickr/Laineys Repertoire]

Leverage App Usage Data Proactively

Keep an eye on users to help them proactively

Many times when you install a new application it will ask permission to collect and send anonymous usage data. This is true for most Microsoft and Google products and for most browsers. The data collected can then be used to enhance future versions of the product, add new features or drop unused features.

But rarely will the app maker proactively help the user in her usage of the product. I feel most app makers, whether they have internal app analytics or not, are missing out on a big user experience opportunity.

Soluto provides feedback on how others are using the product.

Soluto, for example, collects data from its users and helps make decisions to pause or delay an app from Windows boot. It also allows users to edit app descriptions, which then helps other users understand what a particular app is for.

This makes the experience of using Soluto really great.

But what I have not seen in any product is proactive user feedback, that is, personalized user interactions that help users use the product better. This is besides the traditional ways in which this data is used. Here are some thoughts on how this can be done:

1. Send user tips on how to better use the product based on current usage statistics.

2. Suggest to the user a feature that she may not be using but that you would want her to.

3. Help her discover new ways to use your product.

The advantages are obvious. Greater interaction with the product, good word-of-mouth and recommendations.

What if you have too many users? Select the innovators, the power users and those who spend the most time with your app. Of course, before you do any of this, do not forget privacy concerns. Allow users the option to opt in anonymously or with their email addresses. The default should always be opt-out. And remember, the product itself has to be worthwhile for any of this to work.
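Pulling the ideas above together, a tip engine could be as simple as the sketch below: only opted-in, sufficiently engaged users get suggestions, and only for features they have never touched. The feature names, tip texts and thresholds are purely illustrative.

```python
# Illustrative feature tips; names, texts and thresholds are made up.
TIPS = {
    "keyboard_shortcuts": "Speed up with keyboard shortcuts: press '?' to see them.",
    "export": "Did you know you can export your data as CSV?",
}

def proactive_tips(usage_counts, opted_in, min_sessions=10):
    """Suggest unused features to an engaged, opted-in user.

    usage_counts maps a feature name -> times used ('sessions' counts
    visits). Respecting the opt-in flag keeps the default opt-out.
    """
    if not opted_in or usage_counts.get("sessions", 0) < min_sessions:
        return []
    # Only suggest features the user has never used.
    return [tip for feature, tip in TIPS.items()
            if usage_counts.get(feature, 0) == 0]
```

Gating on a minimum number of sessions is one way to target the power users first rather than nagging newcomers.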

[image: Flickr/Niffty]

The Power of Data Visualization

David McCandless turns complex data sets (like worldwide military spending, media buzz, Facebook status updates) into beautiful, simple diagrams that tease out unseen patterns and connections. Good design, he suggests, is the best way to navigate information glut — and it may just change the way we see the world.

The kind of TED talk that I love. The key to visualization of data and information is context. Without context everything is meaningless. Once context is added to a visualization, every image starts to tell a tale.

David McCandless: The beauty of data visualization

Bonus Video: Does your mindset match the dataset?

Hans Rosling: Let my dataset change your mindset

More from the data visualization master, Hans Rosling.

[thumbnail: Flickr/Morgaine]

