<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	
	>

<channel>
	<title>The Data Handbook</title>
	<link>https://crhandbook.cargo.site</link>
	<description>The Data Handbook</description>
	<pubDate>Mon, 11 Apr 2022 12:04:42 +0000</pubDate>
	<generator>https://crhandbook.cargo.site</generator>
	<language>en</language>
	
		
	<item>
		<title>Chapter 1</title>
				
		<link>https://crhandbook.cargo.site/Chapter-1</link>

		<pubDate>Sun, 20 Mar 2022 11:49:20 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-1</guid>

		<description>
	
    	
        
	
	
		
    Editor’s note
    	
    
    
    	
        
    


&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/7bd356ceb0a046f7ee9bde401f33549dee4aa0b8fdbd98fd210177d13fa4b858/1.jpg" data-mid="138720715" border="0"  src="https://freight.cargo.site/w/1000/i/7bd356ceb0a046f7ee9bde401f33549dee4aa0b8fdbd98fd210177d13fa4b858/1.jpg" /&#62;


	
		  
 		&#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/cd7935dd57d81902a05e43cc301efb626e48608568d5cb2ef1e19064d1d66876/Patricia.png" data-mid="138580034" border="0"  src="https://freight.cargo.site/w/1000/i/cd7935dd57d81902a05e43cc301efb626e48608568d5cb2ef1e19064d1d66876/Patricia.png" /&#62;
        
		
    		Patricia Åkerman
           	Editor, 
         	Partner
    
		
        	For many companies, understanding the commercial customer journey is a challenge wherever digital sales are involved. Parts of that journey may exist only as a vision, and each step is optimised separately, usually owned by a different functional department. To run digital sales, it’s essential to have shared goals and a common thread to follow. That thread is data.
        
		
		The question is, how can we stitch together a set of winning digital commerce capabilities through consolidated data? 
		
        
        	We’ve had discussions with more than a hundred companies in the Nordics, the UK, Germany and the Netherlands, and it’s clear what the current number one topic is for every digital commerce leader and practitioner: how to better utilise data to grow sales revenue in digital and traditional channels. And more specifically, how to use data holistically over the entire customer journey rather than optimising individual channels and touchpoints.
        
        
			This handbook aims to help business, sales, marketing, IT and ecommerce leaders formulate an overall understanding of various essential themes that can help turn data into concrete business impact.
        
        
     
        
        	Through interviews with thought leaders conducted by Columbia Road's general manager Eero Martela, along with our own insights, we cover aspects of commercial data strategies, transformational data projects, technical data capabilities, the utilisation of data for automating customer-facing activities and using data science and engineering to build more commercially lucrative algorithms.
         
         
         	I hope that these explorations of data in digital sales and marketing help you to better understand relationships and dependencies, formulate a mental map of opportunities and gain insight and inspiration for the key decisions you must make, both now and in the future.
    	
	
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 2</title>
				
		<link>https://crhandbook.cargo.site/Chapter-2</link>

		<pubDate>Tue, 29 Mar 2022 14:25:18 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-2</guid>

		<description>
	
    	
        
	
	
		
    Introduction: to understand, you must ask
    	
    
    
    	
        
    



	&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/2aa79b7295343f58ee9c6e6112c663ca1c5d1932c1a9504a0f47965bd1e6e281/2.jpg" data-mid="138720932" border="0"  src="https://freight.cargo.site/w/1000/i/2aa79b7295343f58ee9c6e6112c663ca1c5d1932c1a9504a0f47965bd1e6e281/2.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/35acc52ffe499270938a1d7375116a0dc132e1aaa35bb92864db2796408e5bb6/Eero.png" data-mid="138579639" border="0"  src="https://freight.cargo.site/w/1000/i/35acc52ffe499270938a1d7375116a0dc132e1aaa35bb92864db2796408e5bb6/Eero.png" /&#62;
        
        
    		Eero Martela
           	Managing Consultant, 
         	Partner
	
    
		
        	If I had to choose just one hot topic in the field of digital commerce it would, without a doubt, be data.
		
         
         But what about data? That’s the challenge. Many companies and experts in the field have their own individual points of view on how to improve data utilisation. For a business leader, it can be difficult to form an overall understanding of which questions they should be asking and the decisions they should make — the topic of data is simply too broad. Ultimately, data encompasses every aspect of a business.
		
         
With this handbook, we’ve attempted to make it a little easier for you to approach this vast theme. We’ve talked with academics, thought leaders, data companies, major retailers and disruptive brands to drill down into the topic of data from both a strategic and a practical perspective. In addition, our consultants share what they’ve learned from working with data in a digital commerce setting. 
		
        Insights from industry leaders
        
        To produce this handbook, I sat down with some people I find inspiring and companies that I feel are doing interesting things with data. I learned a lot through these discussions, and I feel ever more convinced of the importance of data — and its utilisation for business growth. 
        
        
        Dr Daniel McCarthy takes the topic of data utilisation to the highest possible level of abstraction: how to evaluate a company’s worth based on customer data. While we wouldn’t necessarily expect it to change a company’s valuation model overnight, this approach surely provides inspiration for turning holistic data management into a decision-making tool for board-level discussions. 
        
        
        On a more concrete level, the ultimate goal for data utilisation would be to have the ability to continuously measure and optimise commercial actions based on customer lifetime value (CLV). Many of the experts interviewed for the book touch on this, but Dr Peter Fader, a leading scholar on the topic and a dear friend of ours, provides an update on the status of the CLV framework in companies today and the potential future development areas.
		
        
For even further inspiration on new approaches to data utilisation, Erik Zetterberg from Singular Society explains, from a designer's perspective, how his company’s revolutionary business model has introduced a new way of using customer and transactional data, and how this business model helps them tackle privacy issues around data.
		
        
        Once a target state for the commercial utilisation of data has been defined, the reality is that the journey towards the end goal will take years. How to really succeed in this long-term transformation while maintaining coherence and quickly gaining a tangible impact is a billion-euro question.
            
    
        
    
         
Minna Vakkilainen from Kesko, a leading Finnish retailer, and Saku Laitinen from European fashion ecommerce leader Zalando have already been on that journey for some years. Vakkilainen shares her experiences of successfully leading Kesko’s transition into a data-driven company starting from bottom-up buy-in. Meanwhile, Laitinen shares his slightly more technical point of view on how, over the years, Zalando built highly sophisticated content personalisation capabilities that are now at the heart of the company’s commercial operations.
		
         
         Paramount in data strategy implementation is the ability to successfully build the core data architecture. Yves Mulkers, a data and analytics strategist and recognised global thought leader, elaborates on how companies can succeed in data capability build projects and shares some of the most interesting recent development trends he’s seen in the field of data architecture.
		
         
Major data-related build and transformation projects are commonplace in modern companies. Once suitable data capabilities are in place, systematically utilising them for commercial actions should be a high priority, meaning that maximising the impact of the available data is central to success. Vishnu Sahoo and Edward Ford, top experts from Supermetrics — a market-leading marketing and sales data service — share their views on the key success factors for generating true business value from data science and analytics, how companies are adapting to evolving privacy regulations and the latest development areas in the field.
    	
        From ideas to practice
        
        To support our interviews with these leading experts and their broader insights on the above themes, we also wanted to introduce some very concrete takeaways from the field of digital sales and marketing. For this, our own experts here at Columbia Road, who work hands-on with these topics, share some learnings that we see as critical for the decision makers of today.
        
		
        Aki Pöntinen shares his research on the topic of utilising AI in B2B sales and Roope Paju, Antton Ikola and Elli Pyykkö reflect on what they’ve learned from tackling the challenge of unconnected data. Meanwhile, Juuli Kiiskinen reflects on the future of brands in the age of data-based personalisation and Laura Purontaus describes a data engineer’s role in a growth team, building a bridge between the themes of this book and the practice of daily digital sales operations.
        
    
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 3</title>
				
		<link>https://crhandbook.cargo.site/Chapter-3</link>

		<pubDate>Tue, 05 Apr 2022 13:57:52 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-3</guid>

		<description>
	
    	
        
	
	
		
    Interview: Dr Daniel McCarthy
    	
    
    
    	
        
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/3f2dd03f00772290810b92e6bf8b8847e153fbd6fd2a8a307578294544fc59e0/3.jpg" data-mid="138720975" border="0"  src="https://freight.cargo.site/w/1000/i/3f2dd03f00772290810b92e6bf8b8847e153fbd6fd2a8a307578294544fc59e0/3.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/62f708632e28cd64b4b189bf8b7a4ff3f563211faa685622376d1ac05bec861b/DanielMcC.png" data-mid="138721079" border="0"  src="https://freight.cargo.site/w/1000/i/62f708632e28cd64b4b189bf8b7a4ff3f563211faa685622376d1ac05bec861b/DanielMcC.png" /&#62;
        
        
    		Daniel McCarthy
           	Assistant Professor of Marketing,
         	Emory University's Goizueta School of Business
    	
	
    
    
		
        	Columbia Road: Can you briefly explain the customer-based corporate valuation (CBCV) model and its benefits?
        
        
            Daniel: The CBCV framework uses historical data and a collection of state-of-the-art statistical models to predict the lifetime value of each customer. It does this by forecasting the future stream of revenue and variable profits per customer over the customer’s projected lifetime. Performing these calculations across all existing customers, factoring in new customers that will be acquired in the future and accounting for other standard financial factors, for example capital structure and cost of capital, we arrive at a CBCV for the company. Of course, doing so also gives us a number of other key financial KPIs “for free”, such as total revenue, customer lifetime value (CLV), CLV relative to customer acquisition cost (CAC) and how these quantities have been varying across acquisition cohorts.
            
            
            The beauty of this framework is that although the model gives us an overall valuation, it also provides individual-level estimates for the value of each and every customer in the customer file. If other company stakeholders, for example marketing managers, want to break it down, even to the level of individual customers, they can, with everyone working off the exact same model. That builds a bridge between marketing, finance and other divisions within a company. Ultimately, it offers a new way of defining company valuation, derived from the existing and potential customer base along with the cost to serve each customer, rather than from market size and historical revenue and profit figures.
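As a rough illustration of the mechanics Daniel describes, the sketch below sums discounted per-customer profits for existing customers and future acquisition cohorts. The geometric-retention formula, the function names and every number here are hypothetical simplifications for intuition only, not the state-of-the-art statistical models the interview refers to.

```python
# Toy customer-based corporate valuation (CBCV) sketch.
# Assumptions (not from the interview): constant retention rate,
# constant annual variable profit per customer, geometric survival.

def clv(annual_profit: float, retention: float, discount: float) -> float:
    """Expected discounted lifetime value of one active customer.

    The discounted sum of m * r**t / (1 + d)**t over t = 0, 1, 2, ...
    closes to m * (1 + d) / (1 + d - r).
    """
    return annual_profit * (1 + discount) / (1 + discount - retention)

def cbcv(existing: int, future_cohorts: list, annual_profit: float,
         retention: float, discount: float) -> float:
    """Value of existing customers plus discounted future cohorts."""
    per_customer = clv(annual_profit, retention, discount)
    value = existing * per_customer
    # A cohort acquired in year t is discounted back t years.
    for year, size in enumerate(future_cohorts, start=1):
        value += size * per_customer / (1 + discount) ** year
    return value

# Hypothetical example: 100,000 existing customers, three future cohorts,
# 40 EUR annual variable profit, 80% retention, 10% discount rate.
print(round(cbcv(100_000, [20_000, 18_000, 16_000], 40.0, 0.8, 0.10)))
```

The sensitivity analysis Daniel mentions later, such as a drop in expected customer lifetime, would amount to re-running a model like this with a lower retention rate and comparing the two valuations.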
		
        
        CR: Do your clients automate CBCV to track predicted lifetime value and costs for individual customers, and can they use this to track the overall company valuation?
        
        
        Daniel: Yes, some companies do, at least for the revenue and cost data. Having that available in near real time isn’t easy because of the difficulty of attributing things like product returns to specific customers. We’ll often just do a one-shot analysis, but as you allude to, a company can really take this to the next level by building it into their company’s DNA. If it’s helpful, I can take you through an example of one of the analyses that we did on AT&#38;amp;T to get a better sense of what the valuation process looks like.
        
        
        CR: Yes, please tell us more about the AT&#38;amp;T case...
        
        
        Daniel: For AT&#38;amp;T we ran two separate valuations, one for all postpaid customers and one for all prepaid customers. Although the prepaid segment was growing rapidly, the expected lifetime value of the customers within it was much lower, meaning that most of the company’s value was coming from postpaid customers. That bigger-picture view showed us that the success or failure of the company is likely going to be driven by what happens in the postpaid segment. 
        
        
        Breaking down a firm’s value using the expected lifetime value of both existing customers and those yet to be acquired allows us to peel back the onion and analyse the company’s value in a much more granular and diagnostically relevant way, helping us to understand what might happen to overall valuation if some of the drivers were to change. To use a hypothetical example, if AT&#38;amp;T’s prepaid segment were to see a 10% drop in the expected lifetime of new customers, we could quantify exactly what that would mean for AT&#38;amp;T’s overall valuation. If we were inside the company, we could then examine what drove this reduction in expected lifetime and what remedies we could consider to get this back on track. Management can put together a customer dashboard to track these predictive KPIs, in addition to the historical ones, to know how the health of their customer base is evolving over time.
        
        
        CR: How does a CBCV-inspired approach affect the work and focus areas of sales and marketing teams?
        
        
        Daniel: The main way is that we could move prospects through the adoption funnel more efficiently, so we’d be acquiring more new customers for the same amount of marketing dollars, resulting in a lower CAC. Ideally, we would be bringing in higher-value customers, meaning their post-acquisition value would rise, too. We would then set up a workbench through which we could start to optimise marketing spend allocation in this long-term customer value-oriented way. That’s the sort of thing we’d recommend. It can be used to get the CFO on board with marketing initiatives because CBCV gives them a more direct view of how customer-focused activities affect the overall company valuation.
        
    
    
    
         
         CR: Can the model help with investment decisions across the whole company, or is it just for sales and marketing?
         
         
         Daniel: Yes, the CFO already thinks about proposed initiatives — typically capital projects — in terms of measures like return on investment, payback period and internal rate of return. The more we can translate our marketing activities into their language, using these same sorts of measures, the easier it will be to sell marketing projects to them. An aspect that’s often ignored is short term versus long term — there are many initiatives where the return is positive in the short term but it won’t be in the longer term. An example of this could be routing customers into an interactive voice response (IVR) system when they call up. Although this can offer significant savings in the short term, it can also lead to a lot of frustrated and unhappy customers — so it’s really important to estimate not just the reduction in costs, but also the impact on customer retention.
         
         
         CR: Does your work lead to a more data-centric approach to everyday sales and marketing operations for your clients?
         
         
         Daniel: Facilitating AB testing is one very valuable thing that this framework can offer. When we run an AB test on an initiative, we can establish its net effect on the average customer, then use that information to work out the return on investment. For example, we might want to find out whether implementing call centre automation positively affects customer value. If we randomly route some customers calling in to the more automated solution while routing everyone else to the incumbent solution, then running the right predictive models on both groups and taking the difference between them gives us the impact of that automation initiative on the long-term value of the customer, and how it varies across customers.
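The routing experiment Daniel outlines reduces to a difference in mean predicted CLV between the two groups. The sketch below assumes such per-customer predictions already exist; the function name and all figures are hypothetical.

```python
# Illustrative only: estimating an initiative's impact on long-term
# customer value from a randomised test. The predicted-CLV inputs would
# come from a predictive model; here they are made-up numbers.
from statistics import mean

def ab_clv_impact(treatment_clv: list, control_clv: list) -> float:
    """Average treatment effect: difference in mean predicted CLV between
    customers routed to the new solution and to the incumbent one."""
    return mean(treatment_clv) - mean(control_clv)

# Hypothetical predicted CLVs: call-centre automation (treatment)
# versus the incumbent process (control).
treatment = [120.0, 95.0, 140.0, 110.0]
control = [100.0, 90.0, 125.0, 105.0]
print(ab_clv_impact(treatment, control))  # positive => automation adds value
```

In practice the per-group predictions would come from the same CLV models discussed throughout the interview, so the comparison stays consistent with the rest of the valuation framework.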
         
         
         CR: Does doing this kind of analysis help people to see sales and marketing initiatives in a different way?
         
         
         Daniel: Yes. For each initiative you have a base case valuation and a full potential valuation. Without any fancy instrumentation, you can still run the numbers manually. Pilots such as this allow you to generate some early wins and, in turn, some excitement about this new way of looking at the world. Once you’ve successfully executed this process a few times, you can clearly demonstrate to management why it’s important to have an experimental platform and continue running these “test and learn” experiments with a broader set of activities. Netflix and Stitch Fix are great examples of companies that have made it easy to run lots of experiments to see how their initiatives perform. They’ve spent a lot of money to do this, but companies can achieve similar results with less extravagant methods — you don’t always have to reinvent the wheel, as there are many existing solutions on the market. For example, Optimizely is purely focused on AB testing, while Pega Systems has relevant workbench capabilities. By using existing tools like these effectively, you can create your own “laboratory” in which to run experiments and present the results in an easy-to-interpret way.
         
         
         CR: Have you seen companies turning their public reporting and valuation principles into a CBCV-inspired model?
         
         
         Daniel: The whole disclosure question is interesting. We’ve been working with a major bank that’s now planning to disclose customer data for the first time and we’re very excited to be a part of it. They see disclosure as a way to close that valuation gap and get the credit they feel they deserve for their valuable customer relationships. If companies hold themselves accountable in this way it also adds transparency and puts pressure on them to demonstrate that they can maintain or improve their CBCV over time, which is good news for everyone.
		
                
         
        DANIEL MCCARTHY is an Assistant Professor of Marketing at Emory University's Goizueta School of Business. Alongside his colleague Peter Fader, Daniel developed the customer-based corporate valuation (CBCV) model, and the two founded Zodiac (which has since been acquired by Nike) and Theta to make the model’s predictive customer analytics commercially available to companies.
        
        
	
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 4</title>
				
		<link>https://crhandbook.cargo.site/Chapter-4</link>

		<pubDate>Tue, 05 Apr 2022 14:49:57 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-4</guid>

		<description>
	
    	
        
	
	
		
    Interview: Dr Peter Fader
    	
    
    
    	
        
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/a38170e164cd9d24a723302165d850a952befe67f0d3adbe871e21aa85c24ff9/4.jpg" data-mid="138731059" border="0"  src="https://freight.cargo.site/w/1000/i/a38170e164cd9d24a723302165d850a952befe67f0d3adbe871e21aa85c24ff9/4.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/c5475bb4c37abfed21b41293757731b8c0c73207a7576b049625bf13ac09a501/4.png" data-mid="138731723" border="0"  src="https://freight.cargo.site/w/1000/i/c5475bb4c37abfed21b41293757731b8c0c73207a7576b049625bf13ac09a501/4.png" /&#62;
        
        
    		Dr Peter Fader
           	Professor of Marketing,
         	The Wharton School of the University of Pennsylvania
    	
	
    
    
		
        	Columbia Road: How did you arrive at the concept of customer-based valuation?
        
        
         Peter: I’ve spent my 35 years at the Wharton School building models to predict various things. Around the turn of the millennium, three things happened: firstly, even though we were continuously coming out with better forecasting models, the academic journals were losing interest in the incremental improvements from one paper to another. Secondly, the evolution of the internet meant we were starting to get richer data sources. And thirdly, companies started to raise new, customer-related issues that they hadn’t considered before. These three developments provided a perfect setting to focus on the commercial opportunities.
        
        
         Even though we were giving companies the basic “academic” code which they could apply to project their customers’ lifetime value, they couldn’t replicate all the “bells and whistles” in the more complex models as well as we could, so we founded Zodiac, which was hugely successful and saw us working with lots of firms all over the world. In early 2018, Nike approached us to buy the company. Nike embraced and internalised the concept — seeing a world-class company doing it for the right reasons was great.
        
        
        CR: So, you have been advancing the topic of data-based customer focus now for more than 20 years. What are the typical obstacles that companies are facing today?
        
        
        Peter: Back in the day the main challenge used to be that companies simply weren’t interested in measuring the lifetime value of their customers — the primary interest of marketing people was related to the brand or product. Today, the level of awareness and ambition for turning the focus from transaction centricity to customer lifetime value management is starting to mature. Many companies are asking the right questions but too often their technical approach isn’t optimal, resulting in a lack of rigour, accountability and standardisation.
        
        
        Many companies think it’s best to build their own customer lifetime value formula rather than using something that’s been created by academics — the intent is good, but it rarely works as well as the top-notch specifications we’ve developed. When that happens, some will acknowledge it whereas others will change the way they ask questions or evaluate the models in order to avoid admitting that their in-house version is sub-par. Companies are often inconsistent and don’t hold themselves accountable to the accuracy of their predictions. It’s also important that different stakeholders share a common vocabulary and shared perspective on the topic.
        
        
        I want to make CLV tangible, and to achieve this I focus on three components: firstly, how long until the customer relationship ends, secondly, how many purchases they’ll make before then and thirdly, how much they’ll spend on these purchases. Having a concrete and tangible basis to start from before going deeper into the models helps make the predictions more concrete, comparable and accountable.
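A minimal way to combine the three components Peter lists (relationship length, purchase count and spend) is a straight product. The sketch below uses that simplification with hypothetical numbers; the real models treat each component probabilistically rather than as a point estimate.

```python
# Illustrative only: Fader's three tangible CLV components combined in the
# simplest possible way (no discounting, no uncertainty, made-up numbers).

def simple_clv(years_remaining: float, purchases_per_year: float,
               avg_spend: float, margin: float = 1.0) -> float:
    """CLV as relationship length x purchase frequency x spend per
    purchase x variable profit margin."""
    return years_remaining * purchases_per_year * avg_spend * margin

# Hypothetical customer: 3 more years, 4 purchases a year,
# 50 EUR per purchase, 25% variable margin.
print(simple_clv(3, 4, 50.0, margin=0.25))  # 150.0
```

Each factor maps directly onto one of the three questions in the paragraph above, which is what makes predictions built this way easy to compare and hold accountable.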
        
        
        CR: How do companies typically wake up to the need for measuring customer lifetime value and taking initiatives based on that?
        
        
        Peter: Depending on the company there are three typical paths — the one that makes me happiest is when they approach me saying something like, “everyone else is doing the CLV thing, we should too!” What’s more common is that they have a broad strategic problem, for example their growth is too slow. Normally at this stage they’re not focused on data or models, but on figuring out the problem, so I point them in the direction of my books and they see that looking at lifetime value is the way to achieve customer centricity. The third way is when the CFO is looking into CBCV — once they’re comfortable with that, it’s a small hop into CLV.
        
        
        CR: Does the customer lifetime value model vary for different companies?
        
        
        Peter: Yes it does, but to keep things as simple as possible we’ve come up with five lifetime value models, and we believe any business in the world can use one of them. First, we have the simple service contractual model — this is for companies like Spotify, Netflix, Slack or insurance companies and it’s the simplest model but far from the most common. 
        
        
        The second model is multi-service contractual, for companies that provide a portfolio of services over time – say, retail banks, telecommunications companies or business services companies where services are added and dropped over time.
        
        
        Thirdly we have the retail model, where there’s no contract so it’s just a question of how many purchases and how often – this is the most common and also the model that Zodiac used, and it relates to businesses like retail, travel and hospitality. 
        
    
    
	
     	
        Next you have a model for multiple levels of non-contractual transactions, such as mobile gaming where people can play for free but make in-app purchases. The questions we’re asking here could be how often they play the game, how long they play before starting to spend money and how often and how much they spend once they do. 
        
        
        Finally, we have a model for when there’s a mix of contractual and non-contractual transactions, like Amazon Prime or a gym — there’s a contract and you pay a regular subscription or membership fee, but they’re also selling individual items on top of that such as entertainment in the case of Amazon Prime, or classes, food and gear in the case of a gym.
        
         
        	CR: Have you seen many companies adopting CLV as a key metric that is continuously measured and used for directing operative actions and strategic decisions?
         
         
            Peter: This was exactly the original intention of Zodiac: to make CLV so essential that the models would be run on a daily basis for all kinds of things. In practice, continuous measurement of CLV, let alone its adoption as a KPI, hasn’t been as common as I’d hoped — companies are more likely to run it once and then again three months later to see what’s changed, or treat it as a special project that gets pulled out once in a while for specific questions, rather than making it part of the daily workflow. We currently have plans to create the next iteration of Zodiac, where the principle would be more around automation and self-serve. The vision is that a customer can “just press a button” to run the models rather than needing to come to us — hopefully that will help companies get the full value out of it.
         
         
         CR: How did Nike utilise Zodiac’s know-how?
         
         
         Peter: The key challenge for them was to figure out how to bring together all the many sources of customer data, such as the loyalty programme, fitness trackers with athletic data and so on. What’s interesting is that whereas many companies turn to this model in desperation, Nike was coming from a position of strength: they were already doing great with their brand and their products, but they knew they could do even better with their customer data. I think the primary impact of Zodiac for Nike was being a driver of cultural change. The initially standalone Zodiac team has become deeply woven into the company culture. I wish I could point to it and say, “there’s Zodiac and that’s what they’re doing for Nike” but you can no longer distinguish the Zodiac-type activities from Nike’s other ongoing initiatives — and that’s a good thing!
         
         
         CR: You’ve mentioned your books a couple of times – are you working on any at the moment?
         
         
         Peter: So far I’ve written two and there are at least two more to come. Customer Centricity is about the basic ideas and how CLV is radically different from the current way of doing things while also talking about some well-beloved companies that aren’t customer centric — although a lot of those have come around to the idea since the book was written. The Customer Centricity Playbook is more about how a company can incorporate these central ideas. 
         
         
         The third book, The Customer-Base Audit, will be published in summer 2022, and it is almost like a prequel to the first one. The message is, let’s forget about the models for a moment and look at the data we have and what it tells us about our customers and start to understand how and why we should build our business around them. And the fourth one is about making these things part of the company culture — it’s one thing to have these great models, and a company may have buy-in from the CEO, but to really succeed in this area they need the right culture too.
         
         
         That cultural aspect is critical — asking how we can build our culture around our customers rather than products and brand is a very different mindset. We have seen the benefits of creating a customer-centric corporate culture and this book lays out new frameworks that will help companies succeed with customer centricity.
        
        
         
        PETER FADER is a Professor of Marketing at The Wharton School of the University of Pennsylvania. Peter has spent much of his career working on models for customer lifetime valuation (CLV) and customer-based corporate valuation (CBCV), and is a recognised author on the subject. With his colleague Dan McCarthy he has also founded two companies: Zodiac, which is now part of Nike, and Theta.
        
        
        
        Customer Centricity and The Customer Centricity Playbook are available on Amazon. The Customer-Base Audit will be available for pre-order soon.
        
	
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 5</title>
				
		<link>https://crhandbook.cargo.site/Chapter-5</link>

		<pubDate>Mon, 11 Apr 2022 11:04:56 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-5</guid>

		<description>
	
    	
        	INDEX
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇦ PREV
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇨ NEXT
        
	
	
		
    Interview: Erik Zetterberg
    
    
    	
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/73ec16705465785c5400e26e4be4cbb99a4a50f4a6e7e810fbab26f8ab371bcd/5.jpg" data-mid="142201755" border="0"  src="https://freight.cargo.site/w/1000/i/73ec16705465785c5400e26e4be4cbb99a4a50f4a6e7e810fbab26f8ab371bcd/5.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/872eb445dee7a56907c8aa6117ec114a9a6feb56518c7382ce07be64bf1d745e/5.png" data-mid="142192319" border="0"  src="https://freight.cargo.site/w/1000/i/872eb445dee7a56907c8aa6117ec114a9a6feb56518c7382ce07be64bf1d745e/5.png" /&#62;
        
        
    		Erik Zetterberg
            Creative Director, 
            Singular Society
        
	
            
    
		
        	Columbia Road: Singular Society has a unique business model. How did you come up with the concept and why were you sure it would work?
            
            
            Erik: My co-founder Daniel and I both come from fashion retail backgrounds and what we learned about accessibility, democratising fashion, product quality and customer experience helped shape Singular Society in terms of doing things in a disruptive way that makes a difference and means something to people. 
            
            
            Of course, you can never be sure of anything in advance but we felt pretty confident that customers like good value and that a business does well with stable, predictable income and lower risk. The fact that Singular Society could contribute to a more responsible way of manufacturing products by focusing on high quality and minimising overproduction meant that it also addressed sustainability, which is a big challenge in the industry. 
            
            
            To us, that meant we were addressing some of the main challenges we saw in fashion and retail: customer value, sustainability, business risk and direct access to customers. This gave us the confidence to move forward, as we felt that we were making a difference and contributing something new to the existing retail offering by creating positive solutions to those four challenges. 
            
            
            CR: You’re focused on loyalty and retention rather than selling products in the traditional way — for example, you don’t have to buy external customer data as you know all of your customers and their histories. Does that change how you look at data?
            
            
            Erik: Yes, absolutely, it means that we form a much closer relationship with our customers, where the focus is on the long term. Our only agenda is to serve them to the best of our ability. With that in mind, we look at sales and data differently. Our job is to show people that thanks to our model, responsibly made, high-quality products can be offered at fair prices.
            
            
            If we look at what traditional high-end retail brands are offering when it comes to, for example, hand soap, the price tag is sometimes more than five times higher than the Singular price. We sell ours for 7 euros, so the monthly fee we charge members can be recouped with just a single product.
            
            
            CR: How has this approach changed your point of view as a designer?
            
            
            Erik: A great deal, I’d say. We look at products from a service-level perspective, with the purpose of creating satisfied members. When looking at retention, the recency, frequency, monetary (RFM) factors are really important. If we introduce a new product we look at what proportion of our members we’re servicing with that product and how often they’re buying it. We want to make sure that what we offer is relevant to our customers, and we want them to keep using our service and feel that they’re getting value from it.
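            
            
            As a loose sketch of the RFM idea Erik mentions (the member IDs, dates and order values below are hypothetical, not Singular Society’s data), the three factors can be derived from raw order history:

```python
from datetime import date

# Hypothetical order history: (member_id, order_date, order_value_eur)
orders = [
    ("m1", date(2022, 1, 5), 42.0),
    ("m1", date(2022, 3, 2), 19.0),
    ("m2", date(2021, 11, 20), 7.0),
]

def rfm(orders, today):
    """Per-member recency (days since last order), frequency (order count)
    and monetary (total spend) — the three RFM dimensions."""
    latest = {}
    for member, when, value in orders:
        last, freq, spend = latest.get(member, (when, 0, 0.0))
        latest[member] = (max(last, when), freq + 1, spend + value)
    return {member: ((today - last).days, freq, spend)
            for member, (last, freq, spend) in latest.items()}

print(rfm(orders, date(2022, 4, 1)))
# {'m1': (30, 2, 61.0), 'm2': (132, 1, 7.0)}
```

            A retention team would typically bucket these raw numbers into scores (for example quintiles) before deciding which members a product is serving well and how often.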
            
            
            Our goal is to offer our members the things they need and want in their everyday life, so our process starts with understanding what they want and then trying to make that happen. Ultimately, it’s our members who decide what we should develop and offer, not us. So hypothetically, if all they want is hand soap, well, we’re a hand soap company! 
            
            
            CR: So effectively, your focus has shifted from sales to engagement?
            
            
            Erik: Yes, that’s a good way to put it — the long-term relationship and the engagement that comes with that is our focus, so we make sure to respect and maintain that to the best of our ability. We started out with focus groups where we explained how our service works and asked what people would like to see in our offering, and we haven’t had any insights from the customer data that contradict what we learned from them. We want to build a relationship over time by looking at things like how much our subscribers buy and how often they return, rather than focusing on short-term profits. 
            
            
            CR: You could argue that there are two kinds of people — those for whom the logo on a product doesn’t matter, such as Singular Society’s subscribers, and those who identify strongly with a particular brand and only want to buy from that brand. Why do your customers identify so strongly with your ethos? 
            
            
            Erik: We don’t see ourselves as a brand that wants to replace traditional brands, more as a good alternative: keep the brands you love and use us for the other things you need and want, where the logo might be less important. We want to give our members the chance to buy high-quality products without the logo. They trust that when we launch a product, it’s going to be responsibly made with a quality level that stands the test of time. That’s encouraging to us, and it comes down to trust — I don’t think there’s any need to pay more for the logo. We have seen that our members will wait for us to launch seasonal products rather than buying them from elsewhere. That’s something we looked for in the data, as it shows that we’ve been able to build the trust we were looking for.
            
	
    
	
     	
        CR: When it comes to probing to see what level of trust you have with your customers, do you use direct or indirect means — or is it a gut feeling thing?
        
        
        Erik: There are some ways to measure that, I think — not new and innovative ways, but established ones like email open rates. We don’t spam; we only send out emails when we have something to say that we think is relevant to our members, at a weekly or biweekly pace. If people stop opening them, we’ll look into that, as it means we’re doing something wrong. Fortunately, so far we’re very happy with our open rates. 
        
        
        Up until now we’ve been quite old school in our communications, with a lot of text-based product information online — but we are learning that some members are less committed to consuming long texts and emails, so we’re currently working on other ways to communicate and tell stories in order to remain relevant. We’re not alone in facing that challenge. We want our members to feel safe and decide how they share their data, and we want to use it exclusively to improve our service to them. A by-product of this model is a mechanism for building a healthy and trusting relationship.
        
        
        CR: People are often very engaged and passionate about services where they pay a fixed membership fee and then use the service as much as they want — do you see that too?
        
        
        Erik: When your intentions are good, and you’re obsessively passionate about what you do and encourage that almost nerdy attitude in your community, people seem to get really into it. Then you can get insightful and helpful feedback where members request a product that they feel is missing from the assortment or give great advice on how to improve something. From a retention perspective, we love getting that kind of input as it gives us ideas for products and ways to improve our service.
        
        
        CR: We’ve been talking a lot about customer lifetime value lately. Do you follow that at Singular Society?
        
        
        Erik: Yes, we follow that closely as it’s tied to members’ satisfaction with our service, so it’s foundational to our success. We’ve just recently started to get those numbers back — how many customers stay on and how many leave, including how, when and why they’re leaving; we’re trying to learn as much as we possibly can from that. We’ve only been around for a little over a year at this point, and we know that most of our customers have stayed with us for a second year, so we’re off to a good start.
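        
        
        Erik mentions tracking how many customers stay on and how many leave. As a minimal illustration (the member sets here are invented), year-over-year retention can be computed by excluding new joiners and intersecting the two active-member sets:

```python
def retention_rate(active_start, active_end, new_members):
    """Share of members active at the start of the period who are still
    active at the end; members who joined during the period are excluded."""
    retained = active_start & (active_end - new_members)
    return len(retained) / len(active_start)

year1 = {"a", "b", "c", "d"}
year2 = {"a", "b", "c", "e"}   # "d" churned, "e" joined during year 2
print(retention_rate(year1, year2, new_members={"e"}))  # 0.75
```

        The churn rate for the same period is then simply one minus the retention rate.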
        
        
        CR: Do you face any unique challenges when it comes to collecting enough data to optimise things like lifetime behaviour, churn rates, engagement rates and email open rates and putting it all together to make predictions?
        
        
        Erik: It’s a very different game from traditional retail — the products are similar but the mechanics are very different, and that leads us to different places. Measurements and predictions are vital to us of course, as they’re key to telling us how well we’re serving our members and ultimately performing as a business. But as our model is so fundamentally different from the traditional retail model, learning what to focus on in order to set the correct benchmarks and KPIs is also an ongoing process.  
        
        
        CR: Is there anything you’d like to add?
        
        
        Erik: It’s worth mentioning that on a more personal level, we all feel that it’s incredibly fun and rewarding to work with Singular Society. What we do is very straightforward — we’re simply trying to offer a good service and make great products that last and that our members value and enjoy, while building meaningful, long-lasting relationships over time.
        
        
         
        ERIK ZETTERBERG is the Creative Director and founding member of Singular Society, a pioneering membership-based retail brand. His wide-ranging experience across the fashion industry, from H&#38;amp;M to NET-A-PORTER to Amazon Fashion, has given him a unique overview of the industry and the challenges it faces. 
        
        
	
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 6</title>
				
		<link>https://crhandbook.cargo.site/Chapter-6</link>

		<pubDate>Mon, 11 Apr 2022 11:16:10 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-6</guid>

		<description>
	
    	
        	INDEX
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇦ PREV
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇨ NEXT
        
	
	
		
    Interview: Minna Vakkilainen
    
    
    	
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/e5bf5c7c9cdac93a7baeaa610cc1a222f9fd93d8101c4b340f31befc7740bc94/6.jpg" data-mid="142201746" border="0"  src="https://freight.cargo.site/w/1000/i/e5bf5c7c9cdac93a7baeaa610cc1a222f9fd93d8101c4b340f31befc7740bc94/6.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/5857b8539c9940881f6139a831d5eb2cdd758800b114513f41289eef90337720/6.png" data-mid="142192715" border="0"  src="https://freight.cargo.site/w/1000/i/5857b8539c9940881f6139a831d5eb2cdd758800b114513f41289eef90337720/6.png" /&#62;
        
        
    		Minna Vakkilainen
            Vice President, 
            Analytics, Data and Loyalty,  
            Kesko
        
	
    
    
		
        	Columbia Road: What was the starting point of your journey towards data centricity at Kesko?
                        
            
            Minna: In Finland, the large grocery chains are in the unique position of having a huge amount of customer loyalty data available. Out of 5.5 million citizens, Kesko has 3.3 million active loyalty card holders — that means almost 80% of households are regularly using our loyalty programme to buy groceries! Our data strategy is heavily focused on understanding our customers and their needs, and our data analytics isn’t only for Kesko — it’s also for our ecosystem of independent store owners and entrepreneurs: K-retailers, suppliers, partners and customers. We want to use our data to create synergies that benefit all of these groups. 
                        
            
            When starting our roadmap seven years ago, the central premise was to enhance data-driven decision making to improve our food stores’ performance and help customers to maximise the value they got from our loyalty programme. The first phase was to find a better way to give access to and visualise the data in order to support our K-retailers’ everyday decisions. Because in our strategy each store is different, with its own store-specific business idea, we then focused on presenting the data more holistically and dynamically, combining it to enable K-retailers to use insights about market share, potential, profitability and product categories in the way that would most benefit them — a big part of this was talking to individual K-retailers about their needs and challenges. What we’re doing now is moving towards supporting them to not only see the data but also carry out actions based on it, for example with store-specific product recommendations that help them make more customer-oriented and streamlined assortment decisions and then draw insights from that.
                        
            
            CR: Was this roadmap developed piece by piece or did you have a strategy in mind from the beginning?
                        
            
            Minna: When I started in this role our data assets were siloed — for example, the customer data was managed by an internal company and the loyalty sales data included tax and was not combined with other data sources, which made it less useful and harder to extract real value from. At that point, we had some ideas of what we wanted to do in the future but the immediate action was to tidy up the data and combine it into dashboards that would make it more available and usable. Only once this step was completed could we look into bringing in AI and analytics to support our decision making more holistically and justify this development to the K-retailers, Kesko’s own users and customers.
                        
            
            CR: That must have been a costly investment — how did you push for this internally?
                        
            
            Minna: As the loyalty programme was already in place, we started by looking at how much it was costing us in its current form and how much value our proposed improvements could bring. We had access to all this information about our customers and their preferences, so it made sense to use this valuable insight on the store level, and we also enriched the loyalty data with other data sources and aggregated the data insights on a chain level, which helped to show management that the information had value. They clearly agreed as they made this a core strategic asset for the company. 
                        
            
            CR: Based on how you combined things like profitability and store-level data sets, are you able to suggest any best practices for companies that may be struggling as they start out on the same journey?
                        
            
            Minna: It’s still a challenging journey, even for us! My main recommendation is simply to be proud of the data you have and share it, because it will never be fully ready — there will be errors, like mismatched data between the spreadsheets and the dashboard, but making the data available for users to question is the best way to find mistakes. When we visualised data for our customers, such as detailed information about nutritional values or the origin of the products they purchased, our worry was that the data wasn’t perfect. But if you label the tool “beta” then at least it’s out there, which is better than nothing. Customers understand that it’s under development.
                        
	
    
	
     	
        CR: How have you ensured that the planned tools and dashboards will really be taken into use?
                    
        
        Minna: You need to start with the challenge — which for us involves talking to K-retailers and employees to find out what individual challenges they face — then you can look at how data can help on an operational level. The next step is to ensure that you have an iterative process where you’re always improving what you’re doing through continuous testing and learning — there’s no point where we’ll say, “OK, now it’s ready”.
                    
        
        Service design is crucial in this process as it helps us to turn insights into easy-to-use tools, and the K-retailers have to buy into the process and be kept in the loop throughout; we won’t just create a tool and then push it out to them. Our stores have channels where they discuss these things, and some within those networks have more sway than others. Getting their backing helps to get more K-retailers involved right from the beginning, which allows us to get more data on how effective the tools are.
        
        
        CR: How can these tools be used to improve store performance?
                    
        
        Minna: The stores can be measured on various KPIs like sales, sales development, profitability and customer satisfaction. We can educate K-retailers and help them address their performance by embracing the data and following our recommendations based on it. For example, if a K-retailer feels that the store isn’t meeting its potential, we can support them to improve their processes by helping them to use their dashboard more. The use of data and dashboards is a new way of working and leading for K-retailers.
                    
        
        CR: How do you cope with the responsibility of operating with this huge amount of customer data? Do you have any examples of that? 
                    
        
        Minna: We have ethical principles for creating the AI and using customer data, and we’ve stated clearly that we want to offer value for our customers via data. If it’s not good for them then it’s not good for us and that’s something I communicate heavily in internal discussions, for example, if I’m asked why we’re not prioritising certain profitable products. It can be hard to be on the customers’ side and to find the right balance in these discussions, but it’s so important for us. I’m very happy that we have such strong data ethics and principles in use.
                    
        
        CR: You’ve been utilising AI in various areas to gain more tangible insights and impact from data. Do you have any recommendations for others who are starting their journey with machine learning and AI?
                    
        
        Minna: Stop talking about AI solutions and put more effort into understanding the problem. Also, don’t have a use case that’s too big — start small to make it more understandable. We don’t try to improve our product assortments on a company level — we start at the store level and then build trust so others see that it can be useful for them and approach us for help. I also want to highlight that it’s crucial for employees to trust AI-embedded solutions and tools. My recommendation is to always be as transparent as you can — meaning that you take the time to explain how these AI tools and algorithms work — and of course make sure that you can show the impact!
                    
        
        CR: What’s next?
                    
        
        Minna: Our journey has only just begun. We have a good situation with the dashboard where we’re combining the data, and we’re now moving from insight to action. We’re developing new smart and easy-to-use tools, including analytics and AI components, that help on an operational level – we have some good ones already but the potential for the future is huge, as our digital services, ecommerce, logistics and many other processes can utilise the same approach. 
                    
        
One area where we’ve improved a lot over time is transparency and cooperation within the company and with all the stakeholders — we’ve become much more open in our discussions and ways of working — there are still lots of silos but it’s much easier now as we all have the same data and knowledge, and we’ve already done a lot together. I always like to say, “if there’s a will, there’s a way”; we need to be ready to improve our skills and create more value every day — it’s an endless story.
        
        
         
        MINNA VAKKILAINEN joined Kesko in 2014 to focus primarily on data analytics, and over time her role has evolved to also include responsibility for AI, machine learning, data development and the K-Plussa loyalty programme. Minna shares how she and her team have developed their tools and processes during her first seven years of leading the company towards customer data centricity. For more information about Kesko’s data advancements, take a look at their recent Data Balance Sheet report.
        
	
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 7</title>
				
		<link>https://crhandbook.cargo.site/Chapter-7</link>

		<pubDate>Mon, 11 Apr 2022 11:29:30 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-7</guid>

		<description>
	
    	
        	INDEX
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇦ PREV
            &#38;nbsp;  &#38;nbsp;&#124;  &#38;nbsp;  &#38;nbsp; 
            ⇨ NEXT
        
	
	
		
    Interview: Saku Laitinen
    
    
    	
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/cf8a6b0246a5c4b31f21cd62ed06a3d127c78d92661b18fbb51f489514727d4e/7.jpg" data-mid="142201639" border="0"  src="https://freight.cargo.site/w/1000/i/cf8a6b0246a5c4b31f21cd62ed06a3d127c78d92661b18fbb51f489514727d4e/7.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/55931a36b91da1a3fc0879627a309d6bcb2e18558f08c54371ec66f111eb77a3/7.png" data-mid="142192822" border="0"  src="https://freight.cargo.site/w/1000/i/55931a36b91da1a3fc0879627a309d6bcb2e18558f08c54371ec66f111eb77a3/7.png" /&#62;
        
        
    		Saku Laitinen
            Director of Home and Content Visibility,
            Zalando
        
    
    
    
		
        	Columbia Road: What kind of engagement are you trying to generate at Zalando?
            
            
        	Saku: We want to inspire customers with meaningful fashion moments in our digital experience. It starts with traditional click engagement as a sign of interest — a transition to another step, like clicking through to a different catalogue page or product details. Beyond the clicks we try to understand what happens afterwards, including things like leaving feedback, following a brand, adopting a new trend or watching a video — so it’s quite broad from a customer behaviour perspective. Over time, we want to understand what kind of engagement will retain customers.
            
            
            There’s a lot of content on Zalando’s platform. We’re building, maintaining and optimising a system which defines what content, for example, is shown on an individual customer’s home page when they land at Zalando. The content can be ads and articles from different teams at Zalando, from brands or some other third parties. What’s critical for us is to optimise the system so that the content is being shown in such a way that each piece of content provides as much engagement and revenue impact as possible — it’s all about personalisation and performance metrics.
            
            
            CR: What are your KPIs in terms of engagement? 
            
            
            Saku: We have been learning a lot through this journey, and we’ve identified the key dimensions that contribute to our success in terms of content. The first two are how tailored customers perceive the experience to be and how big a share of the journey is tailored. The other two are relevance, which is to do with how well we’re able to pick the right content, and variety, which is how much content we have to choose from. Based on those, we had to figure out how to measure them and which aspects we can directly influence and which we can’t.
            
            
            We identified a hierarchy of KPIs. We drive the group-level north star metric — customer lifetime value — through engagement and retention. So the direct top-level KPI for us is the number of engaged customers over a certain period of time. Then we have different outputs that matter for engagement, such as the share of paid advertising content, the share of content that matches our customers’ fashion preferences and the total predicted page value. These are based on the content’s expected contribution to engagement, advertising revenue and transactions. To match the outputs we have input KPIs that describe the breadth of the available content offering and how much we understand about the customer. What matters here is that we have a solid understanding of what we and Zalando as a whole can control versus what is external variance.
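            
            
            The top-level KPI Saku describes — the number of engaged customers over a period — could be sketched as follows; the event types, customer IDs and 30-day window are illustrative assumptions, not Zalando’s actual definitions:

```python
from datetime import date, timedelta

# Hypothetical engagement event log: (customer_id, event_type, event_date)
events = [
    ("c1", "click", date(2022, 3, 28)),
    ("c1", "follow_brand", date(2022, 3, 29)),
    ("c2", "video_view", date(2022, 2, 1)),   # outside the window
    ("c3", "click", date(2022, 3, 31)),
]

# Assumed set of event types that count as engagement
ENGAGEMENT_EVENTS = {"click", "follow_brand", "video_view", "feedback"}

def engaged_customers(events, window_end, window_days=30):
    """Distinct customers with at least one engagement event in the window."""
    window_start = window_end - timedelta(days=window_days)
    return {cust for cust, kind, when in events
            if kind in ENGAGEMENT_EVENTS and window_start < when <= window_end}

print(len(engaged_customers(events, date(2022, 3, 31))))  # 2 (c1 and c3)
```

            In practice a customer usually counts as engaged only above some threshold of activity; the single-event rule here is the simplest possible cut.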
            
            
            CR: The Zalando platform is really big, with multiple propositions, categories and country sites, so managing the complexity of the data sounds challenging. What processes do you have in place to address this?
            
            
            Saku: There’s this phrase that “if you build it, then you run it”. For the systems we now have, that should perhaps be “if you build it, then you understand it”. We need to understand the input and output KPIs that are part of each request and what we can build and aggregate on. Privacy is a really big part of managing this. The experience of trust and respect for privacy is one of our six customer promises — we don’t just want to be compliant but to be a leader in this area. That has been a heavy investment area in the past few years: making sure that records of processing activities are stored and only accessible to the people who really need them. It takes a lot of work to keep our data management compliant with regulations.
            
            
            CR: In short, how do you make sure that individual pieces of data are accessible and being utilised in a responsible way from different parts of the system and organisation?
            
            
            Saku: We have efficient ways to see what data is available. We have thousands of data streams relating to tracking and different services, and you can search what’s available and find out whether the data contains personal identifiers. All the data streams also have a schema description. The streams and data are classified into different confidentiality domains and depending on the domain the access is restricted. If we share anything — even between different teams — it has to be properly aggregated and anonymised; the only way to have access to the confidential data sets is through a strict process.
            
            
            CR: Getting to that stage is something that many companies can only dream about! Is it a complex challenge to make sure that the data is utilised in a compliant way and that its quality is trustworthy, and also that people understand what data they can use and how?
            
            
            Saku: We’ve been working really hard to ensure that we only track the essentials and that there’s no bloat where we’re tracking things that we don’t need. Instead of putting every dimension into the tracking mechanisms, we keep those dimensions in the metadata that is part of the content and correlate them afterwards. This way we can add more dimensions to the tracking data much faster than would be possible with traditional methods where everything is put in the tracking information. That’s pretty handy, as there are globally unique correlation keys that we can use across customers and tracking events. This also helps to keep us compliant, as we can drop all the tracking data that we can’t use.
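            
            
            The correlation-key approach Saku outlines — slim tracking events joined to content metadata at analysis time — might look something like this in miniature (the keys, fields and formats are invented for illustration):

```python
# Minimal tracking events carry only a correlation key — no content details.
tracking = [
    {"customer": "c1", "corr_key": "k-001", "action": "click"},
    {"customer": "c2", "corr_key": "k-002", "action": "impression"},
]

# Content metadata lives elsewhere, keyed by the same correlation key, so new
# dimensions can be added later without touching the tracking schema at all.
content_meta = {
    "k-001": {"format": "brand_story", "partner": "external"},
    "k-002": {"format": "campaign", "partner": "internal"},
}

def enrich(tracking, content_meta):
    """Join slim tracking events with content dimensions at analysis time."""
    return [{**event, **content_meta.get(event["corr_key"], {})}
            for event in tracking]

enriched = enrich(tracking, content_meta)
print(enriched[0]["format"])  # brand_story
```

            The compliance benefit follows from the same split: dropping a tracking event removes the personal data, while the content metadata on its own identifies no one.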
            
            
            CR: What experts do you have in the team?
            
            
            Saku: We have around 10 data scientists, around 20 engineers, five product managers and also focused analytics and design teams that work with our team, who are part of the more than 2,500 employees working in tech at Zalando. In terms of profiles, some of the software engineers are more focused on the frontend topics, while some are more focused on data engineering. We’ve split these topics around the input KPIs that we have. There are people who are focused on understanding the economics — the inputs and outcomes. Then we have teams focused on tailoring the experience and connecting with the customer. We’re also responsible for the understanding of the content.
            
            
            We are additionally building a content platform that provides a really quick way to experiment with new content without worrying about operational issues or content delivery. This provides a way to track content so people don’t have to worry about tracking, but they get the correlation keys and can then use their own metadata to attach to it later. We’ve ramped it up from one to 15 content formats and thousands of new entries every week in six months, so building it was a good exercise.
            
            
            CR: Has the challenge of getting such a sophisticated content platform up and running been more about tackling technical challenges or about collaboration between teams?
            
            
            Saku: We believe that the more content we have from different types of contributors, the better we can make the experience. We considered our role and came to the conclusion that it’s more about content acquisition than content strategy and content production, but we need to make it as easy as possible for teams to build their content products and for external partners to contribute content, so that it’s not a huge investment for them to understand if the content works or not. The thing we had to tackle there was that before this setup, everything would typically have its own data structure. So whenever there was a new piece of content we’d need to do some kind of transformation to understand what’s included. 
            
            
            Providing a good publishing and delivery pipeline is quite challenging at this scale, especially with the latency, resilience and localisation expectations, so we actually solved this pain for other teams. We’ve been able to get to a point where everyone’s using one unified data structure in terms of how the metadata is labelled and how long it’s available. This has been done to establish solid data contracts with all the content producers, which describe the content types while giving them the flexibility to present their content in the way they want to.
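As an illustration of such a data contract, a minimal validator might look like the following. The field names are invented for the sketch, not taken from the actual platform:

```python
# Hypothetical sketch of a data contract: every content producer must supply
# these metadata fields with the right types; the content body itself stays
# free-form so producers keep flexibility in how they present content.
REQUIRED_FIELDS = {"content_id": str, "content_type": str, "available_until": str}

def contract_violations(entry: dict) -> list:
    """Return a list of data-contract violations for one content entry."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in entry:
            errors.append("missing " + field)
        elif not isinstance(entry[field], expected_type):
            errors.append(field + " has wrong type")
    return errors
```

A producer whose entries pass the check can publish without coordination, which is what makes the unified structure cheap to adopt.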
            
                    
        CR: It must have taken quite an effort to combine all the different points of view and aspirations across the organisation?
        
        
        Saku: The final push was not that difficult, but we spent a year and a half understanding what our organisation’s role should be and what problems we need to solve for others. It took a while for us to feel confident that we had a good enough understanding of why and how to do this in a way that would be worth the investment and benefit the other teams so much that they’d want to get involved.
        
        
        CR: Coming back to the team, you said that there are experts like data scientists, data engineers and designers. Do they all share the same objective of engagement, or are they more like virtual teams coming from different points of view and supporting your team’s key KPIs?
        
        
        Saku: We haven’t set them up in a way that all the data scientists sit in one team. The people who build the endpoints and services needed to access the predictions sit together with the people building the prediction models. Typically in a team there are a couple of engineers focused on how to build the services and make them resilient, then the data scientists work together with them figuring out how to build the system in a way that lets them experiment with different models. They work together to get all the data in one place so we can join and analyse it for decision making and to generate an understanding of performance.
        
        
        CR: Typically the challenge for many companies seems to be that the data engineers are deep in the IT organisation and there’s an analytics team and frontend builders in other teams, and coordinating these teams is really slow. It seems like you have everything closer together so you can get something like AB testing done in the frontend when you need it?
        
        
        Saku: Yes, that’s how the whole company is set up. You need to understand the customer and the problem that you’re going to solve for them, and then the principle is that you invest in enough people and build the right kind of team to solve the problem rather than a setup where you have to compete with some distant resources to do every small thing. This is critical for our success because for the products we’re building that are based on data and machine learning, it’s difficult to figure out when we will get the outcome, so there might be quite a few iterations. If we had to request one analysis after another until we get what we need, it would slow us down quite a bit and we wouldn’t be building the kind of domain understanding that we want people to have.
        
        
        CR: What have you learned that you wish you’d known from the start of this process?
        
        
        Saku: If I think about how much it’s changed from optimising tens of pieces of content each visit to thousands, and simultaneously constantly improving the customer experience so we can measure it based on engagement and enabling significant advertising at the same time, we’ve actually built something quite compelling over the years. We already understood the dimensions that affect our success early on, but if I were to start again now I’d put more emphasis on rigorously iterating and on understanding how to measure those dimensions. Also, I’ve learned to keep it simple in terms of being clear on the ultimate objective the product should be optimised for. 
        
        
        Typically, for any product that’s creating the customer experience on ecommerce platforms, different stakeholders will have different opinions on what it should be optimised for, whether that be immediate conversion, driving profit contribution or solving inventory problems. Where we’re lucky is that our board understands the complexity of these issues and we have a clear understanding of where we’re going.
        
        
        CR: I liked your point about how it’s not only about understanding what works; it’s also, on a meta level, about understanding what affects how you understand what works.
        
        
        Saku: Yes, we could consider our KPIs as really well defined, but understanding the causalities between inputs, outputs and outcomes will help us understand what to focus on. At least for more mature setups that want to drive long-term success in ecommerce, I can understand the desire to work with objectives and key results (OKRs), but if you want to learn more about what makes an impact, first you need a better understanding of the long-term drivers for achieving your goals, and then you can approach it with the OKRs and see which parts you want to improve.
        
        
        CR: On a high level, what were the key obstacles you had to overcome when you started running AB tests?
        
        
        Saku: To tailor the content experience with machine learning algorithms, we ran the first AB tests around four years ago. The initial obstacle was that people were afraid of losing control. It got easier after very positive results, but at each step we had to communicate that we were first running an experiment, and that’s the key part. There are multiple stakeholders who want to display content or get their share of customer attention, and we have to explain the impact of intervening with the system. We’re constantly measuring the ideal system, where no one can intervene at all with the algorithm and it only makes decisions for the best customer experience, then we’re comparing that against the scenario where we enable marketing investments and advertising. We work with the stakeholders to educate them on what works and which parameters they can tune to create maximum impact from customer attention without hurting the overall customer experience.
        
        
        CR: From a system perspective, I can imagine that there are hundreds of data sources and data lakes and machine learning functionalities. How were you able to get concrete results while maintaining coherence?
        
        
        Saku: We’ve had our fair share of challenges on that front in the past. You need to create different kinds of monitoring — first, to find out whether we got the data, whether the model training went through, and whether we’re getting any errors in actually scoring or ranking the content. Then you need to constantly improve so you can start detecting deviations in the calibration of the predictions — so, we monitor things that explain not just whether things are working technically, but whether they’re producing the results we expected in a business and logical sense. If a specific app version on a specific platform starts showing fewer tracking events in some dimensions, can we drill down to find the anomalies when we see something changing? I think we’re doing pretty well there, but when you think about the amount of data, systems and events, there are also some moments when things fall into the grey areas between teams or departments. Whenever we discover another one of those we can figure out a solution.
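One of the monitoring checks described above can be sketched as a simple baseline comparison. The dimensions, counts and threshold here are invented for illustration:

```python
# Hypothetical sketch of one monitoring check: flag (platform, app_version)
# slices whose tracking-event volume drops well below a trailing baseline,
# so the anomalous dimension can be drilled into.
def find_anomalies(baseline: dict, observed: dict, drop_threshold: float = 0.5) -> list:
    anomalies = []
    for dimension, expected in baseline.items():
        actual = observed.get(dimension, 0)
        if expected > 0 and actual < expected * drop_threshold:
            anomalies.append(dimension)
    return anomalies

baseline = {("ios", "5.2"): 1000, ("android", "5.2"): 800}
today = {("ios", "5.2"): 950, ("android", "5.2"): 100}
```

Production systems would compute the baseline from a trailing window and tune the threshold per dimension, but the drill-down logic is the same shape.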
        
        
        CR: Let’s talk quickly about customer lifetime value (CLV) — it must be complex in your setting?
        
        
        Saku: Yes. We tried to optimise the content based on post-click revenue within the same system, and you’d think that this is a relatively short part of the customer journey. We gave it quite a few attempts, but in the end predicting immediate engagement with the content turned out to be more impactful than focusing on conversions and post-click revenue. The customer journey is already so noisy over this short span of time that it’s actually rather difficult to optimise it like that.
        
        
        Now, our approach is more to understand the long-term drivers of customer value, so we try to include things like how big a share of their wardrobe has come from Zalando or what kinds of beauty brands they follow in the evaluation. In this way, instead of trying to optimise for immediate revenue or transactions, we try to optimise for events that are even further away but that have solid causal economic research behind them. Then we use that along with our prediction of customer engagement to come up with an approach that lets us optimise the lifetime value.
        
        
        This work is in the early stages, but it’s showing some signs of success already. We’re trying to build models where we can understand long-term CLV, say 90 days, and then we’re working towards learning the parameters that affect the 90-day CLV. First we need the parameters and then we need to run the system for quite a long time — so we’re on that path but it will be a long journey.
        
        
         
        SAKU LAITINEN is tasked with driving engagement and inspiration by matching content to customers along their journey on Zalando, a leading European online platform for fashion and lifestyle. He and his team are also responsible for enabling the monetisation of engagement through advertising within the digital experience.
        
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 8</title>
				
		<link>https://crhandbook.cargo.site/Chapter-8</link>

		<pubDate>Mon, 11 Apr 2022 11:44:44 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-8</guid>

		<description>
	
    	
        
	
	
		
    Interview: Yves Mulkers
    
    
    	
        
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/966b1e5cd2f33ee5123f7c39037b8c083d574d34aacebf91fb0dd79103e8cc74/8.jpg" data-mid="142201626" border="0"  src="https://freight.cargo.site/w/1000/i/966b1e5cd2f33ee5123f7c39037b8c083d574d34aacebf91fb0dd79103e8cc74/8.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/cb4019b3a3f8a3701e5f277dc31e55f92ec29d91b715914035edf42754ff302f/8.png" data-mid="142192890" border="0"  src="https://freight.cargo.site/w/1000/i/cb4019b3a3f8a3701e5f277dc31e55f92ec29d91b715914035edf42754ff302f/8.png" /&#62;
        
        
    		Yves Mulkers
            Data and Analytics Strategist, 
            7wData
            
	
    
    
		
        Columbia Road: You’ve been working in the field of data for years. Would you like to share any concrete recent development areas that you’ve been working on?
        
        
        Yves: On a really concrete level, just recently I’ve been evaluating the data catalogues on the market; after the business analysis work, it takes a lot of time to find the right data and identify it across systems. Data catalogues help you see where the data comes from and how it moves from one hub to another, helping you to find, verify and understand it. Some data catalogues do this with machine learning: they look at the data, help you identify it and then build that into the model. If you have a business understanding and, say, you’re looking for a client, in one system it might be listed as a client ID and in another as a client number. The data catalogue can look at the data to find patterns and see where there are matches, recognising that client IDs and client numbers are the same thing. This is very helpful because otherwise I’d have to write queries to identify the data myself, and it’s quite technical to make sure that columns with different labels in different systems contain the same data.
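A toy version of what such a matching step does can be sketched with value-overlap scoring. The column names and values are invented, and real catalogues use much richer signals than this:

```python
# Hypothetical miniature of catalogue matching: score a pair of columns by
# the overlap of their values (Jaccard similarity), so "client_id" in one
# system and "client_number" in another can be recognised as the same thing.
def column_match_score(col_a: list, col_b: list) -> float:
    values_a, values_b = set(col_a), set(col_b)
    union = values_a | values_b
    return len(values_a & values_b) / len(union) if union else 0.0

crm_clients = [101, 102, 103, 104]       # labelled "client_id" in the CRM
billing_clients = [102, 103, 104, 105]   # labelled "client_number" in billing
score = column_match_score(crm_clients, billing_clients)
```

A high score suggests the two columns describe the same entity and are worth confirming with a business expert.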
        
        
        CR: Do you have any success stories to share where data catalogues have been implemented and utilised? 
        
        
        Yves: There have certainly been some successful implementations of data catalogues in the utility industry. The trouble is humans are very negative — we see the bad in everything. So if you read articles, 80% of data catalogue implementations fail and 80% of machine learning and AI projects fail. But there have definitely been lessons learned, and you need to look at failures from a lessons-learned perspective; in the case of data catalogues, for example, maybe they threw everything from the operational systems into the data catalogue and then manually started to map and identify. So there is a lesson to learn there in starting with a business use case and taking only that data and putting it into your system, analysing it and growing from there. It’s not a big bang approach, but it allows you to go forward step by step and continuously show the value of what you’ve been building.
        
        
        CR: When it comes to AI, there are some really interesting use cases with global players like Amazon, and maybe Zalando in Europe. But if your turnover is only 1–2 billion dollars a year, what are the use cases for machine learning for digital sales? I haven’t seen too many for smaller companies.
        
        
        Yves: For smaller shops it’s the typical market basket analysis and these types of algorithms that help. It’s the returning visitor data and the intent data that shows what someone’s been looking at. That’s popping up more and more in ecommerce — you’re looking to buy a fridge, so you look at some on a website, then they combine that data and enrich it with other data and realise “hey, this person is really looking for a fridge!” Then they know to put adverts everywhere — wherever you go online. Now it’s very likely you’ll buy that fridge. A lot of companies are now focusing on intent data for ecommerce because you can find hot leads, so to speak. In B2B it’s much harder of course, because people don’t tend to surf around and vendors don’t have that connection in B2B. The hidden gem is all the digital markers you leave behind where you can identify that a company is very likely to make a new technology investment. It’s not a perfect science, but it gives you some idea of who to approach in the B2B space so you can reach out if there’s an interest. 
        
        
        CR: And B2B companies aren’t really doing that. It would be possible to track not only individuals, but also how many people from one company are visiting different touchpoints. There’s definitely potential there. 
        
        
        Yves: I know which companies are visiting my website from tracking data and this helps me ensure that the message I have on my website addresses my ideal client. A lot of people aren’t aware you can do that. When I see a company has visited I sometimes reach out on LinkedIn and start a conversation — it’s a warm lead rather than just spamming people. There is a lot of MarTech knowledge that can be used, and if you combine that with other kinds of knowledge from finance and technology, you can optimise on all levels. Adding data from your website to your customer profiles can help you to optimise your sales. Think about your complete business in a holistic way and not just your marketing pipeline – I find it fascinating to build up different knowledge from all parts of a business. 
        
        
        CR: A typical challenge that our clients face is combining product data from very different markets and product lines into one aggregated database. Do you have any case studies that show how they can succeed in this process? 
        
        
        Yves: I’ve experienced two cases like this that immediately spring to mind. The first was a retailer who sold a lot of consumer products. They had a distribution hub in Sweden and various sales organisations all over Europe, and the project we worked on was supply chain optimisation. The challenge was that they were using siloed systems and every physical product had a different commercial name depending on which country’s database we were looking at. In order to optimise the complete supply chain, including distribution and physical shops, we had to understand the different product names to confirm that we were talking about the same product.
        
        
        It seems simple but it’s not just a matter of translation — sometimes you need to package your products in different quantities, for example, and this is important because it can be a legal issue. Back in the day, aggregating or aligning products was mostly technical: you had a product identified in your enterprise resource planning (ERP) system in a certain way. But if you look at a product in two different ERP systems you need to ask whether it’s really the same type of product, and if not, whether you can look at it in the same way when optimising the supply chain. It’s both a technical challenge and a business challenge: understanding what the product is, but also how you handle, store or ship it — there are various ways to decide whether it can be categorised as the same product. 
        
        
        CR: I’ve experienced a similar problem with a client, where product lines have different names in different countries, but the key challenge with ecommerce is they still need to get everything in the same webshop. It’s important to have an overall understanding of the data, which might have different codes and names in different countries. You make a good point that it’s not just about the data IDs, it’s about the whole supply chain including the means of delivery and packaging choices. People often think this is something to figure out at the end of a webshop project, but it can really cause problems if it hasn’t been identified beforehand.
        
        
        Yves: Yes, and that’s if you’re talking about products within your own company. If you take a company like Amazon, it’s another step further. And if you want to have the complete supply chain from producer to distribution and so on, it becomes even more complicated. I’ve attended some conferences where a unique global product identifier was discussed, and I thought, okay, it seems simple, but apparently there are 100 different standards for product identifiers out there. Which one do you choose? It’s a big challenge, and that’s just the ID! And then of course, there are an endless number of other attributes to consider. If you look at the packaging, that’s even more complex. With boxed products for example, how many go in each box? In one country it might be 12, in another 24. It’s a big challenge to standardise the data across countries.
        
	
    
	
     	
        CR: How did you go about solving these issues in the project you mentioned? 
        
        
        Yves: I wasn’t involved in that particular part of the standardisation; instead we were confronted with different things in different systems. It was on the master data management (MDM) track where they were standardising the products and saying that these five IDs, say, would become this one ID in the end. We had a preparation list of around 100 products where we could see what the final, global unique identifier for each product would be. This meant we could design our data model to work with both the old and new IDs, and we found a way for the matching and calculating to work as long as we had both sets. As the old IDs are gradually faded out, the new ones come into place. You need to find a way of working with the two systems because you’ll always have a transition period with one system that’s already been migrated to the new product IDs and others that haven’t yet. We needed to anticipate the change on a technical level, while cooperating closely with the people working on the MDM track. 
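The transition-period mechanism can be sketched as a simple resolution step. The IDs below are invented examples, not the project's real identifiers:

```python
# Hypothetical sketch of the MDM transition: resolve either an old
# country-specific product ID or the new global ID to one canonical ID,
# so migrated and unmigrated systems can still be matched and joined.
OLD_TO_NEW = {"SE-1001": "GLOB-9", "DE-4471": "GLOB-9", "FI-0317": "GLOB-12"}

def canonical_id(product_id: str) -> str:
    # New global IDs pass through unchanged; old IDs are mapped to their
    # replacement while they fade out of the source systems.
    return OLD_TO_NEW.get(product_id, product_id)
```

Once every source system has migrated, the mapping table empties out and the resolution step becomes a no-op that can be retired.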
        
        
        CR: You mentioned a second case example. What was it?
        
        
        Yves: Yes, it was an identification of medicinal products (IDMP) project for a pharmaceutical company. In Europe there is often a shortage of pharmaceutical products on the market, so the idea was to standardise products across all pharmaceutical companies to improve availability. There are basically two sorts of products, vaccines and drugs, which are treated differently in terms of how they’re used. We were focusing on vaccines — the vaccine goes into a serum and then into a plunger, and understanding the packaging was almost a project in itself! We had a data model but the biggest complexity that slowed the project down was that there were 7,000 stakeholders around the table trying to drive the project. That’s pharmaceutical companies, lobby groups, legal stakeholders and government representatives — there are so many people with different interests.
        
        
        We had a good team who were weeding out discrepancies in the data model, but there were still so many errors. So, you learn that there’s always more to do and discuss. The approach was to work for six months, design the model and then throw it out for everyone to comment on. This allows initiatives to be crowd created. One thing I learned from this project is that if you get something thrown to you to comment on, don’t be judgemental that it’s wrong, just start with it and let it evolve but keep it moving forward. There are so many discussions to be had before projects finally move forward, especially standardisation projects. 
        
        
        CR: That’s an interesting story, and the learning points from it will apply to many other cases. The key is facilitating cooperation and discussion between people who have different opinions and who are coming at things from different angles. Do you have any best practices or tips for how to make such a collaboration go smoothly? 
        
        
        Yves: One thing I learned on the project was that if you have a good vision you can be in the driver’s seat. In the IDMP project there was too much discussion without anything moving forward, so I decided to take on the responsibility and accountability for getting things moving. Another thing was that the data needed to be standardised. 
        
        
        The challenge with pharmaceutical products is that all the complexity is in unstructured data in documents, for example all the possible side effects if you take a medicine. I suggested doing text mining and running a semantic model on it, but I found that the ontology doesn’t yet exist to define everything in the documents. So in the end we hired a bunch of students for fortnightly periods and they just read through all the documents and manually highlighted all the entities and concepts, and we extracted them in a physical, manual way. This gave us our dictionary and context for the text mining – we had a solid result, which was better than building and training a model that would have taken more time.
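Once a manually built dictionary like that exists, tagging further documents reduces to a lookup. The dictionary entries below are invented examples:

```python
# Hypothetical sketch: after the students' manual pass produced a dictionary
# of entities, tagging new documents becomes a lookup over that dictionary.
ENTITY_DICTIONARY = {
    "nausea": "side_effect",
    "headache": "side_effect",
    "plunger": "packaging",
}

def tag_entities(text: str) -> list:
    """Return (term, entity_type) pairs found in a document, in order."""
    return [
        (word, ENTITY_DICTIONARY[word])
        for word in text.lower().split()
        if word in ENTITY_DICTIONARY
    ]
```

Real entity extraction would also handle multi-word terms and inflections, but the dictionary-first approach lets the manual work pay off immediately.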
        
        
        CR: When it comes to huge amounts of data, I think using students is a really underrated method! Too often companies think they need to automate things even for a one-time analysis, but thousands of lines of product data is not that complex to go through with someone who’s motivated.
        
        
        Yves: It’s a trade-off and it’s pragmatic — we want to get results and if you build a model it can be more expensive than students or cheap labour. Websites like Amazon Mechanical Turk let you say what you need, and maybe 1,000 people will pick up your task. This allows you to do something manually at first, and if you see that it’s working you can automate the process as you go, but you’ve already reduced the time to market. Twenty years ago I thought that everything could be automated, but actually there are times when human beings are better and faster at completing a task. 
        
        
        CR: I see three key learning points from this discussion, the first being that every data project is really a business project — business should lead it and you should get an end-to-end understanding of the data you’re looking at. The second is the importance of getting people together to facilitate discussion, achieve mutual understanding and then iterate forward. And the third key learning point is to think before you automate — find out which processes you can do manually, experiment with those and only automate after that.
        
        
        Yves: I agree, and related to that final point, the importance of finding pain points and focusing on fixing those with automation. Once you’ve been through the process you’ll understand it, and then it’s easier to automate in the correct way — it’s very important to find the right steps and to understand why you’re taking these steps. You can scale automation, make the process faster and remove human error, but it’s important to find the right situations in which to apply automation.
        
        
         
        YVES MULKERS is a data strategist at 7wData who has been working for over 20 years on IT-related matters, initially on the technical aspects of data and integration and more recently on data strategy and developing data-related business opportunities. Yves is widely recognised as a top-10 influencer in big data, AI, the cloud and digital transformation and he uses his knowledge and experience of operational environments and emerging technologies and capabilities to help companies achieve their data-related goals.
        
	
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 9</title>
				
		<link>https://crhandbook.cargo.site/Chapter-9</link>

		<pubDate>Mon, 11 Apr 2022 11:50:56 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-9</guid>

		<description>
	
    	
        
	
	
		
    Interview: Supermetrics
    
    
    	
        
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/5f2ce619a1298c3604741af85c48f92a6bcbf2b640854a03829e2175d091cdab/9.jpg" data-mid="142201295" border="0"  src="https://freight.cargo.site/w/1000/i/5f2ce619a1298c3604741af85c48f92a6bcbf2b640854a03829e2175d091cdab/9.jpg" /&#62;




	
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/0e3cad5735037a9c809708df77dd0fd20e19fdeaa4903dcd357dc7729439b8ed/9-1.png" data-mid="142192985" border="0"  src="https://freight.cargo.site/w/1000/i/0e3cad5735037a9c809708df77dd0fd20e19fdeaa4903dcd357dc7729439b8ed/9-1.png" /&#62;
        
        
    		Edward Ford
			Demand Gen Director, 
            Supermetrics
        
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/d53298fa5ef2c173d432530b31bf4515efd71d6827646e34c964644e4bec0b11/9-2.png" data-mid="142193084" border="0"  src="https://freight.cargo.site/w/1000/i/d53298fa5ef2c173d432530b31bf4515efd71d6827646e34c964644e4bec0b11/9-2.png" /&#62;
        
        
        	Vishnu Sahoo 
			Marketing Analytics Director, 
            Supermetrics
        
	
    
    
		
        Columbia Road: What are the challenges that your clients have recently been tackling around more advanced data analytics?
        
        
        Supermetrics: In our experience, there are three main challenges facing the marketing space at present. The first of these is a skills shortage, in terms of people who have a combination of data science skills and business knowledge and can use this to generate insights. Then we have the challenge of data stability, which relates to things like transformation, response time, data structure and gathering data from multiple sources, along with the ever-increasing complexity and volume of data in marketing and ecommerce. The final challenge is marketing measurement, where attribution has become difficult with the growing focus on privacy and an increasing lack of trust.
        
        
        CR: Can you tell us more about these challenges?
        
        
        Supermetrics: Sure!
        
        Skills shortage
        
        So the first challenge is finding people with both the technical skills in terms of data science and the business understanding required to draw insights and make decisions based on the data. There is a real shortage of people who can do both, leading to a mismatch between the data science team’s output and business decision makers’ expectations, as well as a lack of sufficiently accurate analytics that actually fit the business need. There are a few approaches that companies might use to address this. 
        
        
        One method is by carrying out internal training, although this takes time and still doesn’t always cover the gap. Another trend is automated machine learning, as this frees up data scientists to focus on solving business challenges. We’re also starting to see new business intelligence solutions, which are often self-serve and contextual — you can ask a question and get the answer. Some of these new trends will address the skills shortage and can also help to democratise data analytics.
        
        Data stability
        
        That leads us onto our second challenge — with the way the landscape is developing, there’s a huge, and growing, volume of data that quickly becomes an unmanageable mess. Data engineering capabilities are critical for making the data sensible and accessible and keeping it that way. However, instead of investing in these capabilities, many companies focus primarily on getting data scientists in, which leads to them getting frustrated with the slow pace when it comes to unearthing valuable insights that will see their work generating dollar value in the bottom line.
        
        
        Yes, there are lots of applications for data scientists that can impact the top line, but all of those depend on having a stable data infrastructure, so hiring them isn’t necessarily a solution in its own right. First, you need to have data engineering to provide them with a baseline of high-quality, accessible data.
        
        Attribution measurement
        
        Because of privacy challenges*, third-party data hasn’t been reliable for some time now. That has led, once again, to more focus being placed on measuring impact through traditional marketing mix modelling (MMM), which helps not only from the privacy perspective but also with brand-related activities. The increased focus on privacy is changing how marketing teams work and think about data — in the long term this will force them to rethink how they can add value for the customer while working in a more ethical way.
        
        
        The other thing is that attribution is not going to work in this new world of privacy, which paves the way for using lift testing to decide where to allocate the marketing budget. A lift test is a bit like an A/B test: it measures the incremental impact of marketing dollars when they are allocated to certain activities. Lift tests are not that straightforward to run, however — they can take a long time and require both a partnership with an ad platform and people with the right skills. This means that smaller companies will miss out on the benefits of lift testing, which opens up an opportunity for someone to offer a platform that can automate it. There are already some tools, like Power BI and Tableau, that can bridge the skills gap and help to democratise data and make it available.
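        To make the arithmetic behind a lift test concrete, here is a minimal sketch, assuming a simple split between a group exposed to a campaign and a held-out control group. The function name and all figures are invented for illustration; real lift tests also need significance testing and careful group assignment.

```python
# Hypothetical lift-test arithmetic: compare conversion rates between a
# test group (exposed to the campaign) and a control group (held out).
# All numbers here are made up for illustration.

def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Return (absolute lift, relative lift) of the test group over control."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute = test_rate - control_rate
    relative = absolute / control_rate if control_rate else float("inf")
    return absolute, relative

# Example: 5% conversion with ads vs 4% without
absolute, relative = incremental_lift(500, 10_000, 400, 10_000)
print(f"absolute lift: {absolute:.3f}, relative lift: {relative:.1%}")
```

The incremental (not total) conversions are what justify the marketing spend, which is why the control group is essential.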
        
        
        CR: You mentioned marketing mix modelling — can you explain more about what that is?
        
        
        Supermetrics: This isn’t really a new topic in the sense that MMM has been around since TV advertising began, long before digital ads came in. It’s a statistical approach to understanding the impact of your marketing dollars across different media. As third-party cookies and digital tracking made attribution easier, the importance of MMM diminished, but it’s making a comeback because of the limitations of attribution. With MMM, you can track each touchpoint where your marketing dollars are allocated, looking at all the different inputs including spend and other factors such as the weather. Tracking those data trends then allows you to understand the long-term impact of spend on particular activities over a given period of time. MMM has its drawbacks (for example, it doesn’t help on a day-to-day level), but it’s coming back into prominence because of the privacy-related restrictions around tracking.
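        At its simplest, MMM can be sketched as a regression of sales on spend per channel, with an intercept capturing baseline sales. The sketch below uses synthetic, noise-free numbers so the regression recovers the planted coefficients exactly; a real MMM would also model carry-over (adstock), saturation and external factors such as seasonality or weather.

```python
# A minimal marketing-mix-modelling sketch: weekly sales explained by
# channel spend via ordinary least squares. All figures are synthetic.
import numpy as np

# Hypothetical weekly data: TV spend, search spend and sales (same currency unit)
tv     = np.array([10, 20, 15, 30, 25, 40, 35, 50], dtype=float)
search = np.array([ 5,  5, 10, 10, 15, 15, 20, 20], dtype=float)
sales  = 100 + 2.0 * tv + 3.0 * search  # planted ground truth, no noise

# Design matrix: intercept (baseline sales) plus one column per channel
X = np.column_stack([np.ones_like(tv), tv, search])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, tv_coef, search_coef = coef
print(f"baseline={baseline:.1f}, TV coef={tv_coef:.2f}, search coef={search_coef:.2f}")
```

The fitted coefficients are read as the marginal sales contribution of each channel's spend, which is exactly the "impact of your marketing dollars across different media" described above.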
        
	
    
	
     	
        CR: What are the current opportunities related to data?
        
        
        Supermetrics: With so many metrics and so much data, one thing we’re currently working on is establishing which metrics are important. Understanding which data should be used to make decisions is a big thing, and to accomplish that you need the right skillset — as the data grows, having the business knowledge to make use of it becomes increasingly important. In the end it’s all about supporting growth and sales. The real opportunity is to find ways to collect data, make sense of it and structure it in a way that’s easy to analyse in order to gather insights that can better inform marketing and the other teams who rely on it. If we can close the ecommerce reporting loop — breaking down the data from the perspectives of acquisition, how people behave in your online store and how we turn that into sales — then we can model the customer journey, looking at our KPIs at different stages and finding opportunities to grow. On a more advanced level, MMM gives you a better idea of how your marketing is really performing, as opposed to things like multi-touch attribution, where you never get a full picture of what’s happening.
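        Closing the reporting loop can be sketched as step-by-step conversion rates through an acquisition, on-site behaviour and sales funnel. The stage names and counts below are invented for illustration; the point is that each stage-to-stage rate is a KPI where a drop signals an opportunity to grow.

```python
# A hypothetical funnel report: conversion rate at each stage of the
# customer journey, from acquisition through store behaviour to sales.
# Stage names and counts are made up for illustration.

funnel = [
    ("ad click",    20_000),  # acquisition
    ("store visit", 12_000),  # on-site behaviour
    ("add to cart",  3_000),
    ("purchase",       600),  # sales
]

# Pair each stage with the one before it and report the step conversion
for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {count / prev:.1%} of previous stage")
print(f"overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```

A report like this makes it obvious which stage to experiment on next: here the add-to-cart step loses the most prospects.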
        
        
        CR: How should companies go about building a data science team?
        
        
        Supermetrics: This really depends on the company’s maturity level in terms of analytics — the crawl, walk, run philosophy comes into play here. For instance, a lot of leaders hear about advanced topics when they go to conferences or read articles, but the company may not be mature enough to approach those things yet. Another consideration is organisational structure — do you go with a centralised team that caters for all your analytics needs, a completely decentralised team with analysts, product and marketing teams working separately, or a hybrid of these two approaches? You need to know which structure will work best right from the early stages of this journey.
        
        
        In general, in the initial phase the focus should be more on engineering, making sure the company has the right data and that it is stable and available. Then you can build on top of that with business analytics and business intelligence. Once you’ve started generating value from your analytics and business intelligence teams, you can add in predictive algorithms, personalised advertisements or even location intelligence, and then you can get to data science after that. The workload for this is huge, so before investing in building a data science team there should be a clear roadmap in place for identifying what you need and then building a team to answer these needs, rather than building a team and assuming something will come out of that.
        
        
        CR: Can you share any concrete examples of a company using data to make decisions?
        
        
        Supermetrics: One example was at my previous company, which sends out lots of customer emails. Analysing data across different touchpoints allowed us to better understand our customers’ behaviour; we built that understanding into a more individualised approach and changed how many emails we sent each customer per week. By providing the right messages at the right times, we could reduce the overall number of emails we were sending without losing impact.
        
        
        CR: What are the main challenges around managing multiple data sources for marketing and ecommerce, and how do you help your customers with these at Supermetrics?
        
        
        Supermetrics: The biggest challenges are the cost of maintenance and the speed of execution — the data isn’t automatically available quickly enough from different ad platforms that the marketing team is trying to explore, and that’s where Supermetrics comes in. We’re market leaders in that sense. In terms of maintenance, every time platforms like Google, Facebook or LinkedIn change something, our customers don’t have to worry because we’ll take care of it — from that perspective, Supermetrics is an all-encompassing platform that makes sure data continues to flow.
        
        
        One of the ways we segment our audience is between data managers who build and maintain the infrastructure and data users — including marketing teams — who come in and work with the data. Supermetrics can support both of these segments — this then brings us to the build versus buy question. If you’re building your data infrastructure then the onus is on you to maintain it and fix things, which can be full-on at busy times such as Black Friday. Supermetrics can take care of that aspect for our data manager customers, which gives them peace of mind. And as for the data users, they know that they’ll have that data available, which is useful for them.
        
        
        * Stricter privacy legislation, such as the EU’s GDPR, cookie consent requirements, California’s CCPA and browsers blocking third-party cookies. Third-party cookies have enabled user tracking, for instance for Facebook, other marketing platforms and web analytics software.
        
        
        
         
        SUPERMETRICS streamlines the delivery of data from sales and marketing platforms into the tools and destinations that marketers — and the analysts and engineers who work with them — use to make better decisions, such as spreadsheets, data visualisation tools and data warehouses. Demand Gen Director Edward Ford, who joined the company’s marketing department in its early days, and Vishnu Sahoo, who in his role of Marketing Analytics Director mainly focuses on marketing analytics and group marketing, talk about the challenges they face, as well as opportunities and solutions in the data analytics space.
        
    
    
</description>
		
	</item>
		
		
	<item>
		<title>Chapter 10</title>
				
		<link>https://crhandbook.cargo.site/Chapter-10</link>

		<pubDate>Mon, 11 Apr 2022 12:04:42 +0000</pubDate>

		<dc:creator>The Data Handbook</dc:creator>

		<guid isPermaLink="true">https://crhandbook.cargo.site/Chapter-10</guid>

		<description>
	
    	
        
	
	
		
    Why does a growth team need a data engineer?
    
    
    	
        
    



&#60;img width="2000" height="600" width_o="2000" height_o="600" data-src="https://freight.cargo.site/t/original/i/ab18f1e1fe6de87fa66cd009483df5470264c84eeabc4dd0e908674c52293ac9/10.jpg" data-mid="142201256" border="0"  src="https://freight.cargo.site/w/1000/i/ab18f1e1fe6de87fa66cd009483df5470264c84eeabc4dd0e908674c52293ac9/10.jpg" /&#62;



	
    
        
        &#60;img width="1000" height="1000" width_o="1000" height_o="1000" data-src="https://freight.cargo.site/t/original/i/dd19c0d12075215e1f611a6d4d526a3a3a6cd1f9b87c3fcbb9d03da560ac0390/10.png" data-mid="142193391" border="0"  src="https://freight.cargo.site/w/1000/i/dd19c0d12075215e1f611a6d4d526a3a3a6cd1f9b87c3fcbb9d03da560ac0390/10.png" /&#62;
        
        
    		Laura Purontaus
            Managing Consultant, 
            Data, 
            Columbia Road
        
    
    
    
		
        Data engineers are enablers of next-level digital sales. Any growth team would benefit from having a data engineer on board – sooner rather than later. 
        
        
        Data engineers are software designers with specialised knowledge in building systems that collect, validate and process data. Their work is the foundation on which business improvement is built. For example, artificial intelligence (AI), machine learning or even elementary business intelligence can’t work without high-quality data.
        
        
        In a digital commerce environment, a growth team’s core focus is on growing revenue, customer base, number of daily active users or whatever the key business target might be. Usually, the team consists of digital sales specialists, such as marketing professionals, designers, software engineers, ecommerce specialists and content producers, who conduct experiments at all stages of a customer journey.
        
        
        Having a data engineer in a growth team is still uncommon, but that is about to change. I’ll tell you why.
        
        Data is the fuel of a successful digital sales growth machine
        
        
        Data engineers can enable the growth team to move to the next level in digital sales. With the help of data engineering capabilities, growth teams can experiment with a wider range of ideas. For inspiration, here are some examples — bearing in mind that the nature of the business in question will determine their applicability.
        
        
        
        The average customer journey may be good, but personalising it would make it even better. Personalised marketing messages can be adapted across different channels and phases of the customer journey.
        
        
        The current product recommendations may work, but more spot-on recommendations could boost sales further. The accuracy of recommendations varies, and it’s a case of making a business calculation to decide what the target accuracy should be.
        
        
        Churn prediction models could predict which customers are about to take their business elsewhere, enabling companies to act early. For example, the model could track the actions a customer takes online and alert the sales department, who would then know that they should give this customer a call to see what’s going on — and maybe throw in a personalised offer.
        
        
        Customer segmentation could be useful in helping salespeople offer the best products and prices for various customer groups. This works from two points of view: closing the deal, and also optimising the product margin. The sales cases can become increasingly complex, especially in the B2B setting, and data-based sales tools can help with both of these aspects.
        
        
        Testing how consumers react to personalised offers or incentives that encourage them to, for example, give the company their contact details. When used well, contact details can reduce the cost of acquisition, especially for existing customers. 
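        In the spirit of the churn example above, here is a minimal sketch of how such an alert might work: score each customer on simple inactivity signals and flag the ones worth a call. The function, thresholds, weights and customer names are all invented assumptions; a production model would be trained on historical churn data rather than hand-set rules.

```python
# Hypothetical churn-alert sketch: a crude hand-weighted risk score over
# inactivity signals. Every name, weight and threshold here is invented.

def churn_score(days_since_login, orders_last_90d, support_tickets_30d):
    """Crude risk score in [0, 1]; a real model would be trained on history."""
    score = 0.0
    score += min(days_since_login / 60, 1.0) * 0.5    # long absence
    score += (1.0 if orders_last_90d == 0 else 0.0) * 0.3  # no recent orders
    score += min(support_tickets_30d / 3, 1.0) * 0.2  # friction signals
    return score

# Made-up customers: (days since login, orders in 90 days, tickets in 30 days)
customers = {
    "acme":   (45, 0, 2),
    "globex": ( 3, 4, 0),
}
at_risk = [name for name, feats in customers.items() if churn_score(*feats) >= 0.5]
print("call these customers:", at_risk)
```

The output of a model like this is exactly the alert described above: a short list the sales department can act on before the customer leaves.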
        
        
	
    
	
     	
        On the one hand, having data is essential in all of the examples above, but you must also know how to use it: you need a valid business idea, and you need someone to implement it. 
        
        
        On the other hand, data is the foundation of effective experimentation; without it, you’re just trying things out without knowing if they actually work. But testing the impact of all these endeavours requires a fair amount of analysis — and doing these activities at scale, automatically and continuously, would be impossible without data engineers. They enable the step from proof-of-concept trials to the impactful use of data.
        
        Your growth (team) needs data (engineering) — now
        
        
        Data has been a hot topic for a long time now, and some might say you shouldn’t believe the hype. A lot of companies have focused on building data collection systems, data lakes and databases.
        
        
        All this is, of course, essential for using data, but it’s about time we derive value from it too. Similarly, data engineers are not only infrastructure builders, but a quintessential part of commercial operations. It’s important to bring data capabilities into the core of business operations, and not run isolated data projects that have little effect on how the business is run. 
        
        
        With individual consumers increasingly protected — as they should be — and the focus on data privacy on the rise, stricter legislation and changes in browsers have made using data more complex for companies. But this can also be seen as an opportunity: it forces us to think properly about customer experiences and to consider what data is used for. It’s no longer possible to “collect everything”, put it in storage and forget it’s there. Instead, it takes strategic planning, knowledge of existing and forthcoming regulations and a thorough understanding of what data you have and how it can be used.
        
        
        That’s why data engineers are key players in a growth team. They are experts in various methods of data collection, they deal with data processing and, together with designers, developers, marketing technology experts and data scientists — the growth team — they contribute significantly to a business’s success by creating better customer experiences. 
        
    
</description>
		
	</item>
		
	</channel>
</rss>