In this post, guest blogger Leo Zancani, CTO of Ontology Systems, looks at the challenge of data variety. He argues that to benefit from Big Data opportunities, a new generation of data variety tools is required to handle the data integration and data alignment challenges that CSPs are facing.
A variety of California Grapes in a Vase, William J. McCloskey, 1921
Back in 2001, Gartner (then META Group) analyst Doug Laney famously defined three 'dimensions' for thinking about data management challenges: volume, velocity and variety (See: Laney, 3D Data Management: Controlling Data Volume, Velocity and Variety, 2001). In 2012, Laney went on to use these as the basis for Gartner’s definition of big data (See: Laney, The Importance of 'Big Data': A Definition, 2012).
An interesting thing about the '3 Vs' is that only two of them are easily quantified: volume in bytes and velocity in seconds. Variety? Well, that’s a much more slippery notion – is it the number of different formats? Different subject matter domains? Different originating systems?
Human beings and the technology market being what they are, once the demand for tools to handle big data was established, technical endeavour focussed eagerly on the easily measurable. The argument for investing in and buying a new data store that can store more terabytes or handle more updates per millisecond is pretty straightforward: competitive positioning is based on a single, well defined and easily understood metric. Variety? Once again, not so much.
All of which perhaps begins to explain why it is that – in spite of the fact that analysts consider it to be by far the biggest and baddest V (Leopold, 2014) – variety has received so little attention in terms of tools and technology. Volume? Hadoop, MongoDB, Cassandra to name just a few. Velocity? Storm, Hibari and any number of commercial products. Variety? Um…
As well as the basic difficulty of dealing with a wide variety of data (whatever that means), a less obvious issue is starting to come to light, caused by the growing availability of cheap, high-volume storage: the 'data attic' effect.
Before the advent of big data, when retaining data was an expensive thing to do, the default disposition of organisations was to discard data; retaining it was very much an opt-in choice. With the notable exception of data covered by retention regulation, the existence of big data stores encourages people to hoard data – 'just in case'. Retention has become an opt-out.
This is apt to create data attics – big, dusty, rarely visited rooms of data, isolated from the rest of the enterprise data estate, just waiting around for a garage sale data scientist to magic value out of them. Why? Because the technology to easily join them up, between themselves and to the rest of an organisation’s systems – that is to say, to handle the variety problem – just isn’t available.
For communications service providers, a combination of the sudden technology diversification brought about by the boom times, and under-investment from more recent lean times, means that the issue of siloed systems and data is already very acute. Data attics certainly don’t help improve that.
So we find ourselves in a situation where the expectation is being set that it’s possible to get value from any amount of any data at any time. In fact, we are in very real danger of creating more – and even less penetrable – data stores: divorced from the day-to-day operations and concerns of the business, too hard to see as a unified whole, and therefore of little strategic value either (Kelly, 2013).
There are two sides to this problem: one is a technology challenge, the other a business challenge.
Big data, in spite of its bigness, is still data, and expenditure made to record, retain and analyse it needs to have a coherent and meaningful purpose and justification, just like any other technology initiative in a business. The advice from Deloitte (amongst other commentators) is that simply collecting a lot of data and expecting 'insights' to materialise isn’t going to work, industry hype notwithstanding (Sharwood, 2014). The business challenge is clear and should be easy to resolve: big data projects need a use-case and a business-case in order to be successful.
The technology challenge though is more subtle.
The Register’s Paul Kunert aptly sums this up by saying that “big data is like teenage sex: everyone is talking about it and nobody is doing it correctly” (Kunert, 2014).
As with any industry trend that emerges very quickly, the pressure on organisations to communicate what they are 'doing about it' is immense – and has become yet greater as the communications cycle has accelerated in recent years. This type of undirected pressure has led technical organisations to take the path of most immediately executable action: the tools available focus on volume and velocity, so they have stored data quickly, on the – apparently reasonable – assumption that, since the data is being retained, any requirements the business later articulates can be implemented retrospectively on the retained corpus.
What this approach didn’t consider was the lurking demon of data variety.
A curious aspect of this oversight is that while the data resided in its originating systems, the obstacle was very visible: projects to use data from multiple systems anticipated (although usually vastly underestimated) the substantial effort required to link that data together in order to make use of it as a single body.
In combination with the lack of a clear motivation from the business, these effects will lead to a colossal data integration deficit.
Previously, the business wanted to make a specific use of some data – so it figured out where that data would come from and then went about costing the activity of joining it up to make it usable for that specific use. It often found this cost to be much higher than expected, as the sorry state of data integration projects shows.
Now, the (often implicit) requirement is to be able to ask any question of any data, and so that data must all be joined up in a way that enables it to be used in any way. The cost of this is likely to be tremendous – growing roughly with the square of the number of sources, multiplied by the cost of joining any one pair – but it hasn't been accounted for at all!
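The combinatorics behind this claim can be sketched quickly: joining every source to every other requires a number of pairwise integrations that grows with the square of the number of sources. A minimal illustration (the function name is mine, not the author's):

```python
# Illustrative sketch (not from the original post): with n data sources,
# joining every source to every other needs n*(n-1)/2 pairwise
# integrations - i.e. roughly n-squared, as the argument above suggests.

def pairwise_integrations(n_sources: int) -> int:
    """Number of links needed so any source can be joined with any other."""
    return n_sources * (n_sources - 1) // 2

for n in (2, 5, 10, 50):
    print(n, pairwise_integrations(n))  # 50 sources already need 1225 links
```

Even a modest estate of 50 retained data sets implies over a thousand integration efforts before "any question of any data" becomes answerable.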
In order to move forward, the answer is clearly to move the focus of big data tooling onto the variety dimension.
Just as the recent frenzy of innovation in high-volume and high-velocity data tools brought down the cost of acquisition, storage and query by orders of magnitude, so a new generation of data variety tools is now urgently required before the hidden treasure in the data attics can be rescued from an impenetrable layer of data integration dust and data alignment cobwebs.
About the author
Leo Zancani co-founded Ontology Systems in 2005 and is now leading the application of semantic technologies to management systems for IT, Data Center, and Network environments.
Telesperience: operational efficiency, commercial agility and a better customer experience
Chief Strategist Teresa Cottam looks at the persistent weakness of billshock prevention. She argues that as CSPs evolve their role, they must focus on having more robust measures in place to tackle fraud, accidental overspending, bad debt and billshock.
Charles Joseph Grips, Opportunity Makes a Thief, 1875
When I review the ongoing cases of billshock that pop up in the general press at depressingly regular intervals, it's evident that billshock remains a persistent risk to both customers and communications service providers (CSPs) worldwide. I regularly tweet out many and varied cases I discover, and there is little evidence they are decreasing. The causes may change, but the result is always the same: angry and upset customers, and poor publicity for the industry. For the CSP there is also the cost of dealing with the complaint from the customer, and indirect costs associated with poor publicity such as churn, loss of customer trust and so on.
In my last post I looked at why billshock is an issue not just for postpaid customers but also for prepaid customers. But when we review the ongoing cases of postpaid billshock, what we see is chronic failures of systems, policies, processes and people. Often, the systems and processes that do exist work only in a siloed fashion, alerting customers to certain problems but not others. Regulations are also complex, leading to misunderstandings that expose customers to risk. As I explained in a previous post, premium rate numbers, for example, are not covered by EU roaming regulations - which is why, if your phone is stolen, a common fraud is to exploit this loophole by calling premium rate numbers.
But it's not just mobile users who are at risk of this type of fraud. Big businesses have frequently been defrauded by criminals using a similar technique. One scam involves hacking into the voicemail of an unused extension and then war dialling international premium rate numbers (IPRN). Thieves rent IPRNs and make money whenever one of their numbers is dialled. Meanwhile, the customer and CSP get stuck with a huge bill. In 2012 it was estimated that UK businesses may have lost up to £1 billion to this little trick.
Scarcely better than this is the increasing number of billshock events related to digital content. These occur when customers are charged for in-app purchases, particularly during gaming. By its nature, the victim is disproportionately a young person who may not fully understand the charges and, more often than not, is not the one who pays the bill. The content company often doesn't make it easy to understand the charges or how they aggregate over time. And while you cannot blame content companies for attempting to make money, you can blame them for behaving unethically. Charging someone thousands of pounds for a few digital goods is unethical.
Mired in all of this is the CSP, who is often the one responsible for collecting the money, dealing with the complaints, and sitting in the middle of the resulting bad publicity. Do they really want to sell their brand's value so cheaply?
While both CSP and content provider are to blame, there's a third party in all of this that's responsible too. That's the regulator - at both EU and national level. It cannot be right that someone who has agreed to pay a few tens of pounds for telecoms services is allowed to run up a bill for thousands of pounds without query. There is no excuse for not blocking services to, and payments from, the customer, and the technology certainly exists to intervene early.
One school of thought is that CSPs don't act because the uncomfortable truth is they make considerable sums out of overages. While partly true, the validity of this argument depends on the scale of charges. Once the overage exceeds a certain level, risk to the CSP rises exponentially. A small percentage of overages may indeed be in their favour - customers are unlikely to complain and just pay up - but once the sum owed crosses a certain threshold, the customer will complain, may not be able to pay, or will generate bad publicity for the CSP. Customers are all too aware that complaining on social media or to the press usually results in the sum owed being wholly or partly waived.
Excessive billshock does not favour the CSP at all. They pick up the cost of dealing with it, the brand damage and often the bills. Yet the solution is in their hands. In the case of content partners overcharging, a policy should prevent any further purchasing from being billed until the situation is cleared up. A simple clause in any partnership contract would state that neither the CSP nor its customer would be liable for excessive charges, and that it was up to the content provider to put in place safeguards to ensure people could not spend hundreds of pounds within a short timeframe (a clear sign of either fraud, a child using the service, or an adult who does not understand the consequences). It's easy to make payment contracts net of repudiation, fraud and bad debt. And it's poor business practice on the part of the content company to charge recklessly high sums with no spending controls in place.
This would avoid the kind of damaging situation we see here where a UK mother faces a £7000 bill her son ran up playing a mobile game. When charges are being incurred at the rate of £240 per day, the CSP is ultimately at fault for poor governance.
However, as is often the case, it is the CSP (T-Mobile in this instance) that is blamed for the situation, and which bears most of the negative consequences. Again, the regulator is at fault, because there should be an absolute and legal cut-off point beyond which services cannot be supplied without a credit check and a written agreement. This should be no more than 200% of the normal monthly bill, and no more than 20% of the normal bill in a single day. Both CSPs and content providers are on dangerous legal ground, since any contract between the CSP and the customer was for normal telecoms charges and not for excessive content charges. CSPs need to be assured that the content provider clearly explained the charges, alerted customers to their spending, and took reasonable measures to behave ethically. They also need to be sure that the customer agreed to pay: since many people falling foul of these charges are children, the 'contract' may not be enforceable where the permission of the (over-18) billpayer has not been given.
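The proposed cut-off could be expressed as a very simple rule. A minimal sketch, assuming charges are tracked per month and per day; the thresholds (200% of the normal monthly bill overall, 20% of it in a single day) come from the proposal above, while the function and its signature are illustrative, not a real BSS API:

```python
# Hypothetical sketch of the cut-off rule proposed above - not a real
# charging-system API. Thresholds are the ones suggested in the post.

def breaches_cutoff(normal_monthly_bill: float,
                    charges_this_month: float,
                    charges_today: float) -> bool:
    """True if charges exceed either proposed limit and should be blocked."""
    monthly_cap = 2.0 * normal_monthly_bill   # 200% of the normal bill
    daily_cap = 0.2 * normal_monthly_bill     # 20% of the normal bill per day
    return charges_this_month > monthly_cap or charges_today > daily_cap

# A customer on a (hypothetical) £30-a-month plan incurring £240 of content
# charges in one day - the rate quoted in the case above - trips both caps:
print(breaches_cutoff(30.0, 240.0, 240.0))
```

Under this rule the £240-a-day charging would have been stopped on day one, rather than accumulating into a £7,000 bill.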
Part of the reason for weak regulatory intervention here is that the UK telecoms regulator Ofcom is not actually responsible for so-called "premium rate" services - that duty belongs to PhonePayPlus. Thus we have two regulators in the mix and neither is being aggressive enough to put in place robust measures to protect customers. Couple this with a weak response by the UK phone industry to these types of problems, and it's easy to see why customers continue to fall victim to excessive charging.
CSPs may think they have solutions and processes in place to deal with problems around basic services, but as they evolve into digital service providers (DSPs) they need to widen protection and see this both as a revenue management and customer care issue. This is not just a tactical matter, but a strategic problem. The damage the industry continues to do to itself by not upping its game on excessive and fraudulent charging is justification enough for action. How can CSPs hope to build a future role within the digital supply chain as aggregators, payment processors and so on if they don't have robust measures in place to protect customers, partners as well as themselves?
What's more, offering an assurance to customers that charges will always be fair, clear and proportionate, and that you will protect them from fraud and from mistakes that leave them open to excessive charges, is clearly a differentiated customer experience.
Chief Strategist Teresa Cottam looks at why CSPs need to control charges for prepaid customers as well as postpaid customers.
Simon Hollosy, The Purse Is Empty
I hope by now we're all getting the message that billshock is bad for business. In the EU, regulatory intervention means certain types of billshock are being regulated out of existence (such as intra-EU roaming billshock). As we have discussed before, this opens up new risks to the operator brand and business, because customers misunderstand which risks they are protected from and which they're not. This, in turn, means that CSPs cannot afford to be complacent about billshock and the detrimental effect it has on customers, brand perception and long-term business.
One area of complacency is around prepaid users. Contrary to perceived wisdom in the industry they are not immune to 'billshock'. Although many customers think that using a prepaid account will protect them from billshock, all it really does is limit the damage. There is also an important discrepancy of viewpoint to consider here between developed and developing market perspectives. In developed markets, customers may be annoyed their credit has been drained, but console themselves with how it could have been worse. In developing markets that are predominantly prepaid, the issue is far more serious.
Previously, many CSPs in largely prepaid countries believed that billshock wasn't an issue for them, as their subscriber bases were protected from the phenomenon. The logic was simple: they don't get bills so they don't get billshocked. This may largely have been true in the past when customers were using feature phones to access basic services, because customers could usually predict how fast they would use their credit. However, as customers transition to smartphones, and begin using phones to access content-based and internet services (rather than just to make voice calls or text), they are increasingly exposed to credit drain (the prepaid version of billshock).
The same principles of brand damage and poor customer experience apply when a customer's account is rapidly drained of credit. It can leave them vulnerable and unable to make urgent or emergency calls (calls for help either to government agencies or family). It can make them angry because they feel the charges are unfair. And, for poorer customers, it can also deprive them of their livelihood and their principal means of doing business.
If they have to travel some considerable distance to recharge their credit, then credit drain becomes a serious issue - eating into income-generating time as well as inconveniencing the customer. This customer may also have no more disposable income to spend on their phone. In situations where customers are already sacrificing necessities such as food in order to pay for mobile credit - because their mobile is an essential tool for them to generate income - credit drain has grave consequences.
Although regulators have acted to protect contract subscribers, fewer measures are in place to protect prepaid subscribers due to the belief that the credit limit is adequate protection. However, this is changing.
In South Africa, for example, the National Consumer Commission (NCC) is poised to act after consumer complaints. It is particularly concerned about CSPs' failure to warn customers about data usage, believing this to negatively affect consumers' rights. Meanwhile, Pep Stores, South Africa's largest mobile phone retailer, is now working with Google to tackle the problem. This is said to be taking the form of an Android-based 'Africa-friendly' proposition, which includes a user interface that allows customers to have better control and awareness of their data usage.
Telesperience advocates that prepaid customers should have the same protection and consideration given to them as postpaid customers. They should be advised on excessive or out-of-character spending, should be given the ability to limit spending on certain services and for particular time periods, and should be alerted when these limits are being reached. The technology is readily available to do this, and this type of capability would differentiate CSPs in prepaid markets and amongst certain customer segments.
The arbitrary line between postpaid and prepaid customers continues to blur, and CSPs need to offer basic customer assurance - such as usage notifications and protection - to all customers, irrespective of how they choose to pay.
For BSS vendors, this could be a new need against which to sell well-established policy control and charging capabilities, as the market for postpaid policy control begins to slow in developed markets.
Chief Strategist Teresa Cottam looks at why communications service providers need to combine efficiency and flexibility to successfully address the enterprise market. She argues that many CSPs do not yet have the right infrastructure to support their enterprise plans on both the demand and supply side.
William Armstrong, Painting of the Toronto Rolling Mills, 1864
Show me a telco and the odds are that they now believe they can make huge sums from selling to enterprises - particularly smaller enterprises. The (telco) logic is simple: the consumer market is fiercely competitive, with a race to the bottom on pricing for basic services and the requirement to constantly innovate; the large enterprise market is equally tough, with embedded competition and hard-nosed price negotiation from companies that are demanding more and more 'bang for their buck'; the small and medium-sized enterprise (SME) market, though, is underserved with ICT, and there are millions of smaller enterprises waiting for telcos to come along and make them part of the connected world. While selling another content service to a cash-strapped consumer to maintain ARPUs might be both alien and challenging to the average telco, selling a network-plus* or Cloud service to an enterprise should be both natural and easy.
Of course in the outside world this logic comes under huge pressure when reality starts to bite. As soon as communications service providers (CSPs) begin to think about how they will sell to the SME market, and deliver the capabilities and performance SMEs require, while still remaining profitable, a large number of problems surface. It's not as though CSPs haven't tried selling to this market before; it's just that they've struggled to do so profitably.
Contrary to what many in the industry believe, SMEs are not all technically naive organisations sitting there waiting for telcos to save them. Neither are they focused just on cost, as many would have you believe. In fact, quality, availability and ease of use are all more important to the average SME than cost alone. Important to enterprises of all sizes though is trust: can they trust a telco to deliver key and often mission-critical capabilities?
The evidence from a number of years of both Telesperience and other studies is that SMEs have a patchy view of their CSPs. Many have been stung by billshock or let down by poor quality of service. Why then would they trust a telco to provide a wider range of capabilities?
Even where they have a broadly positive view of their CSP, branding also has a role to play: CSPs are strongly associated with basic connectivity services, not with more complex solutions. SMEs may complain that they are expected to assemble solutions, and yearn for someone to come along and offer them verticalised and tailored solutions that meet their needs more exactly, but the natural place to turn for these may not be their CSP.
If CSPs want to open up this market they have to work on both their trustability and their understanding of enterprise businesses before anything else. That means offering the services the enterprise really needs, with clear pricing, no billshock, reliability and good quality, plus excellent customer service. Getting these basic things right is the foundation for any successful enterprise business going forward. Being easy to do business with, as well as a valued provider of good-quality connectivity, sets the CSP up to build more value as enterprises move towards mobile and distributed working, and access more capabilities from the Cloud.
Most importantly, CSPs have to be able to demonstrate basic customer experience and customer service principles in the enterprise market. Many concepts now taken for granted in the consumer market are still novel or radical on the B2B side of the industry - for example, an understanding of niche marketing. Enterprises are just as idiosyncratic as consumers - there is no "average" enterprise, just as there is no "average" consumer. Meeting the needs of enterprises more exactly is both the opportunity and the challenge facing CSPs.
In the past, CSPs have either tried to deliver economies of scale by offering less choice (an efficiency-centric model) or have tried to meet the needs of enterprises by tailoring services individually (a flexibility-centric model that is expensive to maintain). In the future, they will need to combine approaches to deliver mass customisation - a more tailored service that can be delivered cheaply because it is supported by automation. Verticalised solutions and 'pick and mix' approaches will deliver a closer fit without the overheads associated with homespun tailoring. Key to this though is an infrastructure that supports large volumes of customers with complex needs, provides cheap and easy flexibility through automation, makes change easy, is cheap to run and yet, most importantly, is still reliable.
On the flipside, creating wider solutions will require CSPs to work with a broad range of partners and be able to manage these partners, their performance, and the wide range of service elements they supply - including paying these partners accurately and on time.
If CSPs are serious about their enterprise ambitions they have to focus on their infrastructure, because it will have a huge role to play in their success or failure. Current infrastructural weaknesses are slowing their ability to benefit from the wide range of enterprise opportunities - both direct and indirect. And while CSPs may have certain advantages, they don't have unlimited time to get this right because if they don't meet enterprise needs, there are plenty of others who will.
To compete successfully, CSPs need infrastructure that supports their profitability, as well as enterprises' needs for easier purchasing and fulfillment, more choice and more complex solutions. There is a clear role for complex solution providers, enabling CSPs to aggregate key solution elements for provision to smaller enterprises, but this requires the kind of collaborative working that is not natural to CSPs, and which they might not currently have the capabilities to support.
Teresa Cottam is a renowned strategist on B2B (small, medium and large enterprise) market opportunities for Communications Service Providers (CSPs) worldwide. Teresa advises CSPs and their suppliers, and leads Telesperience's extensive research on B2B strategies. Teresa has unique cross-industry skills. In addition to 20 years of telecoms experience, she has worked for a financial services think tank in the City of London, served as an analyst and researcher in the media, retail, local government and manufacturing sectors, and as a consultant in the manufacturing, food & beverages, healthcare, and education sectors. You can follow Teresa on Twitter @teresacottam
Telesperience's Chief Strategist Teresa Cottam looks at the latest research from Ofcom, which shows that real quality of service varies hugely between cities and rural areas.
Alone, by Felix Parra, 1898
Ofcom published data in August 2014 on the 'real' customer experience in the UK, supplied by RootMetrics, which measured experience at the handset (rather than in the network).
The research found that while 78% of people in urban areas are satisfied with their mobile provider, only 67% of people in rural areas of the UK are satisfied.
For example, the proportion of calls successfully connected varied considerably between providers and according to where the customer tried to connect. The CSP achieving the highest average call completion rate was EE at 97%, comprising 97.5% successful call completion in urban areas compared to 93.7% in rural areas. This represents quite a tight range of performance compared to the other main networks, with O2 and 3UK each showing a disparity of 10 percentage points or more between their urban and rural call completion rates, and Vodafone a massive 15.4 percentage point difference.
Notable facts from this research
All UK networks manage to achieve above 95% completion rates in urban areas, with Vodafone having the lowest rate at 95.3%.
The average successful call completion rate in urban areas across the UK's big 4 is 96.63%.
Successful call completion in rural areas is almost 10 percentage points lower on average across all four networks at 86.75%.
EE beats the average performance in both rural and urban areas, with call completion of 93.7% in rural areas and 97.5% in urban areas. The variation in its performance between the two areas is low. It has the highest call completion in rural areas by a considerable margin (the next nearest is O2, which is 6.3 percentage points behind EE). It comes second to O2 by a small amount in urban areas (0.2 percentage points behind), which is well within the margin of error, so the two should be considered tied in terms of performance in urban areas.
O2 beats the average performance in both rural and urban areas with call completion rates of 87.4% in rural areas and 97.7% in urban areas. It is effectively tied in first place in terms of call completion in urban areas with EE (see above), and it has the second best call completion rates in rural areas (behind EE).
3UK, which has frequently been criticised for its quality of service, performs better than many might expect. It performs roughly at the UK average in both rural and urban areas (at 86.0% and 96.0% respectively). It trails EE and O2 on these metrics, but is ahead of Vodafone in both areas - in rural areas by a considerable margin of 6.1 percentage points.
Vodafone has the worst call completion rates across the big four in both urban (95.3%) and rural (79.9%) areas, and in both areas it performs at below the market average (in rural areas by a considerable margin of 6.85 percentage points below average, and 13.8 percentage points behind the top performer EE). Even in urban areas it trails top performer O2 by 2.4 percentage points. This will surprise many, as Vodafone pitches itself as a premium provider aiming particularly at business customers.
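The averages in the 'Notable facts' list can be reproduced directly from the per-network figures quoted above (a quick sanity check on the quoted numbers, not part of the original research):

```python
# Per-network call completion rates (%) as quoted in the text above.
urban = {"EE": 97.5, "O2": 97.7, "3UK": 96.0, "Vodafone": 95.3}
rural = {"EE": 93.7, "O2": 87.4, "3UK": 86.0, "Vodafone": 79.9}

urban_avg = sum(urban.values()) / len(urban)  # 96.625, quoted as 96.63
rural_avg = sum(rural.values()) / len(rural)  # 86.75, as quoted
gap = urban_avg - rural_avg                   # 9.875: "almost 10 points"

print(f"urban {urban_avg:.2f}%, rural {rural_avg:.2f}%, gap {gap:.2f} points")
```

The rural shortfall of just under 10 percentage points, and Vodafone's position at the bottom of both tables, fall straight out of the raw figures.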
How bad is quality of service?
The figures reported by Ofcom above, and collected by RootMetrics, are considerably worse than those reported by the industry itself. This is because CSPs use different methodologies to report call completion, giving a rosier picture than that discovered by RootMetrics. For example, CSPs usually exclude call attempts made when customers are outside network coverage. This means the customer experiences not being able to make or complete a call, but the CSP does not include this in its statistics because it is not aware that the customer tried unsuccessfully to make a call. The outcome is that CSPs are planning and working from inaccurate data, or at least data that does not give the full picture.
However, there is another factor to consider here. UK customers already select mobile network based on local coverage (coverage in the areas they are most likely to use their mobile). This reduces competition in localities where coverage is poor - such as in certain rural areas - giving less choice or lower competition in these areas. While, in theory, customers may have a choice of 4 or more network providers, poor coverage can reduce their real choice to 1 or 2 providers. Even then this does not guarantee availability or good quality of service and those in rural areas have become accustomed to a level of service that is far lower than that expected by residents in urban areas.
Dissatisfaction with call quality is a hard metric to capture accurately, as we may not be comparing like with like. If customers in rural areas are accustomed to lower service quality then their expectations may be satisfied at a much lower level of performance than those in urban areas - even though they are paying the same amount of money for a far lower quality of service. This latter point is something that Ofcom should focus upon.
Ofcom also found that while 55% of customers reported rarely or never having no signal or poor reception, 30% said this happened at least once per week. Likewise while 69% say they rarely or never experienced a blocked call, and 65% rarely or never experienced a dropped call, a fifth of customers experience this regularly (20% regularly experience blocked calls, 22% dropped calls). Those in rural areas are far more likely to have this problem than those in urban areas.
Tellingly, Ofcom found that in ‘remote rural’ areas, 35% of people said they were unable to make calls at least once a week, while 28% found calls were unexpectedly dropped. What is quite shocking is that 'remote rural' areas are not, as you might think, just the most northern parts of Scotland or mountains in Wales, but include villages of up to 2,000 people. In other words, a lot of people and businesses are in areas that they may not regard as remote - or be deemed remote by the Office for National Statistics - but have been categorised as such by telecoms service providers.
Why is poor rural QoE a problem?
Rural service provision is a big problem in the UK: the government has a stated commitment to closing the 'digital divide', encouraging businesses in rural areas and supporting rural communities. The urban-rural digital divide means that many types of business are unable to function or optimise their performance because of inadequate telecoms infrastructure. Moreover, those travelling for business around the UK find that their experience is very patchy - and certainly not the connected, mobile lifestyle our industry touts as possible.
These figures demonstrate what many have been saying for a long time: that too much bandwidth and investment is being thrown at the largest cities (such as London and Manchester), which already have a much higher quality of experience, at the expense of other areas of the UK.
This issue only seems to become newsworthy when it affects powerful people - such as when the Prime Minister recently reported having to stand on a hill in Cornwall to get a mobile signal while on holiday. For those living outside cities it's not news, but it is a huge cause of frustration when the experience promised by CSPs is not delivered.
In fact, the experience in large swathes of the country - rather poetically termed 'rural areas' - can be very patchy. I live in a well-sited town of 20,000 inhabitants in the Midlands, which is close to several major cities, as well as being situated on the main motorway and rail corridors. High-speed broadband is now available, and mobile coverage is average (dropped signals are a real problem), but a short distance outside the town (less than one mile) there is very patchy mobile coverage and no broadband for those living even in substantially-sized villages.
Fixing the problem by local authority area doesn't work if there is huge disparity in coverage within the authority. In our local authority there are two sizeable towns and a large number of villages, both large and small. This means the average across the authority (by population) is unlikely to reflect the reality experienced by those in the villages, or those moving around the area. We don't expect to have mobile access in the middle of a field; but we do expect access on main roads, at the edges of towns, travelling down main railways, and so on. We also do not appreciate fluctuating signals, where we can connect to the network but the call repeatedly drops - making the service unusable.
What do stakeholders propose to do?
EE, O2, Three and Vodafone are working with Ofcom to develop a common methodology for measuring the rates of calls successfully completed on their networks.
All four networks are committed to matching O2’s 98% coverage obligation for 4G mobile.
Ofcom will publish research comparing 3G and 4G mobile broadband speeds, following detailed testing across five UK cities.
The UK government has committed £150 million to fund mobile masts in uncovered areas.
The Department for Transport and Network Rail have committed £53 million to improve mobile services and access to WiFi on railways.
Vodafone alone is investing over £1 billion to improve its network.
Is proposed stakeholder action enough?
The risk is that the industry will continue to be measured by its performance in urban areas, with only lip service paid to the problems experienced in rural areas. Announcing major investment is all very well, but such investment is often unevenly spread.
The impact of poor coverage is discriminatory: rural areas, on average, contain a higher proportion of older people and those under 18, as more younger working-age people move to cities to look for work. This rural 'brain drain' leads to loss of industry, entrepreneurship and workers in rural areas, while increasing congestion, housing and unemployment problems in cities. As life becomes more networked, and workers more 'mobile', less commuting is envisaged. However, rural areas risk not benefitting from the social advantages associated with both this and other networked society initiatives (such as the Internet of Things and Smart Environments).
Without urgent and appropriate action, the UK's digital divide will not be closed. Telesperience would strongly argue that the taxpayer should not be the first port of call to subsidise industry initiatives. Rather, we would like to see customers paying for the quality of service they actually receive. Those experiencing disproportionate numbers of dropped calls, or an inability to get a signal, should be compensated by their mobile provider. Get less, pay less. Discounts for poor service should be reflected in bills, or top-ups sent to prepaid users. CSPs should also be fined for not providing adequate service in areas deemed to be core parts of the UK mainland and which they have committed to covering, with the fines used to improve infrastructure in those areas. We believe that only when Ofcom makes charges reflective of the service actually provided will CSPs be incentivised to be more honest about what they're delivering.
You can download a free copy of the Ofcom report here.
Telesperience: operational efficiency, commercial agility and a better customer experience
Smell the change? There's an ill commercial wind blowing in the telecoms market. Chief Strategist Teresa Cottam looks at the weather forecast.
Carlo Bonavia, Storm off Rocky Coast, 1757
In telecoms we're usually focused on the latest shiny new technology. It doesn't really matter what it is - LTE, WiFi, the latest smartphone, 5G, hetnets or policy-controlled M2M - because the odds are that at least in the short to mid term it's going to both cost us and lose us a lot of money.
The sums of money involved are staggering - more than the GDP of many small countries. And yet, we talk about billions as though they were small change - easily recoverable and just part of the cost of doing business.
Yet the industry is also littered with examples of poor investment strategy:
Initiatives where enormous amounts of money were spent but which never took off.
'Build it and they will come' strategies that left shiny new investments high and dry for years.
Failure to optimise the commercial potential of services that customers do want and are willing to pay for.
Investment in complex solutions that take years to come to market, when more agile competitors find cheaper, faster ways to market.
The 'me-too' approach to investment that is epidemic in telecoms sees CSPs plunging head-first into massive billion-dollar investments - often with only a vague sense of how the money will be recovered or add value. More often than not, the investment is simply justified as being necessary to maintain the status quo. Meanwhile, competitors with a keener commercial eye come in and make billions off the telcos' backs. Rather than learn the lessons, the telcos blame their more agile competitors, claim that it's not fair, and try to manipulate the environment in their favour.
But another lesson the telcos have failed to learn is that holding back the tide doesn't work - as Canute demonstrated over a thousand years ago.
The whole telecoms ecosystem is guilty of the same thing: hyping up technology as a panacea. We're told there are no 'problems', only 'challenges'. And, more often than not, technology is the answer to those 'challenges'. Or is it? This approach totally devalues and brushes over every other contributor to success. Even where innovation is technology-based, failing to get the commercials right means the opportunity is lost or not sufficiently optimised - haemorrhaging billions.
Yet look at the press (including analyst output), look at vendor marketing, and at industry 'advice' to investors. More often than not the emphasis is on how cool the tech is and how investment will continue the status quo. The hypnotic effect of all that shiny technology deludes sane and intelligent people into believing that the telecoms status quo can be preserved simply by heavy investment to maintain barriers to entry. But while telcos build their connectivity castles, competitors are not interested in taking down the walls; they simply move around the castle and undermine the foundations with more successful commercialisation.
I'd like to ask those of you who read EY's 'Top Ten Risks in Telecommunications' - now a couple of years old - whether you think the risks have been addressed successfully. In its follow-up, EY suggested that senior executives had everything under control - though it hinted at a gap between answers and execution.
So have CSPs successfully shifted their business models? Are they engaged with customers? Do we have confidence in ROI? Are we better at turning demand into value?
Personally, I see a lot of great PowerPoint along these lines - we're certainly good at talking the talk - but I'm not convinced that most CSPs are in the right place to take advantage of the huge commercial potential coming into (and above) the market. There's obviously variation - some are cleverer than others - but it's hard to avoid the conclusion that the communications market is fundamentally and rapidly shifting and that there will be a lot of casualties in the next few years on all sides of the market.
I'm reminded of the old investment caveat that past performance is no indicator of future performance. Never a truer word was said about the telecoms market. And the shift is already evident across the entire supply chain. What's curious is that so many cling to the old way of doing business and refuse to see that the connectivity market is fundamentally changing.
I don't mean to imply that there won't be a healthy business in the future, or that companies will not make very good returns. The question is whether past stellar performers will be future stellar performers. (Clearly this is not guaranteed.)
We need to take a long hard look at the characteristics, strategies, technologies and approaches that *will* mark out the winners from the losers. We cannot assume that just because an opportunity is telecoms-based, that telecoms companies will be the ones to benefit from it. Many established names are going to disappear. But neither can we assume that johnny-come-lately newcomers with shiny new(er) technology will do any better. As Shakespeare said: all that glisters is not gold. And finding the gold amongst all the pyrite has never been harder or more important.
In this third and final part of a three-part series on the damage caused by internal politics within a CSP, guest blogger Snowden Burgess looks at how local optimisation, bottlenecking and restructuring are preventing CSPs from competing effectively and meeting the needs of their customers.
Children dancing in a ring, Hans Thoma, 1872
Most large communications service providers (CSPs) are now driven by a need to be operationally efficient. However, this seemingly worthy goal can, in fact, drive and encourage both internal politics and/or fire-fighting cultures. Operational excellence may be essential for current survival, but innovation excellence is essential for ongoing survival in a world of constantly changing technology.
In this world of rapid technological advancement, technology start-ups can compete with, and in some cases displace, large entrenched companies. This is why innovation excellence is critical, and why large traditional CSPs feel so threatened by the likes of Facebook, Google and Amazon.
Creating innovation excellence and true operational efficiency means getting internal politics out of the way. Generally, though, the larger an organisation becomes, the more entrenched the internal politics become, and the more these block both operational efficiency and innovation excellence.
In my last post, I discussed the 'Empire Builder' - one of the four roles assumed by leaders and managers playing internal politics. In this post I will look at the other three - 'The Restructurer', 'The Local Optimiser' and 'The Bottle Necker' - and analyse why they cause chaos in CSPs seeking to improve their performance.
The Restructurer
This is one of the most effective ways for a manager to present a sense of order, control and growth while at the same time driving confusion, chaos and stagnation.
“We trained hard, but it seemed that every time we were beginning to form up into teams we would be reorganised. I was to learn later in life that we tend to meet any new situation by reorganising: and a wonderful method it can be for creating the illusion of progress, while producing confusion, inefficiency and demoralisation.” – Gaius Petronius Arbiter (c. AD 22–67)
Sometimes restructuring an organisation or department is the right thing to do and can be very effective in driving change, re-energising teams and improving bottom lines, but for a restructure to be effective an organisation needs to be clear on four things:
1. What the problem is they are trying to fix
2. What the communication plan is for effectively informing the team being reorganised, as well as the wider teams it may support and, importantly, any customers that might be affected
3. What the time frame is for the change: start dates, milestones and end dates
4. What success looks like.
The Restructurer will not have answers to points 3 & 4 and will have a very limited communication plan. This is why the end result is confusion, inefficiency and demoralisation. A restructure is often used to meet internal or external pressures being placed on a leader or manager to take action on poor performance. By restructuring but not addressing the four items above, the leader or manager simply adds more confusion and chaos to the situation, while deflecting the pressure from his door with the excuse of 'I’m restructuring, so you have to give it time to take effect!'
Ultimately, a true Restructurer will never stop: they use this tactic continually, moving from one restructure to the next, failing to define why the change is taking place and communicating it poorly. As a result, they can never show any real success or return on investment (ROI).
I work in a large organisation that typically experiences a major restructure every year. Sometimes this is due to M&A; sometimes due to more mundane matters. For example, in the last few years just on the level of office accommodation, the company has moved teams from one office to another, moved teams around that building, and then concluded that moving them back to their original positions might be desirable. This has all been done in the name of improved performance and improved morale, at vast expense but with little to show for it.
Badly thought-through restructuring that is poorly planned and supported has a significant impact on the internal morale and performance of a team. The knock-on effects impact dependent and adjacent teams and, ultimately, the end customer experience.
The Local Optimiser and Bottle Necker
Another great internal politician that impacts on innovation and efficiency is the Local Optimiser. Local Optimisation is a great way to demonstrate short-term gains within a process flow, and shift any problems deeper into the process and away from your team.
A great example of how Local Optimisation can stymie operational efficiency efforts and innovation excellence is where a goal is given to a team to complete a certain number of actions in a day (for the sake of argument let's say 100). Each of these actions takes 20 minutes.
However, the team is under considerable pressure. The Local Optimiser therefore decides to meet his objectives by pushing some of the work onto other teams. For example, rather than extracting information from emails, the emails may simply be attached to the system (requiring other teams to read them and extract the necessary information).
The Local Optimiser is now a hero because he has increased his throughput to 150 actions a day, and reduced the time taken to process an action from 20 minutes to 10 minutes.
For some weeks everything seems to be fine. However, what has happened is that the internal consumers of the information are now struggling. They have a defined window to complete their tasks, but their workload has now been increased significantly simply because The Local Optimiser has made himself look good by passing some of his workload onto them. This lengthens the time taken to complete their tasks, and impacts directly on the customer.
This change heralds the start of an internal 'turf war' between two managers, with the first manager unwilling to change back (Why would he, he now looks good!) and the second manager - through no fault of his own - being pressurised for poor performance. Often the result will be a restructure in the belief that this will solve the problem and improve performance.
The Local Optimiser changed the process to optimise his own performance with no consideration for the impact on the wider process, other teams or the customer. In most CSPs, this behaviour will be rewarded with a promotion or a bonus - incentivising managers to put their own needs and careers ahead of the needs of the organisation and the customer.
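For readers who like to see the arithmetic, here is a minimal sketch of why this kind of local optimisation is illusory. The 15 minutes of downstream work per action is an assumption for illustration (the post only gives the first team's figures); the point is that whatever the first team 'saves' is shifted, often with interest, onto the teams downstream.

```python
# Illustrative sketch: local optimisation makes one team look faster
# while increasing the end-to-end processing time per action.
# Team B's 15 minutes is an assumed figure, not from the post.

def end_to_end_minutes(team_a_minutes, team_b_minutes):
    """Total handling time for one action across both teams."""
    return team_a_minutes + team_b_minutes

# Before: team A extracts the information itself (20 min); team B needs nothing extra.
before = end_to_end_minutes(team_a_minutes=20, team_b_minutes=0)

# After: team A just attaches the email (10 min), so team B must now read it
# and extract the information itself (assume 15 min per action).
after = end_to_end_minutes(team_a_minutes=10, team_b_minutes=15)

assert before == 20 and after == 25  # team A halved its time; the process got slower
```

Team A's reported throughput goes up, yet under these assumed numbers every action now takes longer end to end - exactly the pattern that triggers the downstream team's 'poor performance'.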
In contrast, The Bottle Necker doesn't release control, but becomes a bottleneck within the decision-making process of an organisation. When under pressure, they shift into 'fire-fighting mode'. Teams can also become a bottleneck within a process flow due to local optimisation implemented elsewhere. The consequence of both is often further restructuring.
Most, if not all, of the above actions are internally focused and take little or no account of the impact they have on the customer experience or the overall performance of the CSP. But then, by its very nature, internal politics has little to do with the customer and everything to do with the internal standing and gains of individual managers and leaders. While CSPs continue to spend so much time focused internally, they are unlikely to be able to expend sufficient effort to fight their competitors effectively or meet the real needs of their customers.
Peter Bowen looks at the relationship between profitability and customer centricity - sparking a debate on the importance of customer centricity and what it really means.
Nicola Forcella Dans Le Souk Aux Cuivres
“What are you complaining about? It’s the same for everyone!” – taken from a conversation between a CSR at a CSP and a customer complaining about a service outage. I understand the customer apologised for calling.
I have long held the belief that putting the customer first was the best way of securing a long term profitable relationship. However, whilst working for a large multinational I found management were more focused on short-term profit than the customer.
This approach was somewhat echoed in a recent discussion I had on what it means to be customer centric. One of the participants in this discussion stated that to be customer centric you had to do what was best for the company ie make a profit.
I then came across an article in MIT Sloan’s Spring 2014 magazine (http://sloanreview.mit.edu/art...stomer-satisfaction/). It covered a study that looked, across industries, at the correlation between companies’ customer-satisfaction levels for a given year and those companies’ stock performance for the same year. On average, it argued, satisfaction explained only 1% of the variation in a company’s market return.
In the same publication it mentioned a 2013 article in Bloomberg Businessweek entitled “Proof That It Pays to Be America’s Most-Hated”. The magazine reported that “customer-service scores have no relevance to stock market returns … the most-hated companies perform better than their beloved peers … Your contempt really, truly doesn’t matter … If anything, it might hurt company profits to spend money making customers happy.”
The authors did admit that the above examples represented an overly simplistic examination of the relationship between satisfaction and stock performance, and that you would expect customer satisfaction to impact performance over time: simply looking at satisfaction and stock performance for the same year would not capture the complete relationship. Whilst this may well be true, could the 'putting the company first' approach explain the phenomenon of individuals who do a dreadful job in one company only to go on and do the same in many other organisations?
Perhaps there is something to ‘company first’ after all.
Hold on a minute, this can’t be right. Sure there are people out there who are natural victims and have a capacity to forgive or blame themselves for having selected the service provider in the first place; but this must be a minority. I, like the majority of people, don’t think about changing a supplier unless there is an incident that causes me to re-evaluate the relationship. Normally, this would happen at contract renewal. Providing the CSP has not done anything untoward and the new contract looks OK, I simply renew. However, I had a broadband provider who proved incompetent at installation, provided poor quality service, had staff who were poorly trained and a customer complaints process that equated to “we don’t care”. I cancelled.
The cost of providing poor service, dealing with my complaints and then losing my contract far outweighed any potential profit they may have made in the period, so if most people are like me then keeping us happy really does pay. This said, it may not be in the customer’s best interest if the company does not make a profit, as it will simply go out of business. So the challenge is how to be profitable and be customer centric at the same time.
Before leaping to the conclusion that efficiency is the answer I should point out that you can be operationally efficient without being customer centric.
For example, I was on assignment in Bangkok and went into a burger franchise. When I placed the order, the guy serving placed a stopwatch on the counter and said “if we don’t serve you in less than 1 minute we will give you a free drink”. I got my order quickly, but there was no “so how are you today - based on past orders you might like to try our special”.
If I go to my favourite Italian restaurant it’s “how are you, how’s your mother... you should try this, we think you will like it”. The service is great and I feel as though I am a valued customer. However, if I have my annual urge for a burger I would probably go back to the same franchise; maybe not in Bangkok.
I concluded that for some organisations being efficient is all that is needed, whilst others need to be both efficient and really get to know their customers, using that information to provide a better experience. Those that seek only short-term profit will ultimately pay the price. As for the ‘victims’ who put up with poor service - it’s time to ‘man up’, accept that you made the wrong choice and move on.
Nokia's Jane Rygaard continues the debate on CEM and SON. She argues for the importance of a 360-degree view of the customer, and that what's required is a hybrid approach to SON.
John William Waterhouse - The Soul of the Rose, 1908
With a wink and a nod to Jane Austen: it is a truth universally acknowledged that an operator in possession of a good customer must provide an exceptional experience. Call it pride, or maybe I’m just prejudiced, but after reading recent posts, I feel the need for a spirited debate around some of the views expressed on CEM and SON.
If I consider my own experience with my mobile operator, it’s a combination of touch points that defines my overall experience. So, yes, I agree that when I use my smartphone, I expect a given level of coverage, and naturally expect a high quality mobile broadband service. But I also judge my operator on customer care, the different service bundles, as well as on the complicated art of charging and billing in an easy to understand format. So I would expect my operator to understand my overall experience as a customer, and what matters the most for me. And I don’t think I’m alone here… check out the key customer experience management (CEM) drivers here as confirmed by our 2014 customer acquisition & retention study.
Limiting CEM to only one part of the network feels a bit one-dimensional (kind of like Mrs Bennet if you’re keeping score with the Pride and Prejudice analogy). For CEM, we need to take the blinkers off and ensure a full 360 degree view of the customer. We know network and service quality is an important driver for customer retention, but what determines network and service quality? As a mobile broadband customer, my quality of experience is equal to the sum of what’s happening in the network (operator scope) + the applications I’m using + my personal experience with whatever smart device is in my hand. Hence, my operator must consider end-to-end service quality.
Therefore, the service model must include the radio, core, transport and IT networks to give the quality overview related to experience. This is why I feel passionately that customer experience insight and action has a home everywhere in the network, and beyond.
In order to deliver the optimum quality cost-effectively, we need to automate our operations. This leads me to the matter of SON and the discussion about whether it should only be centralized. In today’s world, where real-time is king, we should be able to add the decision-making power where it makes sense. For example, some optimizations should occur at the edge of the network, without the added delay of asking a centralized algorithm what to do. Sometimes we need the umbrella view, where we can utilize CEM insight to optimize the network the right way for the right customers. In other words, the best SON option is hybrid: employing a decentralized SON functionality in the network for speed, combined with the power of centralized algorithms where insight from CEM and e2e network and service quality factor into the decisions. Call me proud, call me prejudiced – but of this I am certain!
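As a purely hypothetical sketch of the hybrid idea described above (the latency budget, task names and routing rule are my assumptions for illustration, not Nokia's design), a hybrid SON amounts to routing each optimisation decision to wherever it can be made in time: fast, local decisions at the edge, and slower, insight-rich decisions centrally where CEM data is available.

```python
# Hypothetical hybrid SON dispatcher: time-critical optimisations are
# handled by distributed SON at the network edge; slower decisions go to
# a centralized SON that can factor in CEM and end-to-end quality insight.
# The 100 ms budget is an assumed cut-off for "real-time", for illustration.

EDGE_LATENCY_BUDGET_MS = 100

def dispatch(task):
    """Route a SON task to edge or central processing by its deadline."""
    if task["deadline_ms"] <= EDGE_LATENCY_BUDGET_MS:
        return "edge"     # e.g. interference mitigation, fast load balancing
    return "central"      # e.g. CEM-weighted coverage/capacity optimisation

tasks = [
    {"name": "handover parameter tweak", "deadline_ms": 50},
    {"name": "customer-weighted capacity plan", "deadline_ms": 60_000},
]
routes = {t["name"]: dispatch(t) for t in tasks}
assert routes["handover parameter tweak"] == "edge"
assert routes["customer-weighted capacity plan"] == "central"
```

The design point is simply that neither location wins on its own: the edge has the speed, the centre has the 360-degree customer view, and the dispatcher lets each decision land where it makes sense.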
Agree? Disagree? I’d love to hear your opinion and continue the discussion.
For more great opinion from Nokia check out their blog at Nokia Blog
Our inside man, Snowden Burgess, looks at how company culture impacts on the customer experience CSPs deliver. He argues that company core values must be lived, not just written, and explains why trust is so important for both employees and customers.
John Everett Millais, The Rescue, 1855
Does the culture of a company impact the experience received by customers? Does the way leadership and management act, and the way they treat their employees, reflect on the customer experience the employees provide?
The answer to both questions is definitely yes!
The company culture surrounding your employees and the way your leadership and management teams act and treat staff has a direct impact on, and relationship to, the experience being received by customers.
I have read a lot of articles over the years about company culture and 'engaged workforces', and most if not all allude to the fact that customer experience is impacted by a poor company culture and a disengaged workforce: yet many organisations do little or nothing to address this issue.
When an organisation becomes more focused on reporting what its employees do on a daily basis, and/or driving internal targets focused on a silo mentality, the results are a foregone conclusion. They build a culture of distrust and fear, which will be passed on to their customers.
This type of organisation becomes so internally focused they forget the importance of the customer and their needs, and management behaviour damages both the company culture and employee engagement.
Consider core values: these are intended to guide how all employees at all levels should act. Take a look at the core values of your organisation. I can guarantee they will contain words such as 'Respect', 'Integrity', 'Accountability', 'Responsibility' and 'Customer Focused'.
Those companies that abide by their corporate values are normally companies whose leadership team leads by example: they talk the talk but they also walk the walk. They not only understand the values but passionately believe in them. These organisations usually have a far more engaged workforce and provide a better customer experience.
Unfortunately, there are a lot of organisations that have core values which are drilled into employees but ignored by leadership and management. I regularly encounter, and have worked for, a lot of organisations that have well-documented values but whose leadership have little respect for them. Instead, they tend to believe in the concept “Do as I say not as I do”.
If the leadership and management have no respect for their staff, their staff will have little or no respect for them; ultimately they will have little or no respect for their customer base either.
Let me explain how this works. An organisation may begin to restrict travel and expenses for employees on the premise of saving money, while at the same time its leadership travel in corporate jets and hold champagne breakfast meetings. Or an organisation may have a policy of respect and anti-bullying, only for its leadership and management teams to shout and scream at direct reports, or use micromanagement, excessive monitoring and the threat of redundancy to drive employees to work excessively long hours.
In the worst cases leaders and managers drive internal targets that directly impact the end customers, but encourage or instruct staff to ignore customers or, worse still, deliberately provide a poor experience simply to meet an internal target or milestone.
As a culture moves into one of distrust it becomes increasingly internally focused - driving internal departments to move into a silo mindset. This ultimately results in managers driving local optimisation (the changing of Process and Procedures to meet a single department's needs, with no consideration for the impact on the end-to-end process) to hit their latest targets. These organisations build a Big Brother culture, with checkers checking the checkers!
Whilst all of this is going on, the impact on the customer experience is being ignored. Distrust is passed on to the customer base, and once you lose the trust of customers, you lose your customers!
Core Values are the soul of an organisation. As leaders and managers we need to not only follow them but believe in them; we need to follow these values even when no one is looking. And we need to ensure that company targets such as revenue, profit and share price don’t drive us to put them to one side.
Ultimately, successful companies believe in talented people who build a trust culture that unites them around a customer-centric mission to which they hold themselves accountable. Revenue, profits and an increased share price will follow.
At a basic level, customers buy from companies they trust. Loyalty is driven by trust and belief in core values, and by the respect of customers. If your organisation is running a retention program aimed at reducing the amount of customers leaving, it's already too late! You're now just fire fighting and are focusing on the symptom not the cause.
Snowden Burgess is the pseudonym for an executive within the telecoms industry. His blogs tell you what's really going on behind the PowerPoint, as he shares insights that no-one else dares share.
Christian McMahon gives us some great pointers on common pitfalls of outsourcing. He argues that outsourcing is often blamed for things that are really due to poor sourcing/IT strategy and tactics. And explains why a more diligent approach to outsourcing is essential to successful outcomes.
Rembrandt Peale, Pearl of Grief, 1849
Once upon a time, when asked what the main benefit of outsourcing was, the response would have been cost savings based on labour arbitrage. Today that answer would be superficial and incomplete.
I believe the main benefits of outsourcing are now access to scarce skills, expertise and the latest technology, cost reduction, transforming capital expenditure into operating expenditure, and the opportunity to concentrate resources on core business objectives.
If you think about outsourcing in this manner, you will not only start to recognise areas within your IT organisation that would benefit from adopting it, but also ways as a strategic leader you can add further value to your entire organisation.
The first big error people make when considering outsourcing is looking to resolve a problem through outsourcing without first looking to do so in-house. But a problem remains a problem, no matter where it sits.
Sensible outsourcing providers will often sniff this out during the RFP or other stages of the bidding process, but others may take it on as a calculated risk, hoping they can fix the issue(s) while trying to win the business. (The fact that a vendor accepts this huge risk should really start ringing alarm bells, as you both know there’s an elephant in the room.)
Those that don’t take the business (and hopefully this is the majority) are likely to make you go back and fix the problem before retendering. Those who take it on will only delay the inevitable: leaving you not only with a larger problem downstream but also with the added bonus of a whole heap of complex contractual issues to sort out (which I imagine you will now discover were also not properly agreed or worded up front).
Many take this approach and get their fingers burnt with outsourcing, vowing never to return, which is a real shame, as outsourcing done in the right way is an extremely beneficial way to add to the value you provide to your organisation.
The second biggest error people make when considering outsourcing is to engage with, and select, a vendor by only having had a few live sales meetings/conference calls, with just a cursory glance over (vendor) provided case studies. They won't have visited the operating/service centres to see the outsourcing company in action in a live environment, or have met the staff who will be working with their team.
You wouldn’t do this if you were hiring permanent staff or running the project in-house, so why do this when exploring outsourcing? It makes no sense.
This often occurs, however, when a company decides to outsource a small project or a portion of it to see if outsourcing works for them in an operational sense.
In this case, the vendor is often chosen on labour arbitrage alone, and because of this the work is often performed in Asia or Eastern Europe. The ‘project’ is then left with the vendor with scant and seemingly erratic communication, and only pored over in detail once the deliverable is returned with obvious errors.
The result is the project often has to be redone in-house - blowing the project budget, causing delays and delivering red faces all round. Outsourcing is again blamed as the enemy: the lack of communication and poor vendor selection/interaction issues are conveniently swept under the carpet.
So, in summary it may be that outsourcing is not for you; but you owe it to yourself and your organisation to try everything that you can to add value to what you deliver. Outsourcing executed properly can provide real value when opportunities are identified, structured, communicated and managed correctly, so what are you waiting for?
Christian McMahon is Founder and CEO of three25. Christian founded three25 to provide his award-winning services as a leading global IT and Digital executive, innovative thought-leader & trusted advisor. He has over 20 years' experience in delivering commercially-focused, world-class multinational IT organisations. He is a recognised blogger and respected expert in the IT sector, with significant reach & engagement across social channels.
Nir Asulin, CEO of FTS, outlines the important role policy control and charging has in the commercialization of Wi-Fi services. He describes how operators can move from just offloading to ensure QoS to creating an additional way of generating money.
Market Scene, Henry Charles Bryant (1835–1915)
Policy Control is Central to Monetization of WiFi Offloading
A popular way to ease congestion on a cellular network is to make optimal use of WiFi, as it is a very cost-effective and efficient means of offloading large quantities of mobile data traffic. This is feasible because WiFi is ubiquitous. It allows the operator to provide affordable, always-on data connectivity, which ensures the lucrative provisioning of a variety of new services with appealing subscriber-directed packages.
That being said, the idea of making money from WiFi offloading has yet to be universally accepted. This is probably because operators still don’t believe monetization of the concept is feasible.
The implementation of a complete solution – one that supports the integration of an offloading mechanism together with a BSS solution (policy control and charging) – enables an operator’s marketing team to introduce policy and charging rules that switch the user from the mobile network to the most highly available WiFi networks based on a variety of easily configured parameters. This even includes, for example, the type of traffic that can be offloaded and that which will be left on the mobile network.
Sounds too good to be true? Ultimately this is exactly what a smart policy control and charging (PCC) application – integrated with a third-party WiFi offload platform – should provide. Policy control gives an operator’s marketing team the ability to easily and rapidly introduce or modify policy and charging rules that transparently switch the user from the mobile network to WiFi networks.
How Policy Control is used to Monetize WiFi Offloading
So how does policy control help monetize WiFi offloading? Firstly, an operator’s back-office team can now implement the most innovative ideas originated by marketing. This enables them to go far beyond the scope of standard BSS configurations to deliver flexible and fully customizable marketing packages. Here are just a few ideas that can easily be implemented using an integrated policy control and charging solution within the shortest possible time to market:
Offloading video only and leaving VoIP or VoLTE services on the reliable cellular network.
Providing incentives such as data credits (additional free megabytes of data usage) for customers who are willing to move certain types of traffic to WiFi (video, music, gaming, etc).
Charging the subscriber per transport layer (WiFi or cellular), rather than relying on a standard charging model, eg lower rates for data transferred over WiFi (eg each MB is counted as if it were 0.5 MB).
Offering smart routing so that a customer is able to choose between WiFi and mobile usage based upon required quality.
Combining seamless WiFi and 3G/4G data transfer with a cap on total data volume.
Proposing low-cost WiFi services for new subscribers who have yet to sign up for a mobile data plan.
Offering low-cost WiFi data services during the hours of the day or days of the week when users are usually away from home or the office (train stations, shopping malls, parks and other locations where hotspots are available).
Using an integrated offloading and policy control solution, specific packages for customer segments can be created using simple rules:
A rule for quality based upon a subscriber’s preferences (eg the best quality network at premium prices or decent quality at much lower rates).
A rule that rejects WiFi leaving the user only on a cellular network.
A rule that puts the user only on trusted WiFi networks or the operator’s own hotspots, such that the service will not kick into WiFi if the network in question does not meet predefined standards.
A rule that transparently switches a subscriber from a cellular to a WiFi network, when available, no matter what the circumstances are.
All such scenarios are very easy to achieve using specific profiles transferred to routing and offloading platforms on a per-subscriber or group basis. If an operator looks not only to reduce the mobile network load, but also commercialize this capability, then additional new packages that include WiFi offloading can be sold with bonuses and incentives for subscribers who are ready to offload all or certain types of traffic.
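The rule-based scenarios above can be sketched in a few lines of code. This is purely an illustrative sketch: the rule fields, function names and 0.5 rate factor are invented for this example and do not reflect any specific PCC product's data model.

```python
from dataclasses import dataclass

# Hypothetical per-subscriber offload rule; field names are illustrative only.
@dataclass
class OffloadRule:
    traffic_type: str    # e.g. "video", "voip"
    allow_wifi: bool     # may this traffic type be moved to WiFi?
    trusted_only: bool   # restrict offload to operator-trusted hotspots
    rate_factor: float   # charging multiplier vs. cellular (e.g. 0.5)

def select_bearer(rule, wifi_available, wifi_trusted):
    """Decide which network carries a flow under a given rule."""
    if not rule.allow_wifi or not wifi_available:
        return "cellular"
    if rule.trusted_only and not wifi_trusted:
        return "cellular"
    return "wifi"

def charge(rule, megabytes, bearer):
    """Apply the discounted rate when traffic rides over WiFi."""
    factor = rule.rate_factor if bearer == "wifi" else 1.0
    return megabytes * factor

# Offload video to trusted hotspots at half rate; keep VoIP on cellular.
video = OffloadRule("video", allow_wifi=True, trusted_only=True, rate_factor=0.5)
bearer = select_bearer(video, wifi_available=True, wifi_trusted=True)
print(bearer, charge(video, 100, bearer))  # 100 MB of video billed as 50
```

In a real deployment these rules would live in the policy engine and be pushed to the offload platform as per-subscriber or per-group profiles, as described above; the point here is only that each marketing idea reduces to a small, declarative rule.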
WiFi can also be marketed as a value-added service with the combined data package being positioned as an elite item, thus generating even more revenue.
Another possibility is to further enhance the WiFi offering with agreements with WiFi hotspot providers (such as airports, universities etc). Further monetization is achieved in this case, by implementing smart revenue-sharing mechanisms, enabling innovative business models within multi-partner, complex value chains.
The successful deployment of WiFi is a question of marketing and should not be regarded as a billing issue. Service providers’ marketing teams deal with customer experience rather than network and billing problems. Modern billing and policy management tools should give operators the ability to launch new services at the speed of marketing, enabling the commercialization of network capabilities such as WiFi offloading.
With more than 15 years of experience in the telecom and software industries, Nir has led FTS's operations since 2012. Nir joined FTS in 2003 and moved from managing the Quality Assurance group to leading the Professional Services and Projects Delivery groups - eventually becoming the VP of Global Operations within FTS in 2009. Nir has both technology and management experience in billing & customer care, CRM and policy control software, having previously spent several years at Amdocs’ Professional Services group, holding several positions at large European and American telecom operators.
Amdocs's Liron Golan argues that central product catalogs offer service providers the opportunity to escape 'the circle of silos'. He says they offer greater flexibility when defining offerings across multiple lines of business, and create a single source of product truth within the service provider organization.
Das Blinde-Kuh-Spiel, 1865, Ferdinand Laufberger
As a service provider’s business evolves and grows, it deploys thousands of complex products and services using different technologies across networks and business sectors. To keep up with the quick pace of the market and effectively compete with traditional competitors and over-the-top (OTT) players, greater effort is needed to keep these pockets of product information up-to-date and synchronized. Failing to do so will, more often than not, result in slow time to market, adversely affecting product quality and weighing down service providers trying to launch themselves into a new era of telecommunication services.
To address this evolution, service providers must ensure a unified business support system (BSS)-operational support system (OSS) environment across all lines of businesses, networks, devices and customer personas. However, today’s existing silo architecture limits service provider flexibility.
One way for service providers to free themselves from the circle of silos is to combine all product data into a single centralized container – an enterprise product catalog – which holds all aspects of product data in one canonical information model, to provide a single source of “product truth”. To make this a reality, service providers need a centralized catalog that fits different systems and supports the product structure that corresponds with business needs. Data model flexibility allows easy adaptation to business needs on one end and is synchronized with the needs and abilities of the BSS/OSS on the other.
Not insignificant is the aging of legacy systems, which has become more apparent in recent years. Many legacy systems are unable to support new business models and services. Adding to the challenges are the additional silos built alongside these systems, often to support new services and business models that the legacy systems cannot handle. Modernization and replacement of these legacy systems is needed, but replacing them in one ‘big bang’ operation may not be appealing to all service providers. This is where an enterprise product catalog can really help.
Enterprise product catalogs help service providers optimize revenue generation by providing greater flexibility when defining offerings across multiple lines of business. This is possible due to sophisticated and complex product and service bundling, discounting plans and promotion mechanisms, which provide service providers with the ability to tailor their offerings to match the specific needs of each customer segment.
Since modern BSS/OSS follows a well-defined business process that is driven by an enterprise product catalog, service providers can gain additional benefits and strengths.
For example, the enterprise product catalog can store and point each product, offer or bundle to the relevant business process or sub-process, as well as provide parameters, attributes and every other aspect of the product definition needed for corresponding business processes, such as order-to-cash or catalog-driven provisioning.
Since hard-coded processes are no longer needed, changing existing business processes, introducing new ones and determining the way products are handled is placed in the hands of product managers and marketers, enabling them to respond quickly to the latest business requirements without being tied down by system capabilities.
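The catalog-driven idea can be sketched as a simple data structure: each catalog entry carries its own process binding and attributes, so no fulfilment logic needs to be hard-coded. This is a minimal sketch; the product identifier, field names and process names are hypothetical, not drawn from any particular catalog product.

```python
# Illustrative sketch: a catalog entry binds a product to its business
# process and parameters, so process selection is data, not code.
catalog = {
    "4G-UNLIMITED": {
        "type": "offer",
        "business_process": "order-to-cash",
        "provisioning_flow": "catalog-driven-provisioning",
        "attributes": {"data_cap_gb": None, "contract_months": 12},
    },
}

def process_for(product_id):
    """Look up which business process handles a given product."""
    return catalog[product_id]["business_process"]

print(process_for("4G-UNLIMITED"))  # order-to-cash
```

Changing how a product is fulfilled then means editing a catalog entry rather than redeploying process code, which is the agility benefit described above.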
By streamlining the business processes associated with the introduction of new products, enterprise product catalogs can reduce the product development cycle, as well as greatly increase service providers’ agility - ultimately increasing customer loyalty and profitability. An enterprise product catalog allows service providers to respond to changing market conditions and introduce new initiatives quickly and efficiently, providing a much-needed competitive edge.
Liron Golan heads up Amdocs cloud marketing. He is responsible for defining the marketing and business strategy for Amdocs’ telco cloud solutions, which include cloud brokerage (cloud enablement) and BaaS (BSS as a service).
In this post, MDS's Andy Peers looks at why business innovation has become so important to CSP success. He argues that CSPs need to look outside the industry for inspiration and highlights an example of recession-busting pricing success that CSPs can take inspiration from.
Marché avec l'Ecce Homo, Joachim Beuckelaer, 1561
In telecoms we like our innovations. But while we’re constantly talking about innovation, we usually think of it as something comprised of either wires or code – something technological.
But innovation has many forms. It doesn’t have to be revolutionary, it can be quietly incremental; it doesn’t have to flow from developed to developing markets, but can flow in the opposite direction; and it doesn’t have to be technological, but can be business innovation.
Arguably, of all the varying forms of innovation, business innovation is where CSPs struggle most today – particularly in the fields of business models and pricing. Many CSPs don’t even associate the concepts of innovation and pricing – paying little attention to the role pricing can play in increasing the appeal of a product or the health of a business. Yet better pricing strategies are a great way for CSPs to differentiate themselves from competitors, while creating a different commercial relationship with their customer by understanding what the customer actually values in their products and services, and then meeting that need.
Typically, CSPs build networks and then figure out how they’re going to price the connectivity and any other value-added services they sell on top of it. This “build it and they will come” approach is coupled with a “cost-plus” mindset, which means they expect a certain return from their investment, and price accordingly. Yet more sophisticated approaches to pricing have been used to build product, business and industry success for many years. Henry Ford, for example, began the automotive revolution with the concept of price: he was going to build a car that was affordable for those who couldn’t previously afford cars. The product definition, manufacturing and business processes stemmed from his desire to build a car “so low in price that no man making a good salary will be unable to own one.”
In fact, telcos can gain new pricing ideas and innovative approaches by looking outside their own industry. While telecoms companies in developing markets have had success designing products to fit price niches – for example, Vodafone offering prepaid cards in denominations as low as Rs10 in India - designing products to fit price niches is something that retailers, in particular, are extremely adept at. A great example of a retailer who has used this pricing approach successfully is UK retailer Poundland, which is now owned by Warburg Pincus. How did Poundland enter a crowded retail market and transform itself into a 528-branch, £1 billion, recession-beating success story, with over 10,000 employees and four million weekly shoppers, of which more than 1 in 5 are now staunchly middle class (ABs)?
The answer is a very clever – if very simple - pricing strategy. The pricing strategy actually defines the brand: Poundland is a shop where everything costs £1. The beauty of it is that customers understand the pricing and buy into the concept; Poundland alters the product to fit the pricing, rather than varying the pricing to fit the product.
The company began by selling an idiosyncratic assortment of homeware products, end-of-line chocolates and biscuits, and products destined for export that got left on the docks. Now it sells a much bigger range of products – including fresh food, toys, stationery, reading glasses, gardening goods and make-up. It’s the UK’s biggest seller of Toblerone, and sells 14,000 pregnancy tests a week.
It has such buying power that it can go directly to the biggest manufacturers and get bulk deals and special-sized packs which are designed to meet its £1 price point. It can also offer clearance for manufacturers that find they’re overstocked or need to liquidate stock quickly.
Poundland terms its ability to design a product to its price point as “re-engineering” – this means if the cost of a product goes up, it shrinks the amount it offers to maintain the £1 price point.
It’s also really good at marketing its bargains. Many packs will come with a flash pointing out the extra value (“3 bars plus 1 free”). Shoppers like this tactic because it’s pointing out the value and reassuring them they’re getting a bargain.
Gorkan Ahmetoglu, a consumer psychologist at Goldsmiths, University of London, advised the Office of Fair Trading on a recent pricing study. He says bargains and offers stimulate a biological change in customers’ brains that “trigger a reaction which says ‘This is a reward’.” Ahmetoglu points out that it’s a very effective strategy to encourage consumption.
Another Poundland strategy that CSPs would do well to take note of is that its own-brand goods are retailed under a variety of different names. Retailers call this “phantom” branding. Thus candles are supplied by Coley and Gill and the minced beef by Fenback Farm. This strategy caters to the bargain-hunt mentality, with names carefully chosen to make consumers think the products are worth more than the £1 being charged, or to evoke an emotional response and piggy-back off the expensive market-building marketing of others. Fenback Farm, for example, sounds quintessentially English, making us think of small-scale production, free-roaming cows and old-world values.
It’s notable that Poundland does not use its own brand name on these goods: it wants its own brand to stand for “cheap” and “value”, while its phantom brands suggest “expensive” and “valuable”. It reinforces this effect by mixing own-brand goods with market-leading brands that draw shoppers in.
There are many clever tricks in Poundland’s strategy that CSPs can learn from, adapt and re-use. Simple pricing has some history in telecoms – SMS, for example, was partly successful because the pricing was simple for customers to understand. But simplicity of pricing is an underutilised strategy, with CSPs retrenching from all-you-can-eat (AYCE) bundles to more complex pricing. The nadir of this approach is possibly volume-based pricing, which customers continually communicate their frustration with. (What is a megabyte of data exactly? What value does it offer in the real world?)
Using a multi-brand strategy to convince customers they’re getting quality products for keen prices, sourcing cheaper products and raw materials to create “bargain sales”, and designing to a price point are all tactics that CSPs could utilise effectively.
I’d argue that in future, business innovation is going to be just as important in this industry as technology innovation, if not more so. To tap into pricing and business innovation, CSPs need to analyse the best that’s happening in our own industry and shamelessly copy and adapt; but also look outside our industry for new sources of inspiration.
For more on this topic
Ensure you reserve a copy of the new Telesperience issues paper we’ve commissioned on Pricing Innovation, and sign up for our upcoming free webinar. Send your contact details now to firstname.lastname@example.org and we’ll keep you updated.
Portrait of Jimi Hendrix, oil on canvas, by the Swedish artist Tommy Tallstig
Andy is VP Product & Services Strategy at MDS and has over 25 years industry experience. He talks regularly to customers about their business challenges, and pricing and innovation is a recurring topic.
Telesperience: operational efficiency, commercial agility and a better customer experience
A conversation between Rob Rich, managing director of insights research at TM Forum, and Annie Turner, editorial director at TM Forum, about customers, net promoter scores and big data analytics.
AT | We’ve talked a lot about improving customer experience. Why does progress seem slow? RR | Progress has been slow because, firstly, some service providers have struggled to cope with the expanding breadth and complexity of the market. Secondly, the urgency has only become apparent as markets have matured; margins and revenues have fallen; and device, app and web companies have distanced service providers from customers. Finally, organizational and cultural issues have certainly come into play, particularly for some ‘traditional’ service providers.
AT | Has your view of customer experience changed in the last five years? RR | The fundamentals have changed little. For the customer, it’s still about usefulness, convenience and trust. But, nowadays, the diversity of devices, channels and services, plus the volume of usage, have exploded, meaning that customers are more in control.
AT | Should there be more focus on providing excellent coverage and connectivity – and the agile IT infrastructure to enable it – rather than net promoter scores (NPS)? RR | I would argue that fundamentals like coverage and service performance heavily influence NPS. If your service performance is poor, is that useful for end users? Convenient? Does that increase trust? The best way to increase NPS is to do the basics well – determine what’s really important to your customers, prioritize that and your NPS will likely improve.
AT | What are we trying to achieve with big data analytics? RR | There are many areas where ‘big’ and ‘small’ data can help, and service providers have a huge opportunity to improve customer experience, develop attractive products and offers, and lower costs.
AT | What are the most important areas big data could help service providers with? RR | Several service providers are using network and customer data to learn the habits of their most valuable customers – for example, which geographic locations they favor. They also use network data to determine overall usage in those locations. By comparing the two, they can drive support for future network investments that will improve network performance and boost customer satisfaction. Another example might include improving recommendation engine performance by adding unstructured and semi-structured data to the analytical mix. Another could be social network analysis to identify influential users’ propensity to churn. It all depends on what the service provider views as most pressing.
AT | When will big data analytics deliver in the communications industry? RR | Big data analytics is still in its infancy and will continue to develop over at least the next decade or two. But for service providers approaching it with the right strategies and resources, it’s already delivering value right now!
Annie Turner is Editorial Director at TM Forum. She has been researching and writing about the communications industry since the 1980s, editing magazines dedicated to the subject, including titles published by Thomson International and The Economist Group. This article is excerpted from TM Forum’s Perspectives publication. Topics such as customer experience, analytics, digital services and IT transformation are the focus of TM Forum Live!, June 2-5, Nice, France.
Amdocs' Tanya White looks at how the widespread introduction of self-optimizing network (SON) features within the radio access network (RAN) means customer experience measurements are rapidly shifting toward the more relevant metric of active customer experience improvement. She argues that operators who've implemented SON, and especially centralized SON (C-SON), have seen impressive gains in their network performance resulting in enriched customer experiences.
Anders Zorn, Omnibus, 1891/92
The widespread introduction of self-optimizing network (SON) features within the radio access network (RAN), means customer experience measurements are rapidly shifting toward the more relevant metric of active customer experience improvement.
The day-to-day customer experience of most subscribers is very straightforward. Every day, subscribers make and receive phone calls, have missed and dropped calls; they check Facebook, Twitter, email and the Web, in good coverage and bad. Essentially, their experience comes down to whether they have good voice and data coverage. The customer experience is therefore intimately tied with the success or shortcomings of the RAN.
This year, according to IDC, sales of smartphones overtook sales of feature phones globally for the first time. As an industry, we know that when people start using smartphones they use more data, and then start to rely on that data being universally available. As proof of that, a recent global survey by a major network equipment provider found that Internet access quality has become a deciding factor in the choice of networks in mature markets, with voice quality being key in developing markets.
The survey goes on to explain that the likelihood of subscriber wastage (defined as losing a customer unnecessarily through inaction) has increased by over 20% since last year.
This tells the story that even with a renewed emphasis on connection quality, and data connection quality in particular, expectations are not being met by many operators. So if voice and data coverage are primary factors, how can operators best address those requirements to improve the customer experience and reduce strategically important aspects of market share loss?
In our experience most operators have moved beyond the thinking that LTE rollouts are a panacea. The most advanced operators understand that they need to manage the performance and behavior of their resources better, regardless of network access technology.
So it's not surprising that many of the world's largest and most advanced mobile operators have started to look seriously at self-optimizing network (SON) technology. SON is arguably one of the few mobile technology trends over recent years that has delivered upon its initial hype, with immediate, meaningful and long-lasting improvements both for mobile operators’ businesses and subscribers’ daily experience.
Operators who have implemented SON, and especially centralized SON (C-SON), have seen impressive gains in their network performance resulting in enriched customer experiences. They have seen greater than 15% improvement in capacity utilization and over 20% improvement in dropped call rates, typically within a few hours or days after SON installation. And this all happens while substantially simplifying network management complexity.
But not all types of SON are born equal. There are two main flavors of SON: C-SON and distributed SON (D-SON). D-SON typically applies to a single node or a small localized cluster of cells, and does not allow for significant coordination across diverse infrastructure vendors' equipment. Because of these limitations, there is some concern that D-SON is already becoming an obsolete technology. C-SON, on the other hand, allows the whole network to be self-optimized because it focuses on solving quality, capacity and coverage issues across the entire network.
Another benefit of C-SON is that, as a technology, it was built to be much closer to the subscriber than many types of customer experience management tools. When operators try to measure the customer experience, very rarely does it include device-level measurements, which is where the action happens as far as the subscriber is concerned.
Simply put, they are not really measuring the subscriber’s de facto experience. Most of the time operators are measuring how their own network is performing, and from there extrapolating the impact upon their subscribers. Modern C-SON systems, like those from Amdocs, are already processing tens of millions of events daily in live HetNets in some of the world’s largest megacities, and are able to take every single subscriber’s actions, movements and experiences into consideration when adjusting network parameterization.
With SON, the management of everyday customer experiences is entering a new and more exciting phase. Instead of the relatively incomplete definition of customer experience management, SON provides the much more relevant proactive customer experience improvement. Customer experience management as a term is not going away, but as an industry, if we are a little more pragmatic and grounded about what we need to achieve, we can see that the home of customer experience improvement is in the RAN – and it always has been.
Tanya White is responsible for product marketing in the Amdocs Radio Access Network (RAN) division. She has over 13 years of telecommunications experience in marketing, strategic planning, and marketing analytics.
Affandy Johan, Senior Product Marketing Manager from InfoVista looks at the importance of understanding quality of experience, and how CSPs can leverage subscriber-aware data to optimize network performance and deliver a consistently high QoE, thus preventing dissatisfaction and churn.
Lesser Ury: Leser mit Lupe, c1895
As mobile penetration around the world continues to creep higher, most mobile operators have shifted their focus from customer acquisition to customer retention. Mobile operators are no longer working to “grow the pie”; rather, they are trying to gain a bigger slice of the existing one. With that shift in mind, mobile operators are increasingly realizing the importance of delivering a consistent, high quality of experience (QoE) to satisfy and maintain their relationships with subscribers.
Yet many are still using the same legacy tools and key performance indicators (KPIs) they have always used to report on mobile network performance and quality of service (QoS). While KPIs provide relevant data for benchmarking network-wide performance, they offer only a broad portrayal of service levels, which makes optimizing the individual customer experience difficult. To guarantee QoE and prevent customer churn, mobile operators need more granular, customer-centric views into network performance.
Having access to subscriber-aware data, such as call trace information, is essential for mobile operators. This data is already generated by the network, so it costs mobile operators little to collect, and it provides powerful insight into the real behaviors and distribution of subscribers on their mobile networks. Additionally, by collecting call trace data, mobile network optimization engineers can better measure and understand events such as dropped and blocked calls in order to address QoS issues and prevent them in the future. With this insight, the priority can shift to troubleshooting network faults experienced by individual users and optimizing the network to best serve high-value subscribers.
In addition to monitoring and managing individual users’ service performance, mobile operators can leverage the same data to identify the equipment or applications contributing to a poor QoE. While operators may or may not be able to reduce this negative impact on network performance, the insight enables them to offer concrete recommendations to subscribers in order to boost their network QoS.
Finally, call trace information can dramatically reduce the need for time-consuming drive tests, saving mobile operators significant resources and speeding up the data collection process. Call trace data is much more cost-effective to collect and can be applied to a number of network optimization tasks, including cell troubleshooting, area-wide optimization and VIP call monitoring. And, if mobile operators can collect this call trace data in real time, the impact on QoS is much more immediate, because network optimization engineers can quickly drill down to improve the areas known to have quality issues.
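To make the idea concrete, here is a minimal sketch of how call trace events might be rolled up into per-cell failure rates for troubleshooting. The event schema (`cell_id`, `subscriber_id`, `outcome`) and the 40% threshold are illustrative assumptions, not a real call trace format or vendor API.

```python
# Hypothetical sketch: aggregating call trace events into per-cell QoS indicators.
from collections import defaultdict

def cell_drop_rates(events):
    """Return the fraction of dropped or blocked calls per cell."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for event in events:
        cell = event["cell_id"]
        totals[cell] += 1
        if event["outcome"] in ("dropped", "blocked"):
            failures[cell] += 1
    return {cell: failures[cell] / totals[cell] for cell in totals}

# Toy call trace log (invented values for illustration).
events = [
    {"cell_id": "A1", "subscriber_id": "s1", "outcome": "completed"},
    {"cell_id": "A1", "subscriber_id": "s2", "outcome": "dropped"},
    {"cell_id": "B2", "subscriber_id": "s3", "outcome": "completed"},
    {"cell_id": "B2", "subscriber_id": "s1", "outcome": "blocked"},
    {"cell_id": "B2", "subscriber_id": "s4", "outcome": "completed"},
]

rates = cell_drop_rates(events)
# Cells exceeding an arbitrary failure threshold become troubleshooting candidates.
hotspots = [cell for cell, rate in rates.items() if rate > 0.4]
```

The same aggregation could equally key on subscriber rather than cell, which is how VIP call monitoring would fall out of the identical data set.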
Mobile network conditions are constantly fluctuating. Even within the same mobile cell, some subscribers may have positive experiences, while others may not. Mobile operators must, therefore, be able to access reliable, accurate and up-to-date performance data at any given moment to ensure the best possible QoS for their subscribers.
Historically, service assurance has been reactive for many operators; this can no longer be the case. By becoming more aware of individual subscribers’ QoE, mobile operators can mitigate or avoid potential network performance issues and proactively address customer dissatisfaction before subscribers churn.
Steve Hateley, head of marketing at Comptel, argues that communications service providers (CSPs) are at a crossroads, forced to reconsider their approach to software provision and to look seriously at the cloud.
In the Tyrol, Albert Bierstadt
For years, business models and network architectures have remained relatively stable. But, faster than you can say “WhatsApp,” everything has changed.
The influx of mobile devices and usage of data services, the increasing competition from over-the-top (OTT) providers and the pressure to cut costs and improve efficiency have forced CSPs to seriously reconsider their approaches. Now, businesses are looking less at hardware, and more at the cloud, to automate, scale and simplify infrastructure and operations management.
Recent research shows that the majority of C-level executives are thinking about this – 64% of CMOs and CTOs/CIOs are working to incorporate cloud-based technology into their OSS/BSS systems this year, and 58% believe their OSS/BSS systems need to be modernised and consolidated.
By going ‘virtual’ with software-defined networking (SDN) and network function virtualization (NFV), CSPs can become more agile and more responsive to market demands than ever before. They will be able to create personalised service packages that fulfil consumers’ infotainment desires, win business through more dynamic, contextual marketing offers and, ultimately, achieve more cost-effective operations by bridging silos throughout their organisations.
SDN and NFV have been big focuses at conferences lately, as well as the topic of conversation among industry thought leaders. But how do these technologies work together? And what implications do they have for OSS/BSS systems?
The SDN/NFV Partnership
While NFV and SDN are different technologies, they are, in effect, two sides of the same coin.
NFV focuses on the consolidation of proprietary parts of the network, such as hardware-based routers, and brings those functions to a virtual environment. Meanwhile, SDN works to centralise command and control capabilities within a virtual environment. By decoupling network control and data planes found in network equipment, SDN creates a central layer of control between infrastructure and applications.
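The decoupling of control and data planes can be sketched in a few lines. This is a toy illustration only, not any real controller API: the `Controller` and `Switch` classes are invented names, but they show the core idea that switches only forward by table lookup while all policy decisions live in one central place.

```python
# Toy sketch of the SDN split: central control plane, dumb data plane.
class Switch:
    """Data plane: forwards packets purely by flow-table lookup, no local decisions."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")


class Controller:
    """Control plane: the single place where forwarding policy is decided."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_rule(self, switch_name, dst, out_port):
        # Policy is pushed down to the data plane as a match-action rule.
        self.switches[switch_name].flow_table[dst] = out_port


controller = Controller()
edge = Switch("edge-1")
controller.register(edge)

# A policy change happens centrally and takes effect on the data plane at once.
controller.install_rule("edge-1", dst="10.0.0.5", out_port="port-3")
```

In a real deployment the controller-to-switch conversation would run over a protocol such as OpenFlow, but the division of labour is the same: the switch never decides, it only looks up.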
When the physical network no longer has to be managed to introduce new services, or adjusted to fulfill complex customer requirements, opportunities will quickly emerge.
The agility that SDN and NFV enable will usher in a new era for CSPs. Provisioning and activation across the network will be more accurate and consistent, allowing for seamless service rollouts and more streamlined inventory management. SDN/NFV won’t just improve operational efficiency and allow for faster time-to-market: these technologies will also help establish partnerships with OTT providers, creating a dynamic, virtual layer that will enable a whole new ecosystem of potential service and monetisation opportunities.
Consolidation and Clouds
To maximise these benefits, OSS/BSS systems have to be consolidated and modernised. Currently, a CSP’s operations team can be working with more than a dozen different systems, trying to piece together information from another dozen silos to build a coherent picture of the network and customers’ needs.
Things won’t change radically – OSS/BSS systems will still work to collect customer, network, service and other important data – but SDN and NFV will play an important role in streamlining the data collection, analysis and business decision-making processes. Additionally, SDN/NFV opens the door for automation across the business, whether CSPs are interested in increasing efficiency across the network, operations or charging and billing.
This will mean big changes for OSS/BSS vendors, but the change is likely to be an evolution, rather than a revolution. Instead of creating software that’s based on proprietary hardware, the new OSS/BSS programs will live in the cloud, with software that can integrate with legacy systems and provide new efficiencies. This will ultimately lead to more options for CSPs, because each system will be able to work across any virtual environment.
In this case, modernisation will have to start with consolidation. Once all of the network and customer data from legacy OSS/BSS systems has migrated to a central platform, SDN and NFV technologies can really get to work, and CSPs can finally move infrastructure from the ground up to the cloud.
You can read more from Comptelians at the excellent Comptel Blog.
In this post our inside man Snowden Burgess analyses another reason why CSPs are slow in transforming their customer experience. He looks at the fire fighting mindset, and how it encourages and incentivises the wrong behaviour from both CSP staff and customers.
The Burning of the Houses of Lords and Commons, 16th October 1834, JMW Turner, 1834/5
We have all heard the term ‘fire fighting’ or ‘fire-fighting mode’, but what does this really mean, and what impact does it have on the overall customer experience?
Many organisations fire fight and, used in the right way, it can be a very effective tool to rally the troops and tackle a difficult or serious problem within your organisation or within a specific customer account. At its core, fire fighting is effective and targeted troubleshooting to resolve a specific issue or problem.
But organisations frequently drift into a structure that promotes fire fighting as the normal process for day-to-day activities, creating a culture of pressure, stress and disruption as each new fire is dealt with, while whole processes and procedures are adapted to facilitate the requirement to fire fight. Staff become desensitised to the fires and dangers as these become commonplace, which means fires now need to be bigger and more dangerous before they attract the attention of the fire fighters within the organisation.
Teams and departments now organise themselves around the need to fire fight; managers and leaders target staff not on the day-to-day running of the business but on how many fires they deal with. This promotes the need to fight fires.
As the engine room of the organisation strives for operational excellence, they reduce head count to meet declining revenues, but this merely creates further fires within the organisation and customer base.
Recruitment now starts to focus on recruiting more fire fighters (heavy-hitting troubleshooters) to fix the internal problems, but they are quickly consumed by the raging fires around them.
The cycle continues and most fires never really get put out; they're just brought under control for limited periods of time, before the fire fighters are pulled away to the next inferno.
As the organisation drifts into the fire-fighting culture, both staff and customers realise that the only way to get the attention they need is to start fires or at least to dial 999/911 and claim there is a fire.
This is mainly because key staff become far too focused on the latest fire, leaving little time for the day-to-day activities that keep the lights on in an organisation. A lack of attention to these activities simply turns them into smouldering issues that quickly ignite into the full-blown fires of the future.
As the fire-fighting culture starts to take hold, the key fire fighters within the organisation start to thrive on the excitement and adrenaline of fighting fires, along with the prestige and focus that comes with taming the latest fire. Senior management and customers reward those assisting with the fires and ignore those driving the day-to-day activities of the business.
By this time the organisation is in the tight grip of the fire fighters, with little or nothing getting done unless it’s a blaze: the false impression is created that only *they* can sort out the problems. Likewise, the argument arises that they simply don’t have enough fire fighters, and the myth is born "fire fighters are critical to the success of the business!”
"The great enemy of the truth is very often not the lie, deliberate, contrived and dishonest, but the myth, persistent, persuasive and unrealistic." John F Kennedy
In the midst of the fire fighting all organisations have their 'fire prevention teams' - otherwise known as 'Customer Experience', 'Service Improvement', 'Process Development' and 'Operational Improvement' teams. Unfortunately, fire prevention is slow, boring and unexciting, with little chance of winning any awards or fame.
Resources and funding allocated to fire fighting far outstrip those given to fire prevention. Where the culture is really bad, fire fighters actively push back on fire prevention, trying desperately to maintain the status quo.
"Status quo is Latin for 'the mess we're in'." Ronald Reagan
So what does this mean for customer experience, which is ultimately at the centre of most fires?
An organisation that has a culture of fire fighting is inevitably internally focused, with most decisions based around what the fire fighters need to fight fires or, worse still, around measuring how many fires they can put out! The customers are forgotten, in the belief that putting the fires out will improve customer experience; everything is about the short term, with little attention paid to long-term planning.
Ultimately customers only get the attention they need when they dial 999/911, and only for as long as they keep the fires burning!
Fire fighting is a critical tool and skill, but organisations need to take a leaf out of the book of the world’s real fire fighters, who know that a day sat at the station not fighting fires is a good day!
Snowden Burgess is the pseudonym for an executive within the telecoms industry. His blogs tell you what's really going on behind the PowerPoint, as he shares insights that no-one else dare share.
Sanjay Kapoor, CMO at Nominum, looks at the hidden power of DNS and how CSPs can unleash its potential to provide immediate impact on their customer experience and commercial agility.
James Ward, An Overshot Mill, 1802-1807
The age of CSPs focusing their efforts on attracting new customers, fuelled by the launch of an increasing number of feature-rich smartphones available on subsidised 48-month tariffs, is coming to an end.
In an era characterised by growth in consumers buying handsets outright and shopping around for the best SIM-only deal, retention has become the new priority in response to market pressures.
Unfortunately, most CSPs are currently not ‘engaged’ with their customers. Ask the average subscriber how they feel about their mobile or landline provider, and the answer is likely to be negative if they have had any recent problems, or apathetic if everything has been running well. This is the common response to any provider that supplies a utility with little attention to brand loyalty or customer interaction: they provide their service and billing, but that is where the engagement ends.
Nowhere in the chain do subscribers feel they get a personal, tailored service or additional value from being a customer of their provider. Compare this to the way Google or Facebook users engage with the brand, and we can see CSPs clearly have a long way to go in terms of achieving strong, two-way relationships with their customers.
However, help is at hand – all that is needed is a change in perception from CMOs and for them to realise the power of the data they already have at their disposal.
All CSPs who offer an internet connection have access to a wealth of data already stored within their networks which can help protect their customers, increase engagement through more targeted marketing and even open new revenue streams through third-party advertising. This doesn’t need to be a large marketing analytics platform, but something much simpler and intrinsic to their operations – the DNS.
The DNS is one of the building blocks of the internet and is used to direct each query to the correct website. Nominum servers alone process 1.5 trillion queries each day. By using analytics tools on opt-in data derived from the DNS, CSPs can acquire insight into users’ internet activity data in near real-time without needing to resort to intrusive technologies such as third-party cookies.
By offering services which add value to subscribers, such as relevant discounts for products and services they use on a regular basis, CSPs can secure the opt-in of customers and increase the engagement and trust needed to become a more meaningful brand in their lives. Building this level of trust and engagement will enable providers to supply a greater range of value-added services to customers, creating upsell opportunities and opening new revenue streams.
An engaged subscriber is a profitable subscriber and also one less likely to churn. Using DNS data, operators can identify which customers are displaying the key behaviours associated with dissatisfaction, including visiting competitors’ websites and performing speed tests on their internet connection.
This knowledge can empower them to engage with subscribers quickly with a relevant offer; no other data collection method is as timely and relevant as DNS. Without this insight, the first time the operator is likely to be aware that a customer is unhappy is when they cancel their contract, by which time it is already too late.
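The detection logic described above can be sketched in a few lines. This is purely illustrative: the domain lists, the log format (`subscriber_id`, `queried_domain` pairs) and the hit threshold are all hypothetical assumptions, and any real deployment would of course operate only on opt-in data.

```python
# Illustrative sketch: flagging churn-risk behaviour from opt-in DNS query logs.
# Competitor and speed-test domains are invented placeholders.
CHURN_SIGNAL_DOMAINS = {
    "competitor-telco.example",
    "rival-mobile.example",
    "speedtest.example",
}

def churn_risk_subscribers(query_log, min_hits=2):
    """Return subscribers whose queries hit signal domains at least min_hits times."""
    hits = {}
    for subscriber, domain in query_log:
        if domain in CHURN_SIGNAL_DOMAINS:
            hits[subscriber] = hits.get(subscriber, 0) + 1
    return {s for s, n in hits.items() if n >= min_hits}

# Toy query log: sub-1 checks a competitor's site and runs a speed test.
log = [
    ("sub-1", "news.example"),
    ("sub-1", "competitor-telco.example"),
    ("sub-1", "speedtest.example"),
    ("sub-2", "video.example"),
    ("sub-2", "rival-mobile.example"),
]

at_risk = churn_risk_subscribers(log)
```

A flagged subscriber would then be handed to the retention team for a timely, relevant offer, which is the whole point of catching the signal before the cancellation call.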
Operator margins are being squeezed by an onslaught of OTT players eroding key revenues and introducing new services on operators’ existing network platforms. The cost of retaining a customer is far lower than acquiring a new one, and so operators need to look very closely at their engagement with subscribers and how they can utilize their existing infrastructure to improve the service they offer and ensure they offer value.
About the author
Sanjay Kapoor is CMO at Nominum. As the Marketing and Business Strategy leader, Sanjay is responsible for shaping Nominum’s growth and product strategy. Sanjay is a frequent speaker on Customer/Marketing analytics and new advertising business models for telecom operators at industry conferences and events worldwide. Prior to joining Nominum, he spent over 20 years in a variety of leadership roles spanning strategic planning, general management, marketing and product management.