Net Neutrality: The Anguish of Mediocrity

February 28th, 2015

It is rare for me to be on the same side of an issue as AT&T and Verizon and on the opposite side of Sprint and T-Mobile, but I think the new Net Neutrality rules that the FCC adopted this week are a mistake that will hurt consumers and the telecom industry.

I won’t take the time to go point-by-point through the various elements of the new rules. Plenty of people smarter than me on regulatory topics have written about that elsewhere. The two aspects that really have me concerned are:

  1. the inability to prioritize paid traffic
  2. the inability to impair or degrade traffic based on content, applications, etc.

I believe that these restrictions will lead to networks that will perform much more poorly than they need to.

The Importance of Prioritization

Thirteen years ago, while I was chief strategist for TeleChoice, I wrote a whitepaper using some tools that we had developed to evaluate the cost to build a network to handle the traffic that would be generated by increasingly fast broadband access networks.

In that paper I observed: “ATM, Frame Relay, and now MPLS have enabled carriers to have their customers prioritize traffic, which in turn gives the carriers more options in sizing their networks, however, customers have failed to seriously confront properly categorizing their traffic. There has been no need to because there was no penalty for just saying ‘It’s all important.’”

With the new rules, the FCC ensures that this will continue to be the case.

Think about it. If you live in a city with heavy highway congestion and you’re stuck in slow traffic watching a few cars zip along in the HOV lane, don’t you wish you were allowed into that lane? Of course you do. Hopefully it even gets you to consider making the change necessary to use that lane. Why do HOV lanes exist at all? Because it was deemed a positive outcome for everyone if more people carpooled and reduced the overall traffic. Reducing overall traffic has many benefits, including reducing the money that must be spent to make the highway big enough to handle the traffic, while at the same time improving the highway experience for all travelers.

Continuing the analogy, if you’re sitting in slow traffic and you see an ambulance with its lights flashing driving up the shoulder to get a patient to the hospital, do you consider it an unfair use of highway resources that you aren’t allowed to use yourself? Hopefully not. You recognize that this is a particular use case that requires different handling.

Finally, extending the analogy one more time, as you’re sitting in that traffic (on a free highway) and you look over and see traffic zipping along on the expensive toll road that parallels the free highway, do you consider whether you can afford to switch to the toll road? I bet you at least think about it.

Analogies always break down at some point, so let me transition into explaining the problem that the new rules impose on all of us. Networks, like highways, have to be built with enough capacity to provide an acceptable level of service during peak traffic. Data access networks, unlike highways, have traffic levels that are very dynamic, with sudden spikes and troughs that last seconds or less. All telecommunications networks have predictable busy-hour patterns, just like highways, but unlike highways, the network user experience can be dramatically degraded by a sudden influx of traffic. This requires network operators to build enough capacity to handle the peak seconds and peak minutes reasonably well, rather than just the peak hour.

Different network applications respond differently to network congestion. An e-mail that arrives in 30 seconds instead of 20 seconds will rarely (if ever) be noticed. A web page that loads in 5 seconds instead of 4 seconds will be easily forgiven. Video streaming of recorded content can be buffered to handle reasonable variations in network performance. But if a voice or video packet during a live conversation is delayed a few seconds, it can dramatically impact the user experience.

Thirteen years ago, I argued that failing to provide the right incentives to prioritize traffic according to these differences could require 40% more investment in network capacity than if prioritization were enabled. In an industry that spends tens of billions of dollars each year on capacity, that’s a lot of money.

Why The New Rules Hurt Consumers and the Industry

Is the industry going to continue to invest in capacity? Yes. But the amount of revenue they can earn from that capacity will place natural limits on how much investment they will make. And, without prioritization, for any given level of network investment, the experience the user enjoys will be dramatically worse than it could be.

Let’s just quickly look at the two approaches to prioritization I called out above that the new rules block.

Paid prioritization is a business mechanism for ensuring that end applications have the right performance to create the value promised by the end service provider. This is the toll road analogy, but probably a better analogy is a supplier choosing to ship via air, train, truck, or ship. If what I’m promising is fresh seafood, I’d better put it on an airplane. If what I’m promising is inexpensive canned goods with a shelf life of years, I will choose the least expensive shipping method. Paid prioritization enables a service provider (e.g. Netflix or Skype) to offer a service that customers value and will pay for, but that requires better than mediocre network performance, and to pay the network operator for that better performance so that its customers get what they expect. The service provider builds its business model by balancing the revenue from its customers against the cost of offering the service. This approach also provides additional revenue to the network operators, enabling them to invest in more capacity that benefits all customers.

Impairing or degrading traffic based on content or application is a technical mechanism that enables the network to handle traffic differently based on the performance requirements of the content or application. An e-mail can be delayed a few seconds so that a voice or video call can be handled without delay. This allows the capacity in the network to provide an optimized experience for all users.
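
To make the mechanism concrete, here is a minimal sketch of application-aware scheduling. The traffic classes and priority values are hypothetical; real networks use standardized markings (e.g. DiffServ) and far more sophisticated queueing, but the principle is the same: latency-sensitive packets go first, and delay-tolerant traffic waits a moment.

```python
import heapq
import itertools

# Hypothetical priority levels: lower number = transmitted first.
PRIORITY = {"voice": 0, "video_call": 0, "streaming": 1, "web": 2, "email": 3}

class PriorityScheduler:
    """Toy scheduler: always transmits the highest-priority packet queued."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, app_class, payload):
        prio = PRIORITY.get(app_class, 2)  # unknown traffic treated like web
        heapq.heappush(self._heap, (prio, next(self._seq), app_class, payload))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, app_class, payload = heapq.heappop(self._heap)
        return app_class, payload

if __name__ == "__main__":
    s = PriorityScheduler()
    s.enqueue("email", "newsletter")
    s.enqueue("voice", "rtp frame 1")
    s.enqueue("web", "page request")
    s.enqueue("voice", "rtp frame 2")
    while (pkt := s.dequeue()) is not None:
        print(pkt)  # voice frames come out first, the e-mail last
```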

Obviously, these mechanisms provide opportunities for abuse by the network operators, but to forbid them outright, I believe, is damaging to the industry and to consumers, and a mistake.

The Intelligence Revolution for Churches (Part 2)

February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past several posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. I’ve provided a working definition (The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content), I’ve identified the “power” and the “danger” of the Intelligence Revolution, and in the last post I started to answer the question of what the Intelligence Revolution will mean for each of our churches. However, last month’s column used a specific example to demonstrate the risks we face if we are too aggressive in collecting and correlating data about our congregants. What are the more positive ways that large churches can consider using big data?

Revisiting the Danger

Last month I started by making the point that most churches are too small to ever have the data or the capabilities to fully participate in the Intelligence Revolution. But to consider how large churches could potentially leverage big data, I referenced an article by Michael D. Gutzler in the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” As I introduced the approach that Pastor Gutzler advocates, I’m guessing that many of you became increasingly uncomfortable. His approach would correlate personal information (including derived assumptions about personal income) with giving, attendance, and commitment to spiritual growth, amongst other data points. His goal was to identify the actions that the church could successfully take for specific families to draw them more deeply into the church.

A few weeks ago, I discussed the article with a Christian friend who has been the data scientist for a major retailer, the chief data scientist for a big data consultancy, and is currently the manager of data analysis for a major web-based service. The approach Pastor Gutzler outlined concerned her, I think in large part because of its reliance on personally identifiable information (PII). Increasingly, regulations are being crafted and enacted to protect PII, especially in light of the growing threat of fraud and identity theft. The high profile cases of credit card data theft from retailers, e-mail and password theft from online sites, and the very broad theft of information from Sony should make it clear to all of us that we risk the reputation of our churches (and by extension, Christ Himself) the more that we collect, store, and correlate information about people that can be personally linked back to them and potentially used to their detriment. But I think she was, as many of us were, also concerned by the types of information being collected and the inferences being made from it. Would we be embarrassed if our constituents found out about the information we’re collecting and how we are using it? If so, then our actions likely aren’t bringing glory to God.

Searching for the Power

Then is there anything good that the Intelligence Revolution can do for large churches? The answer will depend on the church, but I think there’s some potential.

Whenever I talk to businesses about the Intelligence Revolution, I emphasize that they should start with the mission of their business. Is there any data that, if available, could help them better serve their customers in accomplishing that mission? Likewise, each of us should start with the mission of our church. I know there are different views on the mission of the church, so I won’t try to lay out a comprehensive definition that all readers can agree to, but I’m guessing we all can agree that the Great Commission is at least an important part of the church’s mission. In their book What is the Mission of the Church?, Kevin DeYoung and Greg Gilbert summarized it simply as this: “the mission of the church – as seen in the Great Commissions, the early church in Acts, and the life of the apostle Paul – is to win people to Christ and build them up in Christ.” This follows directly from Christ’s own words in Matthew 28:18-20: “All authority has been given to Me in heaven and on earth. Go therefore and make disciples of all the nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit, teaching them to observe all things that I have commanded you; and lo, I am with you always, even to the end of the age.”

If we just start with this as at least part of the mission of the church, what data could help us in our Gospel outreach efforts, and what data would help us to build our people up in Christ? Many churches reflect these two dimensions of their mission as the outward facing and the inward facing aspects of their mission, and I’m guessing that the data that we could use will correspondingly come from outward and inward sources.

For decades, churches have used external sources of data to learn more about their cities and how they can best reach the unchurched and the lost. The Intelligence Revolution is rapidly increasing the sources of data that are available. Demographics, crime data, the addresses of certain types of businesses and facilities: all of these sources of data are becoming increasingly available and searchable. George Barna, who has long been a source of information for the church on national and global trends, has even introduced customized reports on 117 cities and 48 states.

However, to help our congregants grow in their knowledge of God and their ability to observe all that Christ commanded, we likely need to look inside – at the data that we have about our own people. What are their abilities? What are their desires? Where do they live and work? In what ways and in what settings do we touch them today? How do we leverage these opportunities and create additional ones to build them up in Christ? If we have a large enough population, we should be able to anonymize the data for our analysis and decision making. On an aggregate basis, what do we know about the people who attend the early worship service and how should that affect our interactions with them there? What do we know about those in our singles ministry and what opportunities can we create for that group to help them mature and grow?
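
To make the anonymized, aggregate approach concrete, here is a minimal sketch. The field names, categories, and records are invented, not drawn from any real church database; the point is simply that identifying fields are dropped before any analysis, and small groups are suppressed.

```python
from collections import Counter

# Hypothetical attendance records; in practice these would come from a
# church-management system export.
records = [
    {"member_id": 101, "service": "early", "age": 72, "ministry": "choir"},
    {"member_id": 102, "service": "early", "age": 68, "ministry": "ushers"},
    {"member_id": 103, "service": "late",  "age": 29, "ministry": "singles"},
    {"member_id": 104, "service": "late",  "age": 31, "ministry": "singles"},
]

def age_band(age):
    return "under 35" if age < 35 else ("35-59" if age < 60 else "60+")

# Drop identifying fields and keep only coarse categories (anonymization step).
anonymized = [(r["service"], age_band(r["age"])) for r in records]

# Aggregate counts by service and age band; suppress very small groups so
# no individual can be singled out.
counts = Counter(anonymized)
MIN_GROUP = 2
for (service, band), n in sorted(counts.items()):
    if n >= MIN_GROUP:
        print(f"{service} service, {band}: {n} attendees")
```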

Obviously, this isn’t fundamentally different from how we make decisions today, but the potential promised by the Intelligence Revolution is that we will have more data and greater ability to work with it, so that we can be more precise and make decisions with greater confidence, helping our churches be more successful in achieving our mission, all to the glory of God.

The Intelligence Revolution for Churches (Part 1)

February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past few posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. Three posts ago I provided this working definition: The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content. Over the past two posts I’ve identified the “power” and the “danger” of the Intelligence Revolution. This article will address the question that you’ve probably been pondering over the past several months – what will the Intelligence Revolution mean for my church?

Different Kinds of Churches

To be honest, I doubt that the Intelligence Revolution will ever significantly impact how many (most?) churches go about serving the Lord. According to the 2010 Religious Congregations and Membership Survey, there are nearly 333 thousand Christian congregations serving over 144 million adherents (adherents is the broadest measure of people associated with a congregation – this represents nearly half of the U.S. population). The simple math tells us that there’s an average of 432 adherents per congregation. In reality, most churches are much smaller than that. According to the 2012 National Congregations Study, the median number of people associated in any way with a congregation is 135 and the median number of attendees at the main worship service is 60. The Intelligence Revolution derives value from “big data” analysis, and with groups of people this small, there simply won’t be data that is big in volume, velocity, or variety. At churches this size, there also tends not to be the resources to do fancy analysis of whatever data might be available.

Bottom line, these churches will keep doing what they’ve always done, serving the Lord and serving their communities in Christ. I attend a small church. We don’t need fancy data analysis tools to understand the people we serve, because we have deep personal relationships within the body. We know each other’s needs, gifts, and lives. We adapt as new needs arise (as new families arrive or changes happen within families), as new gifts and talents emerge, and as we grow closer to each other in growing closer to the Lord. Just as PCs, the Internet, the smartphone, and social media have provided tools that enhance what we do and make it easier to do it, I expect that the Intelligence Revolution will provide some tools that will make it easier to see the geographic distribution of our families, the concentrations of ages that we serve, and the participation we have in different ministries, but that is simply putting a precise point on the facts that we already inherently know because we know our own small population.

Can Big Churches Benefit From Big Data?

Michael D. Gutzler wrote an eye-opening article for the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” While we’ve talked about the dangers of collecting personal information in previous articles, Pastor Gutzler says “I would suggest for those working in the life of the church there is a higher calling to data analysis: to help the participants in a community of faith come to a greater understanding of God’s forgiveness, grace and love.”

As his starting framework, Pastor Gutzler rests upon the Circles of Commitment model promoted by Saddleback Church and documented in Rick Warren’s The Purpose Driven Church. The goal for church leaders, in Pastor Gutzler’s model, is to move adherents from the unchurched community, to the crowd of regular attenders, to the congregation of members, to the committed maturing members, and finally into the core of lay ministers. To accomplish this goal, church leadership analyzes data about each family and family member in the congregation, correlating that data with participation in specific events and activities, examining historical trends, and, from that, making wise decisions.

For example, does participation in a given event or activity correlate with increased commitment to the church, no change, or actually a moving away from the core? Do the answers differ based on the current circle of commitment of different families participating? Should we do more events/activities like this or scrap them altogether? Should we target them towards specific families rather than broadly offering them to the entire congregation?

Pastor Gutzler even argues for targeting the sermon message differently for each circle of commitment. He uses the example of a sermon on stewardship: “A better way to approach the subject would be to give one general message about what stewardship is, but have illustrations that speak to each circle. Then, to emphasize the message, a follow-up communication should be sent to each group that falls into each of the demographics to further emphasize the message’s point.”

Pastor Gutzler identifies five classes of data that most churches are already collecting as being enough to get started in implementing this segmentation, targeting, and analysis-driven decision making:

  • Attendance: at worship, but also at all other church-related events
  • Community Life: tracking the amount of time congregants invest in different church activities
  • Personal Information: Pastor Gutzler makes the point that, with tools like Zillow and salary.com, even simple information like address and occupation can provide significant insights that can be correlated with other sources to indicate the family’s financial commitment to the ministry of the church.
  • Personal Giving: Not just tithes and offerings, but also donations of food, clothing, and responses to other special appeals.
  • Personal Development: Time committed to opportunities to develop and deepen their faith life.
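
As a purely illustrative sketch of how those five classes of data might be combined for the kind of correlation Pastor Gutzler describes, the snippet below compares the change in average weekly attendance for families who participated in a hypothetical event against those who did not. Every field name and figure here is invented.

```python
# Hypothetical per-family records combining the data classes listed above.
families = [
    {"family": "A", "attended_event": True,  "attendance_before": 2.1, "attendance_after": 3.0},
    {"family": "B", "attended_event": True,  "attendance_before": 1.0, "attendance_after": 1.2},
    {"family": "C", "attended_event": False, "attendance_before": 2.5, "attendance_after": 2.4},
    {"family": "D", "attended_event": False, "attendance_before": 0.8, "attendance_after": 0.9},
]

def avg_change(group):
    deltas = [f["attendance_after"] - f["attendance_before"] for f in group]
    return sum(deltas) / len(deltas) if deltas else 0.0

participants = [f for f in families if f["attended_event"]]
others = [f for f in families if not f["attended_event"]]

print("Avg weekly-attendance change, participants:     %+.2f" % avg_change(participants))
print("Avg weekly-attendance change, non-participants: %+.2f" % avg_change(others))
# A larger gap suggests the event correlates with deeper engagement, but
# correlation is not causation, and small congregations make this very noisy.
```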

While I respect Pastor Gutzler’s passion for using every tool available to achieve the mission of his church, I fear that he is demonstrating the “grey areas” that I warned about in my last article. Our actions will be scrutinized by the watching world and by our own church members. We are to honor and glorify God, reflecting His attributes in loving and serving those around us. We are not to trust in a mechanical, scientific exercise in data analysis, but we are to trust in the living God who works in mysterious ways, drawing people to Himself.

All that being said, I believe that large churches, especially, do and will have “big data” at their fingertips. Pastor Gutzler’s article may go to an extreme, but by doing so, I think it hints at ways that churches will be able to honorably improve how they serve their congregants while respecting their privacy. We will discuss this more in the next article in this series. I urge you to rely heavily on prayer and the Word of God as you move your churches forward in this coming revolution.

Ten Strategic Issues Facing Mobile Operators

February 23rd, 2015

In a recent consulting engagement, I was asked about the strategic issues facing U.S. mobile operators. I think I answered reasonably well, but it made me realize that the topic deserved a more thoughtful update based on recent developments. With that in mind, I’d like to provide a high-level outline of what I think are the biggest issues. I think each of these could be a future article in and of itself.

1. Duopoly, The Rule of Three, or the Rule of Four
Perhaps the biggest strategic issue being played out right now is one of industry structure. Each quarter, Verizon and AT&T become stronger. Their strong balance sheets, fueled by rich cash flows, enable them to strengthen their hand. Meanwhile, the other two national operators (Sprint and T-Mobile) fight it out for third place. The Rule of Three claims that any market can only support three large generalists, implying that only one of those two can survive. Boston Consulting Group takes it a step further with their Rule of Four implying that perhaps two is the right number. American regulators would apparently block a combination of Sprint and T-Mobile, believing that a market with four competitors is better for consumers than a market with three competitors. But, in the long run, will that ultimately result in the failure of both #3 and #4, and in the short run, will it cause behaviors that damage the entire industry?

2. Wildcards: Google, Dish, América Móvil
Over the past few years, Google has done an admirable job of shaking up the broadband industry with the introduction of Google Fiber. In markets where the company has announced plans to build out local infrastructure, existing competitors have had to respond with improved offers to customers. Now, Google is rumored to be preparing to offer wireless services. Would they have a similar impact on the wireless competitive space, or are the disruptive moves already being introduced by T-Mobile and Sprint significant enough that Google’s impact would be muted? Meanwhile, Dish Network has been spending tens of billions of dollars accumulating a rich treasure chest of spectrum, which it is obligated to begin building out for wireless services. What will they do and how will that impact the competitive environment? Finally, América Móvil has spent the past few years preparing for a major global strategic shift. They already have a strong foothold in the U.S. prepaid market as an MVNO (TracFone), but their relationship with AT&T has been significantly altered, perhaps positioning them for a more aggressive move into the U.S. Any of these three potential new entrants could have a significant impact on the American mobile market and must factor into the strategic scenarios for the four mobile operators.

3. Licensed versus Unlicensed Spectrum
As we’ll discuss more below, spectrum is the lifeblood of any wireless network. The global mobile industry has been built on licensed spectrum. Licensed spectrum has many advantages over unlicensed spectrum, including the ability to use higher-power radios with better signal-to-noise ratios, yielding greater range, throughput, and performance. The lack of unmanaged contention for the airwaves makes performance predictable and manageable, all of which results in higher reliability for each connection. The industry has invested hundreds of billions of dollars to build out networks that provide a wireless signal to the vast majority of the U.S. However, the cost to build out a wireless network with unlicensed spectrum is a small fraction of the cost to build with licensed spectrum. Companies offering services with unlicensed spectrum are also unburdened by the regulatory requirements placed on Commercial Mobile Radio Service operators. The cable MSOs have been most aggressive in shifting their focus from licensed to unlicensed spectrum. After decades of positioning to participate in the traditional cellular industry (winning spectrum in auctions, investing in Clearwire, partnering with Sprint, etc.), in 2012 Comcast, Time Warner Cable, and others sold their licensed spectrum to Verizon and aggressively began building out a nationwide WiFi footprint using unlicensed spectrum. About a month ago, Cablevision introduced its Freewheel WiFi-based smartphone service to compete with mobile operators. Expect others to follow.

4. Spectrum Portfolio
Although mobile operators are toying with unlicensed spectrum, their strategies remain very centered on licensed spectrum. To effectively meet the growing demand for capacity, all operators will need more spectrum of some kind. However, not all spectrum is equal, and operators know they need a balanced portfolio. There are a variety of criteria that factor into the attractiveness and utility of any given spectrum, but the easiest to understand is simply whether the spectrum is low-band, mid-band, or high-band. Low-band spectrum has a frequency below 1GHz and provides the best geographic coverage (the signal travels farther) and in-building penetration (the signal passes more easily through walls). However, at these lower frequencies there tends to be less spectrum available, and it has generally been made available in smaller channels, limiting the capacity (the amount of bandwidth that can be delivered to customers). High-band spectrum generally has a frequency above about 2.1GHz and, while it lacks the coverage of low-band spectrum, there’s generally more of it and it generally comes in larger channels, providing lots of capacity. Mid-band spectrum (between 1GHz and 2.1GHz) provides a compromise: reasonable (but not outstanding) capacity with reasonable (but not outstanding) coverage. In the early 1980s, as the local telephone monopolies covering most of the country, the companies that became Verizon and AT&T received free 800MHz low-band spectrum in each market they served. In 2008, the FCC auctioned off 700MHz low-band spectrum. Of the national players, only Verizon and AT&T had deep enough pockets to compete, and they walked away with strengthened low-band spectrum positions. Today, these two hold the vast majority of low-band spectrum, and T-Mobile and Sprint are hoping that the 2016 600MHz incentive auction will help them begin to balance their portfolios; they are demanding that the FCC enact rules to avoid another Verizon/AT&T-dominated auction process. All players have reasonable amounts of mid-band spectrum (with AT&T and Verizon again using their strong balance sheets to further strengthen their positions in the recent AWS auctions). The majority of Sprint’s spectrum is high-band 2.5GHz spectrum.
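
A trivial sketch of the low/mid/high classification just described, using the rough thresholds above (real band plans and trade-offs are more nuanced):

```python
def band_class(freq_mhz):
    """Rough classification using the ~1 GHz and ~2.1 GHz thresholds above."""
    if freq_mhz < 1000:
        return "low-band (best coverage and in-building penetration, less capacity)"
    elif freq_mhz <= 2100:
        return "mid-band (balanced coverage and capacity)"
    else:
        return "high-band (most capacity, least coverage)"

for label, mhz in [("700 MHz", 700), ("AWS 1700/2100", 1700), ("2.5 GHz", 2500)]:
    print(f"{label}: {band_class(mhz)}")
```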

5. Network Technologies
Mobile operators face a number of strategic decisions over the next few years related to network technologies. There are enough uncertainties around the key decisions that each operator has a slightly different strategy. Two of the biggest decisions relate to small cell deployments and migration to Voice over LTE (VoLTE). AT&T has the most comprehensive strategy which revolves around their broader Velocity IP (VIP) Project, which they hope will free them from much of the regulatory oversight they currently endure in their monopoly wireline footprint and therefore provides tremendous financial incentives. This is driving a relatively aggressive small cell deployment and a moderately aggressive VoLTE plan. Verizon has been the most aggressive of the national players in deploying VoLTE, while (until recently) being the most hesitant to commit to significant small cell deployments.

6. Cash Management

6a. Capital Expenditures
None of this is cheap. It takes deep pockets to acquire spectrum and even deeper pockets to build it out. In a technology-driven industry, new network architectures will always require significant investments. As price wars constrain revenue, while demand for capacity continues its exponential growth, CapEx as a percent of revenue will likely become a significant strategic issue for all operators.

6b. Expense Management
Operating expenses and overall cash flow also can’t be overlooked. Growing demand for capacity and small cell deployments require increasing backhaul spend (although the shift to fiber for macro sites has helped bring that under control for most operators). But the biggest issue will likely continue to be the cost of providing smartphones and tablets to customers. As an illustration of how significant this cost is for a mobile operator, in its 2013 Annual Report, Sprint reported equipment net subsidies of nearly $6B on service revenues of just over $29B (over 20%).

In 2012, T-Mobile introduced equipment installment plan (EIP) financing as an alternative to subsidies, and early in 2013 it announced that it was eliminating all subsidies. Since then, the other three national operators have similarly introduced device financing. From an income statement perspective, this helps T-Mobile’s earnings, since the device is accounted for as an upfront sale, typically near full price. However, T-Mobile and their competitors have introduced zero-down, zero-interest (or close to it) terms, and they discount the customer’s monthly bill by roughly the same amount as the monthly equipment financing payment to keep the total monthly cost competitive with traditional subsidized plans. The net result is that T-Mobile (and their competitors, who have all followed suit) are taking on the financing risk without significantly improving their cash flow.

For 2014, T-Mobile reported just over $22B in service revenues (a 17% increase over 2013). They also reported equipment sales of $6.8B (a 35% increase, and 30% of service revenues). But they also reported the cost of equipment sales at $9.6B (an increase of 38%), and they reported that they financed $5.8B in equipment sales (an increase of 75% over 2013, and 26% of service revenues). As of the end of 2014, T-Mobile had $5.1B in EIP receivables (an increase of 78%). That’s a lot of cash tied up in customer handsets. The strategy has worked in terms of attracting customers to switch to T-Mobile (which is why their competitors have had to respond), but it’s less clear that it’s been financially beneficial for the company in the long run. Verizon, for one, seems unconvinced and has been unenthusiastic about device financing. I believe this will continue to be an area of strategic deliberation at all mobile operators.
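
For readers who want to check the math, here is a quick back-of-the-envelope recomputation of the ratios cited above, using the rounded figures from the text (no new data, just the arithmetic):

```python
# Figures as quoted above, in $ billions (rounded).
sprint_2013_subsidy, sprint_2013_service_rev = 6.0, 29.0
tmus_2014_service_rev = 22.0
tmus_2014_equipment_sales = 6.8
tmus_2014_equipment_financed = 5.8

print(f"Sprint subsidy / service revenue:            {sprint_2013_subsidy / sprint_2013_service_rev:.0%}")
print(f"T-Mobile equipment sales / service revenue:  {tmus_2014_equipment_sales / tmus_2014_service_rev:.0%}")
print(f"T-Mobile financed equipment / service revenue: {tmus_2014_equipment_financed / tmus_2014_service_rev:.0%}")
# Roughly 21%, 31%, and 26% -- consistent with the percentages cited above.
```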

7. Plan Types
This shift away from subsidized devices is also part of a disruption in how the industry views plan types. For decades, the industry focused on postpaid phone plans. These plans were subsidized, but the customer was locked in for two years, “ensuring” that the operator earned back its up-front investment in the device. Because operators, for the most part, managed this business with appropriate discipline, only prime credit customers could get a subsidized device, and these tended to be fairly profitable customers. Those that didn’t qualify settled for a prepaid plan where they purchased the phone upfront at or near full price, which provided better cash flow early in the customer’s life but less profitability over time. Eliminating subsidies also eliminates the two-year service plan (although long-term device financing still provides customer lock-in), blurring much of the distinction between postpaid and prepaid. The number of people with multiple wireless devices is also increasing as we carry iPads and other tablets, as automakers integrate wireless connectivity into the cars we drive, and as we move towards a day when virtually any product with a power supply will be wirelessly connected to the Internet. Different operators are taking different approaches to structuring their plans to accommodate these changing customer behaviors within their business models, and I’m sure it will continue to be a topic for internal debate and discussion as the industry models evolve.

8. Commoditization
In many respects, wireless service is increasingly viewed as a commodity by customers. Operators continue to trumpet their network differentiation, but to the consumer there is generally the perception that all operators offer the same devices, in the same ways, and support those devices with networks that work reasonably well just about everywhere we go. Over the past 6 to 12 months, T-Mobile and Sprint have been very aggressive about reducing pricing or offering more for the same price, in a successful effort to take customers away from Verizon and AT&T. Those two larger operators have had to respond with lower prices or increased buckets of data. The operators may be denying it, but it sure looks like a commodity market to me, and I imagine that’s a discussion that’s happening in each operator’s strategic planning meetings.

9. Quad Play or Cord Cutting
For well over a decade, there’s been an ongoing strategic debate within the industry about whether a combined wireless and wireline bundle is critical to market success. At times, some players have decided that it will be and have taken actions, such as the strategic alliances between cable MSOs and wireless operators (Sprint, Clearwire, and Verizon), or advertising campaigns focused on integration across multiple screens (TV, computer, phone). So far, there’s little evidence that it really matters. Consumers take whatever landline voice, broadband, and video services they can get from the duopoly of their cable provider and “telephone” provider, and then they choose from a competitive landscape for their mobile needs. For the last few years, it appears that no one in the U.S. industry has seen any need to focus on a quad play future. In fact, the focus has been more on cord cutting and over-the-top players. However, in Europe, a very different story is playing out and it is driving massive industry consolidation. Especially while wrestling with the questions about commoditization, operators will once again question the benefits of a differentiating bundle.

10. Re-intermediation
Another common tactic to combat commoditization is to “move up the stack.” In the mobile industry, that would be “move back up the stack.” The introduction of the iPhone, followed by Android devices, led to the disintermediation of the mobile operator from much of the value chain. Prior to the iPhone, operators carefully managed their portfolio of phones, telling OEMs what features to build and it was the operators who largely drove demand for different devices. Operators collected the vast majority of revenues in the industry, directly charging the customer for the phone, the network service, any applications, any content, and any value added services (such as navigation or entertainment). The iPhone (and then Android) enabled better apps and content, provided a better marketplace for buying them, and provided an open connection to the Internet for a wide variety of over-the-top services. Although the operators had poorly managed the apps/content/services opportunity and therefore they didn’t have much “value add” revenue to lose, they clearly lost the opportunity to be more than just the underlying network. Over the past several years, the industry has tried to claw its way back up the stack. Operators pursued “open” strategies, introducing APIs for app developers and other tactics to try to be a “smart pipe” rather than just a “dumb pipe.” They have also tried to encroach on other industries by offering new mobile-enabled services, such as mobile payments and home security/automation. These efforts have not yet had meaningful success, although AT&T’s progress with Digital Life is promising. If operators want to escape the commodity “dumb pipe” trap, at some point they will need to figure out how to reclaim more of the stack.

Obviously, the mobile industry is dynamic and I expect these 10 topics to drive significant strategic decisions across all operators in the coming months and years. If you’d like to discuss any of these topics, drop me a note.

The Danger of the Intelligence Revolution

February 11th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Every new technology introduces new capabilities that enable us to do things that previously weren’t possible or practical. As technologists, our job is to capture this new power for our organization. But every new technology also creates new potentials that represent risk to ourselves, our families, and the organizations that we serve. As technologists, we are also called on to manage this danger. In this post I’d like to discuss the dangers introduced by the Intelligence Revolution.

Grey Areas

A friend of mine recently asked for my advice. He is pursuing a new career path and faces a decision. One path would position him for systems development opportunities. The other would position him for big data analytics opportunities. Because I believe the Intelligence Revolution is happening, because I anticipate a continued shortage of data scientists who can work with big data, and because his personal background and strengths are well aligned with data analysis, I told him that the big data analytics path could create tremendous value for him personally.

But I warned him that pursuing that path may be a challenge for him as a Christian. I believe that it is a path that will pass through many “grey areas” where his moral standards may be challenged.

What do I mean by grey areas? When we’re dealing with information, it’s easy to think of types of information that we should have no problem using (e.g. the user tells us they want us to use that data for our application to personalize results for them), and it’s easy to think of types of information that we know it would be wrong to use (e.g. secretly capturing the keystrokes when a user enters their credit card number and then using that information to make unauthorized charges to the user’s account).

But in reality, there’s a lot of information that falls in between those extremes. Those of us who run websites rely on log data to optimize our sites. We want to know (on an aggregate basis) which pages get the most views, which pages cause people to leave our site, which external links brought them to our site, and any problem areas that might be causing a bad user experience. Our users want our website to work well, and our privacy policy (hopefully) clearly explains that we’re going to use this information in this manner, so this type of information usage is probably just barely creeping from the “white” into the “grey.” But what if we use log data to zero in on one user and track their page-by-page journey through our website? In some ways, if our motives are pure, and if our published privacy policy allows it, this is just like the above example, but it’s starting to feel a little creepy, isn’t it? Especially if we take the next step and attach the user’s information (their login id and account information) to this usage pattern, it starts to feel a lot like spying, doesn’t it?
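
As a small illustration of the aggregate-only end of that spectrum, here is a sketch of website log analysis that never follows an individual visitor page by page. The log format and field names are hypothetical; a real pipeline would parse raw web-server access logs into this shape.

```python
from collections import Counter

# Hypothetical, already-parsed log entries.
log_entries = [
    {"visitor": "a1", "page": "/sermons", "referrer": "google.com"},
    {"visitor": "b2", "page": "/sermons", "referrer": "facebook.com"},
    {"visitor": "a1", "page": "/giving",  "referrer": "(internal)"},
    {"visitor": "c3", "page": "/events",  "referrer": "google.com"},
]

page_views = Counter(e["page"] for e in log_entries)
referrers  = Counter(e["referrer"] for e in log_entries if e["referrer"] != "(internal)")

print("Most viewed pages:", page_views.most_common(3))
print("Top external referrers:", referrers.most_common(3))
# Note: nothing here follows a single visitor page by page or ties a visitor
# id back to an account -- that is where the "grey" starts getting darker.
```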

Well, some companies do exactly what I’ve described, and their customers applaud them for it. When I log onto my Amazon account, I’m presented with recommendations based on what I’ve bought in the past, and even based on items I’ve simply browsed in the past. Sometimes it feels creepy, but most of the time I’m thankful for the recommendations because they help me find products that meet my unique needs.

Other companies have been strongly criticized and their customer loyalty has suffered because of their use of similar customer usage information that they were using to improve the customer experience. For example, in 2011, the mobile phone industry suffered a serious black eye when someone discovered that virtually all smartphones had software that collected information about usage and reported it back to the mobile operators. The operators wanted this information because it provided precise location information and information about how well their network worked in each location. That told the operators where their customers went (and where they needed a network) and how well the network actually worked in those places. This enabled better investment decisions so that the operators could provide a better experience for their customers. Unfortunately, the software company (Carrier IQ) that the operators used was collecting information that didn’t seem necessary for the stated goal, and customers weren’t informed about the information being collected and how it was being used. Carrier IQ also didn’t respond well to the situation, all of which forced the mobile operators to remove the software from all their customers’ phones and made it much harder for the operators to provide a good network experience.

What Does That Mean for Us?

Hopefully those examples spell out the danger for us, both as consumers, and as technologists that are tasked with helping our organizations to leverage technology to best serve our constituents.

As consumers, we have to realize that businesses (and governments and others) have more and more information about us – not just what we do online, but in every transaction that we perform with anyone. How that information will be used will not be limited to the ways that we’ve explicitly requested and not even to the ways that companies have told us they would use the information. In a way, I guess, that may serve as encouragement to be “above reproach” in everything we do and perhaps may be a help in restraining sin. We know that God sees everything we do and even knows our heart, which should be motivation enough, but perhaps knowing that companies and men see our actions as well may cause some to act in a more Godly and honorable way. But it’s also rather scary, knowing that, unlike God, men are sinful and companies don’t always act in our best interests.

As technologists, we must view ourselves as wise stewards of the information that we have. Either explicitly or implicitly, those we serve have entrusted us with it and we must protect it and deal with it in an honorable manner, with right motives and a servant’s heart. But, just as Christ explained in the parable of the talents (Matthew 25), we shouldn’t just bury this treasure, we must maximize the value of it for the benefit of those that have entrusted us with it. We must capture the power of information to the good of those we serve and to the glory of God. Key to this will be right motives, transparency, security, and trust.

Mobile Impact Obvious

February 2nd, 2015

As my recent set of posts implies, I’m thinking quite a bit beyond the “mobility revolution.” A fascinating article at Wired makes it clear that the impact of mobile has become obvious, and when something is obvious, it’s much less interesting to me. (That doesn’t mean that execution- and operations-minded folks should ignore mobile – now is the time when the real money is obviously being made…)

Reading this article took me back to early 2012. Facebook’s IPO was the big story and the biggest knock on the company was that it lacked a mobile strategy. Today, more than half its revenue comes from mobile and they are being lauded as one of the few to have figured out mobile. Back then, Facebook wasn’t alone. Perhaps setting the tone for the year to come, in late 2011, the world’s largest technology company at the time, HP, ousted their CEO, at least in part, for a failed mobile strategy (the company doesn’t show up in the Wired piece because they haven’t been able to recover to a leadership spot in tech). Later in 2012, Intel’s CEO was forced to resign because of a failed mobile strategy. (Like HP, Intel rarely gets mentioned these days when folks talk about the companies leading the technology industry.)

2012 was the wakeup call. 2015 is showing which companies jumped and which hit snooze.

The Power of the Intelligence Revolution

January 31st, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Every new technology introduces new capabilities that enable us to do things that previously weren’t possible or practical. As technologists, our job is to capture this new power for our organization. But every new technology also creates new potentials that represent risk to ourselves, our families, and the organizations that we serve. As technologists, we are also called on to manage this danger. In this post I’d like to explore the power that is available from this Intelligence Revolution, and in the next I’ll address the danger.

Setting the Context

When I think, write, or speak about the major technology revolutions, I usually say things like “this revolutionized how we, as individuals, interact with the world around us, and fundamentally transformed how organizations/businesses operate.”

In hindsight, it is easy to look back and see how the introduction of the personal computer transformed how churches and ministries operate. It is similarly easy to see how the Internet has transformed how churches and ministries operate.

When we look at the Mobility Revolution, it is easy to see how mobility has revolutionized how we, and those we serve, interact with the world, but examples of how mobility has transformed how churches and ministries operate are harder to find. I don’t believe that is because the Mobility Revolution is less transformative for Christian organizations than it is for the business community. Rather, I think it’s a reflection of the limited resources that those of us in ministry are working with, and the greater challenge we have in justifying investments in new technologies compared to businesses, which can more easily calculate the ROI (return on investment). As the technology becomes more ubiquitous, as the cost of implementing it comes down, and as those we serve increasingly expect us to be leveraging it, I believe that in time mobility will be as transformative for our churches and ministries as it has been for most businesses and industries.

In the same way, and perhaps even more so, I believe that it will be many years before the Intelligence Revolution has a transformative impact on how our churches and ministries operate. However, the impact on ourselves, our families, and those we serve will be much more immediate and, in fact, is happening today. With that as context, this article will deal more with how the power of the Intelligence Revolution plays out in the relationships between businesses and us as individuals. I believe that, in time, these same relational impacts will play out between our ministries and those we serve, so perhaps this article can plant a future vision for how your ministry can capture the power.

What Power Does the Intelligence Revolution Unleash?

At its core, the Intelligence Revolution is about having more information and being able to do more with it. Data scientists probe, manipulate, correlate, and analyze vast amounts of information from a variety of sources to extract “actionable insights.” What do I mean by “actionable insights?” I mean new understandings of reality (insights) that can lead to decisions to take action in order to accomplish our objectives.

Each business exists to provide a product or service that people want or need. While the most measurable objectives for a business are financial (e.g. revenues and profits), these objectives are only sustainably met when the business is doing a great job of meeting its customers’ needs and desires. Therefore, the real power of the Intelligence Revolution is achieved when a business analyzes data to gain new understandings of reality in order to better serve their customers. When that happens, it’s a win for the business and a win for us, their customer.

As a very simple example (with a lot of complexity behind it), when I turn on my television, I have hundreds of channels and thousands of shows (over the next 24 hours) to choose from. It is overwhelming to me to find something I want to watch. As a result, I only turn on the television when I know there’s something on that I want to watch (in my case, that’s usually a sporting event). Even then, it is a struggle to browse through all those channels looking for my show.

However, data exists that could dramatically improve this product for me. Out of the hundreds of available channels, there are probably fewer than a dozen that I have watched in the past year. Of the thousands of shows, those that I have watched have a very limited set of characteristics (maybe 60% sports, 30% news, and 10% movies or other entertainment), and the specifics within each of those fields could be narrowly defined (favorite sports, leagues, teams) simply by observing my behaviors.

Similarly, data exists that indicates with whom I associate (Facebook friends; e-mail, text, and telephone conversations; organizational associations, etc.). And the data exists to indicate what these people generally watch and what they are watching right now. Also similarly, it is relatively simple to identify other people who like to watch the same things I do, and to identify what other things these people watch and what they are watching right now.

This isn’t much of a stretch from what many companies do today. Amazon recommends products to me based on what others that have similar tastes as mine have bought and recommended. LinkedIn recommends contacts to me based on common associations. TripAdvisor recommends hotels to me based on what my Facebook friends enjoyed.

Theoretically, when I turn on the television, the Intelligence Revolution should allow my cable company to present to me the very small number of shows that I might actually want to watch, rather than forcing me to wade through a sea of unpalatable choices in hopes of finding a hidden gem.
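
To make that idea concrete, here is a toy sketch of an item-overlap recommender in the spirit of what I describe above. All of the viewing data and names are invented, and real systems (like the ones Amazon or a cable company would build) are far more sophisticated, but the core logic is the same: find people whose history overlaps mine and surface what they watch that I haven’t seen.

```python
# Hypothetical viewing histories: user -> set of shows watched.
history = {
    "me":    {"Royals game", "SportsCenter", "evening news"},
    "user2": {"Royals game", "SportsCenter", "golf highlights"},
    "user3": {"cooking show", "reality TV"},
}

def recommend(target, histories, top_n=3):
    """Recommend shows watched by users whose history overlaps the target's."""
    mine = histories[target]
    scores = {}
    for user, shows in histories.items():
        if user == target:
            continue
        overlap = len(mine & shows)        # similarity = number of shared shows
        for show in shows - mine:          # candidate shows I haven't seen yet
            scores[show] = scores.get(show, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("me", history))  # 'golf highlights' ranks first here
```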

Although the above example appeals to me as a consumer, it is important to point out that the Intelligence Revolution is most clearly playing out today in industries where the consumer is not the paying customer, but rather is the product. Security expert Bruce Schneier may have been the first to make the point that if you aren’t paying for a product or service, then you aren’t the customer; you are the product.

Google and Facebook are examples of companies that do a great job of making billions of dollars in profits by using the information they have about their users (us) to provide a more valuable service (often targeted advertising) to their paying customers. To be fair to these companies, they actually have what is called a “two-sided business model.” Although we are the product, it is critical that they keep us happy so that we keep coming back, in order to sell our “eyeballs” to their advertisers. Both of these companies do a great job of using “actionable insights” to improve the quality of their service to us, their end users. Google’s search algorithms are a great example of how the Intelligence Revolution has already transformed the way that we, as individuals, interact with the world around us.

In time, it is my hope that Christian churches and ministries will find ethical and God-honoring ways to leverage “big data” to better serve those around us and to advance God’s kingdom here on earth.

What is the Intelligence Revolution

January 20th, 2015

In my last post I briefly introduced the Intelligence Revolution and put it in the context of the broader Information Age – following behind and building upon the Digital Revolution, the Internet Revolution, and the Mobile/Social Revolution. This month, I’d like to more thoroughly explain what this new revolution is. In coming posts, we’ll look at the new power and the new danger represented by this revolution.

A Brief Review

The Digital Revolution is often referred to as the PC or Microprocessor revolution, because the Microsoft-Intel-IBM personal computer ushered in this new era where computing power moved out of the data center, onto the desktop, and eventually into virtually every product with a power supply. However, the long term implications of this era of the information age stem from the fact that these changes enabled virtually everything in the physical world to be digitized – to be accurately represented as ones and zeros that were easy to store, copy, and manipulate.

The Internet Revolution is most notable for making it easy for that digital information to flow across boundaries – between individuals, families, companies, and countries. Among other things, this meant that information could easily be shared with others, and information from different domains could be combined to create new information.

The Mobile/Social Revolution enabled everything and everyone to be connected digitally all the time. We are growing increasingly comfortable sharing information about ourselves online in fairly public ways. Meanwhile objects around us are constantly collecting information and bringing it into the cloud – from weather stations to security cameras to car engines.

What is Big Data Analytics?

Over the past few years, a new discipline has started to emerge called Big Data Analytics. You’ve probably heard of it and you may have some idea of what it is, but unless it’s become part of your job description, I’m guessing it’s still a pretty nebulous concept to you.

Admittedly, the definitions in the industry are still swirling a bit, but I found Timo Elliott’s blog post on “7 Definitions of Big Data You Should Know About” very helpful. He starts with a 12-year-old definition that describes Big Data as the combination of Volume, Velocity, and Variety of data. He then introduces the new technologies that have made it cost-effective to deal with high-volume, high-velocity data from a wide variety of sources, most notably Hadoop and NoSQL. He goes on to point out that we previously dealt primarily with data about transactions, but now we are also analyzing interactions (e.g. web page clicks) and observations (data collected automatically by connected devices). He describes making decisions based on transactional data as “managing out of the rear view mirror,” while interactions and observations can “signal” things that are likely to happen in the future. He closes his piece with a couple of analogies: “dark data” (data that we previously ignored because of technical limitations) and big data providing a “nervous system” for the planet.

Although that collection of definitions fails to provide a single crisp, clear, and comprehensive definition of big data analytics, hopefully it gives you a good sense for what is happening. Because we are on our computers and on our smartphones all the time, doing stuff and sharing stuff, each of us has become a data factory churning out massive amounts of information about ourselves and the world around us. Likewise because the objects around us are increasingly observing themselves and the world around them, collecting those observations, and then bringing those observations into the cloud, we are surrounded by data factories. Technology now enables all of that information to be stored, correlated, and analyzed to create new insights that can create value for someone.

Some of those “someones” scare us. The revelations by Ed Snowden about NSA surveillance programs were a wake-up call that governments are putting tremendous computing power to work in ways we could never have previously imagined.

Some of those “someones” may bother us. Clearly, advertisers have much to gain by being able to more accurately target who sees their ads and when they see them. Nissan’s marketing dollars are best spent if they can put a compelling offer in front of someone who has a preference for Japanese automakers while that person is considering their next car purchase. On one hand, we prefer to see ads that are relevant to us. On the other hand, it’s pretty creepy when advertisers use big data analytics, based on information we didn’t realize was public, to put ads in front of our eyes.

But, to be honest, we probably welcome some of those “someones.” My ESPN mobile app already knows that the Kansas City Royals are my favorite baseball team (because I told it so). And because of that, when I open the app, I see the Royals score and their latest news. However, I look forward to the day when that app also knows that I’ve set up my DVR to record the game and stops sending me notifications each time the Royals or their opponent scores!

The Next Revolution Defined

With all that as context, here’s my working definition for the Intelligence Revolution: “The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content.”

Of course, my definition paints this revolution in the most positive manner possible, and hints at the “power” of this revolution. I think it’s obvious there are many “dangers” as well. We’ll talk about the power and the danger, as well as the barriers for this revolution, starting in my next post.

The Next Revolution

January 14th, 2015

I wrote this article last August for Christian Computing magazine as the first in a series. This month I’m wrapping up the series, so I thought it was timely to start sharing the articles here. I hope you enjoy them. In a recent press release celebrating 25 years of publication, Christian Computing described itself this way: “CCMag is the foremost Christian publication to provide information about constantly changing technology tools and how they could apply to church business and ministry.”

Last month I was asked to give the keynote address at the Nemertes Navigator 360 event near Tampa, Florida. The topic of my talk was “The Next Revolution” and I’d like to take the next few months of my column space to talk about what I see coming and how it may impact our churches and ministries. When I talk about the “Next” revolution, I’m talking about the fourth technology revolution in our current information age.

So, what were the first three revolutions?

The information age could arguably be dated to the invention of the telephone in 1876, or the electric telegraph in the 1830s, or even back to Gutenberg’s press in the 1450s. All of these are incredible inventions that radically transformed how we interact with the world around us (especially information) and how businesses operate. However, since this is Christian Computing magazine, I will focus on the information age spawned by the advancement of computer technology.

The first revolution is sometimes called the PC Revolution, or more accurately the Microprocessor Revolution. This focus on the computer itself is understandable. Driven by the exponential improvements in processing power density and cost reflected in Moore’s Law, computers moved from filling a room, to sitting on a desk, to being built into virtually everything with a power supply. However, I think the real revolution was in what those technology advances enabled, so I refer to this first revolution as the Digital Revolution. The truly world-changing transformation that began with the Digital Revolution was the digitization of the world. Prior to this revolution, the real world existed in physical form that we could only perceive with our senses. Through this revolution, the real world was captured as ones and zeros. Music, and images, and videos, and books, and financial transactions, and weather measurements, and vital signs all became data that could easily be stored, copied, and manipulated.

The second revolution is known as the Internet Revolution, and this is appropriate. While the name Internet describes a vast collection of inter-connected computer networks, the transformational change follows directly from that inter-networking. The Internet Revolution made it easy for digital information to cross boundaries. Before broad adoption of the Internet, it was hard to move data from one company to another, or from one family to another. Companies could pay for proprietary Electronic Data Interchange network connectivity and work through complex implementation plans to connect with other companies, and individuals could copy up to 1.4MB onto a floppy disk and carry it to their neighbor (sneaker-net), but virtually overnight, the Internet made it easy for data to flow. Now, it was not only easy for the real world to be digitized, stored, copied, and manipulated, but also transported and shared. The launch of Napster in 1999, and its rapid growth in popularity, sent a wake-up call to all industries that the world had changed.

Some people see the mobile and social revolutions as distinct. I see them as one integral Mobile/Social Revolution. Neither could have had as significant an impact without the other. This revolution enabled all people, things, and content to be connected all the time and everywhere. Consider the impact that the combination of the smartphone and social networks like Facebook has had on photography. We take pictures we never would’ve taken before. We enjoy our own pictures in new ways, rarely printing them. We also share our photos differently, no longer laboring to put them in a physical photo album. Finally, our friends have a much better experience enjoying the photos we share because they control how they view them and they can join in a dialog about the pictures in real time with far-flung friends around the world. In the same way, as wireless connectivity gets integrated into virtually every product with a power supply, the ways in which we interact with those products and with each other will continue to be transformed.

What impact have these revolutions had on the church?

Each of these revolutions has significantly impacted the church. As the Digital Revolution rolled onto our desktops, our churches learned to become more efficient, digitizing the people, relationships, ministries, and transactions that organically defined each local body of believers. The entire church management software industry was born. Bible software started to appear, so pastors and lay people could more thoroughly and efficiently search the Word. And of course, this publication itself was at the forefront, preceding all of these advances. The Internet Revolution brought church websites, Sermon Audio, and Bible Gateway, amongst other advances. In the Mobile/Social Revolution, iPads and Facebook have transformed how we interact with the Bible and other content, and how we interact with each other in Christian community. The YouVersion Bible App has been installed nearly 150 million times on smartphones and tablets. Church management solutions have gone mobile and social, engaging the congregation.

In general, I’d say that churches tend to move a little more slowly in adopting technology (although some churches are always on the leading edge), but clearly each of these revolutions has advanced our ability to know God and to serve Him, wherever we go. Obviously, each of these revolutions has also brought new “dangers” into the church and into our congregations. The duty of the church is to determine how best to capture the power of the technology while managing the danger and limiting its negative impact on the church and our people. As we consider the next revolution, I believe this will be particularly challenging.

What is the next revolution?

I refer to the next revolution as the Intelligence Revolution. It incorporates buzzworthy elements such as cloud computing and big data analytics to enable organizations to better serve their constituents. We will begin to explore this next revolution in next month’s column.

It is my hope and prayer that these articles will encourage you in your daily walk with Christ. As 1 Peter 4:10 teaches us, “As each has received a gift, use it to serve one another, as good stewards of God’s varied grace.”

Too Mobile?

January 13th, 2015

I know… I said I would be posting more and I haven’t. I’m sorry. Even this post is something I meant to post in late December and am just now getting around to it. My hope is to start posting some content I’ve written over the past couple of years that I think would be interesting to everyone here.

But for now, let me share an observation about how this mobility revolution thing is working out for me.

December is a time of year when I do a lot of work with photos. My favorite site for this kinda thing is Shutterfly. Every year we use them for our Christmas cards, and then I make a bunch of personalized calendars as gifts. The last few years I’ve also been making Christmas tree ornaments to capture the main events of the year, so each year when we decorate the tree we can be reminded of the wonderful memories from years past.

Anyway, all of this means that during December I ask people to e-mail me photos to use in gifts for particular people. This year is the first year I’ve noticed that everyone embracing mobility has really caused a problem for me. You see, when someone e-mails me a picture, I get it on my computer. I can store it in the right folder, perhaps do some editing if necessary, and then upload it to Shutterfly for use in the project.

This year, several times, I was frustrated because I asked people to e-mail me photos, but instead they texted them to me. Of course, I’m not surprised that the photos I asked for were sitting on their smartphones, since that’s pretty much the only camera the vast majority of us use anymore. But when I ask someone to e-mail me a picture, and e-mailing it seems just as easy as texting it, and multiple people text it to me instead, that is telling. It seems to me a strong indication that we are well into the post-PC era. (BTW – these are not tech early adopters; these are clearly mainstream tech users.) Mobile devices have replaced our desktop devices. We have apparently also entered the post-email era. (BTW – these are not millennials I’m talking about either; each of the people who texted me photos instead of e-mailing them is roughly my age.)

My frustration stems from how much harder it is for me to get the photos into my routine when they are texted to me, but my fascination stems from watching how the mobility revolution has impacted basic behaviors of mainstream consumers. It’s amazing how fast we adopt and adapt.