What Might Google Really Do?

March 14th, 2015

Google’s entry into any market is cause for existing players to pay attention and potentially be alarmed, so it’s no surprise that the news that Google will become an MVNO and provide wireless services has many forecasting doom and gloom for the existing mobile operators. Before we can jump to those conclusions, I think it’s wise to consider the different scenarios that, given what Google has said, and what they’ve historically done in mobile/telecom, have some level of credibility.

Let’s start by briefly reviewing the challenges that MVNOs have traditionally had to solve. I think they fall into four buckets: distribution, customer service, devices, and brand. Given their objectives and their starting point, Google is in a very different place than the vast majority of MVNOs on all four.

For distribution, Google’s original web-based Nexus distribution experiment failed, so I doubt they’ll try that again. They might try using their physical “stores” in Google Fiber cities, although this isn’t likely to get them enough customers to provide meaningful scale and impact. They might also strike a distribution deal with big box retailers, like Best Buy or WalMart.

However, given Sundar Pichai’s comments, I wonder if Google isn’t actually negotiating with the mobile operators to sell the service in their own stores or through their distribution channels. This would be unusual, but not unprecedented.

When it comes to customer service, mobile operators employ tens of thousands of service reps in both owned and outsourced call centers around the world. I doubt that Google has a desire to establish that kind of customer care infrastructure. It’s possible that they will limit this experiment to Google Fiber markets, in which case they may be able to leverage the care resources they’ve put in place to support Fiber. Or, perhaps, as with distribution, they will leverage the mobile operators’ existing customer care infrastructure. This isn’t typical for MVNOs, but I imagine the operators would seriously consider the potential incremental revenue this would generate.

MVNOs have often struggled to get deals with OEMs for devices because they can’t commit to enough volume to make it work. In recent years, Sprint, for one, has tried to help MVNOs overcome this challenge with their BYOD program and their custom-brand, white-label program, but if Google wants to innovate in software, hardware, and connectivity, this won’t be an option. Of course, for Google this also isn’t the same problem as it is for other MVNOs, since they will likely pair the service with a new Nexus device, which gives them a unique position with OEMs. This is likely easily solvable for Google.

Most MVNOs in the market are new brands that must invest significantly to establish a position with a narrowly targeted segment. Google doesn’t have this problem. If anything, Google’s issue will be ensuring that only the right customers for their experiment are the ones that choose their brand for wireless.

Next, I think we need to clarify Google’s objectives with this experiment. Google wouldn’t be investing in it if they didn’t think it would create direct or indirect value for their business. That being said, I doubt that Google believes they can make money competing with Verizon, AT&T, and the others with traditional cellular service.

As with Google Fiber, they may believe that Mobile Operators are constraining use of the Internet and applications and that they can introduce “innovations” that the existing players need to respond to, changing the overall trajectory for the industry.

Net neutrality may be one objective: to use the Google Fiber terminology, providing openness and choice by managing the network in an open, non-discriminatory, transparent way and giving users a choice of multiple service providers. Clearly Verizon and AT&T are going to resist the FCC’s new rules, and Google may want market pressures to combine with regulatory pressures to ensure that the operators adopt “open” policies.

Another target may be the strong trend away from unlimited plans. The FCC’s new rules are actually likely to accelerate the move away from unlimited since they take away the option for Mobile Operators to throttle unlimited plans. Any customer that doesn’t have unlimited has to stop and think about whether or not to watch that YouTube clip while on the go, or before they do just about anything bandwidth intensive when not on WiFi. This constrains use of the Internet and therefore impacts Google’s core business.

Finally, let’s not ignore what Pichai presented as Google’s objectives during the interview. Although improving WiFi to cellular interworking and making problems like dropped calls less painful are noble goals, I don’t think that pressuring Operators to implement those types of improvements would truly justify Google’s attention. I think, more likely, as Pichai hinted, maybe this isn’t about traditional cellular service at all. Maybe this really is about the Internet of Things – clearly a space that Google is investing in at the device and software level. Maybe Google wants to make sure that the beyond-WiFi connectivity is being developed in a way that serves Google’s objectives.

So, with that as a framework, let me propose three different potential scenarios for what Google might really do.

First, this really could be like Google Fiber – disguised as an “experiment” but really a new business, a competitive entry into the mobile service space. The biggest challenge with this scenario is that Google will be dependent on the mobile operators for at least network capacity, and that’s never the position you want to be in when you’re trying to disrupt the operator’s business (just ask the CLECs of the late 1990s who tried to resell RBOC service under the Telecom Act of 1996). Next, if Google were to pursue this approach, at a minimum the operators not providing Google’s underlying service would likely drop or deprioritize Android devices in their portfolios, seriously hurting Google’s momentum and leadership in the smartphone OS space. I can’t imagine that Google would see enough potential upside from this approach to offset the serious downside it would have on their core business.

As a second scenario, let’s take Pichai’s comments at face value and assume that this truly is a smartphone- and/or tablet-centric experiment, working closely with the operators. In that case, it would look a lot like Nexus. I wouldn’t be surprised to see Google rely heavily on their operator partner(s) for distribution and customer care. I also would expect the scale to be limited, meaning it would have relatively limited retail impact on the operators. I also wouldn’t be surprised to see Google want to move it around, so maybe each new Nexus device launched is a new MVNO on a different operator or set of operators. Google would effectively be proving out new or unconventional approaches to connectivity offers (e.g. unlimited) in a way that demonstrates to the operators that there’s market demand (enough to be a threat) and that the economics can work (so that it’s attractive).

The third scenario is that this really isn’t about smartphones and tablets at all, but it’s really all about IoT. Google obviously is making big investments in hardware and software for IoT, so it would be natural for them to invest to get the “beyond-WiFi” connectivity to work for them as well. AT&T has had meaningful success with IoT, and I think Verizon still has serious hopes for the space, so they might not be the first to open the door to Google’s entry into being a connectivity service provider here, but I think other operators may be more than happy to have Google’s wholesale business and to help define the de facto standards that others likely need to adopt.

Of course, all of this is pure conjecture. I have not been privy to any discussions between Google and mobile operators. There’s more that we don’t know than we know, at this point. However, I think these three scenarios outline a solid framework for anyone to consider the impact on the industry as a whole, or their particular business.

This should be fun to watch!

What Did Google Really Do? – A Historical Perspective

March 13th, 2015

Just as Sundar Pichai did, I think it makes sense for us to look historically at Google’s forays into mobile and connectivity. I think there are three historical precedents to consider: Android, Nexus, and Google Fiber.

Android
Google followed Apple into the smartphone market. You can either say that, together, they created the smartphone market, or you can say that they significantly disrupted an existing market dominated by RIM (Blackberry), Microsoft, Palm, and Nokia (Symbian). Google had virtually no meaningful relationships with any of those four, but Android was a key element in the destruction of what had been a very strong relationship with Apple.

Including Apple, four of the five market leaders had an integrated hardware/software approach to the market. Google chose an “open” or “ecosystem” model, similar to Microsoft’s successful approach to the PC market. In fact, the initial announcement of Android was made by the Open Handset Alliance, made up of 34 companies including OEMs, Operators, Developers, and Chipset companies.

Today, Android is by far the dominant smartphone operating system. In his talk last week, Pichai claimed that 8 out of every 10 phones shipping around the world are running Android. Google has built a strong relationship with OEMs and, somewhat less directly, with Mobile Operators, to get Android to market. It is important to remember how critical Android was for Operators to have a competitive response to AT&T, which had the exclusive on the iPhone. Verizon particularly rode the Droid horse hard until they gained access to the iPhone.

It is also important to note that Google’s Android play has always been focused on their core business model – increasing how much time each of us spends online, with Google providing web-based services and enabling monetization by 3rd party developers that ultimately drive advertising dollars for the company. (Advertising represented $59B of their $66B in 2014 revenues.)

Nexus
In January 2010, Google partnered with HTC to launch the Nexus One smartphone running the latest release of Android. The phone introduced some new features, but mostly it was an attempt by Google to demonstrate how strong a “pure Google” device could be. At least to some extent, it was an attempt to get the OEMs to stop modifying the Android platform. As you may recall, at the time, there was a fair amount of noise in the marketplace about fragmentation in Android (multiple operating system versions, different screen sizes, user interfaces, etc.) relative to the monolithic iPhone.

With the Nexus One, Google also tried to introduce a new approach to the market, selling an unlocked phone at full price, only available for purchase via a website, and with customer service only available via online support forums. None of these experiments were successful and undoubtedly contributed to the lack of success for the phone itself.

The second Nexus handset, the Nexus S (based on Samsung’s Galaxy S platform), was more successful. It introduced the Gingerbread version of Android (2.3) and had impressive hardware specs, including NFC. In fact, the Sprint version of the Nexus S became the launch device for Google Wallet. For this second Nexus device, Google stepped back from selling only on the web, selling as a full price unlocked device, and providing support through forums. Instead, they adopted the traditional industry models – sales and support primarily through the Mobile Operator channels.

Google has continued to partner with OEMs to introduce new Nexus phones, often using each new model as an opportunity to introduce new capabilities that perhaps the OEMs and Operators weren’t yet ready to place a bet on otherwise. It’s important to note that Google had to work hard to make sure that this program didn’t alienate the OEMs and Operators on whom the company was dependent. With each Nexus, Google partnered with a different OEM, and made sure that versions were available for the major operators.

To some extent, Google has used the Nexus devices to continue to push openness and capabilities that can enable mobile devices to be used for more and more applications, ultimately driving their core business.

Google Fiber
On February 10, 2010, Google announced plans to build an experimental fiber network delivering 1 Gbps, which they characterized as “100 times faster than what most Americans have access to today”. In their press release, they said “We’ve urged the FCC to look at new and creative ways to get there in its National Broadband Plan – and today we’re announcing an experiment of our own.”

As with Nexus, they made a big deal about the scale being not too small and not too big, saying that they would deliver the service to as few as 50,000 and as many as 500,000 people. They said their goal “is to experiment with new ways to help make Internet access better and faster for everyone” and they specifically called out enabling developers to come up with next generation apps, test new deployment techniques that they would share with the world, and provide openness and choice, managing the network in an open, non-discriminatory, transparent way and giving users a choice of multiple service providers.

They seemed (at least initially) to not want to offend existing broadband providers, saying “Network providers are making real progress to expand and improve high-speed Internet access, but there’s still more to be done. We don’t think we have all the answers – but through our trial, we hope to make a meaningful contribution to the shared goal of delivering faster and better Internet for everyone.”

With that initial announcement, they invited communities to express interest and more than 1000 did, with many doing crazy things to try to win the network for their community. I live in the Kansas City area (the winning city), and although Google Fiber is not yet available in my neighborhood, it has been a big catalyst for innovation across the metro area.

As has been well documented, Google’s entry into broadband also forced the existing broadband providers to improve their offers (speed, capabilities, and/or price). As Google Fiber has pushed into new neighborhoods and suburbs, the competitors have had to respond. Google is coming to my neighborhood this year, and that has caused AT&T to expedite construction on their GigaPower infrastructure and Time Warner to build out outdoor WiFi using streetlight-mounted antennas. Everyone is offering special deals with multi-year commitments. We’ve seen similar competitive responses as Google has announced Fiber projects in additional cities.

Of course, Google Fiber is no longer a friendly, sub-scale experiment intended to help the broadband providers. In December 2012, Eric Schmidt said “It’s actually not an experiment; we’re actually running it as a business,” and he announced expansion to additional cities.

As with Google’s other telecom initiatives, the primary focus continues to be the core business. Google Fiber, both directly and indirectly, is driving more overall Internet use, and that helps drive Google’s services and advertising revenues. It’s also important to note that Google has traditionally not had a strong relationship with broadband providers, so they likely felt free to take a more disruptive approach to the market than with Android and Nexus.

In my next post, we’ll combine this historical perspective with Pichai’s comments and with an understanding of the challenges that MVNOs traditionally face, and try to speculate on what a Google MVNO might actually look like.

What Did Google Really Say?

March 12th, 2015

Especially over the last week or so, one of the big topics of discussion across the mobile ecosystem has been that Google finally confirmed that they DO plan to launch some kind of wireless MVNO. Over the next few days, I’d like to share my perspectives on this news, starting this morning with a quick review of what was actually said and what I think was noteworthy about those statements.

Last week, Sundar Pichai gave a keynote speech at Mobile World Congress in Barcelona. In his speech, he talked about Google’s core services, then about Android, but he spent most of his time talking about connectivity – Google Fiber, Project Link, Project Loon, and Project Titan. Then he sat down for a 20 minute interview with Bloomberg Businessweek’s Brad Stone.

For the first 10 minutes, Stone tossed him softball questions, mostly about Android. Then Stone said “There have been reports in the press that Google is talking to wireless carriers about a Google branded network, also called an MVNO, what can you tell us about those talks?” For the next four minutes, they went back and forth on this topic.

Obviously Pichai was ready for the question and started with a well crafted response. Interestingly, he went back to Android, and then he talked about Google’s Nexus devices before he ever got around to talking about their MVNO plans. In fact, at the end of his Android/Nexus discussion he said “That’s the context in which we are thinking about it.”

I’ll talk more about Android and Nexus in a future post, but I think the key points that he made about these as setting the context for Google’s MVNO plans are:

  • That Android has always been an ecosystem play, working with partners.
  • When they introduced Nexus, they did it in partnership with OEM partners.
  • They are very cautious to not compete with their OEM partners, and part of that, he said, was doing Nexus at a scale large enough to have an impact, but small enough to not be threatening to OEMs.
  • Google always tries to push the boundary of what’s next. He said that all innovations in computing happen at the intersection of hardware and software, and that Google felt they needed to do Nexus so that they could work very closely with both the hardware and software in order to push the innovation.
  • He made the case that “we are at a stage where it’s important to think about hardware, software, and connectivity together” – they want to experiment at that intersection, just as they have with the intersection of just hardware and software.

With that as context, Pichai then provided a little more (but not much) information about their plans, mostly within the context that he had already set:

  • They clearly don’t want to mess up their carrier relationships. He wanted to clearly communicate that their intent is NOT to compete with the carriers, but to experiment in order to “help” them.
  • Google is working with carrier partners for this project. The carrier partners will actually provide the service. (BTW – that could mean a few different things, which I’ll get to in a future post.)
  • They will operate this at large enough scale that people will see whether the experiments work (and hopefully carriers will adopt the ideas), but still at small scale so it won’t be a threat to carriers.

Stone specifically asked if this was about “more innovation and lower prices when it comes to mobile networks” and Pichai’s response was that Google is trying to accomplish something a bit different. He then gave a couple of examples:

  • Making the experience seamless for WiFi and cellular network interoperability.
  • Automatically reconnecting a call when it drops.

Both of these examples seem to imply a traditional smartphone use case, but earlier he had specifically pointed to IoT examples such as a connected watch or Android Auto and said that they want to be able to experiment along those lines.

That’s what Google really said. Over the next few posts, I’ll try to translate that into what it might mean for the industry.

Zoomin Market Revolutionizing Grocery Shopping

March 6th, 2015

This week, in my Kauffman FastTrac class, our guest speakers were John Yerkes and Matt Rider, the founders of Zoomin Market. John literally grew up in the grocery industry, while Matt cut his teeth optimizing and redefining logistics and reverse logistics processes for multiple companies across the wireless industry ecosystem. Together, they saw the opportunity to fundamentally redefine how Americans shop for groceries.

While drive-in grocery stores are popular in Europe, Zoomin is the first drive-in grocery store in the United States, and it’s all enabled by mobile technology. The disruptive threat is so significant that WalMart has been watching the company’s every step.

So, what is a drive-in grocery store? The process is fairly simple. You shop online, filling your virtual cart with groceries. You pick a time you want to pick them up, then you complete the transaction and drive to the store. A server brings your groceries to your car and, in minutes, you’re on the way home.

For Zoomin, more than half of their orders are coming from mobile devices, and all of their employees are using tablets to fulfill the orders. Like any grocery store, Zoomin has four environmental zones for foods ranging from frozen to fresh produce, but unlike walk-in stores, the company doesn’t need to keep shelves over-stocked and decorated to appeal to the shopper’s eye. They’ve studied Amazon’s stocking system for efficiency (Matt says “let the geniuses be geniuses” and focus on what you’re great at). They use the same wholesalers as their traditional competitors, so their selection and their cost of goods are comparable. However, they can operate in a much smaller building, with much less inventory, and significantly fewer employees than the stores they’re competing against. They’ve chosen to price competitively with no pickup fees (unlike European companies), using their cost advantage to drive richer margins.

Speaking of employees, company culture is very important to both John and Matt. Delighting customers is important to them, and they gave a number of examples, from surprising a customer with a product she wanted and didn’t think they had (for free), to greeting the dog of a regular customer with his favorite treat (set aside just for him) each time they pull in for their pickup. All of this, of course, is enabled by the mobile technology that makes it easy for employees to make notes so that each time you pull in they know you better and can serve you better. When asked about their hiring practices, John smiled and explained that they hire “pickers” and “grinners.” “Pickers” are detail-oriented perfectionists who make sure that the order is filled correctly and with the quality that delights customers. “Grinners” deliver the order to the customers and establish that strong connection that makes them feel special and appreciated. But to fit into the Zoomin culture, all the employees have to know how to have fun!

I happened to have a meeting near their store on Tuesday, so I set my wife up with their website and offered to pick up her groceries. She found it easy to place the order. If you want to get your food as quickly as possible, Zoomin says it will be ready in 30 minutes, but we picked a future timeslot after my meeting and I got a notification well in advance that everything was ready whenever I could arrive. When you pull into Zoomin, you either text them to let them know you’ve arrived, or you enter your 5-digit order code at a touch-screen kiosk. Either way, you are then assigned one of the 10 covered pull-through stalls. One of the Zoomin staff rolls out a cart with your shopping bags and loads your car for you, and you are on your way. John and Matt said that the average in-and-out time for customers is about two and a half minutes. Because of that, the store is drawing customers from a much broader geography than a typical grocery store (customers trading dramatically less time in the store for a little more driving time).

In class, I had asked John and Matt about produce. They said they love that question because everyone’s first reaction is that you’ll never buy produce that you can’t pick yourself. In reality, produce is their top-selling category, so in our order, we bought a lot of produce. My wife loved the fact that she could order bananas as either green, ripe, or spotty and she could order avocados as ripe or firm. Most of what we got was fine, but some of the items, although not technically “bad,” are probably different from what we would have picked. (For example, we bought a potato and what we got was the biggest potato I’ve ever seen – a bargain since the price was 79 cents no matter the size – but actually almost a bit scary and not one we would’ve picked.) Also, when the groceries were brought to my car, the Zoomin employee explained that when they went to pull the white organic mushrooms that my wife ordered, they didn’t look good, so they could give us white non-organic (and credit the price difference) or brown organic ones instead. I picked the brown ones – and proved that even when I’m just picking up the groceries, I can still buy the wrong item. :) Which reminds me of another of the benefits that reviewers have identified with Zoomin – the elimination of impulse buying of unneeded items. (Ever since our son and I came home with the purple mustard and green ketchup that we thought was so cool, my wife has hesitated to send us to the store together…)

But back to how disruptive this concept can be to the grocery industry. As I mentioned above, Zoomin’s costs are dramatically lower than their competitors in key areas (real estate, inventory, head count). In Europe, many retailers have had to add a drive-in option for their customers, but this requires them to ADD to their building and hire MORE employees, while still maintaining all of the costs for their continuing traditional customers. If this model is successful in the U.S., it will be hard for existing grocers to respond. Which explains why WalMart is so interested in what Zoomin is up to. The week they opened, a handful of WalMart executives showed up with hopes of studying their operation (John and Matt met with them in the church next door instead). A few months later they found a local engineer poking around outside of their building with a clipboard and flashlight. He said that WalMart had hired him to figure out how Zoomin had implemented their refrigeration system. Last Fall, WalMart opened a test concept drive-in store in Bentonville, Arkansas.

It seems to me that John and Matt have thoughtfully implemented a defensible strategy. Convenience, friendliness, and a dramatically better cost structure will be tough, even for WalMart, to match.

If you want to try out Zoomin, be sure to use the coupon code FIRSTZOOM to save $5 off your first order.

Net Neutrality: The Anguish of Mediocrity

February 28th, 2015

It is rare for me to be on the same side of an issue as AT&T and Verizon and on the opposite side of Sprint and T-Mobile, but I think the new Net Neutrality rules that the FCC adopted this week are a mistake that will hurt consumers and the telecom industry.

I won’t take the time to go point-by-point through the various elements of the new rules. Plenty of people smarter than me on regulatory topics have written about that elsewhere. The two aspects that really have me concerned are:

  1. the inability to prioritize paid traffic
  2. the inability to impair or degrade traffic based on content, applications, etc.

I believe that these restrictions will lead to networks that will perform much more poorly than they need to.

The Importance of Prioritization

Thirteen years ago, while I was chief strategist for TeleChoice, I wrote a whitepaper using some tools that we had developed to evaluate the cost to build a network to handle the traffic that would be generated by increasingly fast broadband access networks.

In the paper, I wrote: “ATM, Frame Relay, and now MPLS have enabled carriers to have their customers prioritize traffic, which in turn gives the carriers more options in sizing their networks, however, customers have failed to seriously confront properly categorizing their traffic. There has been no need to because there was no penalty for just saying ‘It’s all important.’”

With the new rules, the FCC ensures that this will continue to be the case.

Think about it. If you live in a city that suffers from heavy highway traffic and you’re sitting in slow traffic watching a few cars zip along in the HOV lane, don’t you wish you were allowed into that lane? Of course you do. Hopefully it even gets you to consider making the change necessary to use that lane. Why do HOV lanes even exist? Because it was deemed a positive outcome for everyone if more people would carpool to reduce the overall traffic. Reducing overall traffic has many benefits, including reducing the money that must be spent to make the highway big enough to handle the load, while at the same time improving the highway experience for all travelers.

Continuing the analogy, if you’re sitting in slow traffic and you see an ambulance with its lights flashing driving up the shoulder to get a patient to the hospital, do you consider it an unfair use of highway resources that you aren’t allowed to use yourself? Hopefully not. You recognize that this is a particular use case that requires different handling.

Finally, extending the analogy one more time, as you’re sitting in that traffic (on a free highway) and you look over and see traffic zipping along on the expensive toll road that parallels the free highway, do you consider whether you can afford to switch to the toll road? I bet you at least think about it.

Analogies always break down at some point, so let me transition into explaining the problem that the new rules impose on all of us. Networks, like highways, have to be built with enough capacity to provide an acceptable level of service during peak traffic. Data access networks, unlike highways, have traffic levels that are very dynamic, with sudden spikes and troughs that last seconds or less. Like highways, all telecommunications networks have predictable busy-hour patterns; unlike highways, the network user experience can be dramatically impacted by a sudden influx of traffic. This requires network operators to build enough capacity to handle the peak seconds and peak minutes reasonably well rather than just the peak hour.

Different network applications respond differently to network congestion. An e-mail that arrives in 30 seconds instead of 20 seconds will rarely (if ever) be noticed. A web page that loads in 5 seconds instead of 4 seconds will be easily forgiven. Video streaming of recorded content can be buffered to handle reasonable variations in network performance. But if a voice or video packet during a live conversation is delayed a few seconds, it can dramatically impact the user experience.

Thirteen years ago, I argued that failing to provide the right incentives for prioritizing traffic to take into account these differences could require 40% more investment in network capacity than if prioritization were enabled. In an industry that spends tens of billions of dollars each year in capacity, that’s a lot of money.
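To make the intuition concrete, here is a back-of-the-envelope illustration using purely hypothetical numbers (the traffic mix and utilization targets below are my assumptions for illustration, not figures from the original whitepaper). Suppose 20% of the load is latency-sensitive voice and video that needs links running at no more than 50% utilization to keep queuing delay low, while the other 80% is delay-tolerant and could run at 80% utilization. Without prioritization, every packet must get the strictest treatment; with prioritization, only the sensitive share needs the headroom:

\[
C_{\text{no-prio}} = \frac{L}{0.5} = 2.0\,L,
\qquad
C_{\text{prio}} = \frac{0.2\,L}{0.5} + \frac{0.8\,L}{0.8} = 1.4\,L,
\qquad
\frac{2.0\,L - 1.4\,L}{1.4\,L} \approx 43\%.
\]

Under these assumed numbers, the un-prioritized network needs roughly 40% more capacity to deliver the same user experience, which is the flavor of result the modeling pointed to.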

Why The New Rules Hurt Consumers and the Industry

Is the industry going to continue to invest in capacity? Yes. But the amount of revenue they can get from that capacity will place natural limits on how much investment they will make. And, without prioritization, for any given level of network investment, the experience that the user enjoys will be dramatically less acceptable than it could be.

Let’s just quickly look at the two approaches to prioritization I called out above that the new rules block.

Paid prioritization is a business mechanism for ensuring that end applications have the right performance to create the value implied by the end service provider. This is the toll road analogy, but probably a better analogy is when a supplier chooses to ship via air, train, truck, or ship. If what I’m promising is fresh seafood, I’d better put it on an airplane. If what I’m promising is inexpensive canned goods with a shelf life of years, I will choose the least expensive shipping method. Paid prioritization enables some service providers (e.g. Netflix or Skype) to offer a level of service that customers value and are willing to pay for that requires better than mediocre network performance, and for the service provider to pay for that better network performance to ensure that their customers get what they expect. The service provider (e.g. Netflix or Skype) builds their business model balancing the revenue from their customers with the cost of offering the service. This approach provides additional revenue to the network operators enabling them to invest in more capacity that benefits all customers.

Impairing or degrading traffic based on content or application is a technical mechanism that enables the network to handle traffic differently based on the performance requirements of the content or application. An e-mail can be delayed a few seconds so that a voice or video call can be handled without delay. This allows the capacity in the network to provide an optimized experience for all users.
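As a rough sketch of the mechanism described above, here is a minimal strict-priority queue of the kind this sort of traffic handling implies. The class names and traffic categories are my own illustrative choices, not any operator’s actual implementation; real networks use far more sophisticated schedulers (weighted fair queuing, DSCP marking, and so on).

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Priority classes: lower number = served first.
REALTIME = 0   # voice/video call packets (delay-sensitive)
BULK = 1       # e-mail, background sync (delay-tolerant)

@dataclass(order=True)
class Packet:
    priority: int
    seq: int                            # tie-breaker preserves FIFO within a class
    payload: str = field(compare=False)

class StrictPriorityScheduler:
    """Serve all queued real-time packets before any bulk packet."""
    def __init__(self):
        self._queue = []
        self._seq = count()

    def enqueue(self, priority, payload):
        heapq.heappush(self._queue, Packet(priority, next(self._seq), payload))

    def dequeue(self):
        return heapq.heappop(self._queue).payload if self._queue else None

# Usage: bulk traffic arrives first, but the voice packet jumps the line.
sched = StrictPriorityScheduler()
sched.enqueue(BULK, "email chunk 1")
sched.enqueue(BULK, "email chunk 2")
sched.enqueue(REALTIME, "voice frame 17")
print(sched.dequeue())  # -> "voice frame 17"
print(sched.dequeue())  # -> "email chunk 1"
```

The point of the sketch is simply that delaying the e-mail chunk by one scheduling slot is invisible to the user, while letting the voice frame wait behind bulk traffic is not.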

Obviously, these mechanisms provide opportunities for abuse by the network operators, but to forbid them outright, I believe, is damaging to the industry and to consumers, and a mistake.

The Intelligence Revolution for Churches (Part 2)

February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past several posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. I’ve provided a working definition (The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content), I’ve identified the “power” and the “danger” of the Intelligence Revolution, and in the last post I started to answer the question of what the Intelligence Revolution will mean for each of our churches. However, last month’s column used a specific example to demonstrate the risks we face if we are too aggressive in collecting and correlating data about our congregants. What are the more positive ways that large churches can consider using big data?

Revisiting the Danger

Last month I started by making the point that most churches are too small to ever have the data or the capabilities to fully participate in the Intelligence Revolution. But to consider how large churches could potentially leverage big data, I referenced an article by Michael D. Gutzler in the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” As I introduced the approach that Pastor Gutzler advocates, I’m guessing that many of you became increasingly uncomfortable. His approach would correlate personal information (including derived assumptions about personal income) with giving, attendance, and commitment to spiritual growth, amongst other data points. His goal was to identify the actions that the church could successfully take for specific families to draw them more deeply into the church.

A few weeks ago, I discussed the article with a Christian friend who has been the data scientist for a major retailer, the chief data scientist for a big data consultancy, and is currently the manager of data analysis for a major web-based service. The approach Pastor Gutzler outlined concerned her, I think in large part because of its reliance on personally identifiable information (PII). Increasingly, regulations are being crafted and enacted to protect PII, especially in light of the growing threat of fraud and identity theft. The high profile cases of credit card data theft from retailers, e-mail and password theft from online sites, and the very broad theft of information from Sony should make it clear to all of us that we risk the reputation of our churches (and by extension, Christ Himself) the more that we collect, store, and correlate information about people that can be personally linked back to them and potentially used to their detriment. But I think she was, as many of us were, also concerned by the types of information being collected and the inferences being made from it. Would we be embarrassed if our constituents found out about the information we’re collecting and how we are using it? If so, then our actions likely aren’t bringing glory to God.

Searching for the Power

Then is there anything good that the Intelligence Revolution can do for large churches? The answer will depend on the church, but I think there’s some potential.

Whenever I talk to businesses about the Intelligence Revolution, I emphasize that they should start with the mission of their business. Is there any data that, if available, could help them to better serve their customers in accomplishing their mission? Likewise, each of us should start with the mission of our church. I know there are different views on the mission of the church, so I won’t try to lay out a comprehensive definition that all readers can agree to, but I’m guessing we all can agree that the Great Commission is at least an important part of the church’s mission. In their book What is the Mission of the Church?, Kevin DeYoung and Greg Gilbert summarized it down simply to this: “the mission of the church – as seen in the Great Commissions, the early church in Acts, and the life of the apostle Paul – is to win people to Christ and build them up in Christ.” This follows directly from Christ’s own words in Matthew 28:18-20: “All authority has been given to Me in heaven and on earth. Go therefore and make disciples of all the nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit, teaching them to observe all things that I have commanded you; and lo, I am with you always, even to the end of the age.”

If we just start with this as at least part of the mission of the church, what data could help us in our Gospel outreach efforts, and what data would help us to build our people up in Christ? Many churches reflect these two dimensions as the outward-facing and inward-facing aspects of their mission, and I’m guessing that the data we could use will correspondingly come from outward and inward sources.

For decades, churches have used external sources of data to learn more about their city and how they can best reach the unchurched and the lost. The Intelligence Revolution is rapidly increasing the sources of data that are available. Demographics, crime data, addresses of certain types of businesses and facilities: all of these are becoming increasingly available and searchable. George Barna, who has long been a source of information for the church on national and global trends, has even introduced customized reports on 117 cities and 48 states.

However, to help our congregants grow in their knowledge of God and their ability to observe all that Christ commanded, we likely need to look inside – at the data that we have about our own people. What are their abilities? What are their desires? Where do they live and work? In what ways and in what settings do we touch them today? How do we leverage these opportunities and create additional ones to build them up in Christ? If we have a large enough population, we should be able to anonymize the data for our analysis and decision making. On an aggregate basis, what do we know about the people who attend the early worship service and how should that affect our interactions with them there? What do we know about those in our singles ministry and what opportunities can we create for that group to help them mature and grow?
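For churches large enough to do this kind of analysis, here is a minimal sketch of what aggregate, anonymized analysis might look like. The column names and the pandas-based approach are my own assumptions for illustration; the point is simply that the questions above can be answered from counts over coarse attributes rather than from individually identifiable records.

```python
import pandas as pd

# Hypothetical, already-anonymized attendance records: no names, no addresses,
# just coarse attributes that are hard to trace back to an individual.
records = pd.DataFrame({
    "service":  ["early", "early", "late", "late", "early", "late"],
    "age_band": ["18-29", "30-49", "30-49", "50-64", "65+",   "18-29"],
    "ministry": ["singles", "families", "families", "choir", "choir", "singles"],
})

# Aggregate view: how do age bands break down across the two services?
by_service = (
    records.groupby(["service", "age_band"])
           .size()
           .rename("count")
           .reset_index()
)
print(by_service)

# Aggregate view: participation by ministry, useful for planning opportunities
# for a group (e.g. the singles ministry) without profiling any one person.
print(records["ministry"].value_counts())
```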

Obviously, this isn’t fundamentally different from how we make decisions today, but the potential promised by the Intelligence Revolution is that we will have more data and greater ability to work with it, so that we can be more precise and make decisions with greater confidence, helping our churches be more successful in achieving our mission, all to the glory of God.

The Intelligence Revolution for Churches (Part 1)

February 24th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Over the past few posts I’ve introduced the Intelligence Revolution and put it in the context of the broader Information Age. Three posts ago I provided this working definition: The Intelligence Revolution will help us better understand the world around us; will improve our decision making to enhance our health, safety, and peace of mind; and will enable companies to better serve us based on the correlation and analysis of data from the interrelation of people, things, and content. Over the past two posts I’ve identified the “power” and the “danger” of the Intelligence Revolution. This article will address the question that you’ve probably been pondering over the past several months – what will the Intelligence Revolution mean for my church?

Different Kinds of Churches

To be honest, I doubt that the Intelligence Revolution will ever significantly impact how many (most?) churches go about serving the Lord. According to the 2010 Religious Congregations and Membership Survey, there are nearly 333 thousand Christian congregations serving over 144 million adherents (adherents is the broadest measure of people associated with a congregation – this represents nearly half of the U.S. population). The simple math tells us that there’s an average of 432 adherents per congregation. In reality, most churches are much smaller than that. According to the 2012 National Congregations Study, the median number of people associated in any way with a congregation is 135 and the median number of attendees at the main worship service is 60. The Intelligence Revolution derives value from “big data” analysis, and with groups of people this small, there simply won’t be data that is big in volume, velocity, or variety. At churches this size, there also tends not to be the resources to do fancy analysis of whatever data might be available.

Bottom line, these churches will keep doing what they’ve always done, serving the Lord and serving their communities in Christ. I attend a small church. We don’t need fancy data analysis tools to understand the people we serve, because we have deep personal relationships within the body. We know each other’s needs, gifts, and lives. We adapt as new needs arise (as new families arrive or changes happen within families), as new gifts and talents emerge, and as we grow closer to each other in growing closer to the Lord. Just as PCs, the Internet, the smartphone, and social media have provided tools that enhance what we do and make it easier to do it, I expect that the Intelligence Revolution will provide some tools that will make it easier to see the geographic distribution of our families, the concentrations of ages that we serve, and the participation we have in different ministries, but that is simply putting a precise point on the facts that we already inherently know because we know our own small population.

Can Big Churches Benefit From Big Data?

Michael D. Gutzler wrote an eye opening article for the Spring 2014 issue of Dialog: A Journal of Theology. In the article, titled “Big Data and the 21st Century Church,” the Lutheran pastor made the claim that “data collection and analysis could be the key to providing a deeper faith life to the people of our congregational communities.” While we’ve talked about the dangers of collecting personal information in previous articles, Pastor Gutzler says “I would suggest for those working in the life of the church there is a higher calling to data analysis: to help the participants in a community of faith come to a greater understanding of God’s forgiveness, grace and love.”

As his starting framework, Pastor Gutzler rests upon the Circles of Commitment model promoted by Saddleback Church and documented in Rick Warren’s The Purpose Driven Church. The goal for church leaders, in Pastor Gutzler’s model, is to move adherents from being in the unchurched community to the crowd of regular attenders to the congregation of members to the committed maturing members and finally into the core of lay ministers. To accomplish this goal, church leadership analyzes data about each family and family member in the congregation, correlating that data with participation in specific events and activities, examining historical trends, and from that, making wise decisions.

For example, does participation in a given event or activity correlate with increased commitment to the church, no change, or actually a moving away from the core? Do the answers differ based on the current circle of commitment of different families participating? Should we do more events/activities like this or scrap them altogether? Should we target them towards specific families rather than broadly offering them to the entire congregation?

Pastor Gutzler even argues for targeting the sermon message differently for each circle of commitment. He uses the example of a sermon on stewardship: “A better way to approach the subject would be to give one general message about what stewardship is, but have illustrations that speak to each circle. Then, to emphasize the message, a follow-up communication should be sent to each group that falls into each of the demographics to further emphasize the message’s point.”

Pastor Gutzler identifies five classes of data that most churches are already collecting as being enough to get started in implementing this segmentation, targeting, and analysis-driven decision making:

  • Attendance: at worship, but also at all other church-related events
  • Community Life: tracking the amount of time congregants invest in different church activities
  • Personal Information: Pastor Gutzler makes the point that, with tools like Zillow and salary.com, even simple information like address and occupation can provide significant insights that can be correlated with other sources to indicate the family’s financial commitment to the ministry of the church.
  • Personal Giving: Not just tithes and offerings, but also donations of food, clothing, and responses to other special appeals.
  • Personal Development: Time committed to opportunities to develop and deepen their faith life.

While I respect Pastor Gutzler’s passion for using every tool available to achieve the mission of his church, I fear that he is demonstrating the “grey areas” that I warned about in my last article. Our actions will be scrutinized by the watching world and by our own church members. We are to honor and glorify God, reflecting His attributes in loving and serving those around us. We are not to trust in a mechanical, scientific exercise in data analysis, but we are to trust in the living God who works in mysterious ways, drawing people to Himself.

All that being said, I believe that large churches, especially, do and will have “big data” at their fingertips. Pastor Gutzler’s article may go to an extreme, but by doing so, I think it hints at ways that churches will be able to honorably improve how they serve their congregants while respecting their privacy. We will discuss this more in the next article in this series. I urge you to rely heavily on prayer and the Word of God as you move your churches forward in this coming revolution.

Ten Strategic Issues Facing Mobile Operators

February 23rd, 2015

In a recent consulting engagement, I was asked about the strategic issues facing U.S. mobile operators. I think I answered reasonably well, but it made me realize that the topic deserved a more thoughtful updating based on recent activities. With that in mind, I’d like to provide a high-level outline of what I think are the biggest issues. Each of these could be a future article in and of itself.

1. Duopoly, The Rule of Three, or the Rule of Four
Perhaps the biggest strategic issue being played out right now is one of industry structure. Each quarter, Verizon and AT&T become stronger. Their strong balance sheets, fueled by rich cash flows, enable them to strengthen their hand. Meanwhile, the other two national operators (Sprint and T-Mobile) fight it out for third place. The Rule of Three claims that any market can only support three large generalists, implying that only one of those two can survive. Boston Consulting Group takes it a step further with their Rule of Four implying that perhaps two is the right number. American regulators would apparently block a combination of Sprint and T-Mobile, believing that a market with four competitors is better for consumers than a market with three competitors. But, in the long run, will that ultimately result in the failure of both #3 and #4, and in the short run, will it cause behaviors that damage the entire industry?

2. Wildcards: Google, Dish, América Móvil
Over the past few years, Google has done an admirable job of shaking up the broadband industry with the introduction of Google Fiber. In markets where the company has announced plans to build out local infrastructure, existing competitors have had to respond with improved offers to customers. Now, Google is rumored to be preparing to offer wireless services. Would they have a similar impact on the wireless competitive space, or are the disruptive moves already being introduced by T-Mobile and Sprint significant enough that Google’s impact would be muted? Meanwhile, Dish Network has been spending tens of $billions accumulating a rich treasure chest full of spectrum, which they are obligated to begin building out for wireless services. What will they do and how will that impact the competitive environment? Finally, América Móvil has spent the past few years preparing for a major global strategic shift. They already have a strong foothold in the U.S. prepaid market as an MVNO (TracFone), but their relationship with AT&T has been significantly altered, perhaps positioning them for a more aggressive move into the U.S. Any of these three potential new entrants could have significant impacts on the American mobile market and must factor into the strategic scenarios for the four mobile operators.

3. Licensed versus Unlicensed Spectrum
As we’ll discuss more below, spectrum is the lifeblood of any wireless network. The global mobile industry has been built on licensed spectrum. Licensed spectrum has many advantages over unlicensed spectrum, including the ability to use higher power radios with better signal-to-noise, yielding greater range, throughput, and performance. The lack of unmanaged contention for the airwaves also makes performance predictable and manageable, so each connection is more reliable. The industry has invested hundreds of $billions to build out networks that provide a wireless signal for the vast majority of the U.S. However, the cost to build out a wireless network with unlicensed spectrum is a small fraction of that to build with licensed. Companies offering services with unlicensed spectrum are also unburdened by the regulatory requirements placed on Commercial Mobile Radio Service operators. The Cable MSOs have been most aggressive in shifting their focus from licensed to unlicensed spectrum. After decades of positioning to participate in the traditional cellular industry (winning spectrum in auctions, investing in Clearwire, partnering with Sprint, etc.), in 2012 Comcast, Time Warner, and others sold their licensed spectrum to Verizon and aggressively started building out a nationwide WiFi footprint using unlicensed spectrum. About a month ago, Cablevision introduced their Freewheel WiFi-based smartphone service to compete with mobile operators. Expect others to follow.

4. Spectrum Portfolio
Although mobile operators are toying with unlicensed spectrum, their strategies remain very centered on licensed spectrum. To effectively meet the growing demand for capacity, all operators will need more spectrum of some kind. However, not all spectrum is equal and operators know they need a balanced portfolio. There are a variety of criteria that factor into the attractiveness and utility of any given spectrum, but the easiest to understand is simply whether the spectrum is low-band, mid-band, or high-band. Low-band spectrum has a frequency less than 1GHz and provides the best geographic coverage (the signal travels farther) and in-building penetration (the signal passes more easily through walls). However, at these lower frequencies, there tends to be less spectrum available, and it has generally been made available in smaller channels, limiting the capacity (the amount of bandwidth that can be delivered to customers). High-band spectrum generally has a frequency above about 2.1GHz and, while it lacks the coverage of low-band spectrum, there’s generally more of it and it generally comes in larger channels providing lots of capacity. Mid-band spectrum (between 1GHz and 2.1GHz) provides a compromise – reasonable (but not outstanding) capacity with reasonable (but not outstanding) coverage. In the early 1980s, as the local telephone monopolies covering most of the country, Verizon and AT&T received free 800MHz low-band spectrum in each market they served. In 2008, the FCC auctioned off 700MHz low-band spectrum. Of the national players, only Verizon and AT&T had deep enough pockets to compete and walked away with strengthened low-band spectrum positions. Today, these two have the vast majority of low-band spectrum and T-Mobile and Sprint are hoping that the 2016 600MHz incentive auction will help them begin to balance their portfolios and are demanding that the FCC enact rules to avoid another Verizon/AT&T dominated auction process. All players have reasonable amounts of mid-band spectrum (with AT&T and Verizon again using their strong balance sheets to further strengthen their positions in the recent AWS auctions). The majority of Sprint’s spectrum is high-band 2.5GHz spectrum.
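A rough way to see why low-band spectrum covers more ground is the free-space path loss formula, which grows with frequency (real-world propagation and building penetration add further effects, so treat this as directional rather than precise):

\[
\text{FSPL (dB)} = 20\log_{10}(d_{\text{km}}) + 20\log_{10}(f_{\text{MHz}}) + 32.45
\]

At the same distance, a 2,500 MHz signal suffers about \(20\log_{10}(2500/700) \approx 11\) dB more path loss than a 700 MHz signal, which is a big part of why high-band networks need many more cell sites to match low-band coverage.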

5. Network Technologies
Mobile operators face a number of strategic decisions over the next few years related to network technologies. There are enough uncertainties around the key decisions that each operator has a slightly different strategy. Two of the biggest decisions relate to small cell deployments and migration to Voice over LTE (VoLTE). AT&T has the most comprehensive strategy, built around their broader Velocity IP (VIP) Project, which they hope will free them from much of the regulatory oversight they currently endure in their monopoly wireline footprint and therefore carries tremendous financial incentives. This is driving a relatively aggressive small cell deployment and a moderately aggressive VoLTE plan. Verizon has been the most aggressive of the national players in deploying VoLTE, while (until recently) being the most hesitant to commit to significant small cell deployments.

6. Cash Management

6a. Capital Expenditures
None of this is cheap. It takes deep pockets to acquire spectrum and even deeper pockets to build it out. In a technology-driven industry, new network architectures will always require significant investments. As price wars constrain revenue, while demand for capacity continues its exponential growth, CapEx as a percent of revenue will likely become a significant strategic issue for all operators.

6b. Expense Management
Operating expenses and overall cash flow also can’t be overlooked. Growing demand for capacity and small cell deployments require increasing backhaul spend (although the shift to fiber for macro sites has helped bring that under control for most operators). But the biggest issue will likely continue to be the cost of providing smartphones and tablets to customers. As an illustration of how significant this cost is for a mobile operator, in Sprint’s 2013 Annual Report, the company reported equipment net subsidies of nearly $6B on service revenues of just over $29B (over 20%). In 2012, T-Mobile introduced equipment installment plan (EIP) financing as an alternative to subsidies and early in 2013 announced that it was eliminating all subsidies. Since then, the other three national operators have similarly introduced device financing. From an income statement perspective, this helps T-Mobile’s earnings since the device is accounted for as an upfront sale, typically near full price. However, T-Mobile and their competitors have introduced zero-down, zero-interest (or close to it) terms, and they are discounting the monthly bill for the customer by roughly the same amount as their monthly equipment financing payment to keep the total monthly cost to the customer competitive with the traditional subsidized plans. The net result is that T-Mobile (and their competitors who have all followed suit) are taking on the financing risk without significantly improving their cash flow. For 2014, T-Mobile reported just over $22B in service revenues (a 17% increase over 2013). They also reported equipment sales of $6.8B (a 35% increase and 30% of service revenues). But they also reported the cost of equipment sales at $9.6B (an increase of 38%) and they reported that they financed $5.8B in equipment sales (an increase of 75% over 2013 and 26% of service revenues). As of the end of 2014, T-Mobile had $5.1B in EIP receivables (an increase of 78%). That’s a lot of cash tied up in customer handsets. The strategy has worked in terms of attracting customers to switch to T-Mobile (which is why their competitors have had to respond), but it’s less clear that it’s been financially beneficial for the company in the long run. Verizon, for one, seems unconvinced and has been unenthusiastic about device financing. I believe this will continue to be an area of strategic deliberations at all mobile operators.
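To illustrate why device financing doesn’t automatically improve cash flow, here is a toy comparison with made-up round numbers (not T-Mobile’s or anyone else’s actual pricing): in both models the operator fronts the device cost and recovers it over roughly two years, either through a higher service rate or through installments offset by a service discount.

```python
# Toy comparison of operator cash flow under the subsidy model vs. EIP
# financing. All figures are hypothetical round numbers for illustration,
# not any carrier's actual plan economics.

DEVICE_COST = 600   # what the operator pays the OEM up front
MONTHS = 24

# Subsidy model: customer pays $200 down for the device, $80/month service.
subsidy = [200 - DEVICE_COST + 80] + [80] * (MONTHS - 1)

# EIP model: $0 down, $25/month installment, service discounted to $55/month
# so the customer's total monthly bill stays at the same $80.
eip = [0 - DEVICE_COST + 55 + 25] + [55 + 25] * (MONTHS - 1)

print(f"month-0 cash: subsidy {subsidy[0]}, eip {eip[0]}")        # -320 vs -520
print(f"24-month cash: subsidy {sum(subsidy)}, eip {sum(eip)}")   # 1520 vs 1320
# The EIP model books the full device price as revenue up front (helping
# earnings), but the cash still arrives $25 at a time, so the operator carries
# the financing risk without a near-term cash-flow improvement.
```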

7. Plan Types
This shift from subsidized devices is also part of a disruption in how the industry views plan types. For decades, the industry focused on postpaid phone plans. These plans were subsidized, but the customer was locked in for two years, "ensuring" that the operator earned back their up-front investment in the device. Because operators, for the most part, managed this business with appropriate discipline, only prime-credit customers could get a subsidized device, and these tended to be fairly profitable customers. Those that didn't qualify settled for a prepaid plan, purchasing the phone upfront at or near full price, which provided better cash flow early in the customer life but less profitability over time. Eliminating subsidies also eliminates the two-year service contract (although long-term device financing still provides customer lock-in), blurring much of the distinction between postpaid and prepaid. The number of people with multiple wireless devices is also increasing as we carry iPads and other tablets, as automakers integrate wireless connectivity into the cars we drive, and as we move towards a day when virtually any product with a power supply will be wirelessly connected to the Internet. Different operators are taking different approaches to structuring their plans to accommodate these changing customer behaviors within their business models, and I'm sure it will continue to be a topic for internal debate and discussion as the industry's models evolve.

8. Commoditization
In many respects, wireless service is increasingly viewed as a commodity by customers. Operators continue to trumpet their network differentiation, but the general consumer perception is that all operators offer the same devices, in the same ways, and support those devices with networks that work reasonably well just about everywhere we go. Over the past 6 to 12 months, T-Mobile and Sprint have been very aggressive about reducing pricing or offering more for the same price, in a successful effort to take customers away from Verizon and AT&T. Those two larger operators have had to respond with lower prices or increased buckets of data. The operators may deny it, but it sure looks like a commodity market to me, and I imagine that's a discussion happening in each operator's strategic planning meetings.

9. Quad Play or Cord Cutting
For well over a decade, there's been an ongoing strategic debate within the industry about whether a combined wireless and wireline bundle is critical to market success. At times, some players have decided that it will be and have taken action, such as the strategic alliances between cable MSOs and wireless operators (Sprint, Clearwire, and Verizon), or advertising campaigns focused on integration across multiple screens (TV, computer, phone). So far, there's little evidence that it really matters. Consumers take whatever landline voice, broadband, and video services they can get from the duopoly of their cable provider or "telephone" provider, and then choose from a competitive landscape for their mobile needs. For the last few years, it appears that no one in the U.S. industry has seen any need to focus on a quad play future. In fact, the focus has been more on cord cutting and over-the-top players. However, in Europe, a very different story is playing out, and it is driving massive industry consolidation. Especially while wrestling with questions about commoditization, operators will once again question the benefits of a differentiating bundle.

10. Re-intermediation
Another common tactic to combat commoditization is to "move up the stack." In the mobile industry, that would be to "move back up the stack." The introduction of the iPhone, followed by Android devices, led to the disintermediation of the mobile operator from much of the value chain. Prior to the iPhone, operators carefully managed their portfolios of phones, telling OEMs what features to build, and it was the operators who largely drove demand for different devices. Operators collected the vast majority of revenues in the industry, directly charging the customer for the phone, the network service, any applications, any content, and any value added services (such as navigation or entertainment). The iPhone (and then Android) enabled better apps and content, provided a better marketplace for buying them, and provided an open connection to the Internet for a wide variety of over-the-top services. Although the operators had poorly managed the apps/content/services opportunity, and therefore didn't have much "value add" revenue to lose, they clearly lost the opportunity to be more than just the underlying network. Over the past several years, the industry has tried to claw its way back up the stack. Operators pursued "open" strategies, introducing APIs for app developers and other tactics to try to be a "smart pipe" rather than just a "dumb pipe." They have also tried to encroach on other industries by offering new mobile-enabled services, such as mobile payments and home security/automation. These efforts have not yet had meaningful success, although AT&T's progress with Digital Life is promising. If operators want to escape the commodity "dumb pipe" trap, at some point they will need to figure out how to reclaim more of the stack.

Obviously, the mobile industry is dynamic and I expect these 10 topics to drive significant strategic decisions across all operators in the coming months and years. If you’d like to discuss any of these topics, drop me a note.

The Danger of the Intelligence Revolution

February 11th, 2015

I’m continuing here to share a series of articles I’ve written over the past several months for Christian Computing magazine on the Intelligence Revolution.

Every new technology introduces new capabilities that enable us to do things that previously weren’t possible or practical. As technologists, our job is to capture this new power for our organization. But every new technology also creates new potentials that represent risk to ourselves, our families, and the organizations that we serve. As technologists, we are also called on to manage this danger. In this post I’d like to discuss the dangers introduced by the Intelligence Revolution.

Grey Areas

A friend of mine recently asked for my advice. He is pursuing a new career path and faces a decision. Taking one path would position him for systems development opportunities; the other would position him for big data analytics opportunities. Because I believe that the Intelligence Revolution is happening, because I anticipate a continuing shortage of data scientists who can work with big data, and because his personal background and strengths are well aligned with data analysis, I told him that the big data analytics path could create tremendous value for him personally.

But I warned him that pursuing that path may be a challenge for him as a Christian. I believe that it is a path that will pass through many “grey areas” where his moral standards may be challenged.

What do I mean by grey areas? When we’re dealing with information, it’s easy to think of types of information that we should have no problem using (e.g. the user tells us they want us to use that data for our application to personalize results for them), and it’s easy to think of types of information that we know it would be wrong to use (e.g. secretly capturing the keystrokes when a user enters their credit card number and then using that information to make unauthorized charges to the user’s account).

But in reality, there's a lot of information that falls between those extremes. Those of us who run websites rely on log data to optimize our sites. We want to know (on an aggregate basis) which pages get the most views, which pages cause people to leave our site, which external links brought them to our site, and any problem areas that might be causing a bad user experience. Our users want our website to work well, and our privacy policy (hopefully) clearly explains that we're going to use this information in this manner, so this type of information usage is probably just barely creeping from the "white" into the "grey." But what if we use log data to zero in on one user and track their page-by-page journey through our website? In some ways, if our motives are pure and our published privacy policy allows it, this is just like the example above, but it's starting to feel a little creepy, isn't it? And if we take the next step and attach the user's identity (their login id and account information) to this usage pattern, it starts to feel a lot like spying, doesn't it?
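
For those of us who do run websites, a minimal sketch may help make the distinction concrete. The log format and field names below are hypothetical; the point is simply the difference between aggregate reporting and reconstructing one user's journey:

# A minimal sketch of the "white vs. grey" distinction described above.
# The access log here is a hypothetical list of (timestamp, user_id, page)
# tuples; real log formats and field names will differ.
from collections import Counter

log = [
    ("2015-02-10T09:01", "u123", "/home"),
    ("2015-02-10T09:02", "u123", "/pricing"),
    ("2015-02-10T09:03", "u456", "/home"),
    ("2015-02-10T09:05", "u456", "/articles/intelligence-revolution"),
]

# Aggregate analysis: page-view counts across all visitors, with no
# individual identified. This is the kind of reporting most privacy
# policies describe.
page_views = Counter(page for _, _, page in log)
print(page_views.most_common())

# Per-user tracking: reconstructing a single visitor's page-by-page journey.
# Technically trivial, but this is where the grey area begins, especially
# once the user ID is joined to login and account information.
def journey_for(user_id):
    return [(ts, page) for ts, uid, page in log if uid == user_id]

print(journey_for("u123"))

Nothing in the second half of that sketch is harder to write than the first half; the difference lies entirely in motive, disclosure, and what we do with the result.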

Well, some companies do exactly what I've described, and their customers applaud them for it. When I log onto my Amazon account, I'm presented with recommendations based on what I've bought in the past, and even based on items I've simply browsed. Sometimes it feels creepy, but most of the time I'm thankful for the recommendations, which help me find products that will meet my unique needs.

Other companies have been strongly criticized and their customer loyalty has suffered because of their use of similar customer usage information that they were using to improve the customer experience. For example, in 2011, the mobile phone industry suffered a serious black eye when someone discovered that virtually all smartphones had software that collected information about usage and reported it back to the mobile operators. The operators wanted this information because it provided precise location information and information about how well their network worked in each location. That told the operators where their customers went (and where they needed a network) and how well the network actually worked in those places. This enabled better investment decisions so that the operators could provide a better experience for their customers. Unfortunately, the software company (Carrier IQ) that the operators used was collecting information that didn’t seem necessary for the stated goal, and customers weren’t informed about the information being collected and how it was being used. Carrier IQ also didn’t respond well to the situation, all of which forced the mobile operators to remove the software from all their customers’ phones and made it much harder for the operators to provide a good network experience.

What Does That Mean for Us?

Hopefully those examples spell out the danger for us, both as consumers, and as technologists that are tasked with helping our organizations to leverage technology to best serve our constituents.

As consumers, we have to realize that businesses (and governments and others) have more and more information about us – not just what we do online, but in every transaction that we perform with anyone. How that information will be used will not be limited to the ways that we’ve explicitly requested and not even to the ways that companies have told us they would use the information. In a way, I guess, that may serve as encouragement to be “above reproach” in everything we do and perhaps may be a help in restraining sin. We know that God sees everything we do and even knows our heart, which should be motivation enough, but perhaps knowing that companies and men see our actions as well may cause some to act in a more Godly and honorable way. But it’s also rather scary, knowing that, unlike God, men are sinful and companies don’t always act in our best interests.

As technologists, we must view ourselves as wise stewards of the information that we have. Either explicitly or implicitly, those we serve have entrusted us with it and we must protect it and deal with it in an honorable manner, with right motives and a servant’s heart. But, just as Christ explained in the parable of the talents (Matthew 25), we shouldn’t just bury this treasure, we must maximize the value of it for the benefit of those that have entrusted us with it. We must capture the power of information to the good of those we serve and to the glory of God. Key to this will be right motives, transparency, security, and trust.

Mobile Impact Obvious

February 2nd, 2015

As my recent set of posts implies, I'm thinking quite a bit beyond the "mobility revolution." A fascinating article at Wired makes it clear that the impact of mobile has become obvious, and when something is obvious, it's much less interesting to me. (That doesn't mean that execution- and operations-minded folks should ignore mobile – now is the time when the real money is obviously being made…)

Reading this article took me back to early 2012. Facebook's IPO was the big story, and the biggest knock on the company was that it lacked a mobile strategy. Today, more than half of its revenue comes from mobile, and Facebook is lauded as one of the few companies to have figured out mobile. Back then, Facebook wasn't alone. Perhaps setting the tone for the year to come, in late 2011 the world's largest technology company at the time, HP, ousted their CEO, at least in part, for a failed mobile strategy (the company doesn't show up in the Wired piece because they haven't been able to recover a leadership spot in tech). Later, in 2012, Intel's CEO was forced to resign because of a failed mobile strategy. (Like HP, Intel rarely gets mentioned these days when folks talk about the companies leading the technology industry.)

2012 was the wakeup call. 2015 is showing which companies jumped and which hit snooze.