The Role of and Prospects for the Network Digital Twin
https://andoverintel.com/2024/11/20/the-role-of-and-prospects-for-the-network-digital-twin/
Wed, 20 Nov 2024 12:31:43 +0000

Nokia is one of the companies that has addressed the “digital twin” concept explicitly, so when its Bell Labs group does a piece on digital twins in telecom, it’s worth a look. Operators have mentioned digital twins in my chats, too (53 of 88), so I have some comparative data to relate as well.

A “network digital twin” is a virtual representation of a physical network or a portion thereof. Operators tell me that they would likely deploy the technology in what we might call “administrative zones” corresponding to management/ownership scope, vendor, or other subdivisions of gear that could create real partitions that should be reflected in the virtual world too. The goal of the network digital twin is to provide “contextual system” views of a network, views that mirror the real relationship among elements and thus facilitate understanding and even simulating network-wide conditions.
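
To make the zone idea concrete, here’s a minimal sketch of how a twin might be organized, with elements grouped into administrative zones that mirror management or vendor boundaries. It’s purely illustrative; the element names, attributes, and zones are my own inventions, and a real twin would be populated from inventory and management systems.

```python
# An illustrative model of a network digital twin partitioned into administrative
# zones. Element names, zones, and attributes are invented; a real twin would be
# populated from inventory and management systems.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str                                        # "router", "switch", "olt", ...
    vendor: str
    links: list[str] = field(default_factory=list)   # names of adjacent elements

@dataclass
class Zone:
    name: str                                        # management scope, vendor domain, etc.
    elements: dict[str, Element] = field(default_factory=dict)

    def add(self, element: Element) -> None:
        self.elements[element.name] = element

metro_east = Zone("metro-east")
metro_east.add(Element("agg-01", "router", "VendorA", links=["core-01"]))
metro_east.add(Element("olt-17", "olt", "VendorB", links=["agg-01"]))

# The twin is just the collection of zones; links that cross zones mark the real
# partitions the virtual world has to reflect.
twin = {zone.name: zone for zone in [metro_east]}
print({name: list(zone.elements) for name, zone in twin.items()})
```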

The Bell Labs piece identifies 35 use cases for network digital twins, and my 53 operator chats on the topic validate 22 of them, but I think the whole list in the article is credible. The ones that operators missed from the list relate largely to the sales/design phase, and I think their omission is due mostly to the fact that my chats were focused on networks-in-being and operations missions.

I was impressed by the article’s recounting of an actual operator project analysis, which identified a potential savings of around 25% in opex. I only got impact comments from 29 operators, but their estimates of opex impact ran from 20% to a third, so the project analysis fits within the predicted range overall. This combination of information is important in telco evolution, because cost management is critical to preserving profits when revenue growth is constrained by limits in ARPU and TAM. Up to now, operators have run targeted opex projects aimed largely at reducing the need for craft personnel to get involved in customer problems, and that area (say operators) is largely tapped out as a source of savings.

Network digital twins offer what’s essentially a horizontal look at the old FCAPS (fault, configuration, accounting, performance, and security management) story, a way to utilize a virtual model of a network at every step of the game. While the operators I’ve chatted with are still dominantly focused on response to conditions, it’s feasible to build a digital twin of a network that’s still in the planning stage, and to use that to refine capacity plans and simulate various operational states, then actually use the twin to commission real networks and offer real services.
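
As a hedged illustration of the planning and simulation point, a twin that captures even just topology can answer simple what-if questions before anything is commissioned. The sketch below uses the networkx library and an invented four-node topology to check whether a planned link failure would isolate an edge site; a real twin would also model capacity, policy, and element state.

```python
# A toy what-if exercise against topology held in a digital twin. The topology is
# invented; a real twin would also capture capacity, policy, and element state.
import networkx as nx

twin = nx.Graph()
twin.add_edges_from([
    ("edge-A", "metro-1"),
    ("edge-A", "metro-2"),   # planned redundant path
    ("metro-1", "core"),
    ("metro-2", "core"),
])

def still_reachable(graph, failed_link, src, dst):
    """Simulate a single link failure and test whether src can still reach dst."""
    trial = graph.copy()
    trial.remove_edge(*failed_link)
    return nx.has_path(trial, src, dst)

print(still_reachable(twin, ("edge-A", "metro-1"), "edge-A", "core"))  # True: redundancy holds
```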

One challenge for the whole network digital twin concept is shared by digital-twin technology in general; there’s no user confidence in the availability of a platform, tool, language, or whatever for use in building a twin. Only 9 operators indicated they were actively involved with the technology, and only 2 said they had it operational. No one in either of those groups said their digital twin plans or implementations spanned their entire network, and all were currently focused (not surprisingly) on business services and business-specific infrastructure like carrier Ethernet access or MPLS VPNs.

In the group of 53 operators who commented on digital twins, 37 said they believed the benefits of narrow applications like this “would be limited”, but the seven who were looking at digital twins but had not yet made one operational all said they had narrow missions in mind. A big part of that seems to be related to a view that it’s easier to put the twin in place if the network isn’t yet built, because five of the seven had specific comments indicating their targets were greenfield infrastructure.

Among enterprises, things aren’t any better for the network digital twin concept. Out of 414 enterprises I chatted with on operations issues, only 22 mentioned network digital twins and none said they were deploying or even trialing them. I’ve had a few talk with me about blogs I’ve done on the topic, and that group points out that they don’t hear about the concept from vendors, and that they don’t spend a lot of time considering technologies that nobody seems to be trying to sell them. One reason why these few enterprises commented on my blogs at all was to see if I did digital-twin integration, or had a reference supplier. That, to me, shows that enterprises don’t know how to go about network digital twin implementation.

Interestingly, over 50 enterprises are looking at digital twin technology in relation to IoT-oriented industrial/manufacturing or other missions. This group includes two who chatted with me on my blogs, and they didn’t mention their other digital twin missions in relation to their network digital twin interest. I asked about this, and they said that their IoT digital-twin stuff was being driven by the vendor/integrator who was working with them on the application, and in both cases this was a specialty partner and not one promoting digital twin technology broadly.

In a way, the situation with digital twins overall is similar to the situation with programming languages used to build applications. There are a lot of applications and languages out there; the former tend to be promoted by vertical-market integrators/vendors and the latter by nobody in particular. Microsoft’s Azure Digital Twins and AWS IoT TwinMaker are the tools known to enterprises, and IBM’s initiatives are known to most of IBM’s strategic accounts, but all those who have evaluated any of these have done so in the context of IoT.

IBM has a nice tutorial on the IoT applications of digital twins, and you could easily apply the story presented to network digital twins, replacing IoT elements with network management APIs. The IBM comment that “Digital twins are benefitting from the fact that there exists an abundance of machine-generated data, which is a luxury that you don’t have in other data science disciplines” could surely be applied to network operations; in fact there’s probably more network status data and more control interfaces available than there are for most industrial/manufacturing processes. Why aren’t network vendors, particularly enterprise vendors, jumping on this?
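
The sketch below illustrates the machine-generated-data point: the twin stays current by polling management interfaces. The REST endpoint, URL, and response shape here are hypothetical stand-ins; SNMP, streaming telemetry, or vendor-specific APIs would play exactly the same role.

```python
# A hedged sketch of feeding a digital twin from network management APIs. The
# endpoint, authentication, and response shape are hypothetical; the point is only
# that status flows continuously from the real network into the virtual model.
import requests

MGMT_BASE = "https://nms.example.internal/api/v1"   # hypothetical management system

def poll_element_status(element_name: str) -> dict:
    resp = requests.get(f"{MGMT_BASE}/elements/{element_name}/status", timeout=5)
    resp.raise_for_status()
    return resp.json()   # assumed shape: {"oper_state": "up", "cpu": 0.41, ...}

def refresh_twin(twin_state: dict, element_names: list[str]) -> None:
    for name in element_names:
        twin_state[name] = poll_element_status(name)

twin_state: dict = {}
# refresh_twin(twin_state, ["agg-01", "olt-17"])    # would run against a real NMS
```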

Some are, sort of. Juniper announced a Marvis update to extend “AI-Native Digital Twin” capability. Cisco uses digital twin technology in its software update/distribution system. Extreme Networks may be the most explicit vendor promoter of digital twin technology, offering an actual implementation tool/process. Their promotion of the concept goes back to 2022, and they have the most developed online data on it. But so far, none of this from Extreme or Juniper has worked its way out into my chats with enterprises, and Nokia’s approach is the one operators cite. Extreme uses a lot of channel partners for their sales, and I wonder if perhaps the digital twin story requires a bit too much buyer education for most channel players to take on.

I believe digital twins are essential for IoT, for network operations, and for the application of AI to real-world systems of any sort. I think vendors are coming to understand that, but the education process is only now starting to spread to enterprises and even network operators. I think that while network digital twins have almost-universal potential in both these groups, the fastest-growing area of interest is still in enterprise IoT applications, and perhaps this means that edge computing services aimed at IoT applications might translate into operator edge interest in a generalized IoT edge model, which might then spread into network digital twin applications more generally. It’s interesting to note that in the Digital Twin Consortium membership roster (as of today), none of the major network vendors are listed, including those I’ve cited in this blog. I think that if network digital twins are to gain broader support, getting key vendors on that list, and getting network missions worked on, may be essential.

The Security Outlook for 2025, According to Enterprises
https://andoverintel.com/2024/11/14/the-security-outlook-for-2025-according-to-enterprises/
Thu, 14 Nov 2024 12:27:33 +0000

Is network spending now really nothing more than security spending? Obviously not in a total-spending sense, but probably in a capex-growth sense. Of 354 enterprises who commented to me on their 2025 network budgets, 287 said that security capex would grow at an average of 6%, while overall network spending was expected to grow by only 4%. But that doesn’t mean that enterprises are happy with network security technology; of the 287, only 44 said they believed they were getting value proportional to what they spent on security. What does this all mean?

Let’s start with the question of “why is security value questioned?” Of the 243 who questioned the value of their proposed 2025 security spend increase, 183 said the top reason was that vendors were overcharging, and the remaining 60 said that security threats were increasing. Thus, three times as many enterprises believed they were being overcharged for their security gains as thought that threats were overwhelming products.

This issue, then, goes back to something I noted in a blog in January 2024 (when “211 said that they believed they overspent on security”), and again in a blog in October: “CIOs tell me they believe that security is starting to look like a pit they’re expected to toss money into, and that whatever they spend is never enough to satisfy vendors.” A year ago I summarized the threat issues enterprises cited. What I’m hearing is that somehow all the security changes don’t keep up, and that, as the 2023 blog suggested, we need a change in how we think about security.

So why haven’t we gotten it? Here, I think, enterprises are rightfully blaming vendors, who (not surprisingly) tend to think first about their own revenue. If you add a layer to current offerings to address new risks, you can charge for it. If you propose a radical change, you open your accounts to new security offerings from others. You can see how this turns out.

Enterprises do have an idea of what the basis for network security should be; in 2023, 65% said that the network should detect and block all unauthorized access to applications, and this increased to 82% in 2024. But if you go to those who ask for the capability and explain that they’d have to set and maintain “authorized” access policies themselves, and that the strategy would miss security problems created by infected but authorized users, they start to question their own thinking. I don’t have solid data on this, but it appears that if these two points are considered, the block-the-unauthorized strategy loses more than half its support.
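
To show what the block-the-unauthorized model implies operationally, here’s a deliberately tiny sketch. The check itself is trivial; the burden is in maintaining the policy table as roles and devices change, and an infected-but-authorized user still passes. The users and applications are invented.

```python
# A toy illustration of connection-level allow-listing. The check itself is
# trivial; the real burden is keeping the policy table current as roles and
# devices change, and a compromised-but-authorized user still passes.
ALLOWED = {
    ("jsmith", "order-entry"),
    ("jsmith", "crm"),
    ("mdoe", "payroll"),
}   # (user, application) pairs someone has to maintain

def connection_permitted(user: str, application: str) -> bool:
    return (user, application) in ALLOWED

print(connection_permitted("jsmith", "payroll"))   # False: blocked
print(connection_permitted("mdoe", "payroll"))     # True, even if mdoe's laptop is infected
```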

I got 2024 security views from 72 sources that I believe are highly qualified. This group identified three distinct risk areas that, they say, each likely demand at least some individual security tool attention. Let’s look at them.

The first one was the risk of the hijacked or infected client. Most security tools can really only authenticate users, and so they’re bypassed if a user can be impersonated or contaminated. The problem here is that you have to authenticate either the human user or the client device itself, and both are difficult to pin down for identification purposes. Most companies don’t use biometrics for user identification, and absent that you’re back to user IDs and passwords, which many write down or share. Users often access their applications from home or on the road, so there’s no reliable 1:1 relationship between person and device, and no easy way to ensure that users who get a new device won’t find themselves cut off.

The second risk was that of accidental API exposure. One of the new challenges of security is created by componentization of applications, which requires network connections to “internal” APIs. If these APIs are exposed beyond the intended connectivity, they can allow a hacker to bypass traditional access-level security. Two thirds of enterprises admit that they aren’t sure exactly what internal APIs might be addressable on their networks, or even from the Internet, and almost the same number admit to having no plan for using address space management to control accessibility.
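
A hedged sketch of the first step most of those two-thirds haven’t taken: probe whether internal API ports actually answer from a vantage point that shouldn’t have access. The host names and ports are placeholders, and something like this should only be run against infrastructure you own.

```python
# A minimal reachability probe for internal API endpoints. Host names and ports
# are placeholders; run something like this only against infrastructure you own,
# from a vantage point that should NOT have access (e.g., outside the app subnet).
import socket

SUSPECT_ENDPOINTS = [
    ("billing-api.internal.example", 8443),
    ("inventory-api.internal.example", 8080),
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SUSPECT_ENDPOINTS:
    status = "REACHABLE - investigate" if is_reachable(host, port) else "not reachable"
    print(f"{host}:{port} {status}")
```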

The third risk was platform software vulnerabilities and exploits. Here, “platform software” means the software that’s used to sustain the operating environment of applications, including operating systems, middleware, management tools, and even security tools. Hacker gangs interested in getting to the largest possible number of targets are likely to look for these, and it’s very difficult to identify an attack on a platform tool until the exploit becomes known. Then you have to worry about how to remedy the problem for all the platform users.

What magical tool fixes all these things? The expert enterprises agree that none do. What’s needed? Of the 72 sources, 61 said the same thing; more attention to security practices than to security tools. According to this sub-group, the big problem with security over-spend is that management often sees tools as an alternative to proper staffing, and many security vendors will make this point in a sales pitch. The problem is that it doesn’t work.

This group also points out that the problem with the network permitting only authorized connections is also one of human effort and resources. Of 11 who say that they actually enforce connection security, 100% say it requires “a lot” of effort to keep the authorized list maintained and to update how authorized users are recognized as roles and devices change. It’s worth it, though, because this group of 11 reports less than a quarter the number of security issues per enterprise as the full 354 enterprises who offered security comments. And, while this group of 11 said they were going to increase incremental security personnel costs by 3%, the full 354 postulated no 2025 staffing cost increase for security beyond the enterprise-wide expected payroll increases.

A final interesting point was that among this group of 72, 18 said that they didn’t need any specialized security tools or products at all to meet their companies’ goals. Tuning development and deployment practices alone was enough. For this group, virus scanning and firewalls were sufficient. I think that’s likely due to a specialized situation at these companies, but I also think that it’s an indication that we really do need either a new approach to security or a shift of focus from layering tools to improving security practices overall.

Early Enterprise Thoughts on AI Budget Impact in 2025
https://andoverintel.com/2024/11/13/early-enterprise-thoughts-on-ai-budget-impact-in-2025/
Wed, 13 Nov 2024 12:42:27 +0000

Every year, most enterprises and telcos undertake a technology assessment intended to drive the budgeting for the following year. This can run between mid-September and mid-November, and most of these reviews are now complete. I’ve gotten a picture of the results of 308 enterprise reviews and 54 network operator reviews, and so I can now take a look at the issues, both with regard to specific technologies and the enterprise/operator grouping. I’m going to start this with a look at AI.

One thing that’s clear is that you can divide enterprises into two groups. The largest by far (274 of 308) is the “AI realist” group; the other is less influenced by active AI planning and thus tends to let media stories on AI shape its vision of future use. In the first group, 41 say that “generative AI” or “large-language models” are critical, and in the latter, all 34 are generative-AI-focused.

Let me dispose of this last group first. Of the 34, 11 see AI dominantly in customer support or personal productivity roles, and so actually need LLM technology. The remainder have ceded AI decisions to line organizations, and are experiencing AI primarily through public services from companies like Microsoft or OpenAI, or in search applications. Only two of this group expressed any interest in self-hosting AI, and this group didn’t express any specific commitments regarding 2025 spending on AI; “stay the course” was the summary view.

In our first group, the AI realists, the 41 who see generative AI as critical are all looking at self-hosting, and of this group 29 are looking at AI in customer-facing chatbot missions. The other 233 are looking at missions they don’t see as generative AI missions at all, but rather as “AI” or “machine learning” (ML). Two missions dominate this group, business intelligence (199), and real-time/edge (78) applications linked to IoT.

The reason this critical group of 233 doesn’t toe the generative AI line is not so much the resource issues associated with LLM training and usage (they’re a factor for only the 78 who are looking at edge missions) but a lack of utility of generative AI in the missions they’re seeing. For nearly all, their focus has evolved from “generative AI service” to “self-hosted LLMs” to small language models and machine learning applications, largely because both of the missions that dominate the group are based on analysis of a very limited amount of data, whereas “generative AI” is usually trained on a vast amount, via the Internet or social-media interactions. We’ll get to how this might (the qualifier is important) change in the future, at a later point in this blog.

I get the sense that SLMs are becoming the preferred AI/ML platform, though only about a quarter of my 233 make that point explicitly. The challenge for AI today may well be that just about everything being published online and promoted that’s related to AI is really related to LLMs, and just about everything that’s important to LLM evolution, like RAG, is not much of a factor to SLMs. If you’d like to read a nice piece on SLM technology, check THIS out (the full survey paper is HERE). Of the 233 companies, 188 said they were led to SLMs by a trusted strategic vendor, and of that number, 72 said they didn’t at first even realize they had shifted to an SLM focus until they were heavily involved in planning or trials.

Within the 233, there were 58 enterprises who seemed thoroughly familiar with SLM-based AI. These people say that the reason for SLM focus is simple; businesses aren’t run through some single uber-mind thinking and planning, but as a connected set of activities that are locally handled, and whose outcomes are then fed upward. Each activity, and each combining layer above it, is really an SLM application, they say. Focusing on SLMs means that the AI elements are less demanding in terms of both training and running, and that keeping data in house for sovereignty reasons is facilitated. This is what AI types mean by “domain-specific applications”.
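
To ground what an SLM focus looks like in practice, here’s a minimal local-inference sketch using the Hugging Face transformers pipeline. The model choice and prompt are my own illustrative assumptions, not anything the enterprises cited; substitute whatever small model a vendor or evaluation actually points to.

```python
# A minimal local-inference sketch with a small language model. The model name is
# an illustrative assumption (any SLM published on the Hugging Face hub would do),
# and the prompt is invented; the first run will download the model weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",   # assumed choice of small model
    device_map="auto",
)

prompt = "Summarize yesterday's line-level scrap-rate anomalies for the plant manager:"
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```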

What’s particularly interesting here is the view of this group of 233 enterprises on “personal productivity” missions for AI. The point made by 175 of that group was that while basic generative AI support might be fine for helping with generic documents, emails, and so forth, this wasn’t really their goal. They point out that productivity benefits are proportional to the unit value of labor of the targets, and that those with higher unit values of labor tend to have them because of special skill requirements. It’s support of those skills that’s valuable, which means domain-specific knowledge.

Within this group of 175, 59 say that they believe that you could inject domain knowledge via RAG into LLMs, but the rest say that they believe SLMs would be better, or at least that an open-source LLM that would scale down to a small resource footprint and could be trained on company data would be best. However, 128 admit that it would be possible to use a pre-trained, RAG-augmented, form of generative AI for applications that involved having “typical” people generating questions.
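
For readers who haven’t seen the mechanism, the RAG idea those 59 describe is easy to sketch: retrieve relevant in-house material and prepend it to the prompt. The retrieval below is deliberately naive word-overlap scoring so the example stays self-contained; a real system would use embeddings and a vector store, and the documents are invented.

```python
# A toy illustration of retrieval-augmented generation (RAG): pick the most
# relevant in-house documents for a question and prepend them to the prompt.
# Scoring here is naive word overlap to keep the example self-contained; a real
# system would use embeddings and a vector store. Documents are invented.
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

documents = [
    "Policy DOC-114: field technicians may approve warranty swaps up to $500.",
    "Policy DOC-207: all customer credits over $1,000 require regional approval.",
    "Cafeteria menu for the week of March 3.",
]

question = "Who has to approve a $2,000 customer credit?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then go to whatever LLM or SLM is in use
```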

Edge IoT-related SLM missions are getting increased attention. The 78 who had already started work on edge AI said that you could generally classify IoT applications as being “process-bound” or “independent” in terms of sensor/effector behavior. The former applications linked IoT elements to real-world processes that defined what was being done, and the latter used IoT to gather information about the real world, from which the things going on could be deduced. They point out that the former class of applications don’t require as much AI analysis; an assembly line sets the mission and coordinates the elements. The latter has to accommodate a bunch of real-world things that may be individually self-determining, and so the state of the system depends on how individual behavior and mass stimuli might impact the collection of things overall.
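
A minimal sketch of the “independent” case follows, with synthetic sensor readings and a small scikit-learn model standing in for whatever edge AI is actually deployed; the data and thresholds are invented.

```python
# A sketch of the "independent" IoT pattern: sensors report readings and a small
# ML model (not an LLM) decides whether system state looks abnormal. The data is
# synthetic and IsolationForest stands in for whatever edge model is actually used.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_readings = rng.normal(loc=[70.0, 1.2], scale=[2.0, 0.1], size=(500, 2))  # temp, vibration
model = IsolationForest(contamination=0.01, random_state=7).fit(normal_readings)

new_batch = np.array([[70.5, 1.25], [92.0, 3.4]])   # second reading is suspect
for reading, flag in zip(new_batch, model.predict(new_batch)):   # 1 = normal, -1 = anomaly
    print(reading, "anomaly" if flag == -1 else "ok")
```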

How about AI budgets for that AI realist group? Comments ranged widely, from an 18% increase to a 300% increase over this year, but over 90% of the group said that AI spending would be the largest source of growth in their overall IT spending. The 78 AI edge companies said that AI would increase their IoT spending by over 50% once it ramped up fully, but expected only about a 40% increase in 2025.

The sense of all of this, IMHO, is that most of what we hear about AI isn’t really moving the ball. Could online generative AI improve search, help you write emails, and so forth? Sure, but where’s the profit in that for the provider of the service or the user? According to enterprises, their 2025 AI focus is going to be on small models in domain-specific missions.

I want to emphasize that we’re still early in this SLM game, though. Remember that only a quarter of the companies whose comments identified them as SLM prospects self-classified their activities as “SLM”; most were still talking about machine learning. I think that the pace of SLM in 2025 will depend in part on the vendors (like IBM, Dell, and HPE), and on the media, who need to start giving more space to the topic. If both sources of SLM interest ramp up, then even these budget estimates may prove conservative. This, I believe, is the real AI.

How Will the Election Impact Enterprise IT and Network Capex?
https://andoverintel.com/2024/11/12/how-will-the-election-impact-enterprise-it-and-network-capex/
Tue, 12 Nov 2024 13:49:12 +0000

What do enterprises think about 2025, following the election? It’s very early to get a complete answer to that, largely because most have been telling me that it would take time for the results to gel. Well, I’ve gotten 84 comments on 2025 from enterprises since the election, and while I’m sure I’ll be revisiting this down the line, the topic is important enough to justify a first look now.

Let me start by making a point, which is that the enterprise comments I get are dominantly from males, and dominantly from people earning in the upper 15% of the population. The mixture is out of my control because I don’t actively survey, but rather take comments offered spontaneously, and this is the mixture of the sources. It’s important here because this demographic isn’t representative of the political mix we have, so it’s possible this group has a different view than the population at large. That population isn’t making tech decisions, though, so the views of the commenting group may be a better reflection of what companies will do, or think they will do. This isn’t a political commentary, it’s a summary of views offered me. OK? Now let’s get to the comments.

First, 80 of the 84 comments expressed the belief that the political climate for enterprises in 2025 would be more favorable (the dissents came from banking and health care verticals). However, 71 said that they were still concerned about interest rates and/or tariffs, though it was interesting to note that only 22 said they believed that even an attempt would be made to cut taxes (which would raise borrowing pressure) or impose broad tariffs, and only 7 thought it would actually happen.

This, so far at least, hasn’t translated into any changes in forecasts of tech spending; none of the 84 had changed their estimates so far. This wasn’t due to a belief that policy changes wouldn’t impact capex, but to the fact that there were too many uncertainties about just what policy changes would actually become law, and what the impact would be. Wait and see, they say.

But how about current thinking? Of our 84, 60 said that the largest factor in determining 2025 capex was profit goals, and that an increase in corporate tax rates would pressure them to cut dividends or stock buybacks, and perhaps to raise prices. Any of that would tend to tighten capital budgets, so this group is cautiously expecting or hoping for some capex relief, and almost all of them said they didn’t expect any negative pressure on capex in 2025.

There is also a general consensus (77 out of 84) that there will be less pressure on enterprises created by regulators. This view is universal among energy companies, automotive, etc. Of the 77, 67 said that this was positive for them, and the remaining ten negative. The latter group was concentrated in areas benefitting from tighter pro-environment policy.

Do enterprises see any other areas where actual change might come along? Yes, and the place they’re watching the most is in network service policy, meaning to them net neutrality and the FCC. As many of you know, the FCC changes hands when the US presidency changes parties, and Republicans have generally not favored the strongest neutrality measures. However, there are already a half-dozen or so states that have enacted their own rules in this area, and so enterprises are not expecting immediate and drastic changes. Instead, they wonder if there might end up being a profound policy divide that might result in different service offerings in different areas of the US.

Regulation of AI and of the stock market itself is expected (by 71 of 84) to be a lighter touch. Of that group, 12 think this raises a risk of a boom-and-bust cycle that would create even more capex uncertainty. About half of those who favor lighter regulation think regulation of their own vertical would be lighter, and beneficial to IT/network capex “in the long run”, while the rest think that less regulation on stock market practices would ease profit pressure, as noted above.

This group of 84 enterprises, drawn from 221 US enterprises, could be described as “cautious” more than “cautiously optimistic”. There are simply a lot of issues to parse, because there are a lot of things that might be done, but not necessarily will be done. I think that this caution will likely prevail through at least the first and possibly second quarter of 2025. It’s possible that actions and conditions will reinforce caution in that period, but unlikely they’ll reduce it. All this means that tangible gains in IT spending are more likely to come in 2H25 than in the first half.

What do enterprises fear? The top issue for the 84 was tariffs. While 18 of the group think tariffs would help them, the rest think they would hurt. Retailers fear a rise in consumer prices. Manufacturers generally rely on offshore parts or finished goods that tariffs could make more expensive. Almost every sector believes that tariffs set by the US would likely result in retaliation, meaning a trade war that would hurt exports from the US. There was slight majority support (48 versus 36) for selective tariffs (on China, primarily), but that’s the extent of support for the idea.

The second fear was excessive tax reductions, including eliminating taxes on tips and on Social Security benefits. In all, 77 of the 84 said this risked a major problem with interest rates; even food service firms were mixed in their support of the idea.

War and terrorism was the third fear, held by only 33 of 84. Foreign policy questions are always very difficult for enterprises to address, so while about 40% of enterprises have concerns in this area, I doubt that these will impact capex decisions unless something actually happens in the classic “first 100 days” to raise the risk level.

The first 100 days of any new administration are critical, in terms of what they propose and how much of that they can actually accomplish. In addition, government policy impacts consumer/business demand as well as capex policies overall, and that means that broad economic impacts might influence enterprise revenues, which of course would then impact capex. I suspect that, while I’m likely to get further comments from enterprises through at least 1Q25, I may not get any clarity until closer to the end of the first half.

What Vendors Think About Tech and Hype
https://andoverintel.com/2024/11/07/what-vendors-think-about-tech-and-hype/
Thu, 07 Nov 2024 12:32:41 +0000

I’ve cited enterprise views, and those of operators as well, in past blogs, but I’ve gotten over 150 comments on technology from technology vendors so far this year, commenting on past blogs and just expressing their views. This group of people is different from the enterprises in that I know most of them, and in many cases have known them for years. I want to point that out because it creates a potential bias; since these are people familiar with me and my views, they may favor them more than a random sample would. Keep that in mind, but also note that in some cases, the vendor contacts have a view very different from mine.

The thing I’ve gotten the most comment on, and in fact that every one of these vendor contacts has commented on, is tech hype. I tend to see hype as a force that, if anything, is destructive to tech value in the long run. Enterprises also hold that view, perhaps not as strongly as I do, but vendor employees are more likely to see hype as neutral or even positive, for a couple of reasons that vary depending on the job of the person making the comment.

Salespeople and marketing people both tend to speak out in favor of hype, the former being the most favorable. According to sales types, hype is what gets them appointments in many cases. “If I don’t have anything new to say,” one said, “I’d never be able to get in to see someone.” These sales types admit that often, even usually, the hype doesn’t stimulate a sale of the specific thing that’s being hyped, but rather something that’s related.

The marketing people who favor hype say that without it, it would be difficult or impossible to get media coverage for anything. “You understand I’m sure that most publications use SEO [search engine optimization],” said one CMO. “We have to align with what they’re finding are the most-searched terms, and if we don’t they won’t pay any attention to press releases.”

Both sales and marketing people agree on one thing, which is that hype generates leads. Whether they’re aimed at getting “editorial” notice or appointments with prospects/customers, leads are critical to a successful sales strategy, particularly for vendors who don’t have dominant strategic influence in their target accounts, or perhaps even a presence.

Of 101 sales/marketing comments I got on hype, only 32 came from a source that was comfortable using the term “hype”; most preferred to think of it as “positioning” or a similar neutral term. They don’t even like to say that they “exaggerate”, but rather say that emphasizing positives and minimizing negatives has been a part of tech sales for decades, and maybe of sales for centuries.

One of my Wall Street friends noted that hype is to sales/marketing what “bubbles” are to Wall Street, and I think that’s true. Wall Street loves a bubble, and in fact the thing they really love is volatility. “You can’t make much money in a market with nothing moving,” he said, and I think that may be what the sales/marketing types are saying too. But if that’s true, then why isn’t it working now? As I’ve pointed out in past blogs, we’ve had three wave periods when IT spending growth was sharply above GDP growth, but none in this century.

I think this may be the on-ramp to something important. If I’m right saying that hype is destructive to building a new set of business justifications for tech, why has it worked for so long? Has something changed? It’s hard to answer that from either enterprise or vendor comments, so I’ll have to give it a try myself.

Might hype be a kind of trial balloon? I personally worked through the three waves of the past, and in all of them the drivers of the waves were promoted aggressively. The difference was that they stuck, but was this just random chance, was it an indication that there’s gold in some shovels-full of dirt but only worms in most? Or might it be the old cry-wolf story?

One enterprise CIO told me in 2020 “You can’t have a tech revolution every week. We need a three to five year useful life on [our IT] equipment, and a revolution is a paradigm shift that by definition would render a lot of gear obsolete.” Could hype waves be cresting too often, to the point where there’s really little choice but to ignore them? I also remember writing that if you added up the total amount of dollars people said they were responsible for spending on tech, when they filled out controlled-circulation qualifications, it exceeded the GDP of the US, and that there were over four times the number of influencers on those lists than there were actual technical professionals. Could things like SEO be tricked by “amateur clicks”, from people who have no real role in decision-making and are only looking for entertainment?

Yes to all these things, but that still leaves us with the question of why those waves of IT spending growth, having occurred in a regular symmetry for four decades, suddenly ceased. If all this was random, you’d expect to see a lack of regularity in the pattern of hype-to-reality conversion. If there’s a pattern that’s been broken, something has been added or removed. Maybe several “somethings”.

To me, the flex point in everything was roughly around the year 2000, and that may be one of the “somethings”. The Y2K craze was one of the first truly nonsensical hype waves. I remember getting calls from reporters looking for me to validate the claim that elevators would plummet and kill all aboard; same with airplanes, on the critical second of transition. It generated a lot of buzz, and that may have offered the first proof that tech hype could be a profitable thing.

The second thing was the dot-com bubble, and the resulting legislation (Sarbanes-Oxley or “SOX”). What SOX did was to effectively focus Wall Street on the current quarter’s numbers rather than letting Street analysts run wild and free in speculating on what was going to create the thing that would have been the fourth wave. You can’t leap tall strategic buildings if you’re forced to stare at your feet.

The third thing was the dilution in strategic influence we saw. The prior two waves of IT had decentralized IT, taking it from a giant data center to a distributed mesh of systems of multiple types, and exploded the number of vendors. Prior to that, IBM was the IT giant, the benevolent dictator of strategy for the major accounts, and then by osmosis to others. Even the PC was an IBM success, until others jumped on the space. The problem this created was a dilution of beneficial opportunity; why push a ten-year strategy when you’ll get only a small piece of the riches? Watch your feet instead.

Revolutions are hard, and hype postulating one is easy. To me, that’s the net of what vendors are telling me. Yes, we’d love another wave of IT spending, but we’ll settle for getting a meeting with the buyer, or getting a mention in a relevant trade rag or analyst report. I think that ultimately this stalemate in IT will end, but to be candid, I didn’t think it would go on this long, and obviously I was wrong.

The Two Telco Infrastructure Paradigms We’ll Have to Choose Between
https://andoverintel.com/2024/11/06/the-two-telco-infrastructure-paradigms-well-have-to-choose-between/
Wed, 06 Nov 2024 12:27:17 +0000

Most telcos and even telco vendors agree that there’s a need, an urgent need, for telcos to transform their infrastructure. The “legacy” way of building networks was built on paradigms that have long been challenged, but in order to displace it, telcos and vendors have to embrace some different model, and recognize different (and new) paradigms. That the latter requirement has to drive the former is clear, but what those paradigms are is far less clear. I’ve had 57 telco comments on the nature of their future network infrastructure, and two models have dominated, each postulating a different path forward.

Legacy telecom networks, so telcos themselves say, were built with too many boxes. The reason is simple; in the past, there was a sharp “bandwidth economy of scale” factor to be considered. A fat pipe cost a lot less per bit than a skinny one, so aggregating traffic was essential. You couldn’t do that by any means other than a nodal element, an electrical device, and this created layers of network technology that were more expensive to operate because 1) there were more elements, and 2) the technologies for each layer tended to be optimized for their local mission, making them different from layer to layer.

Evolving away from this approach obviously demands having something to evolve toward. Of our 57 telcos, 39 say that their goal is a combination of capacity and “flattening”, and it’s based on a simple truth, which is that a big part of the cost of a network trunk is the cost of the transmission medium, including running it. Fiber has taken over most of the transmission other than at the network edge, so their view is to run fiber to a logical point of edge concentration, and then push as many bits through it as possible. The edge points would then be linked to the core network through a maximum of one additional layer, and the core network would be built with as much meshing and capacity as possible. One operator said this approach would reduce the number of devices in their network by 40% and by doing that, reduce “process opex” related to network management by 63%.

The underlying goal here is to cut costs in general, but in particular to cut opex while at the same time helping reduce churn, which is why most of the 39 supporting operators are largely mobile players or targeting mobile infrastructure. Implicit in this goal is the presumption that their earnings, meaning their return on infrastructure, can’t be raised much at the top line, so costs have to be reduced to show Wall Street there’s progress being made.

The other 18 telcos have a bit of a different view. They believe that cost-cutting isn’t going to help in the long run, that it’s already been taken about as far as it can go. That means that improving their financials and stock price means raising the top line, revenue, and that means offering something new. To them, there’s still a need to improve capacity and reduce layers and box counts, but it’s specifically aligned toward “service injection”.

It’s difficult to define credible new services that don’t involve some form of specialization or personalization. It’s difficult to do that if traffic is too aggregated, which means that you need to have a place somewhat close to the edge, but not so close that you end up distributing too much and losing capex and opex economies of scale. This group thinks in terms of where to do service injection, which largely turns out to be “in each metro area”. In the US, this generally aligns with standard metropolitan statistical areas (SMSA) or the old telco concept of the local access and transport area (LATA). This strategy thus focuses on metro deployment.

If service injection is the primary goal, then you need to think about service creation. There are two options to consider; the service may be created by a third party like an OTT, or the service may be created by the operator (including the possibility that some features of the service are from a third party and some from the operator). Service orchestration, feature creation, and interconnect are then the specific requirements. This combines to suggest that in this potential paradigm for the future telco, you’d need metro infrastructure that could involve servers and LAN switches as well as routers.

It’s my view, given the direction that mobile standards have taken and given the intent (if not the realization) of NFV, that specialized appliances involved in things other than data-plane handling of traffic will be replaced by hosted software and servers. Given that, and given the likely dependence of metro (under the service-injection paradigm) on the same technologies, I suggest that metro-level concentration of almost everything other than basic data handling (routing, aggregation) is the logical goal. Put fairly dumb and cheap devices out in the edge network, bigger but still dumb and cheap devices in the core, and put everything else in the metro.

If we were to assume that a lot of mobile-infrastructure functionality were housed in the metro area, we might see actual value in Open RAN in general and the RAN Intelligent Controller (RIC) and its real-time and non-real-time applications. Pushing the RIC domain toward the tower means losing much of the economy of scale that modern hosting demands. Pushing it into the metro opens the door to RIC control over feature hosting, both within the Open RAN model and beyond it.

This, I think, requires some additional thinking about edge features, thinking that frankly should have been done a long time ago as a part of mobile standards. There, we find the origin of the notion of control/data-plane separation, but the separation isn’t fully framed because some data-plane elements have interfaces specialized to mobile. What terminates them? We need to be thinking about a more modular notion of the features of white-box and proprietary data devices, so that we can deploy standard interfaces between data- and control-plane elements. In doing so, we would be allowing for the control plane to be metro-hosted.

I would also argue that we should be looking at defining APIs as those specialized interfaces, rather than presumptive physical interfaces. It doesn’t make sense to be accentuating the positives of software and hosting while treating connections and exchanges among elements as though those elements were still appliances. And having architecture diagrams that represented a “hosted” system as boxes connected by links doesn’t help either. NFV went off the rails in part because a diagram like that was interpreted literally, which should never have been done for a mission that demanded cloud-centric thinking.
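
As a purely illustrative sketch of “APIs rather than physical interfaces,” the control-plane-to-data-plane exchange could be framed as a software contract like the one below. The method names and data shapes are my own, not drawn from any standard.

```python
# An illustrative software contract between a metro-hosted control plane and a
# data-plane element, in place of a presumptive physical interface. Method names
# and data shapes are invented, not drawn from any standard.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ForwardingEntry:
    prefix: str        # e.g. "10.1.0.0/16"
    next_hop: str      # e.g. "10.0.0.2"
    priority: int = 0

class DataPlaneAPI(ABC):
    """What the control plane expects any data-plane element to expose."""

    @abstractmethod
    def install_entry(self, entry: ForwardingEntry) -> None: ...

    @abstractmethod
    def remove_entry(self, prefix: str) -> None: ...

    @abstractmethod
    def port_status(self) -> dict[str, bool]: ...

class WhiteBoxSwitch(DataPlaneAPI):
    """A stub implementation; a real one would program the forwarding hardware."""
    def __init__(self) -> None:
        self.table: dict[str, ForwardingEntry] = {}

    def install_entry(self, entry: ForwardingEntry) -> None:
        self.table[entry.prefix] = entry

    def remove_entry(self, prefix: str) -> None:
        self.table.pop(prefix, None)

    def port_status(self) -> dict[str, bool]:
        return {"eth0": True, "eth1": True}   # stubbed status

switch = WhiteBoxSwitch()
switch.install_entry(ForwardingEntry("10.1.0.0/16", "10.0.0.2"))
print(switch.table, switch.port_status())
```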

What will turn out here? I think picking between these options is futile; the second is obviously the right one in the long term, but I’d argue that telcos haven’t made a single right choice in all the years I’ve worked with them. The first option will lead to change, but not enough to stave off commoditization of telecom services and, in some markets, subsidization. Not ideal, but we are all the sum of the decisions we make.

ROI, IBM, Torvalds, and the Future of AI
https://andoverintel.com/2024/10/31/roi-ibm-torvalds-and-the-future-of-ai/
Thu, 31 Oct 2024 11:21:54 +0000

There’s been a rethinking of the value of cloud computing, for sure. As I pointed out in my last blog, there’s also been a rethinking of the value of NaaS. In both cases, hype has been one of the factors. Linus Torvalds recently characterized the AI space as “90% marketing and 10% reality”, and my candidate as the enterprise AI reality leader, IBM, disappointed Wall Street in its last report. Could it be that Torvalds is right, and that AI reality leadership isn’t important if AI is going nowhere?

Let’s start with a story about AI ROI. A study commissioned by Appen, also reported in The Register, says that ROI on AI projects declined this year, from 56.7% to 47.3%, resulting in a decline in the rate of AI projects that actually get deployed. The Register story says Appen attributes this to a lack of high-quality training data, but the first of my references shows a figure that lists six bottleneck changes year over year for AI, half of which have gotten worse in 2024. Of course, Appen is a company that provides data services for AI, so there’s always the question of survey bias, which a look at enterprises who have commented to me may help address.

So far in 2H24, I’ve gotten AI comments from 131 enterprises. At the high level, their commentary matches a lot of what Appen said; the number of new AI project launches has steadily declined since the beginning of the year, but interestingly faster in the second half (so far) than in the first. On average, the decline versus 2023 looks to be roughly 14%, which is more than the Appen study found. For the second half, the rate seems to have accelerated to an annualized 17%.

IBM’s earnings report showed that their consulting revenue dipped and missed, as did their infrastructure segment. Software was up, and beat. IBM said that their genAI stuff was three billion dollars, up a billion quarter to quarter. This, to me, says that the IT vendor who leads in strategic influence saw pressure on consulting services, a class of revenue likely to be linked with new strategic projects, but at the same time saw a gain in their AI business. Can we reconcile those points?

First, of the 354 enterprises who offered commentary on IT in the second half of this year so far, 288 said that they were being more cautious overall on new projects in the second half. Two stand-out reasons were the Fed interest rate policy and the election, both representing macro-market conditions. None of the enterprises cited AI project ROI or other issues spontaneously in relation to this slowdown.

Second, of the 131 who commented on AI, 108 said that generative AI services linked to personal productivity (Microsoft 365, for example) were not expanding as quickly because the benefit was in doubt. In addition, 77 said that the pace of evolution of generative AI tools was fast enough to convince them to “wait a while” until the state of the art was more fully known. However, of the 14 who were doing self-hosted AI, none indicated they had cut back on the pace of the activity, 9 said the pace was expanding, and only 5 of the 22 companies who had started AI trials in missions beyond personal productivity said they were holding off.

Third, 95 of the enterprises were evaluating something other than traditional generative AI, something more like deep learning or small models or just ML. This group seemed totally unaffected by either project slowdowns or AI skepticism. Only 4 said they were slowing their initiatives, none were cutting back, but only 28 were “committed” or “in the process of deploying”. This number didn’t suggest a problem; the rate of advance was consistent over the year.

How about the ROI comments? First, almost every CIO I’ve ever talked with would kill for projects that could deliver an almost-50% ROI. Among the enterprises I talked with, the target ROI was rarely much more than 30%. Combine that with some of the other data in the study and you get what looks like a survey bias. A company that addresses training data shortcomings could be expected to encounter those who have them. The ROI bias is harder for me to explain, but it could be that the heady publicity generative AI has seen has set unrealistic expectations, that IT professionals weren’t involved in the projects and thus the ROI assessments weren’t accurate…take your pick.

Overall, I found that generative-AI services related to personal productivity, and “public” approaches to other LLM applications, were indeed suffering a little. To say the problem was lack of good training data, though, was an oversimplification according to enterprises. Overall lack of familiarity with AI was their number one problem, followed by lack of confidence in or support from AI partners. IBM’s success in the space, then, can be traced to providing a combination of education and good advice, and to having a progressively stronger open-model AI position.

What this means for AI overall can be summed up by Torvalds’ comment. Right now, there’s a lot more being spent on promoting AI than on actually realizing it, which is likely the problem with all of our hype waves. It’s troubling, then, but not unexpected to see caution developing among prospective users of AI who aren’t getting solid guidance from a source they trust. The question here isn’t whether AI is over-hyped (over-marketed, to paraphrase Torvalds) because we live in a world of constant hype. It’s whether the hype is covering a real value trend for AI.

Google turned in a nice quarter, which they attributed in part to AI. They may be right, but that’s also not the question. The real question is whether Google is seeing the effect of the AI hype, the kicking-the-tires and, yes, the questionable linkage between effect and cause, or that real value trend. Which kind of makes this question the same as the first one.

Which, in detail, is this: Is the value of AI focused on its ability to help a lot of people a little, or a few people a lot? Google and Microsoft and Meta and even OpenAI can profit from mass use of AI applications that actually yield little value but make it up in volume. IBM can profit from the use of AI by key decision-makers and professionals, of which there are very few, but whose production has significant value. Can we call this? Not yet, I think, but consider how often hyped trends pan out. It does make you wonder.

NaaS, Virtual Networks, and the Evolution of Services
https://andoverintel.com/2024/10/30/naas-virtual-networks-and-the-evolution-of-services/
Wed, 30 Oct 2024 11:40:25 +0000

The notion of “network-as-a-service” or NaaS has been one of those things that’s regularly gained and lost visibility over time, while at the same time exhibiting a lack of a precise definition or even value proposition. In early 2023, I blogged about enterprise views on NaaS, and in the interval since then I’ve seen some evolution of the concept, including a July post this year on the TMF/telco view of the topic. Where do enterprises stand now? I have some answers, collected since the July post, that I think make some interesting points.

Since the July post, I’ve gotten comments from 88 enterprises on NaaS, fewer than I had for my 2023 blog because of the limited time. One thing that was consistent with the earlier view was the universal conviction of enterprises that the top benefit NaaS would have to deliver was lower cost. About a third went further, saying that cost savings was the only NaaS element they valued.

Another related and equally critical point is that 57 of the 88 users actually rejected the notion of universal usage pricing as a NaaS feature, and they all cited two reasons. First, they believed it would lead to higher costs, period. Second, they believed that NaaS support within their company was likely to come from line organizations, because cloud computing as a service had, in their experience, appealed to line organizations. They saw the democratization of network service procurement going that same way, and the way of “citizen developers”, leading some to characterize NaaS as a “citizen networks” technology.

Of the 88 users, 72 said that they believed that network service provider offerings of NaaS wouldn’t currently meet their requirements, but only 15 had actually gone through a NaaS proposal from such a provider. Of the 72, 44 seemed to have a generally cynical view of any new service offering, believing that what they were encouraged to buy was almost certainly what operators believed would earn them a higher profit. All 15 of those who’d actually assessed a NaaS offering had that more cynical view.

Since I’d written on this topic before, I did get some other follow-on comments. One was that it wasn’t just service cost that mattered, but “private network TCO”, including the cost associated with user-owned network gear and user-provided network operations support. However, enterprises were clear that they didn’t see themselves having network operators lease them LAN or other on-premises gear (84 of 88 rejected that), and only 45 of the 88 were at all interested in having the operator provide them with service-termination gear only. Of that group, though, 32 said they believed that operators wouldn’t be able to offer them the kind of deal on the CPE that they’d need.

One thing that I found interesting is that security, named as a NaaS benefit by 43 of 112 enterprises in 2023, was cited as a benefit by only 17 of the 88. More enterprises (22) thought NaaS posed a greater security risk. This shift seems due to enterprise efforts to harmonize their disorderly, multi-layered, complex security environment. Adding a new player, it seems, is against the management mandate for security containment. In addition, they believed that the populist model of NaaS usage, which to IT/network professionals smacks of “citizen networks”, would lead to amateurs assessing security, and thus almost certainly increase risk.

In 2023, less than a third of enterprises had any notion of how or whether SD-WAN might improve security, to the point that they could even list an SD-WAN vendor with security features. Of the 88, interestingly, nearly half understood how SD-WAN or any form of virtual networking could aid security, and these were largely the enterprises who rejected the notion of a NaaS security benefit. This group believed in the concept of virtual networking overall, too, and that was a topic that in 2023 generated almost no user interest or even awareness.

My view is that service-driven and even product-driven thinking and planning have had a decisive impact on the concept of NaaS, not to mention excessive operator and equipment vendor opportunism. Another factor, one that reinforces the latter of these impact factors, is the rethinking of the cloud we’re now seeing. The cloud essentially launched the as-a-service concept, and the cloud was (like, let’s face it, most technologies these days) overhyped and oversold to the point where backlash was inevitable. Companies don’t always cite it (only 27 of the 88 mentioned it this year) but I think a lot of the falling from grace we see NaaS experiencing is due to the fact that “aaS” itself has fallen from grace.

On the other hand, enterprise comments on NaaS, and on how it might be subducted into a new vision for virtual networks, make me wonder. Are we struggling not only to name a virtualization paradigm in networking, but to define it? Right now, it appears that network virtualization is trapped in lockstep with compute virtualization and cloud computing. My July blog points out that cloud and multi-cloud are the focus of at least some of the new announcements in NaaS, which suggests that the linkage between virtualization in networks and that of computing is tightening. I think that’s a bad thing, not least because the enterprises I talk with are more skeptical about the value proposition of cloud and multi-cloud. I think that the premises side of virtual networks needs to be addressed, and neither the cloud providers nor the telcos can do that.

What should happen now is for virtual networking as a concept, a concept that makes things like SD-WAN and SASE merely partial implementations, to become the way forward. Virtualization created the cloud, enabling efficiency, scalability, and agility that something hard-rooted in physical infrastructure could not deliver. VPNs brought some of that to networks even before the cloud, and virtual networking forms an important (though usually unrecognized) piece of the virtual data center we’ve already committed to. Could it, should it, be our next transformation? I think “Yes!” Whether it will, though, is likely to depend on network vendors and how they present it.

Learning from Metaswitch
https://andoverintel.com/2024/10/29/learning-from-metaswitch/
Tue, 29 Oct 2024 11:37:56 +0000

It’s apparently time to say goodbye to Metaswitch, one of the most potentially transformational telecom vendors/startups of our time. The company was a provider of open-source telco software designed to implement the features of the IP Multimedia Subsystem (IMS), and a key part of the multi-vendor alliance that I assembled to provide a cloud-centric implementation of the then-emerging concept of Network Function Virtualization (NFV). Light Reading (which offered this view on Microsoft’s decision to drop the telco tools it got by acquiring Metaswitch) named Martin Taylor, a Metaswitch executive, as one of the most influential players in NFV. What happened here?

The two most obvious things that happened were the acquisition of Metaswitch by Microsoft, and the fact that the open-software telco infrastructure revolution it represented could be inspired by a startup but completed only by one of the telecom sector's incumbent vendors. I think these two obvious points are related.

The most significant factor in the evolution of telecommunications is the "IP dialtone" phenomenon. Today, every meaningful new data service is created "over the top" of the Internet, and it seems highly unlikely that will ever change. It's also true that this dialtone phenomenon has shifted the focus of pre-Internet (or at least pre-Internet-dominance) service implementation toward that same IP dialtone overlay model.

This combination of truths leads to the next issue, which is the displacement of traditional services by OTT and mobile apps. Telephony has inter-calling and cross-messaging standards that are critical, and whose implementation required some sort of standards set like mobile's IMS. The broad group of OTT apps we tend to call "social media" has rendered a lot of this obsolete. Instead we have opt-in social communities that have their own common client apps, and interoperation isn't a factor. Traditional calling is for many more an intrusion than something to be cultivated, and while texting does require some interoperability among services, it is falling out of favor as a regular means of interaction within a social group. In all, OTT applications, even simple email, are displacing more and more "telco-type" services, which limits the value of something like IMS. Revolution has overcome evolution, in terms of telecom infrastructure.

I also think there’s a healthy dose of reliance on hype here. 5G generated a lot of media attention, but nearly everything said about it turned out to be exaggeration at the least, and often simply wrong. Did Microsoft think they could use open-software technology and cloud computing to displace major incumbent vendors on big deals, or were they satisfied with lower-tier telcos?

Maybe they believed in NFV; Microsoft wasn't a mover in that group, but Metaswitch surely was. The goals of NFV, which involved using virtual functions to displace appliance dependence, were surely compatible with Microsoft's vision, and it should have been clear to everyone that cloud technology should have been the explicit foundation of NFV. But NFV never really addressed the foundational points in function hosting, and so it was not likely to drive any adoption of the model Microsoft might have hoped for.

If you look back to packet switching (X.25/X.75), the "Integrated Services Digital Network" (ISDN), and Asynchronous Transfer Mode (ATM), you can see the same sort of thing at work. In all these cases, the telecom space was responding to an emerging market opportunity in an evolutionary way. In all cases, they were unsuccessful, in the main because they "shot behind the duck" by being too late overall or by failing to anticipate the real basis of the opportunity. The market will always seek the largest profit and ROI possible, as fast as it can be delivered. One OTT visionary told me over 20 years ago that "first-mover advantage is the only advantage that counts." Telcos rely on standards, and standards never deliver any of that. Where, in any of the standards I've cited or in any of the mobile standards, do we see a real analysis of the business cases that will generate the benefits, the profits?

The notion of IP dialtone disconnected telecom services from retail services, forever. The profit-driven innovation of the OTT space has run with the notion that everything in the future should be over IP, and that IP is all that telecom should provide. This is not a perfect outcome, because the division between providing the dialtone and providing the service has split the industry into a space that's profitable and one that's increasingly unprofitable but absolutely essential. How that's resolved is yet to be known, but it must be resolved.

The fault here, though, lies with the telcos. Orderly evolution is the goal of telco standards processes, which is understandable at first but unreasonable if you look deeper. The cost of change, even revolutionary change, is only one ingredient in ROI, the “I” piece. If you have enough “R”, then revolution is actually a good idea. The fact is that the goal of telecom infrastructure has to be revolutionary, while at the same time not forcing unnecessary or unjustified displacement costs. IMS should have been what the name suggested, a way of making multimedia and mobile infrastructure converge, not a way to accommodate the thing that should have been recognized as the primary traffic driver—content—with minimal impact on mobile. Mobile needed to have a minimal impact on content.

Telcos I talk with blame regulators for this. They point out that telecommunications as we have it today evolved from either a public-utility model or from an entity that was actually part of the government (Postal, Telephone, and Telegraph or PTT). It's still hemmed in by antiquated regulations on one hand and bound on the other by politically volatile net neutrality policies.

I think regulation of telecom has been ineffective, in some cases for political reasons but in all cases because regulators lack the technical background they need, and the economic advice that has to underpin any successful regulation of a major market. But in the end it’s the telcos themselves that have to set the agenda for their business, to educate and influence policy-making. OTTs have done that, and telcos have been largely unsuccessful in setting their own agenda.

Metaswitch was the right technology for its time, which was in the 2010-2014 period when it could have been possible to do NFV and telco evolution in general in an appropriate way. It was simply overtaken by events, or maybe the lack of events.

Do Enterprises See Opportunities for Telco APIs in Business Services? https://andoverintel.com/2024/10/24/do-enterprises-see-opportunities-for-telco-apis-in-business-services/ Thu, 24 Oct 2024 11:32:03 +0000 https://andoverintel.com/?p=5952 OK, you know I said yesterday that telco APIs would fail to move the profit needle for both vendors and operators, unless they linked to features of transformational new services. There’s a Light Reading piece that calls the telco API picture as it stands into question, so what might those transformational services be? I’ve offered my own view of what would ignite a major change in network operator and vendor fortunes, but what do enterprises think? I have data from only a limited number, 93 to be exact, who commented on my prior blogs on telco APIs, and I’ll analyze the views here.

Let’s set the stage on a point. It’s very difficult to imagine how a telco API strategy could be launched to drive a consumer service initiative. Consumer services today are almost exclusively linked to social and other experience delivery over the Internet. The market opportunity builds, crests, and passes too quickly for any standards process to readily accommodate. It’s business-linked services that offer a potential opportunity for APIs, because the value proposition of a business service is less one of fad and more one of specific transformation benefits. So how enterprises see API value may be how API value really is, for all practical purposes.

The key point that comes from my 93 enterprises is that they do not see any immediate prospects of a major increase in their spending on telecom services. "Will you spend more in 2025?" "No more than we have to." "Are you looking at projects that could increase spending?" "No, we're looking at ways to cut it." "What could your operator do to increase your use of telecom services?" "Make them cheaper." There was a time a couple decades ago when these answers might have been different, and a very short period during the pandemic when that might also have been true, but not today. Business buyers don't see an immediate value to this whole API thing.

Inside this 93-enterprise group were 18 “futurist” planners, looking to drive business transformation. They are the only ones likely to have developed a view that goes beyond those “immediate prospects”. What are they thinking might be an aid to the transformation they hope for? Could it be offered as an API?

Well, maybe. Of the 18, 12 said they'd be very interested in having telcos offer APIs linked to a variety of public IoT sensors. Some of these enterprises are looking at industry-specific projects, often in transportation, and others recognize that a lot of their workers (40%, overall, as I've noted in past blogs) are out in the wide, wild world and need to be supported with real-world activity context, conditions, and intelligence. But note that what this group is really asking for is telco deployment of public IoT, and the API is only how they'd get the information.
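To make that concrete, here's a minimal sketch of what consuming a telco public-IoT service might look like from the enterprise side. The endpoint, parameters, and response fields are assumptions of mine for illustration only; no operator offers this interface today.

    # A minimal sketch (hypothetical endpoint and fields) of an enterprise app
    # pulling field context from a telco-hosted public-IoT API.
    import requests

    BASE_URL = "https://api.example-telco.net/public-iot/v1"  # hypothetical

    def nearby_readings(lat, lon, radius_m=500, api_key="YOUR_KEY"):
        """Ask the (hypothetical) sensor service for readings near a field worker."""
        resp = requests.get(
            f"{BASE_URL}/sensors/readings",
            params={"lat": lat, "lon": lon, "radius": radius_m, "type": "traffic"},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. a list of {sensor_id, timestamp, value} records

The enterprise never owns or touches a sensor here; it buys answers through the API, which is exactly the division of labor these planners described.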

The likely founder of the IoT concept, Kevin Ashton of MIT, said this in 1999: “If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling and whether they were fresh, or past their best.” I think this clearly shows that the origin of IoT was linked to a concept of public sensors, because gathering that scope of data without human involvement couldn’t happen any other way.

But how do you get public sensors? Obviously not by having people go out and install sensors here and there. Two things are needed. First, you need a trusted entity who will ensure that the sensors are not misused, meaning used for something illegal or something that compromises privacy. Second, you need an entity who can invest in the sensors.

Who do enterprises trust? For that, I've got 434 responses, and it's not government (trusted by 28%), credit/financial companies (trusted by 44%), tech companies (trusted by only 12%), or even utilities (trusted by 54%); it's telcos, who are trusted by 79% of enterprises. I think this shows that only utilities and telcos have the level of trust needed to build a chance of getting public IoT out there.

In terms of investment, you have only two paths to get public sensors deployed, because it's clear that the deployment would have to resemble the infrastructure-building a public utility would do. One is to launch a new IoT utility or leverage a current one, and the other is to take advantage of the utility past of the telco world. Utilities are characterized by low internal rates of return (IRR), which means that they can be capital-effective on projects with a low ROI, and can tolerate a high "first cost" while they build out and prepare for their service.
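A toy calculation (all figures assumed) illustrates the point: the same sensor build-out, with a heavy year-zero spend and a slow revenue ramp, clears a utility-like 6% hurdle rate but fails a 15% hurdle more typical of risk capital.

    # Toy NPV comparison; the cash flows and hurdle rates are illustrative assumptions.
    def npv(rate, cash_flows):
        """Net present value; cash_flows[0] is the year-0 build-out spend."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

    # Year 0: heavy first cost; years 1-10: modest, slowly growing service revenue.
    flows = [-100.0] + [10 + 2 * y for y in range(1, 11)]

    print(round(npv(0.06, flows), 1))   # about +47.5: positive at a utility-like 6% hurdle
    print(round(npv(0.15, flows), 1))   # about -5.8: negative at a 15% hurdle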

You can see from this why enterprise transformation visionaries would see telco IoT deployment as a positive, but if you dig in a bit it's even more obvious. You cannot simply deploy public sensors; they'd be attacked, hacked, and exploited in minutes. You need to deploy sensors that are "public" through their APIs and the services based on them, and that is something that best fits the telco model. Electrical utilities have similar infrastructure, but often have a very limited geographic scope compared to a telco. Telco networks already gather data, and telco services are regulated in a way similar to the way we'd need to see public IoT regulated.

But of 88 operators I chat with fairly regularly, none said their company was exploring public IoT deployment, and while visionaries in 17 of them said they believed it was a good idea, none of the 17 believed their companies would be willing to take the step any time soon.

What would convince them? Of that 17, the top answer (by 12 of the 17) was government subsidization. The question there would be “what government”, and what legal framework would be needed. In most cases, legislation would be needed, and in addition there might have to be regulatory changes adopted. For sure, there are policy issues that relate to how any public investment could be protected, and how the deployment would be shared. In addition, there’s the question of the terms (financial and usage policy) that would be imposed to use the sensor/data population.

Government subsidization is possible, but not likely in the near term, and less likely absent public pressure, which isn't going to come about without some highly publicized successes. Those successes, as it happens, were the second-most-often-cited convincing factor, with support from 8 of the 17 (some gave multiple answers). Some believed that early success could come about through local subsidies/investments, but most thought it would have to be brought about by players like Google, who they believed launched local government broadband initiatives by sponsoring FTTH in some under-served areas. They may be right.
