Andover Intel (https://andoverintel.com) | All the facts, Always True

Are We Starting to See the Real AI?
https://andoverintel.com/2024/12/18/are-we-starting-to-see-the-real-ai/ | Wed, 18 Dec 2024 12:56:30 +0000

There are plenty of reasons to get concerned, really concerned, about what’s going on in AI. There seems to be a growing disconnect between AI press releases and stories, and what enterprises are telling me. That disconnect seems linked, as is often the case, to the practice of “washing” announcements with mentions of new technology when the premise of the linkage is tenuous at best. We may be fueling an AI bubble, based not on what AI can really do but on how exciting you can make it sound. That’s bad, but there’s also encouraging news.

What is the core value of AI? The answer is that it can do stuff a person or group could do, but faster and perhaps better. “Artificial intelligence” is the key term; maybe our use of the acronym has made us forget that. We can enter something into a search engine and AI can digest the results, summarizing the points if you like. I get AI results all the time when I search, which is often. I sometimes find them helpful, sufficient for my purpose, but most of the time I need to look at the actual search results to get the details I need. Is this use of AI transformational? Not to me, nor to enterprises I talk with. Businesses don’t run on the Cliff Notes of economic activity; they run on the details. The most common use of AI feeds our appetite for instant answers, but that’s not going to be enough to justify massive investment.

Where do enterprises find AI helpful? In most cases, what they really like is something like the notion of the AI agent, which enterprises believe is an AI model trained to operate on a very specific set of information. While we hear about agents mostly in the context of autonomy, meaning AI acting alone, enterprises are generally not comfortable having AI act on information without human supervision. So the value here is specialization, and specialization is valuable because AI can quickly analyze something that a person or team would take longer to analyze; in applications where time is critical, that speed is a real advantage.

Enterprises say that they also like AI in business analytics missions, because AI can spot patterns that people simply take too long to find. Do enterprises believe that their staff is incapable of analyzing the same data and reaching the right conclusions? No, but they think AI could do it faster and perhaps (in its agent form, not its generative form) more reliably. Can I do my taxes? Sure. Could an AI tax agent do them better? Sure, but so could a CPA. AI agents offer speed and specialization.

CIOs are getting fairly strident in their rejection of the popular “copilot” form of AI, which they classify as a kind of attempt to popularize AI and dodge actual business-case scrutiny. One told me “We have thousands using AI to help them write emails or maybe short memos. Tell me how this does anything for the company. What’s driving it is that as-a-service AI is expensed, and most companies, like us, don’t evaluate line-department use of technology delivered that way. If we did, we’d probably crush it out.”

All of this seems validated by recent comments from Broadcom. The new-age chip giant says that there’s a sea change underway in the AI space, a shift away from GPUs to specialty chips designed for machine-learning applications that sure sound like agent applications to me. If true, it could be the first silicon signal that AI focus is shifting away from the hosted chatbot model toward something enterprises have said they favor.

OK, so what does this mean? I contend it means a lot of what’s claimed for AI doesn’t stand up to what those who’d have to invest in it would consider a realistic assessment. I read an article earlier this week that claimed that AI would demand fiber and that telcos were eager to see that. Will AI demand fiber? Surely it will inside clusters of AI-GPU servers, but in the network? Our first example of AI, the typical search-enhancement example, may be a nice way to get an answer to a simple question, but how much is that worth? Remember my comment about running a business on Cliff Notes? The same goes for running a part of one, or a network. Given that, how much traffic does AI generate outside its own cluster? Enterprises have told me from the first that AI creates no impact on network traffic outside the AI cluster and training connections.

What would make AI transformational? Data, and more specifically, real-time data. We run businesses and enhance productivity, buy and sell products, based on information very similar in timeliness to what we had when it was punched onto cards in Hollerith code. AI value demands we shift not so much how we process things as what things we process. Getting that real-time data to AI could increase network demand. AI could enable applications that, without it, would be difficult to create, and the running of those applications and/or review of the results could generate traffic, too. The business value of those applications could create benefits to justify investing in them, and in their traffic handling.

A lot of the value of AI, then, is linked to growth in IoT. It’s not so much in what gets media and PR attention—things like autonomous vehicles—as it is in simply exploiting real-time information about business processes, not simply recording the result of those processes. A sale, for example, might be a single record in the traditional handling of business results, but it might be multiple steps in real time. As real-time processing is integrated into the work itself, it generates more data and also has the potential to impact the productivity of the worker more directly.

The problem is that an IoT-real-time AI approach crosses a lot of technology lines, and few vendors are in a position to profit from the whole food chain. Given that, will any vendor see enough benefit to drive its own portion forward, especially when other essential elements may not be provided by the vendors responsible for other related technology spaces? Enterprises seem to think that progress here will come from a vendor willing to frame a kind of IoT/AI platform, and they name three candidates—Broadcom, HPE, and IBM.

I think the situation with AI is hopeful. Despite a major wave of hype on applications enterprises don’t think will make a business case, we’re seeing enterprises dig through the hype to find actual, valuable applications. We’re also seeing some companies talk about AI reality in a forum that really matters, Wall Street. Good things may be on tap for 2025.

Picking All the Broadband Apples
https://andoverintel.com/2024/12/17/picking-all-the-broadband-apples/ | Tue, 17 Dec 2024 12:35:47 +0000

What happens when all those proverbial “low apples” are picked? Technology markets, and in fact most markets, are made up of prospects that vary considerably in terms of ease of access, return on investment, and other economic factors. The combination of attributes means some are really attractive and should be targeted quickly, and others much less so, meaning that maybe they’d not be targeted at all. The concept of “universal service” is seen as protecting the higher apples of the telecom world, but perhaps it’s not going to work. In fact, it may not be working even now.

A couple decades ago, I was fiddling with my modeling tools and determined that there were some very simple factors that decided just what broadband prospects might represent low versus high apples. One was “demand density”, a measure of the economic strength of a specific area of geography, and it’s roughly the GDP per unit of area. The second was “access efficiency”, which was a measure of the cost of deployment per unit area. If we were to (OK, I know this is one of those “roughly” things) relate the two, the potential ROI of an area is roughly the demand density divided by the access efficiency.
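
Expressed as a rough formula (my shorthand for the relationship described above, not a formal model), with demand density D and access efficiency A:

```latex
\mathrm{ROI} \;\propto\; \frac{D}{A},
\qquad D = \frac{\mathrm{GDP}}{\mathrm{unit\ area}},
\qquad A = \frac{\mathrm{deployment\ cost}}{\mathrm{unit\ area}}
```

A high ratio marks a low-hanging apple; a low ratio marks a high one. The extremes of this ratio are what the next paragraph quantifies.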

If we look at large geographies, like countries or the old Bell Operating Company regions, I’ve found that this “roughly” is good enough for all practical purposes, but if you zoom in you find that within a given large geography, there are places where our magic ROI ratio comes out favorable, and others (perhaps even others nearby) where it indicates the apple there is very high indeed. I did some quick assessments of this, and found that in the US it’s easy to find places as big as a hundred square miles that had magic-ratio ROI potential five hundred times as good as other places of similar size less than a hundred miles away.

When broadband is looked at through a wide-angle lens, we can assume that regulatory policies would be effective in leveling the playing field across most of the pieces of a large geography, a form of cross-subsidization. But this is less effective in a “privatized” world where players have some flexibility in where they target. It’s also not always fair for people who elect to live in an area where magic-ROI ratios are certain to be low to be offered superior facilities at the expense of others. One operator told me five years ago that the majority of their high-apple locations were high-end dwellings, and that the public “would be upset” if they knew to whom the subsidies were being directed.

Universal service was critical in getting basic phone service to everyone, but I think it’s clear to everyone that it’s not going to be as easy to get superior broadband to everyone. There have been suggestions that the solution to this lies with “public broadband” services, offered by cities, counties, or states, but where the governments span a large area we’re back to the question of whether the subsidies are equitable, and where they’re focused on a small area they may exacerbate the problem by picking the middle levels of our apples and putting the high ones even more out of reach. Classic broadband, meaning media-based broadband, creates this problem.

Ever hear of “pass cost”? It’s the cost a broadband company has to pay just to get broadband close enough to allow customers who order to connect; the cost to “pass” their home. When I first moved into my current home, I had no broadband at all because no provider “passed” me. Today, two media-based broadband providers pass me, but there are parts of my community where only one provider passes homes, and in those areas the broadband speeds available to me are simply not available at all. In a few areas within 20 miles of my home, nothing that approaches competitive broadband is available at all. Rural broadband subsidies have helped, but I think it’s clear that even in my own area, “universal broadband” isn’t the same thing as “equivalent broadband.”

The situation with broadband is not unlike what’s been, and is being, faced by postal services worldwide, at various levels. Changes in consumer behavior and communications, including and especially broadband and the Internet, have had major impacts on postal revenues, and most proposals to transform the agencies would create a risk to those living in rural areas, because the daily delivery costs can’t be covered by the revenue customers there can generate.

But things there are changing quickly, in networking at least. What’s changing them is mobile broadband and FWA. The explosion in the use of smartphones combines with the fact that premium customers often travel to create demand for cell service over a broad area. Starting with 5G, the same technology can be used for FWA, and the fact that FWA carries the “last mile” without physical media significantly reduces the pass-cost problem. Satellite broadband is also growing in popularity, though it rarely provides much more than a shadow of the service bandwidth available from media-based or cellular broadband.

The problem here, in an era of global privatization, is the unevenness of demand density and access efficiency. Even in rural areas there are towns, and whether these communities decide to deploy their own broadband media or a specialty operator decides to offer service there, quality broadband may be practical in those pockets when it’s not practical over the larger surrounding area. Various universal service subsidy approaches make more sense in these pockets of opportunity, too, so programs may appear to be improving broadband populism when they may be doing little or nothing for those high-apple areas.

I grew up in a rural area, and I know many who still live there. They often complain about broadband quality, but they also complain about a lack of convenient shopping, easy access to airports, quality and accessibility of schools and colleges, and other things that relate to the economic efficiency of providing a given service where there are simply not enough consumers to make the service profitable. Would it be possible to level the broad service playing field across all geographies and demographics? I doubt it. Should we try to accommodate the differences better? Surely.

With respect to broadband, providing better FWA is an obvious strategy, but could local governments not also be encouraged to run broadband media to residential areas, and require ISPs to connect to it? A standard strategy for that media, and even a “kit” to provide local access via the media, could reduce or eliminate the variation in service quality that some say is a problem with municipal broadband. Another useful step might be to require websites to deliver leaner content to users with low bandwidth, and to stop launching video in windows without the user asking.

Another point here is the link between broadband policy and “copper retirement”. In many countries, including the US, there’s been a duel between the local access providers and regulators over the issue of retiring the copper twisted-pair plant. This plant cannot deliver competitive broadband, and the cost of sustaining it is a burden on telcos, enough that it may hamper deployment of suitable broadband facilities. The argument that retiring copper will put everyone at the mercy of “flaky Internet telephony” flies in the face of the fact that the majority of people already depend on smartphones for calls (if they call at all) and that many don’t even have a copper-based telephone installed in their home. This sure looks like a bad example of regulatory policy, which we’ll get to now.

The problem with broadband quality I’ve been talking about is greatest where overall (meaning national) demand density and access efficiency are low, which tends to mean countries with a large geography or with economic challenges. The US, Canada, and Australia fit into the first group, and most of the “third world” into the second. For that second group, there may be no good solution in the near term, other than to focus on wireless and the 5G standards. The problem with the first group has arguably been political—doing the right thing isn’t often doing the thing with the best political outcome.

I suspect that, even with wireless and FWA, broadband inequity is going to increase simply because it makes business sense to serve best those that pay most. There are no simple solutions to this, and things like municipal broadband may help some rural users but will exacerbate the problem for others who don’t live in an opportunity pocket. To the extent that the problem can be solved, it’s going to take a more thoughtful (and less political) approach to solve it.

Extreme Takes Aim at Competitors in an Era of Change
https://andoverintel.com/2024/12/12/extreme-takes-aim-at-competitors-in-an-era-of-change/ | Thu, 12 Dec 2024 12:44:13 +0000

Enterprises have long had a choice of vendors in the networking space, but for most the dominant players have been Cisco and Juniper. The former is undergoing a reorg, and the latter is being acquired (subject to approvals) by HPE. While I’m not hearing enterprises express worry about this (see my blog on the topic), there is surely something afoot in network equipment, and that means there are both risks and opportunities in play. For some network vendors, the opportunity is clear, and Extreme Networks is one. Its announcement of Platform ONE is surely a shot across the bow of the two networking giants.

Extreme has, for decades, battled for market share against giants like Cisco and Juniper. In this fight, as is often the case with a battle against giants, it’s been willing to deploy unconventional weapons. It introduced the notion of cloud-based management, and even a form of AI, well before either was prominent in the positioning of others, and launched a network digital twin in 2022 as a means of gaining a systemic understanding of complex network infrastructure and its relationships to users and services. All of this seemed to be aimed at creating differentiation in a market where pushing packets is pretty much a matter of ones and zeros. Management and security seemed to be a good place to differentiate, and they still seem that way today.

Platform ONE is, as the name suggests, a broad tool. Its target is both network management and security, which hits a lot of enterprise hot buttons. The dual mission may be helpful to Extreme because, while security is a major enterprise priority with secure funding, there’s continuing skepticism about “platform” tools in the security space. Security already has enough layers, say enterprises, and for most, Extreme isn’t a current layer provider. It wants to change that with a combination of cloud composability and AI.

AI is an almost-universal add-on to tools and systems these days, but it appears to me that Platform ONE is designed around AI rather than having AI plugged into it. In their presentation to analysts, Extreme AI Expert is the glowing core of the concept, in fact, the “One Ring” without the sinister Tolkien context. The principle is that networks are networks, and operationalizing and securing them as separate technologies or vendors is sub-optimal. Ops is best thought of as systemic, crossing management and security, LAN and WAN, virtual and physical. The more you know, the better it is.

The platform is then a cloud-hosted SaaS application, framed around an AI Expert core that in turn surrounds Extreme’s management/feature layer of tools, already hosted in the cloud. All Extreme product data is collected, and ecosystem partner data is likewise integrated. Other vendor equipment can be linked in via APIs, but it seems this would likely be the responsibility of channel partners or users to accomplish, at least at this point.

The user interface is both hierarchical and role-based, meaning that since Extreme’s sales conduit is largely based on resellers/integrators, the channel partner has a super-view of its customers, which then have their own set of role-based views within their own domain. Orchestration of the agent elements, governance and interface/data security, and platform service features are all integral to the Platform ONE core.

AI is integrated with the GUI features at all these roles/levels, and as noted it’s a core element of the platform and not a chatbot add-on. Three modes of AI operation are supported: conversational, interactive, and autonomous. In conversational mode, the AI element responds to user questions, much like a chatbot. This mode seems similar to a polled management framework; look when you want and see what you need. In interactive mode, the AI element will present conditions, like an event-driven system, and make suggestions, and the user can ask questions and implement recommendations. In autonomous mode, the AI element actually takes control and responds to conditions on its own.
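
To make the distinction concrete, here is a minimal sketch of the three modes as a gating policy on AI authority. This is purely illustrative Python of my own, with hypothetical names and calls; Extreme hasn’t published Platform ONE internals at this level:

```python
from enum import Enum

class Mode(Enum):
    CONVERSATIONAL = 1  # answers only when asked (polled model)
    INTERACTIVE = 2     # surfaces events and recommends (event-driven)
    AUTONOMOUS = 3      # acts on events within granted authority

def handle_event(mode, event, ai):
    """Hypothetical dispatcher: the mode, not the model, gates what AI may do."""
    if mode is Mode.CONVERSATIONAL:
        return None                      # nothing is pushed; the user must ask
    recommendation = ai.analyze(event)   # 'ai' is a stand-in analysis object
    if mode is Mode.INTERACTIVE:
        return {"event": event, "recommendation": recommendation}  # human decides
    return ai.execute(recommendation)    # autonomous: the AI acts directly
```

The design point the sketch illustrates is that the same analysis engine can sit behind all three modes; only the authority to act changes.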

In terms of roles, Extreme offers two distinct classes of “users” (as opposed to channel partners offering integrated services). One class is called “users” and the other “buyers”, which may be a bit confusing, but reflects a distinction between those who operate the network and those who procure it. Things like budget planning and license and contract management fall into the latter category, while the former focus is the traditional operations elements. The progression of Learn, Plan, Deliver, and Fix is explicit in the design, with both the user and buyer classes having involvement in each step.

The goal of Platform ONE, in a functional sense, seems tied to workflows as a binder between infrastructure elements, network services, and user experience. Extreme has a longstanding interest in and support for virtual networking, and while the use of a virtual network is not mandatory with Platform ONE, I think it would enhance its capabilities by providing an explicit connectivity framework that can integrate the network environment.

Speaking of integration, one major question Platform ONE raises is how it’s adopted. Obviously, Extreme users can expect to leap into it and achieve real benefits. What about taking on Cisco or Juniper, though? The big money in networking these days is in the data center. The more cloud-centric an organization is, the less chance there is that there’s substantial network incumbency and an associated equipment transformation to deal with at the time of sale, but of course the less network there is, the less money there is in winning the customer in the first place. I think that real Extreme success has to come from actually displacing Cisco or Juniper, not from having the user shift away from the data center.

Would Platform ONE be enough, functionally, to justify a rip-and-replace? Probably not, unless either the competing gear was already old or there was a major change in network requirements that would justify replacing gear. Would it be enough to justify taking some or all of any planned network refresh? Yes, for many users, provided that it could deliver value to users during what Extreme would surely hope would be a network transformation in Extreme’s direction. That means pulling non-Extreme gear into the tent or depending on that “major change” to justify the refresh. The former approach is obviously safer and, if successful, more profitable. It’s also more appropriate for a channel-dependent vendor like Extreme.

Channel partners want leads more than anything else. They want their vendors to do the heavy lifting at the marketing and strategy level, generating excitement and prospects to call on. Even partners who have the skill and visibility to build their own leads will usually rely on vendors to pave the way in positioning. Extreme’s Platform ONE has the potential to generate excitement and leads, but the question of how it’s introduced into an account that’s not already using Extreme gear is important if Extreme is to take on, for example, Juniper as its customers navigate the HPE acquisition. Extreme CEO Ed Meyercord told CRN, “Business doesn’t continue as normal when there’s a fundamental change like that. … And then that becomes a great opportunity to consider an alternative.” Like Extreme, obviously, and Platform ONE, with an appropriate Juniper bridge, could be just that. We’ll have to wait to see whether that appropriate Juniper bridge, and the one for Cisco, get built. If they are, then both Juniper/HPE and Cisco might have to start looking over their shoulders.

Technical Debt, Data Debt, and AI
https://andoverintel.com/2024/12/11/technical-debt-data-debt-and-ai/ | Wed, 11 Dec 2024 12:43:55 +0000

Most of us see debt as something to be avoided, so “technical debt” minimization has been a priority for development teams. Essentially, the term means an erosion in software quality caused by taking the expedient path, not taking enough time, or simple carelessness and errors. There’s also a growing interest in what many consider a subset of technical debt, which is “data debt”. What is it? The accumulation of bad practices and data errors that contaminate business management decisions. Enterprises see a rise in both technical and data debt, and see AI as a risk in both areas, but also tie both up in what they see as a larger problem.

When enterprises comment to me about “debt”, they tend to focus on what they see as a kind of IT populism, the increased direct use of IT-facilitating tools by line departments. CIOs and other IT professionals understand this; there’s pressure on line management to improve their operations, and often the time required to engage internal IT is seen as an issue. They also realize that as-a-service trends have made applications more accessible to line organizations’ staff. Yes, there is surely some IT parochialism involved here, but it does seem clear that the impact of what’s often called “citizen development” on technical and data debt has been under-appreciated.

Line organizations are parochial in their own thinking, by design. Companies are organized by role in the business, and you can’t have everyone running out to do others’ jobs without coordination. The same thing is true of development, of IT in any form. In the past, I’ve noted that things like cloud computing, particularly SaaS, and low-/no-code are most likely to be successful if there is some IT coordination, at least in the initial design, and particularly as a means of ensuring that organizations with interlocking activity don’t end up building their own silos.

Almost half of enterprises say that they impose little or no policy constraints on citizen developers, and almost two-thirds of this group say it’s not necessary. Of the just-over-half who do set constraints, the most common is a restriction on “chaining” applications, meaning having citizen developers write applications that run on the output of other such applications. However, it’s interesting to note that just over a quarter of enterprises who set that constraint don’t constrain having citizen applications create databases or database records, and of course that can easily lead to the chaining they theoretically forbid. It’s also, I think, the source of a lot of serious data-debt risk.
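
Here is a minimal sketch of why the database loophole recreates chaining. The code and names are my own hypothetical illustration, not any enterprise’s actual governance tooling:

```python
# Hypothetical: each citizen app declares the tables it reads and writes.
apps = {
    "sales_rollup": {"reads": {"crm_export"},     "writes": {"regional_sales"}},
    "bonus_model":  {"reads": {"regional_sales"}, "writes": {"bonus_targets"}},
}

def chained_pairs(apps):
    """Yield (producer, consumer) pairs where one citizen app feeds another."""
    for producer, p in apps.items():
        for consumer, c in apps.items():
            # A nonempty intersection of writes and reads is an implicit chain.
            if producer != consumer and p["writes"] & c["reads"]:
                yield producer, consumer

print(list(chained_pairs(apps)))  # [('sales_rollup', 'bonus_model')]
```

If bonus_model reads a table that sales_rollup writes, the two are chained through the database even though neither ever calls the other directly, which is exactly the pattern the “no chaining” rule was meant to forbid.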

Almost all enterprises say that it’s possible that citizen developers might create data that is redundant, contradictory, or flat incorrect. Often, some say, the duplicated data is in a different format from an IT source that the citizen developer didn’t know about. Five enterprises who set rigid control say that they had a major problem with data integrity that arose from use of low-/no-code tools, and now require an audit on such applications.

One area where data debt seems most likely relates to Office applications, spreadsheets and databases. Not only are these often passed around among workers, they are sometimes imported into major and even core applications. Spreadsheets were the big data-debt problem for four of the five enterprises who found it necessary to clamp down on citizen developer practices, but all four admit that they really have no way of knowing whether workers with Excel skills are conforming to policy. Half admit they suspect they are not.

How about AI? Only a few (less than one in ten) enterprises have considered the impact of AI on data debt, but all of them expressed some common concerns. The majority of them, while not necessarily spreadsheet-specific, are often related to spreadsheets.

One of the common value propositions for AI copilot technology involves assisting in the creation or analysis of spreadsheets, and this format is regularly used within line organizations for “casual” analysis of data. I’ve seen, in client companies, issues with what we’d now call “data debt” in Excel spreadsheets and Microsoft Access databases almost from the first, well before AI. But AI might well make things worse.

AI copilot technology used in development organizations is regularly characterized by enterprises as a “junior programmer”. They believe that the results of AI code generation require collaborative code review to prevent the classic technical debt problem. Surely the same sort of problem could happen with Office tools, and I’ve seen AI-assisted Word documents and AI research results that were truly awful in terms of quality. Could we expect our line worker, who obviously feels a need for assistance in the use of Excel, to understand the results and audit data quality? Obviously, no.

Enterprises almost never offer AI-linked comments on data debt at this point (which I think means any purported research on the topic has major risks), but remember that one of the long-standing complaints enterprises have offered on AI results is the difficulty associated with tracing the steps taken to get those results. Any given AI result could be a “hallucination”, and work to allow AI to retain context through complex analysis means chaining those results. Can we trust them? If there’s even a five percent error/hallucination rate in each AI analysis, the chance that four chained analyses all come out clean drops to about 81 percent; nearly one chain in five will carry an error somewhere. And, would we know if that happened?
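
The arithmetic behind that claim, assuming independent errors at rate p in each of n chained analyses:

```latex
P(\text{all } n \text{ analyses correct}) = (1-p)^n,
\qquad (1-0.05)^4 \approx 0.815
```

Raise p to 15 percent and four chained analyses come out clean barely half the time (0.85^4 ≈ 0.52), which is why chaining multiplies whatever per-step hallucination risk exists.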

Data debt is a real risk, perhaps a greater risk than technical/code debt because of the “garbage in, garbage out” truth of IT. While there are surely benefits to AI, and to broader “citizen developer” participation, there doesn’t seem to be much doubt that both can contribute to data debt, and that would work against the business case for AI. You can’t improve company operations when the core data you use is being eroded in quality by the very mechanisms you’re relying on to make things better.

What Enterprises Really Think About AI Agents
https://andoverintel.com/2024/12/10/what-enterprises-really-think-about-ai-agents/ | Tue, 10 Dec 2024 12:49:27 +0000

What do AI and wireless have in common? Yeah, this is sort of a trick question, but most people probably respond with “hype”, and that’s true. What’s also true is that both have a kind of generational succession to them. In wireless, most remember 4G, know 5G is current, and 6G is next. AI succession isn’t as numerologic; we had AI and machine learning, then we had generative AI, and then RAG (retrieval-augmented generation), and now many would say we have autonomous agents. Like 6G, the AI agent concept is a bit fuzzy at the moment, but like 6G it has potential, maybe even enough to salvage the technology as a whole.

The fuzz in AI agency is related to the fuzz in AI overall. The majority of AI that’s deployed is not the generative AI we always hear about; it’s in the form of smaller and simpler models. An AI agent is, at heart, an AI element tasked with something specific. That doesn’t necessarily mean that it’s generative AI based on LLMs. In my own view, and in the view of a slight majority of enterprises I hear from, it’s not even necessarily fully autonomous. It’s task-oriented AI, and the task might as easily be to recommend as to actually act.

Another source of fuzz is that the pace of generation is too high; it’s fruit flies, not elephants. Enterprises have been telling me from the first that they don’t have AI expertise on staff, in no small part because the AI experts want to be hired by AI companies who will offer more. How, they now ask, are they to hit a target that’s moving as fast as AI is? Thus, anyone who surveys enterprises looking for expert opinion on the evolution of AI is, by enterprises’ own accounts, wasting their time. So I won’t (exactly) do that.

Enterprises, by well over 4:1 margins, think that any AI transformation is going to be self-hosted. They say it won’t involve “generative” AI by about 2:1, and only slightly fewer say that they believe AI will be used and valuable in contained missions, so multiple AI models are likely needed to really transform their business. They’ve also said that they like AI giving them advice, but are wary of it running things on its own. This AI fear is greatest where AI scope is greatest, so contained AI is more readily accepted as acting on its own. From this, you can see that at least implicitly, enterprises view AI agents as small specialists, acting like a pilot on a ship leaving harbor. They can recommend, but they’re not the captain.

You can see this clearly in network operations missions for AI. Less than ten percent of enterprises say they’d like to turn netops over to AI completely, though a third think that might change in five years. On the other hand, no enterprises reject the notion of an AI agent giving them advice, and only a quarter say that they couldn’t accept autonomous reactions in netops in specialized areas of their network. Traffic and WiFi capacity management? Bring it on! Fault response? Recommend, but let me make the final choice.

What does this have to do with autonomous agents? You can probably deduce it at this point. Think about a company, a typical enterprise that runs on human rather than artificial intelligence. These companies are largely run by specialists in multiple areas, whose decisions fit within a broader framework set at the top. An AI analog of this would be a bunch of AI models/agents, each with an area of specialization, coordinated perhaps by a top-level super-agent. I think, based on what I hear from enterprises, that this approach is what they are most comfortable with, and also that they might see each of the specialty areas being managed by its own hierarchy of agents, just like a real organization would be, and then that some of the lower-level agents might be allowed to make decisions on their own, as long as they fit into policy constraints set from above.
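
A minimal sketch of that structure, in purely illustrative Python of my own (hypothetical names and policies, not any vendor’s framework):

```python
# Hypothetical sketch of a policy-gated agent hierarchy.
class Agent:
    def __init__(self, name, autonomous=False, policy=None):
        self.name = name
        self.autonomous = autonomous                    # may it act unassisted?
        self.policy = policy or (lambda action: True)   # constraint set from above

    def propose(self, action):
        """Act only if autonomous AND within policy; otherwise escalate."""
        if self.autonomous and self.policy(action):
            return f"{self.name}: executing {action}"
        return f"{self.name}: recommending {action} for review"

# A coordinator grants narrow autonomy to one specialist, none to another.
wifi_agent = Agent("wifi-capacity", autonomous=True,
                   policy=lambda a: a.startswith("adjust-channel"))
fault_agent = Agent("fault-response", autonomous=False)

print(wifi_agent.propose("adjust-channel-6"))    # executes, within policy
print(fault_agent.propose("reroute-core-link"))  # recommends only
```

The point of the sketch is the shape: autonomy is a grant, policy is a constraint handed down from the coordinator, and anything outside both becomes a recommendation rather than an action.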

To me, the important points here are, first, that autonomy has to be granted based on policy; second, that AI should be viewed as working within an agent hierarchy just like humans; and third, that some AI has the role of analyzing the results of other AI agents. I don’t hear these points spontaneously from enterprises, but they sure seem to be validated by inference. Why don’t I ask? Because my whole approach to getting user information on tech is to rely completely on spontaneous comments; asking questions inevitably leads the subjects and creates a bias in responses.

The reason I think these points are important is that I also don’t see much recognition of them among AI providers. If I’m interpreting enterprise comments correctly, and so far they’re not contacting me to say otherwise, then there’s an opportunity here to frame AI the right way. It’s also an indication that a lot of the current AI activity, startup and otherwise, may be missing the sweet spot in the market.

Some companies (like Cohere, as reported by Reuters) say they’re focusing on customized models rather than following the industry push to build one huge model that behaves like a superhuman expert. IBM, as I’ve noted in other blogs, seems to be accepting a specialist model approach too, and most small-language-model providers are at least offering specialist support, but in many cases they’re still positioning for less demanding generative AI missions. A part of this, I believe, is due to the fact that the majority of highly successful enterprise AI applications are chatbots, and these mimic generative-LLM tools but operate in a more specialized way. Another piece is due to the fact that various techniques to specialize cloud LLM tools (like RAG) are getting most of the ink in the space, no doubt in part because the providers are lobbying the media.

Is the human-organization-bound model of AI agents, a model that allows for both human and agent policy supervision, a missing link in enterprise transformation via AI? I think it may well be. We have, after all, made the study of optimum organizational structure a business-school classic, so shouldn’t we expect to do the same for optimizing AI agents? Have we perhaps gotten so obsessed with the future of AI, the concept of sentience, that we’re mortgaging the present? Maybe the proverbial hype cycle is partly to blame, or maybe the get-rich-quick mindset we see in the VC space.

If we did 6G right we could revolutionize not only networking but our lives. If we did AI right we could do the same, perhaps even more easily. If we did both right, the impact would be massive. So the problem isn’t dreaming big so much as not parsing big dreams into achievable chunks. We can do better.

Why Do Technologies Get Hyped and then Dissed?
https://andoverintel.com/2024/12/05/why-do-technologies-get-hyped-and-then-dissed/ | Thu, 05 Dec 2024 12:25:38 +0000

Is it “everything old is new again”, or “everything new is a bad idea”? Whichever it is, it sure seems like a lot of hot new tech ideas are getting a harder second look these days. The cloud, AI, and microservices are all tech revolutions that are getting some serious questions asked about their value. It’s as if tradition is returning to IT, which of course isn’t what media/analysts traditionally like to see. How many articles about the data center of old winning out can we tolerate? But tradition gave us punched cards, analog modems, and mobile phones the size of a Manhattan phone book (which, of course, is another tradition we’ve weaned ourselves away from). Innovation is what created the modern world, so how can these seemingly innovative things now be questioned? Enterprises have some idea.

“It’s all bandwagoning,” one long-time CIO told me. “You get something new and interesting, and it gets noticed. Those who are using it get noticed, and being noticed is better than being anonymous. So you use it.” About two-thirds of all enterprise contacts I’ve had over the last three decades have said something like this. But however true it is, it doesn’t explain why those new things turn out to be called “bad”. Some new and interesting things have been solid winners from the first. Some had a slow start and then gradually proved themselves. Why do we see three more recent revolutions all questioned?

Many enterprises still see this as a “bandwagoning” issue. The problem with status-driven adoption is that it isn’t driven by thorough, thoughtful, assessment. As a result, there’s a higher probability that the new thing will be applied the wrong way, or to something that it doesn’t fit at all. About half of all enterprises think this is a root cause of our harder-look phenomena.

A root cause, but not the only root cause. Almost the same percentage think that hype is a root cause. “The cloud will take over, or maybe AI will take over, and in any event you need to be planning for microservices,” another CIO remarked. “It’s hard to resist at least an exploration of something that everyone is reading about, particularly your CEO. And everyone knows that exploratory projects have inertia, and they can blunder into adoption just because so much time and effort was sunk into evaluation.”

It’s interesting that the CEO who’s held the position the longest of all those I’ve interacted with took a different slant. “There’s really one driver of IT change, and that’s the need to empower people, decisions, better. That means making IT reliable and accessible, more of each every year. And every year, after more low-apple strategies have been followed, it gets harder.”

The fusion of these four comments, I think, reveals what’s as close to the truth as we’re likely to get. What we’re seeing is a failure of cost-benefit analysis, created in large part because we’ve taken good ideas too far, not because we’ve created bad ideas. All three of these technologies are good, probably great, and possibly even revolutionary, but none of them is the “universal constant”, the hypothetical thing that, multiplied by your answer, yields the correct answer. No matter how great your new electric drill is, it won’t be valuable in turning on your TV. But…we’ve tried that with them all.

“The dumbest thing we did,” one enterprise CFO told me, “was to try to do business analytics in the cloud. When the software was in the cloud and the business data was in the data center, the transfer costs and latency killed us. When we put the data in the cloud too, the cost of updating it and our compliance people ganged up to kill us.” Of course any thorough assessment of that application would have revealed the problem, but everyone was caught up in the cloud euphoria.

“We thought that microservices would give us scalability and availability, but the application was so complex it never ran correctly, and in any case just processing the one most common transaction took five times as long as it used to,” said a development director. “We fixed it by tossing out the whole idea of services and going to a monolithic model, but that didn’t give us what we wanted either.” The director admitted that they eventually realized that some “servicication” was good, but that too much was a disaster, and they’d never realized that would be the case until they’d had two failures.

“We paid nearly a hundred thousand dollars for an AI analysis of our business, and it produced five recommendations that we presented to a department head meeting,” a CEO told me. “Two of them would have involved doing something everyone agreed would completely fail, for very obvious reasons. One was illegal, and the two remaining ones couldn’t generate enough benefit over three years to justify the cost of the analysis and ongoing AI use.” This company, like others, found out that it’s often impossible to know how much you’ll get from an AI project without first doing the project.

What do enterprises think would fix the problem? Some offer joking (maybe half-joking) notions like “Put all your visionaries in a sealed room for the first year after something new comes along!” Almost all CFOs, and well over half of CIOs, say that what’s needed is to “stamp out generalization and justification by reference”. With regard to both the cloud and AI, both groups agree that there should have been a requirement to frame a business case in enough detail to permit a test of it before going forward. “We accepted that everything was moving to the cloud, or that the cloud was always cheaper. Well a lot of the everything is moving back because the cloud turned out to be usually more expensive.”

The problem with things like cloud computing and AI, say these CxOs, is that companies don’t quantify benefits, which they say takes three steps. First, you have to identify the sources of the costs and offsetting benefits, and how they’re to be realized by the technology. Second, you have to define a test or trial to validate both the cost and benefit assumptions associated with the first step. Finally, you have to run the test/trial and gather the data to get approval. You don’t accept published claims.

Microservices are in one way different and in another not so much. None of the enterprises said that CFO validation of a microservice decision was needed unless implementing the decision was associated with project spending that had to be reviewed. Even when that happened, it was rare for the CFO review to actually look at the “value” of microservices versus alternatives. CIOs say essentially the same thing; microservices were recommended by development teams, and no more likely to be questioned than the choice of a programming language. They admit that they wonder whether other decisions, like containerization or virtualization, should also have been given a harder look.

In summary, enterprises admit they’ve not given new technologies the thorough validation that they should have. I think that the comment on “bandwagoning” captures a big part of the problem, but I also think that we’ve come to rely too much on information sources that are more beholden to vendors/sellers than to buyers, and also on summaries or snippets of information rather than extensive documentation. Enterprises are mixed on this one, though; most executives are looking for Cliff Notes. Maybe they’re finding them too often already.

Are Enterprises Worried About Network Vendor Health?
https://andoverintel.com/2024/12/04/are-enterprises-worried-about-network-vendor-health/ | Wed, 04 Dec 2024 12:34:11 +0000

Are enterprises worried about their network equipment vendors’ health? Do they have something to worry about? What’s causing whatever is happening, and how might it change networking? All good questions, and ones we’ll try to answer here.

There have been multiple stories about how things like the HPE/Juniper deal and the Cisco restructuring are making “enterprises nervous”, and in the second half of 2024, over 90% of enterprises I chatted with told me that they were “aware of” or “watching” these trends. Only 18% were “concerned” or “alarmed”, and even this group pointed out that any business change one of their vendors made was always a potential issue. “I’d like all my vendor partners to have a stable model,” one CIO said, “but I also want them to evolve to serve market conditions and my own requirements.” Change, then, can be bad, but it can also be essential.

While the article I cited above notes an analyst view that some enterprises are putting “things on hold”, I didn’t get that comment from anyone, at least not referencing vendor changes. There have been plenty of uncertainties in the market right now and over the last couple of months, so there are reasons beyond vendor shifts to be slow-rolling some projects. I think we have to chalk up some of the “concern” to coincidental factors.

Not all of it, though. I think that if you look at the HPE/Juniper and Cisco moves, you can’t ignore the fact that they both portend shifts in the business model of the network vendors enterprises depend on. The questions this raises, say enterprises themselves, boil down to two big ones. First, are the changes being driven by things that could help or hurt enterprises? What’s best for sellers isn’t always best for buyers. Second, are any of the shifts driven by something that could indicate a fundamental shift in technology or practices that enterprises should be planning for? Both questions are complicated.

M&A and restructuring in the vendor space is almost always associated with a significant change in market conditions, something that enterprise buyers aren’t always (or even often) considering proactively. My view, based on my own interactions with enterprises over many decades, is that the change in play here is the shift in network investment off the critical path of empowerment. Thirty years ago, there was plenty of information available to applications that could improve productivity and decision quality, but a shortage of means of connecting that to workers. That’s not true today. For the last three years, the number of enterprise complaints I’ve heard that network delivery of application/data access was limiting them has been zero. For the last twenty, it’s been consistently less than fifteen percent, and declining steadily.

If enterprises’ need for connectivity is being met, then they don’t need to spend more every year to accommodate change; there isn’t any. That means that network budgets are under pressure to do more for less, since every company (vendor or buyer) is looking to manage costs to improve profit. This is demonstrated by the steady decline in the growth of spending on new network technology over the last thirty years, and the explosive interest in new (however hypothetical) drivers of network change, like AI.

The impact of this on vendors is obvious. They’ll attempt to engage better on projects that actually do represent incremental opportunity, and that takes two forms. One is getting aligned with the influencers of those projects, and the other is shifting technology development toward the stuff the projects involve. HPE/Juniper is an example of the first, and Cisco’s reorg of the second.

Almost from the first, my interactions with enterprises have shown that data center changes drive network changes, and that vendors with the most strategic influence in the data center have the greatest chance of controlling opportunities in network equipment. Juniper, driven first and foremost as a provider of routers to network operators, thus falls short of having a “bully pulpit” in the data center. Years ago, in fact, I criticized their lack of aggressive positioning in data center and application evolution (see “A Strategy of Absence or an Absence of Strategy” in Network World, perhaps still available online). HPE, next to IBM, has the most strategic influence in the data center, so they’re in a better position to leverage Juniper, provided they actually influence Juniper’s marketing/sales strategy. The HPE/Juniper deal, in my view at least, reinforces the switching area that enterprises spend most on.

With Cisco, the situation is more complex, at one level simplifying the organization by merging security and networking to recognize the potential interdependence, and at another introducing things like the cloud and AI as megatrends that cut across both. Organizing to focus on security doesn’t, in itself, commit to shifting development emphasis away from switching, but it doesn’t reinforce it either. The Cisco comments in the article I cited don’t do that either. “Networking continues to be incredibly important to us and we’ll continue to support that space as well,” Herren added. “But it’s looking for efficiencies as we look across the company really in every way so that we can take those resources and allocate them into the fastest growing spaces.”

The common point here, between the HPE/Juniper deal and Cisco’s reorg, seems to be AI, but it’s really what enterprises said they wanted, which is support for change. Networking is stagnating in its traditional mission, but there are things that pose a risk to tradition, to the networking status quo, boring as it may be. AI and the cloud are two at the head of the list of those risks.

Enterprises don’t fear the changes as much as some might believe, perhaps thinking that they’d be asked to do more with less. Well over 90% of enterprises I chat with are sure that if major changes come about because of the cloud or AI, they’ll be projects that link new benefits to new spending, and those projects can kick in some network bucks too. Many welcome them, because it’s easy to see how new projects could help add interest to a CV, relieve boredom, etc.

Should they be more worried, though? Perhaps. It’s clear that the basic network equipment they depend on is commoditizing, and that could force some players to pull back from the lowest-margin pieces of the business. That in turn could mean enterprises have to validate new vendors, perhaps multiple vendors, and none of the ones I’ve chatted with are eager to do that. I think they’re safe for now, but out three to five years? Change is in the wind.

Comcast, Cable, and Content
https://andoverintel.com/2024/11/21/comcast-cable-and-content/ | Thu, 21 Nov 2024 12:40:50 +0000

Video, they say, killed the radio star. Content, they say, is king. But now we hear that cable companies are in trouble because of streaming, and Comcast is spinning off a bunch of its TV properties. I’ve gotten 44 comments on video streaming and content from operators since July first, eight of which were from cable companies, and here’s what seems to be going on, for Comcast, for cable, and for content.

First, linear RF video is a dead end, according to all but two operators (both cable companies). Even those two agree that in the long run, it’s almost certain that this original form of cable TV won’t withstand streaming competition, but most of the operators and all the cable players think that it will still be a service element for “three to five years.” The question isn’t so much whether that form of video could still be sold, as whether it would be smart to offer it when future video customers prefer a streaming service. One cable expert told me that they see broadband-only adds outstripping cable TV new customers by 2:1 even now, and expect that to go to 3:1 by the end of 2025. That’s important because this expert believes that when new adds favor broadband only by 5:1, they’d likely start shifting to a pure broadband model for service.

Does this mean cable companies would get out of the video delivery business completely? Over half the non-cable operators say “Yes!” but none of the cable types believe that. Is that wishful thinking, though? I think that depends on whether the cable company owns content provider assets.

A cable company or other operator has to pay for content if they’re going to charge for it. If they own content, they’re paying themselves for it, and if they do not, whether they want to sell it depends on two things. First, the spread between retail and wholesale content price. A small spread means little profit. Second, whether they can strike favorable content deals if they’re competing with the owner of the content as a retail provider.

Many content providers are starting to offer streaming on their own, as well as offering their stuff to aggregators like cable companies (in linear form) and streaming providers like YouTube TV or Hulu. That’s because they’d love to have the same retail/wholesale spread added to their own profit. Over time, 35 of the 44 operators (5 of 8 cable companies) believe that the big content producers will all offer their own streaming services, and in the very long term (over 5 years) will start to raise their prices for syndication of their content, even to the point where they’d not syndicate at any price.

Both my opening statements are true; video has killed pretty much everything as far as syndication of content goes, and video delivery dominates infrastructure planning requirements. What’s also true is that the Internet and the diversity of video content it offers has made it more and more difficult to promote individual content providers’ works unless they’re already well-known. That’s why Comcast is likely keeping NBC while selling off other lesser channels. The lid on pricing means a lid on revenues, which have to come from ad sponsorship, license fees, direct payments for viewing, etc.

Cable companies’ problems here are related to all of this. They have, in CATV cable, outside plant that has a relatively low pass cost (getting service out to the curb where new customers can be connected) but whose historical value has been in delivering linear RF, and which is a shared medium. With linear RF, you push out scheduled content en masse. You can add broadband to this (as cable has done) but there’s a limit to the download and upload speeds, generally much lower than would be supported by fiber.

To the extent that a given cable span is shared, the data capacity is also shared, and this means that widespread streaming, which is not synchronized by a schedule and comes from many sources, can stress out even a high theoretical data rate (DOCSIS 4.0 promises 10 Gbps maximum download speed). This means, say the cable companies who commented, that there’s pressure to shift to a future model where no cable spectrum is spent on linear channel delivery, and less sharing of spans. All that means more investment. Given the risk of revenue pressure, that’s not a good thing. Given the slowing pace of adds for linear RF service, that’s particularly not good, because net adds are the net of new customers and losses to streaming services, and it’s likely in most areas that it’s the latter that cable companies worry about.
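
Rough numbers show why sharing matters. Assuming (my illustration, not operator data) a DOCSIS 4.0 service group of 200 homes sharing the full 10 Gbps:

```latex
\text{per-home capacity} = \frac{10\ \text{Gbps}}{200\ \text{homes}} = 50\ \text{Mbps}
```

That 50 Mbps is fine until everyone streams at once; a couple of concurrent 4K streams at roughly 25 Mbps each would consume a home’s share, which is why unsynchronized streaming stresses shared plant in a way scheduled linear RF never did.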

The video story isn’t all bad news to cable companies, though. Their current big fear is the FWA trend, because FWA has a pass cost as low, or lower, than cable. However, FWA is also a bandwidth-sharing strategy that could be pressured by a lot of streaming. In some areas, with some implementations, it has a greater risk of video congestion than CATV. Thus, a decision by cable companies to shift to a total-streaming model might be within the limits of CATV but put FWA at a disadvantage.

The long-term impact of streaming, though, is likely to be highly disruptive. Here are some other views expressed to me.

First, over two-thirds of overall comments and cable-company comments agree that Comcast will eventually spin off the remainder of its content. Wall Street agrees, I hear, because unlocking shareholder value is key and Comcast will need to offset infrastructure spending with something.

Second, linear RF will die off first in major, competitive, markets; likely first in Verizon’s territory. Where fiber is practical, telcos will deploy it and that will induce cable companies to compete in broadband, which in turn will mean that they’ll want the full RF spectrum of CATV for data.

Third, cable companies will start to specialize infrastructure for businesses, offering things like VPN or SD-WAN over business cable once data-only CATV offerings come along, or perhaps even a little before. They'll also likely lead telcos in offering things like edge computing.

In all, the operators expect that a lot of market changes are coming, and that some of them will likely be visible toward the end of 2025.

The Role of and Prospects for the Network Digital Twin https://andoverintel.com/2024/11/20/the-role-of-and-prospects-for-the-network-digital-twin/ Wed, 20 Nov 2024 12:31:43 +0000 https://andoverintel.com/?p=5972 Nokia is one of the companies that has addressed the “digital twin” concept explicitly, so when its Bell Labs group does a piece on digital twins in telecom, it's worth a look. Operators have mentioned digital twins in my chats, too (53 of 88), so I have some comparative data to relate as well.

A “network digital twin” is a virtual representation of a physical network or a portion of one. Operators tell me they would likely deploy the technology in what we might call “administrative zones” corresponding to management/ownership scope, vendor, or other subdivisions of gear, real partitions that should be reflected in the virtual world too. The goal of the network digital twin is to provide “contextual system” views of a network, views that mirror the real relationships among elements and thus facilitate understanding, and even simulating, network-wide conditions.
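
To make the idea concrete, here's a minimal sketch of what a zone-scoped network twin might look like: a graph of nodes and links, mirroring one administrative zone, that can answer a what-if question about a failure. The model and every name in it are my own illustrative assumptions, not any vendor's API:

```python
# Minimal sketch of a network digital twin: a graph mirroring one
# administrative zone, able to simulate a link failure. All names and
# structure here are illustrative assumptions, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class Link:
    a: str
    b: str
    up: bool = True

@dataclass
class NetworkTwin:
    zone: str                               # administrative zone mirrored
    nodes: set = field(default_factory=set)
    links: list = field(default_factory=list)

    def add_link(self, a: str, b: str) -> None:
        self.nodes.update((a, b))
        self.links.append(Link(a, b))

    def reachable_from(self, start: str) -> set:
        """Walk only links marked 'up' -- the twin's view of connectivity."""
        seen, stack = {start}, [start]
        while stack:
            n = stack.pop()
            for l in self.links:
                if not l.up:
                    continue
                if l.a == n and l.b not in seen:
                    seen.add(l.b)
                    stack.append(l.b)
                elif l.b == n and l.a not in seen:
                    seen.add(l.a)
                    stack.append(l.a)
        return seen

    def simulate_failure(self, a: str, b: str) -> set:
        """What-if analysis: fail one link, report nodes cut off from 'core'."""
        for l in self.links:
            if {l.a, l.b} == {a, b}:
                l.up = False
        return self.nodes - self.reachable_from("core")

twin = NetworkTwin(zone="metro-east")
twin.add_link("core", "agg1")
twin.add_link("agg1", "edge1")
twin.add_link("agg1", "edge2")
twin.add_link("core", "edge2")
print(twin.simulate_failure("agg1", "edge1"))   # -> {'edge1'}
```

The same structure could be queried about capacity, performance, or configuration; the point is that the questions run against the model rather than against the live network.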

The Bell Labs piece identifies 35 use cases for network digital twins, and my 53 operator chats on the topic validate 22 of them, but I think the whole list in the article is credible. The ones that operators missed from the list relate largely to the sales/design phase, and I think their omission is due mostly to the fact that my chats were focused on networks-in-being and operations missions.

I was impressed by the article’s recounting of an actual operator project analysis, which identified a potential savings of around 25% in opex. I only got impact comments from 29 operators, but their estimate of opex impacts ran from 20% to a third, so the project analysis fits in the predicted range overall. This combination of information is important in telco evolution, because cost management is critical in preserving profits when revenue growth is challenging because of limits in ARPU and TAM. Up to now, operators have done targeted opex projects that have been aimed largely at limiting interaction between craft people and customer problems, and that area has been (say operators) largely tapped out in opportunities for savings.

Network digital twins offer what’s essentially a horizontal look at the old FCAPS (fault, configuration, accounting, performance, and security management) story, a way to utilize a virtual model of a network at every step of the game. While the operators I’ve chatted with are still dominantly focused on response to conditions, it’s feasible to build a digital twin of a network that’s still in the planning stage, and to use that to refine capacity plans and simulate various operational states, then actually use the twin to commission real networks and offer real services.
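
The operational side of that horizontal view is keeping the twin current. Here's a hypothetical pattern, with fetch_status() standing in for whatever real management API the gear exposes (SNMP, NETCONF, a REST interface); the function, fields, and polling interval are all assumptions for illustration:

```python
# Hypothetical pattern for keeping a twin current: poll each device's
# management API and mirror the returned status into twin state.
# fetch_status() is a stand-in for a real call (SNMP, NETCONF, REST);
# the fields it returns are invented for illustration.
import time

def fetch_status(device: str) -> dict:
    """Placeholder for a real management-API call."""
    return {"device": device, "oper_up": True, "cpu_pct": 12}

def sync_twin(twin_state: dict, devices: list) -> None:
    """One refresh cycle: the twin's state converges on the network's."""
    for d in devices:
        twin_state[d] = fetch_status(d)

twin_state = {}
for _ in range(3):                  # a few polling cycles for illustration
    sync_twin(twin_state, ["core", "agg1", "edge1"])
    time.sleep(1)                   # assumed interval; event feeds work too
print(twin_state["edge1"])
```

A planning-stage twin is simply the same model with no live feed behind it, which is why the commissioning use the operators described falls out naturally.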

One challenge for the whole network digital twin concept is shared by digital-twin technology in general: there's no user confidence in the availability of a platform, tool, language, or whatever for use in building a twin. Only 9 operators indicated they were actively involved with the technology, and only 2 said they had it operational. None in either group said their digital twin plans or implementations spanned their entire network, and all were currently focused (not surprisingly) on business services and business-specific infrastructure like carrier Ethernet access or MPLS VPNs.

In the group of 53 operators who commented on digital twins, 37 said they believed the benefits of narrow applications like these “would be limited,” yet the seven who were looking at digital twins without having made one operational all said they had narrow missions in mind. A big part of that seems related to a view that it's easier to put a twin in place if the network isn't yet built: five of the seven made specific comments indicating their targets were greenfield infrastructure.

Among enterprises, things aren't any better for the network digital twin concept. Of the 414 enterprises I chatted with on operations issues, only 22 mentioned network digital twins, and none said they were deploying or even trialing them. I've had a few talk with me about blogs I've done on the topic, and that group points out that they don't hear about the concept from vendors, and that they don't spend a lot of time considering technologies nobody seems to be trying to sell them. One reason the few who did comment on my blogs reached out was to see whether I did digital-twin integration or had a reference supplier. That, to me, shows that enterprises don't know how to go about network digital twin implementation.

Interestingly, over 50 enterprises are looking at digital twin technology for IoT-oriented industrial/manufacturing and other missions. This group includes two who chatted with me about my blogs, and neither mentioned their other digital twin missions in relation to their network digital twin interest. I asked about this, and they said their IoT digital-twin work was being driven by the vendor/integrator working with them on the application, and in both cases this was a specialty partner, not one promoting digital twin technology broadly.

In a way, the situation with digital twins overall is similar to the situation with the programming languages used to build applications: there are a lot of applications and languages out there, and the former tend to be promoted by vertical-market integrators/vendors while the latter are promoted by nobody in particular. Microsoft's Azure Digital Twins and AWS IoT TwinMaker are the tools known to enterprises, and IBM's initiatives are known to most of IBM's strategic accounts, but all those who have evaluated any of these have done so in the context of IoT.

IBM has a nice tutorial on the IoT applications of digital twins, and you could easily apply the story presented to network digital twins, replacing IoT elements with network management APIs. The IBM comment that “Digital twins are benefitting from the fact that there exists an abundance of machine-generated data, which is a luxury that you don’t have in other data science disciplines” could surely be applied to network operations; in fact there’s probably more network status data and more control interfaces available than there are for most industrial/manufacturing processes. Why aren’t network vendors, particularly enterprise vendors, jumping on this?

Some are, sort of. Juniper announced a Marvis update to extend “AI-Native Digital Twin” capability. Cisco uses digital twin technology in its software update/distribution system. Extreme Networks may be the most explicit vendor promoter of digital twin technology, offering an actual implementation tool/process; their promotion of the concept goes back to 2022, and they have the most developed online material on it. But so far, none of this from Extreme or Juniper has worked its way into my chats with enterprises, and Nokia's approach is the one operators cite. Extreme sells heavily through channel partners, and I wonder whether the digital twin story simply requires more education than most channel players can deliver.

I believe digital twins are essential for IoT, for network operations, and for the application of AI to real-world systems of any sort. I think vendors are coming to understand that, but the education process is only now starting to reach enterprises and even network operators. While network digital twins have almost-universal potential in both groups, the fastest-growing area of interest is still enterprise IoT applications. Perhaps edge computing services aimed at IoT could translate into operator interest in a generalized IoT edge model, which might then spread into network digital twin applications more broadly. It's interesting to note that the Digital Twin Consortium membership roster (as of today) lists none of the major network vendors, including those I've cited in this blog. If network digital twins are to gain broader support, getting key vendors on that list, and getting network missions worked on, may be essential.

The Security Outlook for 2025, According to Enterprises https://andoverintel.com/2024/11/14/the-security-outlook-for-2025-according-to-enterprises/ Thu, 14 Nov 2024 12:27:33 +0000 https://andoverintel.com/?p=5969 Is network spending now really nothing more than security spending? Obviously not in a total-spending sense, but probably in a capex-growth sense. Of 354 enterprises who commented to me on their 2025 network budgets, 287 said that security capex would grow at an average of 6%, while overall network spending was expected to grow by only 4%. But that doesn't mean enterprises are happy with network security technology; of the 287, only 44 said they believed they were getting value proportional to what they spent on security. What does this all mean?

Let's start with the question of why security value is questioned. Of the 243 who questioned the value of their proposed 2025 security spend increase, 183 said the top reason was that vendors were overcharging, and the remaining 60 said that security threats were increasing. Thus, three times as many enterprises believed they were being overcharged for their security gains as believed that threats were overwhelming the products.

This issue, then, goes back to something I noted in a blog in January 2024 (when “211 said that they believed they overspent on security”) and again in a blog in October: “CIOs tell me they believe that security is starting to look like a pit they're expected to toss money into, and that whatever they spend is never enough to satisfy vendors.” A year ago I summarized the threat issues enterprises cited. What I'm hearing is that somehow all the security changes don't keep up, and that, as the 2023 blog suggested, we need a change in how we think about security.

So why haven’t we gotten it? Here, I think, enterprises are rightfully blaming vendors, who (not surprisingly) tend to think first about their own revenue. If you add a layer to current offerings to address new risks, you can charge for it. If you propose a radical change, you open your accounts to new security offerings from others. You can see how this turns out.

Enterprises do have an idea of what the basis for network security should be: in 2023, 65% said that the network should detect and block all unauthorized access to applications, and that figure rose to 82% in 2024. But if you go to those asking for the capability and explain that they'd have to set and maintain the “authorized” access policies themselves, and that the strategy would miss security problems created when authorized users are themselves infected, they start to question their own thinking. I don't have solid data on this, but it appears that once these two points are considered, the block-the-unauthorized strategy loses more than half its support.
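
What blocking-the-unauthorized implies in practice is a default-deny policy table that somebody has to maintain. Here's a minimal sketch; every role and application name in it is invented for illustration:

```python
# Minimal default-deny access check: a connection is allowed only if an
# explicit (role, application) entry exists. The policy table is exactly
# the artifact enterprises would have to maintain themselves; all the
# entries here are invented for illustration.
AUTHORIZED = {
    ("finance", "erp"),
    ("finance", "payroll"),
    ("engineering", "git"),
}

def allow(role: str, application: str) -> bool:
    """Default deny: anything not explicitly authorized is blocked."""
    return (role, application) in AUTHORIZED

print(allow("finance", "erp"))    # True
print(allow("finance", "git"))    # False: not on the list
# The catch noted above: if an authorized user's client is compromised,
# this check still passes -- the policy sees the role, not the infection.
```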

I got 2024 security views from 72 sources that I believe are highly qualified. This group identified what they say are distinctly separate risk areas, each likely demanding at least some individual security tool attention. Let's look at them.

The first was the risk of the hijacked or infected client. Most security tools can really only authenticate users, so they're bypassed if a user can be impersonated or contaminated. The problem is that you have to identify either the human user or the client device itself, and both are difficult to pin down. Most companies don't use biometrics for user identification, and absent that you're back to user IDs and passwords, which many people write down or share. Users often access their applications from home or on the road, so there's no reliable 1:1 relationship between person and device, and no easy way to ensure that users who get a new device won't find themselves cut off.

The second risk was accidental API exposure. One of the new challenges of security is created by the componentization of applications, which requires network connections to “internal” APIs. If these APIs are exposed beyond their intended connectivity, they can let a hacker bypass traditional access-level security. Two-thirds of enterprises admit they aren't sure exactly which internal APIs might be addressable on their networks, or even from the Internet, and almost as many admit to having no plan for using address space management to control accessibility.
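
Here's a sketch of the kind of inventory check that implies, run over a hypothetical service list; the services, addresses, and scopes are all invented:

```python
# Sketch of an internal-API exposure check: given a service inventory
# (hypothetical data), flag endpoints bound more widely than intended.
import ipaddress

# Each entry: (service, bind address, intended scope). Invented examples.
INVENTORY = [
    ("orders-internal", "10.1.2.15", "internal"),
    ("inventory-api",   "0.0.0.0",   "internal"),   # binds every interface
    ("public-gateway",  "0.0.0.0",   "public"),
]

def exposed(bind_addr: str, scope: str) -> bool:
    """An 'internal' API bound outside private address space is suspect."""
    if scope != "internal":
        return False
    if bind_addr == "0.0.0.0":
        return True                      # listening on all interfaces
    return not ipaddress.ip_address(bind_addr).is_private

for svc, addr, scope in INVENTORY:
    if exposed(addr, scope):
        print(f"WARNING: {svc} ({addr}) may be reachable beyond its scope")
```

Real tooling would pull the inventory from orchestration or scanning rather than a hand-built list, but the decision logic, and the address space management the text refers to, is the same.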

The third risk was platform software vulnerabilities and exploits. Here, “platform software” means the software that’s used to sustain the operating environment of applications, including operating systems, middleware, management tools, and even security tools. Hacker gangs interested in getting to the largest possible number of targets are likely to look for these, and it’s very difficult to identify an attack on a platform tool until the exploit becomes known. Then you have to worry about how to remedy the problem for all the platform users.

What magical tool fixes all these things? The expert enterprises agree that none does. What's needed? Of the 72 sources, 61 said the same thing: more attention to security practices than to security tools. According to this sub-group, the big problem with security over-spend is that management often sees tools as an alternative to proper staffing, and many security vendors will make this point in a sales pitch. The problem is that it doesn't work.

This group also points out that having the network permit only authorized connections is itself a matter of human effort and resources. Of the 11 who say they actually enforce connection security, all say it requires “a lot” of effort to maintain the authorized list and to update how authorized users are recognized as roles and devices change. It's worth it, though: this group of 11 reports less than a quarter as many security issues per enterprise as the full 354 who offered security comments. And while the group of 11 said they would increase security personnel costs by an incremental 3%, the full 354 postulated no 2025 security staffing cost increase beyond the enterprise-wide expected payroll increases.

A final interesting point: among the group of 72, 18 said they didn't need any specialized security tools or products at all to meet their companies' goals; tuning development and deployment practices alone was enough, and virus scanning and firewalls sufficed. I think that's likely due to specialized situations at these companies, but I also think it's an indication that we really do need either a new approach to security or a shift of focus from layering tools to improving security practices overall.
