Behind the buzzword: What is edge computing?

Want to know what edge computing is, but you're sick of wading through all the hype? Here's what you need to know: no fluff, only facts.

Dec 17, 2025 • 4 Minute Read


In this edition of Behind the Buzzword, we cover an industry buzzword that has been around for a while, yet is often tricky to understand: edge computing. Here's what you need to know about it, explained in a four-minute read.

Edge computing: Bringing things closer to the user

Back in the ancient 1990s, we had Blockbuster Video, Space Jam, and the Game Boy Color (wait, am I old?), and also a very slow version of the internet. You'd use your computer to access a website or video, but it was often hosted somewhere far away, perhaps in a different country. That meant huge latency, and if you were playing a hot new multiplayer game like Doom or StarCraft, the dreaded lag.

It was the lag that killed me, I swear! Not my lack of gaming skills.

That's when Content Delivery Networks, or CDNs, were invented. The idea was that rather than offering an online service from a single location and having everyone in the world try to access it from there, copies of that service would be hosted in multiple locations around the world, so people could access it from wherever was closest.

This philosophy of bringing computing closer to the source of the data came to be called edge computing. Edge computing happens at or near the edge of the network, between the user's device and the cloud, so there's less distance for data to travel, thereby reducing latency. Think of it like having a grocery store close to your house instead of having to go out to the farm to get your fruits and vegetables.
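To make the "closest location wins" idea concrete, here's a minimal Python sketch with made-up edge node coordinates. It picks the nearest node for a user and estimates how much round-trip time raw distance alone costs, using the rough rule that light in fiber covers about 200 km per millisecond (real latency also depends on routing, congestion, and processing):

```python
import math

# Hypothetical edge locations (latitude, longitude); a real CDN has
# hundreds of these "points of presence" around the world.
EDGE_NODES = {
    "sydney": (-33.87, 151.21),
    "singapore": (1.35, 103.82),
    "frankfurt": (50.11, 8.68),
    "virginia": (38.95, -77.45),  # stand-in for a faraway origin server
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def estimated_rtt_ms(km):
    """Rough round trip: light in fiber covers ~200 km per millisecond.
    Ignores routing hops, congestion, and server processing time."""
    return 2 * km / 200

user = (-37.81, 144.96)  # a user in Melbourne

nearest = min(EDGE_NODES, key=lambda name: distance_km(user, EDGE_NODES[name]))
print(f"Nearest edge node: {nearest}")
for name, location in EDGE_NODES.items():
    print(f"{name:>10}: ~{estimated_rtt_ms(distance_km(user, location)):.0f} ms RTT")
```

Run it and the faraway node comes back an order of magnitude slower than the nearby one, which is exactly the gap CDNs were built to close.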

Edge nodes: The backbone of edge computing

Edge computing happens at edge nodes, the devices and computing resources that sit between that local point and the distant data center or cloud. These could be any device or computing resource, like:

  • An Internet-of-Things (IoT) device

  • Laptops, phones, and other devices doing local processing

  • Edge servers

  • Virtual machines

  • Microservices

In fact, edge computing is closely linked with IoT (that said, they're not the same thing). Imagine if your sensor, camera, smart car, or heart monitor had to do all its processing in a data center on a different continent. It would be impossibly slow! Instead, for latency-sensitive applications, that processing often happens much closer to the device.
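To picture what processing at the edge looks like in practice, here's a minimal sketch with stand-in functions for the sensor hardware and the cloud upload (both invented for the example): the edge node makes its decisions locally and sends only alerts and compact summaries upstream, rather than shipping every raw reading to a distant data center.

```python
import random
import statistics

# A hypothetical edge node: it samples a local temperature sensor,
# decides locally, and only contacts the cloud when it has to.

def read_sensor():
    """Stand-in for real hardware: returns a temperature in °C."""
    return random.gauss(21.0, 1.0)

def send_to_cloud(event):
    """Stand-in for a real upload (e.g. over MQTT or HTTPS)."""
    print(f"cloud <- {event}")

ALERT_THRESHOLD_C = 23.0
window = []

for _ in range(600):  # e.g. one reading per second for ten minutes
    reading = read_sensor()
    window.append(reading)

    # Latency-sensitive decision made locally, with no round trip:
    if reading > ALERT_THRESHOLD_C:
        send_to_cloud({"type": "alert", "value": round(reading, 2)})

    # Once a minute, upload one compact summary instead of 60 raw readings:
    if len(window) == 60:
        send_to_cloud({"type": "summary",
                       "mean": round(statistics.mean(window), 2),
                       "max": round(max(window), 2)})
        window.clear()
```

Beyond latency, notice the bandwidth win: sixty raw readings a minute collapse into a single summary message.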

The benefits of edge computing

  • Speed and efficiency: Less physical distance means lower latency and less network congestion.

  • Fewer service interruptions: Edge computing often keeps working with limited connectivity, and even during cloud and internet service outages.

  • Lower costs: Lower bandwidth, infrastructure, and data storage requirements.

  • Security: Processing sensitive data locally can reduce cloud exposure and attack surfaces, and can help you comply with regional and industry regulations.

The downsides of edge computing

  • Complexity: Distributing a system across many different components is more complex than keeping it in one place on homogeneous components.

  • Hardware failures: Cloud computing is great because you're not responsible for physical maintenance, but some forms of edge computing make this your problem (and require the appropriate in-house skills).

  • Also security: Edge nodes often have laxer security than a hardened data center, and they're susceptible to malicious attacks (particularly denial of service against IoT devices).

Edge computing and modern applications

Edge computing has many applications, including connected and self-driving cars, smart cities, home automation, and defense and satellite systems. In healthcare alone, you've got infusion pumps, heart monitors, MRI machines, and asset-tracking tags, where a hospital might want to process vital signs at the edge and trigger alerts locally rather than send every reading back and forth to a data center.
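As a toy illustration of that hospital scenario (thresholds and window size invented for the example, not medical guidance), the latency-critical path, the bedside alarm, runs entirely on the local node, while routine record-keeping takes the slower path to the data center in batches:

```python
from collections import deque

# Toy edge-side vitals monitor. Thresholds and window size are made up
# for illustration; this is not how you'd build a real medical device.
LOW_BPM, HIGH_BPM = 40, 130
recent = deque(maxlen=5)  # short rolling window to ignore one-off glitches

def trigger_bedside_alarm(bpm):
    print(f"ALARM at bedside: heart rate {bpm} bpm")  # local and immediate

def log_to_datacenter(samples):
    print(f"datacenter <- batch of {len(samples)} samples")  # deferred

batch = []
for bpm in [72, 75, 71, 38, 36, 35, 37, 39, 74, 73]:  # simulated readings
    recent.append(bpm)
    batch.append(bpm)

    # Alarm only when the whole window is out of range, so one noisy
    # reading doesn't wake the ward:
    if len(recent) == recent.maxlen and all(
            b < LOW_BPM or b > HIGH_BPM for b in recent):
        trigger_bedside_alarm(bpm)

    # Record-keeping takes the slow path to the data center in batches:
    if len(batch) >= 10:
        log_to_datacenter(batch)
        batch = []
```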

There's also rising interest in edge artificial intelligence (aka Edge AI or on-device AI), where people aim to run AI models close to, or directly on, the device where the data is collected. The advent of generative AI in particular has put fresh attention on edge computing.

Is edge computing a buzzword? Not really

Part of the problem with edge computing is that it’s very nebulous. Edge computing can take so many forms because it has so many use cases, and what counts as an edge node is quite broad. It also goes through hype cycles, particularly when a leader reads a thought piece on how edge computing is going to save on cloud or AI costs or solve X business problem, only to have the actual complexities set in later.

Does that make it a buzzword? No, just a badly used one. After all, CDNs are one of the earliest forms of edge computing, and it would be hard to argue that Cloudflare, which is used by 19% of all websites on the internet and makes browsing a far less 1990s experience, is not a useful, real application of this concept.

Conclusion

Hopefully this brief article, served from a CDN node close to your location, has given you a bit more understanding of how edge computing works. That's a wrap on the latest Behind the Buzzword!

Want to know more about edge computing?

If you’re wondering if your organization needs edge computing, check out Obinna Amalu’s article: “To the edge: Strategies to adopt edge computing in your organization.” Obinna is a known expert in this space who had a hand in rolling out edge solutions at Google, so definitely worth a read.


Adam Ipsen

Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
