
The Critical Lowdown Podcast Episode 36

The Open Source Advantage: Maximizing Data Center Efficiency Through Disaggregation - Part 2

Listen back: Part 1

Open Networking is gaining traction in data centers, offering alternatives to traditional, vendor-locked systems. It promises greater flexibility, simplified operations, and potential cost savings. But what does it mean for your data center in practice?

In Part 2, we explore the critical link between financial decisions and technological innovation, unpacking how cost-effectiveness is becoming a primary driver in network infrastructure choices. Discover the unexpected synergy between finance and technology, and how it's paving the way for more accessible, efficient, and innovative network solutions.

Subscribe to The Critical Lowdown from EPS Global wherever you get your podcasts:


Barry McGinley
Head of Technical for EMEA, EPS Global


Prasanna Kumara S
Technical Marketing Engineer, IP Infusion


Kei Lee
AVP Technical Sales, UfiSpace

If you have any questions, or need advice or tech support for your upcoming project, don’t hesitate to get in touch.

Transcript of Podcast


Barry: Kei, something we hear about all the time is vendor lock-in and how Open Networking avoids it. Can you explain the financial benefits of avoiding vendor lock-in?

Kei: I'll summarize this into two key points.

The first key point is the Pay-As-You-Go model. When you look at Data Centers or Central Offices, power, cooling, and space are crucial considerations beyond the technical design of the hardware itself. With the Pay-As-You-Go model, especially in open disaggregated systems, devices are typically one RU or two RU, not large chassis. This allows you to build infrastructure incrementally. For example, you can start with a manageable scale and easily expand as your user base grows from hundreds to thousands without altering your existing infrastructure. This scalability is a significant financial benefit.

The second key point is the distinction between CapEx and OpEx. For CapEx, using off-the-shelf components rather than proprietary ASICs reduces costs while maintaining performance. The economies of scale from leveraging merchant silicon allow us to build devices at a lower cost, which is why the open disaggregated model is gaining momentum, even among large industry players. Merchant silicon has advanced significantly over the past decade, matching many of the features of proprietary ASIC designs.

For OpEx, in a closed, vendor-locked model, you typically have to buy hardware, bundled software, and a service support contract from the same vendor. This lock-in means you can only purchase service contracts from that vendor, often at a higher cost. In an open disaggregated model, hardware and software are decoupled, reducing service contract costs. As you mentioned, Barry, this allows companies like IPI to act as system integrators, providing a one-stop shop for end customers, which significantly lowers maintenance and operational costs.

Lastly, from my experience, financial decisions often drive technical decisions. If a solution makes financial sense, it will eventually be adopted. I've noticed that CTOs often report to CFOs, underscoring the importance of financial considerations in technical decisions.

Barry: I completely agree with you there. While I don't like it when customers focus solely on cost savings, in the end, that's often what it comes down to. Off-the-shelf silicon and initiatives like the OCP over the last 14 years have allowed companies to make hardware more affordably. The goal is for everyone to use the same components, driving prices down for consumers, and it is working.

Prasanna: Thanks. The software brings a lot of features and a long legacy beyond data centers. We've been providing software for advanced carrier-grade routing features, like SR-MPLS and MPLS with VPNs, all with the same familiar and reliable network operating system. When it comes to data centers with EVPN and VXLAN, the port-plus-VLAN feature is one to look for, which frees the end user from worrying about VLANs. You can connect any VLAN to any port and map it to any customer, bringing simplicity.
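The port-plus-VLAN idea can be sketched abstractly: each (port, VLAN) pair maps independently to a customer service, so a VLAN ID no longer needs to be unique across the whole switch. A minimal illustrative sketch in Python (port names and VNI numbers are hypothetical; this is not OcNOS configuration syntax):

```python
# Illustrative sketch of port+VLAN service mapping (hypothetical, not OcNOS CLI).
# Each (port, vlan) pair maps independently to a customer's VXLAN VNI, so the
# same VLAN ID can be reused on different ports for different tenants.

service_map: dict[tuple[str, int], int] = {}  # (port, vlan) -> VNI

def attach(port: str, vlan: int, vni: int) -> None:
    """Bind a customer VLAN on a given port to a VXLAN network identifier."""
    service_map[(port, vlan)] = vni

def lookup(port: str, vlan: int) -> int:
    """Resolve the VNI serving a given (port, vlan) attachment."""
    return service_map[(port, vlan)]

# VLAN 100 means different tenants on different ports:
attach("xe1", 100, 10100)  # tenant A
attach("xe2", 100, 20100)  # tenant B

print(lookup("xe1", 100))  # → 10100 (tenant A)
print(lookup("xe2", 100))  # → 20100 (tenant B)
```

The point of the sketch is only that the lookup key is the pair, not the VLAN alone, which is what frees the operator from switch-wide VLAN planning.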

OcNOS addresses licensing difficulties with simple perpetual licensing. You buy it once and you're done. IP Infusion also offers global support at a very reduced price, helping customers save significantly on OPEX.

It has helped many customers achieve their technical goals and reach the speeds they need. For example, Scott Data, one of the largest multi-tenant data centers in the USA, selected IPI over SONiC and Cisco because we provide all the features, a one-stop solution, and support. The main things are the support and technical excellence: we tested all their use cases before delivering the solution.

There are similar examples with Madeo. They have data centers and selected us over SONiC, Arista, and Microtech solutions. They had preferred optics, which IP Infusion was able to support. These are the success stories happening now.

Barry: Yeah, I actually saw the Madeo work at the TIP Fyuz event in Lisbon this year. There was a big presentation on it. A good win for you guys.

So Kei, can you provide some examples of cost savings? If an organization adopts Ufi hardware with IPI as the software, what cost savings are achieved by the organization?

Kei: Sure. We have some real customer cases we're working on, both here and in Europe. These are pretty large carrier providers. The cost savings came in two parts from moving to the open disaggregated model.

We can preserve scalability because of the merchant silicon solution. I'll give some numbers in a second. CapEx has gone down quite a lot, but that's not the end of the story. As we discussed a moment ago, using an open platform and decoupling the software from the hardware means the traditional bundled service contract has gone away. So the savings are not only on CapEx, but also on OpEx.

For CapEx, you pay for it and maybe do a straight-line depreciation for X number of years. But OpEx, you don't depreciate. You actually have to pay for it like electricity in your house. You have to keep paying over and over, year after year. So that actually is a very significant cost-saving model in terms of ongoing expenses. A lot of these companies do depreciation - I see anything starting from four or five years all the way up to seven or even 10 years depreciation. So when your hardware has gone down to zero value, if you keep running it because it still meets your business requirements, you still have to pay for the maintenance, the service support, and all these other things. Our open disaggregated model has reduced both sides significantly.
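Kei's CapEx-versus-OpEx point can be made concrete with a little arithmetic. A hedged sketch (all figures are hypothetical, chosen only to illustrate the shape of the comparison): straight-line depreciation writes the hardware down to zero, while the support contract keeps accruing every year the device stays in service.

```python
# Hypothetical total-cost-of-ownership sketch (all figures illustrative).

capex = 100_000.0          # one-time hardware purchase
dep_years = 5              # straight-line depreciation period
annual_support = 15_000.0  # recurring OpEx: bundled service/support contract
run_years = 8              # hardware kept in service past full depreciation

# Straight-line depreciation: the same charge each year until book value is zero.
annual_depreciation = capex / dep_years
print(annual_depreciation)  # → 20000.0 per year for the first 5 years

# OpEx is not depreciated -- it accrues for every year the box keeps running:
total_opex = annual_support * run_years
print(total_opex)  # → 120000.0, which here exceeds the original CapEx

# If a decoupled model halved the support contract, the lifetime saving would be:
print(annual_support / 2 * run_years)  # → 60000.0
```

The takeaway matches the transcript: once the hardware has depreciated to zero, the recurring contract is the cost that matters, which is why reducing it dominates the long-run saving.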

One other thing I want to point out and share: when we started this business, we built a lot of devices using 16 nanometer merchant silicon. Then it went down to seven. And right now, as you and I are talking this morning, we are building the latest generation with five nanometer technology.

Now, to the end user, what does this smaller nanometer mean? To summarize in plain English, it means two things in terms of cost benefit:

  1. Because the gap in transistor design on the wafer becomes smaller and smaller from 16nm to 7nm and now to 5nm, we can fit a lot more gates - all the different logic gates - into the wafer. What that means to you as an end user is you can expect a lot higher computing processing power on the same surface area chip design.
  2. The smaller the nanometer, the more power saving on the device. Now, I want to be careful - more power saving doesn't mean the switch itself has lower power supply usage. What it means is per megabyte of processing or per 400G port, the power consumed has gone down using a 5nm design compared to a 16nm design.
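The second point above can be expressed as simple arithmetic: what improves with a smaller process node is watts per unit of throughput, not necessarily the absolute draw of the box. A sketch with made-up numbers (both switches and their wattage/capacity figures are hypothetical):

```python
# Illustrative only: the newer box may draw more total power, but power per
# 400G-equivalent of switching capacity goes down with the smaller node.

old_watts, old_capacity_g = 450.0, 3_200    # hypothetical 16nm-era switch
new_watts, new_capacity_g = 700.0, 12_800   # hypothetical 5nm-era switch

w_per_400g_old = old_watts / (old_capacity_g / 400)
w_per_400g_new = new_watts / (new_capacity_g / 400)

print(round(w_per_400g_old, 2))  # → 56.25 W per 400G of capacity
print(round(w_per_400g_new, 2))  # → 21.88 W per 400G of capacity
```

So even though the hypothetical 5nm box draws more watts in absolute terms, each 400G port of capacity costs far less power, which is the per-port saving Kei describes.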

So these benefits are built into the hardware we're developing, and all these cost benefits go back to the end user who's leveraging this technology.

Barry: That's actually really interesting. So you could use that headroom for innovation or for reducing power. That's brilliant.

Okay, guys, we're going to finish up, but I have a couple of questions:

  1. What's one takeaway for our listeners to remember about open networking?
  2. Where do you see open networking in seven years' time?

I'm always interested to see what people in the industry think about where it's going to go.

Kei: Before I answer the seven-year question: I've seen open networking come a long way in the last few years. That's why you see OCP, TIP, and all these open foundations growing every year. When I present at OCP, I see more and more people attending, even during the pandemic when people joined online. So it's certainly gathering a lot of momentum.

To answer your question, where I see this in seven years:

  1. The technology is becoming more mature. I want to emphasize technology in the sense of both hardware and peripherals. This includes merchant silicon, thermal design, more intelligent temperature control, and things like liquid cooling. These advancements have driven a lot of momentum and partnerships with companies like yours.
  2. We continue to see a lot of software features becoming more integrated into our open-source platform.

So in seven years, or even five years, or even three years, Barry, I see the momentum continuing to build. I think it's a very exciting time to be in this field.

Lastly, I think customers see the benefit. I want to emphasize a point I made earlier: financial decisions drive technical decisions. So the last thing I want to leave with the audience is this: if open networking makes financial sense from both an OpEx and CapEx perspective, deployment will eventually catch up.

Barry: Okay. Thanks. You've painted a nice rosy picture. That's quite good. And Prasanna, basically the same question for yourself:

  1. Where do you see open networking in seven years?
  2. Regarding IPI, what is allowing companies to shift from traditional networking and move to IPI?

Prasanna: It's an interesting aspect actually. When we look at the next seven years:

  • More sophistication is going to come in the software.
  • Hardware is going to increase in speed and capabilities.
  • AI is going to play a bigger role in terms of network monitoring and recovery.

All these elements work together to provide a very good manageable solution to the end customer. And it will keep evolving - it doesn't stop at one point.

In seven years, we can expect to see:

  • Networks trying to auto-recover by themselves.
  • Networks allowing for greater workloads to go through them.

So this is where I see a lot of transition happening, both in terms of data handling and intelligence in the network.

Glossary of Terms

  • ASIC (Application-Specific Integrated Circuit): A type of integrated circuit customized for a particular use, rather than intended for general-purpose use.
  • BMC (Baseboard Management Controller): A specialized microcontroller embedded on the motherboard of many computers, especially servers, that manages the interface between system-management software and platform hardware.
  • Data Plane: The part of a network that carries user traffic, as opposed to the control plane, which carries signaling traffic.
  • EVPN (Ethernet VPN): A network technology that provides Layer 2 and Layer 3 VPN services over an IP/MPLS network.
  • Merchant Silicon: Standardized silicon chips used in networking hardware, as opposed to proprietary ASICs.
  • RDMA (Remote Direct Memory Access): A technology that allows data to be transferred directly from the memory of one computer to another without involving the CPU, cache, or operating system of either computer.
  • RoCE (RDMA over Converged Ethernet): A network protocol that allows RDMA over an Ethernet network.
  • VXLAN (Virtual Extensible LAN): A network virtualization technology that addresses the scalability problems associated with large cloud computing deployments.
  • IRB (Integrated Routing and Bridging): A network configuration that allows for both routing and bridging within the same device, providing flexibility in network design.
  • Leaf-Spine Architecture: A network topology commonly used in data centers where leaf switches connect to spine switches, providing a scalable and efficient network design.

Need Help?

We have local language and currency support in each of our 28 locations, ensuring you always have access to friendly customer support to deliver your hardware solutions regardless of your location.