Nexvec from Edgecore

A Turnkey Open Infrastructure Solution for Enterprise AI

Composable Compute

In partnership with Liqid, Edgecore’s composable infrastructure solution utilizes industry-standard data center components to create a flexible, scalable architecture—built from pools of disaggregated resources.

Composable compute illustration

Dynamic Resource Allocation, On Demand

Compute, networking, storage, GPU, FPGA, and Intel® Optane™ memory are interconnected via intelligent fabrics, enabling dynamically configurable bare-metal servers. Each server is precisely tailored with only the physical resources the application requires—nothing more, nothing less.
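The idea of carving a bare-metal server out of shared pools can be sketched in a few lines of Python. This is a conceptual model only; the resource names, pool sizes, and functions below are hypothetical and do not represent Liqid's actual API.

```python
# Conceptual sketch of composable infrastructure: servers are assembled
# from disaggregated resource pools and returned to them when released.
# All names and quantities here are illustrative, not Liqid's actual API.

class ResourcePool:
    """Tracks free units of one resource type (e.g. GPU, NVMe drive)."""
    def __init__(self, kind, total):
        self.kind = kind
        self.free = total

    def allocate(self, count):
        if count > self.free:
            raise RuntimeError(f"not enough free {self.kind}")
        self.free -= count
        return count

    def release(self, count):
        self.free += count


def compose_server(pools, **request):
    """Carve out exactly the requested resources, nothing more, nothing less."""
    granted = {}
    for kind, count in request.items():
        granted[kind] = pools[kind].allocate(count)
    return granted


def decompose_server(pools, server):
    """Return a server's resources to the shared pools for reuse."""
    for kind, count in server.items():
        pools[kind].release(count)


pools = {
    "gpu": ResourcePool("gpu", 30),
    "nvme": ResourcePool("nvme", 24),
    "fpga": ResourcePool("fpga", 8),
}

inference_node = compose_server(pools, gpu=8, nvme=4)
print(pools["gpu"].free)   # prints 22: the rest stay free for other workloads
decompose_server(pools, inference_node)
print(pools["gpu"].free)   # prints 30: resources returned to the pool
```

The key property illustrated is that releasing a server returns its hardware to the shared pools, which is what allows utilization to stay high across changing workloads.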

Improve Efficiency, Reduce Waste

By disaggregating and reallocating hardware as needed, you can double or even triple resource utilization, significantly reducing power consumption and lowering your carbon footprint—especially valuable for AI-centric deployments.

Powered by Liqid Matrix Technology

Edgecore’s composable infrastructure, driven by Liqid Matrix, adapts in real time to workload demands, making full utilization achievable while improving scalability and responsiveness.

Automation for Next-Gen Workloads

Infrastructure processes can be fully automated, unlocking new efficiencies to meet the data demands of next-generation applications—AI, IoT, DevOps, cloud, and edge computing—with support for NVMe over Fabrics (NVMe-oF) and GPU-over-Fabric (GPU-oF) technologies.

Architecture comparison: better AI performance, optimized efficiency, maximum flexibility, lower power consumption

More GPU Horsepower. Fewer Servers. Greater AI Results.

Scale up to 30 GPUs per server to meet your AI workload demands while lowering power consumption and increasing GPU utilization.

Read Whitepaper
GPU performance chart
GPU server layout

Drive Down AI Costs with Smarter GPU Utilization

Achieve up to 100% GPU utilization for maximum tokens per watt and per dollar.
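Why utilization drives tokens per watt: a server draws a fixed baseline of power even when its GPUs sit partly idle, so the same per-GPU throughput yields more tokens per watt as utilization rises. The arithmetic below uses hypothetical round numbers, not measured figures from any Edgecore or Liqid system.

```python
# Illustrative arithmetic (hypothetical numbers, not measured figures):
# server power = fixed baseline + per-active-GPU draw, so tokens per watt
# improves as GPU utilization rises.

def tokens_per_watt(num_gpus, utilization, tokens_per_gpu_sec=1000,
                    base_watts=800, watts_per_active_gpu=400):
    active = num_gpus * utilization
    throughput = active * tokens_per_gpu_sec            # tokens per second
    power = base_watts + active * watts_per_active_gpu  # watts
    return throughput / power

low = tokens_per_watt(8, 0.4)    # 40% utilization
high = tokens_per_watt(8, 1.0)   # 100% utilization
print(round(low, 2), round(high, 2))  # prints 1.54 2.0
```

Even with these toy numbers, moving from 40% to 100% utilization improves tokens per watt by roughly 30%, because the fixed baseline power is amortized over more useful work.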

Read the Solution Brief

Leverage Multi-Vendor GPUs

Your AI, your choice. Harness the power of silicon diversity for unmatched flexibility and agility.

Self-driving fabric diagram

The Path to a Self-driving Fabric Starts Here

Build your own private AI inference cloud with Liqid Matrix® software, Kubernetes, and NVIDIA NIM automation.

Read the Whitepaper

Accelerate AI with On-Demand GPU Provisioning

Choose your own infrastructure adventure. Leverage our intuitive UI, CLI, and northbound APIs for Kubernetes, VMware, and SLURM.
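On-demand provisioning typically wraps a job's lifetime: attach GPUs before the job runs, detach them afterwards so they return to the fabric. The sketch below models that flow the way a scheduler prolog/epilog might; `FabricClient` and its methods are invented for illustration and are not Liqid's actual northbound API.

```python
# Hypothetical sketch of on-demand GPU provisioning around a job's lifetime.
# "FabricClient" is a stand-in for a composable-fabric control plane; it is
# invented for illustration and is not Liqid's actual northbound API.

from contextlib import contextmanager

class FabricClient:
    """Stand-in for a fabric control plane tracking free GPUs."""
    def __init__(self, free_gpus=30):
        self.free_gpus = free_gpus

    def attach_gpus(self, node, count):
        if count > self.free_gpus:
            raise RuntimeError("insufficient free GPUs in fabric")
        self.free_gpus -= count
        return count

    def detach_gpus(self, node, count):
        self.free_gpus += count

@contextmanager
def gpus_for_job(client, node, count):
    """Attach GPUs before the job runs, detach afterwards (even on failure)."""
    granted = client.attach_gpus(node, count)
    try:
        yield granted
    finally:
        client.detach_gpus(node, granted)

client = FabricClient()
with gpus_for_job(client, "node-01", 4) as n:
    print(f"running inference job on {n} GPUs")  # GPUs held only while needed
print(client.free_gpus)  # prints 30: GPUs returned to the fabric after the job
```

Using a context manager mirrors the automation point above: GPUs are held only for the duration of the work, so they are never stranded on an idle node.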

Read More
Provisioning UI screenshot

Need Help?

With local-language and local-currency support at each of our 28 locations, you always have access to friendly customer support for your hardware solutions, wherever you are.