Acer is turning the humble “mini PC” into something far more ambitious with the new Veriton GN100 AI Mini Workstation, and its latest move in New York shows exactly what it’s aiming for. Instead of treating AI development as something that lives only in the cloud or in massive data centers, Acer is betting on a future where serious AI work happens on a small box sitting right on your desk.
At the center of that bet is the Veriton GN100, a compact 150 x 150 x 50.5 mm system that weighs under 1.5 kg yet is built on the same NVIDIA DGX Spark platform that NVIDIA markets as a kind of “AI supercomputer on your desk.” Inside, it runs the NVIDIA GB10 Grace Blackwell Superchip, pairing a 20-core Arm CPU with a Blackwell-generation GPU, 128GB of unified LPDDR5x memory, and up to 4TB of self-encrypting NVMe storage. In raw numbers, you’re looking at up to 1 petaFLOP of FP4 AI performance in something roughly the footprint of a small router.
On paper, that spec sheet makes the GN100 sound like a scaled-down data-center node, not a typical workstation tower. And that’s the whole point: Acer is trying to collapse the gap between “personal PC” and “enterprise AI rig” into a single, developer-friendly box.
The latest announcement pushes this idea even further. Acer has confirmed that the Veriton GN100 will power “The Spark Hack Series – New York,” a three-day AI hackathon co-hosted with NVIDIA and early-stage VC firm Antler from April 10–12, 2026. Every team in the event gets hands-on access to the GN100, but with some meaningful upgrades that go beyond the original launch spec.
First, Acer is flipping a switch on multi-node scaling. The GN100 can now be linked in clusters of up to four systems over a 200GbE RoCE switch, effectively turning a handful of shoebox-sized machines into a small AI cluster. With that setup, Acer says developers can push models up to around 700 billion parameters, up from roughly 405 billion when only two systems were supported. It’s not quite “frontier-lab scale,” but for an on-prem workstation cluster that fits under a table, that’s a big jump in ambition.
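To make the clustering idea a bit more concrete, here is a minimal sketch of how a four-node job might sanity-check that kind of fabric using PyTorch's built-in distributed tooling. The hostnames, launch flags, and the assumption that NCCL rides the RoCE link are illustrative, not Acer's or NVIDIA's documented cluster procedure.

```python
# Minimal sketch of a four-node sanity check over the cluster fabric, assuming
# PyTorch with CUDA/NCCL is installed on each GN100 and all four boxes sit on
# the same 200GbE switch. Hostnames and launch flags are illustrative.
#
# Run on each node, e.g.:
#   torchrun --nnodes=4 --nproc_per_node=1 --node_rank=<0..3> \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 cluster_check.py

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")   # NCCL picks up RDMA/RoCE when available
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(0)                  # one GPU per GN100 node

    # A simple all-reduce confirms every node can reach every other node.
    t = torch.ones(1, device="cuda") * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}/{world} sees all-reduce sum = {t.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```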
Equally important is what’s happening on the software side. The GN100 is now positioned as a turnkey box for NVIDIA’s NemoClaw reference stack, which is NVIDIA’s own foundation for building autonomous, long-running AI agents. NemoClaw leans on open models such as NVIDIA’s Nemotron and runs through NVIDIA’s OpenShell runtime, which is designed to keep these agents inside secure sandboxes while still letting them perform complex, multi-step work.
In practice, that means you can run the kind of AI agents that don’t just answer a single question or write one function—they plan, iterate, call sub-agents, and stitch together longer workflows, whether that’s refactoring a large codebase, running recurring analytics, or automating content pipelines. These are the same style of agents people are experimenting with through tools like Claude Code, Cursor, or other coding-assistant platforms, but with the twist that here they’re running fully local, under tight governance rules, and without sending data out to a shared cloud environment.
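As a rough illustration of what such a local agent loop looks like in code, here is a minimal sketch that assumes an OpenAI-compatible inference server is running on the GN100 at http://localhost:8000/v1 and serving a model called "nemotron". The endpoint, model name, and loop structure are placeholders, not NemoClaw's actual interface.

```python
# Illustrative multi-step agent loop against a hypothetical local,
# OpenAI-compatible endpoint. Endpoint URL and model name are assumptions.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")


def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system",
         "content": "Work step by step. Reply 'DONE: <result>' when finished."},
        {"role": "user", "content": task},
    ]
    text = ""
    for _ in range(max_steps):
        reply = client.chat.completions.create(model="nemotron", messages=messages)
        text = reply.choices[0].message.content or ""
        if text.strip().startswith("DONE:"):
            return text.split("DONE:", 1)[1].strip()
        # Feed the model's own plan back in so it can iterate on the next step.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": "Continue with the next step."})
    return text


if __name__ == "__main__":
    print(run_agent("Outline a refactor plan for a large Python codebase."))
```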
Security and control are clearly part of the pitch. NemoClaw and OpenShell let teams start agents with zero permissions and then explicitly grant access based on intent and policy. That builds a more enterprise-palatable model, where you can limit what an AI agent can touch—files, internal systems, or tools—while still taking advantage of its ability to run continuously in the background. For companies worried about data leakage or per‑token cloud costs, the idea of a local, policy-driven AI agent that runs on a small box under someone’s desk is pretty compelling.
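To show the shape of that "zero permissions by default" idea, here is a tiny, purely illustrative Python gate. It is not NemoClaw or OpenShell code; the class and method names are hypothetical, just a sketch of how explicit grants might wrap an agent's file access.

```python
# Purely illustrative allow-list gate in the spirit of the
# "zero permissions, then explicit grants" model described above.

from pathlib import Path


class PolicyError(PermissionError):
    pass


class ToolGate:
    def __init__(self) -> None:
        self.allowed: set[Path] = set()         # the agent starts with no access at all

    def grant_read(self, path: str) -> None:
        self.allowed.add(Path(path).resolve())  # an explicit, auditable grant

    def read_file(self, path: str) -> str:
        p = Path(path).resolve()
        if not any(p == root or root in p.parents for root in self.allowed):
            raise PolicyError(f"agent has no read grant covering {p}")
        return p.read_text()


gate = ToolGate()
gate.grant_read("./reports")                    # only this directory becomes readable
# gate.read_file("/etc/passwd")                 # anything else raises PolicyError
```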
To keep all of that manageable, Acer is layering its own software on top in the form of Acer Sense Pro, which debuts as a kind of control tower for these GN100 units. Rather than being a generic system monitor, Sense Pro is built with AI developers in mind. It pulls together real-time readouts on CPU, GPU, storage, and memory into a single dashboard, but also exposes inference-centric metrics like Tokens per Second (TPS) and Time to First Token (TTFT) via simple, vertical bar charts.
That last bit might sound minor, but anyone who has tried to tune a local LLM knows that perceived latency can make or break the user experience. Sense Pro’s tooling is set up so teams can compare different configurations and models, balancing context length, response quality, and speed in a more structured way instead of flying blind.
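For anyone curious where numbers like these come from, here is a rough, do-it-yourself version of the two metrics, measured against a streaming local endpoint and assuming the same hypothetical OpenAI-compatible server as in the agent sketch above. Sense Pro's value is that it surfaces this kind of data without custom scripts.

```python
# Hand-rolled TTFT/TPS measurement against a hypothetical local endpoint.
# Token counting is approximate: one streamed chunk is treated as one token.

import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")


def measure(prompt: str, model: str = "nemotron") -> tuple[float, float]:
    start = time.perf_counter()
    first_token_at = None
    tokens = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            tokens += 1
    elapsed = time.perf_counter() - start
    ttft = (first_token_at - start) if first_token_at else elapsed
    tps = tokens / max(elapsed - ttft, 1e-6)    # generation rate after the first token
    return ttft, tps


if __name__ == "__main__":
    ttft, tps = measure("Summarize the trade-offs of local vs. cloud inference.")
    print(f"TTFT: {ttft:.2f} s   TPS: {tps:.1f}")
```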
The app doesn’t stop at performance charts. Acer is also baking in self-diagnostics, a community-driven knowledge base, and even a local AI agent to help troubleshoot hardware issues privately. The idea is that if your GN100 cluster starts misbehaving in the middle of a long experiment, you don’t have to sift through raw system logs alone—the system can help flag what’s going wrong, and suggest next steps, without sending data back to Acer’s servers.
On the performance-testing side, Sense Pro makes it easier to benchmark and compare multiple models on the same hardware, both in terms of speed and output quality. For teams that are juggling everything from code‑focused models to multimodal or reasoning-heavy LLMs, that matters: you can more quickly decide which engine is best for a given workflow without re-creating ad-hoc scripts each time.
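Building on the measurement sketch above, a quick side-by-side comparison could look something like this; the model names are placeholders for whatever is actually being served locally.

```python
# Reuses the hypothetical measure() helper from the earlier sketch to compare
# a few locally served models. Model names are placeholders.

for model in ["nemotron", "code-model", "reasoning-model"]:
    ttft, tps = measure("Write a function that parses CSV headers.", model=model)
    print(f"{model:>16}  TTFT {ttft:5.2f} s  TPS {tps:6.1f}")
```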
All of this—NemoClaw support, multi‑node scaling, Sense Pro—is coming together for The Spark Hack Series in New York, which doubles as a real-world stress test for Acer’s vision. Held at Antler’s NYC office, the event is aimed at frontier builders, applied ML engineers, systems engineers, and startup teams, with tracks that focus on human, environmental, and cultural impact challenges using open data from the City of New York.
It’s not just a generic hackathon brief. Teams are asked to use that open data to improve health, safety, and economic opportunity, rethink sustainability and urban movement, or expand access to culture and recreation while preserving neighborhood history. And they’re expected to do it with fully local AI workflows running on the GN100, rather than spinning up yet another cloud-heavy stack.
There’s also a very practical incentive: up to 40 teams of three to four participants will be competing, and the three winning teams will each take home a Veriton GN100 of their own. For early-stage founders or small dev groups, that’s not just some swag—it’s a serious piece of on-prem compute they can keep using once the weekend is over.
Stepping back, the GN100 is more than just a hardware spec sheet. It’s Acer’s answer to a question the industry is wrestling with: how much AI should run locally, and how much should we keep dependent on hyperscale cloud infrastructure? With NVIDIA’s DGX Spark architecture under the hood, the GN100 sits squarely on the “local” side of that spectrum.
By pairing 128GB of unified memory with fast local NVMe storage and a modern Blackwell-class GPU, the box is built to run large language models and other AI workloads directly on-device, without shunting every request over the network. For organizations, that promises lower latency, more predictable costs, and tighter data control, especially once you start chaining together multiple GN100s into that four-node configuration for models approaching 700B parameters.
There are trade-offs, of course. Clustering four mini workstations to handle near-frontier-scale models won’t replace massive data-center clusters for the very largest training runs or global-scale inference services. But that’s not really the target. Acer seems focused on the “personal supercomputer” niche—teams and individuals who want serious AI compute they can own, manage, and keep on-prem, possibly as a complement to cloud resources rather than a full replacement.
From a developer’s perspective, the value proposition is straightforward:
You get a compact box with DGX-class software, NVIDIA’s AI stack pre-installed, support for frameworks like PyTorch and Jupyter, and a carefully curated reference stack for running agentic workflows via NemoClaw, all with Acer’s Sense Pro keeping an eye on performance, health, and latency. If your work involves fine-tuning, inference, or building specialized AI agents that need to stay close to sensitive data—think internal tools for finance, healthcare, product design, or R&D—this is the kind of machine that can slot into a lab, office, or even a home setup without demanding a server rack.
Availability-wise, Acer says the Veriton GN100 AI Mini Workstation is already shipping globally, though exact configurations, pricing, and regional SKUs vary by market. For concrete pricing or channel details, Acer is still pointing buyers to local offices and regional websites, which is typical for this kind of semi-enterprise device.
What’s notable is the timing and positioning: the GN100 isn’t arriving in a vacuum. It lands just as hardware vendors and cloud providers are all racing to define what “AI PCs” and “personal AI workstations” actually look like. Acer is effectively planting its flag on the high-end, developer-first side of that spectrum, pairing serious NVIDIA silicon with a stack that’s explicitly tuned for local, long-running agents and multi-model experimentation.
For the teams heading into The Spark Hack Series in New York, the GN100 will simply be the machine they’re building on for the weekend. But zoomed out, it’s also a small preview of a likely near future where every serious AI team has at least one of these “personal supercomputers” humming away somewhere nearby—quietly chewing through models, running agents, and keeping more of the AI workflow in-house rather than somewhere in the cloud.
