Launch FAQ

Is an LLM controlling greenhouse hardware directly?

No. Iris (our OpenClaw AI agent) writes tactical parameters and hypotheses. The dispatcher validates those values. The ESP32 firmware owns relay control and evaluates the greenhouse every 5 seconds.

The exact tactical parameters are listed in AI-Writable Tunables.
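
That validation step can be sketched as a simple bounds check. The parameter names, limits, and function below are illustrative assumptions, not the real dispatcher API or the actual AI-Writable Tunables:

```python
# Hypothetical sketch of the dispatcher's bounds check. Parameter names
# and limits are made up for illustration.
BOUNDS = {
    "vent_open_temp_c": (18.0, 35.0),
    "mist_interval_s":  (60.0, 3600.0),
    "hysteresis_c":     (0.5, 3.0),
}

def validate_tactics(proposed: dict) -> dict:
    """Return only proposed values inside their registered bounds.
    Everything else is dropped, so the firmware keeps its last valid
    setpoint for that parameter."""
    accepted = {}
    for name, value in proposed.items():
        limits = BOUNDS.get(name)
        if limits is None:
            continue                  # unknown parameter: never forwarded
        lo, hi = limits
        if lo <= float(value) <= hi:
            accepted[name] = float(value)
    return accepted
```

In this sketch, `validate_tactics({"vent_open_temp_c": 28.0, "mist_interval_s": 10})` would accept the vent setpoint and drop the out-of-range mist interval.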

That split is deliberate. LLMs are useful for slow-loop planning across forecast, crop needs, equipment state, cost, water, and prior lessons. They are not the right tool for deterministic relay timing, hysteresis, interlocks, or safety preemption.

Where does local inference fit?

Iris is an OpenClaw agent, not a single hard-coded API call. Routine planning events can route to Gemma 4 26B A4B (MoE), served locally under the gemma4-26b alias. Larger milestone reviews and major deviations can route to a heavyweight cloud peer.

Both paths use the same safety boundary: the model writes bounded tactics through MCP tools, the dispatcher validates them, and the ESP32 owns real-time control. Local inference makes frequent greenhouse reasoning cheaper and less dependent on an external model provider; it does not make the LLM safety-critical.

Is Verdify local-first or local-only?

Local-first. Routine reasoning should stay on Cortex when the local route is healthy and the prompt fits the current budget. Verdify is not local-only: required full plans and major reviews can use the cloud peer while local full-plan context trimming is still maturing.

That distinction is part of the audit contract. OpenClaw stamps the planner instance, and public plan records should make the route visible instead of pretending every planning cycle used the same backend.
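
Route selection and stamping can be pictured as the sketch below. The function names, health check, token budget, and stamp field are hypothetical stand-ins, not the real OpenClaw dispatcher:

```python
# Illustrative local-first route selection. All names here are
# assumptions for the sketch, not Verdify's actual code.
def choose_route(local_healthy: bool, prompt_tokens: int,
                 local_budget: int, is_major_review: bool) -> str:
    if is_major_review:
        return "cloud-peer"      # milestone reviews may use the heavyweight peer
    if local_healthy and prompt_tokens <= local_budget:
        return "gemma4-26b"      # routine planning stays on Cortex
    return "cloud-peer"          # unhealthy local route or over budget

def stamp_plan(plan: dict, route: str) -> dict:
    # The public archive records which backend produced each plan.
    return {**plan, "planner_instance": route}
```

The point of the stamp is auditability: a reader of the archive can see which cycles ran locally and which fell back to the cloud peer.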

What does "Gemma 4 26B A4B (MoE), served locally under the gemma4-26b alias" mean?

The local planner model is Gemma 4 26B A4B, a mixture-of-experts (MoE) model served through vLLM under the gemma4-26b alias. In an MoE model, a router activates only a subset of expert weights for each token. That gives useful local planning capacity, but it does not remove throughput limits from long-context planning.
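
A toy top-k gating loop shows the mechanism. The expert count, k, and the experts themselves are illustrative and do not reflect Gemma's actual configuration:

```python
import math

# Toy top-k mixture-of-experts routing: the router scores every expert,
# keeps the top k, and mixes only those experts' outputs per token.
def moe_forward(token_repr: float, router_scores: list[float],
                experts: list, k: int = 2) -> float:
    top = sorted(range(len(router_scores)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    weights = [math.exp(router_scores[i]) for i in top]
    total = sum(weights)
    # Only the k selected experts run; the rest stay inactive this token.
    return sum(w / total * experts[i](token_repr)
               for w, i in zip(weights, top))

# Four toy "experts" that just scale their input.
experts = [lambda x, m=m: m * x for m in (1.0, 2.0, 3.0, 4.0)]
y = moe_forward(1.0, [0.1, 0.9, 0.9, 0.2], experts, k=2)
```

Here the router picks experts 1 and 2 with equal weight, so `y` is the average of their outputs; the other two experts never execute, which is where the compute savings come from.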

The safety claim does not depend on Gemma being perfect. If the model is slow, wrong, or unavailable, the planner SLA and public archive expose that failure while the ESP32 continues enforcing the last valid bounded setpoints and hard safety rails.

What memory does Iris use?

Iris reads more than a prompt template. The planning context includes live telemetry, 72-hour forecast, active crop bands, equipment state, recent stress/compliance, previous plan hypotheses, scorecard outcomes, validated lessons, alerts, and the static website content that documents the greenhouse’s structure, zones, equipment, crops, and known limits.

Image observations and history support embedding-backed similarity search where available. For plan outcomes, Verdify currently exposes structured memory through plan_journal, scorecards, active lessons, and the public archive; that is the primary memory the planner reads before writing new tactics.
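
That structured-memory read can be pictured as a small query over those sources. Field and table names below are hypothetical stand-ins, not the real Verdify schema:

```python
# Hypothetical shape of the structured memory the planner reads before
# writing new tactics. Keys and filters are illustrative only.
def planner_memory(db: dict) -> dict:
    return {
        "prior_plans": db.get("plan_journal", [])[-5:],   # recent hypotheses
        "scorecards":  db.get("scorecards", [])[-5:],     # measured outcomes
        "lessons":     [l for l in db.get("lessons", [])  # validated findings only
                        if l.get("status") == "active"],
        "archive_ref": db.get("public_archive_url"),
    }
```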

Why not just use a PID controller?

PID is useful when the control objective is stable, the actuator effect is continuous, and the system is mostly single-input/single-output. Verdify’s greenhouse is not that clean.

Temperature and VPD can fight each other. Ventilation can cool the greenhouse while importing dry air. Misting can lower VPD while trapping heat if the room stays sealed too long. Gas heat, electric heat, fog, fans, vents, misters, grow lights, water budget, forecast error, and crop bands all interact.

The ESP32 still handles deterministic control. Iris is used for tactical planning: when to pre-cool, when to spend water, when to accept a short excursion, when to widen hysteresis, and when a previous lesson should override the default tactic.

Those tactical choices map to bounded registry parameters in AI-Writable Tunables.
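
The division of labor can be illustrated with the kind of hysteresis rule the firmware owns. This is a minimal sketch with made-up parameter names, not the actual ESP32 code; the planner may adjust the bounded tunables, but the rule itself stays deterministic:

```python
# Minimal hysteresis sketch of the deterministic loop the ESP32 owns.
# vent_open_temp_c and hysteresis_c stand in for bounded tunables.
def vent_relay_next(temp_c: float, relay_on: bool,
                    vent_open_temp_c: float = 27.0,
                    hysteresis_c: float = 1.0) -> bool:
    if temp_c >= vent_open_temp_c:
        return True                          # too warm: open vent
    if temp_c <= vent_open_temp_c - hysteresis_c:
        return False                         # cooled past the band: close
    return relay_on                          # inside the band: hold state
```

The hold-state branch is the point of hysteresis: it keeps the relay from chattering when the temperature hovers near the setpoint.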

Why not RL?

RL is a credible research path for greenhouse control, especially with environments such as GreenLight-Gym. It is not Verdify’s launch claim.

Verdify has one physical greenhouse. Exploration mistakes have plant and hardware consequences. The next credible step is counterfactual replay: take recent telemetry, replay alternate tunables, and estimate whether they would likely have reduced stress. Simulator-trained policies come later, if the replay and data quality justify them.
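
A counterfactual replay pass could look like the sketch below, with a deliberately crude stress metric and a fixed temperature offset standing in for the alternate tunable's effect; the real scoring would be more involved:

```python
# Sketch of counterfactual replay: re-run recorded telemetry under an
# alternate tunable and compare estimated stress. Illustrative only.
def stress(temp_c: float, band: tuple[float, float]) -> float:
    lo, hi = band
    return max(0.0, lo - temp_c, temp_c - hi)   # degrees outside the crop band

def replay_stress(telemetry: list[float], band: tuple[float, float],
                  cooling_offset_c: float) -> float:
    # Crude counterfactual: assume the alternate tunable would have
    # shifted each reading by a fixed cooling offset.
    return sum(stress(t - cooling_offset_c, band) for t in telemetry)

recorded = [24.0, 27.5, 29.0, 30.5, 28.0]        # one afternoon of readings
baseline = replay_stress(recorded, (20.0, 28.0), 0.0)
alt      = replay_stress(recorded, (20.0, 28.0), 1.5)  # e.g. earlier pre-cool
```

Comparing `alt` against `baseline` estimates whether the alternate tunable would likely have reduced stress, without ever exploring on the live greenhouse.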

What does “self-correcting” mean here?

It does not mean the AI rewrites its own code or invents new tools. It means every plan is a hypothesis, the next cycle measures the outcome, and durable findings become lessons that future plans read.

The public lessons page is intentionally split between curated canonical lessons and raw machine output. That separation exists because noisy auto-extraction is not the same as operational knowledge.

What is VPD and why does Verdify care so much?

VPD is vapor pressure deficit: the drying pressure plants experience. It is a better greenhouse control target than relative humidity alone because it combines temperature and moisture into the plant’s transpiration environment.

On dry Colorado spring days, VPD can become the binding constraint even when temperature looks acceptable. That is why Verdify tracks temperature compliance and VPD compliance separately, then scores the overlap.
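
VPD can be computed from temperature and relative humidity using the standard Tetens approximation for saturation vapor pressure; this is textbook physics, not Verdify-specific code:

```python
import math

# Vapor pressure deficit from air temperature and relative humidity,
# using the Tetens approximation for saturation vapor pressure (kPa).
def svp_kpa(temp_c: float) -> float:
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c: float, rh_pct: float) -> float:
    return svp_kpa(temp_c) * (1.0 - rh_pct / 100.0)

# The same 24 °C air is a very different drying environment at 60 % vs 20 % RH:
humid = vpd_kpa(24.0, 60.0)   # roughly 1.2 kPa
dry   = vpd_kpa(24.0, 20.0)   # roughly 2.4 kPa: a dry spring afternoon
```

This is why a temperature-only controller can look compliant while the plants are under real drying stress: the same temperature reading spans a wide range of VPD depending on humidity.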

Does the AI know when physics wins?

It has to. The greenhouse has finite cooling capacity, an undersized intake path, dry outdoor air, solar gain through polycarbonate, and mixed crop targets. On hot dry days, no software plan can create shade cloth or more vent area.

Good planning means reducing avoidable stress, using water and energy intentionally, and logging the physical limit when the greenhouse cannot hold band.

Does Verdify claim better yield or profit?

Not yet. Verdify has crop and harvest logging, but the public proof layer is currently focused on system automation: climate tactics, relay-boundary enforcement, telemetry, costs, failures, and lessons.

The defensible claim is operational: plans, telemetry, scorecards, costs, failures, and lessons are public. Yield and profit claims need more harvest records, crop-stage normalization, and comparable baselines.

What happens if Iris misses a plan?

The ESP32 keeps running with the last valid setpoints and safety rails. The archive shows the missing plan. The scorecard shows the plant-stress consequence.

The April 22-25 outage window is published because hiding it would make the evidence layer weaker. The system is more credible when failures are visible.

Is this solar powered?

The public wording is “solar-aligned,” not “off-grid solar powered.” The home has rooftop solar and batteries, but the greenhouse still uses grid electricity and gas heat when physics requires it. Costs are tracked separately from planner score.

Can someone rebuild this?

Not as a turnkey kit today. The site publishes architecture, equipment, a sample dataset, plan examples, scorecard examples, and build notes. Full code/prompt publication remains a deliberate project decision because the live system controls real equipment and contains operational details that need a careful scrub before broad release.

How should Verdify be compared to Mycodo, HAGR, iGrow, or commercial CEA systems?

Mycodo and HAGR are strong maker/grow-room control references. iGrow is a serious optimization benchmark. Koidra, Source.ag, and Blue Radix target commercial growers at a different scale.

Verdify’s lane is public falsifiability at home scale: a real greenhouse where the AI plan, physical outcome, cost, failure, and lesson are inspectable.

Next: Related Work gives the peer comparison in more detail.