AI Greenhouse Control
Verdify is a public case study in AI-assisted greenhouse control. Crop profiles define the target bands, Iris (our OpenClaw AI agent) plans the tactics, the ESP32 executes local control every 5 seconds, and scorecards measure whether those tactics reduced plant stress without wasting water, heat, electricity, or equipment cycles.
This is not a chatbot bolted onto a garden. It is a closed-loop planning system running against a real 367 sq ft solar-aligned greenhouse in Longmont, Colorado. Routine planning is local-first, using Gemma 4 26B A4B (MoE) served under the gemma4-26b alias, but not local-only: larger cloud reasoning remains available for heavyweight reviews and major deviations. Slack is the human operations surface, where Iris explains successful plans, forecast deviations, watch items, and checklist work, while the safety-critical control loop stays local on the ESP32. The home has rooftop solar and batteries, but the greenhouse still draws grid power and gas heat when physics requires it.
The Control Loop
- Climate probes, soil sensors, water quality, equipment state, weather feeds, power meters, and camera observations describe the greenhouse.
- OpenClaw gives Iris memory, MCP tools, prior plans, lessons, scorecards, forecast context, and the static site pages that describe the greenhouse.
- Gemma 4 26B A4B (MoE), served locally under the gemma4-26b alias, handles routine checks and smaller deviations; a larger cloud peer handles milestone reviews and major shifts.
- The ESP32 enforces tactical setpoints through heaters, fans, misters, fog, vents, grow lights, and pumps.
- Slack receives the operator-facing explanation: plan ID, prior scorecard, forecast, intended posture, experiment, and watch items.
- Daily summaries compare planned targets against measured temperature, VPD, stress hours, cost, water, and runtime.
- Validated findings become lessons that the next planning cycle reads before writing new tactics.
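Under illustrative assumptions, one pass through this loop can be sketched in a few lines. Every function name, field, and threshold below is hypothetical, not the production Verdify schema:

```python
# One pass through the control loop, sketched with hypothetical names
# and fields; the production schema and thresholds differ.

def sense():
    # In production: climate probes, soil, water quality, power, weather, cameras.
    return {"temp_c": 29.5, "rh_pct": 22.0, "forecast_high_c": 33.0}

def write_tactics(state, lessons):
    # The planner writes bounded intent, never direct relay commands.
    hot_day = state["forecast_high_c"] > 30.0
    return {
        "vent_posture": "aggressive" if hot_day else "normal",
        "mist_interval_min": 12 if hot_day else 20,
        "watch_items": lessons[:1],  # carry the most recent validated lesson
    }

def scorecard(plan, measured_stress_hours):
    # Daily summary: planned posture vs. measured outcome.
    return {"plan": plan, "stress_hours": measured_stress_hours}

lessons = ["pre-vent before solar gain peaks on hot forecasts"]
plan = write_tactics(sense(), lessons)
result = scorecard(plan, measured_stress_hours=1.5)
```

The point of the shape is the feedback edge: the scorecard produced at the end of one cycle is part of the memory read at the start of the next.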
The operator-facing version of this loop lives in Slack Operations. That page shows how a successful plan, a daily task queue, and a forecast-deviation adjustment look when Iris and Orbit explain the work in the greenhouse channel.
Local Inference and Memory
The interesting AI layer is not just “an LLM writes setpoints.” Iris is an OpenClaw agent with a local inference path: routine triggers route to the locally served Gemma 4 26B A4B (MoE) behind the gemma4-26b alias, while higher-consequence planning reviews use a larger cloud peer. The routing policy keeps frequent reasoning cheap and local without putting the model in the relay loop.
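A routing policy like that can be sketched as a small lookup. Only the gemma4-26b alias comes from this page; the trigger names and the cloud model label are assumptions for illustration:

```python
# Hedged sketch of trigger routing: routine work stays on the local
# gemma4-26b alias; high-consequence reviews escalate to a cloud peer.
# Trigger names and the cloud label are illustrative assumptions.

LOCAL_MODEL = "gemma4-26b"
CLOUD_MODEL = "cloud-reviewer"  # placeholder, not a real endpoint name

HEAVY_TRIGGERS = {"milestone_review", "major_deviation"}

def route(trigger: str) -> str:
    # Local-first default keeps frequent reasoning cheap; only
    # explicitly heavyweight triggers leave the house.
    return CLOUD_MODEL if trigger in HEAVY_TRIGGERS else LOCAL_MODEL
```

Note the default direction: anything not explicitly marked heavyweight stays local, so new trigger types fail cheap rather than expensive.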
Control split: Iris writes bounded tactical intent. The dispatcher validates it. The ESP32 owns relay decisions every 5 seconds. Local-first reasoning never means LLM-direct hardware control.
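A minimal sketch of the dispatcher's validation step, assuming hypothetical parameter names and bounds (the real tunable set lives in AI-Writable Tunables):

```python
# Clamp planner intent to hard bounds before the ESP32 ever sees it.
# Parameter names and bounds here are illustrative, not the real set.

BOUNDS = {
    "day_temp_max_c":    (18.0, 35.0),
    "vpd_target_kpa":    (0.4, 1.6),
    "mist_interval_min": (5, 60),
}

def validate(intent: dict) -> dict:
    safe = {}
    for key, value in intent.items():
        if key not in BOUNDS:
            continue  # unknown or relay-level keys are dropped, never forwarded
        lo, hi = BOUNDS[key]
        safe[key] = min(max(value, lo), hi)  # clamp into the hard band
    return safe
```

Dropping unknown keys, rather than forwarding them, is what keeps a hallucinated "pump_on" from ever reaching firmware.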
Before writing a plan, Iris can read several kinds of memory:
- Previous hypotheses, outcomes, stress hours, water use, cost, and planner scores from plan_journal and scorecard views.
- Curated lesson families from prior plan outcomes, with validation counts and raw machine output kept separately for auditability.
- Structure, zones, equipment, crop pages, known limits, and build notes, bundled as static context so Iris reasons against the real room, not a generic greenhouse.
- Current climate, equipment state, 72-hour forecast, crop target bands, DLI, water, energy, alerts, and data-health checks, arriving through MCP and the planner context window.
- Image observations and prior records, indexed for similarity where available; structured plan history and lessons provide the planner's main greenhouse memory today.
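Assembling those sources into a single planner packet might look like the sketch below. Every key name is an assumption; the real prompt contract is documented on the Planner Context Window page:

```python
# Merge memory sources into one context packet for the planner.
# Key names are hypothetical, not the real prompt contract.

def build_context(plan_journal, lessons, site_pages, live, observations):
    return {
        "recent_plans": plan_journal[-3:],                      # hypotheses + outcomes
        "lessons": [l for l in lessons if l.get("validated")],  # curated only
        "site": site_pages,            # structure, zones, crops, known limits
        "live": live,                  # climate, forecast, DLI, alerts, data health
        "observations": observations,  # similarity-indexed images where available
    }
```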
Iris turns the same plan into a readable Slack brief, while Orbit posts human tasks like hydro checks, pest inspection, grow-light checks, and reservoir service.
That is the launch story: a local agent using retrieval, memory, and all available greenhouse data to tune bounded tactics, while deterministic firmware remains responsible for real-time safety.
What the AI Actually Controls
Iris does not flip relays, open vents, start heaters, or run pumps directly. That separation is intentional. The AI writes tactical parameters: temperature bounds, VPD thresholds, misting aggressiveness, fog escalation, ventilation posture, water budget, light posture, and experiment notes. The ESP32 reads those parameters and owns the real-time state machine, relay decisions, dwell timers, and safety behavior.
For the exact writable parameters, defaults, bounds, and relay impact, see AI-Writable Tunables.
The full safety split is documented in Why the AI Does Not Control Relays.
Temperature planning is constrained by outdoor air, solar gain, heating capacity, fan airflow, vents, fog, and the concrete slab. Good control is not a perfect line. Good control is reducing stress when the greenhouse can act, and being honest when physics is stronger than software.
VPD is usually the harder planning problem. Ventilation can cool the greenhouse while importing dry outdoor air. Misters and fog can reduce VPD while adding water and sometimes fighting ventilation. The planner decides which tradeoff is least bad for each part of the day.
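To make that tradeoff concrete, VPD can be computed from air temperature and relative humidity with the standard Tetens saturation-vapor-pressure approximation. The formula is standard; the scenario in the comment is an illustration, not measured Verdify data:

```python
import math

def vpd_kpa(temp_c: float, rh_pct: float) -> float:
    # Tetens approximation for saturation vapor pressure, in kPa.
    svp = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return svp * (1.0 - rh_pct / 100.0)

# Illustrative dry Front Range afternoon: venting imports 15% RH outdoor
# air. At 30 C that air sits near 3.6 kPa VPD, far above typical crop
# bands, which is why misting and fog sometimes fight ventilation.
```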
Why This Is a Useful Testbed
Most automation examples are demonstrated in friendly conditions. Verdify is not friendly. Longmont’s elevation and dry Front Range air create sharp daily swings. The greenhouse gets real winter snow, spring afternoons with humidity in the teens, strong summer solar gain, tree shade, uneven zones, and crops that prefer different environments.
That makes the site useful for people searching for:
- AI greenhouse automation
- greenhouse VPD control
- ESP32 greenhouse controller design
- solar-aligned greenhouse monitoring
- forecast-driven climate control
- public greenhouse telemetry and scorecards
The Proof Layer
The strongest part of Verdify is that the claims are checkable. The planner publishes daily plans, Grafana renders measured greenhouse state, and scorecards show whether each planning cycle helped, did nothing, or made a tradeoff worse.
The planning score is not the only metric. Verdify separates compliance percentages from stress hours, then also tracks cost, water, equipment runtime, and forecast error. When a plan fails, that failure remains in the record and can become a lesson Iris reads before future plans.
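That separation of compliance from stress hours can be sketched like this, with a hypothetical sampling cadence, band, and field names:

```python
# Compliance and stress are scored separately: a day can look compliant
# on percentage while still accumulating damaging stress hours.
# Field names, band, and cadence are illustrative assumptions.

def score_day(temps_c, band_lo, band_hi, stress_hi, minutes_per_sample=5):
    n = len(temps_c)
    compliance_pct = 100.0 * sum(band_lo <= t <= band_hi for t in temps_c) / n
    stress_hours = sum(t > stress_hi for t in temps_c) * minutes_per_sample / 60.0
    return {"compliance_pct": round(compliance_pct, 1),
            "stress_hours": round(stress_hours, 2)}
```

A day with ten in-band readings and two hot spikes still scores over 80% compliance, which is exactly why stress hours are tracked as their own metric.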
Where to Go Next
- The Planning Loop explains how Iris turns context into tactical setpoints.
- OpenClaw Configuration explains agent routing, trigger-scoped sessions, and audit headers.
- Slack Operations shows how Iris and Orbit present plans, deviations, reminders, and task queues to the human operator.
- Local Inference Setup documents vLLM, Gemma 4 26B A4B (MoE) served locally under the gemma4-26b alias, prompt budgets, and launch capacity posture.
- Planner Context Window shows the generated data packet and prompt contract.
- AI-Writable Tunables lists the exact parameters Iris can set.
- Safety Architecture explains why Iris never flips relays directly.
- Known Limits keeps physical, sensor, inference, and reliability caveats visible.
- Firmware Change Protocol documents the replay and OTA gates for controller changes.
- Planning Quality shows whether plans improved measured outcomes.
- Baseline vs Iris compares the planner-offline window with the following Iris-online window.
- Climate at 5,000 Feet explains the physical greenhouse problem.
- The Greenhouse describes the structure, zones, crops, equipment, and sensors.
- AI Greenhouse Planning Archive keeps the generated planning lab notebook.