AI Greenhouse Intelligence
Iris, our OpenClaw AI agent, is the planning layer: Gemma 4 26B A4B (MoE) served locally under the gemma4-26b alias, plus MCP tools, greenhouse memory, and a cloud escalation path for heavier reviews. This is local-first, not local-only: routine events stay on Cortex when practical, while heavyweight reviews can use an explicitly labeled cloud peer. The ESP32 is the control layer. The crop band is the target. The scorecard is the feedback loop. For a plain-English overview of the full loop, start with AI Greenhouse Control; this page is the technical doorway into the same system.
Iris does not directly turn equipment on and off. It reads the current greenhouse state, forecast, active crop targets, equipment health, previous plan outcomes, static website context, the live planner context window, observations, and validated lessons. Then it writes tactical setpoints: how tight to hold temperature, when to mist, how much water to spend, how aggressively to ventilate, and when to lean on lights or heat. The dispatcher validates and delivers those values; the ESP32 controller enforces them safely on a 5-second control loop.
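The validate-then-deliver step can be sketched as a bounds check between planner output and controller input. This is a minimal illustration, not the real dispatcher: the tunable names and bounds here are invented for the example (the real ones are listed in AI-Writable Tunables).

```python
# Hedged sketch of dispatcher-side validation: reject unknown tunables and
# out-of-bounds values before anything reaches the controller.
# All names and bounds below are illustrative assumptions.

TUNABLE_BOUNDS = {
    "day_temp_target_c":   (18.0, 32.0),
    "temp_hysteresis_c":   (0.3, 3.0),
    "mist_seconds_per_hr": (0, 600),
    "vent_open_pct_max":   (0, 100),
}

def validate_plan(plan: dict) -> dict:
    """Return the plan only if every key is known and every value in bounds."""
    accepted = {}
    for key, value in plan.items():
        if key not in TUNABLE_BOUNDS:
            raise KeyError(f"unknown tunable: {key}")
        lo, hi = TUNABLE_BOUNDS[key]
        if not (lo <= value <= hi):
            raise ValueError(f"{key}={value} outside [{lo}, {hi}]")
        accepted[key] = value
    return accepted

plan = {"day_temp_target_c": 26.0, "temp_hysteresis_c": 1.0}
print(validate_plan(plan))
```

The design point is that validation is a separate trust boundary: even a confused planner cannot write a value the dispatcher has no bounds for.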
For the exact parameters Iris can write, see AI-Writable Tunables.
Launch-critical references:
- Safety Architecture: Iris writes tactics; firmware owns relays.
- Known Limits: current physical, sensor, model, and reliability limits.
- Firmware Change Protocol: replay, test, OTA, and rollback gates for controller changes.
Slack is the operator-facing side of the same loop. Iris uses it to explain successful plans, forecast deviations, replacement plans, and watch items in plain English, while Orbit posts human greenhouse tasks such as hydro checks, pest inspection, grow-light checks, and reservoir service. See Slack Operations for the end-to-end view from OpenClaw trigger to Slack brief to public archive.
The Control Split
The crop band defines the desired environment. It changes through the day because plants do not need the same temperature and VPD at night, sunrise, solar peak, and sunset.
Iris decides how hard to chase that band. On mild days, it can tighten control. On hot dry days, it may accept some stress, save water for the worst hours, pre-cool before solar peak, or widen hysteresis to avoid relay churn.
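Why widening hysteresis reduces relay churn can be shown with a toy thermostat. This is a sketch under illustrative numbers, not the controller's actual state machine: a symmetric deadband around the target, with noisy readings oscillating around it.

```python
# Hedged sketch: count heater ON transitions for a thermostat with a
# symmetric deadband. Temperatures and deadband widths are illustrative.

def cycles(temps, target, hysteresis):
    """Count ON transitions; the heater turns on below target - hysteresis
    and off above target + hysteresis."""
    on = False
    switches = 0
    for t in temps:
        if not on and t < target - hysteresis:
            on, switches = True, switches + 1
        elif on and t > target + hysteresis:
            on = False
    return switches

# Sensor noise of +/-0.6 C around a 20 C target:
noisy = [20 + 0.6 * (-1) ** i for i in range(20)]
print(cycles(noisy, 20.0, 0.5), cycles(noisy, 20.0, 1.0))
```

With a 0.5 C band the noise alone flips the relay every reading; widening the band past the noise amplitude stops the churn entirely, at the cost of looser tracking.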
The ESP32 controller owns real-time safety. If the planning layer goes offline, the controller keeps running with the last valid setpoints and hard safety rails.
For the launch-critical safety argument, see Why the AI Does Not Control Relays.
The human interface is separate from that safety boundary: Slack explains plans and tasks, but it does not flip relays. OpenClaw routes reasoning, MCP validates writes, the dispatcher pushes setpoints, and firmware controls physical outputs.
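The control split above can be sketched in a few lines: the controller keeps running on the last valid setpoints when the planner is offline, and hard rails clamp whatever is in effect. Rail values and tunable names are illustrative assumptions; the real firmware logic is documented in Safety Architecture.

```python
# Hedged sketch of the planner-offline posture. Rails and names are
# illustrative assumptions, not the firmware's actual limits.

HARD_RAILS = {"day_temp_target_c": (10.0, 35.0), "vent_open_pct_max": (0, 100)}

def clamp_to_rails(setpoints):
    """Firmware-side clamp applied to whatever setpoints are in effect."""
    out = dict(setpoints)
    for key, (lo, hi) in HARD_RAILS.items():
        if key in out:
            out[key] = min(max(out[key], lo), hi)
    return out

def effective_setpoints(last_valid, planner_online, fresh_plan=None):
    """Planner offline -> keep the last valid plan; rails always apply."""
    plan = fresh_plan if (planner_online and fresh_plan) else last_valid
    return clamp_to_rails(plan)
```

Note that the rails apply unconditionally: even a fresh, validated plan passes through the same clamp, so planner availability never changes the safety envelope.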
What Gets Optimized
Temperature is constrained by solar gain, outside air, heaters, fans, vents, fog, and the concrete slab. The planner is not trying to make a perfect line. It is trying to reduce plant stress without wasting equipment cycles or energy.
VPD is the harder optimization problem. Dry outdoor air can make ventilation and humidity control fight each other. Good planning means choosing when to seal, mist, fog, vent, or accept a short excursion.
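VPD itself is a straightforward calculation even though optimizing it is not. The sketch below uses the common Tetens approximation for saturation vapor pressure; the constants are the standard published ones, but whether the system computes VPD exactly this way is an assumption.

```python
# VPD from air temperature and relative humidity, using the Tetens
# approximation for saturation vapor pressure (standard constants).
import math

def svp_kpa(temp_c):
    """Saturation vapor pressure in kPa (Tetens approximation)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c, rh_pct):
    """Vapor pressure deficit: how much drier the air is than saturation."""
    return svp_kpa(temp_c) * (1.0 - rh_pct / 100.0)

print(round(vpd_kpa(25.0, 60.0), 2))  # warm, moderately humid air
```

This is also why dry outdoor air makes ventilation and humidity control fight: venting at the same temperature but lower humidity raises VPD directly, so cooling by ventilation can push VPD further out of band.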
Learning Loop
Every plan is a hypothesis. The next cycle compares the expected outcome with measured stress hours, compliance, equipment runtime, water, and cost. Useful findings graduate into generated lessons, which Iris reads before future plans.
The important question is not whether a plan exists. The important question is whether measured outcomes moved in the intended direction after the plan. That audit lives in Planning Quality.
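The expected-versus-measured comparison can be sketched as per-metric deltas. The metric names here are illustrative, not the actual scorecard fields (those live in Planning Quality and the Data Model page).

```python
# Hedged sketch of scoring one plan cycle: per-metric deltas between what
# the plan expected and what was measured. Field names are illustrative.

def score_cycle(expected: dict, measured: dict) -> dict:
    """For stress/cost metrics, a negative delta means the plan beat its
    own expectation; positive means it fell short."""
    return {k: round(measured[k] - expected[k], 2) for k in expected}

expected = {"stress_hours": 2.0, "water_l": 40.0, "heater_runtime_h": 1.5}
measured = {"stress_hours": 1.2, "water_l": 46.0, "heater_runtime_h": 1.4}
print(score_cycle(expected, measured))
```

A cycle like this one would read as: less stress than predicted, at the price of more water, which is exactly the kind of finding that can graduate into a lesson.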
References
- Safety Architecture: why Iris writes tactics while the ESP32 owns relays.
- System Architecture: the full ESP32, ingestor, TimescaleDB, MCP, Iris, and publishing loop.
- OpenClaw Configuration: agent IDs, trigger-scoped sessions, audit headers, and delivery contract.
- Slack Operations: how Iris and Orbit present plans, deviations, reminders, and operator work in Slack.
- Local Inference Setup: vLLM, Gemma 4 26B A4B (MoE), served locally under the gemma4-26b alias, routing, prompt budget, and launch capacity posture.
- Planner Context Window and Prompts: generated greenhouse data window, static context builder, event prompts, and audit headers.
- Planning Loop: how forecasts, crop bands, lessons, and waypoints become plans.
- AI-Writable Tunables: exact names, defaults, bounds, and relay/state-machine impact.
- Planning Quality: scorecard, compliance, forecast-vs-plan-vs-actual, and lessons.
- Baseline vs Iris: launch-safe operational comparison of the planner-offline and Iris-online windows.
- Generated Lessons: validated operational lessons.
- Launch FAQ: PID, RL, direct LLM control, VPD physics, and self-correcting claims.
- Related Work: how Verdify compares to maker, research, and commercial systems.
- Build Notes: public-safe reference implementation notes.
- Data Model: database, views, scorecard, and sample exports.
- Known Limits: physical, sensor, and reliability limits.