# Stream output into a scrollback view

`LogView` is a stateless renderer that takes a `Seq[String]` and a
`scrollOffset`. The app owns the buffer and decides whether to
auto-tail (stay pinned to the latest line) or pause (when the user
has scrolled up).

This pattern fits any streaming source: LLM tokens, build output, log
tails, network frames.
## Pattern

- Hold the buffer (`Vector[String]`) plus a `scrollOffset` and an
  `autoTail: Boolean` flag on the model.
- As new lines arrive (via `Cmd.GCmd(NewLine(...))` or `Sub.Every`
  polling), append to the buffer; if `autoTail` is set, bump
  `scrollOffset` to keep the bottom in view.
- ArrowUp / ArrowDown adjust `scrollOffset` and toggle `autoTail`.
## Code

```scala
import termflow.tui.widgets

final case class Model(
    buffer: Vector[String],
    scrollOffset: Int,
    autoTail: Boolean
):
  def append(line: String, viewW: Int, viewH: Int): Model =
    val nextBuf = (buffer :+ line).takeRight(2000) // bound the history
    val maxScr  = widgets.LogView.maxScroll(nextBuf, viewW, viewH, wrap = true)
    val nextScr = if autoTail then maxScr else math.min(scrollOffset, maxScr)
    copy(buffer = nextBuf, scrollOffset = nextScr)

enum Msg:
  case TokenArrived(text: String)
  case Up
  case Down

def update(m: Model, msg: Msg, ctx: RuntimeCtx[Msg]): Tui[Model, Msg] =
  msg match
    case Msg.TokenArrived(t) =>
      m.append(t, viewW = ctx.terminal.width - 4, viewH = 16).tui
    case Msg.Up =>
      m.copy(
        scrollOffset = math.max(0, m.scrollOffset - 1),
        autoTail = false // scrolling up pauses auto-tail
      ).tui
    case Msg.Down =>
      val maxScr = widgets.LogView.maxScroll(m.buffer, ctx.terminal.width - 4, 16, true)
      val next   = math.min(maxScr, m.scrollOffset + 1)
      m.copy(scrollOffset = next, autoTail = next == maxScr).tui

def view(m: Model): RootNode =
  given Theme = Theme.dark
  val viewW = 80 - 4
  val nodes = widgets.LogView(
    lines = m.buffer,
    width = viewW,
    height = 16,
    scrollOffset = m.scrollOffset,
    at = Coord(2.x, 2.y),
    wrap = true
  )
  RootNode(80, 24, children = nodes, input = None)
```
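The clamp in `append` depends on `maxScroll` growing as rendered rows accumulate. A minimal sketch of that math, assuming `maxScroll` is simply the total rendered rows minus the viewport height, floored at zero; the helper names below are illustrative, not the termflow API:

```scala
// How many terminal rows a single logical line occupies when wrapped
// to `width` columns (an empty line still takes one row).
def wrappedRows(line: String, width: Int): Int =
  math.max(1, math.ceil(line.length.toDouble / width).toInt)

// Sketch of the scroll ceiling: total rows minus the viewport height,
// never negative. With wrap = false every line is one (truncated) row.
def maxScroll(lines: Vector[String], width: Int, height: Int, wrap: Boolean): Int =
  val totalRows =
    if wrap then lines.map(wrappedRows(_, width)).sum
    else lines.length
  math.max(0, totalRows - height)
```

Because the floor at zero is built in, the same clamp (`math.min(scrollOffset, maxScroll)`) is safe even while the buffer is shorter than the viewport.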
## Notes

- **Bound the buffer.** `takeRight(2000)` keeps the history finite,
  which matters when streams can run for hours. The actual cap depends
  on your domain.
- **Token vs. line.** If your source emits tokens (LLM-style), keep a
  partial-line string on the side and only append to `buffer` when you
  see a `\n`. Otherwise every token becomes its own row.
- **Auto-tail toggles automatically.** When the user scrolls back to
  the bottom (offset == maxScroll), `Down` flips `autoTail = true`
  again: the convention every chat client uses.
- **Wrap is optional.** Dropping `wrap = true` truncates rather than
  wrapping; the `maxScroll` math still works either way.
- **Mouse-wheel scrollback.** Wire mouse events through
  `LogView.scrollDelta(event, viewport, ticksPerDetent)`. It returns
  `Some(delta)` only when the scroll lands inside the viewport
  rectangle you describe with `LogView.Viewport(at, width, height)`.
  Reuse the same `at`/`width`/`height` you passed to `LogView.apply`,
  and feed the returned delta into the same scroll-update path the
  keyboard uses. Outside-the-viewport events return `None`, so a wheel
  hovering over the prompt or the status row won't accidentally page
  through history. The default `ticksPerDetent = 3` matches the speed
  terminal users expect; pass `1` for one-line-per-detent. Keyboard
  equivalents (↑/↓, PageUp / PageDown, End) keep working when mouse
  reporting is unavailable, so always wire both.
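The token-vs.-line note above can be sketched as a small pure accumulator: tokens land in a partial-line buffer, and only completed lines are committed to the scrollback. The names here are illustrative, not termflow API:

```scala
// Accumulates streamed tokens; `lines` holds completed lines for the
// scrollback, `partial` holds the not-yet-terminated tail.
final case class TokenBuffer(
    lines: Vector[String] = Vector.empty,
    partial: String = ""
):
  def feed(token: String): TokenBuffer =
    // limit = -1 keeps a trailing empty piece, so "ld\n" yields
    // Array("ld", "") and the partial correctly resets to "".
    val pieces = (partial + token).split("\n", -1)
    copy(lines = lines ++ pieces.init, partial = pieces.last)
```

In the app, `Msg.TokenArrived` would run `feed` first and call `Model.append` once per newly completed line, rendering `partial` separately (e.g. on the prompt row) if you want the in-flight text visible.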
For an end-to-end example, see `apps.echo.EchoApp`, which uses a
hand-rolled line column instead of `LogView` but implements the same
scrollback semantics; you can swap to `LogView` in 20 lines.