Implement `lstk logs --follow` and exit by default #51
Conversation
lstk logs --follow and exit by default
📝 Walkthrough

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (cmd/logs.go)
    participant Container as container.Logs
    participant Runtime as Runtime.StreamLogs
    participant Docker as DockerRuntime.StreamLogs
    CLI->>Container: invoke Logs(ctx, rt, sink, follow)
    Container->>Runtime: call StreamLogs(ctx, containerID, writer, follow)
    Runtime->>Docker: delegate StreamLogs(ctx, containerID, writer, follow)
    Docker-->>Runtime: stream logs (Follow option controlled by follow)
    Runtime-->>Container: return when stream ends or error
    Container-->>CLI: emit logs to sink / return error
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers
🚥 Pre-merge checks | ✅ Passed checks (2 passed)
🧹 Nitpick comments (1)
test/integration/logs_test.go (1)
67-68: Consider replacing `time.Sleep` with a readiness signal.

The 500 ms pause is a heuristic — it can cause the marker to be written before `lstk logs --follow` has connected to the Docker log stream, causing the test to hang until the 30 s timeout on a slow CI runner.

A more robust pattern is to scan `lstk logs` output in the goroutine and only write the marker once a line (any line) has been received, confirming the stream is live:

♻️ Proposed refactor

```diff
-	// Give lstk logs a moment to connect before generating output
-	time.Sleep(500 * time.Millisecond)
-
-	// Write to /proc/1/fd/1 (PID 1's stdout) so the line appears in docker logs.
-	execResp, err := dockerClient.ContainerExecCreate(ctx, containerName, container.ExecOptions{
-		Cmd: []string{"sh", "-c", "echo " + marker + " >/proc/1/fd/1"},
-	})
-	require.NoError(t, err, "failed to create exec")
-	err = dockerClient.ContainerExecStart(ctx, execResp.ID, container.ExecStartOptions{Detach: true})
-	require.NoError(t, err, "failed to start exec")
-
-	// Scan lines in a goroutine because reading from the pipe blocks until lstk logs exits
-	found := make(chan struct{}, 1)
-	go func() {
-		scanner := bufio.NewScanner(stdout)
-		for scanner.Scan() {
-			if strings.Contains(scanner.Text(), marker) {
-				found <- struct{}{}
-				return
-			}
-		}
-	}()
+	// connected is closed once at least one log line has been received, confirming
+	// the stream is live before we inject the marker.
+	connected := make(chan struct{})
+	found := make(chan struct{}, 1)
+	go func() {
+		var once sync.Once
+		scanner := bufio.NewScanner(stdout)
+		for scanner.Scan() {
+			once.Do(func() { close(connected) })
+			if strings.Contains(scanner.Text(), marker) {
+				found <- struct{}{}
+				return
+			}
+		}
+	}()
+
+	select {
+	case <-connected:
+	case <-ctx.Done():
+		t.Fatal("lstk logs --follow did not produce any output within timeout")
+	}
+
+	// Write to /proc/1/fd/1 (PID 1's stdout) so the line appears in docker logs.
+	execResp, err := dockerClient.ContainerExecCreate(ctx, containerName, container.ExecOptions{
+		Cmd: []string{"sh", "-c", "echo " + marker + " >/proc/1/fd/1"},
+	})
+	require.NoError(t, err, "failed to create exec")
+	err = dockerClient.ContainerExecStart(ctx, execResp.ID, container.ExecStartOptions{Detach: true})
+	require.NoError(t, err, "failed to start exec")
```

Add `"sync"` to the imports accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/logs_test.go` around lines 67 - 68, Replace the heuristic time.Sleep call with a readiness signal: when launching the goroutine that runs the "lstk logs --follow" command, have that goroutine scan stdout/stderr and send a message on a channel (or close a ready sync.WaitGroup) as soon as it reads the first line; then only write the marker after receiving that readiness signal instead of calling time.Sleep(500 * time.Millisecond). Update imports (add sync or use a chan struct{}) and modify the code that currently calls time.Sleep to wait on the readiness channel/WaitGroup before writing the marker so the test only proceeds once the log stream is confirmed live.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/integration/logs_test.go`:
- Around line 67-68: Replace the heuristic time.Sleep call with a readiness
signal: when launching the goroutine that runs the "lstk logs --follow" command,
have that goroutine scan stdout/stderr and send a message on a channel (or close
a ready sync.WaitGroup) as soon as it reads the first line; then only write the
marker after receiving that readiness signal instead of calling time.Sleep(500 *
time.Millisecond). Update imports (add sync or use a chan struct{}) and modify
the code that currently calls time.Sleep to wait on the readiness
channel/WaitGroup before writing the marker so the test only proceeds once the
log stream is confirmed live.
ℹ️ Review info
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cmd/logs.go
- internal/container/logs.go
- internal/runtime/docker.go
- internal/runtime/runtime.go
- test/integration/logs_test.go
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/integration/logs_test.go (1)
59-93: ⚠️ Potential issue | 🟡 Minor

`time.Sleep(500ms)` is a fragile synchronization point — consider retrying the marker injection.

If `lstk logs --follow` hasn't finished subscribing to the Docker log stream before the exec writes the marker on line 72, the line is emitted before the scanner is active and the test times out after 30 s. On a loaded CI runner 500 ms is often not enough.

A more robust pattern is to inject the marker in a short retry loop rather than assuming a fixed delay:

♻️ Suggested approach

```diff
-// Give lstk logs a moment to connect before generating output
-time.Sleep(500 * time.Millisecond)
-
-// Write to /proc/1/fd/1 (PID 1's stdout) so the line appears in docker logs.
-execResp, err := dockerClient.ContainerExecCreate(ctx, containerName, container.ExecOptions{
-	Cmd: []string{"sh", "-c", "echo " + marker + " >/proc/1/fd/1"},
-})
-require.NoError(t, err, "failed to create exec")
-err = dockerClient.ContainerExecStart(ctx, execResp.ID, container.ExecStartOptions{Detach: true})
-require.NoError(t, err, "failed to start exec")
+// Inject the marker repeatedly so lstk logs catches it regardless of when it connects.
+go func() {
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-found:
+			return
+		case <-time.After(300 * time.Millisecond):
+			execResp, err := dockerClient.ContainerExecCreate(ctx, containerName, container.ExecOptions{
+				Cmd: []string{"sh", "-c", "echo " + marker + " >/proc/1/fd/1"},
+			})
+			if err != nil {
+				continue
+			}
+			_ = dockerClient.ContainerExecStart(ctx, execResp.ID, container.ExecStartOptions{Detach: true})
+		}
+	}
+}()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/logs_test.go` around lines 59 - 93, The fixed sleep is fragile; replace the static time.Sleep(500*time.Millisecond) with a short retry loop that repeatedly performs the marker injection (the dockerClient.ContainerExecCreate + dockerClient.ContainerExecStart sequence that echoes marker into /proc/1/fd/1) until the goroutine scanning logs (the bufio.NewScanner on stdout and the found channel) observes the marker or the test context (ctx) times out; keep the scanner goroutine as-is, use ctx.Done() or a deadline to stop retries, add a small backoff between attempts (e.g., 100–200ms), and ensure each retry returns any errors to require.NoError only when you give up so the test reliably waits for logsCmd to subscribe before emitting the marker.
🧹 Nitpick comments (1)
internal/container/logs.go (1)
14-41: `pr` is never explicitly closed — consider a `defer`.

`pw` is closed via `CloseWithError` in the goroutine, but `pr` (the read end) has no `defer pr.Close()`. In-memory pipes carry no OS descriptor, so there's no hard leak, but closing it signals to any pending `scanner.Scan()` that the pipe is done and is safer if the scanner loop exits early (e.g. via context cancellation before the goroutine closes `pw`).

♻️ Suggested fix

```diff
 pr, pw := io.Pipe()
+defer pr.Close()
 errCh := make(chan error, 1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/container/logs.go` around lines 14 - 41, In Logs, the read end of the in-memory pipe (pr) is never closed which can leave the scanner blocked on cancellation; add an explicit defer pr.Close() immediately after creating pr, pw in the Logs function so pr is always closed when Logs returns (keep the existing pw.CloseWithError(err) in the goroutine). This ensures the bufio.Scanner in Logs sees EOF on early exits (e.g., context cancellation) and prevents the scanner from hanging; reference the pr/pw variables, the goroutine calling rt.StreamLogs, and the scanner loop in Logs when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@test/integration/logs_test.go`:
- Around line 59-93: The fixed sleep is fragile; replace the static
time.Sleep(500*time.Millisecond) with a short retry loop that repeatedly
performs the marker injection (the dockerClient.ContainerExecCreate +
dockerClient.ContainerExecStart sequence that echoes marker into /proc/1/fd/1)
until the goroutine scanning logs (the bufio.NewScanner on stdout and the found
channel) observes the marker or the test context (ctx) times out; keep the
scanner goroutine as-is, use ctx.Done() or a deadline to stop retries, add a
small backoff between attempts (e.g., 100–200ms), and ensure each retry returns
any errors to require.NoError only when you give up so the test reliably waits
for logsCmd to subscribe before emitting the marker.
---
Nitpick comments:
In `@internal/container/logs.go`:
- Around line 14-41: In Logs, the read end of the in-memory pipe (pr) is never
closed which can leave the scanner blocked on cancellation; add an explicit
defer pr.Close() immediately after creating pr, pw in the Logs function so pr is
always closed when Logs returns (keep the existing pw.CloseWithError(err) in the
goroutine). This ensures the bufio.Scanner in Logs sees EOF on early exits
(e.g., context cancellation) and prevents the scanner from hanging; reference
the pr/pw variables, the goroutine calling rt.StreamLogs, and the scanner loop
in Logs when making the change.
ℹ️ Review info
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cmd/logs.go
- internal/container/logs.go
- internal/runtime/docker.go
- internal/runtime/runtime.go
- test/integration/logs_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- cmd/logs.go
- internal/runtime/runtime.go
silv-io
left a comment
LGTM! Congrats on the first PR! :)
carole-lavillonniere
left a comment
This looks good! 👏
Implement `lstk logs --follow`, which should stream the logs. Exit by default on `lstk logs`.

Resolves DRG-537