
Conversation

@zh1C (Contributor) commented on Nov 20, 2025:

Hey @joshmedeski, I want to express my gratitude for this tool! Your t-smart-tmux-session-manager has been part of my workflow for a long time. Today I discovered sesh and was immediately intrigued.

After diving into the codebase, I noticed a potential performance improvement in the session listing logic. Although I haven't performed rigorous benchmarks, I believe this change should be effective and I'd love to contribute to the project.
Changes:

  • Parallel Execution: The List method in lister/list.go now queries different session sources (Tmux, Zoxide, Config, Tmuxinator) concurrently using goroutines.

  • Order Preservation: While the queries run in parallel, the final results are reassembled according to the user's configured sort order, ensuring the UI behavior remains consistent.

  • Error Handling: Any error returned by a source strategy is properly captured and propagated, maintaining the original error-handling behavior (fail-fast).

This should significantly reduce the latency of sesh list, especially when one of the sources (like a large Zoxide database or a slow Tmux query) takes longer to respond, as the total time will now be determined by the slowest source rather than the sum of all sources.
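
For reference, the overall pattern is roughly the following minimal, self-contained sketch. The names and the plain string slices are placeholders; the real code in lister/list.go works with model.SeshSessions and the existing source strategies:

package main

import (
    "fmt"
    "sync"
)

// result pairs a source name with whatever that source produced, or an error.
type result struct {
    source   string
    sessions []string
    err      error
}

// listAll fans out one goroutine per source, then reassembles the results
// in the caller-supplied order so the output stays deterministic.
func listAll(order []string, strategies map[string]func() ([]string, error)) ([]string, error) {
    var wg sync.WaitGroup
    // Buffered to len(order) so no goroutine ever blocks on send.
    resultsChan := make(chan result, len(order))

    for _, src := range order {
        wg.Add(1)
        go func(s string) {
            defer wg.Done()
            sessions, err := strategies[s]()
            resultsChan <- result{source: s, sessions: sessions, err: err}
        }(src)
    }
    wg.Wait()
    close(resultsChan)

    // Collect per source, propagating the first error encountered (fail-fast).
    bySource := make(map[string][]string, len(order))
    for r := range resultsChan {
        if r.err != nil {
            return nil, r.err
        }
        bySource[r.source] = r.sessions
    }

    // Reassemble in the user-configured sort order.
    var out []string
    for _, src := range order {
        out = append(out, bySource[src]...)
    }
    return out, nil
}

func main() {
    strategies := map[string]func() ([]string, error){
        "tmux":   func() ([]string, error) { return []string{"dotfiles"}, nil },
        "zoxide": func() ([]string, error) { return []string{"~/code/sesh"}, nil },
    }
    sessions, err := listAll([]string{"tmux", "zoxide"}, strategies)
    fmt.Println(sessions, err)
}

Because the channel has len(order) slots, wg.Wait() can run before the channel is drained without any goroutine blocking on send.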

@zh1C force-pushed the feature/list-use-chan branch from 7b6c718 to b0d22d7 on November 20, 2025 at 11:54.
@joshmedeski (Owner) commented:

Thanks, glad you like sesh!

This is great, I'll make sure to do some testing to ensure it's good to go.

Do you know how to run benchmarks? It would be interesting to get some numbers on before and after, especially for larger datasets.
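
For example, even a rough comparison with Go's built-in go test -bench would be telling. A minimal, self-contained sketch (the four sources below are simulated with sleeps; they are placeholders, not sesh's real sources):

package lister_test

import (
    "testing"
    "time"
)

// simulateSource stands in for one session source (tmux, zoxide, ...);
// the sleep models external-command latency.
func simulateSource(d time.Duration) []string {
    time.Sleep(d)
    return []string{"session"}
}

// Query four fake sources one after another.
func BenchmarkSequentialList(b *testing.B) {
    for i := 0; i < b.N; i++ {
        for j := 0; j < 4; j++ {
            simulateSource(time.Millisecond)
        }
    }
}

// Query the same four fake sources concurrently.
func BenchmarkParallelList(b *testing.B) {
    for i := 0; i < b.N; i++ {
        done := make(chan struct{}, 4)
        for j := 0; j < 4; j++ {
            go func() {
                simulateSource(time.Millisecond)
                done <- struct{}{}
            }()
        }
        for j := 0; j < 4; j++ {
            <-done
        }
    }
}

Running go test -bench=. on a file like that and comparing the ns/op columns would show the sequential version scaling with the sum of the latencies and the parallel one with the maximum.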

Copilot AI left a comment:

Pull Request Overview

This PR introduces parallel execution to the session listing functionality in sesh, improving performance by querying different session sources (Tmux, Zoxide, Config, Tmuxinator) concurrently rather than sequentially. This optimization is particularly beneficial when one source has high latency, as the total execution time becomes bounded by the slowest source rather than the sum of all sources.

Key Changes:

  • Converted sequential session source queries to concurrent execution using goroutines
  • Implemented result collection via channels while preserving the user-configured sort order
  • Added proper synchronization with sync.WaitGroup and buffered channels


Comment on lines 47 to +54
 for _, src := range srcsOrderedIndex {
-    sessions, err := srcStrategies[src](l)
-    if err != nil {
-        return model.SeshSessions{}, err
-    }
+    wg.Add(1)
+    go func(s string) {
+        defer wg.Done()
+        sessions, err := srcStrategies[s](l)
+        resultsChan <- strategyResult{source: s, sessions: sessions, err: err}
+    }(src)
 }
Copilot AI commented on Nov 20, 2025:

The concurrent execution of strategies may cause race conditions in tests that use testify/mock objects. The testify/mock library is not thread-safe by default, as it maintains internal state for tracking method calls and expectations. When multiple goroutines call methods on the same mock object concurrently (as they will with l.tmux, l.zoxide, l.home, and l.tmuxinator), this can lead to data races.

Consider one of the following solutions:

  1. Protect mock method calls with a mutex in the test mocks
  2. Use a thread-safe mocking library
  3. Document that tests should be run without the -race flag (not recommended)
  4. Ensure real implementations are thread-safe and add integration tests

The production code itself appears safe since the real implementations only read configuration or call external commands, but the test suite may exhibit race conditions.

@joshmedeski (Owner) commented:

I tested it manually and it worked well! But I'd like to make sure we acknowledge any potential issues with testing. Can you clarify this? Then I'm happy to merge the PR.

@zh1C (Contributor, Author) commented on Nov 21, 2025:

Hey @joshmedeski, thanks for catching that. I initially didn't fully consider the impact of concurrent execution on the testify/mock objects.

I want to share some findings:

Current test status:

  • The tests are currently run with the -race flag enabled (exact command below), and they all pass without any race-condition warnings.
  • This suggests that the concurrent execution is working correctly with the current mockery-generated mock_Home.go.
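
Concretely, the command I've been running from the repo root (standard Go toolchain) is:

go test -race ./...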

About testify/mock thread safety:
I'm not deeply experienced with mockery, so I wasn't initially certain about its thread-safety guarantees. However, after checking the source code, I found that testify/mock's Mock struct does include a sync.Mutex field.
I looked into the testify repository and found several related issues about concurrency (#1340, #1020, #1128). After reviewing them and our code, I believe our test doesn't have these issues:

  • Issue #1340 warns against calling assert/require from multiple goroutines. In our test, all assertions are called only in the main test goroutine after List() completes.
  • Issue #1020 is about mutable arguments affecting mock recordings. We only use immutable string parameters.
  • Issue #1128 discusses unsafe direct access to mock.Calls field. We only use the safe methods like AssertExpectations(), which are protected by testify's internal mutex.

That said:
If you have evidence or experience showing that testify/mock is not thread-safe in concurrent scenarios, or if you've seen race conditions that the race detector might have missed, I'd be happy to implement a thread-safe FakeHome wrapper:

type FakeHome struct {
    mu          sync.Mutex
    shortenHome map[string]string
    expandHome  map[string]string
}

func (h *FakeHome) ShortenHome(path string) (string, error) {
    h.mu.Lock()
    defer h.mu.Unlock()
    if shortened, ok := h.shortenHome[path]; ok {
        return shortened, nil
    }
    return path, nil
}

// ... similar for ExpandHome

This would provide explicit concurrency protection and remove any dependency on the framework's internal implementation.

Let me know what you think – I'm open to either approach!

@joshmedeski (Owner) commented:

I don't think that kind of work is necessary right now; I'm okay with merging as-is. But thanks for the research and suggestions, we can revisit this if it starts becoming an issue.

@joshmedeski merged commit 6487292 into joshmedeski:main on Nov 21, 2025.
10 checks passed