Conversation
@Parvm1102 Parvm1102 commented Dec 18, 2025

Closes #170

📝 Description

This pull request deals with multiple things, from fixing the initial setup to actually getting the project working:

  • The Dashboard, Contributors and Pull Requests pages are now fully functional and show real data. The Analytics page also works, but it may still contain some hard-coded data that was present earlier in the code; I will fix that in a future pull request. This is a significant change, as previously even the Analyze Repository button wasn't working.
  • There were a lot of issues while starting the project: the steps in INSTALL_GUIDE.md were unclear and incomplete. After running the project myself, I have documented everything a contributor might face while setting up the project.

🔧 Changes Made

Features

  • Dashboard Data Service: Implemented GitHubRepoStatsService to fetch comprehensive repository statistics (contributors, pull requests, etc.) directly from the GitHub API. The pages now show real data.
  • Repo Stats Endpoint: Added POST /api/repo-stats endpoint to serve rich repository data to the frontend dashboard.
  • GraphDB Integration: Updated RepoService to gracefully handle already-indexed repositories, returning existing graph data instead of an error. Previously the Analyze Repository button always threw an error; this is fixed by the new API endpoints.
  • Migrations: Added 02_create_indexed_repositories.sql to create the indexed_repositories table in Supabase for tracking analyzed repositories.

Fixes

  • Installation Guide: Updated INSTALL_GUIDE.md with clearer environment variable setup instructions for Root, Backend and Frontend (creating the .env files in the backend and frontend folders wasn't mentioned earlier), specific steps for running the GraphDB service (Python 3.11 requirement), and instructions for executing Supabase SQL migrations. The lack of proper setup instructions previously made understanding the project very time-consuming; now that everything is documented, setup will be easier for future contributors.
  • Router: Registered the new repo_stats router in the main API application.
  • Gitignore: Updated the .gitignore files to prevent indexed repositories from being pushed.

Note: You might face a conflict between py-cord and discord.py while starting the project. It existed already and I didn't change it, because it worked for me. If the conflict occurs while you run the project, remove the py-cord dependency from pyproject.toml.

📷 Screenshots or Visual Changes

Dashboard

dashboard

Contributors

contributors

Analytics

Analytics

Pull Requests

Pull_Requests

✅ Checklist

  • I have read the contributing guidelines.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added necessary documentation (if applicable).
  • Any dependent changes have been merged and published in downstream modules.

Summary by CodeRabbit

  • New Features

    • Added a repository analysis endpoint that returns comprehensive GitHub stats (contributors, PRs, issues, commit activity, languages, releases).
    • New repository statistics service powering the analysis and an API route to access it.
    • New repository-indexing table to track indexing status, counts, timestamps, and soft-deletes.
  • Bug Fixes / Behavior

    • Indexing responses now return explicit success or pending statuses instead of erroring for already-processed repos.
  • Documentation

    • Installation guide updated with environment setup and migration steps.
  • Chores

    • Added ignore paths for repository metadata.


Signed-off-by: Parvm1102 <parvmittal31757@gmail.com>

coderabbitai bot commented Dec 18, 2025

📝 Walkthrough

Walkthrough

Adds a new /repo-stats POST API, a GitHubRepoStatsService for async GitHub data aggregation, adjusts RepoService indexing responses, introduces an indexed_repositories DB migration, and updates .gitignore and installation docs/environment steps.

Changes

Cohort / File(s) Summary
API: Repo stats endpoint
backend/app/api/v1/repo_stats.py, backend/app/api/router.py
New POST /repo-stats route, Pydantic request/response models, repo URL parsing, request handling, error mapping; router registration under /api with tag "Repository Stats".
Service: GitHub stats aggregator
backend/app/services/github/repo_stats.py
New GitHubRepoStatsService async context manager that calls GitHub API (repo info, contributors, PRs, issues, commits, languages, releases) and exposes get_comprehensive_stats aggregating metrics.
Service: CodeGraph indexing behavior
backend/app/services/codegraph/repo_service.py
Adjusted index_repo control flow: return structured success when status is completed, and explicit pending response when pending instead of error.
Database: Indexed repositories migration
backend/database/02_create_indexed_repositories.sql
New indexed_repositories table with UUID id, repository_full_name, graph_name, indexing_status (pending/completed/failed), soft-delete flag, counts, indexes, UNIQUE constraint when not deleted, and updated_at trigger.
Config & docs
.gitignore, backend/app/database/falkor/code-graph-backend/.gitignore, docs/INSTALL_GUIDE.md
Adds an ignore entry for the repositories/ path, fixes a missing trailing newline, and updates the install guide with env file locations, Python 3.11 backend steps, and Supabase migrations instructions.
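The adjusted index_repo behavior described above can be reduced to a small sketch. The in-memory store and helper names here are hypothetical; only the status values (pending/completed) come from the migration's CHECK constraint:

```python
# Hypothetical sketch of idempotent indexing: return existing graph data
# for completed repos and a pending status instead of raising an error.
def index_repo(store: dict, repo_full_name: str) -> dict:
    record = store.get(repo_full_name)
    if record is None:
        store[repo_full_name] = {"status": "pending"}
        return {"status": "pending", "message": "Indexing started"}
    if record["status"] == "completed":
        # Previously this path raised; now it returns the existing data.
        return {"status": "success", "graph": record["graph"]}
    return {"status": "pending", "message": "Indexing in progress"}
```

Calling the function twice for the same repository is safe, which is what makes the endpoint idempotent.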

Sequence Diagram(s)

sequenceDiagram
  autonumber
  rect rgb(240,250,255)
    participant Client as Frontend
    participant API as Backend API (/api/repo-stats)
    participant Service as GitHubRepoStatsService
    participant DB as Supabase/Postgres (indexed_repositories)
  end

  Client->>API: POST /api/repo-stats { repo_url }
  API->>API: parse_repo_url(repo_url)
  API->>Service: async with GitHubRepoStatsService -> get_comprehensive_stats(owner, repo)
  Service-->>API: comprehensive stats JSON
  alt repo indexing tracked
    API->>DB: insert/update indexed_repositories (status, counts, graph_name)
    DB-->>API: OK
  end
  API-->>Client: 200 RepoStatsResponse (metrics, contributors, PRs, issues, releases)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested reviewers

  • smokeyScraper

Poem

🐰 I hop on keys and fetch the stars,
I chase the repo through PRs and jars.
Async I dig and metrics I bring,
Now dashboards hum — let data sing!
thump-thump

Pre-merge checks and finishing touches

✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: Enable real-time dashboard statistics and fix developer setup' accurately captures the main change—adding dashboard stats via GitHub API service and improving setup documentation.
Linked Issues check ✅ Passed All objectives from issue #170 are met: backend /api/repo-stats endpoint created, GitHubRepoStatsService implemented, RepoService updated for existing repos, indexed_repositories table added, and setup docs updated.
Out of Scope Changes check ✅ Passed All changes are scoped to issue #170: API endpoint, GitHub stats service, repo indexing flow, database schema, and developer setup docs. No unrelated modifications detected.
Docstring Coverage ✅ Passed Docstring coverage is 93.33% which is sufficient. The required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
backend/app/api/v1/repo_stats.py (2)

127-145: Code duplication with RepoService._parse_repo_url.

This function duplicates the parsing logic in backend/app/services/codegraph/repo_service.py. Consider extracting to a shared utility module to maintain DRY principle.
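For reference, the kind of parser worth extracting into a shared utility might look like the following. This is a sketch, not the project's actual implementation:

```python
from urllib.parse import urlparse

def parse_repo_url(repo_url: str) -> tuple[str, str]:
    """Extract (owner, repo) from a GitHub repository URL."""
    path = urlparse(repo_url).path.strip("/")
    parts = path.split("/")
    if len(parts) < 2 or not parts[0] or not parts[1]:
        raise ValueError(f"Invalid GitHub repository URL: {repo_url}")
    # Drop a trailing .git suffix if present.
    return parts[0], parts[1].removesuffix(".git")
```

Both the endpoint and RepoService could then import this one function instead of maintaining two copies.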


160-185: Use exception chaining for better debugging.

When re-raising exceptions as HTTPException, use from to preserve the exception chain. This aids debugging by showing the original cause.

Apply this fix:
         try:
             owner, repo = parse_repo_url(request.repo_url)
         except ValueError as e:
-            raise HTTPException(status_code=400, detail=str(e))
+            raise HTTPException(status_code=400, detail=str(e)) from e
         
         logger.info(f"Fetching stats for {owner}/{repo}")
         
         # Fetch comprehensive stats from GitHub
         async with GitHubRepoStatsService() as stats_service:
             result = await stats_service.get_comprehensive_stats(owner, repo)
         
         logger.info(f"Successfully fetched stats for {owner}/{repo}")
         
         return result
         
     except HTTPException:
         raise
     except ValueError as e:
-        logger.error(f"Value error: {e}")
-        raise HTTPException(status_code=404, detail=str(e))
+        logger.exception(f"Value error: {e}")
+        raise HTTPException(status_code=404, detail=str(e)) from e
     except Exception as e:
         logger.error(f"Error analyzing repository: {e}", exc_info=True)
         raise HTTPException(
             status_code=500,
-            detail=f"Failed to analyze repository: {str(e)}"
-        )
+            detail=f"Failed to analyze repository: {e!s}"
+        ) from e
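The effect of `raise ... from e` can be seen in isolation: the original error survives as `__cause__`, so tracebacks show both failures. This is a generic example, not project code:

```python
class HTTPError(Exception):
    """Stand-in for HTTPException in this sketch."""

def handle(raw: str) -> int:
    try:
        return int(raw)
    except ValueError as e:
        # Chaining preserves the original ValueError as __cause__,
        # so "The above exception was the direct cause of..." appears
        # in the traceback instead of "During handling of...".
        raise HTTPError("400: invalid input") from e
```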
backend/app/services/github/repo_stats.py (3)

58-69: Rate limit handling could be more informative.

When rate limited (403), the method silently returns None, making it indistinguishable from a genuine 404. Consider raising a specific exception or returning a structured response so callers can handle rate limiting appropriately.

Also, per static analysis hint, use logger.exception() instead of logger.error() to automatically include stack traces.

Suggested improvement:
                 elif response.status == 403:
-                    logger.error(f"GitHub API rate limit exceeded: {url}")
-                    return None
+                    logger.warning(f"GitHub API rate limit exceeded: {url}")
+                    raise Exception("GitHub API rate limit exceeded")
                 else:
                     logger.error(f"GitHub API error {response.status}: {url}")
                     return None
         except asyncio.TimeoutError:
             logger.error(f"Timeout accessing GitHub API: {url}")
             return None
         except Exception as e:
-            logger.error(f"Error making request to {url}: {str(e)}")
+            logger.exception(f"Error making request to {url}: {e!s}")
             return None

315-317: Use logger.exception for automatic stack traces.

Per static analysis, replace logger.error with logger.exception to automatically capture and log the full stack trace.

Apply this fix:
         except Exception as e:
-            logger.error(f"Error fetching comprehensive stats for {owner}/{repo}: {str(e)}")
+            logger.exception(f"Error fetching comprehensive stats for {owner}/{repo}: {e!s}")
             raise

49-49: Use explicit Optional type hint.

Per PEP 484, avoid implicit Optional. Use Dict | None or Optional[Dict].

Apply this fix:
-    async def _make_request(self, url: str, params: Dict = None) -> Optional[Any]:
+    async def _make_request(self, url: str, params: Dict | None = None) -> Optional[Any]:
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8eeacad and 21a0463.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • .gitignore (1 hunks)
  • backend/app/api/router.py (2 hunks)
  • backend/app/api/v1/repo_stats.py (1 hunks)
  • backend/app/database/falkor/code-graph-backend/.gitignore (1 hunks)
  • backend/app/services/codegraph/repo_service.py (1 hunks)
  • backend/app/services/github/repo_stats.py (1 hunks)
  • backend/database/02_create_indexed_repositories.sql (1 hunks)
  • docs/INSTALL_GUIDE.md (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/api/v1/repo_stats.py (1)
backend/app/services/github/repo_stats.py (2)
  • GitHubRepoStatsService (16-317)
  • get_comprehensive_stats (231-317)
backend/app/services/github/repo_stats.py (1)
backend/app/core/orchestration/queue_manager.py (1)
  • connect (34-45)
🪛 Ruff (0.14.8)
backend/app/api/v1/repo_stats.py

142-145: Avoid specifying long messages outside the exception class

(TRY003)


163-163: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


173-173: Consider moving this statement to an else block

(TRY300)


178-178: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


179-179: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


182-185: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


184-184: Use explicit conversion flag

Replace with conversion flag

(RUF010)

backend/app/services/github/repo_stats.py

23-23: Avoid specifying long messages outside the exception class

(TRY003)


49-49: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


65-65: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


67-67: Do not catch blind exception: Exception

(BLE001)


68-68: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


68-68: Use explicit conversion flag

Replace with conversion flag

(RUF010)


275-275: Abstract raise to an inner function

(TRY301)


275-275: Avoid specifying long messages outside the exception class

(TRY003)


316-316: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


316-316: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🔇 Additional comments (9)
.gitignore (1)

177-178: LGTM!

The addition of the repositories/ directory to .gitignore correctly prevents indexed repository data from being committed, which aligns with the new repository indexing functionality.

backend/app/database/falkor/code-graph-backend/.gitignore (1)

45-45: LGTM!

Local .gitignore entry complements the root-level ignore rule for defense-in-depth.

backend/app/services/codegraph/repo_service.py (1)

66-82: LGTM!

The updated handling for already-indexed repositories is a good improvement. Returning existing graph data for completed status and providing progress info for pending status improves the user experience and makes the API idempotent.

docs/INSTALL_GUIDE.md (1)

183-195: LGTM!

Clear step-by-step instructions for running Supabase migrations. The documentation properly guides developers through the required database setup.

backend/app/api/v1/repo_stats.py (1)

16-125: LGTM!

Well-structured Pydantic models with appropriate type hints and default values. The nested structure properly captures the complexity of GitHub repository statistics.

backend/database/02_create_indexed_repositories.sql (2)

1-24: LGTM!

Well-designed schema with:

  • UUID primary key for distributed safety
  • CHECK constraint for valid status values
  • Partial unique constraint preventing duplicate active entries
  • Appropriate indexes for expected query patterns
  • Soft delete support

26-30: No action required. The update_updated_at_column() function is properly defined in 01_create_integration_tables.sql and will exist before this trigger is created.

backend/app/services/github/repo_stats.py (2)

21-31: LGTM!

Good defensive programming by validating the GitHub token on initialization, preventing silent failures later.


231-249: LGTM!

Excellent use of asyncio.gather with return_exceptions=True for concurrent fetching with graceful degradation. This pattern ensures the endpoint can return partial data even if some GitHub API calls fail.

Comment on lines 27 to 31
api_router.include_router(
repo_stats_router,
prefix="/api",
tags=["Repository Stats"]
)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Search for main.py and check how api_router is mounted
find . -name "main.py" -type f | head -5

Repository: AOSSIE-Org/Devr.AI

Length of output: 79


🏁 Script executed:

# Search for router.py in backend/app/api and examine its content
find . -path "*/backend/app/api/router.py" -o -path "*/backend/app/main.py" | xargs ls -la 2>/dev/null

Repository: AOSSIE-Org/Devr.AI

Length of output: 137


🏁 Script executed:

# Search for all include_router calls to understand the prefix pattern
rg "include_router" --type py -A 3 -B 1

Repository: AOSSIE-Org/Devr.AI

Length of output: 1125


🏁 Script executed:

#!/bin/bash
# Look for main.py and router.py
echo "=== Looking for main.py and router.py ==="
fd -t f "main.py|router.py" backend 2>/dev/null || fd -t f "main.py|router.py" . --max-depth 4 2>/dev/null | head -20

echo ""
echo "=== Searching for api_router mentions ==="
rg "api_router" --type py -B 2 -A 2 2>/dev/null | head -100

Repository: AOSSIE-Org/Devr.AI

Length of output: 1990


🏁 Script executed:

find . -type f -name "main.py" -o -name "router.py" | grep -E "(backend|app)" | head -10

Repository: AOSSIE-Org/Devr.AI

Length of output: 107


🏁 Script executed:

rg "include_router.*repo_stats" -B 3 -A 3

Repository: AOSSIE-Org/Devr.AI

Length of output: 44


🏁 Script executed:

rg "app.include_router\(api_router" -B 2 -A 2

Repository: AOSSIE-Org/Devr.AI

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Find files and search for router configurations
git ls-files | grep -E "(main\.py|router\.py)" | head -20

Repository: AOSSIE-Org/Devr.AI

Length of output: 222


🏁 Script executed:

#!/bin/bash
# Search for repo_stats_router and how api_router is used
rg -l "repo_stats_router|api_router" --type py | head -10

Repository: AOSSIE-Org/Devr.AI

Length of output: 131


🏁 Script executed:

rg "repo_stats_router" --type py -B 5 -A 5

Repository: AOSSIE-Org/Devr.AI

Length of output: 1102


🏁 Script executed:

cat backend/app/api/v1/repo_stats.py

Repository: AOSSIE-Org/Devr.AI

Length of output: 4845


🏁 Script executed:

# Also check backend/main.py to see the full mounting context
cat backend/main.py | head -30

Repository: AOSSIE-Org/Devr.AI

Length of output: 989


🏁 Script executed:

#!/bin/bash
# Get router.py to see exact configuration
cat backend/app/api/router.py

Repository: AOSSIE-Org/Devr.AI

Length of output: 746


🏁 Script executed:

# Get main.py relevant lines
head -20 backend/main.py

Repository: AOSSIE-Org/Devr.AI

Length of output: 749


Inconsistent prefix pattern breaks versioning consistency.

The repo_stats_router uses prefix /api while all other routers use /v1/... prefixes (/v1/auth, /v1, /v1/integrations). This creates an inconsistent endpoint structure where repository stats will be at /api/repo-stats instead of following the /v1 versioning pattern.

Consider using a consistent prefix:

 api_router.include_router(
     repo_stats_router,
-    prefix="/api",
+    prefix="/v1/repo-stats",
     tags=["Repository Stats"]
 )
🤖 Prompt for AI Agents
In backend/app/api/router.py around lines 27 to 31, the repo_stats_router is
included with prefix "/api" which breaks the v1 versioning pattern; change the
include_router call to use the consistent v1 prefix (for example
prefix="/v1/repo-stats" or prefix="/v1" depending on how repo_stats_router
defines its internal routes) so endpoints align with /v1/...; update any
references/tests/docs expecting /api/repo-stats to the new /v1 path.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
backend/app/services/github/repo_stats.py (2)

64-69: Use logging.exception for better error diagnostics.

The error handling correctly returns None for resilience, but using logging.exception instead of logging.error will automatically include stack traces, which aids debugging.

🔎 Proposed fix
-        except asyncio.TimeoutError:
-            logger.error(f"Timeout accessing GitHub API: {url}")
-            return None
-        except Exception as e:
-            logger.error(f"Error making request to {url}: {str(e)}")
-            return None
+        except asyncio.TimeoutError:
+            logger.exception(f"Timeout accessing GitHub API: {url}")
+            return None
+        except Exception as e:
+            logger.exception(f"Error making request to {url}")
+            return None

315-317: Simplify logging.exception call.

The exception object is redundant in logging.exception as it automatically includes exception info, and str(e) loses some detail.

🔎 Proposed fix
-        except Exception as e:
-            logger.exception(f"Error fetching comprehensive stats for {owner}/{repo}: {str(e)}")
+        except Exception:
+            logger.exception(f"Error fetching comprehensive stats for {owner}/{repo}")
             raise
backend/app/api/v1/repo_stats.py (1)

175-185: Simplify logging.exception calls.

The exception object is automatically included by logging.exception, making explicit references redundant. Also, avoid str(e) as it may lose detail.

🔎 Proposed fix
     except HTTPException:
         raise
     except ValueError as e:
-        logger.exception(f"Value error: {e}")
+        logger.exception("Value error")
         raise HTTPException(status_code=404, detail=str(e)) from e
     except Exception as e:
-        logger.exception(f"Error analyzing repository: {e}")
+        logger.exception("Error analyzing repository")
         raise HTTPException(
             status_code=500,
-            detail=f"Failed to analyze repository: {str(e)}"
+            detail="Failed to analyze repository"
         ) from e

Note: For the HTTPException detail in line 184, keeping str(e) may be acceptable if you want to expose error details to the client, but consider whether internal error messages should be exposed.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 21a0463 and c611aca.

📒 Files selected for processing (4)
  • backend/app/api/router.py
  • backend/app/api/v1/repo_stats.py
  • backend/app/services/github/repo_stats.py
  • docs/INSTALL_GUIDE.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • backend/app/api/router.py
  • docs/INSTALL_GUIDE.md
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/api/v1/repo_stats.py (2)
backend/app/services/github/repo_stats.py (2)
  • GitHubRepoStatsService (16-317)
  • get_comprehensive_stats (231-317)
backend/app/services/github/user/profiling.py (1)
  • request (77-79)
🪛 Ruff (0.14.10)
backend/app/services/github/repo_stats.py

23-23: Avoid specifying long messages outside the exception class

(TRY003)


65-65: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


67-67: Do not catch blind exception: Exception

(BLE001)


68-68: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


68-68: Use explicit conversion flag

Replace with conversion flag

(RUF010)


275-275: Abstract raise to an inner function

(TRY301)


275-275: Avoid specifying long messages outside the exception class

(TRY003)


316-316: Use explicit conversion flag

Replace with conversion flag

(RUF010)


316-316: Redundant exception object included in logging.exception call

(TRY401)

backend/app/api/v1/repo_stats.py

142-145: Avoid specifying long messages outside the exception class

(TRY003)


173-173: Consider moving this statement to an else block

(TRY300)


178-178: Redundant exception object included in logging.exception call

(TRY401)


181-181: Redundant exception object included in logging.exception call

(TRY401)


184-184: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🔇 Additional comments (10)
backend/app/services/github/repo_stats.py (8)

21-31: LGTM! Good security and API configuration.

The initialization properly validates the GitHub token, sets appropriate headers (including User-Agent), and initializes the session placeholder for the async context manager pattern.


33-47: LGTM! Proper async context manager implementation.

The timeout and connector settings are appropriate for GitHub API interactions, and the cleanup logic properly handles session closure.
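The open/close lifecycle this comment refers to can be sketched generically with a dummy session class (names hypothetical; the real service would create an aiohttp ClientSession in `__aenter__`):

```python
import asyncio

class StatsSession:
    """Minimal async context manager mirroring the open/close lifecycle."""
    def __init__(self):
        self.open = False

    async def __aenter__(self):
        self.open = True  # e.g. create the HTTP session here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.open = False  # cleanup runs even when the body raises
        return False  # do not swallow exceptions

async def demo():
    async with StatsSession() as s:
        assert s.open  # session is live inside the block
    return s.open  # closed again after the block
```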


71-93: LGTM! Clean data fetching with proper defaults.

These methods correctly delegate to the request helper, handle None responses gracefully, and transform GitHub API responses into the expected structure with sensible defaults.

Also applies to: 206-229


95-146: LGTM! Correct PR state classification.

The logic properly distinguishes merged PRs from closed ones using the merged_at field, safely extracts nested user data, and returns a well-structured response with appropriate defaults.


148-186: LGTM! Proper filtering of PRs from issues.

The implementation correctly filters out pull requests (Line 158), as GitHub's /issues endpoint includes both issues and PRs. The counting and data extraction logic is sound.


238-273: LGTM! Excellent concurrent fetching with robust error handling.

The use of asyncio.gather with return_exceptions=True ensures that partial failures don't prevent successful data retrieval. Each exception is logged and replaced with a sensible default, providing resilience.


274-313: LGTM! Well-structured comprehensive response.

The response aggregates all repository data into a clear structure. The metrics section provides convenient access to key statistics without requiring callers to navigate nested objects.


188-204: Implement retry logic for GitHub's 202 responses on statistics endpoints.

The commit activity endpoint returns HTTP 202 (Accepted) when GitHub is computing statistics. The current code treats 202 as an error (returns empty list), but GitHub's API documentation explicitly requires retrying the request after a short delay. While the graceful handling in get_comprehensive_stats prevents crashes, this violates the documented API usage pattern and means callers always receive empty data when stats are being computed.
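A retry loop of the kind this comment asks for can be sketched against a stubbed fetch function; the delay and attempt count are arbitrary choices, not values from the PR:

```python
import asyncio

async def fetch_with_202_retry(fetch, max_attempts=3, delay=0.01):
    """Retry while the API answers 202 (stats still being computed)."""
    for _ in range(max_attempts):
        status, data = await fetch()
        if status != 202:
            return data
        await asyncio.sleep(delay)  # wait briefly, then ask again
    return []  # still computing after all attempts: fall back to empty
```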

backend/app/api/v1/repo_stats.py (2)

16-124: LGTM! Comprehensive and well-typed Pydantic models.

The models accurately represent the GitHub stats structure with appropriate optional fields and defaults, providing both validation and automatic API documentation.


156-174: LGTM! Well-structured endpoint with clear flow.

The endpoint properly validates input, uses the service as a context manager, logs progress, and returns the result. The separation of concerns between parsing, fetching, and error handling is clean.

@Parvm1102 (Author)

Hi @smokeyScraper,

I’ve reviewed my PR carefully, applied the CodeRabbit suggestions that were applicable, and intentionally left the others unchanged. All changes have been tested and are working as expected.

Please let me know if you’d like any of those revisited.


Successfully merging this pull request may close these issues.

BUG: Landing Page Repository Input Not Connected to CodeGraph Indexing
