
Hi there, I'm Kabir Sharan Chandra! 👋

🚀 Data Engineering | Data Analysis | Data Modeling | Business Intelligence | ETL Development | Modern Data Solutions Explorer


πŸ‘©β€πŸ’» About Me

  • πŸ— Building Scalable Data Solutions with Azure, Snowflake, Databricks, DBT & SQL
  • 🎯 Passionate about Data Modeling, ETL, and AI-powered solutions
  • 🌱 Currently *learning Microsoft Azure febric, *
  • πŸ“ Based in Sweden πŸ‡ΈπŸ‡ͺ, exploring opportunities in Sweden and EU

I bring over 10 years of proven expertise across the Banking, Finance, Retail, and Insurance sectors. Skilled in Azure Databricks, dbt, Snowflake, Oracle, SQL, IBM DataStage, Ab Initio ETL, IDMC, NICE Actimize, Erwin, and Data Warehousing, with experience building end-to-end data pipelines, implementing Azure Delta Lake architectures, and ensuring data governance and integrity. Strong ability to translate business requirements into robust, production-ready data solutions.

✅ Data Warehouse Development & Optimization: Developed centralized data platforms, integrated data from multiple sources, and optimized processes, achieving a 40% improvement in data accessibility and a 25% reduction in processing times.

☁ Cloud Transition & Data Modeling: Experience transitioning data solutions to Azure and Snowflake, implementing Kimball-based data models for scalable, future-proof solutions.

⚡ Performance & Process Optimization: Proven ability to optimize SQL queries, improving report generation times by 30% and addressing performance bottlenecks while ensuring data accuracy and compliance.

🤝 Collaboration & Training: Passionate about teamwork and mentorship; trained colleagues on ETL design, SQL, and Tableau, boosting efficiency by 20%.

📊 Regulatory Compliance & Reporting: Led efforts to enhance data solutions for regulatory compliance, supporting stakeholders' reporting requirements and reducing reporting times by 15%.


🎓 Certifications


⚡ Tech Stack

💻 Languages & Databases:
SQL
Python
Snowflake

🚀 Cloud & Tools:
Azure
Databricks
Synapse Analytics
Power BI

🌍 Let's Connect!

📧 Email: kabirmca@gmail.com
💼 LinkedIn: linkedin.com/in/kabir-sharan-chandra

kabir

The 'kabir' project was generated using the default-python template.

  • src/: Python source code for this project.
    • src/kabir/: Shared Python code that can be used by jobs and pipelines.
  • resources/: Resource configurations (jobs, pipelines, etc.)
  • tests/: Unit tests for the shared Python code.
  • fixtures/: Fixtures for data sets (primarily used for testing).
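For illustration, shared code under src/kabir/ might expose small helpers that both jobs and pipelines can import. The function below is a hypothetical sketch, not the template's actual code:

```python
# Hypothetical helper that could live in src/kabir/ and be imported
# by jobs and pipelines alike (illustrative only).
def clean_column_name(name: str) -> str:
    """Normalize a raw column name for use in downstream tables."""
    return name.strip().lower().replace(" ", "_")

print(clean_column_name("  Customer ID "))  # customer_id
```

Keeping such logic in src/kabir/ rather than inside notebooks is what makes it testable from tests/.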

Getting started

Choose how you want to work on this project:

(a) Directly in your Databricks workspace, see https://docs.databricks.com/dev-tools/bundles/workspace.

(b) Locally with an IDE like Cursor or VS Code, see https://docs.databricks.com/dev-tools/vscode-ext.html.

(c) With command line tools, see https://docs.databricks.com/dev-tools/cli/databricks-cli.html

If you're developing with an IDE, install this project's dependencies using uv (for example, `uv sync --dev`).

Using this project with the CLI

The Databricks workspace and IDE extensions provide a graphical interface for working with this project. It's also possible to interact with it directly using the CLI:

  1. Authenticate to your Databricks workspace, if you have not done so already:

    $ databricks configure
    
  2. To deploy a development copy of this project, type:

    $ databricks bundle deploy --target dev
    

    (Note that "dev" is the default target, so the --target parameter is optional here.)

    This deploys everything that's defined for this project. For example, the default template would deploy a pipeline called [dev yourname] kabir_etl to your workspace. You can find that resource by opening your workspace and clicking on Jobs & Pipelines.

  3. Similarly, to deploy a production copy, type:

    $ databricks bundle deploy --target prod
    

    Note that the default template includes a job that runs the pipeline every day (defined in resources/sample_job.job.yml). The schedule is paused when deploying in development mode (see https://docs.databricks.com/dev-tools/bundles/deployment-modes.html).
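    As a rough sketch, a daily-scheduled job in resources/sample_job.job.yml could look like the following (key names follow the Databricks Asset Bundles configuration format; the exact contents of the generated file may differ):

```yaml
# Hypothetical sketch of resources/sample_job.job.yml; names and
# values are illustrative, not the template's actual file.
resources:
  jobs:
    sample_job:
      name: sample_job
      schedule:
        quartz_cron_expression: "0 0 6 * * ?"   # daily at 06:00
        timezone_id: UTC
      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.kabir_etl.id}
```

    In development mode the deployment automatically pauses this schedule, so the job only runs when triggered manually.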

  4. To run a job or pipeline, use the "run" command:

    $ databricks bundle run
    
  5. Finally, to run tests locally, use pytest:

    $ uv run pytest
    

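The tests in tests/ are plain pytest code. As a sketch of what one might look like (the helper under test here is hypothetical, not part of the generated template):

```python
# Hypothetical unit test for shared code in src/kabir/; the function
# under test is illustrative only.
def normalize_amount(raw: str) -> float:
    """Parse a European-formatted amount like '1 234,50' into a float."""
    return float(raw.replace(" ", "").replace(",", "."))

def test_normalize_amount():
    assert normalize_amount("1 234,50") == 1234.5
    assert normalize_amount("0,99") == 0.99

test_normalize_amount()
```

Because the shared code lives in an importable package rather than in notebooks, tests like this run locally with no Databricks connection.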