👋 Data Engineering | Data Analysis | Data Modeling | Business Intelligence | ETL Development | Modern Data Solutions Explorer
- 🚀 Building Scalable Data Solutions with Azure, Snowflake, Databricks, DBT & SQL
- 🎯 Passionate about Data Modeling, ETL, and AI-powered solutions
- 🌱 Currently learning Microsoft Fabric
- 📍 Based in Sweden 🇸🇪, exploring opportunities in Sweden and the EU
Data professional with over 10 years of proven expertise across the Banking, Finance, Retail, and Insurance sectors. Skilled in Azure Databricks, DBT, Snowflake, Oracle, SQL, IBM DataStage, Ab Initio ETL, IDMC, NICE Actimize, erwin, and Data Warehousing, with expertise in building end-to-end data pipelines, implementing Azure Delta Lake architectures, and ensuring data governance and integrity. Strong ability to translate business requirements into robust, production-ready data solutions.
✅ Data Warehouse Development & Optimization: Developed centralized data platforms, integrated data from multiple sources, and optimized processes, achieving a 40% improvement in data accessibility and a 25% reduction in processing times.
✅ Cloud Transition & Data Modeling: Experience transitioning data solutions to Azure and Snowflake, implementing Kimball-based data models for scalable, future-proof solutions.
⚡ Performance & Process Optimization: Proven ability to optimize SQL queries, improving report generation times by 30% and addressing performance bottlenecks while ensuring data accuracy and compliance.
🤝 Collaboration & Training: Passionate about teamwork and mentorship; trained colleagues on ETL design, SQL, and Tableau, boosting efficiency by 20%.
📊 Regulatory Compliance & Reporting: Led efforts to enhance data solutions for regulatory compliance, supporting stakeholders' reporting requirements and reducing reporting times by 15%.
- Databricks Certified Data Engineer Professional: https://credentials.databricks.com/profile/kabirsharanchandra566495/wallet
- Oracle Certified OCA & OCP
📧 Email: kabirmca@gmail.com
💼 LinkedIn: linkedin.com/in/kabir-sharan-chandra
The 'kabir' project was generated using the default-python template.
- src/: Python source code for this project.
- src/kabir/: Shared Python code that can be used by jobs and pipelines.
- resources/: Resource configurations (jobs, pipelines, etc.)
- tests/: Unit tests for the shared Python code.
- fixtures/: Fixtures for data sets (primarily used for testing).
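As a hedged illustration of how shared code in src/kabir/ might be structured, here is a minimal sketch. The function name `clean_column_names` and its behavior are assumptions for this example, not part of the generated template:

```python
# Hypothetical example of shared code that could live in src/kabir/.
# The function name and behavior are illustrative assumptions.

def clean_column_names(columns: list[str]) -> list[str]:
    """Normalize raw column names so jobs and pipelines agree on a schema."""
    return [c.strip().lower().replace(" ", "_") for c in columns]


if __name__ == "__main__":
    raw = ["Customer ID", " Order Date", "Total Amount "]
    # Shared logic like this can be imported by jobs, pipelines, and tests alike.
    print(clean_column_names(raw))
```

Keeping small, pure helpers like this in src/kabir/ is what makes them easy to cover with the unit tests under tests/.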
Choose how you want to work on this project:
(a) Directly in your Databricks workspace, see https://docs.databricks.com/dev-tools/bundles/workspace.
(b) Locally with an IDE like Cursor or VS Code, see https://docs.databricks.com/dev-tools/vscode-ext.html.
(c) With command line tools, see https://docs.databricks.com/dev-tools/cli/databricks-cli.html
If you're developing with an IDE, dependencies for this project should be installed using uv:
- Make sure you have the UV package manager installed. It's an alternative to tools like pip: https://docs.astral.sh/uv/getting-started/installation/.
- Run `uv sync --dev` to install the project's dependencies.
The Databricks workspace and IDE extensions provide a graphical interface for working with this project. It's also possible to interact with it directly using the CLI:
Authenticate to your Databricks workspace, if you have not done so already:

```
$ databricks configure
```
To deploy a development copy of this project, type:

```
$ databricks bundle deploy --target dev
```

(Note that "dev" is the default target, so the `--target` parameter is optional here.)

This deploys everything that's defined for this project. For example, the default template would deploy a pipeline called `[dev yourname] kabir_etl` to your workspace. You can find that resource by opening your workspace and clicking on Jobs & Pipelines.
Similarly, to deploy a production copy, type:

```
$ databricks bundle deploy --target prod
```

Note that the default template includes a job that runs the pipeline every day (defined in resources/sample_job.job.yml). The schedule is paused when deploying in development mode (see https://docs.databricks.com/dev-tools/bundles/deployment-modes.html).
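For reference, a daily schedule in a job resource file typically looks something like the following. This is a hedged sketch of the shape such a file can take, not the exact contents of the generated resources/sample_job.job.yml; the cron expression and timezone are illustrative assumptions:

```yaml
# Sketch of a scheduled job resource; field names follow the
# Databricks Asset Bundles job schema, values are illustrative.
resources:
  jobs:
    sample_job:
      name: sample_job
      schedule:
        quartz_cron_expression: "0 0 6 * * ?"   # every day at 06:00
        timezone_id: "UTC"
        # In development mode, deployment automatically pauses this schedule.
```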
To run a job or pipeline, use the "run" command:

```
$ databricks bundle run
```
Finally, to run tests locally, use `pytest`:

```
$ uv run pytest
```
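A minimal test file under tests/ could look like the sketch below. The helper being tested is a hypothetical stand-in for whatever shared code lives in src/kabir/, assumed here for illustration:

```python
# tests/test_example.py -- hypothetical unit test; the helper below is an
# illustrative stand-in for a shared function from src/kabir/.

def clean_column_names(columns):
    """Stand-in for a shared helper from src/kabir (assumed for this sketch)."""
    return [c.strip().lower().replace(" ", "_") for c in columns]


def test_clean_column_names():
    # pytest discovers functions prefixed with test_ and runs their assertions.
    assert clean_column_names(["Customer ID", " Order Date"]) == [
        "customer_id",
        "order_date",
    ]
```

Running `uv run pytest` executes tests like this inside the environment created by `uv sync --dev`, so they see the same dependencies as the project itself.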
