Data Engineer

Bhash Software Labs
Full Time · Mid-level
Bengaluru, Karnataka, IN · Posted March 6, 2026


Job Description

Data Engineer (Pipeline + Scraping + Enrichment) — Bhashsms

Location: Bangalore

Experience: 1–3 years

Compensation: ₹8–12 LPA

Role Type: Hands-on IC

About the Role

We process millions of phone numbers, business profiles, and campaign interactions.

We need a Data Engineer who can build and scale data pipelines to support AI models and enrich SME data.

This role is perfect for someone who loves:

  • Scraping
  • ETL
  • Data cleaning
  • Automating workflows
  • Fast execution

What You Will Build (First 180 Days)

1. Data Ingestion Pipelines

  • Scrape open directories
  • Parse public business information
  • Integrate partner APIs
  • Build clean datasets
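
To illustrate the kind of ingestion work described above, here is a minimal stdlib-only sketch of a directory-page parser. It assumes a hypothetical page layout where each listing sits in a `<div class="listing">` with the business name inside an anchor; a real pipeline would fetch pages with requests and use per-site selectors (e.g. with BeautifulSoup).

```python
from html.parser import HTMLParser

class ListingParser(HTMLParser):
    """Collect business names from a (hypothetical) directory page layout."""

    def __init__(self):
        super().__init__()
        self.in_listing = False   # inside a <div class="listing">
        self.in_link = False      # inside the listing's <a> tag
        self.names = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("class") == "listing":
            self.in_listing = True
        elif tag == "a" and self.in_listing:
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_listing = False
        elif tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.names.append(data.strip())

def parse_directory_page(html: str) -> list[str]:
    """Return the business names found on one directory page."""
    parser = ListingParser()
    parser.feed(html)
    return parser.names
```

The output of a parser like this would feed the cleaning and normalization stage below.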

2. Data Cleaning + Normalization

  • Remove duplicates
  • Standardize phone numbers
  • Tag location, industry, business type
  • Build category mapping files

3. Enrichment Pipelines

  • Infer attributes
  • Build connection graphs
  • Support the Data Scientist with clean feature-ready data
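
One simple form a connection graph can take: link businesses that share a value for some attribute. The sketch below is an assumption about what "connection" means here (the posting does not define it); it links records by a shared field such as industry or pincode, and the `id`/`industry` field names are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def build_connection_graph(businesses: list[dict],
                           key: str = "industry") -> dict[str, set[str]]:
    """Link every pair of businesses sharing the same value for `key`.

    Returns an adjacency map: business id -> set of connected ids.
    """
    by_value = defaultdict(list)
    for biz in businesses:
        value = biz.get(key)
        if value:
            by_value[value].append(biz["id"])

    graph = defaultdict(set)
    for ids in by_value.values():
        for a, b in combinations(ids, 2):
            graph[a].add(b)
            graph[b].add(a)
    return dict(graph)
```

An adjacency map like this hands the Data Scientist graph features (degree, neighborhoods) without any extra joins.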

Skills Required

Must Have

  • Python (requests, BeautifulSoup; Selenium optional)
  • SQL
  • ETL tools or custom scripts
  • Experience with large CSV/JSON processing
  • API integration

Good to Have

  • Kafka / Airflow basics
  • Scrapy
  • AWS / GCP
  • Basic ML familiarity (not mandatory)

Responsibilities

  • Build a scalable scraping system
  • Maintain clean, up-to-date datasets
  • Create ETL pipelines for ML models
  • Ensure high data quality
  • Debug pipeline failures
  • Support DS and CTO with data requirements
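
"Create ETL pipelines" and "debug pipeline failures" go together: most pipeline debugging is knowing which step failed and whether a retry fixes it. A minimal stdlib sketch of that scaffolding (an orchestrator like Airflow, listed under good-to-have, provides this and much more):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_pipeline(steps, data, retries: int = 2, delay: float = 0.0):
    """Run ETL steps in order, retrying each one on transient failure.

    `steps` is a list of (name, callable) pairs; each callable takes the
    dataset and returns the transformed dataset.
    """
    for name, step in steps:
        for attempt in range(1, retries + 2):
            try:
                data = step(data)
                log.info("step %s ok", name)
                break
            except Exception as exc:
                log.warning("step %s failed (attempt %d): %s",
                            name, attempt, exc)
                if attempt > retries:
                    raise  # exhausted retries: surface the real error
                time.sleep(delay)
    return data
```

Logging per step with the attempt number is what makes a 3 a.m. pipeline failure diagnosable from the logs alone.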
