Data Engineer, I
Zebra Technologies
Job Description
In this role, the Data Engineer, besides developing the solution, will also oversee other Engineers' development. Candidates should have worked with version control tools such as Git/GitHub and should have a reasonable understanding of best practices.

o Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering.
o 2 years of experience programming with at least one of the following languages: Python, Scala, Go.
o 2 years of experience in SQL and data transformation.
o 2 years of experience developing distributed systems using open-source technologies such as Spark and Dask.
o 2 years of experience with relational or NoSQL databases running in Linux environments (MySQL, MariaDB, PostgreSQL, MongoDB, Redis).
o Experience working with AWS / Azure / GCP environments is highly desired.
o Experience with data models in the Retail and Consumer Products industry is desired.
o Experience working on agile projects and understanding of agile concepts is desired.
o Demonstrated ability to learn new technologies quickly and independently.
o Excellent verbal and written communication skills, especially in technical communications.
o Ability to work and achieve stretch goals in a very innovative and fast-paced environment.
o Ability to work collaboratively in a diverse team environment.
o Ability to telework.

Play a critical role in the design and implementation of data platforms for the AI products. Develop productized and parameterized data pipelines that feed AI products leveraging GPUs and CPUs. Develop efficient data transformation code in Spark (in Python and Scala) and Dask. Build workflows to automate data pipelines using Python and Argo. Develop data validation tests to assess the quality of the input data. Conduct performance testing and profiling of the code using a variety of tools and techniques. Build data pipeline frameworks to automate high-volume and real-time data delivery for our data hub.
Operationalize scalable data pipelines to support data science and advanced analytics. Optimize customer data science workloads and manage cloud services costs/utilization.
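One of the responsibilities above is developing data validation tests that assess the quality of input data before it enters a pipeline. A minimal sketch of that kind of check, in plain Python so it stays self-contained; the record schema and field names (sku, quantity, price) are hypothetical, not taken from the posting:

```python
# Minimal data-validation sketch: check each input record against a
# hypothetical schema and summarize batch quality before ingestion.

def validate_record(record: dict) -> list:
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    for field in ("sku", "quantity", "price"):
        if record.get(field) is None:
            errors.append("missing field: " + field)
    if isinstance(record.get("quantity"), (int, float)) and record["quantity"] < 0:
        errors.append("quantity must be non-negative")
    if isinstance(record.get("price"), (int, float)) and record["price"] < 0:
        errors.append("price must be non-negative")
    return errors

def validate_batch(records: list) -> dict:
    """Summarize input quality: total rows, invalid rows, and per-row errors."""
    bad = {}
    for i, record in enumerate(records):
        errors = validate_record(record)
        if errors:
            bad[i] = errors
    return {"total": len(records), "invalid": len(bad), "errors": bad}

report = validate_batch([
    {"sku": "A1", "quantity": 3, "price": 9.99},
    {"sku": "B2", "quantity": -1, "price": 4.50},   # bad quantity
    {"quantity": 2, "price": 1.00},                 # missing sku
])
```

In a production pipeline the same idea would typically run as a Spark or Dask step over partitioned data, with the validation report used to gate or quarantine bad batches.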