Big Data Engineer – PySpark & SQL
Job Description
In this role, you will:
Develop and maintain complex SQL queries and stored procedures for data extraction, transformation, and loading (ETL).
Build and optimize scalable data pipelines and data processing workflows using PySpark and Python.
Collaborate with data engineers, scientists, and analysts to understand and fulfill data requirements.
Ensure data quality, integrity, and consistent performance across big data environments.
Debug, monitor, and fine-tune data jobs for optimal performance.
Document code and processes; adhere to best practices for coding and performance.
This role is pivotal in enabling the organization to process, analyze, and manage large datasets efficiently on modern enterprise data platforms.
Requirements
Please see the job description above for qualifications.
Why Join RansomLock
Be part of a team making a real impact in cybersecurity.
Career Advancement
Continuous learning opportunities with clear paths for growth and promotion.
Cutting-Edge Tech
Work with AI-driven security systems and the latest cybersecurity tools.
Collaborative Culture
Join a diverse team of experts working together to solve complex security challenges.