SGA's Data Analytics (DA) team is seeking a self-driven Lead Data Engineer who can lead a team of data engineering analysts/senior analysts to solve complex data engineering problems for our US-based global media clients.
Our team of 60+ data engineers and data scientists already uses a range of open-source and licensed tools to solve complex problems and drive business decisions.
Leverage tools and technologies such as SQL and Python to extract, transform, and prepare large-scale datasets from various source systems. Employ appropriate algorithms to discover patterns, test hypotheses, and build actionable models that optimize business processes. Solve analytical problems and effectively communicate methodologies and results. Draw relevant inferences and insights from data, including identification of trends and anomalies. Work closely with internal and external stakeholders such as business teams, product managers, engineering teams, and partner teams, and align them with respect to your focus area.

What you will be responsible for -
• Writing clean, fully tested, and well-documented code in Python 3.5+ with pandas, NumPy, Dask, TensorFlow, scikit-learn, and Django
• Creating complex data processing pipelines, including optimization and user experience
• Designing, developing, testing, deploying, supporting, and enhancing data integration solutions to seamlessly connect and integrate enterprise systems within an Enterprise Data Platform
• Working directly with clients to identify pain points and opportunities in pre-existing data pipelines and to build or improve clients' analytics processes
• Developing and testing models using appropriate tools and technologies, and deploying them to production using continuous delivery practices
• Working directly with stakeholders on analytics framework model building, database design, and deployment strategies
• Advising clients on the use of different distributed storage and computing technologies from the many options available in the ecosystem
What you should have -
• 5+ years of overall industry experience specifically in data engineering
• 4+ years of experience building and deploying large scale data processing pipelines in a production environment
• Strong experience in building data pipelines and analysis tools using Python and PySpark
• Client stakeholder management: you will get to work directly with Global Analytics leads or Global CDOs of large MNCs, and you will be the SME for them!
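For a flavor of the extract-transform work described above, here is a minimal pandas sketch of a typical pipeline step; all column names and data are hypothetical illustrations, not taken from this posting:

```python
# Illustrative extract-transform step with pandas.
# Column names ("campaign", "spend") and the sample data are hypothetical.
import pandas as pd

def prepare_dataset(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean raw records and aggregate spend per campaign."""
    cleaned = (
        raw.dropna(subset=["campaign", "spend"])            # drop incomplete rows
           .assign(spend=lambda df: df["spend"].astype(float))
    )
    # Aggregate spend per campaign: a typical pipeline output table
    return (
        cleaned.groupby("campaign", as_index=False)["spend"]
               .sum()
               .sort_values("campaign")
               .reset_index(drop=True)
    )

raw = pd.DataFrame({
    "campaign": ["A", "A", "B", None],
    "spend": [100.0, 50.0, 75.0, 20.0],
})
result = prepare_dataset(raw)
print(result)
```

In production, the same transformation would typically run over much larger datasets via PySpark rather than in-memory pandas.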
Specialties: ESG Consulting, Equity & Fixed Income Research, Corporate Finance & Valuation, Market Research, Market Intelligence, Data Engineering, Data Governance & Management, Data Science, AI/ML, Customer Analytics, Marketing Analytics, Sales Analytics, Competitive Intelligence, Intelligent Automation, RPA, Quality Engineering, and Business Consulting