Sr. Software Engineer (India) - Research and Development

Are you familiar with the Hadoop platform? Are you able to design, implement, and maintain optimal data/ML pipelines on the Hadoop platform?

We are looking for a strong Senior Software Engineer to fulfil the following:

  1. Design, create and maintain production-ready data and ML pipelines
  2. Drive optimization, testing and tooling to improve quality
  3. Review and approve solution designs and architecture for data and ML pipelines
  4. Experience in developing REST API services using one of the Scala frameworks
  5. Experience [Within Last 3 Years]:
    1. Scala: min. 2 years
    2. Spark: min. 2 years
    3. Hadoop: min. 2 years
      1. Security
      2. Spark on YARN
      3. Architectural knowledge
    4. HBase: min. 2 years
    5. Hive: min. 1 year
    6. RDBMS (MySQL / PostgreSQL / MariaDB): min. 2 years
    7. CI/CD: min. 1 year
    8. Best coding practices in Scala
  6. Good to have:
    1. Kafka
    2. Spark Streaming
    3. Apache Phoenix
    4. Caching layer (Memcache / Redis)
    5. Spark ML
    6. Functional programming (Cats / Scalaz)
  7. Good time management and multitasking skills; able to meet deadlines both independently and as part of a team

Responsibilities

  • Design and implement fine-tuned, production-ready data/ML pipelines on the Hadoop platform
  • Understand business requirements and solution designs, and develop and implement solutions that adhere to big data architectural guidelines
  • Follow a proper SDLC (code reviews, sprint process)
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc
  • Build robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users
  • Review and approve high-level & detailed designs to ensure that the solution meets business needs and aligns with the data & analytics architecture principles and roadmap
  • Understand the relevant data security standards and use secure tooling to apply and adhere to the required data controls for user access on the Hadoop platform
  • Support and contribute to developing guidelines and standards for data ingestion
  • Work with data scientists and the business analytics team to assist with data ingestion and resolve data-related technical issues
  • Design and document the development & deployment flow

The ideal candidate should possess

  • Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent, with at least 2 years of experience in big data systems such as Hadoop and in cloud-based solutions
  • Ability to troubleshoot and optimize complex queries on the Spark platform
  • Expert in building and optimizing big data/ML pipelines, architectures, and data sets
  • Excellent experience in Scala
  • Knowledge of modelling unstructured data into structured designs
  • Experience in Big Data access and storage techniques
  • Experience estimating costs based on design and development effort
  • Excellent debugging skills across the technical stack mentioned above, including analyzing server and application logs
  • Highly organized, self-motivated, proactive, and able to propose optimal design solutions
  • Ability to analyse and understand complex problems
  • Ability to explain technical information in business terms
  • Ability to communicate clearly and effectively, both verbally and in writing
  • Strong in user requirements gathering, maintenance, and support
  • Excellent understanding of Agile Methodology
  • Good experience in data architecture, data modelling, and data security

Job Perks

  • Attractive variable compensation package
  • Opportunity to work with an award-winning organization in the hottest space in tech –
    artificial intelligence and advanced machine learning