Implement end-to-end Hadoop ecosystem components and accompanying frameworks with minimal assistance.
• Produce and refactor code without assistance, observing basic code hygiene practices.
• Strong knowledge of object-oriented programming (OOP) and solid programming skills in Java, Scala, and shell scripting.
• Good knowledge of distributed data processing frameworks such as Apache Spark.
• Good knowledge of engineering practices such as CI/CD and of version control tools (Git, GitHub).
• Drive out features via appropriate test frameworks, e.g. JUnit and ScalaTest.
• Apply my understanding of different programming paradigms to choose the correct approach for a task.
• Identify patterns in code and, with minimal guidance, refactor towards them where doing so increases understanding and/or maintainability.
• Translate small behavioral requirements into tasks and code.
• Develop high-quality code that enables rapid delivery, ruthlessly pursuing continuous integration and delivery.
• Commit code early and often, demonstrating my understanding of version control & branching strategies.
• Apply patterns for integration (events/services).
• Implement relevant project instrumentation.
• Follow best practices for continuous BDD/TDD/performance/security/smoke testing.
• Work effectively with my product stakeholders to communicate and translate their needs into improvements in my product.
• Participate in team ceremonies.
• Support production systems, resolve incidents and perform root cause analysis.
• Debug code and support/maintain the software solution.
Hadoop-certified candidates will be preferred.