Title: Hadoop Developer
Posting Date: 04/28/2020
Location: Texas
Job Type: Full Time
Job Description:
If you are passionate about data, software development, and analytics, have first-hand experience with Hadoop, data warehousing, and data integration, and believe in modern software development methods, teamwork, creativity, and collaboration within a diverse development environment to produce the next generation of data and analytics platforms and services, we are interested in talking to you.
Equinox is seeking a professionally trained and qualified senior Hadoop developer within the Customer Information & Analytics organization. The role requires the ability to perform software development activities in an Agile environment, working closely with IT and business stakeholders.
Job Responsibilities:
· Develop solutions using Hadoop and related Big Data technologies such as Spark, Kafka, Hive, HBase, Sqoop, Pig, Impala, Flume, Oozie, and MapReduce.
· Write software in programming languages such as Java, Python, and Scala, using object-oriented approaches to design, code, test, and debug programs.
· Design, build, and maintain Big Data workflows/pipelines that process billions of records in large-scale data environments, including end-to-end design and build of near-real-time and batch data pipelines.
· Partner with development teams to ensure coding standards align with DevOps practices with respect to tools, standards, and security.
· Lead code review sessions to validate adherence to development standards, and benchmark application performance through capacity testing.
· Apply DevOps techniques and tools such as GitHub, Jira, Jenkins, and Crucible for continuous integration, continuous deployment, and build automation.
· Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering efforts.
· Develop, implement, and optimize streaming, data lake, and Big Data analytics solutions.
· Support reusable frameworks and data governance processes by partnering with lines of business (LOBs) on any code/requirements remediation.
· Engage in application design and data modeling discussions, and participate in developing and enforcing data security policies.
· Collaborate as part of a cross-functional Agile team to create and enhance software for next-generation Big Data applications.
· Work independently and drive solutions end to end, leveraging various technologies to solve data problems.
Required Education, Experience & Skills:
· Bachelor's or Master's degree in a related field, or equivalent experience.
· Related technical experience, including expert-level knowledge of Java, Python, Spark, and SQL; hands-on experience with HBase.
· 4+ years of experience with the Hadoop ecosystem and Big Data technologies.
· Working knowledge of NoSQL, RDBMS, SQL, JSON, and XML; ETL skills are a must.
· Solid design and development background in ETL, DW, BI, and data migration projects.
· Experience with Hadoop testing frameworks and familiarity with Hadoop administration.
· Experience working in Agile Scrum teams.
· Passion for continuous learning, and for experimenting with, applying, and contributing to cutting-edge open-source technologies and software paradigms.
· Ability to research and assess open-source technologies and components, and to recommend and integrate them into the design and implementation.
· Excellent problem-solving skills, verbal/written communication, and the ability to explain technical concepts to business people.
· Ability to work in a fast-paced environment and manage multiple priorities in parallel.
· Self-motivated and capable of delivering results with minimal ongoing direction.