Title: Hadoop Developer
Posting Date: 09/21/2020
Location: Multiple Locations
Job Type: Full Time
Job Description:
We are seeking a Big Data/Python Developer; experience in platform administration is preferred. The position will primarily be responsible for interfacing with key stakeholders and applying technical proficiency across different stages of the Software Development Life Cycle, including requirements elicitation and high-level design (HLD) and low-level design (LLD) definition. You will play an important role in creating high-level design artifacts, deliver high-quality code for a module, lead validation for all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Candidate must be located within commuting distance of Charlotte, NC or be willing to relocate to the area. This position may require travel in the US and Canada.
Bachelor’s Degree or foreign equivalent; work experience will be considered in lieu of a degree
At least 4 years of experience with Information Technology
3+ years of experience in the Hadoop ecosystem, e.g., Hadoop, HBase, Hive, Scala, Spark, Sqoop, Flume, Kafka, and Python
3+ years of experience in Python programming, with strong hands-on programming/scripting skills in Python, UNIX shell, Perl, and JavaScript
Strong knowledge of object-oriented concepts, data structures, and algorithms
Good experience in end-to-end implementation of DW/BI projects, especially data warehouse and data mart development
Knowledge of and experience with the full SDLC
Experience with Lean/Agile development methodologies
U.S. Citizenship or Permanent Residency required; we are not able to sponsor at this time
Preferred Qualifications:
At least 3 years of experience in the software development life cycle
At least 3 years of experience in project life cycle activities on development and maintenance projects
Experience in developing data science/analytics pipelines, from installation of various packages through data engineering and data analytics, is preferred.
Working experience with machine learning libraries (e.g., H2O) and NLP is preferred
At least 1 year of experience in Relational Modeling, Dimensional Modeling, and Modeling of Unstructured Data
Good understanding of data integration, data quality, and data architecture
Experience with big data technologies is preferred.
Good expertise in analyzing the impact of changes or issues
Experience in preparing test scripts and test cases to validate data and maintain data quality
Strong understanding and hands-on programming/scripting skills in UNIX shell, Perl, and JavaScript
Experience with design and implementation of ETL/ELT frameworks for complex warehouses/marts; knowledge of large data sets and experience with performance tuning and troubleshooting
Hands-on development, with a willingness to troubleshoot and solve complex problems
CI/CD exposure
Strong written and oral communication skills
Ability to work on a team in a diverse, multi-stakeholder environment
Ability to communicate complex technology solutions to diverse audiences, namely technical, business, and management teams
Experience managing a team of 2-3 would be a plus
Experience in and desire to work in a global delivery environment