Minimum 3 years of working experience with the Hadoop ecosystem, including building distributed applications on Hadoop. Minimum 2 years of experience working in the data integration domain with databases such as Oracle, Teradata, and Netezza. Strong knowledge of Spark, NoSQL databases, and Scala required. Analyze the data integration software product and recommend improvements. Analyze the different frameworks available in the technology domain where the product is used, e.g. big data processing, distributed computing, streaming, and IoT frameworks, and recommend and prioritize their integration into the product. Partner with the product engineering team in adapting new technologies into the product. Research and build prototypes of new features, and prepare documentation for the product engineering team for integration.
Salary: Not Disclosed by Recruiter
Industry: IT-Software / Software Services
Functional Area: IT Software – DBA, Data Warehousing
Role Category: Programming & Design
Spark, Big Data, Oracle, Hadoop, NoSQL, Netezza, Data Integration, Product Engineering, Product Research, Data Processing, Architect, Java, Scala, R&D, Research
Desired Candidate Profile:
UG: Any Graduate – Any Specialization, B.Tech/B.E. – Any Specialization, B.Sc – Any Specialization, Diploma – Any Specialization, BCA – Computers
PG: Any Postgraduate, MBA/PGDM – Any Specialization, MCA – Computers, M.Tech – Any Specialization
Doctorate: Doctorate Not Required
Diyotta is leading the move to modern data integration and helping enterprises turn Hadoop into a powerful information hub.