Posted on: October 12, 2018
This opportunity is located at their Charlotte, NC campus; some relocation assistance may be available. We are only able to consider US citizens or Green Card holders for this position.

We are searching for a talented Hadoop Administrator/Engineer with proven engineering, scripting, and troubleshooting skills. The ideal candidate will have strong experience with Hadoop data stores/clusters as well as relational databases. In addition, working knowledge of NoSQL databases, strong communication skills, and the ability to work independently are preferred.

Tasks and Major Responsibilities:
- Experience working with Hadoop ecosystem components such as HDFS, Hive, MapReduce, YARN, Impala, Spark, Sqoop, HBase, Sentry, Hue, and Oozie
- Installation, configuration, and upgrading of the Cloudera distribution of Hadoop
- Exposure to Kafka and Apache NiFi
- Implementation and ongoing administration of Hadoop infrastructure
- Experience with Hadoop security, including Kerberos setup and RBAC authorization using Apache Sentry
- Create and document best practices for the Hadoop and big data environment
- File system management and cluster monitoring using Cloudera Manager
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Strong troubleshooting skills covering MapReduce, YARN, and Sqoop job failures and their resolution
- Analyze and resolve multi-tenancy job execution issues
- Backup and disaster recovery solutions for the Hadoop cluster
- Experience working on Unix operating systems; able to efficiently handle system administration tasks related to the Hadoop cluster
- Knowledge of or experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Troubleshoot connectivity issues between BI tools (e.g., Datameer, SAS, Tableau) and the Hadoop cluster
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for new users
- Serve as the point of contact for vendor escalation; be available for 24x7 Hadoop support issues
- Participate in new data product and new technology evaluations; manage the certification process
- Evaluate and implement new initiatives for technology and process improvements
- Interact with Security Engineering to design solutions, tools, testing, and validation for controls

Technical Skills:
- 3 to 4 years of experience with Hadoop data store/cluster administration and 5 to 8 years of relational database experience
- Strong Hadoop cluster administration expertise and understanding of Hadoop internals
- Excellent performance-tuning skills for large workloads inside a Hadoop cluster
- Strong partitioning knowledge (data sharding concepts)
- Scripting skills: shell and Python
- Experience upgrading Cloudera Hadoop distributions
- Experience in performance tuning and troubleshooting using a drill-down approach across the OS, database, and application; end-to-end application connectivity
- Familiarity with NoSQL data stores (MongoDB, Cassandra, HBase)
- Familiarity with cloud architecture (public and private clouds), including AWS and Azure
- Prior experience administering Teradata or other relational databases

Desired Skills (preferred, not required):
- Scripting with Pig
- Proficiency with Microsoft Office 2010 (Word, Excel, PowerPoint)
- Automation tools: Puppet, CFEngine
- Familiarity with Solr and Spark
- Data movement tools such as DataStage, Informatica, and Sqoop
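The new-user onboarding duties above (Linux account, Kerberos principal, access testing) can be sketched as a short shell runbook. The username, realm, and `run` helper below are hypothetical placeholders, not part of the posting; the helper only echoes each command so the sketch can be dry-run without a live cluster.

```shell
#!/usr/bin/env bash
# Sketch: onboarding a new Hadoop user on a Kerberized Cloudera cluster.
# NEW_USER and REALM are hypothetical; `run` echoes commands (dry-run).
set -euo pipefail

NEW_USER="jdoe"          # hypothetical new user
REALM="EXAMPLE.COM"      # hypothetical Kerberos realm

run() { echo "+ $*"; }   # dry-run helper; use run() { "$@"; } on a real cluster

# 1. Create the Linux account on the gateway/edge node
run useradd -m "$NEW_USER"

# 2. Create a Kerberos principal for the user
run kadmin.local -q "addprinc -randkey ${NEW_USER}@${REALM}"

# 3. Provision an HDFS home directory owned by the new user
run hdfs dfs -mkdir -p "/user/${NEW_USER}"
run hdfs dfs -chown "${NEW_USER}:${NEW_USER}" "/user/${NEW_USER}"

# 4. Smoke-test HDFS and Hive access as the new user
run su - "$NEW_USER" -c "hdfs dfs -ls /user/${NEW_USER}"
run su - "$NEW_USER" -c "hive -e 'SHOW DATABASES;'"
```

On a real cluster, Sentry role grants for Hive and a keytab (or initial password) for the new principal would typically follow these steps.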
Keywords: TRC, Charlotte, Hadoop Administrator, IT / Software / Systems, Charlotte, North Carolina