12+ years of experience in IT, with 8+ years in the Big Data ecosystem

• Extensive experience in troubleshooting and development across Hadoop-ecosystem technologies such as HDFS, Hive, Pig, Flume, MongoDB, Accumulo, Sqoop, ZooKeeper, Spark, MapReduce2, YARN, HBase, Tez, Kafka, and Storm.

• Ability to benchmark systems, analyze bottlenecks, and propose solutions to eliminate them

• Experience in planning, designing, and strategizing the Big Data roadmap to meet the organization’s data-analytics objectives and goals

• Build distributed, reliable, and scalable data pipelines to ingest and process data in batch and in real time.

• End-to-end responsibility for the Hadoop life cycle within the organization

• Experience in developing large-scale data platforms and real-time streaming analytics

• Experience in implementing, managing, and administering the overall Hadoop infrastructure.

• Ability to clearly articulate the pros and cons of various Big Data technologies and platforms

• Ability to document Big Data use cases, solutions, and recommendations

• Ability to assist program and project managers with the design, planning, and governance of Big Data projects of any kind

• Responsible for identifying data sources and selecting the components used to ingest from them

• Ability to perform detailed analysis of business problems and technical environments, and to apply that analysis when designing the Big Data solution

• Ability to work creatively and analytically in a problem-solving environment

• Experience in an agile Big Data environment

• Good understanding of, and experience with, the Lambda Architecture, including its advantages and drawbacks

• Design Big Data processing pipelines

• Experience with Big Data security, data governance, and data encryption

• Experience with Spark Programming

• Experience integrating data from multiple data sources

• Experience building stream-processing systems using solutions such as Storm or Spark Streaming (see the streaming sketch after this list)

• Experience in fine-tuning applications and systems for high performance and higher-volume throughput.

• Experience in Hadoop cluster planning, screening, and maintenance

• Ability to work with huge volumes of data to derive business intelligence

• Experience and knowledge in analyzing data, uncovering information, deriving insights, and proposing data-driven strategies

• Ability to decipher intricate technical requirements

• Experience in translating, loading, and presenting disparate data sets in various formats and from various sources, such as JSON, text files, Kafka queues, and log data

• Experience in designing and implementing ETL/ELT processes (see the batch ETL sketch after this list)

• Monitoring performance and advising on any necessary infrastructure changes

• Defining data retention policies

• Cross-industry, cross-functional, and cross-domain experience (Oil and Gas industry experience is an added advantage)

• Excellent written and verbal communication skills
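
The stream-processing and real-time ingestion items above can be illustrated with a minimal Spark Structured Streaming sketch. It assumes a hypothetical Kafka topic named events, a broker at broker:9092, an illustrative three-field JSON event schema, and the spark-sql-kafka connector on the classpath; it shows the general pattern only, not any specific project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object StreamingIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-json-ingest")
      .getOrCreate()

    // Illustrative schema for the incoming JSON events (an assumption, not a real project schema).
    val schema = new StructType()
      .add("event_id", StringType)
      .add("event_time", TimestampType)
      .add("payload", StringType)

    // Read the raw stream from a Kafka topic; topic and broker address are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers the message body as bytes: cast it to a string, then parse the JSON.
    val parsed = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(from_json(col("json"), schema).as("data"))
      .select("data.*")

    // Write micro-batches to Parquet on HDFS, with checkpointing for fault tolerance.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "/data/events")
      .option("checkpointLocation", "/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```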
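Similarly, the ETL/ELT item can be sketched as a simple Spark batch job that extracts raw log files from HDFS, cleans them, and loads them into a partitioned Hive table. The source path, column names, and the analytics.app_logs table are hypothetical placeholders chosen for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object BatchEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("log-batch-etl")
      .enableHiveSupport()
      .getOrCreate()

    // Extract: read raw application logs from HDFS (path and file layout are assumptions).
    val logs = spark.read
      .option("header", "true")
      .csv("/raw/app_logs/*.csv")

    // Transform: drop malformed rows and derive a date column used for partitioning.
    val cleaned = logs
      .filter(col("status").isNotNull)
      .withColumn("event_date", to_date(col("timestamp")))

    // Load: write into a partitioned Hive table for downstream analytics.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .format("parquet")
      .saveAsTable("analytics.app_logs")
  }
}
```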
