Product Development
Requisition #: 194707

Experian is seeking a Big Data Engineer to join our CIS Product Delivery team. This is a great opportunity for someone who specializes in Kafka architecture to enable our cloud-based financial services platform to access timely, accurate, and relevant data. The ideal candidate will have built real-time software services platforms where large-volume messaging is core to the solution.

About Experian

Experian is the world’s leading global information services company, unlocking the power of data to create more opportunities for consumers, businesses, and society. For five years in a row, we have been named to the Top 100 “World’s Most Innovative Companies” by Forbes Magazine. With a focus on our employees, we were rated the #1 Top Workplace by the Orange County Register. Experian Consumer Information Services is redefining the way our clients do business within all aspects of the customer credit lifecycle. Fueled by best-in-class data and innovative technology, we help businesses make smarter decisions, identify consumers, make lending decisions, market to prospects, and collect.


About this role


As a Kafka Engineer, you must be a messaging expert with an extensive, well-rounded background in a diverse set of messaging middleware solutions (commercial, open source, in-house) and an in-depth understanding of the architecture of such solutions, Kafka in particular. You should have an established track record with Kafka administration, configuration, and troubleshooting, with hands-on production experience and a deep understanding of Kafka's architecture and internals, including the interplay of architectural components such as Kafka Connect and Kafka Streams.


  • Design, develop, and implement the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Attunity, Schema Registry, and other streaming-oriented technologies

  • Assist in building out the DevOps strategy for hosting and managing our SDP microservice and connector infrastructure in the AWS cloud

  • Strong track record of designing and implementing big data technologies around Apache Hadoop, Kafka streaming, NoSQL, Java/J2EE, and distributed computing platforms in large enterprises where scale and complexity have been tackled

  • Proven experience participating in agile development projects for enterprise-level systems component design and implementation

  • Deep understanding and application of enterprise software design for the implementation of data services and middleware

  • 5+ years of experience in relevant streaming/queueing implementation roles
  • Bachelor's degree in an engineering discipline; Master's preferred
  • Experience monitoring the health of a Kafka cluster (data loss and consumer lag) and building a strategy for short TTD (time to detect) of broker failures and fast TTR (time to recover)
  • Strong coder who can implement Kafka producers and consumers in various programming languages following common patterns and best practices
  • Experience integrating Kafka with systems such as Elasticsearch and databases (RDBMS or NoSQL)
  • Experience in Spark stream processing is a plus
  • Experience in RDBMS change-log streaming is a plus
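To make the lag-monitoring requirement above concrete, here is a minimal, stdlib-only Python sketch. It is an illustration, not real Kafka admin-client code: it assumes the per-partition log-end offsets and committed consumer-group offsets have already been fetched elsewhere, and the dict shapes and function names are hypothetical.

```python
# Consumer lag per partition: lag = log-end offset - committed offset.
# Offsets are assumed to have been fetched already (e.g. via an admin
# client); the dict shapes here are illustrative, not a real Kafka API.

def partition_lags(end_offsets, committed_offsets):
    """Return {partition: lag} for every partition in end_offsets.

    A partition with no committed offset yet is treated as fully lagging.
    """
    lags = {}
    for partition, end in end_offsets.items():
        committed = committed_offsets.get(partition, 0)
        lags[partition] = max(0, end - committed)
    return lags

def lagging_partitions(lags, threshold):
    """Partitions whose lag exceeds the alert threshold.

    Alerting on this condition is one way to keep TTD short."""
    return sorted(p for p, lag in lags.items() if lag > threshold)

# Example: partition 2 has no committed offset yet, partition 1 is behind.
end = {0: 100, 1: 250, 2: 40}
committed = {0: 100, 1: 120}
lags = partition_lags(end, committed)
print(lags)                                    # {0: 0, 1: 130, 2: 40}
print(lagging_partitions(lags, threshold=50))  # [1]
```

In production this check would typically run on a schedule and feed an alerting pipeline, so that a stalled consumer group or failed broker is detected quickly.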

  • Systems integration experience, including design and development of APIs, adapters, and connectors, and integration with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions

  • Financial industry experience preferred
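The producer/consumer bullet above refers to Kafka's keyed-partitioning contract: records with the same key land in the same partition, so per-key ordering is preserved. The toy in-memory model below illustrates that contract only; it is not real Kafka client code, and it uses stdlib MD5 as a stand-in for Kafka's murmur2 partitioner, so actual partition assignments on a real cluster would differ.

```python
import hashlib

class ToyTopic:
    """In-memory stand-in for a partitioned topic (illustration only)."""

    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def _partition_for(self, key):
        # Deterministic key -> partition mapping, as a producer's default
        # partitioner does (Kafka uses murmur2; md5 is a stand-in here).
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.partitions)

    def produce(self, key, value):
        """Append a record to the partition chosen by its key."""
        p = self._partition_for(key)
        self.partitions[p].append((key, value))
        return p

    def consume(self, partition):
        """Yield records from one partition in append (offset) order."""
        yield from self.partitions[partition]

topic = ToyTopic(num_partitions=4)
p1 = topic.produce("account-42", "opened")
p2 = topic.produce("account-42", "credited")
assert p1 == p2  # same key -> same partition -> per-key order preserved
print([v for _, v in topic.consume(p1)])  # ['opened', 'credited']
```

The design point the sketch captures is why producers choose keys carefully: ordering is guaranteed only within a partition, so any stream that must be consumed in order (e.g. events for one account) needs a stable key.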
