
Sr Data Science Engineer

Brief description:

About Techversant Infotech:


We are a global software solutions firm providing the full life cycle of capabilities from strategy through development, testing, maintenance and outsourcing. Techversant offers expertise and consultation in multiple technology domains, including ColdFusion, Python, Ruby on Rails, JavaScript, NodeJS, Moodle, Odoo, PHP, Golang, Blockchain, Artificial Intelligence, Data Engineering, Machine Learning and IoT.


What’s important to us:


We are seeking an experienced Sr. Data Science Engineer who will be responsible for developing and driving new business opportunities internationally. The incumbent will be responsible for discovering sales opportunities and creating qualified leads.




Scope of Opportunity


The successful applicant will understand the need to achieve a balance between innovation and the most appropriate solution for our clients. Fundamental to this role is a willingness to learn, become an integral part of the team and adopt the languages, tools, and applications that form part of our environment. Your managers will always be open to new ideas and encourage proactivity. If you were to have recommendations on how they can improve their approach, processes or technology, they will welcome them.


We know that people do their best work when they are taken care of, so we make sure to offer great benefits.


At Techversant, you’ll enjoy:


1) Excellent career growth opportunities and exposure to multiple technologies


2) Fixed weekday schedule, meaning you’ll have your weekends off!


3) Family Medical Insurance


4) Unique leave benefits and encashment options based on performance 


5) Long-term growth opportunities in a fun, family-like environment surrounded by other experienced developers.


6) Various internal employee rewards programmes based on performance.


7) Opportunities for various other bonus programmes: for training hours taken, certifications, and special value delivered to the business through ideas and innovation.


8) Work-life balance: work from home, flexible work timings, early-out Fridays, various social and cultural activities, etc.


9) Company-sponsored international tours.


Job description:


* Create and maintain optimal data pipeline architecture.


* Assemble large, complex data sets that meet functional / non-functional business requirements.


* Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


* Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.


* Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.


* Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Should have experience in data pre-processing, cleaning, wrangling, etc.


* Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.


* Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.


* Work with data and analytics experts to strive for greater functionality in our data systems.

Preferred skills

We are looking for a candidate with 5+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:



* Experience with big data tools: Hadoop, Spark, Kafka, etc., including a broad range of ML algorithms in PySpark.


* Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.


* Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.


* Experience with AWS cloud services: EC2, EMR, RDS, Redshift


* Experience with stream-processing systems: Storm, Spark-Streaming, etc.


* Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.