Job Details
Hi,
We currently have a few job openings that may interest you. Please find below a summary of one of our current opportunities.
Kindly let me know your interest and share your updated resume and availability as soon as possible.
Role: Consultant AI/ML, MLOps, Python, Azure, Kubernetes, Spark, ETL
Location: California (hybrid only, no remote) - San Francisco or Cupertino office, 3 days per week on site
Duration: 6 Months+
Job Description:
- MLOps / ML Engineering => 8/10 => 6 to 8 years' experience
- Platform Development / Microservices / Architecture => 7/10 => 8 to 10 years' experience
- Docker / Containers / Kubernetes => 6/10 => 5 to 6 years' experience
- Data Science / Machine Learning => 5/10 => 5 to 6 years' experience
- Azure (highly preferred) => 6 to 8 years' experience
- Python (must have) => 8 to 10 years' experience
- Spark (required) => 4 to 5 years' experience
- ML tools such as AzureML, MLflow, Databricks, Kubeflow, etc. - must have deployed and worked with some of these tools
- Kubernetes - very strong, #1 priority, 4 to 5 years' experience
- Data pipelines / ETL - preferred; bring data in and send results back to other teams
- Good understanding of Python and will code in Python; not working on APIs
- Good understanding of machine learning pipelines
- Argo Workflows experience
- Docker and Jenkins needed
- Workflow experience needed
Build, modernize, and maintain the Client's AI/ML platform and related frameworks/solutions.
Participate in and contribute to architecture and design reviews.
Build/deploy the AI/ML platform in Azure with open-source applications (Argo, Seldon) and/or cloud/SaaS solutions (Azure ML, Databricks, Truera).
Design, develop, test, deploy, and maintain distributed, GPU-enabled machine learning pipelines using K8s/AKS-based Argo Workflows orchestration, collaborating with data scientists.
Enable and support distributed data processing on the platform using Apache Spark and other distributed/scale-out technologies.
Build ETL pipelines and ingress/egress methodologies in the context of AI/ML use cases.
Build highly scalable backend REST APIs for metadata management and other miscellaneous business needs.
Deploy applications to Azure Kubernetes Service using GitLab CI/CD, Jenkins, Docker, kubectl, Helm, Terraform, and Kubernetes manifests.
Experience in branching, tagging, and maintaining versions across different environments in GitLab.
Review code developed by other developers and provide feedback to ensure best practices (e.g., design patterns, accuracy, testability, efficiency).
Work with relevant engineering, operations, business lines, and infrastructure groups to ensure effective architectures and designs and communicate findings clearly to technical and non-technical partners.
Perform functional, benchmark, and performance testing and tuning to achieve performant AI/ML workflows, interactive notebook user experiences, and pipelines.
Assess, design, and optimize resource capacities for ML-based, resource-intensive (GPU) workloads.
Communicate application processes and results to all parties involved in the product team, such as engineers, the product owner, the scrum master, and third-party vendors.
What skills/technologies are required (please include the number of years of experience required)?
- ML Platform / ML Engineering => 8/10 experience
- Platform Development / Microservices / Architecture => 7/10 experience
- Docker/Containers/Kubernetes => 6/10 experience
- Data Science / Machine Learning => 5/10 experience
- Azure - experience highly preferred
- Python (must have) - 6 to 8 years of experience in software development and with data structures/algorithms
- Good understanding of distributed systems such as Spark and Kafka - good to have => 5/10
- Good understanding of security - TLS and RBAC
- ML tools experience such as Argo Workflows, AzureML, MLflow, Databricks, etc. - deployed and worked with some of these tools
- AI tools experience such as generative AI, ChatGPT, MetaGPT, LLMs, Llama 2, etc. - deployed and worked with some of these tools
-----
Ashish
- Dice Id: 91132378
- Position Id: 2024-6600