AI/NLP Engineer
Location: Texas
Posted: 26 days ago
Salary: $130K - $150K / year
Job Description
Job Requirements
- Build robust data ingestion and retrieval pipelines that power real-time and batch AI applications using open-source and proprietary tools.
- Integrate external data sources (e.g., knowledge graphs, internal databases, third-party APIs) to enhance the context-awareness and capabilities of LLM-based workflows.
- Evaluate and implement best practices for prompt design, model alignment, safety, and guardrails for responsible AI deployment.
- Stay on top of emerging AI research and contribute to internal knowledge-sharing, tech talks, and proof-of-concept projects.
- Author clean, well-documented, and testable code; participate in peer code reviews and engineering design discussions.
- Proactively identify bottlenecks and propose solutions to improve system scalability, efficiency, and reliability.
What We Value
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5+ years of hands-on experience in applied AI, NLP, or ML engineering (with at least 2 years working directly with LLMs, RAG, semantic search, and Agentic AI).
- Deep familiarity with LLMs (e.g., OpenAI, Claude, Gemini), prompt engineering, and responsible deployment in production settings.
- Experience designing, building, and optimizing RAG pipelines, semantic search, vector databases (e.g., ElasticSearch, Pinecone), and Agentic or multi-agent AI workflows in large-scale production setups. Exposure to the MCP and A2A protocols is a plus.
- Exposure to GraphRAG or graph-based knowledge retrieval techniques is a strong plus.
- Strong proficiency with modern ML frameworks and libraries (e.g., LangChain, LlamaIndex, PyTorch, HuggingFace Transformers).
- Ability to design APIs and scalable backend services, with hands-on experience in Python.
- Experience building, deploying, and monitoring AI/ML workloads in cloud environments (AWS, Azure) using services like AWS SageMaker, AWS Bedrock, AzureAI, etc. Experience with tools to load-balance across different LLM providers is a plus.
- Familiarity with MLOps practices, CI/CD for AI, model monitoring, data versioning, and continuous integration.
- Demonstrated ability to work with large, complex datasets, perform data cleaning, feature engineering, and develop scalable data pipelines.
- Excellent problem-solving, collaboration, and communication skills; able to work effectively across remote and distributed teams.
- Proven record of shipping robust, high-impact AI solutions, ideally in fast-paced or regulated environments.
Technologies We Use
- Cloud & AI Platforms: AWS (Bedrock, SageMaker, Lambda), AzureAI, Pinecone, ElasticCloud, Imply Polaris.
- LLMs & NLP: HuggingFace, OpenAI API, LangChain, LlamaIndex, Cohere, Anthropic.
- Backend: Python (primary), Elixir (other teams).
- Data Infrastructure: ElasticSearch, Pinecone, Weaviate, Apache Kafka, Airflow.
- Frontend: TypeScript, React.
- DevOps & Automation: Terraform, EKS, GitHub Actions, CodePipeline, ArgoCD.
- Monitoring & Metrics: Grafana (metrics dashboards, alerting), Langfuse (Agentic AI observability, prompt management).
- Testing: Playwright for end-to-end test automation.
- Other Tools: Mix of open-source and proprietary frameworks tailored to complex, real-world problems.
What You Can Expect
- Enjoy great team camaraderie whether at our Irvine office or working remotely.
- Thrive on the fast pace and challenging problems to solve.
- Modern technologies and tools.
- Continuous learning environment.
- Opportunity to communicate and work with people of all technical levels in a team environment.
- Grow as you are given feedback and incorporate it into your work.
- Be part of a self-managing team that enjoys support and direction when required.
- 3 weeks of paid vacation – out the gate!!