Sr. Forward Deployed Engineer - Private Cloud, Data & AI Enterprise Solutions
Location: United States
Posted: 2 days ago
Salary: $164K - $274K / year
Seniority: Senior
Job Description
Key Accountabilities:
- Embed with strategic enterprise customers to rapidly diagnose critical business challenges, map data landscapes, and co-design AI solutions on-site.
- Lead end-to-end solution design and delivery of agentic AI workflows, RAG pipelines, knowledge graphs, and real-time decision-making applications.
- Drive rapid prototyping and POCs that demonstrate tangible business value within days to weeks.
- Serve as the primary technical owner across the full project lifecycle: scoping, architecture, build, deployment, and post-launch optimization.
- Architect production-grade Enterprise AI applications on Partner Foundry Solutions or Rackspace Private Cloud and GPU infrastructure, integrating with enterprise systems (ERP, CRM, data warehouses, data lakes).
- Build scalable data pipelines across structured and unstructured data using ETL/ELT, vector databases (Pinecone, Weaviate, AstraDB), and knowledge base frameworks.
- Develop and fine-tune LLM/SLM solutions; implement RAG architectures (LlamaIndex, Haystack) and orchestrate multi-agent workflows (LangChain, LangGraph, CrewAI).
- Ship with full-stack and DevOps depth: Python, Node.js/Go, React/Vue, Docker, Kubernetes, CI/CD, and GPU cluster management.
- Champion observability, monitoring, and telemetry to ensure trustworthy, auditable, and versioned AI agents in production.
- Identify expansion opportunities by working with sales and customer success to uncover high-value use cases across new business domains.
- Feed structured field insights back to Platform Engineering and Product on feature gaps, emerging needs, and usability improvements.
- Build reusable IP through reference architectures, accelerators, frameworks, and technical best practices that scale future engagements.
- Mentor engineers and customer teams, driving knowledge transfer and building internal AI competencies.
Preferred Qualifications:
- Experience with Palantir Foundry, AIP, ontology modeling, Uniphore BAIC, or similar Enterprise AI development platforms.
- Knowledge of SLM fine-tuning, model distillation, RLHF, and AI evaluation frameworks.
- Experience building agentic AI solutions: multi-agent systems, tool use, and autonomous workflow orchestration.
- Familiarity with GPU infrastructure (NVIDIA H100/B200, InfiniBand) and private cloud platforms (OpenStack, VMware).
- Foundry certifications from Palantir/Uniphore or AI/ML-related certifications.
- Prior experience in technology consulting, AI startups, or Forward Deployed / Solutions Engineering roles.
- Domain expertise in financial services, healthcare, supply chain, defense, energy, or manufacturing.
- Experience with knowledge graphs, semantic modeling, and ontology-driven data management.
Required Qualifications:
- BS/MS/PhD in Computer Science, Data Science, Engineering, Mathematics, Physics, or related field.
- 10+ years in software engineering, data engineering, or AI/ML delivery, including 4+ years in customer-facing or field roles.
- Proven track record in building and deploying AI/ML applications in production at enterprise scale.
- Deep full-stack proficiency: Python (required), Node.js/Go, React/Vue, SQL/NoSQL databases.
- Hands-on with LLMs, prompt engineering, vector databases, data pipelines, application dashboards, RAG pipelines, and agent orchestration frameworks.
- Strong DevOps skills: Docker, Kubernetes, CI/CD, GPU infrastructure, cloud-native deployment patterns.
- Experience integrating across heterogeneous enterprise systems: ERP, data warehouses, data lakes, and streaming architectures.
- Ability to translate ambiguous customer needs into actionable engineering plans under tight timelines.
- Excellent communication skills: comfortable with C-suite presentations, technical workshops, and cross-functional collaboration.
- Willingness to travel up to 25% for on-site customer engagements.
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, and building, managing, and optimizing those solutions into the future. Named a best place to work year after year by Fortune, Forbes, and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers, and deliver the future.
More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
Benefits
- Compensation reflects the cost of labor across several geographic markets.
- The base pay for this position ranges from $164,851.50/year in our lowest geographic market up to $274,752.50/year in our highest geographic market.
- Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
- The compensation package may also include incentive compensation opportunities in the form of annual bonus or incentives, equity awards, and an Employee Stock Purchase Plan (ESPP).