Canvas Medical

EMR and payments platform for healthcare

Applied AI Software Engineer

Software Engineer · Full Time · Remote · Team 51-200 · Since 2016 · H1B: No Sponsor

Location

California

Posted

26 days ago

Salary

$300K - $400K / year

Bachelor's Degree · 9 yrs exp · English · Claude · Foundation Model APIs · Gemini · OpenAI · Python · SQL

Job Description

Canvas Medical is the electronic medical records (EMR) and payments development platform for healthcare. We build modern, elegant front- and back-end tooling to enable new ways for developers and clinicians to collaborate on healthcare's toughest challenges. Canvas is institutionally backed by some of the best technology investors in the world, whose funds have backed notable health tech companies such as GoodRx, Oscar Health, and Hims & Hers Health.

The Role

We're hiring an Applied AI Software Engineer to lead evaluations for agents in development and for the post-deployment fleet of agents operating in Canvas to automate work for our customers. You will help develop agents in Canvas using state-of-the-art foundation model inference and fine-tuning APIs along with our server-side SDK, which provides extensive tools and virtually all the context necessary for excellent agent performance. You'll be responsible for designing and running rigorous evaluation experiments that measure performance, safety, and reliability across a wide variety of clinical, operational, and financial use cases.

This role is ideal for someone with deep experience evaluating LLM-based agents at scale. You'll create high-fidelity unit evals and end-to-end evaluations, define expert-determined ground-truth outcomes, and manage iterations across model variants, prompts, tool use, and context window configurations. Your work will directly inform model selection, fine-tuning, and go/no-go decisions for AI features used in production settings.

You'll collaborate with product, ML engineering, and clinical informatics teams to ensure that Canvas's AI agents are not only capable but also trustworthy and robust under real-world healthcare constraints. You will also work with technical product marketers and developer advocates to help our developer community and the broader market understand the uniquely differentiated value of agents in Canvas.
What You'll Do

  • Design and execute large-scale evaluation plans for LLM-based agents performing clinical documentation, scheduling, billing, communications, and general workflow automation tasks.
  • Build end-to-end test harnesses that validate model behavior under different configurations (prompt templates, context sources, tool availability, etc.).
  • Partner with clinicians to define accurate expected outcomes (gold standards) for performance comparisons in domains of clinical consequence, and partner with other subject matter experts in non-clinical domains.
  • Run and replicate experiments across multiple models, parameters, and interaction types to determine optimal configurations.
  • Deploy and maintain ongoing sampling for post-deployment governance of agent fleets.
  • Analyze results and summarize tradeoffs clearly for product and engineering stakeholders, as well as for technical stakeholders among our customers and in the broader market.
  • Take ownership of internal eval tooling and infrastructure, ensuring speed, rigor, and reproducibility.
  • Identify and recommend candidates for reinforcement fine-tuning or retrieval augmentation based on gaps identified in evals.

What Success Looks Like at 90 Days

  • An expanded set of robust evaluation suites exists for all major AI features currently in development and in production.
  • We have well-defined correctness criteria for each workflow and a reliable source of expert-determined outcome objects.
  • Product and engineering teams have integrated your evaluation tools into their daily workflows.
  • Evaluation results are clearly documented and reproducible, enabling trust in the performance trajectory.
  • You have effectively engaged your marketing counterparts to translate your work into key messages for the market and for Canvas customers.

Qualifications

  • 5+ years of experience in applied machine learning or AI engineering, with a focus on evaluation and benchmarking.
  • Proficiency with foundation model APIs and experience orchestrating complex agent behaviors via prompts or tools.
  • Experience designing and running high-throughput evaluation pipelines, ideally including human-in-the-loop or expert-labeled benchmarks.
  • Superlative Python engineering skills and familiarity with experiment management tools and data engineering toolsets in general, including, yes, SQL and database management.
  • Familiarity with clinical or healthcare data is a strong plus.
  • Experience with reinforcement fine-tuning, model monitoring, or RLHF is a plus.

Research shows that women and other minority groups may avoid applying if they don't meet 100% of the qualifications. We encourage you to apply even if you don't meet everything listed in the job posting. Canvas Medical provides equal employment opportunities to all employees and applicants for employment without regard to race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
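To give candidates a concrete picture, here is a minimal, hypothetical sketch of the kind of configuration-grid evaluation harness this role involves: running every (model, prompt template) pair against expert-labeled cases and ranking by exact-match accuracy. The `run_agent` stub, case data, and model/template names are invented for illustration; a real harness would call a foundation-model API and load clinician-labeled gold outcomes instead.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalCase:
    """One unit eval: an input plus an expert-determined gold outcome."""
    prompt: str
    gold: str

def run_agent(model: str, template: str, case: EvalCase) -> str:
    """Stand-in for a real foundation-model call (hypothetical).
    Deterministic stub so the sketch runs offline."""
    return "visit_scheduled" if "schedule" in case.prompt else "unknown"

def evaluate(model: str, template: str, cases: list[EvalCase]) -> float:
    """Exact-match accuracy against expert gold labels."""
    hits = sum(run_agent(model, template, c) == c.gold for c in cases)
    return hits / len(cases)

def grid_search(models, templates, cases):
    """Score every (model, template) configuration; best first."""
    results = {
        (m, t): evaluate(m, t, cases)
        for m, t in itertools.product(models, templates)
    }
    return sorted(results.items(), key=lambda kv: -kv[1])

cases = [
    EvalCase("schedule a follow-up visit", "visit_scheduled"),
    EvalCase("post this payment", "payment_posted"),
]
ranked = grid_search(["model-a", "model-b"], ["terse", "verbose"], cases)
```

In practice the same grid extends to tool availability and context-window configurations, and the scorer is swapped for domain-appropriate correctness checks rather than exact match.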

Job Requirements

  • You have extensive hands-on experience evaluating LLM-based systems, including multi-agent architectures and prompt-based pipelines.
  • You are deeply familiar with foundation model APIs (OpenAI, Claude, Gemini, etc.) and how to systematically benchmark agent performance using those models in applied settings.
  • You care about correctness and reproducibility and have built or contributed to frameworks for automated evals, annotation pipelines, and experiment tracking.
  • You bring structure to ambiguity and know how to define “correctness” in complex, nuanced domains.
  • You are comfortable collaborating across engineering, product, and clinical subject matter experts.
  • You are not afraid of complexity and are energized by the rigor required in healthcare deployments.
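The reproducibility and post-deployment-governance themes above are commonly addressed with deterministic, hash-based sampling of production agent runs, so the same run is always selected (or skipped) for expert review regardless of when or where the check is made. A minimal sketch; the `run_id` format and 5% rate are illustrative assumptions, not Canvas specifics:

```python
import hashlib

def sample_for_review(run_id: str, rate: float = 0.05) -> bool:
    """Deterministically decide whether a production agent run is routed
    to expert review. Hashing the run id gives a stable, reproducible
    decision, independent of process, host, or replay order."""
    digest = hashlib.sha256(run_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate

# Sample roughly 5% of a fleet's runs for human review.
sampled = [rid for rid in (f"run-{i}" for i in range(1000))
           if sample_for_review(rid)]
```

Because the decision is a pure function of the run id, an auditor can later re-derive exactly which runs were in scope, which is what makes the sampling defensible in a governance setting.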
