Who are we?
Coastal Carbon is a seed-funded startup on a mission to create positive impact through earth observation and AI. Founded at the University of Waterloo by a team of PhDs and engineers, we’re backed by some of the best AI and climate tech investors, including HF0, Inovia Capital, and Propeller Ventures; angels such as James Tamplin (co-founder of Firebase) and Sid Gorham (co-founder of OpenTable and Granular); and partners including Amazon AWS and the United Nations.
What do we do?
We’re building multimodal foundation models for the natural world. We believe there’s more to the world than the internet, and more to intelligence than memorizing the internet. Our models are trained on satellite remote sensing data and real-world ground-truth data, and are used by our customers in nature conservation, carbon dioxide removal, and government to protect and positively impact our rapidly changing world. Our ultimate goal is to build AGI of the natural world.
About the role
We are looking for an AI Engineer to join our team and help build, train, and scale large machine learning models.
The role will involve:
  1. Programming & Implementation:
    • Collaborate with researchers and scientists to implement and scale proof-of-concept (POC) models.
    • Rapidly implement the latest state-of-the-art methods from literature.
    • Train large-scale foundation models with billions of parameters, as well as smaller specialized models.
    • Develop AI systems capable of accurately understanding the natural world and generating new knowledge.
    • Work across various data modalities, including text, audio, images, and video.
  2. Distributed Training:
    • Build distributed training systems for AI models on high-performance computing (HPC) clusters.
    • Ensure smooth operation of large ML training jobs.
    • Debug and customize third-party source code.
    • Resolve non-ML software issues related to data quality, data preparation, and job startup speed.
Requirements
  • Bachelor’s degree in computer science, engineering, or a related field, or equivalent experience.
  • 3+ years of relevant experience.
  • Willingness to dive into large ML codebases for debugging.
  • Demonstrated experience with deep learning and transformer models.
  • Familiarity with training and fine-tuning large models.
  • Proficiency in frameworks like PyTorch or TensorFlow.
  • Location: strong preference for in-person work in Waterloo; however, remote work is possible for exceptional candidates.
Nice to have
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Experience with scalable training-inference pipelines, preferably on AWS.
  • Proficiency in scripting languages such as Python, Bash, or PowerShell.
  • Experience with containerization and orchestration technologies like Docker and Kubernetes.
  • Team player, willing to take on a variety of tasks to support the team.