Encultured AI

This is gonna be fun...

Encultured AI is a for-profit company with a public benefit mission: to develop technologies promoting the long-term survival and flourishing of humanity and other sentient life.

We’re hiring! See below for our job openings.

Our principles

At Encultured, we believe advanced AI technology could be used to make the world a safer, happier, and healthier place to live. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions. Given this, our goal is to develop products and services that help humanity steer toward the benefits and away from the risks of advanced AI systems.

Our current main strategy is to build a platform for AI safety and alignment experiments, comprising a suite of environments, tasks, and tools for building more environments and tasks. The platform itself will serve as an interface to a number of consumer-facing products, so our researchers and collaborators will have back-end access to services with real-world users. Over the next decade or so, we expect that an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products we aim to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.
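To make the "environments and tasks" idea concrete, here is a minimal sketch of what an experiment interface on such a platform might look like. All names here (`Environment`, `CountdownEnv`, `run_episode`) are hypothetical illustrations, not Encultured's actual API:

```python
# Illustrative sketch only: a toy environment/episode interface of the kind
# an AI-safety testbed platform might expose. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Protocol


class Environment(Protocol):
    """Anything with reset() and step() can be plugged into an experiment."""
    def reset(self) -> dict: ...
    def step(self, action: str) -> tuple[dict, float, bool]: ...


@dataclass
class CountdownEnv:
    """Toy environment: the episode ends after a fixed number of steps."""
    horizon: int = 3
    _t: int = field(default=0, init=False)

    def reset(self) -> dict:
        self._t = 0
        return {"t": self._t}

    def step(self, action: str) -> tuple[dict, float, bool]:
        self._t += 1
        done = self._t >= self.horizon
        return {"t": self._t}, 1.0, done


def run_episode(env: Environment) -> float:
    """Roll out one episode with a trivial fixed policy; return total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step("noop")
        total += reward
    return total
```

The design point this sketch gestures at: if environments share a small common interface, a safety researcher can swap in new tasks without touching the experiment-running code.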

Like any start-up, we’re pretty optimistic about our potential to grow and make it big. Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.

Also, fun! We think our approach to AI has the potential to be very fun, and we’re very much hoping to keep it that way for the whole team :)

Experiments we’re interested in supporting

We’re particularly interested in experiments examining:

Eventually, we want every safety and alignment researcher in the world to be able to test their solution ideas on our platform. This will yield not only benchmarks for comparison, but also a playground for interaction, where at least some acute safety failures and harmful interaction dynamics can be discovered in silico before they reach the real world. It would probably be irresponsible to expect that our platform alone could ever certify a new AI technology as definitely safe, but if a technology appears unsafe on our platform, it probably shouldn't be deployed in reality (which would probably be the opposite of fun).

Founders

Dr. Andrew Critch
CEO & Co-founder

Andrew is deeply driven to contribute to AI safety efforts on a global scale. Immediately prior to Encultured, Andrew spent 5 years as a full-time research scientist at UC Berkeley's Center for Human-Compatible AI (CHAI), where he retains a part-time appointment. In 2017, he co-founded the Berkeley Existential Risk Initiative, a non-profit dedicated to improving humanity's long-term prospects for survival and flourishing, where he volunteered as Executive Director for three years and now serves as President. Andrew also established the Survival and Flourishing Fund and Survival and Flourishing Projects with the support of philanthropist Jaan Tallinn, and co-developed the S-process for philanthropic grant-making with Oliver Habryka.

In 2013, Andrew earned his Ph.D. in mathematics at UC Berkeley, studying applications of algebraic geometry to machine learning models. During that time, he co-founded the Center for Applied Rationality (CFAR) and the Summer Program on Applied Rationality and Cognition (SPARC). He was offered university faculty and research positions in mathematics, mathematical biosciences, and philosophy; worked as an algorithmic stock trader at Jane Street Capital's New York City office (2014-2015); and served as a Research Fellow at the Machine Intelligence Research Institute (2015-2017).

Most recently, Andrew had the good sense to be super-impressed by Nick’s amazing engineering skills and realized they should found a company together :)

Dr. Nick Hay
CTO & Co-founder

Nick wants to ensure that powerful AI is developed for the benefit of humanity. He believes that doing so requires a good understanding not only of artificial intelligence but also of human intelligence, including deep questions about human minds and culture spanning anthropology, cognitive linguistics, and neuroscience. Prior to Encultured, Nick spent 5 years at Vicarious AI working on approaches to artificial general intelligence (AGI) grounded in real-world robotics. In 2015, Nick earned his PhD at UC Berkeley under Professor Stuart Russell, applying reinforcement learning and Bayesian analysis to the metalevel control problem: how can an agent learn to control its own computations?

Nick first began thinking deeply about the impact of AI on humanity upon reading Eliezer Yudkowsky’s Creating Friendly AI in 2001, subsequently interning at MIRI in 2006 and attending the Singularity Summit in 2007. Originally hailing from New Zealand, Nick is still getting used to walking upside down.

Job openings

Job: Machine Learning Engineer

Apply here for this technical staff position.

Location:

Mostly remote, over Zoom calls and Slack channels. A few times per month we’ll rent a workspace in the San Francisco / Berkeley area for an in-person workday, so if you live nearby it’d be great to have you attend those.

Compensation:

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises are also available for strong individual performance or for team-wide accomplishments that expand our revenue stream.

Necessary qualifications:

In this role we need candidates to have experience with:

Other nice-to-have qualifications (not required):

These are not required, but our team will welcome and make good use of experience with:

Apply here for this technical staff position.

Job: Immersive Interface Engineer

Apply here for this technical staff position.

Location:

Mostly remote, over Zoom calls and Slack channels. A few times per month we’ll rent a workspace in the San Francisco / Berkeley area for an in-person workday, so if you live nearby it’d be great to have you attend those.

Compensation:

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises are also available for strong individual performance or for team-wide accomplishments that expand our revenue stream.

Necessary qualifications:

In this role we need candidates to have experience with:

Other nice-to-have qualifications (not required):

These are not required, but our team will welcome and make good use of experience with:

Apply here for this technical staff position.

Job: Operations & Executive Assistant

Apply here for this operations position.

Applicants for this role must have authorization to work in the United States (i.e., be a US citizen or green card holder).

Location:

Mostly remote. Ideally some work would be done in-person in the SF Bay Area, but we will also consider fully remote candidates.

Compensation:

Pay will be adjustable upward for existing experience and/or good performance:

Operations Assistant task examples:

Executive Assistant task examples:

Room for growth:

With good performance, the “Operations Assistant” title is open to promotion to “Operations Associate”, and with excellent performance, “Operations Manager”.

Apply here for this operations position.