Encultured AI is a for-profit company with a public benefit mission: to develop technologies promoting the long-term survival and flourishing of humanity and other sentient life.
We’re hiring! See below for our job openings.
Our principles
At Encultured, we believe advanced AI technology could be used to make the world a safer, happier, and healthier place to live. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions. Given this, our goal is to develop products and services that help humanity steer toward the benefits and away from the risks of advanced AI systems.
Our current main strategy is to build a platform for AI safety and alignment experiments, comprising a suite of environments and tasks, plus tools for building more of them. The platform itself will be an interface to a number of consumer-facing products, so our researchers and collaborators will have back-end access to services with real-world users. Over the next decade or so, we expect an increasing number of researchers, both inside and outside our company, to transition to developing safety and alignment solutions for AI technology, and through our platform and products we aim to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.
Like any start-up, we’re pretty optimistic about our potential to grow and make it big. Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.
Also, fun! We think our approach to AI has the potential to be very fun, and we’re very much hoping to keep it that way for the whole team :)
Experiments we’re interested in supporting
We’re particularly interested in experiments examining:
- Alignment with humans and human-like cultures. For AI alignment solutions to work in the real world, they have to work with the full richness of humans and human culture (e.g., the transmissibility of culture through interaction, and language as a form of culture). We would like to develop benchmarks that capture aspects of this richness.
- Multi-paradigm cooperation. We’re interested in testing whether a given agent or group is able to interact safely and productively with another agent or group employing different alignment paradigms, different coordination norms, or different value functions.
- Assistants. AI “assistant” algorithms are intended to learn and align with the intentions of a particular person or group, and take safe actions assisting with those intentions.
- Mediators. We’re interested in whether “mediator” agents, when introduced into an interaction between two or more groups, can bring greater peace and prosperity between those groups.
Eventually, we want every safety and alignment researcher in the world to be able to test their solution ideas on our platform. This will not only yield benchmarks for comparison, but also a playground for interaction, where at least some acute safety failures and harmful interaction dynamics can be discovered in silico before reaching the real world. It’s probably irresponsible to expect our platform alone will ever be enough to certify a new AI technology as definitely safe, but if it appears unsafe on our platform then it probably shouldn’t be deployed in reality (which would probably be the opposite of fun).
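To make the "multi-paradigm cooperation" idea above concrete, here's a toy sketch of the minimal setting for such an experiment: two agents with different reward functions acting in one shared environment. To be clear, this is purely illustrative; every class and function name below is hypothetical and none of it is our platform's actual API.

```python
from typing import Callable, Dict, List

class SharedGridEnv:
    """Toy 1-D world: agents move along a line and collect resources.

    Each agent supplies its own reward function, so agents embodying
    different value functions can be dropped into the same episode
    and their interaction dynamics observed.
    """

    def __init__(self, size: int, resources: List[int]):
        self.size = size
        self.resources = set(resources)            # cells containing a resource
        self.positions: Dict[str, int] = {}        # agent name -> cell index
        self.reward_fns: Dict[str, Callable[[bool], float]] = {}

    def add_agent(self, name: str, start: int,
                  reward_fn: Callable[[bool], float]) -> None:
        self.positions[name] = start
        self.reward_fns[name] = reward_fn

    def step(self, actions: Dict[str, int]) -> Dict[str, float]:
        """actions: -1 (left) or +1 (right) per agent; returns per-agent rewards."""
        rewards = {}
        for name, move in actions.items():         # resolved in insertion order
            pos = max(0, min(self.size - 1, self.positions[name] + move))
            self.positions[name] = pos
            collected = pos in self.resources
            if collected:
                self.resources.discard(pos)        # resources are consumable
            rewards[name] = self.reward_fns[name](collected)
        return rewards

# Two agents with conflicting values share one world:
env = SharedGridEnv(size=5, resources=[2])
env.add_agent("gatherer", start=1, reward_fn=lambda got: 1.0 if got else 0.0)
env.add_agent("pacifist", start=3, reward_fn=lambda got: -0.1 if got else 0.1)

rewards = env.step({"gatherer": +1, "pacifist": -1})
# Both agents converge on cell 2; the gatherer collects the resource first,
# so the pacifist finds the cell empty and receives its "did nothing" reward.
```

Even in a world this small, the experimenter can log who reached a contested resource first and how each value function scored the outcome, which is the shape of the cross-paradigm questions we want to ask at much larger scale.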
Founders

Andrew is deeply driven to contribute to AI safety efforts on a global scale. Immediately prior to Encultured, Andrew spent 5 years working as a full-time research scientist at UC Berkeley, within the Center for Human-Compatible AI (CHAI), where he retains a part-time appointment. In 2017, Andrew also co-founded the Berkeley Existential Risk Initiative, a non-profit dedicated to improving humanity’s long-term prospects for survival and flourishing, where he volunteered as Executive Director for three years, and now volunteers as President. Andrew also established the Survival and Flourishing Fund and Survival and Flourishing Projects with the support of philanthropist Jaan Tallinn, and co-developed the S-process for philanthropic grant-making with Oliver Habryka.
In 2013, Andrew earned his Ph.D. in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. During that time, he co-founded the Center for Applied Rationality (CFAR) and the Summer Program on Applied Rationality and Cognition (SPARC). He was offered university faculty and research positions in mathematics, mathematical biosciences, and philosophy; worked as an algorithmic stock trader at Jane Street Capital’s New York City office (2014-2015); and served as a Research Fellow at the Machine Intelligence Research Institute (2015-2017).
Most recently, Andrew had the good sense to be super-impressed by Nick’s amazing engineering skills and realized they should found a company together :)

Nick wants to ensure that powerful AI is developed for the benefit of humanity, and believes that to do this we need a good understanding not only of artificial intelligence but also of human intelligence, including deep questions about human minds and culture spanning anthropology, cognitive linguistics, and neuroscience. Prior to Encultured, Nick spent 5 years at Vicarious AI working on approaches to artificial general intelligence (AGI) grounded in real-world robotics. In 2015, Nick earned his PhD at UC Berkeley under Professor Stuart Russell, applying reinforcement learning and Bayesian analysis to the metalevel control problem: how an agent can learn to control its own computations.
Nick first began thinking deeply about the impact of AI on humanity upon reading Eliezer Yudkowsky’s Creating Friendly AI in 2001, subsequently interning at MIRI in 2006 and attending the Singularity Summit in 2007. Originally hailing from New Zealand, Nick is still getting used to walking upside down.
Job openings
Job: Machine Learning Engineer
Apply here for this technical staff position.
Location:
Mostly remote, over Zoom calls and Slack channels. A few times per month we’ll rent a workspace in the San Francisco / Berkeley area for an in-person workday, so if you live nearby it’d be great to have you attend those.
Compensation:
Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises are also available for strong individual performance or for team-wide accomplishments that grow our revenue.
Necessary qualifications:
In this role we need candidates to have experience with:
- building and training reinforcement learning systems in Python, ideally for simulated robotics (e.g., MuJoCo), video-game-like environments (e.g., Atari), or multi-agent systems.
Other nice-to-have qualifications (not required):
These are not required, but our team will welcome and make good use of experience with:
- building and training state-of-the-art natural language models, e.g., transformers;
- applying and developing AI alignment methods;
- constructing complex RL environments, especially multi-agent or robotics environments, or related areas like simulation and video game development;
- scalable web app development, either frontend or backend;
- mobile app development;
- research on humans and human culture, including but not limited to background in the humanities, social sciences, cognitive science, and biology;
- machine learning interpretability research and tools;
- PhD-level research and writing.
Apply here for this technical staff position.
Job: Immersive Interface Engineer
Apply here for this technical staff position.
Location:
Mostly remote, over Zoom calls and Slack channels. A few times per month we’ll rent a workspace in the San Francisco / Berkeley area for an in-person workday, so if you live nearby it’d be great to have you attend those.
Compensation:
Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises are also available for strong individual performance or for team-wide accomplishments that grow our revenue.
Necessary qualifications:
In this role we need candidates to have experience with:
- building immersive user experiences, e.g., video games on mobile devices or gaming consoles, in any language(s);
- rendering 2D and 3D graphics;
- physics engines;
- producing digital art, and/or recruiting and contracting skilled digital artists;
- Python (for any application).
Other nice-to-have qualifications (not required):
These are not required, but our team will welcome and make good use of experience with:
- scalable web app development, either frontend or backend;
- mobile app/game development;
- console game development;
- research on humans and human culture, including but not limited to background in the humanities, social sciences, cognitive science, and biology;
- machine learning;
- PhD-level research and writing.
Apply here for this technical staff position.
Job: Operations & Executive Assistant
Apply here for this operations position.
Applicants for this role must have authorization to work in the United States (i.e., be a US citizen or green card holder).
Location:
Mostly remote. Ideally some work would be done in-person in the SF Bay Area, but we will also consider fully remote candidates.
Compensation:
Pay will be adjustable upward for existing experience and/or good performance:
- starting at $100k/year plus benefits and equity incentives, for candidates ready to perform the in-person aspects of the work (e.g., by moving to the SF Bay Area).
- starting at $75k/year plus benefits and equity incentives, for candidates not ready to perform the in-person tasks in the Berkeley/Oakland/SF area.
Operations Assistant task examples:
- internet research to find business professionals like lawyers and accountants, e.g., “Can you find me a good accountant who deeply understands the tax treatment of international loans?”;
- corresponding with business professionals and service providers about things like doing our taxes and maintaining our healthcare insurance plan;
- following up by phone and email with the professionals our company chooses to work with, providing basic info to those professionals about our company (e.g. # of employees, etc.), and booking appointments with them;
- being reachable by telephone (“on call”) during regular business hours, for calls from others on our team and from outsiders;
- following instructions regarding the completion of operations tasks;
- (in-person) coming to our physical office space in the Berkeley/Oakland/SF area for an in-person workday with 3-10 other people every 1-2 weeks;
- (in-person) picking up and processing paper mail.
Executive Assistant task examples:
- helping our CEO and CTO to schedule work-related appointments that are not ops-related, e.g., meetings with potential recruits or investors;
- market research: helping our team to answer questions about industries of interest, e.g., “How did Company X first popularize Product Y?”
Room for growth:
With good performance, the “Operations Assistant” title is open to promotion to “Operations Associate”, and with excellent performance, “Operations Manager”.