Sam Altman — "I think the biggest risk is that we build AI that is misaligned with human values."
"We're trying to build something that is both powerful and safe."
"The future is going to be weirder than you think."
"Don't be afraid to pivot."
"We need to make sure that AI doesn't exacerbate existing inequalities."
"I think the best way to learn is to teach."