Interview with Vinod Khosla

Khosla Ventures has been at the forefront of investing in AI and tech. How do you decide what to put your bets on, and what's your approach to innovation?

I first mentioned AI publicly in 2000, when I said that AI would redefine what it means to be human. Ten years later, I wrote a blog post called “Do we need doctors?” In that post, I argued that almost all expertise will become free through AI, for the benefit of humanity.
In 2014, we made our first deep learning investment around AI for images, and soon after, we invested in AI radiology. In late 2018, we decided to commit to investing in OpenAI.
That was a big, big bet for us, and I normally don't make bets that large. But we want to invest in high-risk technical breakthroughs and science experiments.
Our focus here is on what's bold, early, and impactful. OpenAI was very bold and very early; nobody was talking about investing in AI, and it has obviously been very impactful.

You were one of the early investors in OpenAI. What role did you play in bringing Sam Altman back into his role as CEO last year?

I don't want to go into too much detail as I don't think I was the pivotal person doing that, but I was definitely very supportive [of Altman]. I wrote a public blog post that Thanksgiving weekend, and I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots.
Humanity faces risks and we have to manage them, but that doesn't mean we completely forgo the benefits of especially powerful technologies like AI.

What risks do you think AI poses now and in 10 years? And how do you propose to manage those risks?

There was a paper from Anthropic that looked at the issue of explainability in these models. We're nowhere near where we need to be, but the field is still making progress.
Some researchers are dedicated full-time to this question of ‘how do you characterize models and how do you get them to behave in the way we want them to behave?’ It's a complex question, but if we put in the effort, we will have the technical tools to ensure safety.
In fact, I believe the principal area where national funding for university research should go is safety research.
I do think explainability will improve progressively over the next decade. But to demand that it be fully developed before AI is deployed would be going too far.

You're saying that explainability can help mitigate the risk. But what onus does it put on the makers of this technology—the Sam Altmans of the world—to ensure that they are listening to this research and integrating that thinking into the technology itself?

I don't believe any of the major model makers are ignoring it.
Obviously, they don't want to share all the proprietary work they're doing, and each one has a slightly different approach. Sharing everything after spending billions of dollars is just not a good capitalist approach, but that does not mean they're not paying attention.
I believe everybody is. And frankly, safety becomes more of an issue when you get to things like robotics.






