Dearborn considers this an extreme but plausible scenario. “Who makes that decision?” she asks. If we don’t get involved in the discussion on artificial intelligence, these large companies will, she says.

Dearborn worries that people are happily giving away their privacy for convenience’s sake. “They say, ‘I’m fine with it.’ But what happens if [a company] crosses the line?”

“Most companies are now holding more power than the government,” says Dearborn. Big discussions about privacy, personal freedom and boundaries lie ahead, she says. Therefore, “education is key.”

Dearborn says we should determine how we want this new economy to look, understand our role, participate in the conversation and be informed citizens. That way, society at large can make deliberate decisions about how artificial intelligence is “designed and oriented.”

Musk agrees: “AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole.”

Dearborn’s solution: “We should be training young people to be critical of where we’re going,” she says. As someone who champions young people entering the tech industry, she argues that the path to making the greatest impact on major decisions runs through tech, not politics.

“If you want to do healthcare, major in tech with a healthcare focus. If you want to be a neuroscientist, major in tech with that focus,” she says.

Dearborn adds that our biggest responsibility is figuring out how society will use artificial intelligence.

She gives this example: Google has been experimenting with self-driving cars. But what if a self-driving car is about to crash and it has to make a mathematical decision between saving the driver and crashing into a crowd, or avoiding a crowd and crashing the car with the driver in it?

Although she admits this is a drastic case, Dearborn says people will soon have to answer these types of questions. Therefore, it’s important to have a diverse group of people contributing to the conversation about the way artificial intelligence is made and applied.

“It’s like creating a government structure or a social order. You need to ensure you have equal representation and diversity and inclusion in the people who are building these systems,” she says.

Right now the tech space is mainly white men, she says, and they are the ones writing the rules that shape our moral and social principles.

“We can’t take a laissez-faire approach,” says Dearborn. “These are ethical guidelines and boundaries. Don’t let a for-profit company make these decisions on their own.”
