A year ago, Facebook started up a special skunkworks team called Building 8 focused on creating futuristic gadgets, saying the secretive projects would push forward the company’s goal of connecting the world.

On Wednesday at its annual F8 developer conference, the company revealed two of the six projects under way, and they sound a lot like science fiction.

Facebook says it hopes to build a new kind of noninvasive brain-machine interface—such as a cap or headband—that lets people text by simply thinking. Another aims to build a wearable device—an armband, perhaps—that makes it possible to “hear” words with your skin.

Building 8’s leader, Regina Dugan, says both projects have been under way for six months and that Facebook will decide in two years whether they’re worth continuing. Dugan was previously the head of Google’s similarly styled Advanced Technology and Projects Group and director of the Pentagon’s DARPA research agency.

The thinking-to-text project is headed up by Mark Chevillet, previously an adjunct professor of neuroscience at Johns Hopkins University. He said the goal over two years is to build a noninvasive speech-to-text system that picks up speech signals inside the brain and permits people to silently turn thoughts into words at a speed of 100 words per minute.

“We just want to be able to get those signals right before you actually produce the sound so you don’t have to say it out loud anymore,” he said.

Facebook says it is collaborating with Johns Hopkins, the University of California, Berkeley, and the University of California, San Francisco, on the project. Chevillet says the work will focus on finding a way to use light, such as LEDs or lasers, to sense neural signals emanating from the cerebral cortex.

The method works in a way that is similar to how functional near-infrared spectroscopy is currently used to measure brain activity.
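For background, fNIRS infers brain activity from how near-infrared light is absorbed as blood oxygenation changes in the cortex. The standard relation used to interpret those measurements is the modified Beer-Lambert law, shown here only as context; Facebook has not detailed how its own optical sensing would work.

```latex
% Modified Beer-Lambert law, as commonly used in fNIRS (background only)
\Delta A(\lambda) =
  \left( \varepsilon_{\mathrm{HbO_2}}(\lambda)\,\Delta[\mathrm{HbO_2}]
       + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}] \right)
  \cdot d \cdot \mathrm{DPF}(\lambda)
```

Here ΔA is the change in light attenuation at wavelength λ, the ε terms are the extinction coefficients of oxygenated and deoxygenated hemoglobin, d is the distance between the light source and detector on the scalp, and DPF is a correction factor for how far the light actually travels through tissue.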

Such a device—a headband or some sort of cap—could be useful to people who are so severely paralyzed that they can’t communicate. Over time, though, Facebook thinks brain interfaces could be a way to “think” a message rather than type it, or to send a text in the middle of a conversation. They could also be a way to communicate with others in virtual or augmented reality, technologies that Facebook has been pushing heavily.

Chevillet said there are already some good demonstrations of brain-computer interfaces, like a recent study in which three people with paralysis used their minds to move an on-screen cursor and select letters, one of them typing at eight words per minute. In that study, an implant in the brain recorded neural signals. Other researchers have experimented with interpreting what sounds people are making or thinking about; that work, too, involved surgery to install an electronic brain implant.

Now the Facebook researchers are exploring whether it’s possible to figure out what someone wants to say by detecting signals outside the brain and translating them into text. Doing so accurately, in real time, at the rate Facebook proposes would represent a huge step forward over what neuroscience has shown is possible so far.

The second project, which focuses on making it possible for people to recognize words with their skin, draws inspiration from Braille and Tadoma—a method of communication in which people who are both deaf and blind place a hand on the face of another person to feel the vibrations and airflow as that person speaks.

In an experiment, researchers built a device with 16 actuators on it and strapped it to an engineer’s arm. Another engineer had a tablet computer with nine different words on its display; as he tapped the different words—like “grasp,” “black,” and “cone”—the first engineer felt vibrations on her arm that corresponded with the words and was able to correctly interpret that she needed to pick up a black cone on the table in front of her.

To do this, the researchers are taking a spoken word—like “black”—and separating it into its frequency components, then delivering those frequencies to the actuators on her arm, Dugan said.

“Instead of from her cochlea to her brain, she’s taking [the signal] from her arm to her brain,” she added.
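To make that idea concrete, here is a minimal sketch of how a word’s audio might be mapped to 16 actuator intensities by splitting it into frequency bands and measuring the energy in each. The band layout, normalization, and every name in it are illustrative assumptions, not details Facebook has disclosed.

```python
# Illustrative sketch only (not Facebook's published algorithm): one plausible way
# to turn a spoken word's audio into 16 vibration intensities, one per actuator,
# by splitting the signal into frequency bands and measuring the energy in each.
import numpy as np

NUM_ACTUATORS = 16  # the prototype armband described above used 16 actuators

def word_to_actuator_levels(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return one normalized drive level (0..1) per actuator for a word's audio."""
    spectrum = np.abs(np.fft.rfft(samples))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)

    # Assumption: divide the speech range (~100 Hz to 8 kHz) into 16 equal bands.
    band_edges = np.linspace(100, 8000, NUM_ACTUATORS + 1)
    levels = np.zeros(NUM_ACTUATORS)
    for i in range(NUM_ACTUATORS):
        in_band = (freqs >= band_edges[i]) & (freqs < band_edges[i + 1])
        levels[i] = spectrum[in_band].sum()

    # Scale so the strongest band drives its actuator at full intensity.
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# Example: a synthetic half-second tone stands in for a recorded word like "black".
if __name__ == "__main__":
    sr = 16000
    t = np.arange(int(sr * 0.5)) / sr
    fake_word = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
    print(word_to_actuator_levels(fake_word, sr).round(2))
```

In such a scheme, each actuator stands in for a slice of the frequency range the cochlea would normally analyze, which is why Dugan describes the signal as traveling “from her arm to her brain” instead of from her ear.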

The researchers think of this as a way to deliver language on the skin, hoping that eventually people will be able to use the method to distinguish between about 100 words. They may also use nonverbal signals like pressure and temperature.

Dugan said the idea is to eventually have a wearable that sends messages you can feel, without having to take your phone out and, say, interrupt an in-person conversation.

While neither of these projects will yield a gadget you can buy anytime soon, Dugan said she can imagine that happening eventually.

“I think at two years we should have a pretty good sense of whether it’s possible to build them into consumer goods,” she said.