The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking—tapping them out at 100 words per minute.

The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in—and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off.

In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.

Facebook’s brain-typing project had led it into uncharted territory—including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull—and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.

“We got lots of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who until last year headed the silent-speech project but recently switched roles to study how Facebook handles elections. “That is why we can confidently say, as a consumer interface, a head-mounted optical silent speech device is still a very long way out. Possibly longer than we would have foreseen.”

Mind reading

The reason for the craze around brain-computer interfaces is that companies see mind-controlled software as a huge breakthrough—as important as the computer mouse, graphical user interface, or swipe screen. What’s more, researchers have already demonstrated that if they place electrodes directly in the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such “implants” can deftly move robotic arms and play video games or type via mind control.

Facebook’s goal was to turn such findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. “We never had an intention to make a brain surgery product,” says Chevillet. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is crack open skulls. “I don’t want to see the congressional hearings on that one,” he had joked.

In fact, as brain-computer interfaces advance, there are serious new concerns. What would happen if large tech companies could know people’s thoughts? In Chile, legislators are even considering a human rights bill to protect brain data, free will, and mental privacy from tech companies. Given Facebook’s poor record on privacy, the decision to halt this research may have the side benefit of putting some distance between the company and rising worries about “neurorights.”

Facebook’s project aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, says Chevillet. First, it needed to determine whether a thought-to-speech interface was even possible. For that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrode pads on the surface of people’s brains.

Whereas implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures from fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.

The UCSF team made some surprising progress and today is reporting in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as “Bravo-1,” who lost his ability to form intelligible words after a serious stroke.
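To make the decoding recipe concrete, here is a minimal sketch of the general approach such studies take: reduce each multichannel ECoG recording to per-channel power features, then train a classifier to map those features to attempted words. The synthetic data, channel count, and logistic-regression classifier below are illustrative assumptions, not the UCSF team’s actual pipeline.

```python
# Toy sketch of ECoG word decoding on synthetic data. Illustrative only:
# the real UCSF system is far more sophisticated than this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 128, 500  # hypothetical recording sizes
ecog = rng.standard_normal((n_trials, n_channels, n_samples))
words = rng.integers(0, 50, size=n_trials)       # a small fixed vocabulary

# A common ECoG feature: log signal power per electrode over the trial.
features = np.log((ecog ** 2).mean(axis=2))      # shape: (n_trials, n_channels)

# Train on the first 150 trials, test on the held-out 50.
clf = LogisticRegression(max_iter=1000).fit(features[:150], words[:150])
print("held-out accuracy:", clf.score(features[150:], words[150:]))
```

On random data this scores at chance; the point of the real study is that genuine cortical signals carry enough word information for such a decoder to beat chance by a wide margin, in real time.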

————

By: Antonio Regalado
Title: Facebook drops funding for interface that reads the brain
Sourced From: www.technologyreview.com/2021/07/14/1028447/facebook-brain-reading-interface-stops-funding/
Published Date: Wed, 14 Jul 2021 21:37:00 +0000

Inside the hunt for new physics at the world’s largest particle collider

In 1977, Ray and Charles Eames released a remarkable film that, over the course of just nine minutes, spanned the limits of human knowledge. Powers of Ten begins with an overhead shot of a man on a picnic blanket inside a one-square-meter frame. The camera pans out: 10, then 100 meters, then a kilometer, and eventually all the way to the then-known edges of the observable universe—10²⁴ meters. There, at the farthest vantage, it reverses. The camera zooms back in, flying through galaxies to arrive at the picnic scene, where it plunges into the man’s skin, digging down through successively smaller scales: tissues, cells, DNA, molecules, atoms, and eventually atomic nuclei—10⁻¹⁴ meters. The narrator’s smooth voice-over ends the journey: “As a single proton fills our scene, we reach the edge of present understanding.”

During the intervening half-century, particle physicists have been exploring the subatomic landscape where Powers of Ten left off. Today, much of this global effort centers on CERN’s Large Hadron Collider (LHC), an underground ring 17 miles (27 kilometers) around that straddles the border between Switzerland and France. There, powerful magnets guide hundreds of trillions of protons as they do laps at nearly the speed of light underneath the countryside. When a proton headed clockwise plows into a proton headed counterclockwise, the churn of matter into energy transmutes the protons into debris: electrons, photons, and more exotic subatomic bric-a-brac. The newly created particles explode radially outward, where they are picked up by detectors.

In 2012, using data from the LHC, researchers discovered a particle called the Higgs boson. In the process, they answered a nagging question: Where do fundamental particles, such as the ones that make up all the protons and neutrons in our bodies, get their mass? A half-century earlier, theorists had cautiously dreamed the Higgs boson up, along with an accompanying field that would invisibly suffuse space and provide mass to particles that interact with it. When the particle was finally found, scientists celebrated with champagne. A Nobel for two of the physicists who predicted the Higgs boson soon followed.

But now, more than a decade after the excitement of finding the Higgs, there is a sense of unease, because there are still unanswered questions about the fundamental constituents of the universe.

Perhaps the most persistent of these questions is the identity of dark matter, a mysterious substance that binds galaxies together and makes up about 27% of the cosmos’s mass-energy. We know dark matter must exist because we have astronomical observations of its gravitational effects. But since the discovery of the Higgs, the LHC has seen no new particles—of dark matter or anything else—despite nearly doubling its collision energy and quintupling the amount of data it can collect. Some physicists have said that particle physics is in a “crisis,” but there is disagreement even on that characterization: another camp insists the field is fine, and still others say that there is indeed a crisis, but that it is a good one. “I think the community of particle phenomenologists is in a deep crisis, and I think people are afraid to say those words,” says Yoni Kahn, a theorist at the University of Illinois Urbana-Champaign.

The anxieties of particle physicists may, at first blush, seem like inside baseball. In reality, they concern the universe, and how we can continue to study it—of interest if you care about that sort of thing. The past 50 years of research have given us a spectacularly granular view of nature’s laws, each successive particle discovery clarifying how things really work at the bottom. But now, in the post-Higgs era, particle physicists have reached an impasse in their quest to discover, produce, and study new particles at colliders. “We do not have a strong beacon telling us where to look for new physics,” Kahn says.

So, crisis or no crisis, researchers are trying something new. They are repurposing detectors to search for unusual-looking particles, squeezing what they can out of the data with machine learning, and planning for entirely new kinds of colliders. The hidden particles that physicists are looking for have proved more elusive than many expected, but the search is not over—nature has just forced them to get more creative.
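As a rough illustration of what “squeezing what they can out of the data with machine learning” can look like, here is a minimal anomaly-detection sketch: train an autoencoder on ordinary events, then flag events it reconstructs poorly as candidates worth a closer look. The event features, data, and network below are invented for illustration and bear no resemblance to a real LHC analysis.

```python
# Sketch of ML-based anomaly detection at a collider, on made-up data:
# an autoencoder learns to compress ordinary events; events it fails to
# reconstruct well are candidates for something new. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(5000, 20))  # "ordinary" events
candidates = rng.normal(0.8, 1.5, size=(200, 20))   # hypothetical oddballs

# An MLP trained to reproduce its own input, squeezed through a narrow
# hidden layer, acts as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
autoencoder.fit(background, background)

def anomaly_score(events):
    """Mean squared reconstruction error per event."""
    return ((autoencoder.predict(events) - events) ** 2).mean(axis=1)

print("typical background score:", anomaly_score(background[:200]).mean())
print("candidate event score:   ", anomaly_score(candidates).mean())
```

The appeal of this style of search is that it does not presuppose what the new particle looks like, which matters when, as Kahn says, there is no strong beacon telling physicists where to look.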

An almost-complete theory

As the Eameses were finishing Powers of Ten in the late ’70s, particle physicists were bringing order to a “zoo” of particles that had been discovered in the preceding decades. Somewhat drily, they called this framework, which enumerated the kinds of particles and their dynamics, the Standard Model.

Roughly speaking, the Standard Model separates fundamental particles into two types: fermions and bosons. Fermions are the bricks of matter—two kinds of fermions called up and down quarks, for example, are bound together to form protons and neutrons.

————

By: Dan Garisto
Title: Inside the hunt for new physics at the world’s largest particle collider
Sourced From: www.technologyreview.com/2024/02/20/1088002/higgs-boson-physics-particle-collider-large-hadron-collider/
Published Date: Tue, 20 Feb 2024 10:00:00 +0000

Transforming document understanding and insights with generative AI

At some point over the last two decades, productivity applications enabled humans (and machines!) to create information at the speed of digital—faster than any person could possibly consume or understand it. Modern inboxes and document folders are filled with information: digital haystacks with needles of insight that too often remain undiscovered.

Generative AI is an incredibly exciting technology that’s already delivering tremendous value to our customers across creative and experience-building applications. Now Adobe is embarking on our next chapter of innovation by introducing our first generative AI capabilities for digital documents and bringing the new technology to the masses.

AI Assistant in Adobe Acrobat, now in beta, is a new generative AI–powered conversational engine deeply integrated into Acrobat workflows, empowering everyone with the information inside their most important documents.

Accelerating productivity across popular document formats

As the creator of PDF, the world’s most trusted digital document format, Adobe understands document challenges and opportunities well. Our continually evolving Acrobat PDF application, the gold standard for working with PDFs, is already used by more than half a billion customers to open around 400 billion documents each year. Starting immediately, customers will be able to use AI Assistant to work even more productively. All they need to do is open Acrobat on their desktop or the web and start working.

With AI Assistant in Acrobat, project managers can scan, summarize, and distribute meeting highlights in seconds, and sales teams can quickly personalize pitch decks and respond to client requests. Students can shorten the time they spend hunting through research and spend more time on analysis and understanding, while social media and marketing teams can quickly surface top trends and issues into daily updates for stakeholders. AI Assistant can also streamline the time it takes to compose an email or scan a contract of any kind, enhancing productivity for knowledge workers and consumers globally.
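Adobe has not published AI Assistant’s internals, but the general shape of a conversational document engine can be sketched: extract the document’s text, then pass it to a large language model along with the user’s question. The specific libraries and model below (pypdf, OpenAI’s chat API, a gpt-4o-mini model name) are illustrative stand-ins, in keeping with the post’s model-agnostic framing, not a description of Acrobat’s implementation.

```python
# Generic sketch of an LLM-backed document assistant. The libraries and
# model named here are illustrative stand-ins, not Acrobat's internals.
from pypdf import PdfReader
from openai import OpenAI

def ask_document(pdf_path: str, question: str) -> str:
    # Pull the raw text out of every page of the PDF.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the supplied document."},
            # Crude truncation; a production system would chunk and retrieve.
            {"role": "user", "content": f"Document:\n{text[:20000]}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

# Example: ask_document("meeting_notes.pdf", "Summarize the action items.")
```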

Innovating with AI—responsibly

Adobe has continued to evolve the digital document category for over 30 years. We invented the PDF format and open-sourced it to the world. And we brought Adobe’s decade-long legacy of AI innovation to digital documents, including the award-winning Liquid Mode, which allows Acrobat to dynamically reflow document content and make it readable on smaller screens. The experience we’ve gained by building Liquid Mode and then learning how customers get value from it is foundational to what we’ve delivered in AI Assistant.

Today, PDF is the number-one business file format stored in the cloud, and PDFs are where individuals and organizations keep, share, and collaborate on their most important information. Adobe remains committed to secure and responsible AI innovation for digital documents, and AI Assistant in Acrobat has guardrails in place so that all customers—from individuals to the largest enterprises—can use the new features with confidence.

Like other Adobe AI features, AI Assistant in Acrobat has been developed and deployed in alignment with Adobe’s AI principles and is governed by secure data protocols. Adobe has taken a model-agnostic approach to developing AI Assistant, curating best-in-class technologies to provide customers with the value they need. When working with third-party large language models (LLMs), Adobe contractually obligates them to employ confidentiality and security protocols that match our own high standards, and we specifically prohibit third-party LLMs from manually reviewing or training their models on Adobe customer data.

The future of intelligent document experiences

Today’s beta features are part of a larger Adobe vision to transform digital document experiences with generative AI. Our vision for what’s next includes the following:

Insights across multiple documents and document types: AI Assistant will work across multiple documents, document types, and sources, instantly surfacing the most important information from everywhere.

AI-powered authoring, editing, and formatting: Last year, customers edited tens of billions of documents in Acrobat. AI Assistant will make it simple to quickly generate first drafts.

————

By: Deepak Bharadwaj
Title: Transforming document understanding and insights with generative AI
Sourced From: www.technologyreview.com/2024/02/20/1088584/transforming-document-understanding-and-insights-with-generative-ai/
Published Date: Tue, 20 Feb 2024 16:08:01 +0000

The Download: hunting for new matter, and Gary Marcus’ AI critiques

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the hunt for new physics at the world’s largest particle collider

In 2012, using data from CERN’s Large Hadron Collider, researchers discovered a particle called the Higgs boson. In the process, they answered a nagging question: Where do fundamental particles, such as the ones that make up all the protons and neutrons in our bodies, get their mass?

When the particle was finally found, scientists celebrated with champagne. A Nobel for two of the physicists who predicted the Higgs boson soon followed.

But now, more than a decade later, there is a sense of unease. That’s because there are still so many unanswered questions about the fundamental constituents of the universe.

So researchers are trying something new. They are repurposing detectors to search for unusual-looking particles, squeezing what they can out of the data with machine learning, and planning for entirely new kinds of colliders. Read the full story.

—Dan Garisto

This story is from the upcoming print issue of MIT Technology Review, dedicated to exploring hidden worlds. Want to get your hands on a copy when it publishes next Wednesday? Subscribe now.

I went for a walk with Gary Marcus, AI’s loudest critic

Gary Marcus, a professor emeritus at NYU, is a prominent AI researcher and cognitive scientist who has positioned himself as a vocal critic of deep learning and AI. He is a divisive figure, and can often be found engaged in spats on social media with AI heavyweights such as Yann LeCun and Geoffrey Hinton (“All attempts to socialize me have failed,” he jokes.)

Marcus does much of his tweeting on scenic walks around his hometown of Vancouver. Our senior AI reporter Melissa Heikkilä decided to join him on one such stroll while she was visiting the city, to hear his thoughts on the latest product releases and goings-on in AI. Here’s what he had to say to her.

This story is from The Algorithm, our weekly newsletter all about AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A new class of satellites could change everything 🛰
They’re armed with cameras powerful enough to capture people’s individual features. (NYT $)
+ A big European satellite is set to return to Earth tomorrow. (Ars Technica)
+ A new satellite will use Google’s AI to map methane leaks from space. (MIT Technology Review)

2 How much electricity does AI consume?
It’s a lot—but working out exact sums can be tricky. (The Verge)
+ Making an image with generative AI uses as much energy as charging your phone. (MIT Technology Review)

3 How Silicon Valley learned to love the military
The world is feeling like a more dangerous place these days, and that’s drowning out any ethical concerns. (WP $)
+ Why business is booming for military AI startups. (MIT Technology Review)
+ SpaceX is getting closer to US intelligence and military agencies. (WSJ $)
+ Ukraine is in desperate need of better methods to clear land mines. (Wired $)

4 The EU is investigating TikTok over child safety
It alleges the company isn’t doing enough to verify users’ ages. (Mashable)

5 It’s hard to get all that excited about Bluesky
It’s just more of the same social media. (Wired $)
+ How to fix the internet. (MIT Technology Review)
+ Why millions of people are flocking to decentralized social media services. (MIT Technology Review)

6 Ozempic is taking off in China
A lack of official approval there yet isn’t stopping anyone. (WSJ $)
+ We’ve never understood how hunger works. That might be about to change. (MIT Technology Review)

7 Meet the people trying to make ethical AI porn
Sex work is a sector that’s already being heavily disrupted by AI. (The Guardian)

8 Why we need DNA data drives
We’re rapidly running out of storage space, but DNA is a surprisingly viable option. (IEEE Spectrum)

9 You don’t need to keep closing your phone’s background apps
It does nothing for your battery life. In fact, it can even drain it further. (Gizmodo)
+ Here’s another myth worth busting: you shouldn’t put your wet phone in rice. (The Verge)

10 The mysterious math of billiard tables
If you struggle to play pool, take comfort in the fact mathematicians get stumped by it too. (Quanta $)

Quote of the day

“We realized how easy it is for people to be against something, to reject something new.” 

—Silas Heineken, a 17-year-old from Grünheide, a suburb near Berlin in Germany, tells the New York Times.

————

By: Charlotte Jee
Title: The Download: hunting for new matter, and Gary Marcus’ AI critiques
Sourced From: www.technologyreview.com/2024/02/20/1088705/hunting-new-matter-gary-marcus/
Published Date: Tue, 20 Feb 2024 13:10:00 +0000
