This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI can help us understand how cells work—and help cure diseases

Priscilla Chan and Mark Zuckerberg are cofounders and co-CEOs of the Chan Zuckerberg Initiative.

Cells are key to understanding disease—yet so much about them remains unknown. We do not know, for example, how billions of biomolecules come together to act as one cell. Nor do we know how our many types of cells interact within our bodies. We have limited understanding of how cells, tissues, and organs become diseased and what it takes for them to be healthy.

AI can help us answer these questions and apply that knowledge to improve health and well-being worldwide—if researchers can access and harness these powerful new technologies. Scientific discovery, patient diagnosis, and treatment decisions would all become faster, safer, and more efficient.

At the Chan Zuckerberg Initiative, we’re helping to generate the scientific data and build out the computing infrastructure to make this a reality—and give scientists the tools they need to take advantage of new advances in AI to help end disease. Read the full story.

This article is free to read—you just need to create an account. (If you already have one, just sign in!)

Deepfakes of Chinese influencers are livestreaming 24/7

Scroll through the livestreaming videos at 4 a.m. on Taobao, China’s most popular e-commerce platform, and you’ll find it weirdly busy. While most people are fast asleep, there are still many diligent streamers presenting products to the cameras and offering discounts in the wee hours.

But if you take a closer look, you may notice that many of these livestream influencers seem slightly robotic. That’s because they are AI-generated clones of the real streamers.

As technologies that create realistic avatars, voices, and movements get more sophisticated and affordable, the popularity of these deepfakes has exploded across China’s e-commerce streaming platforms. Read the full story.

—Zeyi Yang

Meet the next generation of AI superstars

This year we’ve seen tech companies racing to release their hottest new AI systems, while often neglecting safety and ethics. Thankfully, there’s a legion of young AI scientists who are more aware than ever of the harm the technology can pose, and are determined to fix it.

To do that, they’re pioneering new methods that are helping to shift the way the AI industry thinks about safety. Some of them are honored in our annual 35 Innovators Under 35 list.

Melissa Heikkilä, our senior AI reporter, has taken a closer look at the brilliant minds working towards making AI safer, more useful, and less polluting. Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

She was a semi-pro Go player but learned that biology is even harder

When an AI beat one of the world’s best Go players in 2017, Julia Joung felt relieved. She’d spent her childhood in Taiwan mastering the ancient game and once aspired to become a professional player, representing her country. 

But by the time Google’s AlphaGo cracked the game, Joung had already moved on to a harder problem. At Stanford University, during her undergraduate research in a neuroscience lab, she observed unusual behavior in brain cells called astrocytes. Biology, she realized, was harder than Go. And it was her new fascination. Read the full story.

—Antonio Regalado

Julia Joung is one of MIT Technology Review’s 35 Innovators Under 35 for 2023. Read the full list of this year’s honorees, including those making a difference in robotics, computing, biotech, climate and energy, and AI.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Microsoft are tussling over AI again
Both companies are updating their AI products this week, and Google’s Bard is first up. (Vox)
It’s safe to say that OpenAI is well on its way to becoming a tech giant. (Economist $)
Google is throwing generative AI at everything. (MIT Technology Review)

2 YouTube has suspended Russell Brand’s channels from making money
The British comedian has been accused of rape and sexual assault. (BBC)
Brand’s conspiracy theories are being pushed to new audiences. (NBC News)

3 X might start charging all users a monthly fee, says Elon Musk
Will it happen though? Who knows! (The Guardian)
He didn’t disclose how much it’s likely to cost, though. (CNBC)
Musk failed to denounce antisemitism during a meeting with the Israeli prime minister. (WP $)

4 MGM Resorts was hacked after its IT help

Read More

————

By: Rhiannon Williams
Title: The Download: AI to cure diseases, and China’s deepfake influencers
Sourced From: www.technologyreview.com/2023/09/19/1079849/the-download-ai-to-cure-diseases-and-chinas-deepfake-influencers/
Published Date: Tue, 19 Sep 2023 12:10:00 +0000

What’s changed since the “pause AI” letter six months ago?

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here

Last Friday marked six months since the Future of Life Institute (FLI), a nonprofit focusing on existential risks surrounding artificial intelligence, shared an open letter signed by famous people such as Elon Musk, Steve Wozniak, and Yoshua Bengio. The letter called for tech companies to “pause” the development of AI language models more powerful than OpenAI’s GPT-4 for six months.

Well, that didn’t happen, obviously.

I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation. 

On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could speak about it openly “for fear of being ridiculed as Luddite scaremongers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”

But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”

Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”

So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game.”

Why he thinks tech CEOs have the goodness of humanity in their hearts: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.”

Response to critics who say focusing on existential risk distracts from current harms: “It’s crucial that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about those things very much. If people engage in this kind of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”

Three mistakes we should avoid now, according to Tegmark: 1. Letting the tech companies write the legislation. 2. Turning this into a geopolitical contest of the West versus China. 3. Focusing only on existential threats or only on current events. We have to realize they’re all part of the same threat of human disempowerment. We all have to unite against these threats. 

Deeper Learning

These new tools could make AI vision systems less biased

Computer vision systems are everywhere. They help classify and tag images on social media feeds, detect objects and faces in pictures and videos, and highlight relevant elements of an image. However, they are riddled with biases, and they’re less accurate when the images show Black or brown people and women.

And there’s another problem: the current ways researchers find biases in these systems are themselves biased, sorting people into broad categories that don’t properly account for the complexity that exists among human beings.

New tools could help: Sony has a tool—shared exclusively with MIT Technology Review—that expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow). Meta has built a fairness evaluation system called FACET that takes geographic location and lots of different personal characteristics into account, and it’s making its data set freely available. Read more from me here.
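
To make the idea of a two-dimensional skin-tone measure more concrete, here is a toy Python sketch that scores sampled skin pixels along a light-to-dark axis and a red-to-yellow hue axis. It is only an illustration of the concept, not Sony’s published method; the pixel values, the use of HSV hue, and the Rec. 709 luminance formula are assumptions made for the example.

```python
import colorsys

def skin_tone_2d(rgb_pixels):
    """Summarize skin pixels along two axes: lightness (dark to light)
    and hue (red to yellow). A rough stand-in for a 2D skin-tone scale."""
    lightness_vals, hue_vals = [], []
    for r, g, b in rgb_pixels:
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        # Approximate lightness via relative luminance (Rec. 709 weights).
        lightness_vals.append(0.2126 * r + 0.7152 * g + 0.0722 * b)
        # HSV hue 0.0 is red, ~0.167 is yellow; skin hues fall in between.
        hue_vals.append(h)
    n = len(rgb_pixels)
    return sum(lightness_vals) / n, sum(hue_vals) / n

# Example: a handful of sampled skin-region pixels (hypothetical values).
pixels = [(224, 172, 105), (198, 134, 66), (141, 85, 36)]
lightness, hue = skin_tone_2d(pixels)
print(f"lightness={lightness:.2f}, hue={hue:.3f} (0.0 is red, ~0.167 is yellow)")
```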

Bits and Bytes

Now

Read More

————

By: Melissa Heikkilä
Title: What’s changed since the “pause AI” letter six months ago?
Sourced From: www.technologyreview.com/2023/09/26/1080299/six-months-on-from-the-pause-letter/
Published Date: Tue, 26 Sep 2023 11:15:04 +0000

The Download: metaverse fashion, and looser covid rules in China

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The metaverse fashion stylists are here

Fashion creator Jenni Svoboda is designing a beanie with a melted cupcake top, sprinkles, and doughnuts for ears. But this outlandish accessory isn’t destined for the physical world—Svoboda is designing for the metaverse. She’s working in a burgeoning, if bizarre, new niche: fashion stylists who create or curate outfits for people in virtual spaces.

Metaverse stylists are increasingly sought-after as frequent users seek help dressing their avatars—often in experimental, wildly creative looks that defy personal expectations, societal standards, and sometimes even physics.

Stylists like Svoboda are among those shaping the metaverse fashion industry, which is already generating hundreds of millions of dollars. But while, to the casual observer, it can seem outlandish and even obscene to spend so much money on virtual clothes, there are deeper, more personal reasons why people are hiring professionals to curate their virtual outfits. Read the full story.

—Tanya Basu

Making sense of the changes to China’s zero-covid policy

On December 1, 2019, the first known covid-19 patient started showing symptoms in Wuhan. Three years later, China is the last country in the world holding on to strict pandemic control restrictions. However, after days of intense protests that shocked the world, it looks as if things could finally change.

Beijing has just announced wide-ranging relaxations of its zero-covid policy, including allowing people to quarantine at home instead of in special facilities for the first time.

But while people are celebrating the fact that China has finally started pursuing a covid response emphasizing vaccines and treatments instead of quarantines and lockdowns, it’s just the start of what’s likely to be a long, and very difficult, road to reopening. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering all the goings on in China. Sign up to receive it in your inbox every Tuesday.

How US police use counterterrorism money to buy spy tech

The news: Grant money meant to help cities prepare for terror attacks is being spent on surveillance technology for US police departments, a new report shows. While it’s been known that federal funding props up police budgets, these federal grants are bigger than previously understood.

Why it matters: These grants often make it possible for purchases to skirt approval mechanisms and stay out of public view. The report’s findings are yet another example of a growing pattern in which citizens are increasingly kept in the dark about police tech procurement. Read the full story.

—Tate Ryan-Mosley

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is relaxing some of its covid restrictions
Days after the mass protests, the government is allowing people with covid to isolate at home instead of in quarantine facilities. (AP News)
The policy change is likely to spark a huge wave of infections. (The Atlantic $)
Disinformation campaigns are making it hard to gauge citizens’ reactions. (New Yorker $)
Apple’s AirDrop restrictions are curbing the spread of protest memes in China. (Rest of World)

2 Ukraine launched another drone attack on Russia 
They managed to strike military bases that were believed to be impenetrable. (FT $)
Ukraine’s energy infrastructure is still in real danger, though. (Foreign Policy $) 

3 Renewable energy growth is “turbocharged” right now 
The global energy crisis has given the industry a much-needed shot in the arm. (The Verge)
This calculation is driving global climate policy. (Knowable Magazine)
How new versions of solar, wind, and batteries could help the grid. (MIT Technology Review)

4 Flu infections in the US are at an all-time high
The CDC has recorded more positive tests than in any other week on record. (Vox)

5 San Francisco police have been barred from using killer robots
Just a week after they were given the go-ahead. (WP $)

6 AI could destroy the student essay
New AI models can write ever-more convincing text. (The Atlantic $)
AI is being put to work, at long last. (Economist $)
GPT-3 can help people with dyslexia to quickly write coherent emails. (BuzzFeed News) 
AI image model Lensa is generating NSFW images without prompting. (Insider $)
ChatGPT is OpenAI’s latest fix for GPT-3. It’s slick but still spews nonsense. (MIT Technology Review)

7 How a teenager’s murder sparked a viral TikTok dance craze
The grisly commemoration raises questions over how we remember the dead. (New Yorker $)

8 The internet has changed what we understand about porn

Read More

————

By: Rhiannon Williams
Title: The Download: metaverse fashion, and looser covid rules in China
Sourced From: www.technologyreview.com/2022/12/07/1064389/download-metaverse-fashion-looser-covid-rules-china/
Published Date: Wed, 07 Dec 2022 13:15:00 +0000

New year’s resolutions for CIOs

From security to quantum, AI and edge to cloud, our digital world is evolving and expanding more quickly than ever. With so much “noise,” it can be hard to concentrate, let alone figure out where to start beyond the bits and bytes, speeds and feeds. I hear this from everyone I meet with, and it’s clear that CIOs, in particular, are feeling the pressure. So this year, I’m going to outline four emerging technologies and describe how CIOs can take action on them today. Consider these your new year’s resolutions.

1. I will not use cloud without understanding the long-term costs. I’ve been hearing from CIOs that their initial eagerness to take advantage of cloud computing has put them over budget, as they weren’t thinking strategically about how to distribute IT capabilities across different cloud providers—let alone how to make them work together. My recommendation is to characterize the technical viability of running a workload or placing data in a specific cloud, and to fully identify the short- and long-term costs of using that cloud. If you know going in what the costs are, you can better target workloads to the right long-term home. This will also set you up to evaluate new cloud options and find potential cost reductions over time.
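
One way to make those long-term costs concrete is to model them before committing. The sketch below is a back-of-the-envelope Python example that projects multi-year spend for a single workload across candidate clouds; every price, growth rate, and provider name in it is a placeholder, not real pricing.

```python
# Illustrative only: compare multi-year cost of running one workload on
# different clouds. All prices and growth rates below are placeholders.

def total_cost(compute_hours_per_month, storage_gb, egress_gb_per_month,
               price_per_compute_hour, price_per_gb_stored, price_per_gb_egress,
               months=36, data_growth_per_month=0.03):
    """Sum compute, storage, and egress spend over a planning horizon,
    letting stored data grow month over month."""
    cost = 0.0
    for _ in range(months):
        cost += compute_hours_per_month * price_per_compute_hour
        cost += storage_gb * price_per_gb_stored
        cost += egress_gb_per_month * price_per_gb_egress
        storage_gb *= 1 + data_growth_per_month
    return cost

clouds = {
    "cloud_a": dict(price_per_compute_hour=0.045, price_per_gb_stored=0.023, price_per_gb_egress=0.09),
    "cloud_b": dict(price_per_compute_hour=0.041, price_per_gb_stored=0.020, price_per_gb_egress=0.12),
}
for name, prices in clouds.items():
    estimate = total_cost(compute_hours_per_month=5000, storage_gb=2000,
                          egress_gb_per_month=800, **prices)
    print(f"{name}: 3-year estimate ${estimate:,.0f}")
```

Even a rough model like this makes trade-offs visible: a cloud that looks cheaper on compute can end up more expensive once egress and data growth are counted over three years.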

2. I will define my zero-trust control plane. We will continue to see an increase in industries requiring zero-trust frameworks, such as those set forth by the U.S. government. These requirements will have a global ripple effect across critical infrastructure industries. So where do you begin? You need to have an authoritative identity management, policy management, and threat management framework to do zero trust properly. And if you don’t have a well-defined and authoritative control plane over your multi-cloud environment, how can you possibly achieve consistent identity, policy, or threat management for your total enterprise? Security in the multi-cloud, more than any other aspect, needs to be consistent and common. Silos are the enemy of real zero-trust security.
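
To illustrate what a consistent control-plane decision looks like, here is a minimal Python sketch of a deny-by-default authorization check that consults identity (group membership), device posture, and a threat score together. The resource names, groups, and thresholds are hypothetical and stand in for whatever your identity, policy, and threat management systems actually provide.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    groups: set
    device_compliant: bool
    source_risk_score: float  # 0 = clean, 1 = known-bad (from a threat feed)
    resource: str

# Policy: which groups may reach which resources, and the maximum
# acceptable threat score. Values here are purely illustrative.
POLICY = {
    "payroll-db": {"allowed_groups": {"finance"}, "max_risk": 0.2},
    "build-farm": {"allowed_groups": {"engineering"}, "max_risk": 0.5},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; allow only when identity, device posture, and
    threat signals all pass the resource's policy."""
    rule = POLICY.get(req.resource)
    if rule is None:
        return False
    if not req.groups & rule["allowed_groups"]:
        return False
    if not req.device_compliant:
        return False
    return req.source_risk_score <= rule["max_risk"]

print(authorize(AccessRequest("ana", {"finance"}, True, 0.1, "payroll-db")))  # True
print(authorize(AccessRequest("bob", {"finance"}, True, 0.6, "payroll-db")))  # False: risk too high
```

The point is not this particular code but the shape of it: one authoritative place where identity, policy, and threat data meet, applied the same way across every cloud.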

3A. I will establish early skill sets to take advantage of quantum. Quantum computing is getting real, and if you don’t have someone in your business who understands how this technology works and how it influences your business, you will miss this technology wave. Identify the team, tools, and tasks you’ll devote to quantum and start experimenting. Just last month we announced the on-premises Dell Quantum Computing Solution, which enables organizations across industries to begin taking advantage of accelerated compute through quantum technology otherwise not available to them today. Investing in quantum simulation and enabling your data science and AI teams to learn the new languages and capabilities of quantum is critical in 2023.
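
Experimentation can start small. As one example of the kind of exercise a data science team might try first, the sketch below simulates a two-qubit Bell state with nothing more than NumPy; it is a generic state-vector simulation, not tied to any particular quantum product or SDK.

```python
import numpy as np

# Minimal state-vector simulation of a two-qubit Bell state.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                        # control = first qubit

state = np.array([1, 0, 0, 0], dtype=complex)          # start in |00>
state = np.kron(H, I) @ state                          # Hadamard on qubit 0
state = CNOT @ state                                   # entangle the qubits

probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")   # ~0.5 for |00> and |11>, 0 otherwise
```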

3B. I will determine where my quantum-safe cryptography risks lie. Quantum computing is so disruptive because it changes many elements of modern IT. With the rise of quantum computing comes the need to better understand post-quantum cryptography: the development of cryptographic systems for classical computers that can withstand attacks launched by quantum computers. Bad actors globally are actively trying to capture and archive encrypted traffic on the assumption that sufficiently powerful quantum computers will eventually be able to decrypt that data.

Want to mitigate your risk? I suggest starting with understanding where your biggest risk exists—as well as the time horizon you are worried about. You can do this by first cataloging your crypto assets and then identifying which encrypted data is most exposed to public networks and possible capture. That is the first place you need post-quantum cryptography. In 2022, NIST selected the first few viable post-quantum algorithms, and in 2023 these tools will start to emerge. Over time they will be needed everywhere, but in 2023, knowing where to use them first is a critical step.
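
Here is a simplified Python sketch of that first cataloging step: an inventory of crypto assets that flags quantum-vulnerable algorithms and ranks assets by public exposure and how long their data must stay confidential. The asset entries, algorithm list, and scoring are illustrative assumptions, not a complete risk model.

```python
# Flag crypto assets whose key exchange or signatures a large quantum computer
# could break, prioritizing whatever is exposed to public networks.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH"}

crypto_assets = [
    {"name": "customer-api-tls", "algorithms": {"ECDH", "AES-256"}, "public": True,  "data_lifetime_years": 10},
    {"name": "internal-backups", "algorithms": {"AES-256"},         "public": False, "data_lifetime_years": 7},
    {"name": "partner-vpn",      "algorithms": {"RSA", "AES-128"},  "public": True,  "data_lifetime_years": 3},
]

def triage(assets):
    """Return assets sorted riskiest-first: quantum-vulnerable algorithms,
    public exposure, and long-lived data all raise the priority."""
    def score(a):
        vulnerable = bool(a["algorithms"] & QUANTUM_VULNERABLE)
        return (vulnerable, a["public"], a["data_lifetime_years"])
    return sorted(assets, key=score, reverse=True)

for asset in triage(crypto_assets):
    flagged = asset["algorithms"] & QUANTUM_VULNERABLE
    print(f'{asset["name"]}: vulnerable={sorted(flagged) or "none"}, public={asset["public"]}')
```

An inventory like this tells you where to deploy the new NIST-selected algorithms first as implementations become available.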

4. I will decide whether my multi-cloud edge architecture needs to be a cloud extension or cloud-first. In 2023 more of your data and processing will be needed in the real world. From processing real-time data in factories to powering robot control systems, edge is expanding rapidly in the multi-cloud world. This year you will need to make a choice about which edge architecture you want long term.

Option one is to treat edges as extensions of your clouds. In that common model, for each cloud you have

Read More

————

By: John Roese
Title: New year’s resolutions for CIOs
Sourced From: www.technologyreview.com/2022/12/07/1064486/new-years-resolutions-for-cios/
Published Date: Wed, 07 Dec 2022 21:02:27 +0000
