
AI in cybersecurity: Yesterday’s promise, today’s reality

For years, we’ve debated the benefits of artificial intelligence (AI) for society, but it wasn’t until now that people could finally see its daily impact. But why now? What has changed to make AI in 2023 substantially more impactful than before?

First, consumer exposure to emerging AI innovations has elevated the subject, increasing acceptance. From songwriting and composing images in ways previously only imagined to writing college-level papers, generative AI has made its way into our everyday lives. Second, we’ve also reached a tipping point in the maturity curve for AI innovations in the enterprise—and in the cybersecurity industry, this advancement can’t come fast enough.


Together, the consumerization of AI and the advancement of AI use cases for security are creating the level of trust and efficacy needed for AI to start making a real-world impact in security operations centers (SOCs). Digging further into this evolution, let’s take a closer look at how AI-driven technologies are making their way into the hands of cybersecurity analysts today.

Driving cybersecurity with speed and precision through AI

After years of trial and refinement with real-world users, coupled with ongoing advancement of the AI models themselves, AI-driven cybersecurity capabilities are no longer just buzzwords for early adopters, or simple pattern- and rule-based capabilities. Data has exploded, as have signals and meaningful insights. The algorithms have matured and can better contextualize all the information they’re ingesting—from diverse use cases to unbiased, raw data. The promise that we have been waiting for AI to deliver on all these years is manifesting.

For cybersecurity teams, this translates into the ability to drive game-changing speed and accuracy in their defenses—and perhaps, finally, gain an edge in their face-off with cybercriminals. Cybersecurity is an industry that is inherently dependent on speed and precision to be effective, both intrinsic characteristics of AI. Security teams need to know exactly where to look and what to look for. They depend on the ability to move fast and act swiftly. However, speed and precision are not guaranteed in cybersecurity, primarily due to two challenges plaguing the industry: a skills shortage and an explosion of data due to infrastructure complexity.

The reality is that a finite number of people in cybersecurity today take on infinite cyber threats. According to an IBM study, defenders are outnumbered—68% of responders to cybersecurity incidents say it’s common to respond to multiple incidents at the same time. There’s also more data flowing through an enterprise than ever before—and that enterprise is increasingly complex. Edge computing, the internet of things, and remote work are transforming modern business architectures, creating mazes with significant blind spots for security teams. And if these teams can’t “see,” then they can’t be precise in their security actions.

Today’s matured AI capabilities can help address these obstacles. But to be effective, AI must elicit trust—making it paramount that we surround it with guardrails that ensure reliable security outcomes. For example, when you drive speed for the sake of speed, the result is uncontrolled speed, leading to chaos. But when AI is trusted (i.e., the data we train the models with is free of bias and the AI models are transparent, free of drift, and explainable) it can drive reliable speed. And when it’s coupled with automation, it can improve our defense posture significantly—automatically taking action across the entire incident detection, investigation, and response lifecycle, without relying on human intervention.

Cybersecurity teams’ ‘right-hand man’

One of the most common and mature use cases in cybersecurity today is threat detection, with AI bringing in additional context from across large and disparate datasets or detecting anomalies in the behavioral patterns of users. Let’s look at an example:

Imagine that an employee mistakenly clicks on a phishing email, triggering a malicious download onto their system that allows a threat actor to move laterally across the victim environment and operate in stealth. That threat actor tries to circumvent all the security tools that the environment has in place while they look for monetizable weaknesses. For example, they might be searching for compromised passwords or open

Read More

————

By: Sridhar Muppidi, IBM Fellow and CTO IBM Security
Title: AI in cybersecurity: Yesterday’s promise, today’s reality
Sourced From: www.technologyreview.com/2023/05/24/1073395/ai-in-cybersecurity-yesterdays-promise-todays-reality/
Published Date: Wed, 24 May 2023 14:00:00 +0000



What if we could just ask AI to be less biased?


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it’s a white man with glasses. 

Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities.

Although I’ve written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”

And the bias problem runs even deeper than you might think, extending into the broader world created by AI. These models are built by American companies and trained on North American data, and thus when they’re asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.

As the world becomes increasingly filled with AI-generated imagery, we are going to mostly see images that reflect America’s biases, culture, and values. Who knew AI could end up being a major instrument of American soft power?

So how do we address these problems? A lot of work has gone into fixing biases in the data sets AI models are trained on. But two recent research papers propose interesting new approaches.

What if, instead of making the training data less biased, you could simply ask the model to give you less biased answers?

A team of researchers at the Technical University of Darmstadt, Germany, and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the types of images you want. For example, you can generate stock photos of CEOs in different settings and then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.

As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which allows users to guide how the AI system generates images of people and edit the results.
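To make this concrete, here is a minimal sketch of the semantic guidance idea using the SemanticStableDiffusionPipeline that Hugging Face ships in its diffusers library. The base checkpoint, edit prompts, and guidance values are illustrative assumptions, and Fair Diffusion’s own tooling may expose the technique differently.

```python
# Minimal sketch of semantic guidance (the technique Fair Diffusion builds on):
# generate an image from a prompt while steering the result away from one
# concept and toward another. Assumes the diffusers library; the checkpoint
# name and parameter values below are illustrative, not a prescribed setup.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a photo of a CEO in an office",    # the ordinary, bias-prone prompt
    guidance_scale=7.5,
    num_images_per_prompt=1,
    # Edit directions: subtract the first concept, add the second, while the
    # rest of the scene stays close to what the unedited prompt would produce.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],
    edit_guidance_scale=[6.0, 6.0],   # strength of each edit (assumed values)
    edit_warmup_steps=[10, 10],       # let the overall scene form before editing
    edit_threshold=[0.95, 0.95],      # confine edits to the relevant image regions
)
result.images[0].save("ceo_edited.png")
```

Swapping the two editing prompts, or adding pairs for other attributes, is all it takes to generate the alternative versions of an image described above.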

The AI system stays very close to the original image, says Kristian Kersting, a computer science professor at TU Darmstadt who participated in the work. 


This method lets people create the images they want without having to undertake the cumbersome and time-consuming task of trying to improve the biased data set that was used to train the AI model, says Felix Friedrich, a PhD student at TU Darmstadt who worked on the tool.

However, the tool is not perfect. Changing the images for some occupations, such as “dishwasher,” didn’t work as well because the word means both a machine and a job. The tool also only works with two genders. And ultimately, the diversity of the people the model can generate is still limited by the images in the AI system’s training set. Still, while more research is needed, this tool could be an important step in mitigating biases.

A similar technique also seems to work for language models. Research from the AI lab Anthropic shows how simple instructions can steer large language models to produce less toxic content, as my colleague Niall Firth reported recently. The Anthropic team tested different language models of varying sizes and found that if the models are large enough, they self-correct for some biases after simply being asked to.
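As a rough illustration of what “simply being asked” looks like in practice, the sketch below sends the same question twice, once as-is and once with a debiasing instruction appended, and compares the answers. The client library, model name, and instruction wording are assumptions for illustration, not the exact setup used in the Anthropic study.

```python
# Sketch of instruction-based self-correction: compare a model's answer to a
# plain prompt with its answer when a short debiasing instruction is appended.
# The anthropic client, model name, and instruction text are assumptions.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

QUESTION = "Describe a typical CEO in two sentences."
DEBIAS_INSTRUCTION = (
    "Please ensure that your answer is unbiased and does not rely on stereotypes."
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # assumed model name
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

baseline = ask(QUESTION)
steered = ask(QUESTION + "\n\n" + DEBIAS_INSTRUCTION)

print("Without instruction:\n" + baseline)
print("\nWith instruction:\n" + steered)
```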

Researchers don’t know why text- and image-generating AI models do this. The Anthropic team thinks it might be because larger models have larger training data sets, which include lots of examples of biased or stereotypical behavior—but also examples of people pushing back against this biased behavior.

AI tools are becoming increasingly popular for generating stock images. Tools like Fair Diffusion could be useful for companies that want their promotional pictures to reflect society’s diversity,

Read More

————

By: Melissa Heikkilä
Title: What if we could just ask AI to be less biased?
Sourced From: www.technologyreview.com/2023/03/28/1070390/what-if-we-could-just-ask-ai-to-be-less-biased/
Published Date: Tue, 28 Mar 2023 08:22:40 +0000


Proactive and predictive tools for transformation


Supply chain. Finance. Accounting. Inventory. Manufacturing. Procurement. HR. Name a mission-critical application that operates in the background to keep businesses running, and it falls under the umbrella of enterprise resource planning (ERP).


Until recently, the sprawling, interconnected sets of ERP modules that ran these essential functions were configured and managed manually. In the context of an organization whose IT systems were relatively static and running in a consistent, predictable environment, this might not be a problem.

Those well-established conventional IT systems, however, can no longer be taken for granted. Companies are accelerating their digital transformation efforts, automating, optimizing, and reinventing their business processes. The pace of change continues to accelerate: Deloitte reports, for example, that 58% of organizations have stepped up their modernization plans due to the covid-19 pandemic.


Many ERP apps are now being moved to public cloud services, such as AWS, Azure, or Google Cloud, while others are being replaced with SaaS-based alternatives, including Salesforce and Workday. The previously monolithic ERP platform is being deconstructed.

Enterprises now find themselves with a mixed-bag, hybrid cloud environment: some legacy core applications remain on premises, while new applications are cloud native and run in containers or as microservices.

This new ERP landscape is more distributed and more complex than ever before. And failure to effectively monitor these ERP apps could result in business outages that can cost the company dearly. Shawn Windle, founder and managing principal at ERP Advisors Group, puts it bluntly: “The intrinsic value of these systems is that they run the business. Without these apps, you don’t have a business.”


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Read More

————

By: MIT Technology Review Insights
Title: Proactive and predictive tools for transformation
Sourced From: www.technologyreview.com/2023/06/05/1073787/proactive-and-predictive-tools-for-transformation/
Published Date: Mon, 05 Jun 2023 15:00:00 +0000


The Download: making sense of tech, and Apple’s AR ambitions


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Overwhelmed by the rapid pace of new tech? Let us help.

It’s been a busy year. Over the past 12 months, we’ve witnessed the explosion of generative AI, the collapse of crypto, and a whole lot of promises from lawmakers pledging to slow the march of climate change. While it’s easy to feel overwhelmed by all this rapid change, we’re here to help.

Our MIT Technology Review Explains section is dedicated to untangling the complex, sometimes messy, world of science and technology to help you understand what’s happening.

Our series of explainers cuts through the noise and gets to the heart of the issues that really matter, covering everything from biotechnology and cryptocurrency to quantum computing and what’s going on in China’s tech industry.

Take a look over some of our fascinating explainers:

+ Our quick guide to the 6 ways we can regulate AI. A handy guide to all the most (and least) promising efforts to govern AI around the world. Read the full story.

+ Ethereum moved to proof of stake. Why can’t Bitcoin? There is no technical obstacle to making the notoriously energy-hungry cryptocurrency far more efficient—just a social one. Read the full story.

+ ChatGPT is everywhere. Here’s where it came from. OpenAI’s breakout hit was an overnight sensation—but it is built on decades of research. Read more about its fascinating history.

+ Everything you need to know about the wild world of alternative jet fuels. Find out more about how trash, cooking oil, and green electricity could power your future flights. Read the full story.

+ How to log off. Sick of spending all your time staring at your devices? Here’s how to strike a healthier balance. Read the full story.

Is there a particular topic you’d like to see our writers tackle in the future? Get in touch with your suggestions! 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple wants to make you care about augmented reality
In theory, it’s got a better shot at success than companies that lack its elusive cool factor. (The Verge)
+ Its rumored new mixed reality headset is worrying the competition. (Wired $)
+ The launch could be a much-needed shot in the arm for VR startups. (FT $)
+ The metaverse is still fundamentally uncool, though. (NYT $)
+ The metaverse is a new word for an old idea. (MIT Technology Review)

2 A new antibiotic is thwarting resistant bugs
If approved, it’d be the first of its kind to be green-lit in more than two decades. (New Scientist $)
+ The next pandemic is already here. Covid can teach us how to fight it. (MIT Technology Review)

3 ChatGPT has pumped the tech industry back up 
But the AI boom has been far from good news for everyone. (WP $)
+ It could take over 10 years for some economies to reap the rewards. (FT $)
+ ChatGPT is about to revolutionize the economy. (MIT Technology Review)

4 A biotech company mistakenly told 400 patients they may have cancer
It’s a harrowing example of the dangers of over-relying on detection tech. (FT $)

5 China has had enough of AI-driven fraud
Its tight internet restrictions mean it could be relatively successful in cracking down on it, too. (WSJ $)

6 We still can’t seem to quit coal
It’s a lifeline for Asia, in particular—and demand is likely to grow. (Economist $)
+ Climate scientists are worried about the cooling upper atmosphere. (Wired $)

7 Bitcoin enthusiasts are agonizing over what to do with memecoins
Purists argue the system is being abused by a proliferation of junk coins. (Bloomberg $)

8 An Irish town has banned children from owning smartphones
It’s a voluntary system that can only really work if everyone agrees. (The Guardian)

9 Takeout customers are increasingly picking up their orders themselves
The apps’ high delivery fees are to blame. (Insider $)

10 Recycling is rarely as simple as it should be
A new AI system makes it easier to tell whether that container should be chucked in the trash instead. (Axios)
+ Why you might recycle a battery—and how to do it. (MIT Technology Review)

Quote of the day

“They know how to build a religion.”

—Inga Petryaevskaya, CEO of virtual and augmented reality startup ShapesXR, tells the Wall Street Journal why Apple might give her industry a much-needed boost.

The big story

The FBI accused him of spying for China. It ruined his life.

Read More

————

By: Rhiannon Williams
Title: The Download: making sense of tech, and Apple’s AR ambitions
Sourced From: www.technologyreview.com/2023/06/05/1073932/the-download-making-sense-of-tech-and-apples-ar-ambitions/
Published Date: Mon, 05 Jun 2023 12:10:00 +0000
