In Facebook’s vision of the metaverse, we will all interact in a mashup of the digital and physical worlds. Digital representations of ourselves will eat, talk, date, shop, and more. That’s the picture Mark Zuckerberg painted as he rebranded his company Meta a couple of weeks ago.

The Facebook founder’s typically awkward presentation used a cartoon avatar of himself doing things like scuba diving or conducting meetings. But Zuckerberg ultimately expects the metaverse to include lifelike avatars whose features would be much more realistic, and which would engage in many of the same activities we do in the real world—just digitally.

“The goal here is to have both realistic and stylized avatars that create a deep feeling that we’re present with people,” Zuckerberg said at the rebranding.

If avatars really are on their way, then we’ll need to face some tough questions about how we present ourselves to others. How might these virtual versions of ourselves change the way we feel about our bodies, for better or worse?

Avatars are not a new concept, of course. Gamers have used them for decades: the pixelated, boxy creatures of Super Mario have given way to the hyperrealistic forms of Death Stranding, which emote and move eerily like a living, breathing human.

But how we use avatars becomes more complicated when we expect them to act as representations of ourselves beyond the context of a particular game. It’s one thing to inhabit the overalls and twang of Mario. It’s another to create an avatar that acts as your ambassador, your representation, your very self. The avatars of the metaverse will be participating in situations that might involve higher stakes than treasure in a race. In interviews or meetings, this self-presentation might play a bigger, far more consequential role. 

For some people, avatars that reflect who they are would be a powerful source of validation. But creating one can be a struggle. Gamer Kirby Crane, for example, recently ran an experiment where he tried to do one simple thing: make an avatar that looked like him in 10 different video games.

In a Twitter thread posted on May 18, 2021, Crane (@kirbygcrane) wrote: “Hi, I’m Kirby! I like video games, and I’ve always been a fan of avatar creators. However, as a fat, gay, pre-medical transition trans man, I’m usually not represented by my avatar. For @GamerTroublePhD’s class, I am trying to make myself in 10 avatar creators from games.”

“My goal wasn’t so much to explore the philosophy of avatars but more to explore the representation that’s available in current avatars and see if I could portray myself accurately,” says Crane, who describes himself as a “fat, gay, pre–medical transition trans man.”

Some games allowed him to bulk up his body but bizarrely had him burst out of his clothes if he tried to make the character fat. Other games didn’t allow for an avatar to be male with breasts, which Crane found isolating, as it suggested that the only way to be male was to be male-presenting.

None of the avatars, in the end, felt like Crane—a result he wasn’t surprised by. “Not that I need validation from random game developers, but it’s dehumanizing to see the default man and the accepted parameters of what it means,” he says. 

Crane’s experiment isn’t scientific, nor is it any indication of how the metaverse will operate. But it offers a peek into why avatars in the metaverse could have far-reaching consequences for how people feel and live in the real, physical world. 

What complicates the issue further is Meta’s announcement of Codec Avatars, a project within Facebook’s VR/AR research arm, Reality Labs, that is working toward making photorealistic avatars. Zuckerberg highlighted some of the advances the group has made in making avatars seem more human, such as clearer emotions and better rendering of hair and skin.

“You’re not always going to want to look exactly like yourself,” he said. “That’s why people shave their beards, dress up, style their hair, put on makeup, or get tattoos, and of course, you’ll be able to do all of that and more in the metaverse.”

That hyperpersonalization could allow avatars to realistically portray the lived experience of millions of people who, like Crane, have thus far found the technology limiting. But people might also do the opposite and create avatars that are idealized, unhealthy versions of themselves: puffing out their lips and butt to Kardashian-ify their appearance, lightening their skin to play into racist stereotypes, whitewashing their culture by changing features outright.

In other words, what happens if the avatar you present isn’t who you are? Does it matter?

Jennifer Ogle of Colorado State University and


————

By: Tanya Basu
Title: The metaverse is the next venue for body dysmorphia online
Sourced From: www.technologyreview.com/2021/11/16/1040174/facebook-metaverse-body-dysmorphia/
Published Date: Tue, 16 Nov 2021 09:30:00 +0000



High-quality data enables medical research


One unexpected side effect of the covid-19 pandemic was that the usually obscure world of health data was brought to national attention. Who was most at risk for infection? Who was most likely to die? Was one treatment better than another? Was getting covid-19 more or less dangerous than getting a vaccine?


These complex questions, usually the province of medical research, became concrete seemingly overnight. While amateur epidemiologists scoured the internet for statistics to support their personal beliefs, professionals often appeared on the nightly news, even if just to say, “We don’t have good enough data.”

While our focus on the pandemic has now subsided, our health data quality problems remain. We’re swimming in health data—by some estimates, one-third of all data generated in the world is related to health and health care, and that amount increases more than 30% every year.

With all that data, then, why can’t we answer our most pressing health questions? Which of the five top diabetes drugs (if any) will be best for me? Will back surgery be more effective than physical therapy for my spine? What are the chances that I will need chemotherapy in addition to radiation to make my tumor go away?

EHRs have become ubiquitous

Electronic health records (EHRs) have become pervasive in the U.S., largely thanks to a multi-billion-dollar federal initiative that made interoperable EHRs a national goal. The 2009 HITECH Act provided incentives for healthcare providers who computerized and penalties for those who did not. In addition to the improved patient care this would enable, the millions of digitized health records would create opportunities to transform medical research.


“Prior to EHRs, clinical research was all on paper,” says Dale Sanders, chief strategy officer at Intelligent Medical Objects (IMO), a healthcare data enablement company that offers clinical terminology and tooling to improve the quality of medical data. “You would transfer that paper-based data to spreadsheets and do your own data analysis in a very small local environment. It didn’t give a broader view of a patient’s life, and it certainly didn’t enable any kind of broader population analysis.”

Theoretically, EHRs should make it possible to aggregate, analyze, and search through information collected from millions of patients to discover patterns that aren’t evident on a smaller scale—as well as to track a single patient’s health status methodically over time. Imagine being able to quickly compare and analyze the cases of the few thousand people who have a particular rare condition or to follow users of a certain drug over a set period of time to observe long-term side effects that weren’t obvious in trials.
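To make that concrete, here is a minimal sketch of such a longitudinal query in Python with pandas. The events file, its column names, and the drug code are all hypothetical stand-ins for this illustration; real EHR analysis would run against a governed clinical data warehouse rather than a flat file.

```python
import pandas as pd

# Hypothetical flat extract of EHR events: one row per clinical event.
# Assumed columns: patient_id, date, event_type, code
events = pd.read_csv("ehr_events.csv", parse_dates=["date"])

# Cohort: each patient's first prescription of a (hypothetical) drug.
rx = events[(events["event_type"] == "prescription") & (events["code"] == "DRUG-X")]
cohort = rx.groupby("patient_id", as_index=False)["date"].min()
cohort = cohort.rename(columns={"date": "start"})

# Adverse events recorded after that first prescription, per patient.
adverse = events[events["event_type"] == "adverse_event"].merge(cohort, on="patient_id")
followup = adverse[adverse["date"] > adverse["start"]]

# Which long-term side effects show up most often in this cohort?
print(followup["code"].value_counts())
```

The same few lines scale from a few thousand rare-disease cases to millions of records, which is exactly the shift from paper-era, spreadsheet-scale analysis that Sanders describes.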


————

By: MIT Technology Review Insights
Title: High-quality data enables medical research
Sourced From: www.technologyreview.com/2023/04/06/1070902/high-quality-data-enables-medical-research/
Published Date: Thu, 06 Apr 2023 17:14:39 +0000



Delivering a quantum future


More companies are starting to consider the impact that quantum computing will have on their business in the coming years. According to a survey by Deloitte, about half of all companies believe that they are vulnerable to a “harvest now, decrypt later” attack, where encrypted information is stored until a future quantum computer can decrypt the data. No wonder, then, that 61% of firms have either conducted an assessment of their readiness or plan to analyze the issue within five years.

In 2022, the National Institute of Standards and Technology (NIST) made a significant decision to help companies prepare for a world where quantum computing is commonplace, and to help protect today’s data from tomorrow’s quantum computers. The U.S. standards agency selected four algorithms to replace the public key infrastructure (PKI) algorithms currently in use, so that data encrypted today remains protected against quantum computers developed in the future.


Because data can be saved and archived, classified and sensitive information, which may need to stay secret for a decade or more, needs to be protected with quantum-resistant algorithms. The four algorithms selected by NIST represent an early milestone in the development of the post-quantum encryption standard.
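One of NIST’s four selections, the CRYSTALS-Kyber key-encapsulation mechanism, can already be experimented with today. The sketch below assumes the Open Quantum Safe project’s liboqs-python bindings are installed and that the algorithm identifier “Kyber512” matches the installed liboqs version; both are assumptions, and names can differ across releases.

```python
import oqs  # liboqs-python, from the Open Quantum Safe project (assumed installed)

# Key encapsulation with Kyber, one of NIST's four post-quantum picks.
# (Algorithm naming varies by liboqs version; "Kyber512" is assumed here.)
with oqs.KeyEncapsulation("Kyber512") as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret under the public key.
    with oqs.KeyEncapsulation("Kyber512") as sender:
        ciphertext, secret_sent = sender.encap_secret(public_key)

    # Only the private-key holder can recover the same secret, which can
    # then key a symmetric cipher for the actual data.
    secret_received = receiver.decap_secret(ciphertext)

assert secret_sent == secret_received
```

The point of a quantum-resistant scheme is that a recorded ciphertext reveals nothing useful to a future quantum attacker, which is precisely the “harvest now, decrypt later” scenario the Deloitte respondents worry about.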

“Cryptographic protocols that are deployed today can still be in use in 10 years, in 20 years, in 30 years,” says Daniel Gottesman, a professor of theoretical computer science at the University of Maryland and a quantum computing consultant at Keysight Technologies, a U.S.-based provider of design, emulation, and test equipment for electronics. “If you send messages today, if they’re still going to be relevant in that time, then you need to worry about security against quantum computers of the future.”

Yet, quantum computing’s promise goes far beyond unlocking decades-old secrets.

Quantum computing offers the enticing promise of problem-solving abilities and computing power far exceeding today’s most powerful supercomputers. Google has built a quantum AI campus with the goal of creating a “useful, error-corrected quantum computer” by 2029. IBM expanded its quantum efforts with the goal of creating a 4,000-qubit quantum computer by 2025.


These more sophisticated platforms will allow a greater breadth of applications—such as chemical simulation and machine learning—and provide more momentum to the long-term development of quantum computer systems. According to analyst firm International Data Corporation (IDC), the global quantum computing market will grow 51% annually, as measured in spending, from $412 million in 2020 to $8.6 billion in 2027.

“Companies building quantum hardware and software services now have several platforms already used by niche customers in the financial and defense spaces,” says John Blyler, industrial solutions manager, wireline communications, at Keysight Technologies. “And new applications are being identified, such as the simulation of molecules that may result in new life-saving drugs that cure various diseases.”


————

By: MIT Technology Review Insights
Title: Delivering a quantum future
Sourced From: www.technologyreview.com/2023/04/07/1069778/delivering-a-quantum-future/
Published Date: Fri, 07 Apr 2023 16:35:12 +0000


How AI is helping historians better understand our past


It’s an evening in 1531, in the city of Venice. In a printer’s workshop, an apprentice labors over the layout of a page that’s destined for an astronomy textbook—a dense line of type and a woodblock illustration of a cherubic head observing shapes moving through the cosmos, representing a lunar eclipse.

Like all aspects of book production in the 16th century, it’s a time-consuming process, but one that allows knowledge to spread with unprecedented speed.

Five hundred years later, the production of information is a different beast entirely: terabytes of images, video, and text in torrents of digital data that circulate almost instantly and have to be analyzed nearly as quickly, allowing—and requiring—the training of machine-learning models to sort through the flow. This shift in the production of information has implications for the future of everything from art creation to drug development.

But those advances are also making it possible to look differently at data from the past. Historians have started using machine learning—deep neural networks in particular—to examine historical documents, including astronomical tables like those produced in Venice and other early modern cities, smudged by centuries spent in mildewed archives or distorted by the slip of a printer’s hand.

Historians say the application of modern computer science to the distant past helps draw connections across a broader swath of the historical record than would otherwise be possible, correcting distortions that come from analyzing history one document at a time. But it introduces distortions of its own, including the risk that machine learning will slip bias or outright falsifications into the historical record. All this adds up to a question for historians and others who, it’s often argued, understand the present by examining history: With machines set to play a greater role in the future, how much should we cede to them of the past?

Parsing complexity

Big data has come to the humanities through initiatives to digitize increasing numbers of historical documents, like the Library of Congress’s collection of millions of newspaper pages and the Finnish Archives’ court records dating back to the 19th century. For researchers, this is at once a problem and an opportunity: there is much more information, and often there has been no existing way to sift through it.

That challenge has been met with the development of computational tools that help scholars parse complexity. In 2009, Johannes Preiser-Kapeller, a professor at the Austrian Academy of Sciences, was examining a registry of decisions from the 14th-century Byzantine Church. Realizing that making sense of hundreds of documents would require a systematic digital survey of bishops’ relationships, Preiser-Kapeller built a database of individuals and used network analysis software to reconstruct their connections.

This reconstruction revealed hidden patterns of influence, leading Preiser-Kapeller to argue that the bishops who spoke the most in meetings weren’t the most influential; he’s since applied the technique to other networks, including the 14th-century Byzantine elite, uncovering ways in which its social fabric was sustained through the hidden contributions of women. “We were able to identify, to a certain extent, what was going on outside the official narrative,” he says.
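The article doesn’t name the software Preiser-Kapeller used, but the underlying technique is standard network analysis, which a few lines of Python with networkx can illustrate. The meeting records and bishops’ names below are invented for the sketch.

```python
import networkx as nx
from itertools import combinations

# Invented records: the bishops present at each synod session.
meetings = [
    {"Ioannes", "Makarios", "Theodoros"},
    {"Ioannes", "Makarios"},
    {"Makarios", "Niketas"},
    {"Niketas", "Philotheos"},
]

# Link every pair of bishops who appear in the same session.
G = nx.Graph()
for attendees in meetings:
    G.add_edges_from(combinations(sorted(attendees), 2))

# Degree centrality rewards whoever appears (or speaks) most often;
# betweenness centrality rewards brokers who connect otherwise
# separate groups. When the two disagree, hidden influence surfaces.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```

In this toy network, Makarios and Niketas score high on betweenness because every path between the two clusters runs through them, regardless of how often anyone spoke.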

Preiser-Kapeller’s work is but one example of this trend in scholarship. But until recently, machine learning has often been unable to draw conclusions from ever larger collections of text—not least because certain aspects of historical documents (in Preiser-Kapeller’s case, poorly handwritten Greek) made them indecipherable to machines. Now advances in deep learning have begun to address these limitations, using networks that mimic the human brain to pick out patterns in large and complicated data sets.

Nearly 800 years ago, the 13th-century astronomer Johannes de Sacrobosco published the Tractatus de sphaera, an introductory treatise on the geocentric cosmos. That treatise became required reading for early modern university students. It was the most widely distributed textbook on geocentric cosmology, enduring even after the Copernican revolution upended the geocentric view of the cosmos in the 16th century.

The treatise is also the star player in a digitized collection of 359 astronomy textbooks published between 1472 and 1650—76,000 pages, including tens of thousands of scientific illustrations and astronomical tables. In that comprehensive data set, Matteo Valleriani, a professor with the Max Planck Institute for the History of Science, saw an opportunity to trace the evolution of European knowledge toward a shared scientific worldview. But he realized that discerning the pattern required more than human capabilities. So Valleriani and a team of researchers at the Berlin Institute for the


————

By: Moira Donovan
Title: How AI is helping historians better understand our past
Sourced From: www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
Published Date: Tue, 11 Apr 2023 09:00:00 +0000
