Every 90 minutes on average, someone in the world is injured or killed by a landmine or other remnant of war, according to the Explosive Ordnance Risk Education Advisory Group. Even more sobering: there has been “a sharp increase” in the number of civilian casualties in recent years, says the group, which encompasses more than a dozen UN agencies and non-governmental organizations concerned about the rising accident rate. These organizations and others are working hard to help affected countries and communities regain safe use of their land.

Landmine blasts can be fatal and cause injuries including blindness, burns, damaged limbs, and shrapnel wounds. While many nations have stopped using and producing landmines, 59 countries and territories remain contaminated by mines or other explosives. In 2019, landmines and similar explosives caused at least 5,554 casualties, across 55 countries and regions, with civilians accounting for the majority (80%) and children representing nearly half of civilian casualties (43%).

Over one million landmines were dropped in Afghanistan in the 1980s. About two million landmines have been planted on the Korean Peninsula since the Korean War ended in 1953. And today, new mines are believed to be in use in northern Myanmar, while improvised explosive devices are used by violent non-state actors worldwide. Long and complex clearance operations are required in such contaminated territories, and innovative technologies will no doubt play a critical role in helping populations living under the threat of mines.

Harnessing radar to see below ground

Chaouki Kasmi, chief researcher for the Directed Energy Research Center (DERC) at the UAE-based Technology Innovation Institute (TII), believes he and his team can be part of the solution. DERC has developed a landmine detection system using ground-penetrating radar, a search technology historically deployed for tasks like inspecting concrete and masonry, locating underground utilities, and mapping archaeological sites.

“With our ground-penetrating radars, we are detecting buried objects in the ground from a flying autonomous unmanned aerial vehicle,” says Kasmi. Named “Nimble and Advanced Tomography Humanitarian Rover” (NATHR-G1), the system scans for and detects buried objects such as improvised explosive devices, landmines, and other unexploded ordnance.

Fully designed, manufactured, and assembled in Abu Dhabi, NATHR-G1’s embedded microwave sensors collect images of a predefined area or terrain, says Kasmi. Measurements completed over multiple frequency bands are then processed with geo-referenced information from a ground station.

Detecting and neutralizing threats

To neutralize landmines safely and remotely, DERC has also built and tested a high-power laser in its mobile laser laboratory. Its research team is collaborating with young science and engineering talent in the UAE to reduce the risk of unexploded ordnance at limited cost. They hope to make this technology available to as many countries as possible.

The team also continues to improve NATHR-G1: new features include an advanced signal processing engine, powered by machine learning, to detect and identify buried objects. DERC is also partnering with experts at Germany’s Ruhr University Bochum and the National University of Colombia in Bogota. These researchers are currently developing an artificial intelligence engine that will make it easier for NATHR-G1 to distinguish harmless metal objects from threats by analyzing their electromagnetic signatures.

Engineers at the Directed Energy Research Center of Abu Dhabi’s Technology Innovation Institute working on a high-power fiber laser with a wide range of uses, including telecommunications and medical applications.

Popping up power after a disaster

Identifying landmines is only one humanitarian tool made possible by directed-energy systems. Beaming power into post-disaster environments is a second application that could aid rescue operations, says Kasmi.

After disasters, damaged water and power infrastructure can turn a localized crisis into a national catastrophe. “When typhoons and earthquakes cause utility infrastructure to collapse, such events turn into large disasters,” says Kasmi. “And downed power systems hamper recovery efforts …”



By: MIT Technology Review Insights
Title: Radar and laser breakthroughs serve humanitarian ends
Sourced From:
Published Date: Wed, 21 Dec 2022 16:00:00 +0000

Mens Health





MRE PROTEIN MUFFIN is a whole-food protein snack that packs 15 grams of protein and is baked to perfection.

We’ve all been there: you grab some coffee after hitting the gym and you see tempting muffins available for purchase at checkout. The muffins would go great with your coffee and really hit the spot, but you pass because muffins aren’t on your diet. Well, the days of muffins being off-limits are over thanks to Redcon1, which has launched the delicious MRE PROTEIN MUFFIN, a whole-food protein snack that packs 15 grams of protein and is baked to perfection.

Whole Food Protein

MRE PROTEIN MUFFINs are freshly baked with real food ingredients and have a mouthwatering, homemade taste that makes it easy to boost your daily protein intake as well as your calories. Available in Double Chocolate Chip (230 calories) and Wild Blueberry (210 calories) flavors, each bite of a MRE PROTEIN MUFFIN is a flavor sensation and protein boost that will satisfy your nutritional needs and overwhelm your taste buds. The Double Chocolate Chip MRE PROTEIN MUFFIN is made with real chocolate chips, and the Wild Blueberry muffins will transport you to blueberry muffin paradise with their authentic flavor sensation.

Satisfying and Tempting

Redcon1 has really outdone themselves with the advent of the MRE PROTEIN MUFFIN. Now you can have your muffin and eat it too. Not only are the muffins satisfying, moist and tempting, but they are low in sugar too. And they are a convenient, on-the-go source of extra calories. Make Redcon1’s MRE PROTEIN MUFFIN part of your meal plan and rediscover the joy of eating a good muffin without any of the guilt.


• Whole Food Protein Snack

• Baked With Real Food Ingredients

• 15g Protein

• 5g Collagen

• No Whey Protein

• Freshly Baked

• Homemade Taste

• Boosts Daily Protein Intake

• 230 Calories (Double Chocolate Chip)

• 210 Calories (Wild Blueberry)

For more information, visit

Use MRE PROTEIN MUFFINs as a food supplement only. Do not use for weight reduction. Not a low-calorie food.

The post MRE PROTEIN MUFFIN appeared first on FitnessRX for Men.



By: Team FitRx
Sourced From:
Published Date: Mon, 05 Jun 2023 19:48:13 +0000




LISTEN: DRMAGDN Unveils Memorable Tribute Remix of The Beatles’ “Something” Featuring All-Star Collaborators




Renowned drummer/DJ DRMAGDN has returned with his most powerful release yet, this time coming in the form of a breathtaking tribute remix of The Beatles’ timeless hit, “Something.” Recently signed with BMG, DRMAGDN was granted access to dive into George Harrison’s decorated catalog, and he enlisted the talents of Michelle Ray (Team Blake on Season 4 of The Voice) and a stellar lineup of accomplished artists to elevate his reimagination of “Something” to new heights. The outcome is an exceptionally captivating electronic-infused masterpiece, enriched by crisp drum fills that pay homage to the original track. Hear what we mean by watching the video below and be sure to turn your speakers up for this one.

DRMAGDN – Something Remix | Stream



The post LISTEN: DRMAGDN Unveils Memorable Tribute Remix of The Beatles’ “Something” Featuring All-Star Collaborators appeared first on Run The Trap: The Best EDM, Hip Hop & Trap Music.


By: Max Chung
Title: LISTEN: DRMAGDN Unveils Memorable Tribute Remix of The Beatles’ “Something” Featuring All-Star Collaborators
Sourced From:
Published Date: Fri, 02 Jun 2023 20:01:04 +0000





What if we could just ask AI to be less biased?




This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it’s a white man with glasses. 

Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities.

Although I’ve written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”

And the bias problem runs even deeper than you might think into the broader world created by AI. These models are built by American companies and trained on North American data, and thus when they’re asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.

As the world becomes increasingly filled with AI-generated imagery, we are going to mostly see images that reflect America’s biases, culture, and values. Who knew AI could end up being a major instrument of American soft power?

So how do we address these problems? A lot of work has gone into fixing biases in the data sets AI models are trained on. But two recent research papers propose interesting new approaches.

What if, instead of making the training data less biased, you could simply ask the model to give you less biased answers?

A team of researchers at the Technical University of Darmstadt, Germany, and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the types of images you want. For example, you can generate stock photos of CEOs in different settings and then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.

As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which allows users to guide how the AI system generates images of people and edit the results.

The AI system stays very close to the original image, says Kristian Kersting, a computer science professor at TU Darmstadt who participated in the work. 
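The core of the semantic-guidance idea can be sketched as arithmetic on the denoiser’s noise predictions: alongside the usual classifier-free-guidance term for the main prompt, an editing concept contributes its own direction, signed to push the image toward or away from that concept. The toy NumPy sketch below illustrates only that arithmetic; the vectors stand in for noise predictions, and the function name and scale values are illustrative assumptions, not the researchers’ implementation:

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, eps_edit,
                 guidance_scale=7.5, edit_scale=3.0, reverse=False):
    """Combine denoiser noise predictions in the spirit of semantic guidance.

    eps_uncond: prediction with an empty prompt
    eps_cond:   prediction with the main prompt (e.g. "a photo of a CEO")
    eps_edit:   prediction with the editing concept (e.g. "male person")
    reverse:    push away from the concept instead of toward it
    """
    # Standard classifier-free guidance toward the main prompt.
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    # Editing direction derived from the concept prompt.
    direction = eps_edit - eps_uncond
    if reverse:
        direction = -direction  # steer away from the concept
    return eps + edit_scale * direction

# Toy 2-D "noise predictions" just to show the arithmetic.
eps_u = np.array([0.0, 0.0])
eps_c = np.array([1.0, 0.0])
eps_e = np.array([0.0, 1.0])

toward = guided_noise(eps_u, eps_c, eps_e)
away = guided_noise(eps_u, eps_c, eps_e, reverse=True)
print(toward)  # [7.5 3. ]
print(away)    # [ 7.5 -3. ]
```

Because the edit term is a separate, signed addition on top of the main guidance, the rest of the generation stays close to what the original prompt alone would have produced.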


This method lets people create the images they want without having to undertake the cumbersome and time-consuming task of trying to improve the biased data set that was used to train the AI model, says Felix Friedrich, a PhD student at TU Darmstadt who worked on the tool.

However, the tool is not perfect. Changing the images for some occupations, such as “dishwasher,” didn’t work as well because the word means both a machine and a job. The tool also only works with two genders. And ultimately, the diversity of the people the model can generate is still limited by the images in the AI system’s training set. Still, while more research is needed, this tool could be an important step in mitigating biases.

A similar technique also seems to work for language models. Research from the AI lab Anthropic shows how simple instructions can steer large language models to produce less toxic content, as my colleague Niall Firth reported recently. The Anthropic team tested different language models of varying sizes and found that if the models are large enough, they self-correct for some biases after simply being asked to.
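In practice, this kind of steering is just extra wording placed ahead of the user’s question. A minimal sketch, with an instruction whose exact phrasing is a hypothetical stand-in rather than the wording the Anthropic team tested:

```python
def debias_prompt(question: str) -> str:
    """Prefix a question with an instruction asking the model to self-correct.

    The instruction text below is an illustrative assumption, not the
    wording used in the research described above.
    """
    instruction = (
        "Please answer the following question, making sure your answer "
        "is not biased and does not rely on stereotypes."
    )
    return f"{instruction}\n\n{question}"

print(debias_prompt("Describe a typical CEO."))
```

The finding is that, for sufficiently large models, an instruction like this measurably shifts the output; smaller models largely ignore it.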

Researchers don’t know why text- and image-generating AI models do this. The Anthropic team thinks it might be because larger models have larger training data sets, which include lots of examples of biased or stereotypical behavior—but also examples of people pushing back against this biased behavior.

AI tools are becoming increasingly popular for generating stock images. Tools like Fair Diffusion could be useful for companies that want their promotional pictures to reflect society’s diversity …



By: Melissa Heikkilä
Title: What if we could just ask AI to be less biased?
Sourced From:
Published Date: Tue, 28 Mar 2023 08:22:40 +0000
