Amazon Prime’s about to get more expensive

All those super yachts aren’t going to pay for themselves.

Amazon announced Thursday that the price of its Prime subscription service is set to increase. It’s the first Prime cost increase since 2018, and comes at a time when people are more reliant on deliveries than ever following two years of pandemic life.

Paris Martineau, a reporter at The Information, broke down the price hike, noting that the monthly cost will jump from $12.99 to $14.99, and an annual membership will shoot up from $119 to $139. 
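Put side by side, the new numbers change the math on monthly versus annual billing. A quick back-of-the-envelope check, using only the prices reported above:

```python
# New Prime prices reported above.
new_monthly = 14.99
new_annual = 139.00

# Paying month to month for a full year now costs:
yearly_via_monthly = round(new_monthly * 12, 2)   # 179.88

# So the annual plan still saves roughly $40 a year:
annual_savings = round(yearly_via_monthly - new_annual, 2)
```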

According to CNBC, new customers will see increased Prime prices beginning Feb. 18. Existing members will see the change starting after March 25.


Amazon stock was up almost 15 percent Thursday following the day’s positive earnings report, Yahoo News reports. The new Prime price increase is sure to only add to those jaw-dropping numbers.

Notably, canceling a Prime membership still costs exactly the same thing: nothing.

Bose’s open-ear audio sunglasses are on sale at Amazon

SAVE $46: The Bose Frames, Bose’s open-ear audio sunglasses, are 23% off at Amazon: just $153.29 (original price: $199).


Ever wish your sunglasses could be more like headphones? Check out the Bose Frames. They won’t replace your earbuds, but they do have the novelty factor.

These audio sunglasses feature scratch-resistant lenses that block up to 99% of UVA/UVB rays, but their selling point is the open-ear audio system that delivers sound right to your ear, so you can enjoy your music without disturbing those around you.

It should be noted that while impressive, the sound is no match for a loud environment — Mashable’s review of the Frames notes that “Bose Frames simply can’t compete with a noisy city” — but that means that with nothing in your ears, you’ll be able to pay attention to what’s going on around you.

Plus, there’s no need to whip out your phone every time you want to switch up your audio. Press & Turn volume control lets you toggle audio settings from the sunglasses themselves, and you can stray up to 30 feet from the audio source before the Bluetooth connection starts struggling. Did we mention the integrated microphone? You can take calls or summon Siri without bothering with your phone.

On the whole, the Bose Frames are an interesting option. They’re not the best of the best in either sunglasses or headphones, but if you’re looking for a handy gadget that combines both to an impressive standard, you can check them out at Amazon for $153.29.



Crisis Text Line tried to monetize its users. Can big data ever be ethical?

Years after Nancy Lublin founded Crisis Text Line in 2013, she approached the board with an opportunity: What if they converted the nonprofit’s trove of user data and insights into an empathy-based corporate training program? The business strategy could leverage Crisis Text Line’s impressive data collection and analysis, along with lessons about how to best have hard conversations, and thereby create a needed revenue stream for a fledgling organization operating in the woefully underfunded mental health field. 

The crisis intervention service is actually doing well now; it brought in $49 million in revenue in 2020 thanks to increased contributions from corporate supporters to meet pandemic-related needs and expansion, as well as a new round of philanthropic funding. But in 2017, Crisis Text Line’s income was a relatively paltry $2.6 million. When Lublin proposed the for-profit company, the organization’s board was concerned about Crisis Text Line’s long-term sustainability, according to an account recently published by founding board member danah boyd.

The idea of spinning off a for-profit enterprise from Crisis Text Line raised complex ethical questions about whether texters truly consented to the monetization of their intimate, vulnerable conversations with counselors, but the board approved the arrangement. The new company, known as Loris, launched in 2018 with the goal of providing unique “soft skills” training to companies. 

What wasn’t publicly known, however, was that Crisis Text Line had a data-sharing agreement with Loris that gave the company access to scrubbed, anonymized user texts, a fact that Politico reported last week. The story also contained concerning information about Loris’ business model: selling enterprise software to companies for the purpose of optimizing customer service. On Monday, a Federal Communications Commission commissioner requested the nonprofit cease its data-sharing relationship, calling the arrangement “disturbingly dystopian” in a letter to Crisis Text Line and Loris leadership. That same day, Crisis Text Line announced that it had decided to end the agreement and requested that Loris delete the data it had previously received.

“This decision weighed heavily on me, but I did vote in favor of it,” boyd wrote about authorizing Lublin to found Loris. “Knowing what I know now, I would not have. But hindsight is always clearer.” 


Though proceeds from Loris are supposed to support Crisis Text Line, the company played no role in the nonprofit’s increased revenue in 2020, according to Shawn Rodriguez, vice president and general counsel of Crisis Text Line. Still, the controversy over Crisis Text Line’s decision to monetize data generated by people seeking help while experiencing intense psychological or emotional distress has become a case study in the ethics of big data. When algorithms go to work on a massive data set, they can deliver novel insights, some of which could literally save lives. Crisis Text Line, after all, used AI to determine which texters were more at risk, and then placed them higher in the queue. 

Yet the promise of such breakthroughs often overshadows the risks of misusing or abusing data. In the absence of robust government regulation or guidance, nonprofits and companies like Crisis Text Line and Loris are left to improvise their own ethical framework. The cost of that became clear this week with the FCC’s reprimand and the sense that Crisis Text Line ultimately betrayed its users and supporters. 

Leveraging empathy

When Loris first launched, Lublin described its seemingly virtuous ambitions to Mashable: “Our goal is to make humans better humans.”

In the interview, Lublin emphasized translating the lessons of Crisis Text Line’s empathetic and data-driven counselor training to the workplace, helping people to develop critical conversational skills. This seemed like a natural outgrowth of the nonprofit’s work. It’s unclear whether Lublin knew at the time but didn’t explicitly state that Loris would have access to anonymized Crisis Text Line user data, or if the company’s access changed after its launch.

“If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced,” wrote boyd, who referred Mashable’s questions about her experience to Crisis Text Line. “If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy.” 



But at some point Loris pivoted away from its mission. Instead, it began offering services to help companies optimize customer service. On LinkedIn, the company cites its “extensive experience working through the most challenging conversations in the crisis space” and notes that its live coaching software “helps customer care teams make customers happier and brands stand out in the crowd.” 

While spinning off Loris from Crisis Text Line may have been a bad idea from the start, Loris’ commercialization of user data to help companies improve their bottom line felt shockingly unmoored from the nonprofit’s role in suicide prevention and crisis intervention.  

“A broader kind of failure”

John Basl, associate director of AI and Data Ethics Initiatives at the Ethics Institute of Northeastern University, says the controversy is another instance of a “broader kind of failure” in artificial intelligence. 

While Basl believes it’s possible for AI to unequivocally benefit the public good, he says the field lacks an “ethics ecosystem” that would help technologists and entrepreneurs grapple with the kind of ethical issues that Crisis Text Line tried to resolve internally. In biomedical and clinical research, for example, federal laws govern how research is conducted, decades of case studies provide insights about past mistakes, and interdisciplinary experts like bioethicists help mediate new or ongoing debates. 

“In the AI space, we just don’t have those yet,” he says. 

The federal government grasps the implications of artificial intelligence. The Food and Drug Administration’s consideration of a regulatory framework for AI medical devices is one example. But Basl says that the field is having trouble reckoning with the challenges raised by AI in the absence of significant federal efforts to create an ethics ecosystem. He can imagine a federal agency dedicated to the regulation of artificial intelligence, or at least subdivisions in major existing agencies like the National Institutes of Health, the Environmental Protection Agency, and the FDA. 

Basl, who wasn’t involved with either Loris or Crisis Text Line, also says that motives vary inside organizations and companies that utilize AI. Some people seem to genuinely want to ethically use the technology while others are more profit driven. 

Critics of the data-sharing between Loris and Crisis Text Line argued that protecting user privacy should’ve been paramount. FCC Commissioner Brendan Carr acknowledged fears that even scrubbed, anonymized user records might contain identifying details, and said there were “serious questions” about whether texters had given “meaningful consent” to have their communication with Crisis Text Line monetized.

“The organization and the board has always been and is committed to evolving and improving the way we obtain consent so that we are continually maximizing mental health support for the unique needs of our texters in crisis,” Rodriguez said in a statement to Mashable. He added that Crisis Text Line is making changes to increase transparency for users, including by adding a bulleted summary to the top of its terms of service.



Yet the nature of what Loris became arguably made the arrangement ethically bereft. 

In her post, boyd wrote that she understood why critics felt “anger and disgust.”

She ended her lengthy account by posing a list of questions to those critics, including: “What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others?” 

When boyd posted a screenshot of those questions to her Twitter account, the responses were overwhelmingly negative, with many respondents calling for her and other board members to resign. Several shared the sentiment that their trust in Crisis Text Line had been lost.

It’s likely that Crisis Text Line and Loris will become a cautionary tale about the ethical use of artificial intelligence: Thoughtful people trying to use technology for good still made a disastrous mistake.

“You’re collecting data about people at their most vulnerable and then using it for an economic exercise, which seems to not treat them as persons, in some sense,” said Basl. 

If you want to talk to someone or are experiencing suicidal thoughts, call the National Suicide Prevention Lifeline at 1-800-273-8255. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. Here is a list of international resources.

That viral gold cube is actually pretty small. Oh, and it’s hollow.

It’s all a little on the nose.

The internet turned its collective head Thursday at news that a “solid gold” cube had been installed in New York City’s Central Park as part of some bizarre cryptocurrency promotional stunt. Photos strategically shot from low angles made the cube seem imposing, and Twitter was briefly impressed by The Cube.

That is, until everyone realized the cube is actually pretty small. And hollow.

That’s right, according to Artnet, the $11.7 million gold cube isn’t — despite a misleading tweet suggesting otherwise — solid at all.

“The cube measures over a foot and a half on all sides and has a wall thickness of about a quarter inch,” reports Artnet.

Reactions were swift.

Notably, the hollow cube isn’t even in the park anymore. Artnet notes that on the evening of Feb. 2, it was moved to “a private dinner on Wall Street, where numerous celebrities are said to be attending.”

Nothing hollow about that either, we’re sure.

9 of the best ‘Wordle’ clones, because one word a day isn’t enough

If you’ve spent any time on the internet lately, you know that Wordle has taken over.

Our obsession has been endlessly analyzed and dissected. We’ve shared strategies and tips. It has been meme-ed and shared all over Twitter. It was even bought by the New York Times. And of course our insatiable appetite for the simple puzzle game has been the catalyst for multiple Wordle clones. Here’s a roundup of our favorites.

1. Absurdle

If Wordle isn’t enough of a challenge, this one will have you stumped. It’s even hard to understand how it works, but a previous Mashable article breaks it down: “Instead of starting with a secret word that players work their way towards, Absurdle doesn’t have a single word up its sleeve,” Sam Haysom explains. “The game starts with 2,315 possibilities and responds to each of your guesses by keeping the maximum number of potential secret words in its back pocket, forcing you to narrow its options down until you essentially trap the AI into only having one word left.” Truly diabolical.
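Haysom’s description maps onto a short adversarial algorithm: after each guess, the host partitions its remaining candidates by the feedback pattern each would produce, then keeps whichever bucket is largest. A minimal sketch of that idea (the tiny word list and function names are illustrative, not Absurdle’s actual code):

```python
from collections import Counter, defaultdict

def feedback(guess, answer):
    """Wordle-style feedback: 'g' = green, 'y' = yellow, '-' = gray."""
    result = ['-'] * len(guess)
    unmatched = Counter()
    # First pass: mark greens and pool the unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 'g'
        else:
            unmatched[a] += 1
    # Second pass: mark yellows drawn from the unmatched pool.
    for i, g in enumerate(guess):
        if result[i] == '-' and unmatched[g] > 0:
            result[i] = 'y'
            unmatched[g] -= 1
    return ''.join(result)

def adversarial_respond(candidates, guess):
    """Answer with the feedback bucket that keeps the most words alive."""
    buckets = defaultdict(list)
    for word in candidates:
        buckets[feedback(guess, word)].append(word)
    pattern = max(buckets, key=lambda p: len(buckets[p]))
    return pattern, buckets[pattern]

words = ["crane", "slate", "grape", "pride", "bride"]
pattern, remaining = adversarial_respond(words, "crane")
# "pride" and "bride" share a feedback pattern, so the host keeps both
# and forces the guesser to keep narrowing.
```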

Screenshot of absurdle

For those who just want to watch the world burn.
Credit: Screenshot: qntm / Absurdle

2. Dordle

If you’re one of those gifted people who needs only a few guesses to get it right, Dordle is for you. The rules are the same (five-letter words, six guesses, green if the letter is in the right place, yellow if it’s in the word), but there are two words you have to guess. Think of it as the ultimate form of multitasking.

screenshot of Dordle board

For the ambidextrous mind.
Credit: Screenshot: Dordle / Zaratustra Productions

3. Lewdle

It was only a matter of time before someone came up with an NSFW version of Wordle. For those of us who instinctively jump to an inappropriate five-letter word, welcome to Lewdle. Here, you need not get your mind out of the gutter.

screenshot of lewdle board

A safe space to act out your ‘Wordle’ fantasies.
Credit: Screenshot: Lewdle

4. Primel

This is technically a game, but for the mathematically challenged, it’s an instrument of torture. With Primel, the goal is to guess a five-digit prime number instead of a five-letter word. The mere thought of a five-digit prime number is panic-inducing, but to each their own.

screenshot of primel board

I already hate this.
Credit: Screenshot: Primel / Onverged.Yt

5. Sweardle

Similar to Lewdle, Sweardle zeroes in on the human tendency to be inappropriate. The major difference is that it uses four-letter words instead of five. It may sound easy, but you’ll soon realize that the lexicon of colorful language is quite extensive.

Screenshot of sweardle board

Nailed it.
Credit: Screenshot: Sweardle

6. Queerdle

Self-described as the “yassification of wordle,” Queerdle challenges your knowledge of LGBTQ+ vocabulary. Everything is basically the same as the original Wordle, except the answers vary between four and eight letters and are sometimes two words. Why? “Because queerness can’t be contained,” according to the instructions by creator Jordan Bouvier.

Screenshot of queerdle board

Queerdle also takes suggestions for new LGBTQ+ words to include.
Credit: Screenshot: Queerdle / Jordan Bouvier

7. Taylordle

Making a Wordle clone is now becoming requisite for any stan community. And Swifties are nothing if not the ultimate stans. Using the same rules as the original, Taylordle is played with album titles, song lyrics, or really any Taylor Swift trivia (e.g., “scarf”). IYKYK.

screenshot of taylordle board

If there’s a “Blank Space,” you know what to do.
Credit: Screenshot: Taylordle

8. BTS

Not to be outdone by the Taylor Swift fandom, there’s also a version for K-pop band BTS, and the Army is already hooked. The rules of the game are the same, except with BTS-themed vocab. The tiles turn purple instead of green, which is of course a reference to the phrase “I Purple You.”

Screenshot of BTS wordle board

Put your BTS knowledge to the test.
Credit: Screenshot: hannahcode / WORDLE-BTS

9. Wordle Unlimited

If you’ve tackled the original Wordle, and all the Wordle clones, there’s Wordle Unlimited. It’s just like the game we know and love, except with unlimited words, so you don’t have to wait an entire day to play again. Plus, this version has a feature where you can enter a custom word and play with friends. Wordle purists might scoff, but we won’t “JUDGE.”

Screenshot of wordle unlimited

Play to your heart’s content.
Credit: Screenshot: Wordle Unlimited

DeFi world shaken after exploit leads to $322 million crypto hack

When they talk about decentralized finance, they don’t mean decentralized like that.

Over $300 million worth of wrapped ether (wETH) was stolen Wednesday thanks to what appears to be a massive exploit in the DeFi Wormhole protocol. In response, the team behind the protocol — which allows for interaction across different blockchains — temporarily pulled the entire thing down.

“The wormhole network is down for maintenance as we look into a potential exploit,” read a Wednesday afternoon announcement from the Wormhole team. “We will provide updates here as soon as we have them. Thank you for your patience.”

Shortly afterward, the team made clear just how much cryptocurrency was actually stolen.

“The wormhole network was exploited for 120k wETH,” wrote the team. “ETH will be added over the next hours to ensure wETH is backed 1:1. More details to come shortly. We are working to get the network back up quickly.”


At the time of this writing, 120,000 wETH was worth approximately $323 million.

In a plea encoded into the blockchain, the Wormhole team asked the culprit behind the theft to return the money and promised a $10 million bounty in return.

“We’d like to offer you a whitehat agreement,” read the message in part, “and present you a bug bounty of $10 million for exploit details, and returning the wETH you’ve minted. You can reach out to us at contact@certus.one.”

Wormhole was able to patch the vulnerability later in the day, and confirmed it was working to get its network back online.

Meanwhile, those paying attention called out exactly how big of a deal this all is. That’s because, in addition to the obvious issues involved with a theft of this size, what was stolen wasn’t just regular ether. It was wrapped ether. At its most basic level, wETH is a token pegged to the value of ether, and wrapped ether is fundamental to the function of many decentralized applications (known as DApps).
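The “backed 1:1” promise in Wormhole’s statement is the defining invariant of a wrapped token: the total wETH supply must equal the ether locked up to mint it. A toy Python model makes that invariant, and how an unbacked mint breaks it, concrete; the class and method names here are illustrative, since real wrapped ether is an Ethereum smart contract, not Python:

```python
class WrappedEther:
    """Toy wrapped-token ledger: wETH minted must be backed 1:1 by locked ether."""

    def __init__(self):
        self.locked_eth = 0.0   # ether held in reserve
        self.balances = {}      # wETH balance per address

    def deposit(self, addr, amount):
        # Lock ether and mint an equal amount of wETH.
        self.locked_eth += amount
        self.balances[addr] = self.balances.get(addr, 0.0) + amount

    def withdraw(self, addr, amount):
        # Burn wETH and release the ether backing it.
        if self.balances.get(addr, 0.0) < amount:
            raise ValueError("insufficient wETH balance")
        self.balances[addr] -= amount
        self.locked_eth -= amount

    def fully_backed(self):
        # The invariant an unbacked mint (like the exploit) violates.
        return abs(sum(self.balances.values()) - self.locked_eth) < 1e-9

weth = WrappedEther()
weth.deposit("alice", 10.0)
assert weth.fully_backed()

# Minting wETH without locking ether breaks the peg:
weth.balances["attacker"] = 120_000.0
assert not weth.fully_backed()
```

In these terms, Wormhole’s promise to add ETH “to ensure wETH is backed 1:1” amounts to restoring the invariant by topping up the reserve side of the ledger.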

Notably, while begging a thief to return stolen funds may seem like a sign of desperation, that doesn’t mean it won’t work. It was just last August that another DeFi hacker returned a chunk of approximately $600 million in stolen cryptocurrency — minus a cut, of course.

The Wormhole developers just have to hope that their $10 million offer is enough to bring about that kind of luck.

Over 100 apps that sold location data to sketchy data broker revealed

The controversial data broker X-Mode bought location data from Bro, a dating app for “bi, gay, and open-minded men,” the virtual makeup app Perfect365, and the popular live streaming app Tango, along with dozens of other specific phone apps that The Markup has identified as participating in the multibillion-dollar location data trade. 

The Markup obtained a sample dataset consisting of location data X‑Mode purchased in 2018 and 2019. The data was sourced from 107 apps, with more than 50,000 points of location data from more than 20,000 unique advertising IDs collected from 140 countries during that time. About a quarter of the apps are no longer active, and none of the apps appear to contain X‑Mode’s code anymore. X‑Mode has since faced sanctions from the Google and Apple app stores as well as scrutiny from lawmakers and regulators for, among other things, selling location data to military contractors. The data was provided to The Markup by a former X‑Mode employee, and a second former employee of the company confirmed that it appeared authentic.

Generally, location data brokers are loath to disclose the sources of their data, which comes from smartphone applications that ask users to share their location with the apps. The Markup recently identified the family safety app Life360 as one of the biggest suppliers of precise location data, selling data to about a dozen companies, including X‑Mode. And last year, Motherboard reported that X‑Mode purchased location data from the Muslim prayer apps “Muslim Pro,” “Prayer Times: Qibla Compass, Quran MP3 & Azan,” “Qibla Finder: Prayer Times, Quran MP3 & Azan,” and “Qibla Compass—Prayer Times, Quran MP3 & Azan.” Motherboard also revealed that X‑Mode had supplied location data to U.S. military contractors, potentially putting Muslims who used these apps at risk of surveillance. It’s not clear which apps, specifically, benefited military contractors.

While the data The Markup obtained is not up-to-date and doesn’t contain a complete list of apps that supplied location data to X-Mode, it highlights the scale and variety of the location data broker’s sources right before the company faced major public scrutiny following Motherboard’s report. It also shows that X-Mode received location data from more sensitive sources than previously known.

The dataset points to dozens of apps, including four additional Muslim prayer apps that sold location data to X-Mode in 2019: “Qibla Locator: Prayer Times, Azan, Quran & Qibla,” “Full Quran MP3 – 50+ Languages & Translation Audio,” “Al Quran Mp3 – 50 Reciters & Translation Audio,” and “Prayer Times: Qibla & Quran.”

Tango, Perfect365, and the developers of the Muslim prayer apps did not respond to our requests for comment. The Bro App’s founder, Scott Kutler, told The Markup in an email that the company no longer provides X‑Mode with any user location data.

The Markup identified 107 apps that sold data to X‑Mode in 2018 and 2019. The list, given to us by a former employee, shows the variety of apps that sell data on people’s movements.

In August, the intellectual property intelligence firm Digital Envoy acquired X‑Mode and rebranded it as Outlogic. On X‑Mode’s old website—which is still up—the company boasted that more than 400 app publishers supplied the company with people’s exact whereabouts and said that X‑Mode’s data included “25%+ of the Adult U.S. population monthly.” But on Outlogic’s current website, it claims only to have up to “10%+ of the adult U.S. population monthly.”

The new owners said they cut off all U.S. location data going to military contractors, but the company is still involved in the location data industry, albeit on what appears to be a smaller scale.  

Two former X-Mode employees told The Markup that the company’s data collection capabilities were at their peak in 2018 and 2019 and significantly dropped after the public backlash. 

X-Mode, Outlogic, and Digital Envoy did not respond to multiple requests for comment. 

The most popular apps in the sample we reviewed were the live streaming service Tango and Perfect365, a virtual makeup app. Both have a large install base: according to their current Google Play app pages, the Android version of Tango has been installed more than 100 million times, and Perfect365 more than 50 million.

The Markup reached out to all of the app publishers in the dataset for comment. Eight responded: A-Life Software, LLC (“Stock Trainer: Virtual Trading [Stock Markets]”); Difer (“Simple weather & clock widget [no ads]”); Neon Roots (“CatWang”); JRustonApps B.V. (“Guide for Animal Crossing NL,” “My Currency Converter & Rates,” “My Lightning Tracker & Alerts”); New IT Solutions Ltd. (“4shared Mobile”); BroTech LLC (“BRO: Chat, Friends, and Fun”); MOBZAPP (“VoiceFX – Voice Changer with voice effects,” “RecMe Screen Recorder,” “Screen Stream Mirroring”); and YanFlex (“CPlus Classifieds”). 

Each confirmed that they did at one point sell data to X-Mode but have since stopped.

Potentially sensitive apps sold data to X‑Mode 

Experts say that some of the apps that sold location data to X‑Mode potentially compromised sensitive information by doing so. 

Selling data from the Muslim prayer apps could subject those who use them to surveillance, said Jamal Ahmed, the CEO of the privacy consultancy firm Kazient Privacy.

“As Muslim organizations, when you are collecting information or when you are developing technology, you have to uphold that trust … that individuals are handing over to you,” Ahmed said. “You have a moral and religious obligation to do that, especially if you think about how targeted Muslims are around the world right now.”  

Other sensitive apps also sold data to X‑Mode, including Bro, which accesses location data to find other users in the area to connect with.

Eric Silverberg, CEO of the gay dating app SCRUFF, said apps that serve the LGBTQ+ community shouldn’t share or sell such data. 

“Any use of that data beyond that service poses unique and disproportionate risks and threats to any minority community, period. Especially the LGBTQ+ community, because we face unique risks in places all over the world, and in the United States,” he said. 

Bro’s Kutler said that all location data that the dating app shared with X‑Mode was “100% anonymized” but stopped giving the broker its users’ data after learning that location data could be de-anonymized.



Researchers have found that even with anonymized datasets, you can identify a person through location data with as few as four data points.
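That finding comes from research on the uniqueness of human mobility traces, and it’s easy to see in miniature: points that are individually common can, combined, match exactly one “anonymous” ID. A toy sketch with entirely made-up places, hours, and IDs:

```python
# Hypothetical anonymized traces: advertising ID -> set of (place, hour)
# observations. Real broker data uses lat/lon coordinates and timestamps.
traces = {
    "id_001": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "id_002": {("cafe", 8), ("office", 9), ("park", 18), ("home", 23)},
    "id_003": {("bakery", 8), ("office", 9), ("gym", 18), ("home", 22)},
}

def matching_ids(known_points):
    """IDs whose trace contains every point the attacker already knows."""
    return [uid for uid, trace in traces.items() if known_points <= trace]

# One known point matches everyone in this tiny dataset...
assert len(matching_ids({("office", 9)})) == 3

# ...but just two points already single out one "anonymous" ID.
assert matching_ids({("cafe", 8), ("gym", 18)}) == ["id_001"]
```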

“Discovering that third-party brokers could even attempt to use information like a person’s home address to try to de-anonymize our data, we decided it wasn’t worth the risk to our users’ privacy (or trust) to continue working with X-Mode,” Kutler said.

X-Mode sent multiple emails to Silverberg, which he provided to The Markup, in 2017 and 2018, offering at least $100,000 annually for SCRUFF’s user data.

“Since your company is already collecting location data, you might be interested in adding X‑Mode’s revenue of at least $100,000 annually (Based on your apptopia numbers) on top of what you are already making,” X‑Mode’s pitch email in September 2018 said. 

Silverberg said he has consistently ignored the offers.

Last July, a high-ranking Catholic priest resigned after a media outlet used location data to link the priest to a gay dating app and tracked his visits to gay bars. There’s no indication that X‑Mode was involved in the incident. 

Sean O’Brien, the lead researcher at the Yale Privacy Lab, has uncovered several other LGBTQ dating apps that sold location data to X-Mode by looking for apps that used X‑Mode’s SDK. (An SDK, which stands for Software Development Kit, is a tool embedded into apps that can be used for data collection.) App developers would install X-Mode’s SDK so the location data broker could collect information directly in exchange for payouts.

In 2020, O’Brien scanned the Google app store and found that the apps “Wapo: Gay Dating,” “Wapa: Lesbian Dating, Find a Match & Chat to Women,” “MEET MARKET – Gay Dating App. Chat & Date New Guys” and “FEM – Free Lesbian Dating App. Chat & Meet Singles” also had X‑Mode’s tracking code embedded. None of them do anymore, he said. 

The publishers of these apps, Mingle and Wapo y Wapa Ltd., did not respond to a request for comment. 

There are other ways for apps to give data to location data brokers, even without the SDKs. Life360, for instance, provides data brokers with location data directly through its own servers, as The Markup previously reported. 

Two former X‑Mode employees told The Markup that the company received more data from direct server transfers than from SDKs. 

This method would be more difficult for researchers like O’Brien to detect. All of the data in the sample we reviewed appears to be collected directly from mobile devices via the SDK.

It can be difficult for app stores like Apple’s and Google’s to detect and monitor such sales, according to The Wall Street Journal. Apple and Google said certain types of user data sales are prohibited, regardless of how the data is collected and received.

“We do not allow apps to surreptitiously build user profiles based on collected user data. Apps found to be using the X-Mode SDK are required to remove it or risk removal from the App Store altogether,” Apple spokesperson Adam Dema said in an email.

“Google Play’s policy explicitly prohibits apps that collect sensitive and personal user data from selling it,” Google spokesperson Scott Westover said in an email.

Neither company answered questions on how it detects and enforces against server-to-server based transfers. 

Developing business

A former employee at X-Mode told The Markup that sales team members were each responsible for bringing in new sources of location data. Each team member’s annual goal was set at one million new combined users from apps, the ex-employee said.

Often, that included reaching out to app developers with charts showing how much they could make based on their user count and a pitch deck showing how the data was used for targeted advertising. 

The Markup reviewed an X‑Mode pitch deck sent to Silverberg in 2017. It highlighted that X‑Mode sold location data for advertising purposes. 

Three of the developers who sold data to X-Mode said they ended their partnerships after learning about the military relationship. For them, working with X‑Mode mostly represented a simple way to monetize their apps. 

Anuj Saluja, the developer behind the app “Stock Trainer: Virtual Trading,” said he stopped sharing location data with X‑Mode in September 2019 and that he had received $800 to $1,000 a month from the data broker.

“Being an indie app, at the time X‑Mode was ~25% of my revenue. So financially it was a hard decision to exclude X‑Mode from my app, but I think I did the right thing by my app’s users. My app doesn’t need to know or care about users’ location,” the developer said in an email.

Daniel Fortuna, the developer of the app “Simple weather & clock widget (no ads),” also stopped supplying X‑Mode with location data once he learned about privacy concerns from Google. 

“We have stopped partnership with XMode more than a year ago after we learned XMode resold its data to certain partners,” Flex Yan, the developer of the app “CPlus Classifieds Marketplace” said in an email. 

The sales team was also responsible for selling location data to potential buyers, with annual revenue goals of $500,000 to $800,000, according to a former X‑Mode employee.

The sales to the military could make up a good portion of those goals, as public records show. In 2019, X‑Mode sold location data to the Air Force for $283,125 and in 2020, for $140,000. 

While X-Mode didn’t explicitly tell publishers that their location data could end up with the military, Kazient Privacy’s Ahmed said publishers should have been more responsible with people’s data.

“If they are going to monetize and sell that, they should understand what is actually happening with this information, and is this being used against the people who I’m trying to offer a service to?” Ahmed said.


This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

Facebook records a drop in daily users for the first time ever

After years of scandals, congressional hearings, and generally bad vibes, it seems like people are finally (and slowly) leaving Facebook behind.

Meta, the new name for the parent company that encompasses Facebook, Instagram, WhatsApp, and Oculus, released its earnings report for the fourth quarter of 2021 on Wednesday with one fairly shocking revelation hidden in the mountain of business-speak. Daily active users on Facebook dropped very slightly between the final two quarters of last year, from 1.93 billion to 1.929 billion.

That’s a change so minuscule that it’s a little tough to notice at first glance when looking at Meta’s earnings slides, but it’s there. The drop primarily happened in a vague “Rest of World” category, which basically means Latin America and Africa. CNBC confirmed that it was the first such quarterly drop in daily users on record for Facebook; the figure also fell short of the 1.95 billion mark that analysts projected for the quarter.

Of course, there could be countless reasons why this specific drop in users occurred where and when it did. Without baselessly speculating, it’s mostly interesting because of how popular it’s become to call for people to divorce themselves from the platform in recent years. Between rampant COVID misinformation (and lax measures to prevent it), myriad scams, and the notion that the site allows harmful posts to propagate simply because they’re popular, anti-Facebook sentiment in public discourse seems to be at an all-time high.

Plus, it seems like everyone has experienced the demoralizing feeling of arguing with a loved one about politics on the site. That, alone, is enough to make plenty of people give it up. Regardless of the reasoning for the drop, Facebook’s next quarterly report just got a heck of a lot more interesting.

Huddle is Slack’s messiest feature

Nothing puts the “mess” in “messaging platform” like Slack Huddles.

The popular workplace communication platform released “Huddles” in 2021 as part of a collection of tools designed to improve remote work. The feature lets people quickly and easily start live audio conversations within a Slack DM, group message, or channel, sort of like a phone call. 

Slack says Huddles are “particularly useful when you want to discuss a complex topic on the fly without having to negotiate busy calendars, and want a break from being on camera.” You can chat with up to 50 participants in a Slack Huddle (which sounds like utter chaos) and you’re able to share your screen with others while using the tool (which sounds genuinely helpful). I’m personally a fan of the feature and frequently use it to converse with my own colleagues.

I think Slack Huddles offer a fresh and fun way to communicate with others. There’s just one problem: They’re messy as hell.

SEE ALSO:

1 underused Slack feature that will make your work life easier

In order to explain what I mean by “messy,” I need to walk you through the Huddle process. You see, when a Slack user starts or joins a Huddle, Slack automatically updates their status to “In a Huddle” and places a headphone emoji beside their Slack name, presumably to let others know they’re busy. In theory this is a kind, sensible gesture. But in reality, it informs everyone in their Slack workspace of their personal business. For Mashable’s Slack, which includes various brands owned by our parent company, that means more than 4,000 people can see if you’re in a Huddle. Yikes.

Slack doesn’t actually alert other people outside of your Huddle when you join one or let outsiders know who’s part of your Huddle. But as a curious journalist who DMs many different colleagues throughout the course of a day, I’ve learned it’s easy to figure out which colleagues are Huddling together. You just have to keep an eye out for that headphone emoji. It reveals all.

When Huddles get hectic

Every Tuesday I have a weekly Huddle with my manager to check in. I feel safest in regular, pre-scheduled Huddles like these because I know nobody is wondering why we’re chatting or why the Huddle is taking a full 30 minutes. It’s the spontaneous Huddles that unleash chaos.

Once you lay eyes on that headphone emoji and realize a Huddle is taking place without you, it’s hard not to wonder who’s part of it or what’s being discussed. I’ve had inquisitive colleagues slide into my DMs to ask, “Who are you Huddling with?” after they peeped the headphone emoji beside my name, and my own brain admittedly goes into overdrive whenever a Huddle happens in my line of sight.

Think about it: Huddle possibilities are endless.

If your boss sees you and your Work BFF Huddling for an hour during the day will they think you’re brainstorming ideas and talking through assignments or will they assume you’re goofing off? If you see someone randomly Huddling with their boss, what does that mean? Bad news? Is something wrong? No, wait. Good news? Are they getting promoted?! And if you clock your friends Huddling without you, are they, say, planning a birthday surprise on your behalf? Or have they suddenly decided they hate you and don’t want you to be part of their live audio chats anymore?


It’s the spontaneous Huddles that unleash chaos.

In a sense, Slack Huddles make private conversations public — not their contents, but that they’re happening. In the same way I don’t think read receipts should exist, I don’t think anyone should be able to see which colleagues are having private conversations in real-time. It’s TMI!

How to hide your Huddle status

As we’ve established, Slack’s Huddle feature itself isn’t the problem. The tool offers a more convenient, immediate, and interesting way to communicate with colleagues than tired video chats or phone calls, and when used properly, Huddles can be great. The fact that other people in your workspace can see you’re in a Huddle and easily determine who you’re Huddling with is the issue.

If you still want to use Slack’s Huddle feature but don’t want anyone to know your business, there’s a solution. Hiding your Slack status only takes a few simple steps, but please don’t read them if you’re a colleague of mine, because I still want to imagine the who, what, and why of your Huddles. Thank you.

Next time you want to Huddle in secret, here’s what to do:

  • Once you’ve started a Huddle or joined someone else’s Huddle, click on your profile photo located in the upper righthand corner of your Slack app or desktop window.

  • Selecting “Clear Status” from the dropdown menu is the quickest way to clear the automatically enabled “In a huddle” status and headphone emoji from beside your Slack name.

A screenshot of the Slack app's Status settings page with arrows pointing to two "Clear" buttons.

Clear! That! Status!
Credit: SCREENSHOT: SLACK

  • If you want to remain in a Huddle and don’t want people to disturb you, but you also don’t want them to know you’re in a Huddle, you can manually change your status (and the headphone emoji) to whatever you want. To do this, click on your profile photo located in the upper righthand corner of your Slack app or desktop window. Then click “In a huddle,” followed by the X (Clear all) button beside the status. Once you clear your Huddle status, you can set a new status and corresponding emoji that will appear beside your Slack name, all while remaining in your Huddle.

Hiding your Huddle status using the Slack app:

  • Once the Slack mobile app is open and you’ve navigated to the DM or channel you want to Huddle in, you can start a Huddle by clicking the headphone icon in the upper righthand corner of your screen.

Splitscreen screenshots of Slack's mobile app. The left image shows a user starting a Huddle. The right image shows an arrow pointing to the "Clear Status" button on the user's Settings page.

Hide your hangouts when you’re on the go, too.
Credit: MASHABLE COMPOSITE: SCREENSHOT / SLACK

  • To clear or manually change your automatically enabled Huddle status, select the “You” tab located in the lower righthand corner of the Slack mobile app screen and tap the X button beside your “In a Huddle” status. Once you clear your Huddle status, you’ll also have the option to set a new status and corresponding emoji, which will appear beside your Slack name. You can do all of these steps while remaining in your Huddle.
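If you'd rather not click through menus every time, the same status fields can be written programmatically. Slack's Web API has a `users.profile.set` method that sets (or blanks) your status text and emoji, which is effectively what "Clear Status" does in the UI. This is a minimal sketch, assuming you have a user token with the `users.profile:write` scope; the token value and whether your workspace allows such apps are up to your admins.

```python
import json
import urllib.request

# Real Slack Web API endpoint for updating a user's profile/status.
SLACK_API_URL = "https://slack.com/api/users.profile.set"


def clear_status_payload() -> dict:
    """Build the profile payload that blanks status text and emoji,
    mirroring the 'Clear Status' button in the Slack app."""
    return {
        "profile": {
            "status_text": "",
            "status_emoji": "",
            "status_expiration": 0,
        }
    }


def clear_status(token: str) -> bool:
    """POST the payload to Slack. `token` must be a user token with the
    users.profile:write scope (an assumption; check your app's config).
    Returns Slack's 'ok' flag from the JSON response."""
    req = urllib.request.Request(
        SLACK_API_URL,
        data=json.dumps(clear_status_payload()).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("ok", False)
```

Note that this clears whatever status is currently set, Huddle or otherwise, so you'd run it after joining the Huddle, just like the manual steps above.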

Happy secret Slack Huddling, everyone.