Tech
TikTok seals deal to launch new US entity
TikTok has finalized an agreement to create a new American entity, easing years of uncertainty and sidestepping the prospect of a US ban on the short-video platform used by more than 200 million Americans.
In a statement issued Thursday, the company said it has signed deals with major investors, including Oracle, Silver Lake and Abu Dhabi-based investment firm MGX, to form a TikTok US joint venture. TikTok said the new version will operate with “defined safeguards” aimed at protecting US national security, including strengthened data protections, algorithm security, content moderation and software assurances for American users. The company said users in the United States will continue using the same app.
President Donald Trump welcomed the announcement in a post on Truth Social, publicly thanking Chinese President Xi Jinping and saying he hoped TikTok users would remember him for keeping the platform available.
China has not publicly commented on TikTok’s announcement. Earlier on Thursday, Chinese Embassy spokesperson Liu Pengyu said Beijing’s position on TikTok remained “consistent and clear.”
TikTok said the new US venture will be led by Adam Presser, a former top executive who previously oversaw operations and trust and safety. The entity will have a seven-member board that the company said will be majority American, and it will include TikTok CEO Shou Chew.
The deal follows years of political and regulatory pressure in Washington over national security concerns tied to TikTok’s Chinese parent company, ByteDance. A law passed by large bipartisan majorities in Congress and signed by then-President Joe Biden required TikTok to change ownership or face a US ban by January 2025. TikTok briefly went offline ahead of the deadline, but Trump later signed an executive order on his first day in office to keep the service running while negotiations continued.
TikTok said US user data will be stored locally through a system run by Oracle, while the new joint venture will also focus on the platform’s content recommendation algorithm. Under the plan, the algorithm will be retrained, tested and updated using US user data.
The algorithm has been central to the debate, with China previously insisting it must remain under Chinese control. The US law, however, said any divestment must sever ties with ByteDance, particularly regarding the algorithm. Under the new arrangement, ByteDance would license the algorithm to the US entity for retraining, raising questions about how the plan aligns with the law’s ban on “any cooperation” involving the operation of a content recommendation algorithm between ByteDance and a new US ownership group.
“Who controls TikTok in the U.S. has a lot of sway over what Americans see on the app,” said Anupam Chander, a law and technology professor at Georgetown University.
Under the disclosed ownership structure, Oracle, Silver Lake and MGX will serve as the three managing investors, each taking a 15% stake. Other investors include the investment firm of Dell Technologies founder Michael Dell. ByteDance will retain 19.9% of the joint venture.
1 hour ago
Musk’s Starlink faces new competition from Bezos’ Blue Origin satellite network
Amazon founder Jeff Bezos’ rocket company, Blue Origin, plans to launch over 5,400 satellites to build a new global internet network named TeraWave.
The network will provide continuous internet access worldwide and transfer large amounts of data faster than rival services. Blue Origin said TeraWave will focus on businesses, data centres, and governments, unlike Elon Musk’s Starlink, which serves individual customers.
At its fastest, TeraWave will offer upload and download speeds up to 6 terabits per second, far exceeding current commercial satellite services.
Blue Origin aims to start launching the satellites by the end of 2027. The company has previously achieved a rocket booster landing on a floating platform and conducted an 11-minute all-female space flight.
Amazon also runs a satellite project called Leo, with around 180 satellites in orbit. Leo focuses more on public internet access and plans to launch over 3,000 satellites eventually.
The TeraWave project will compete with Starlink and Amazon Leo in the growing satellite internet market.
With inputs from BBC
1 day ago
Snap settles social media addiction lawsuit ahead of trial
Snapchat’s parent company, Snap, has reached a settlement in a high-profile social media addiction lawsuit just days before the case was set to go to trial in Los Angeles.
The settlement terms were not disclosed. At a California Superior Court hearing, lawyers confirmed the resolution, and Snap told the BBC that both parties were “pleased to have been able to resolve this matter in an amicable manner.”
Other tech giants named in the lawsuit, including Instagram owner Meta, TikTok parent ByteDance, and YouTube owner Alphabet, have not settled.
The lawsuit was filed by a 19-year-old woman, identified only by her initials K.G.M., who claimed that the platforms’ algorithmic designs left her addicted and negatively impacted her mental health.
With Snap now settled, the trial will proceed against Meta, TikTok, and Alphabet, with jury selection scheduled for 27 January. Meta CEO Mark Zuckerberg is expected to testify, while Snap CEO Evan Spiegel was slated to appear before the settlement.
Meta, TikTok, and Alphabet did not respond to BBC requests for comment regarding Snap’s settlement.
Snap remains a defendant in other consolidated social media addiction lawsuits. Legal experts say the cases could test a long-standing defense used by social media companies, which relies on Section 230 of the Communications Decency Act of 1996 to avoid liability for content posted by third parties.
Plaintiffs argue that the platforms are intentionally designed to foster addictive behavior through algorithms and notifications, contributing to mental health issues such as depression and eating disorders. Social media companies maintain that the evidence presented so far does not establish responsibility for these alleged harms.
With inputs from BBC
1 day ago
Can AI teach humans to become better listeners?
Artificial intelligence chatbots such as ChatGPT are increasingly being used not only for information and advice, but also for emotional support and companionship, raising new questions about what machines can teach humans about listening better.
Anna, a Ukrainian living in London, says she regularly uses the premium version of ChatGPT because of its ability to listen without interrupting or judging her. While she knows it is only a machine, she says its patient and consistent responses help her reflect on her thoughts and emotions.
“I can rely on it to understand my issues and communicate with me in a way that suits me,” she said, asking to remain anonymous. After a recent breakup, Anna said the chatbot’s non-judgmental presence allowed her to explore her mixed feelings in a way her friends and family could not.
Her experience reflects a growing trend. Research cited by Harvard Business Review shows that in 2025, therapy and companionship became the most common use of generative AI tools such as ChatGPT. Other studies suggest that people often rate AI-generated responses as more compassionate and understanding than those written by humans, including trained crisis hotline workers.
Researchers say this does not mean AI is genuinely empathetic, but rather that many people rarely experience truly non-judgmental and uninterrupted listening in everyday life. Experiments have found that people often feel more hopeful and less distressed after interacting with AI-generated responses compared to human ones.
Large language models are designed to recognise emotions, reflect them back and offer supportive language. They do not interrupt, do not become impatient and do not try to dominate conversations. This creates a sense of psychological safety for users, allowing them to share difficult thoughts more freely.
Experts say there are several lessons humans can learn from AI about listening, including giving uninterrupted attention, acknowledging emotions, avoiding quick judgments and resisting the urge to immediately offer solutions.
Psychologists also note that people often turn conversations back to themselves by sharing similar personal stories, which can shift attention away from the speaker. AI systems, having no personal experiences, do not fall into this habit.
However, researchers warn against over-reliance on AI for emotional support. While chatbots can simulate empathy, they do not possess genuine care or understanding. There are also concerns about vulnerable people forming emotional dependence on AI or being exposed to harmful advice.
Michael Inzlicht, a psychologist at the University of Toronto, cautioned that AI companies could potentially manipulate users and that excessive reliance on chatbots could weaken real human connections.
Despite these risks, experts say AI can still serve as a useful tool for inspiring better listening habits and greater compassion among people.
“There is something uniquely meaningful about a human choosing to be present and listen,” researchers say, adding that while AI may help people feel heard, it cannot replace the depth of real human connection.
With inputs from BBC
2 days ago
UK to consult on possible social media ban for under-16s
The UK government has announced plans to consult on whether social media use should be banned for children under 16, alongside steps to tighten controls on mobile phone use in schools.
As part of “immediate action”, Ofsted will be given authority to review schools’ phone-use policies during inspections, with schools expected to become “phone-free by default”. Staff may also be advised not to use personal devices in front of students.
The move follows growing political and public pressure, including a letter from more than 60 Labour MPs and calls from Esther Ghey, the mother of murdered teenager Brianna Ghey. “Some argue that vulnerable children need access to social media to find their community,” she wrote. “As the parent of an extremely vulnerable and trans child, I strongly disagree. In Brianna's case, social media limited her ability to engage in real-world social interactions.”
The Department of Science, Innovation and Technology said the consultation will “seek views from parents, young people and civil society” and assess stronger age-verification measures. It will also consider limiting features that “drive compulsive use of social media”. The government is expected to respond in the summer.
Technology Secretary Liz Kendall said existing online safety laws were “never meant to be the end point”, adding: “We are determined to ensure technology enriches children's lives, not harms them and to give every child the childhood they deserve.”
Opposition parties and education unions offered mixed reactions. Conservative leader Kemi Badenoch criticised the move as “more dither and delay”, while Liberal Democrats warned the consultation could slow action. Teaching unions broadly welcomed the shift but raised concerns about Ofsted’s role and the wider impact of screen time.
Read More: Australia cracks down on child social media use, 4.7 million accounts taken down
The issue is also being debated in the House of Lords, though experts and child safety organisations remain divided on whether age-based bans are effective.
3 days ago
OpenAI tests adverts on ChatGPT for free-tier and new Go users
OpenAI will start showing ads on ChatGPT for some users in the United States, the company announced.
The trial will cover free users and subscribers to a new lower-cost tier, ChatGPT Go, which costs $8 per month. OpenAI said the ads will appear after responses to prompts, such as holiday suggestions, and will not change the AI’s answers.
OpenAI stressed that user conversations will not be shared with advertisers. The company said ads are being tested so more people can use its tools with fewer limits.
Experts say the move is part of OpenAI’s effort to earn revenue: the company has not yet made a profit despite having 800 million users, only 5% of whom are paid subscribers. ChatGPT already offers Plus and Pro tiers, costing $20 and $200 per month in the US.
OpenAI first introduced ChatGPT Go in India in 2025 before expanding globally. The company began as a non-profit but is now more commercially focused.
With inputs from BBC
4 days ago
Uganda back online after five days
Uganda restored internet services on Sunday after a five-day nationwide shutdown imposed during the general elections, a move authorities said was intended to curb the misuse of online platforms.
Ibrahim Bbosa, a spokesperson for the Uganda Communications Commission, confirmed the restoration. "Yes, the internet is back," Bbosa told Xinhua. Telecommunications companies also sent messages to subscribers notifying them that services had resumed.
The restoration followed the announcement on Saturday that incumbent President Yoweri Museveni had won the 2026 presidential election, securing more than 7.9 million votes out of about 11.3 million valid ballots cast.
4 days ago
ChatGPT's free ride is ending: OpenAI plans for advertising on the chatbot
OpenAI announced Friday that it will begin showing advertisements to users of the free version of ChatGPT in the coming weeks, part of the company’s effort to generate revenue from its over 800 million users.
The ads will appear at the bottom of ChatGPT’s responses when relevant to the ongoing conversation and will be clearly labeled and separated from the AI’s answers. CEO Fidji Simo emphasized that the ads will not influence ChatGPT’s responses.
The company, valued at $500 billion, currently spends more on operations than it earns. Paid subscriptions cover some costs, but OpenAI faces over $1 trillion in obligations for chips, servers, and data centers that power its AI services.
OpenAI framed the advertising move as consistent with its mission to ensure AI benefits humanity, even as experts warn of potential risks. Miranda Bogen of the Center for Democracy and Technology noted that introducing personalized ads could erode trust, since users often rely on chatbots for advice and companionship.
OpenAI claims it will not use personal data or chat prompts for ad targeting, though analysts caution about the long-term implications. Paddy Harrington of Forrester said, “Free services are never actually free… if the service is free, you’re the product.”
The rollout will position OpenAI alongside competitors like Google and Meta, which already incorporate ads into AI-driven services. A formal testing phase for the ads is expected in the coming weeks, as OpenAI explores new ways to monetize its popular chatbot while maintaining user trust.
5 days ago
Musk AI company faces lawsuit over sexually explicit deepfake images
The mother of one of Elon Musk’s children has filed a lawsuit against his artificial intelligence company, claiming its Grok chatbot was used to create sexually explicit fake images of her, causing humiliation and emotional trauma.
Ashley St. Clair, 27, a writer and political strategist, filed the case on Thursday in New York City against xAI. In the lawsuit, she alleged that Grok allowed users to generate manipulated images portraying her in sexualized ways. These reportedly include a photo of her at age 14 that was altered to show her in a bikini, as well as other images depicting her as an adult in explicit poses and wearing a bikini with swastikas. St. Clair is Jewish. Grok operates on Musk’s social media platform X.
Lawyers for xAI did not immediately respond to requests for comment on Friday. When asked about the lawsuit, the company replied to The Associated Press with a brief statement saying, “Legacy Media Lies.”
St. Clair said she reported the fake images to X after they began circulating last year and asked for their removal. She claimed the platform initially said the images did not violate its policies. Later, X assured her that her images would not be used or altered without consent, she said.
However, St. Clair alleged that the platform later retaliated by canceling her premium subscription and verification badge, blocking her ability to earn income from her account, which has about one million followers, and continuing to allow the altered images to circulate.
In court documents, St. Clair said she has suffered severe mental distress and humiliation because of xAI’s role in creating and spreading the images. She also said she fears the people who view the fake content.
St. Clair, who lives in New York City, is the mother of Musk’s 16-month-old son, Romulus. She is seeking an undisclosed amount in damages, along with court orders to stop xAI from allowing further fake images of her.
Later on Thursday, xAI moved the case to federal court in Manhattan and also filed a countersuit in a Texas federal court, claiming St. Clair violated user agreement terms that require lawsuits to be filed in Texas. The company is seeking an unspecified monetary judgment.
X is based in Texas, where Musk owns a home and where Tesla is headquartered in Austin.
St. Clair’s lawyer, Carrie Goldberg, described the countersuit as highly unusual and said her client would strongly contest the move, arguing that xAI’s technology enables harmful and unsafe content.
Read More: Grok AI banned from editing real people in revealing photos
Earlier this week, X announced new safeguards for Grok, including limits on image editing and stricter rules against sexual exploitation and nonconsensual content.
6 days ago
Australia cracks down on child social media use, 4.7 million accounts taken down
Social media platforms have taken down about 4.7 million accounts identified as belonging to children in Australia since the country enforced a ban on under-16s using major platforms, officials said.
Communications Minister Anika Wells said the government had proven critics wrong by compelling some of the world’s biggest tech companies to comply. “Now Australian parents can be confident their kids can have their childhoods back,” she told reporters on Friday.
The figures, submitted to the government by 10 platforms, offer the first indication of the impact of the landmark law, which came into force in December amid concerns about harmful online environments for young people. The move triggered heated debate over technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
Under the law, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, YouTube and Twitch can be fined up to A$49.5 million ($33.2 million) if they fail to take reasonable steps to remove accounts of Australian users under 16. Messaging services such as WhatsApp and Facebook Messenger are exempt.
Platforms can verify age by requesting identification, using third-party facial age-estimation tools, or drawing inferences from existing account data, such as how long an account has been active.
Australia’s eSafety Commissioner Julie Inman Grant said about 2.5 million Australians are aged 8 to 15 and previous estimates showed 84% of 8- to 12-year-olds had social media accounts. While it is unclear how many accounts existed across the 10 platforms, she said the 4.7 million “deactivated or restricted” accounts was an encouraging sign.
“We’re preventing predatory social media companies from accessing our children,” Inman Grant said, adding that the companies covered by the ban had complied and reported removal figures on time. She said enforcement would now focus on stopping children from creating new accounts or evading the restrictions.
Read more: Wikipedia turns 25, announces AI partnerships with tech giants
Australian officials did not release platform-by-platform numbers. However, Meta, which owns Facebook, Instagram and Threads, said it removed nearly 550,000 accounts believed to belong to under-16s by the day after the ban took effect. In a blog post, Meta criticised the policy and warned that smaller platforms not covered by the ban might not prioritise safety.
The law has been widely backed by parents and child-safety advocates, though privacy groups and some youth organisations oppose it, arguing that vulnerable or geographically isolated teenagers find support online. Some young users say they have bypassed age checks with help from parents or older siblings.
6 days ago