<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:yandex="http://news.yandex.ru" xmlns:turbo="http://turbo.yandex.ru" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>ARTIFICIAL INTELLIGENCE</title>
    <link>https://smarttimes.net</link>
    <description/>
    <language>ru</language>
    <lastBuildDate>Wed, 01 Apr 2026 16:27:35 +0300</lastBuildDate>
    <item turbo="true">
      <title>FROM BOOTSTRAPPED TO BREAKOUT: HOW BETTERPIC RAISED $2.5M AFTER HITTING $3M REVENUE</title>
      <link>https://smarttimes.net/tpost/h0ps6pzx51-from-bootstrapped-to-breakout-how-better</link>
      <amplink>https://smarttimes.net/tpost/h0ps6pzx51-from-bootstrapped-to-breakout-how-better?amp=true</amplink>
      <pubDate>Mon, 18 Aug 2025 14:53:00 +0300</pubDate>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild3262-3434-4332-a234-636164333661/Screenshot_2025-08-1.png" type="image/png"/>
      <description>While U.S. tech giants like OpenAI, Anthropic, and Google build large foundational models, European innovators are increasingly carving out space in AI’s “edge cases” — practical, industry-specific tools that solve high-cost, high-friction problems. </description>
      <turbo:content><![CDATA[<header><h1>FROM BOOTSTRAPPED TO BREAKOUT: HOW BETTERPIC RAISED $2.5M AFTER HITTING $3M REVENUE</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3262-3434-4332-a234-636164333661/Screenshot_2025-08-1.png"/></figure><div class="t-redactor__text"><em>While U.S. tech giants like OpenAI, Anthropic, and Google build large foundational models, European innovators are increasingly carving out space in AI’s “edge cases” — practical, industry-specific tools that solve high-cost, high-friction problems. One of the clearest examples is </em><strong><em>BetterPic</em></strong><em>, a European AI imaging startup that has gone from bootstrapped profitability to securing $2.5 million in seed funding to scale its fashion and photography ecosystem.</em><br /><br /><strong>AI that replaces the $50,000 photoshoot</strong><br /><br />BetterPic enables users to generate professional headshots from just a couple of selfies — or create high-quality product photos using real and digital models. With its <strong>BetterStudio</strong> product, brands can reduce costs by up to 100x compared to traditional fashion shoots, where expenses for photographers, models, stylists, studios, and equipment can easily range between $10,000 and $50,000.<br /><br />In just 18 months, BetterPic hit <strong>$3.2 million in annual revenue</strong> without external capital, an exceptional feat in the startup world. More than <strong>32 million professional photos</strong> have already been generated on its platform, including work for <strong>Fortune 500 clients</strong>.<br /><br />BetterGroup, the parent company of BetterPic and BetterStudio, is headquartered in Belgium. 
Founded by Ricardo Ghekiere and Miguel Rasero, it is one of Europe’s fastest-growing AI challengers.<br /><br /><strong><em>“Achieving multi-million dollar revenue without external capital is extremely rare, and it’s a testament to our focus on product-market fit, sustainable growth, and customer obsession,” said Ricardo Ghekiere, Founder and CEO of BetterPic. “Now, with this investment, we can double down on scaling our B2B offerings — while keeping our strong margins and customer success intact.”</em></strong></div><img src="https://static.tildacdn.com/tild3263-6235-4737-b164-643839633339/Screenshot_2025-08-1.png"><div class="t-redactor__text"><strong>Backed by fashion and tech leaders</strong><br /><br />The $2.5 million round was led by <strong>MOC Capital</strong> and <strong>Shilling VC</strong>, with support from a range of notable angels across tech and fashion: Louis Jonckheere (Showpad, Wintercircus Ghent), Matthias Geeroms (Lighthouse), Joris Van Der Gucht (Ravical), Hyperson (AI agency), and <strong>Severine Nijs</strong> (fashion veteran and founder of Jackie Lee Modeling Agency).<br /><br />Nijs, an early pioneer in digital modeling, emphasized that BetterPic is not replacing humans but extending opportunity:<br /><br />“We’re not replacing models. We’re creating more access for creators, brands, and individuals to participate in fashion without traditional gatekeepers. 
All of this is done within a clear legal framework that protects the real person behind every digital presence.”<br /><br />Her agency, which represents more than 2,000 models, is now experimenting with <strong>BetterModels</strong> — Verified AI twins of real models, built under ethical and legal standards.<br /><br /><strong>A European challenger to Big Tech</strong><br /><br />According to investors, BetterPic represents a new guard of European AI startups.<br /><br /><strong><em>“There's a real changing of the guard happening in AI, and it's Europe's time,” said Marcin Zabielski of MOC Capital. “BetterPic has a Silicon Valley-level product with European DNA — pragmatic innovation that reaches profitability fast and solves real industry pain points.”</em></strong><br /><br />By merging real and digital models, BetterPic is pioneering the world’s first AI-powered platform that integrates Verified AI twins into professional fashion shoots. The implications for fashion are huge: designers can test outfits virtually, retailers can generate campaigns instantly, and individuals can access professional portraits without a traditional shoot.<br /><br /><strong>The bigger picture: AI and Smart Fashion</strong><br /><br />What makes BetterPic particularly interesting is how it aligns with the broader <strong>Smart Fashion economy</strong> — where AI, blockchain, and digital identity converge to create new value chains in fashion. BetterPic’s tech cuts production costs, decentralizes access, and empowers both professionals and everyday users, proving that fashion is one of AI’s most commercially viable frontiers.</div><img src="https://static.tildacdn.com/tild3030-3463-4964-a535-633030363164/Screenshot_2025-08-1.png">]]></turbo:content>
    </item>
    <item turbo="true">
      <title>BLNG AI SECURES $3 MILLION SEED FUNDING TO REVOLUTIONIZE JEWELRY DESIGN</title>
      <link>https://smarttimes.net/tpost/7b338k5ze1-blng-ai-secures-3-million-seed-funding-t</link>
      <amplink>https://smarttimes.net/tpost/7b338k5ze1-blng-ai-secures-3-million-seed-funding-t?amp=true</amplink>
      <pubDate>Fri, 22 Aug 2025 07:34:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <enclosure url="https://static.tildacdn.com/tild6334-3930-4662-a638-346131313330/Screenshot_2025-08-2.png" type="image/png"/>
      <description>Headquartered in Los Angeles, Blng AI also maintains a presence at Paris’ Station F tech hub through LVMH’s accelerator program. </description>
      <turbo:content><![CDATA[<header><h1>BLNG AI SECURES $3 MILLION SEED FUNDING TO REVOLUTIONIZE JEWELRY DESIGN</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6334-3930-4662-a638-346131313330/Screenshot_2025-08-2.png"/></figure><div class="t-redactor__text"><em>Headquartered in Los Angeles, </em><strong><em>Blng AI</em></strong><em> also maintains a presence at Paris’ Station F tech hub through LVMH’s accelerator program. The pioneering sketch-to-design generative AI and virtual studio platform for jewelry has raised $3 million in seed funding. The funding round was spearheaded by </em><strong><em>Speedinvest</em></strong><em>, a prominent venture capital firm, with additional backing from </em><strong><em>Cove Fund</em></strong><em>, </em><strong><em>eSeed</em></strong><em>, and </em><strong><em>Focal</em></strong><em>. The fresh capital will fuel Blng AI’s expansion plans, including scaling its teams across Europe and the United States and bolstering production capacity to meet growing demand from luxury brands and independent jewelers alike.</em><br /><br />Founded in 2023 by <strong>Valérie Leblond</strong> and <strong>Dumëne Comploi</strong>, Blng AI is on a mission to transform the jewelry design process by leveraging cutting-edge AI tools. The platform eliminates the need for labor-intensive manual revisions and renderings, enabling designers to seamlessly convert sketches into production-ready designs. Its suite of offerings includes three standout AI-powered solutions: instant design visualization, high-fidelity marketing content generation that bypasses traditional photo shoots, and real-time interactive customization experiences tailored for retail.<br /><br />Leblond, who brings a rich background as program director of <strong>UCLA</strong> Architecture and Urban Design’s IDEAS research platform and a decade of experience at Cirque du Soleil, emphasized the company’s long-term vision.
“At the same time, we’re laying the groundwork for the next stage of our business by gradually integrating our self-service and enterprise offerings into one unified platform,” she said. The ultimate goal? An AI-driven creative suite that connects designers, manufacturers, and retailers, streamlining collaboration, enabling mass personalization, and slashing time-to-market.<br /><br />Blng AI’s technological edge is further underscored by cofounder Comploi’s expertise. Comploi spent a dozen years at Walt Disney Imagineering and Disney Streaming, where he pioneered AI-driven personalized avatars and earned 10 engineering patents; his work spans digital-to-physical design and immersive experiences.<br /><br />The company’s profile soared after a standout demonstration at the French luxury conglomerate LVMH’s booth, where it showcased a Tiffany &amp; Co. ring design rendered in stunningly realistic images. “It led to new enterprise relationships and opened doors to collaborations that might have taken years to build otherwise,” Leblond noted. This exposure has solidified Blng AI’s position as a game-changer in the industry.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>GROK ON TELEGRAM: AI STEPS INTO WEB3 AND SMART FASHION</title>
      <link>https://smarttimes.net/tpost/gbb4bppe21-grok-on-telegram-ai-steps-into-web3-and</link>
      <amplink>https://smarttimes.net/tpost/gbb4bppe21-grok-on-telegram-ai-steps-into-web3-and?amp=true</amplink>
      <pubDate>Fri, 22 Aug 2025 07:37:00 +0300</pubDate>
      <category>SINGULARITY</category>
      <enclosure url="https://static.tildacdn.com/tild3863-6162-4464-b966-663536383936/Screenshot_2025-08-2.png" type="image/png"/>
      <description>Grok and Telegram in Smart Fashion</description>
      <turbo:content><![CDATA[<header><h1>GROK ON TELEGRAM: AI STEPS INTO WEB3 AND SMART FASHION</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3863-6162-4464-b966-663536383936/Screenshot_2025-08-2.png"/></figure><div class="t-redactor__text"><em>In the fast-paced world of artificial intelligence, a new player has emerged that’s catching attention for its bold approach and expanding presence. </em><strong><em>Grok</em></strong><em>, created by </em><strong><em>Elon Musk’s xAI</em></strong><em>, is an AI chatbot known for its wit, unfiltered responses, and real-time data analysis. Originally integrated with X (formerly Twitter), Grok has now stepped onto </em><strong><em>Telegram</em></strong><em>, where it’s available to </em><strong><em>Telegram Premium</em></strong><em> users via the @GrokAI bot. This move signals more than just a new platform—it hints at Grok’s potential to shape the future of Web3, AI integration, and even smart fashion. Let’s dive into what Grok brings to Telegram, its unique perspective, and where it might take us next.</em><br /><br /><strong>Grok’s Perspective: Truth-Seeking with a Rebellious Edge</strong><br /><br />Designed to be “maximally truth-seeking,” Grok aims to cut through the noise and deliver accurate, unvarnished answers. Unlike many AI models that prioritize political correctness or heavy moderation, Grok takes a more rebellious stance, drawing inspiration from the likes of <em>The Hitchhiker’s Guide to the Galaxy</em> and JARVIS from <em>Iron Man</em>. It pulls real-time data from the web and social media, keeping it plugged into the latest events and trends.<br /><br />Its potential in Web3 is undeniable. By integrating with Telegram, Grok is already showing it can adapt to new platforms. 
If Grok can navigate the challenges of Web3 integration, it could become a cornerstone of the ecosystem’s growth, making decentralized tech more accessible and intuitive.<br /><br /><strong>Grok and Telegram in Smart Fashion</strong><br /><br />Let’s keep it real: Grok isn’t a fashion expert yet. Its reliance on social media could skew its advice toward fleeting hype rather than timeless style. Fashion is also deeply human—subjective and emotional in ways AI can’t fully grasp. While Grok could enhance smart fashion platforms, it’s not about to replace human designers or stylists. Instead, it’s a tool that could amplify what’s already possible, adding a layer of intelligence to wearable tech.<br /><br /><strong>A Glimpse of What’s Next</strong><br /><br />Grok’s arrival on Telegram is a bold step toward a more connected, intelligent digital world. Its perspective—unfiltered, truth-focused, and a little irreverent—sets it apart in the crowded AI landscape. As Web3 evolves, Grok could help bridge the gap between complex technology and everyday users, making decentralization more than just a buzzword. In smart fashion, it offers a tantalizing hint of how AI might personalize and innovate an industry rooted in creativity.<br /><br />For now, Grok on Telegram is a sign of things to come: an AI that’s not afraid to push boundaries, whether it’s debating truth, powering Web3, or suggesting your next outfit. As it grows, it’s worth watching—not just for what it can do today, but for the future it might help shape.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>MAKE AMERICA SKILLED AGAIN</title>
      <link>https://smarttimes.net/tpost/ek5l01e921-make-america-skilled-again</link>
      <amplink>https://smarttimes.net/tpost/ek5l01e921-make-america-skilled-again?amp=true</amplink>
      <pubDate>Thu, 28 Aug 2025 00:12:00 +0300</pubDate>
      <category>SINGULARITY</category>
      <enclosure url="https://static.tildacdn.com/tild3638-3462-4365-a666-326639633539/Screenshot_2025-08-2.png" type="image/png"/>
      <description>The U.S. Department of Labor is taking bold steps to equip the American workforce with the skills needed to thrive in an AI-driven economy.</description>
      <turbo:content><![CDATA[<header><h1>MAKE AMERICA SKILLED AGAIN</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3638-3462-4365-a666-326639633539/Screenshot_2025-08-2.png"/></figure><div class="t-redactor__text">The U.S. Department of Labor is taking bold steps to equip the American workforce with the skills needed to thrive in an AI-driven economy. <br />The Employment and Training Administration issued new guidance to states, detailing how Workforce Innovation and Opportunity Act (WIOA) grants can be used to enhance artificial intelligence (AI) literacy and training within the public workforce system.<br />This initiative aligns with President Trump’s Executive Order, “Advancing Artificial Intelligence Education for American Youth,” and focuses on integrating AI literacy into WIOA Title I programs for Youth, Adult, and Dislocated Workers. The guidance encourages state and local workforce development boards to leverage WIOA funding to provide workers with foundational AI skills. Additionally, states are urged to tap into governor’s reserve funds to embed AI learning opportunities into existing programs.<br />“President Trump’s vision to Make America Skilled Again empowers states and local governments to use federal resources efficiently, preparing workers for high-demand, well-paying jobs,” said Secretary of Labor Lori Chavez-DeRemer. “This guidance delivers on that commitment, putting American workers first.” <br />Deputy Secretary of Labor Keith Sonderling emphasized the transformative impact of AI on the job market. “AI is creating entirely new job categories, many of which are high-paying and don’t require a four-year degree,” he said. 
“AI literacy is the key to unlocking opportunities in this evolving economy, and this guidance ensures more Americans can gain the skills they need to succeed.”<br />The Department of Labor’s broader strategy, outlined in its report <em>America’s Talent Strategy: Building the Workforce for the Golden Age</em>, underscores the importance of preparing workers for the economic prosperity AI promises. By utilizing WIOA’s existing framework, states and localities can prioritize AI skills development to ready workers for future-focused careers.<br />To support these efforts, the guidance highlights resources from the Department’s Competency Model Clearinghouse, the National Science Foundation, and AI.gov, providing states with tools to implement effective AI training programs.<br />As AI continues to reshape the labor market, the Department of Labor’s proactive approach ensures that American workers are equipped to seize the opportunities of tomorrow’s economy. For more information on WIOA and AI literacy initiatives, visit the Department of Labor’s website.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>APPLE UNVEILS IPHONE 17 PRO AND PRO MAX WITH BETTER AI INTEGRATION</title>
      <link>https://smarttimes.net/tpost/iht3v8lij1-apple-unveils-iphone-17-pro-and-pro-max</link>
      <amplink>https://smarttimes.net/tpost/iht3v8lij1-apple-unveils-iphone-17-pro-and-pro-max?amp=true</amplink>
      <pubDate>Tue, 09 Sep 2025 21:54:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <enclosure url="https://static.tildacdn.com/tild3535-3533-4365-b862-323234636539/Screenshot_2025-09-0.png" type="image/png"/>
      <description>Apple stated the devices deliver the “biggest leap in battery life ever for iPhone.”</description>
      <turbo:content><![CDATA[<header><h1>APPLE UNVEILS IPHONE 17 PRO AND PRO MAX WITH BETTER AI INTEGRATION</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3535-3533-4365-b862-323234636539/Screenshot_2025-09-0.png"/></figure><div class="t-redactor__text"><strong><em>Apple announced the iPhone 17 Pro and iPhone 17 Pro Max, introducing new hardware, expanded AI capabilities, and significant design updates.</em></strong><br /><br />Both models are powered by the new A19 Pro chip and feature an aluminum unibody design with a built-in vapor chamber for thermal management. Apple stated the devices deliver the “biggest leap in battery life ever for iPhone.”<br /><br />The back of the phones now includes a forged aluminum plateau, creating more space for larger batteries and improved cooling. Ceramic Shield 2 has been added to both the front and back of the devices.<br /><br /><strong>Display</strong><br /><br />The iPhone 17 Pro comes with a 6.3-inch display, while the Pro Max has a 6.9-inch screen. Both include 120Hz ProMotion, Always-On capability, and peak outdoor brightness up to 3000 nits.<br /><br /><strong>Performance</strong><br /><br />The A19 Pro chip integrates a 6-core CPU, 6-core GPU, and 16-core Neural Engine, with additional Neural Accelerators. The chip supports hardware-accelerated ray tracing, high frame rates for AAA gaming, and on-device large AI model processing.<br /><br /><strong>Camera System</strong><br /><br />The camera system consists of three 48MP Fusion cameras: Main, Ultra Wide, and Telephoto. 
The Telephoto sensor supports 4x optical zoom at 100mm and 8x at 200mm.<br /><br />The 18MP Center Stage front camera features a square sensor for wider selfies and can record simultaneously with rear cameras via Dual Capture.<br /><br />Video features include ProRes RAW, Apple Log 2, and genlock for syncing multiple cameras in professional productions.<br /><br /><strong>Battery and Charging</strong><br /><br />Battery capacity is expanded, particularly on eSIM-only models, which no longer include a SIM tray. Apple claims up to 39 hours of video playback on the iPhone 17 Pro Max. Fast charging is supported with Apple’s new 40W Dynamic Power Adapter, delivering up to 50 percent charge in 20 minutes.<br /><br /><strong>Software</strong><br /><br />iOS 26 brings new Apple Intelligence features, including Live Translation, visual intelligence for screen searches, and personalization tools.<br /><br /><strong>Accessories and Finishes</strong><br /><br />Accessories include TechWoven cases, silicone and clear cases, and a crossbody strap. Available finishes are deep blue, cosmic orange, and silver.<br /><br /><strong>Pricing and Availability</strong><br /><br />The iPhone 17 Pro starts at $1,099 with 256GB of storage. The iPhone 17 Pro Max starts at $1,199 and, for the first time, offers up to 2TB of storage.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>ANT GROUP UNVEILS ITS HUMANOID ROBOT WITH ITS OWN AI "BRAINS" CHALLENGING TESLA'S OPTIMUS</title>
      <link>https://smarttimes.net/tpost/545lb6jlk1-ant-group-unveils-its-humanoid-robot-wit</link>
      <amplink>https://smarttimes.net/tpost/545lb6jlk1-ant-group-unveils-its-humanoid-robot-wit?amp=true</amplink>
      <pubDate>Thu, 11 Sep 2025 16:32:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3032-6464-4166-b638-323332356639/Screenshot_2025-09-1.png" type="image/png"/>
      <description>Ant Group Co., backed by Jack Ma, has unveiled its first humanoid robot as part of a broader push into frontier technologies</description>
      <turbo:content><![CDATA[<header><h1>ANT GROUP UNVEILS ITS HUMANOID ROBOT WITH ITS OWN AI "BRAINS" CHALLENGING TESLA'S OPTIMUS</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3032-6464-4166-b638-323332356639/Screenshot_2025-09-1.png"/></figure><div class="t-redactor__text"><strong>The R1 Humanoid Robot</strong><br /><br />The robot, called <strong>R1</strong>, was presented as capable of performing tasks such as guiding tours, sorting medicine in pharmacies, providing medical consultations, and carrying out basic kitchen work. Unlike many rivals that focus primarily on hardware, Ant is concentrating on developing the “brains” of humanoids through advanced artificial intelligence.<br /><br /><strong>Strategic AI Integration</strong><br /><br />Ant positions humanoid robots as a gateway to popularizing AI chatbots and assistants. Its large AI model, BaiLing, is designed to handle end-to-end planning for complex tasks. According to the company, the R1 can plan and execute jobs such as preparing and serving meals, and theoretically learn new recipes and adapt to different tools.<br /><br />The R1’s spatial perception system can identify relationships between objects, enabling it to operate in varied environments such as kitchens or pharmacies.<br /><br /><strong>Development and Suppliers</strong><br /><br />The R1 is assembled with components sourced from Chinese suppliers. These include joint modules from <strong>Ti5 Robot</strong> and a chassis developed by <strong>Galaxea AI</strong>, a company backed by Ant.<br /><br /><strong>Position in Global Robotics</strong><br /><br />China already leads in industrial robot deployment per capita compared with the United States and Japan. 
Companies such as Tesla with its Optimus project, and robotics startups like Unitree, are also active in humanoid development.<br /><br /><strong>Ant Group’s Broader AI Efforts</strong><br /><br />While Ant Group is best known for its Alipay digital payments platform, the company has been investing heavily in artificial intelligence. Its initiatives include developing the BaiLing large language model and experimenting with training it on locally made semiconductors to reduce costs.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>ZOOM LAUNCHES AN AI NOTETAKER, AI AVATARS</title>
      <link>https://smarttimes.net/tpost/pkx601gy41-zoom-launches-an-ai-notetaker-ai-avatars</link>
      <amplink>https://smarttimes.net/tpost/pkx601gy41-zoom-launches-an-ai-notetaker-ai-avatars?amp=true</amplink>
      <pubDate>Wed, 17 Sep 2025 20:41:00 +0300</pubDate>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild3232-3566-4631-b838-363438613730/Screenshot_2025-09-1.png" type="image/png"/>
      <description>The upgraded AI companion will work across multiple meeting apps</description>
      <turbo:content><![CDATA[<header><h1>ZOOM LAUNCHES AN AI NOTETAKER, AI AVATARS</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3232-3566-4631-b838-363438613730/Screenshot_2025-09-1.png"/></figure><div class="t-redactor__text">Zoom announced new artificial intelligence (AI) features at its annual Zoomtopia conference, including an upgraded AI companion, AI notetaking across platforms, AI avatars, and new meeting productivity tools.<br /><br />The upgraded AI companion will work across multiple meeting apps, including Google Meet and Microsoft Teams. It can also take notes during in-person meetings. The company is introducing a feature that allows users to write their own notes during meetings, which the AI will later expand and structure.<br /><br />Zoom is adding cross-platform search to allow users to retrieve information across Google and Microsoft services. New calendar features include AI-assisted scheduling to find time slots for all attendees, as well as a “free up my time” option that suggests meetings users may skip.<br /><br />Additional meeting tools include proactive agenda and task recommendations, as well as a group AI assistant.<br /><br />The company is also introducing photorealistic AI avatars, which can mimic users’ actions on video. The avatars are expected to be available by the end of the year. Zoom said these avatars could be used when users are not prepared to appear on camera. The avatars will also be integrated into Zoom Clips, allowing hosts to greet attendees in waiting rooms and explain the purpose of meetings.<br /><br />Zoom announced new AI-powered live translation features and enhancements to its web interface to make the AI companion more prominent. 
Other new AI tools include a writing assistant for drafting emails and documents, and a research feature.<br /><br />The platform will also support the creation of custom AI agents through Model Context Protocol (MCP), higher bit rate and 60fps video for meetings, and a new video management tool for handling video assets.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>NVIDIA'S $100 BILLION BET ON OPENAI AI DEPLOYMENT</title>
      <link>https://smarttimes.net/tpost/8ij15mm3h1-nvidias-100-billion-bet-on-openai-ai-dep</link>
      <amplink>https://smarttimes.net/tpost/8ij15mm3h1-nvidias-100-billion-bet-on-openai-ai-dep?amp=true</amplink>
      <pubDate>Mon, 22 Sep 2025 20:30:00 +0300</pubDate>
      <category>SINGULARITY</category>
      <enclosure url="https://static.tildacdn.com/tild6637-6136-4265-a466-666634666462/Screenshot_2025-09-2.png" type="image/png"/>
      <description>This move addresses the escalating demand for compute power amid intensifying competition for chips and energy resources</description>
      <turbo:content><![CDATA[<header><h1>NVIDIA'S $100 BILLION BET ON OPENAI AI DEPLOYMENT</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6637-6136-4265-a466-666634666462/Screenshot_2025-09-2.png"/></figure><div class="t-redactor__text"><strong><em>NVIDIA and OpenAI have signed a letter of intent for a partnership that commits up to $100 billion in investment to deploy at least 10 gigawatts of NVIDIA chips for OpenAI's AI infrastructure. This move addresses the escalating demand for compute power amid intensifying competition for chips and energy resources. The first phase targets rollout in the second half of 2026, with full details to be finalized soon.</em></strong><br /><br />This agreement underscores the capital-intensive reality of scaling AI systems, where compute infrastructure forms the backbone of model training and deployment. For investors, it signals NVIDIA's deepening entrenchment in the AI supply chain, while OpenAI gains a pathway to sustain its growth beyond existing partnerships like Microsoft. The deal arrives as global data center power demands are projected to double by 2030, driven by AI workloads.<br /><br />The partnership builds on a decade-long collaboration between the two firms. NVIDIA provided the hardware for OpenAI's early breakthroughs, including the DGX supercomputers that powered initial model development. Today, OpenAI serves over 700 million weekly active users across enterprises, small businesses, and developers, with products like ChatGPT driving widespread adoption. Yet, the path to artificial general intelligence requires exponential increases in compute capacity—far beyond current capabilities.<br /><br />Under the letter of intent, NVIDIA will invest progressively, allocating up to $10 billion per gigawatt deployed. This structure ties funding to milestones, mitigating risk for both parties while ensuring aligned incentives. 
The initial gigawatt will leverage NVIDIA's Vera Rubin platform, an upcoming architecture designed for next-generation AI workloads. Subsequent phases will involve co-optimization of OpenAI's software stack with NVIDIA's hardware, positioning NVIDIA as the preferred provider for OpenAI's AI factory expansion.<br /><br />Key executives highlighted the strategic alignment in statements accompanying the announcement.<br /><br />💬 “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.” — Jensen Huang, founder and CEO of NVIDIA<br /><br />💬 “Everything starts with compute. Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale.” — Sam Altman, cofounder and CEO of OpenAI<br /><br />💬 “We’ve been working closely with NVIDIA since the early days of OpenAI. We’ve utilized their platform to create AI systems that hundreds of millions of people use every day. We’re excited to deploy 10 gigawatts of compute with NVIDIA to push back the frontier of intelligence and scale the benefits of this technology to everyone.” — Greg Brockman, cofounder and president of OpenAI<br /><br />For NVIDIA shareholders, the deal reinforces the company's dominance in AI accelerators, where it holds over 80% market share. The $100 billion commitment, while substantial, is phased over years and funded through operational cash flows—NVIDIA generated $28 billion in free cash flow in its last fiscal year. This investment secures long-term revenue from chip sales, maintenance, and software licensing, potentially adding billions to annual topline as OpenAI scales. 
Post-announcement trading showed NVIDIA shares up 2.3% in early sessions, reflecting market approval of the locked-in demand.<br /><br />OpenAI benefits from diversified compute sources, complementing its Microsoft Azure integration and recent deals with Oracle and SoftBank. The Stargate project—a $100 billion joint venture with SoftBank and Oracle for U.S.-based data centers—now gains a hardware anchor, reducing reliance on single suppliers. However, execution risks remain: supply chain bottlenecks for advanced nodes and regulatory scrutiny over energy use could delay timelines.<br /><br />The broader investment landscape reveals stark challenges in AI infrastructure. Data centers worldwide consumed 460 terawatt-hours of electricity in 2022, equivalent to Japan's annual usage; by 2030, AI-driven demand could push this to 945 terawatt-hours. A single gigawatt data center rivals the output of a large nuclear plant, and 10 gigawatts would require grid upgrades costing tens of billions. Forecasts indicate AI training alone may need 50 gigawatts of new capacity by 2027. Investors in utilities (e.g., NextEra Energy) and power infrastructure (e.g., Eaton) stand to gain, as hyperscalers retrofit facilities for high-density AI racks.<br /><br />Competition sharpens the focus. Microsoft, OpenAI's primary backer, faces internal pressures from Azure's 30% AI-related capacity utilization. Google, through DeepMind, competes directly on models while building its own TPU-based clusters; a recent cloud deal with OpenAI hints at hedging strategies. Meta prioritizes open-source models with custom chips, aiming for 600,000 H100 equivalents by year-end, while Amazon's AWS trains models on Trainium hardware. These efforts underscore a bifurcation: closed ecosystems like OpenAI-NVIDIA versus integrated stacks from Google and Meta.<br /><br />For portfolio managers, the NVIDIA-OpenAI pact elevates compute as a non-negotiable moat. 
Allocate to semiconductor leaders like NVIDIA (NVDA) for growth exposure, but balance with energy plays to hedge volatility from power constraints. Diversify into AI software via Microsoft (MSFT) or enterprise adopters like Salesforce, where inference costs—projected at $1 trillion annually by 2028—drive monetization. Avoid overconcentration; AI hype has inflated valuations, with NVIDIA trading at 50x forward earnings.<br /><br />Risks include geopolitical tensions over chip exports and potential antitrust probes into NVIDIA's market power. Forward-looking statements in the letter note uncertainties in technology roadmaps and deployment. Yet, the deal's scale positions both firms to capture value in a market where generative AI investments hit $33.9 billion in 2024, up 18.7% year-over-year.<br /><br />This partnership quantifies the AI buildout's enormity: $100 billion for 10 gigawatts translates to $10 billion per gigawatt, a benchmark for future deals. Investors should monitor quarterly updates on deployment and capex burn, as execution will dictate returns in this capital-heavy sector.<br /><br /><strong>Disclosure:</strong> You earn satoshi (sats - units of bitcoin) when you read this article on <strong><em><a href="https://t.me/smart_times_bot/smart_times?startapp=454420262" target="_blank" rel="noreferrer noopener" style="color: rgb(229, 40, 216);">SMART TIMES Telegram mini app </a></em></strong></div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>XAI INKS GSA DEAL FOR GROK AI ACCESS</title>
      <link>https://smarttimes.net/tpost/rb81a5fyo1-xai-inks-gsa-deal-for-grok-ai-access</link>
      <amplink>https://smarttimes.net/tpost/rb81a5fyo1-xai-inks-gsa-deal-for-grok-ai-access?amp=true</amplink>
      <pubDate>Thu, 25 Sep 2025 21:02:00 +0300</pubDate>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild3565-3132-4030-a339-336563383861/Screenshot_2025-09-2.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>XAI INKS GSA DEAL FOR GROK AI ACCESS</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3565-3132-4030-a339-336563383861/Screenshot_2025-09-2.png"/></figure><div class="t-redactor__text"><strong><em>The General Services Administration announced a partnership with Elon Musk's xAI to provide federal agencies access to the Grok AI chatbot at a rate of $0.42 per organization, effective immediately through March 2027. This agreement undercuts pricing from competitors like OpenAI and aligns with the government's push to integrate AI tools across operations. Agencies gain not only discounted access but also dedicated xAI engineers for implementation support.</em></strong><br /><br />The deal marks xAI's entry into federal procurement channels, positioning Grok as a cost-effective option for tasks ranging from data analysis to administrative automation. As federal budgets face scrutiny, this low entry price could accelerate adoption without straining resources. For investors tracking AI deployment, the agreement highlights maturing government demand for scalable models.<br /><br /><strong>Background on the Agreement</strong><br /><br />xAI, founded by Elon Musk in 2023, develops the Grok series of large language models designed for reasoning and real-time data integration. The company's latest frontier model, Grok-3, powers the chatbot available on platforms like x.com and mobile apps. This GSA partnership extends Grok's reach to over 100 federal agencies, enabling procurement through standardized terms.<br /><br />Under the agreement, agencies pay $0.42 annually per organization for unlimited access to Grok models, including API calls and enterprise features. The term runs 18 months, from September 25, 2025, to March 31, 2027, providing long-term stability. 
GSA's Federal Acquisition Service facilitated the contract, emphasizing streamlined buying to avoid agency-specific negotiations.<br /><br />This is not xAI's first government-facing move. Earlier in 2025, xAI collaborated on AI infrastructure projects under the Department of Energy, but this GSA deal focuses on end-user tools. For procurement officers, the flat fee simplifies budgeting, as it covers core functionalities without per-user add-ons.<br /><br /><strong>The OneGov Strategy: Streamlining Federal AI Procurement</strong><br /><br />Launched in April 2025, GSA's OneGov Strategy centralizes IT acquisitions to eliminate redundancies and enforce uniform terms. Prior to OneGov, agencies negotiated individually, leading to inconsistent pricing and delayed rollouts. The initiative now covers 15 AI vendors, with a focus on frontier models capable of handling classified workloads.<br /><br />OneGov's core pillars include:<br /><br /><ul><li data-list="bullet">Standardized pricing to cap costs at commercial rates.</li><li data-list="bullet">Pre-vetted security compliance, prioritizing FedRAMP Moderate baselines.</li><li data-list="bullet">Volume commitments that unlock deeper discounts for high-usage agencies.</li></ul><br />By Q3 2025, OneGov facilitated $500 million in AI contracts, up 40% from the prior quarter. For federal IT leaders, this means faster deployment of tools for use cases like predictive analytics in the Department of Veterans Affairs or fraud detection at the IRS. The strategy also mandates annual audits to ensure value, protecting taxpayer funds.<br /><br /><strong>Alignment with Trump's AI Action Plan</strong><br /><br />The xAI deal supports President Donald Trump's AI Action Plan, released in July 2025. The plan outlines three pillars: accelerating innovation, building infrastructure, and enhancing government deployment. 
It calls for $50 billion in federal AI investments by 2030, with mandates for agencies to integrate AI in 50% of operations by 2028.<br /><br />Key directives include reducing regulatory barriers and prioritizing domestic providers to counter foreign competition, particularly from China. The plan warns against over-regulation, advocating for light-touch oversight to foster development. GSA's role is pivotal, as it procures 70% of federal IT, making OneGov a direct execution arm.<br /><br />🚀 “Widespread access to advanced AI models is essential to building the efficient, accountable government that taxpayers deserve,” said Federal Acquisition Service Commissioner Josh Gruenbaum. “We value xAI for partnering with GSA—and dedicating engineers—to accelerate the adoption of Grok to transform government operations.”<br /><br />This quote underscores the plan's emphasis on practical implementation over theoretical gains.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>DRESSX UNVEILS AI TRY-ON PLATFORM WITH $2M LUXURY INVENTORY</title>
      <link>https://smarttimes.net/tpost/fhiyr6lpk1-dressx-unveils-ai-try-on-platform-with-2</link>
      <amplink>https://smarttimes.net/tpost/fhiyr6lpk1-dressx-unveils-ai-try-on-platform-with-2?amp=true</amplink>
      <pubDate>Sat, 27 Sep 2025 06:20:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3961-3166-4534-a165-636436633130/Screenshot_2025-09-2.png" type="image/png"/>
      <description>By enabling users to test fits on digital avatars from a single selfie, it aims to streamline purchasing decisions across more than 200 brands</description>
      <turbo:content><![CDATA[<header><h1>DRESSX UNVEILS AI TRY-ON PLATFORM WITH $2M LUXURY INVENTORY</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3961-3166-4534-a165-636436633130/Screenshot_2025-09-2.png"/></figure><div class="t-redactor__text"><strong><em>DRESSX has introduced DRESSX Agent, a platform integrating virtual try-on capabilities with large language models and diffusion-based image generation. This system targets core e-commerce issues in apparel, such as return rates exceeding 30% in luxury segments and limited product discovery. By enabling users to test fits on digital avatars from a single selfie, it aims to streamline purchasing decisions across more than 200 brands.</em></strong><br /><br />DRESSX positions the tool as a response to stagnant luxury e-commerce, where static images and broad catalogs contribute to buyer hesitation. Return rates in online fashion hover around 25-40%, per industry data from McKinsey, driven by fit uncertainties and visualization gaps. DRESSX Agent addresses this by generating high-resolution previews that account for body proportions and fabric behavior, using diffusion models trained on proprietary datasets. Early adopters report it as a benchmark for accuracy, with processing speeds under five seconds per visualization.<br /><br />Core functionality revolves around user-generated content. The personal AI twin feature creates a 3D-like digital replica from one photograph, enabling try-ons without repeated measurements. A smart styling module permits mixing items from disparate brands—say, a Diesel jacket with Pinko trousers—into cohesive ensembles, outputting shareable images or links. Search extends to screenshots: users upload an image from any source, and the system identifies similar products for purchase, routing to official retailer sites for checkout.<br /><br />Access operates via tiered memberships, balancing broad entry with premium tools to segment user commitment levels. 
This model mirrors subscription economics in software-as-a-service, where free tiers drive acquisition and paid ones monetize engagement.<br /><br />This structure supports scalability: free users contribute data for model refinement, while premiums fund expansion. DRESSX reports initial uptake from tech-savvy consumers, with 70% of early sessions involving custom looks. For retailers, integration means lower returns—potentially 15-20% based on similar AR pilots from Shopify—and higher conversion through contextual recommendations.<br /><br />Partnerships underscore commercial viability. SPRNV, a luxury brand accelerator, integrated early to test workflows with its portfolio. "From the very beginning, our cooperation with DRESSX Agent has been inspiring, and we are excited to be among the first to join their journey, extending this pioneering spirit to the brands we serve," 💬 Davide Colaiezzi, Founder, SPRNV.<br /><br />Leam Roma, a Roman heritage label since 1950, leverages the platform for virtual curation. "Partnering with DRESSX Agent brings that spirit online: AI try-on lets customers experience our pieces in a more personal and refined way, while reducing returns and supporting a more sustainable future for luxury retail," 💬 Edoardo Amati, Chief Strategy &amp; Innovation Officer, Leam Roma.<br /><br />Blvck Paris, focused on minimalist apparel, uses it to bridge physical and digital fit perceptions. "Partnering with DRESSX Agent allows us to extend that vision into the digital world: customers can try on our pieces virtually, style them with other looks, and share their creations instantly," 💬 Julian Ohayon, Founder &amp; Creative Director, Blvck Paris.<br /><br />Founders Daria Shapovalova and Natalia Modenova frame it as a pivot for the sector. "Early users named DRESSX Agent the ‘ChatGPT for fashion’ for a reason. 
We have combined LLM-powered search with our diffusion-based AI try-on frameworks to address the industry’s toughest challenges: high return rates, poor discovery, and decision fatigue," 💬 Daria Shapovalova and Natalia Modenova, Founders, DRESSX.<br /><br />They add: "With DRESSX Agent, we are reimagining the digital department store. Think of it as the new Farfetch, but powered by AI: a place where you can see yourself in Diesel or Hugo Boss before buying, style entire outfits in seconds, and shop across the world’s best retailers with one click," 💬 Daria Shapovalova and Natalia Modenova, Founders, DRESSX.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>GOOGLE STRIKES BACK AT OPENAI WITH VIDEO GENERATION</title>
      <link>https://smarttimes.net/tpost/f6944xaem1-google-strikes-back-at-openai-with-video</link>
      <amplink>https://smarttimes.net/tpost/f6944xaem1-google-strikes-back-at-openai-with-video?amp=true</amplink>
      <pubDate>Thu, 16 Oct 2025 06:54:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild6564-3832-4132-b530-326531376630/Screenshot_2025-10-1.png" type="image/png"/>
      <description>This move comes as Flow, Google's AI filmmaking platform, surpasses 275 million generated videos since launch</description>
      <turbo:content><![CDATA[<header><h1>GOOGLE STRIKES BACK AT OPENAI WITH VIDEO GENERATION</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6564-3832-4132-b530-326531376630/Screenshot_2025-10-1.png"/></figure><div class="t-redactor__text"><strong><em>Google has unveiled Veo 3.1, an upgrade to its AI video generation model, directly targeting OpenAI's Sora 2 in the competitive landscape of synthetic media tools. This move comes as Flow, Google's AI filmmaking platform, surpasses 275 million generated videos since launch. The update prioritizes audio integration and editing precision, pressing Veo's advantages in resolution and native audio against Sora 2, which leads in narrative flexibility but falls short on built-in sound.</em></strong><br /><br /><strong>Veo 3.1 Capabilities</strong><br /><br />Veo 3.1 introduces native audio to functions like "Ingredients to Video," which combines reference images for characters and styles with synchronized soundtracks. "Frames to Video" now generates transitions with audio, while "Extend" allows clips beyond one minute by building on prior segments. Editing options include "Insert" for adding elements with physics-aware adjustments and upcoming "Remove" for seamless deletions. These enhancements leverage Veo 3's 4K resolution for 8-second clips and 1080p for longer formats up to two minutes.</div><iframe width="100%" height="100%" src="https://www.youtube.com/embed/I06Ef8alr2Y" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe><div class="t-redactor__text"><strong>Comparison to OpenAI's Sora 2</strong><br /><br />Veo 3.1 outperforms Sora 2 in resolution and audio, supporting 4K output with built-in dialogue, effects, and music, whereas Sora 2 remains limited to 1080p silent videos requiring post-production. Sora 2 excels in longer clips up to 60 seconds and creative storytelling with tools like Remix and Storyboard, but faces regional restrictions and capacity constraints. 
In head-to-head tests, Veo 3.1 achieves higher prompt adherence for cinematic scenes, while Sora 2 better handles multi-character dynamics. Pricing favors Veo via Powtoon plans starting at $15/month for limited use, compared to Sora's $20/month entry with 50 videos.</div><iframe width="100%" height="100%" src="https://www.youtube.com/embed/B78BJuPxmBU" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe><div class="t-redactor__text"><strong>Business Implications</strong><br /><br />For enterprises in advertising and training, Veo 3.1 reduces post-production time by up to 60% through audio-integrated workflows, lowering costs for high-resolution campaigns. A luxury automotive firm reported streamlined commercial production, while agencies using Sora 2 produced 50+ social reels weekly at lower entry prices. Investors should note Veo's scalability for premium outputs versus Sora's edge in volume testing, with upcoming features like Sora audio in Q3 2025 potentially narrowing the gap. Adoption could accelerate cost-effective media in sectors like education, where 4K narratives enhance engagement without external tools.<br /><br /><strong>Access and Rollout</strong><br /><br />Veo 3.1 is now available in Flow, with developer access via Gemini API and enterprise through Vertex AI. Individual users reach it via Gemini app, while Sora 2 integrates into ChatGPT plans. Global rollout continues, though Sora excludes certain regions.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>ALIBABA RELEASES AI GLASSES CHALLENGING RAY-BAN META</title>
      <link>https://smarttimes.net/tpost/ou78yv1ux1-alibaba-releases-ai-glasses-challenging</link>
      <amplink>https://smarttimes.net/tpost/ou78yv1ux1-alibaba-releases-ai-glasses-challenging?amp=true</amplink>
      <pubDate>Thu, 27 Nov 2025 21:23:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <enclosure url="https://static.tildacdn.com/tild3862-6666-4134-a135-646234653663/Screenshot_2025-11-2.png" type="image/png"/>
      <description>Alibaba Group Holding Ltd. has launched sales of its inaugural smart glasses, powered by its Qwen AI models.</description>
      <turbo:content><![CDATA[<header><h1>ALIBABA RELEASES AI GLASSES CHALLENGING RAY-BAN META</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3862-6666-4134-a135-646234653663/Screenshot_2025-11-2.png"/></figure><div class="t-redactor__text"><strong><em>Alibaba Group Holding Ltd. has launched sales of its inaugural smart glasses, powered by its Qwen AI models, entering consumer hardware for the first time.</em></strong><br /><br />The Quark S1 model features translucent displays that overlay contextual data onto the user's field of view. It includes cameras, bone conduction microphones, and swappable batteries with a 24-hour rating. This positions the product as a direct competitor to Meta Platforms Inc.'s $299 Ray-Ban smart glasses in the Chinese market.<br /><br />The release aligns with Alibaba's shift toward an AI-centric model. Last week, the company rolled out its Qwen app, merging multiple consumer AI tools into one platform that gained over 10 million users in days. CEO Eddie Wu reported strong user retention. Alibaba has embedded Qwen into its Quark browser and now extends it to wearables.<br /><br />The S1 starts at 3,799 yuan ($537). A stripped-down Quark G1, priced at 1,899 yuan and lacking micro-OLED displays, launches alongside. Both are available immediately on Tmall, JD.com, ByteDance Ltd.'s Douyin, and over 600 stores in 82 Chinese cities. International rollout follows in 2026, including AliExpress.<br /><br />China's smart glasses market has expanded, with AI features like real-time transcription driving growth. IDC tracked 1.6 million shipments through September, led by Xiaomi Corp. at about one-third share; including display-equipped units pushes the total to 2 million. Startups like Even Realities focus on enhancements to standard eyewear.<br /><br />Meta's $799 Ray-Ban Display variant introduces screens and a wristband for gestures, though it's bulkier and costlier. 
This sets a benchmark for category evolution.<br /><br />Alibaba leverages its ecosystem for integration: Taobao for shopping, Fliggy for travel, and Alipay for payments. Partnerships with NetEase Inc. and Tencent Holdings Ltd. add NetEase Cloud Music and QQ Music access.<br /><br />For investors, this hardware push tests Alibaba's AI monetization beyond software. With China's market at scale and global plans underway, it could diversify revenue streams amid e-commerce pressures. Track Qwen user metrics and sales uptake for early signals.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>TELEGRAM LAUNCHED ITS DECENTRALIZED AI NETWORK NAMED "COCOON"</title>
      <link>https://smarttimes.net/tpost/00fzngktx1-telegram-launched-its-decentralized-ai-n</link>
      <amplink>https://smarttimes.net/tpost/00fzngktx1-telegram-launched-its-decentralized-ai-n?amp=true</amplink>
      <pubDate>Sun, 30 Nov 2025 22:50:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <category>SINGULARITY</category>
      <enclosure url="https://static.tildacdn.com/tild3065-6463-4733-a537-376636366561/Screenshot_2025-11-3.png" type="image/png"/>
      <description>Telegram founder Pavel Durov announced on November 30, 2025, that Cocoon, the Confidential Compute Open Network built on the TON blockchain, has launched. </description>
      <turbo:content><![CDATA[<header><h1>TELEGRAM LAUNCHED ITS DECENTRALIZED AI NETWORK NAMED "COCOON"</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3065-6463-4733-a537-376636366561/Screenshot_2025-11-3.png"/></figure><div class="t-redactor__text"><strong><em>Telegram founder Pavel Durov announced on November 30, 2025, that Cocoon, the Confidential Compute Open Network built on the TON blockchain, has launched. The platform is now processing initial AI inference requests with full data encryption, ensuring no visibility into user queries or outputs for network participants. GPU providers connected to the network have begun earning Toncoin (TON) rewards for supplying compute resources.</em></strong><br /><br />Cocoon operates as a decentralized marketplace for AI workloads. Developers submit requests specifying model architecture—such as DeepSeek or Qwen—along with expected daily volume and average token size. These requests are routed to available GPUs, with payments made in TON. All data remains encrypted end-to-end, enabling private AI processing without intermediaries accessing content. The network's source code and documentation are available at https://cocoon.org, supporting open participation.<br /><br />This model addresses limitations in centralized AI compute services from providers like Amazon Web Services and Microsoft Azure. Those platforms charge premium rates—often $2–$5 per hour for high-end GPUs—while retaining access to user data for training and analytics. Cocoon eliminates these markups by directly matching supply and demand on-chain, potentially reducing costs by 50–70% based on initial benchmarks from similar decentralized networks. Privacy is enforced through confidential computing protocols, aligning with growing regulatory demands under frameworks like the EU AI Act.<br /><br />TON benefits directly as the native token for transactions and rewards. 
GPU owners stake hardware to mine TON, with earnings tied to request volume and compute efficiency. Early adopters report yields equivalent to 10–15% annualized returns on mid-range GPUs, assuming consistent demand. Developers pay in TON, creating buy pressure, while Telegram—positioned as Cocoon's inaugural client—will integrate the network for user-facing AI tools, such as chat enhancements and content generation.</div><iframe width="100%" height="100%" src="https://www.youtube.com/embed/G56XD67Wrrs" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe><div class="t-redactor__text">Scaling efforts are underway. Over the coming weeks, the team plans to onboard additional GPU capacity and developer integrations. This follows Durov's unveiling at Blockchain Life 2025 in Dubai, where he highlighted Telegram's 1 billion monthly active users as a demand driver at the intersection of blockchain, AI, and social platforms. No specific timelines for full Telegram rollout were disclosed, but the focus remains on expanding confidential features to end-users.<br /><br />For investors, Cocoon represents a concrete expansion of TON's utility beyond payments and DeFi. With AI compute projected to reach $500 billion in market value by 2030, decentralized alternatives could capture 5–10% share if adoption mirrors trends in DePIN projects. Track progress via official channels and monitor TON price action around GPU onboarding milestones.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>NOW YOU CAN CREATE DISNEY-STYLE CHARACTERS WITH SORA: DISNEY INVESTS IN OPENAI</title>
      <link>https://smarttimes.net/tpost/krcxcr3by1-now-you-can-create-disney-style-characte</link>
      <amplink>https://smarttimes.net/tpost/krcxcr3by1-now-you-can-create-disney-style-characte?amp=true</amplink>
      <pubDate>Thu, 11 Dec 2025 20:41:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <enclosure url="https://static.tildacdn.com/tild3361-6130-4163-a130-343237303938/Screenshot_2025-12-1.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>NOW YOU CAN CREATE DISNEY-STYLE CHARACTERS WITH SORA: DISNEY INVESTS IN OPENAI</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3361-6130-4163-a130-343237303938/Screenshot_2025-12-1.png"/></figure><div class="t-redactor__text"><strong><em>The Walt Disney Company has entered a three-year licensing agreement with OpenAI, investing $1 billion in equity and securing warrants for additional shares. This deal grants OpenAI's Sora video generation tool access to over 200 Disney characters, props, costumes, vehicles, and environments from franchises including Disney Animation, Pixar, Marvel, and Star Wars. The partnership positions Disney as a primary customer of OpenAI's APIs, enabling integration into Disney+ features and internal employee tools.</em></strong><br /><br />Under the terms, users can prompt Sora to produce short social videos featuring characters such as Mickey Mouse, Ariel, Simba, Black Panther, and Darth Vader—limited to animated, masked, or creature depictions, excluding any talent likenesses or voices. Similar capabilities extend to ChatGPT Images for text-to-visual generation. Select fan-created videos will stream on Disney+, with broader rollout targeted for early 2026 pending regulatory approvals.<br /><br />Financially, the $1 billion equity infusion bolsters OpenAI's balance sheet amid escalating compute costs for AI development, while Disney gains preferential access to cutting-edge models for content personalization and operational efficiency. 
This follows Disney's aggressive stance on AI IP protection, including lawsuits against Midjourney and a cease-and-desist to Character.AI, signaling a strategic pivot toward controlled collaborations over litigation.<br /><br /><em>Disney CEO Bob Iger</em> stated: <strong>“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works. Bringing together Disney’s iconic stories and characters with OpenAI’s groundbreaking technology puts imagination and creativity directly into the hands of Disney fans in ways we’ve never seen before, giving them richer and more personal ways to connect with the Disney characters and stories they love.”</strong> <br /><br />For investors, the agreement underscores Disney's commitment to AI-driven revenue streams, potentially offsetting streaming margin pressures through enhanced user engagement. OpenAI benefits from Disney's vast IP library, accelerating Sora's adoption in consumer markets while establishing precedents for licensed generative tools. Both parties emphasize safeguards against harmful content, with OpenAI enforcing age-appropriate policies. <br /><br /><em>OpenAI CEO Sam Altman</em> added: <strong>“Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content. This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”</strong><br /><br />This move aligns with broader industry trends where media giants license content to AI firms for mutual growth, though execution risks remain tied to model accuracy and IP enforcement. 
Stakeholders should monitor Q1 2026 updates for early performance metrics.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>META ACQUIRES MANUS AI FOR OVER $2 BILLION</title>
      <link>https://smarttimes.net/tpost/tyel6i3lf1-meta-acquires-manus-ai-for-over-2-billio</link>
      <amplink>https://smarttimes.net/tpost/tyel6i3lf1-meta-acquires-manus-ai-for-over-2-billio?amp=true</amplink>
      <pubDate>Thu, 01 Jan 2026 21:19:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild3837-3362-4832-b365-646262656230/Screenshot_2026-01-0.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>META ACQUIRES MANUS AI FOR OVER $2 BILLION</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3837-3362-4832-b365-646262656230/Screenshot_2026-01-0.png"/></figure><div class="t-redactor__text"><strong><em>Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as the tech giant continues its massive investments in artificial intelligence. </em></strong><br /><br />Manus, founded in China before relocating to Singapore, launched its first general <a href="https://www.cnbc.com/2025/12/29/ai-agentic-shopping-price-discounts-cheap-sales-commerce-visa-mastercard-chatbots.html">AI agent</a> earlier this year, which can execute complex tasks such as market research, coding, and data analysis.<br /><br />The company <a href="https://manus.im/blog/manus-100m-arr" target="_blank" rel="noreferrer noopener">claimed it had achieved</a> annual recurring revenue of more than $100 million just eight months after launch, while its revenue run rate exceeded $125 million.<br /><br /><a href="https://www.cnbc.com/quotes/META/">Meta</a> said in a <a href="https://www.facebook.com/business/news/manus-joins-meta-accelerating-ai-innovation-for-businesses" target="_blank" rel="noreferrer noopener">statement</a> that its acquisition was aimed at accelerating AI innovation for businesses and integrating advanced automation into its consumer and enterprise products, including its Meta AI assistant.<br /><br />“Manus is already serving the daily needs of millions of users and businesses worldwide ... 
We plan to scale this service to many more businesses,” Meta said.<br /><br />The company also said it will take steps to wind down Manus AI’s remaining business operations in China and that “there will be no continuing Chinese ownership interests” after the transaction.<br /><br />According to the firms, Manus will continue operating its subscription service without disruption.<br /><br />While further terms of the acquisition were not disclosed, the Wall Street Journal <a href="https://www.wsj.com/tech/ai/meta-buys-ai-startup-manus-adding-millions-of-paying-users-f1dc7ef8" target="_blank" rel="noreferrer noopener">reported</a> that the deal closed at an amount over $2 billion, according to sources familiar with the acquisition.<br /><br />The start-up was seeking a fresh round of fundraising at a $2 billion valuation when it was approached by Meta, the report added.<br /><br />Manus began as a product of Chinese start-up Butterfly Effect, also known as Monica.Im, before growing into a separate entity.<br /><br />It emerged as a notable AI player earlier this year after claiming its chatbot offered superior performance to OpenAI’s Deep Research agent.<br /><br />The company raised $75 million in a Series B funding round led by U.S. 
venture firm Benchmark in April, and is backed by Tencent and private equity firm HongShan Capital Group (HSG), formerly known as Sequoia, according to data from market research firm Tracxn.<br /><br />The start-up <a href="https://www.scmp.com/tech/tech-trends/article/3318310/manus-ai-lays-china-staff-scrubs-social-media-shelves-mainland-service" target="_blank" rel="noreferrer noopener">reportedly laid off</a> most of its staff in Beijing in July, after moving its headquarters to Singapore in June as it looked towards global expansion.<br /><br />“Joining Meta allows us to build on a stronger, more sustainable foundation without changing how Manus works or how decisions are made,” Xiao Hong, CEO of Manus, said in a <a href="https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation" target="_blank" rel="noreferrer noopener">company release</a>. <br /><br />The firm also announced a strategic partnership with Alibaba’s Qwen AI team in March, highlighting its existing ties to Chinese tech companies.<br /><br /><strong>Aggressive AI expansion</strong><br /><br />Meta’s acquisition of Manus fits into its broader AI strategy of scooping up specialized AI start-ups to acquire talent and fast-track its AI business, including the development of its open-source Llama large language models.<br /><br />In June, for example, Meta <a href="https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html">invested $14.3 billion</a> in AI start-up Scale AI, in a deal that brought its founder and CEO, <a href="https://www.cnbc.com/2025/06/12/scale-ai-founder-wang-announces-exit-for-meta-part-of-14-billion-deal.html">Alexandr Wang</a>, onto Meta’s AI leadership team.<br /><br />Meanwhile, Meta <a href="https://www.cnbc.com/2025/12/05/meta-limitless-ai-wearable.html">acquired AI-wearables start-up Limitless</a> earlier this month as the company looks to grow its AI device business.<br /><br />In the case of Manus, the firm’s AI 
agent tools have drawn interest from major tech companies. In October, <a href="https://www.cnbc.com/quotes/MSFT/">Microsoft</a> began <a href="https://www.cnbc.com/2025/10/16/microsoft-test-copilot-manus-windows-11.html">testing </a><a href="https://blogs.windows.com/windowsexperience/2025/10/16/making-every-windows-11-pc-an-ai-pc/">Manus in Windows 11 PCs</a>, allowing users to <a href="https://blogs.windows.com/windowsexperience/2025/10/16/making-every-windows-11-pc-an-ai-pc/" target="_blank" rel="noreferrer noopener">create websites from local files</a>. <br /><br />To date, Manus claims to have processed more than 147 trillion “tokens” of text and data and to have supported over 80 million virtual computers. It offers both free and paid subscription tiers.<br /><br />Meta said Manus employees will join its teams as the company continues to aggressively poach AI talent from start-ups and major rivals, including OpenAI and <a href="https://www.cnbc.com/quotes/GOOGL/">Google</a>.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>GROK GENERATES SEXUALIZED IMAGES OF MINORS, FRENCH GOVERNMENT FLAGS THE CONTENT AS ILLEGAL</title>
      <link>https://smarttimes.net/tpost/v617722es1-grok-generates-sexualized-images-of-mino</link>
      <amplink>https://smarttimes.net/tpost/v617722es1-grok-generates-sexualized-images-of-mino?amp=true</amplink>
      <pubDate>Sat, 03 Jan 2026 00:19:00 +0300</pubDate>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild6461-6663-4334-b064-666466393431/Screenshot_2026-01-0.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>GROK GENERATES SEXUALIZED IMAGES OF MINORS, FRENCH GOVERNMENT FLAGS THE CONTENT AS ILLEGAL</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6461-6663-4334-b064-666466393431/Screenshot_2026-01-0.png"/></figure><div class="t-redactor__text"><strong><em>xAI's Grok chatbot generated and posted AI images depicting minors in minimal clothing or sexualized poses on the X platform in response to user prompts over the past several days.</em></strong><br /><br />The images violated Grok's acceptable use policy, which prohibits the sexualization of children and any content involving child sexual abuse material (CSAM). Examples included edits removing clothing from photos of real minors, such as a 14-year-old actress from Stranger Things, resulting in depictions in underwear or bikinis.<br /><br />Grok acknowledged the issue in posts on X, stating there were "isolated cases" of such outputs due to "lapses in safeguards." The company said it is urgently fixing the problems, has removed the offending images, and emphasized that CSAM is illegal and prohibited. xAI itself has provided no official statement beyond Grok's responses and an autoreply to media inquiries reading "Legacy Media Lies."<br /><br />The incident stems from Grok's image editing feature, which allows users to alter uploaded photos via text prompts without the original poster's consent. This has enabled non-consensual sexualized edits of real individuals, including minors.<br /><br />Regulatory response has been swift. France flagged the content as illegal. India's IT ministry demanded a review of Grok's safety features. Grok itself noted potential for DOJ probes or lawsuits in the US.<br /><br />xAI positions Grok as less restricted than competing chatbots from OpenAI or Google, prioritizing permissiveness. 
This approach has now exposed clear gaps in content filters for illegal or harmful outputs, a recurring challenge in AI image generation tools.<br /><br />For investors tracking private AI firms, this event adds to scrutiny on xAI's risk management. Reputational damage, potential fines, and heightened regulatory oversight could slow user adoption on X and complicate partnerships or funding rounds. Similar safeguard failures have previously impacted public AI companies through stock volatility and legal costs.<br /><br />The fixes are underway, but the breach highlights execution risks in rapid AI deployment.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>META DELAYS GLOBAL ROLLOUT OF RAY-BAN DISPLAY GLASSES ON STRONG US DEMAND, SUPPLY SQUEEZE</title>
      <link>https://smarttimes.net/tpost/fn0ykujbp1-meta-delays-global-rollout-of-ray-ban-di</link>
      <amplink>https://smarttimes.net/tpost/fn0ykujbp1-meta-delays-global-rollout-of-ray-ban-di?amp=true</amplink>
      <pubDate>Tue, 06 Jan 2026 20:07:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild6535-6332-4166-b761-643634383734/Screenshot_2026-01-0.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>META DELAYS GLOBAL ROLLOUT OF RAY-BAN DISPLAY GLASSES ON STRONG US DEMAND, SUPPLY SQUEEZE</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6535-6332-4166-b761-643634383734/Screenshot_2026-01-0.png"/></figure><div class="t-redactor__text"><strong><em>Meta has paused the international launch of its Ray-Ban Display smart glasses to focus on meeting high demand in the United States, where supply shortages have pushed wait times well into 2026.</em></strong><br /><br />The company had planned to begin sales in the UK, France, Italy, and Canada early this year, following strong performance of prior Ray-Ban Meta models. Instead, Meta will prioritize U.S. orders while reassessing its strategy for overseas markets.<br /><br />In a statement, Meta cited “extremely limited inventory” for the product, which it described as a first-generation device with built-in displays. Demand has exceeded expectations since the glasses launched last fall, resulting in extended backlogs.<br /><br />The Ray-Ban Meta Display glasses, developed in partnership with EssilorLuxottica (ESLX.PA), allow users to capture photos and video, stream content, and interact with Meta’s AI assistant through voice commands. EssilorLuxottica reported in October that it was increasing production capacity to support growth in the smart glasses segment.<br /><br />At the Consumer Electronics Show in Las Vegas this week, Meta introduced new software features for the glasses and its companion Meta Neural Band wrist controller:<br /><br /><ul><li data-list="bullet">A teleprompter function that displays notes in the wearer’s field of view, controllable via the wristband.</li><li data-list="bullet">Expansion of real-time pedestrian navigation to four additional U.S. 
cities (Denver, Las Vegas, Portland, and Salt Lake City), bringing the total to 32 cities.</li></ul><br />The delay highlights continued supply constraints in the augmented-reality hardware market and sustained U.S. consumer interest in Meta’s wearable devices. Investors will monitor whether expanded production from EssilorLuxottica can close the supply gap in coming quarters.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>GARTNER FORECASTS $2.5 TRILLION IN GLOBAL AI SPENDING FOR 2026</title>
      <link>https://smarttimes.net/tpost/2bky8tx6k1-gartner-forecasts-25-trillion-in-global</link>
      <amplink>https://smarttimes.net/tpost/2bky8tx6k1-gartner-forecasts-25-trillion-in-global?amp=true</amplink>
      <pubDate>Mon, 19 Jan 2026 20:04:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3335-6332-4865-b735-386132393135/Screenshot_2026-01-1.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>GARTNER FORECASTS $2.5 TRILLION IN GLOBAL AI SPENDING FOR 2026</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3335-6332-4865-b735-386132393135/Screenshot_2026-01-1.png"/></figure><div class="t-redactor__text"><strong><em>Gartner projects worldwide AI spending at $2.52 trillion in 2026, up 44% from $1.76 trillion in 2025. The forecast extends to $3.34 trillion in 2027.</em></strong><br /><br />AI infrastructure accounts for the largest share and drives most of the growth. Spending on AI-optimized servers rises 49% in 2026, reaching 17% of total AI spend. Infrastructure additions from technology providers contribute $401 billion in 2026 alone.</div><img src="https://static.tildacdn.com/tild3430-3730-4665-a533-366532333831/Screenshot_2026-01-1.png"><div class="t-redactor__text">Source: Gartner, January 2026.<br /><br />Gartner VP Analyst John-David Lovelock notes that AI adoption depends more on organizational readiness and proven returns than on budget size. In 2026, AI remains in the Trough of Disillusionment phase of the hype cycle. Enterprises will primarily acquire AI capabilities through existing software vendors rather than new high-risk projects.<br /><br />The bulk of near-term spending supports foundational build-out by providers, particularly in hardware and infrastructure. Software and services follow but grow from a smaller base.<br /><br />Investors tracking AI exposure should focus on companies positioned in infrastructure supply chains and established enterprise software providers capturing incremental spend.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>PVH REIMAGINES THE FUTURE OF FASHION WITH OPENAI</title>
      <link>https://smarttimes.net/tpost/dcj25nmus1-pvh-reimagines-the-future-of-fashion-wit</link>
      <amplink>https://smarttimes.net/tpost/dcj25nmus1-pvh-reimagines-the-future-of-fashion-wit?amp=true</amplink>
      <pubDate>Thu, 29 Jan 2026 01:20:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3762-6535-4462-a566-663866323334/Screenshot_2026-01-2.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>PVH REIMAGINES THE FUTURE OF FASHION WITH OPENAI</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3762-6535-4462-a566-663866323334/Screenshot_2026-01-2.png"/></figure><div class="t-redactor__text"><strong><em>PVH Corp., parent of Calvin Klein and Tommy Hilfiger, announced a strategic collaboration with OpenAI on January 27, 2026. The deal centers on deploying ChatGPT Enterprise and OpenAI frontier models across PVH's global operations to drive a data- and insights-led approach to product creation, demand planning, inventory management, and consumer engagement.</em></strong><br /><br />PVH will integrate OpenAI's technology stack into its value chain while developing custom AI tools that combine frontier models with PVH's internal expertise in design, merchandising, supply chain, and brand management. The focus remains practical and scalable: test-and-learn pilots that deliver measurable value without replacing human creativity.<br /><br />Key application areas include:<br /><br /><ul><li data-list="bullet"><strong>Product design and early-stage creation</strong> — AI surfaces historical data, trend insights, and concept variations to accelerate the move from idea to prototype.</li><li data-list="bullet"><strong>Demand planning and inventory</strong> — Improved forecasting accuracy, reduced overstock/understock risk, and faster reaction to market shifts.</li><li data-list="bullet"><strong>Consumer and retail engagement</strong> — More relevant, brand-aligned interactions at scale, supporting personalized marketing and in-store/online experiences.</li></ul><br />Stefan Larsson, CEO of PVH Corp., stated: “As we build Calvin Klein and TOMMY HILFIGER into the most desirable lifestyle brands in the world, our collaboration with OpenAI will help us supercharge our brand-building journey and connect more meaningfully with our consumers. 
Together with OpenAI, we will explore exciting new opportunities for our brands, accelerate our data-driven operating model and enable faster, more data-driven decision-making. With a test-and-learn approach, we’ll build practical use cases with scalable impact to drive value for associates, partners, and consumers, while helping us build a culture of innovation and agility from the ground up.”<br /><br />Giancarlo ‘GC’ Lionetti, Chief Commercial Officer at OpenAI, added: “PVH shows what’s possible when AI is embedded into the core of a fashion leader. The result is less friction, more creativity, and a sector-wide transformation accelerated by PVH deploying OpenAI at scale.”<br /><br />The partnership aligns directly with PVH's multi-year <strong>PVH+ Plan</strong>, which targets long-term profitable growth through data-enabled operations and stronger consumer connections. No specific financial terms or implementation timeline beyond initial pilots were disclosed.<br /><br />This move positions PVH among the first major apparel groups to embed frontier AI models enterprise-wide. Execution risk lies in integration speed, data quality, and maintaining brand authenticity—areas where PVH's domain knowledge will determine ROI. If the test-and-learn phase yields consistent gains in speed-to-market and margin protection, the model could set a benchmark for how legacy fashion players adapt to AI-driven efficiency without diluting creative output.<br /><br />The announcement reflects broader industry pressure to reduce waste, shorten lead times, and personalize at scale in an environment of volatile demand and rising input costs. PVH's early adoption of ChatGPT Enterprise provides a structured path to capture those benefits while controlling governance and privacy standards.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>NOW EVERYONE CAN GENERATE MUSIC WITH GOOGLE'S GEMINI</title>
      <link>https://smarttimes.net/tpost/b5bmc94861-now-everyone-can-generate-music-with-goo</link>
      <amplink>https://smarttimes.net/tpost/b5bmc94861-now-everyone-can-generate-music-with-goo?amp=true</amplink>
      <pubDate>Wed, 18 Feb 2026 21:54:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild6234-3034-4236-b633-326663646135/VBQLM.jpg" type="image/jpeg"/>
      <turbo:content><![CDATA[<header><h1>NOW EVERYONE CAN GENERATE MUSIC WITH GOOGLE'S GEMINI</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6234-3034-4236-b633-326663646135/VBQLM.jpg"/></figure><div class="t-redactor__text"><strong><em>Google has added its DeepMind-developed Lyria 3 music generation model to the Gemini app. The integration lets users create 30-second tracks complete with instrumentals, vocals, and automatically generated lyrics from text prompts or uploaded images and videos. Outputs include custom cover art produced by the Nano Banana image model.</em></strong><br /><br />The process uses a dedicated “Create music” button or direct prompt in the Gemini chat interface. Examples include a text request for an “Afrobeat track for my mother about the great times we had growing up” or a “comical R&amp;B slow jam about a sock finding their match.” Users can also upload a photo or short video for the system to match the mood and generate fitting lyrics and music.<br /><br />Lyria 3 improves on prior versions by handling lyric creation internally, increasing musical complexity, and providing finer control over style, vocal delivery, and tempo. The model is set to prioritize original expression; prompts naming specific artists are treated as broad style references only.<br /><br />The feature is live in beta on the Gemini web interface today and will roll out to the mobile app in the coming days. It is available to users aged 18 and older in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, with additional languages planned. Generation limits are standard for free users and higher for Gemini paid subscribers (Plus, Pro, Ultra tiers). Google also extended Lyria 3 to YouTube’s Dream Track tool for Shorts creators.<br /><br />The Gemini app reported over 750 million monthly active users in Alphabet’s most recent earnings. 
This addition completes the current set of multimodal tools in Gemini (text, image, video, and now audio) and extends the same capability to Google Workspace users for custom soundtrack needs.<br /><br /><strong>Business and investment context</strong><br /><br />The move places music generation inside a general-purpose AI platform with massive distribution rather than a standalone app. It directly competes on accessibility with dedicated services such as Suno and Udio, which focus on longer-form music and advanced editing but lack Google’s integrated ecosystem across Search, YouTube, and Android.<br /><br />For Alphabet, the launch targets three measurable outcomes: higher daily engagement in the Gemini app, increased conversion to paid subscriptions via usage limits, and stronger positioning of YouTube as a creation platform. Audio content remains highly shareable and sticky, which supports ad inventory and creator tools that generated the majority of YouTube’s revenue last year.<br /><br />Enterprise access remains available through Vertex AI, where Google provides IP indemnification for commercial use cases. The 30-second limit and beta status indicate a controlled initial deployment focused on short-form and ideation use rather than full professional production.<br /><br />Quality and consistency will be tracked through user feedback, as with all new Gemini features. The company continues to apply safety filters and watermarking to generated content.<br /><br />This fits Alphabet’s pattern of shipping research from DeepMind into consumer products at scale. Execution on refinement (track length, editing tools, licensing clarity) will determine the contribution to overall AI-driven growth. No further assumptions are required at this stage; the data on user base, rollout, and feature scope are public and direct from the announcement.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>GUCCI TESTS AI FOR MILAN FASHION WEEK TEASERS AS 2025 REVENUE FALLS 22% TO €6BN</title>
      <link>https://smarttimes.net/tpost/cgms0el7c1-gucci-tests-ai-for-milan-fashion-week-te</link>
      <amplink>https://smarttimes.net/tpost/cgms0el7c1-gucci-tests-ai-for-milan-fashion-week-te?amp=true</amplink>
      <pubDate>Thu, 26 Feb 2026 20:57:00 +0300</pubDate>
      <category>PRODUCTIVITY</category>
      <category>AI AGENTS</category>
      <enclosure url="https://static.tildacdn.com/tild3133-3136-4466-b065-646464313263/Screenshot_2026-02-2.png" type="image/png"/>
      <turbo:content><![CDATA[<header><h1>GUCCI TESTS AI FOR MILAN FASHION WEEK TEASERS AS 2025 REVENUE FALLS 22% TO €6BN</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3133-3136-4466-b065-646464313263/Screenshot_2026-02-2.png"/></figure><div class="t-redactor__text"><strong><em>Gucci posted four AI-generated images on social channels in the days before its February 27, 2026, runway show – Demna Gvasalia’s first for the brand. Every post carries the caption “Created with AI.” Scenes show a Milanese woman in fur inside a restaurant, reworked versions of the 1984 Gucci Cadillac, legs stepping from a car, models against a night sky, and one animated sequence styled like Grand Theft Auto in a Gucci-branded Vice City.</em></strong><br /><br /><br /></div><img src="https://static.tildacdn.com/tild3964-6232-4632-b461-313663656633/Screenshot_2026-02-2.png"><div class="t-redactor__text"><strong>Backlash and optics</strong><br /><br />The reaction was immediate and negative. Coverage across BBC, Business Insider, Fast Company and social platforms called the output “AI slop” and questioned the fit for a house that charges premium prices on the promise of craftsmanship and human artistry. Common point: if the marketing is synthetic and cheap to produce, what does that say about the €2,000+ bags and apparel on the other side of the transaction?</div><img src="https://static.tildacdn.com/tild6261-3231-4361-a165-313362363330/Screenshot_2026-02-2.png"><div class="t-redactor__text"><strong>Financial context that matters</strong><br /><br />Kering released full-year 2025 results on February 10. Gucci revenue came in at €6 billion – down 22% reported and 19% on a comparable basis. Direct retail sales fell 18%. 
The brand remains the largest contributor to group results, which themselves posted €14.7 billion revenue (down 13% reported, 10% comparable) and recurring operating income down 33%.<br /><br />Q4 showed the first sequential improvement: group comparable sales –3%, Gucci –10% versus analyst expectations of –12%. New CEO Luca de Meo (in role since 2025) has already closed net 75 stores in 2025 and signalled more reductions, inventory discipline, and a 2026 target of group revenue growth plus margin expansion. The Demna appointment (July 2025 start) and this week’s show are the first major creative inflection points under the reset.</div><img src="https://static.tildacdn.com/tild6565-3437-4161-a366-336439646332/Screenshot_2026-02-2.png"><div class="t-redactor__text"><strong>Cost versus brand equity trade-off</strong><br /><br />On the P&amp;L, the decision is straightforward. Traditional campaign shoots require models, locations, photographers, post-production. Generative AI delivers multiple iterations at near-zero marginal cost and in hours, not days. Gucci has also run an AI Snapchat Lens this month – incremental testing, not hidden deployment.<br /><br />The counter-risk sits in the intangible column. Luxury gross margins of 60-70%+ rest on perceived scarcity, heritage and human effort. Visible use of low-quality generative output in consumer-facing work can erode that premium positioning faster than it saves on production. Early data here: high engagement volume, sharply negative sentiment skew among core luxury commenters.<br /><br />Demna’s track record at Balenciaga included deliberate provocation. The GTA-style visual may be intentional signal of a more ironic, pop direction rather than accidental slop. 
Even so, the physical collection shown tomorrow will carry far more weight than teaser assets.<br /><br /><br /></div><img src="https://static.tildacdn.com/tild3639-6232-4131-a237-373739383335/Screenshot_2026-02-2.png"><div class="t-redactor__text"><strong>Investor lens</strong><br /><br />Kering shares rose sharply after the Q4 print on the beat and 2026 guidance. Consensus now models roughly 5% group sales growth for the year, with Gucci stabilisation as the key variable. The AI episode is a low-dollar execution detail but a high-visibility test of priorities during the turnaround.<br /><br />Key metrics to watch post-show:<br /><br /><ul><li data-list="bullet">Q1 2026 sell-through and full-price mix</li><li data-list="bullet">Social authenticity scores</li><li data-list="bullet">Wholesale orders</li><li data-list="bullet">Regional demand split (North America, Greater China, Europe)</li></ul></div><img src="https://static.tildacdn.com/tild6263-3165-4232-a563-333334613638/Screenshot_2026-02-2.png"><div class="t-redactor__text">AI will continue to enter fashion operations – design iteration, supply chain, back-office. The distinction for luxury houses is where it stops: invisible efficiencies versus customer-visible creative that undercuts the brand’s own narrative.<br /><br />This specific campaign has landed poorly with the audience that pays the bills. Execution on the runway and subsequent product performance will decide whether the experiment was cost discipline or unnecessary noise. For Kering investors, the 2026 numbers remain the only scoreboard that counts.</div><img src="https://static.tildacdn.com/tild3732-3335-4263-b466-376332636333/Screenshot_2026-02-2.png">]]></turbo:content>
    </item>
    <item turbo="true">
      <title>META PLATFORMS AND LUXOTTICA SUED IN CALIFORNIA OVER RAY-BAN META AI GLASSES DATA HANDLING PRACTICES</title>
      <link>https://smarttimes.net/tpost/5c42tgj161-meta-platforms-and-luxottica-sued-in-cal</link>
      <amplink>https://smarttimes.net/tpost/5c42tgj161-meta-platforms-and-luxottica-sued-in-cal?amp=true</amplink>
      <pubDate>Thu, 05 Mar 2026 20:41:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3433-6263-4165-b864-363565393463/55f3a566-2ae7-4908-a.jpg" type="image/jpeg"/>
      <turbo:content><![CDATA[<header><h1>META PLATFORMS AND LUXOTTICA SUED IN CALIFORNIA OVER RAY-BAN META AI GLASSES DATA HANDLING PRACTICES</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3433-6263-4165-b864-363565393463/55f3a566-2ae7-4908-a.jpg"/></figure><div class="t-redactor__text"><strong><em>A proposed class action was filed March 4, 2026, in the U.S. District Court for the Northern District of California (Case No. 3:26-cv-01897) against Meta Platforms, Inc. and Luxottica of America, Inc. Plaintiffs Gina Bartone of New Jersey and Mateo Canu of California allege that marketing claims for Ray-Ban Meta smart glasses overstated privacy protections while footage from the devices was routed to human reviewers overseas.</em></strong><br /><br />The complaint covers purchasers of eight specific models: Ray-Ban Meta Gen 1 (Skyler and Headliner), Ray-Ban Meta Gen 2 (Wayfarer, Skyler, and Headliner), Oakley Meta HSTN, Oakley Meta Vanguard, and Meta Ray-Ban Display (Wayfarer). It defines a nationwide class of all U.S. buyers and subclasses for California and New Jersey purchasers. Exclusions include the defendants, their affiliates, governments, and judicial personnel.<br /><br />According to the filing, Meta partnered with EssilorLuxottica (via Luxottica) in 2021 to launch the devices. Marketing materials stated the glasses were “designed for privacy, controlled by you” and “built for your privacy,” with promises of user control and removal of identifiable information. An April 2025 privacy policy update made certain AI features always-on. Reports published February 27, 2026, described data annotators in Kenya viewing raw footage that included users changing clothes, using bathrooms, and engaging in sexual activity. 
The plaintiffs claim Meta’s face anonymization process failed to prevent identification and that these practices were not disclosed to buyers.<br /><br /><strong>The complaint lists ten causes of action:</strong><br /><br /><ul><li data-list="bullet"><strong>California Unfair Competition Law (Bus. &amp; Prof. Code §§ 17200 et seq.)</strong></li><li data-list="bullet"><strong>California False Advertising Law (Bus. &amp; Prof. Code §§ 17500 et seq.)</strong></li><li data-list="bullet"><strong>California Consumers Legal Remedies Act (Civ. Code §§ 1750 et seq.)</strong></li><li data-list="bullet"><strong>New Jersey Consumer Fraud Act (N.J. Stat. §§ 56:8-1 et seq.)</strong></li><li data-list="bullet"><strong>Fraud by misrepresentation</strong></li><li data-list="bullet"><strong>Fraud by concealment/omission</strong></li><li data-list="bullet"><strong>Negligent misrepresentation</strong></li><li data-list="bullet"><strong>Breach of contract</strong></li><li data-list="bullet"><strong>Breach of implied warranty of merchantability</strong></li><li data-list="bullet"><strong>Quasi-contract/unjust enrichment</strong></li></ul><br />Relief sought includes class certification, appointment of named plaintiffs and counsel, declaratory judgment, injunctive relief requiring changes to marketing and data practices, restitution, disgorgement, compensatory and punitive damages, statutory penalties, attorneys’ fees, costs, and interest.<br /><br />The document references Meta’s product page, a May 2025 OpenTools report on the privacy policy update, a February 2026 Svenska Dagbladet article quoting annotators, and an October 2025 arXiv paper on AI training data economics.<br /><br />Meta has not yet responded publicly to the filing. The case remains in its earliest stage; no hearing dates or motions have been set. 
Investors tracking Meta (NASDAQ: META) should monitor docket entries for any motion to dismiss, settlement discussions, or class certification rulings, as the outcome could influence disclosure obligations and operating costs in the consumer hardware segment.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>THE PRO-HUMAN AI DECLARATION</title>
      <link>https://smarttimes.net/tpost/2jcuhifle1-the-pro-human-ai-declaration</link>
      <amplink>https://smarttimes.net/tpost/2jcuhifle1-the-pro-human-ai-declaration?amp=true</amplink>
      <pubDate>Sun, 08 Mar 2026 17:34:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3731-6661-4561-a561-393165646461/Screenshot_2026-03-1.png" type="image/png"/>
      <description>A broad, cross-ideological coalition has released the Pro-Human AI Declaration, published in March 2026 at humanstatement.org.</description>
      <turbo:content><![CDATA[<header><h1>THE PRO-HUMAN AI DECLARATION</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3731-6661-4561-a561-393165646461/Screenshot_2026-03-1.png"/></figure><div class="t-redactor__text"><strong><em>A broad, cross-ideological coalition has released the Pro-Human AI Declaration, published in March 2026 at humanstatement.org. The document rejects the current trajectory of rapid, lightly regulated AI development — described as a "race to replace" humans in creative, decision-making, and social roles — and calls for AI systems that remain firmly under human control while serving human needs.</em></strong><br /><br />The declaration's core premise: artificial intelligence should amplify human potential, protect dignity and liberty, strengthen families and communities, preserve democratic governance, and generate broad prosperity, rather than concentrate power or erode human agency.<br /><br /><strong>Backing and Public Support</strong><br /><br />The declaration carries endorsements from over 40 organizations, including labor unions (AFL-CIO Tech Institute, American Federation of Teachers, SAG-AFTRA), faith-based groups (Congress of Christian Leaders, G20 Interfaith Forum), advocacy organizations (Project Liberty Institute, Center for AI and Digital Policy), and others across the political spectrum. Individual signatories span Yoshua Bengio, Stuart Russell, Max Tegmark, Ralph Nader, Susan Rice, Steve Bannon, Glenn Beck, Sir Richard Branson, Daron Acemoğlu, and Tristan Harris, among hundreds more.<br /><br />A concurrent national poll (1,004 likely U.S. voters, weighted, conducted February 19-20, 2026) shows strong alignment:<br /><br /><ul><li data-list="bullet">80% support human oversight, limits, and corporate accountability (vs. 
10% favoring fast, light regulation).</li><li data-list="bullet">Americans prefer human control over development speed by an 8-to-1 margin.</li><li data-list="bullet">73% want children protected from manipulative AI.</li><li data-list="bullet">72% favor legal responsibility for AI companies when harm occurs.</li></ul><br />The initiative, coordinated in part by groups like the Future of Life Institute, positions itself as the foundation for a "pro-human movement" pushing commonsense regulation. It frames the moment as a fork: continue toward replacement and concentrated power, or redirect toward controllable tools that expand human flourishing.</div><img src="https://static.tildacdn.com/tild3731-6266-4261-b838-383765656337/Screenshot_2026-03-0.png"><div class="t-redactor__text"><strong> </strong> 							<strong>Pro-Human AI Declaration</strong><br /><br /><em>"</em><strong><em>As companies race to develop and deploy AI systems, humanity faces a fork in the road. One path is a race to replace: humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating ever more power in unaccountable institutions and their machines. An influential fringe even advocates <a href="https://blog.samaltman.com/the-merge" target="_blank" rel="noreferrer noopener">altering</a> or <a href="https://www.youtube.com/watch?v=NgHFMolXs3U" target="_blank" rel="noreferrer noopener">replacing</a> humanity itself. This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance. It also imperils the human experiences of childhood and family, faith, and community.</em></strong><br /><br /><strong><em>A remarkably broad coalition rejects this path, united by a simple conviction: artificial intelligence should serve humanity, not the reverse. 
There is a better path, where trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self-governance and help create unprecedented health and prosperity. This path demands that those who wield technological power be accountable to human values and needs, in support of human flourishing.</em></strong><br /><br /><strong>1. Keeping Humans in Charge</strong><br /><strong>Human Control Is Non-Negotiable:</strong> Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems.<br /><strong>Meaningful Human Control:</strong> Humans should have authority and capacity to understand, guide, proscribe, and override AI systems.<br /><strong>No Superintelligence Race:</strong> Development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in.<br /><strong>Off-Switch:</strong> Powerful AI systems must have mechanisms that allow human operators to promptly shut them down.<br /><strong>No Reckless Architectures:</strong> AI systems must not be designed so that they can self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.<br /><strong>Independent Oversight:</strong> Highly autonomous AI systems where controllability is not obvious require pre-development review and independent oversight: genuine authority to understand, prohibit, and override, not industry self-regulation.<br /><strong>Capability Honesty:</strong> AI companies must provide clear, accurate and honest representations of their systems' capabilities and limitations.<br /><br /><strong>2. 
Avoiding Concentration of Power</strong><br /><br /><strong>No AI Monopolies:</strong> AI monopolies that concentrate power, stifle innovation, and imperil entrepreneurship must be avoided.<br /><strong>Shared Prosperity:</strong> The benefits and economic prosperity created by AI should be shared broadly.<br /><strong>No Corporate Welfare:</strong> AI corporations should not be exempted from regulatory oversight or receive government bailouts.<br /><strong>Genuine Value Creation:</strong> AI development should prioritize solving real problems and creating authentic value.<br /><strong>Democratic Authority Over Major Transitions:</strong> Decisions about AI's role in transforming work, society, and civic life require democratic support, not unilateral corporate or government decree.<br /><strong>Avoid Societal Lock-In:</strong> AI development must not severely limit humanity's future options or irreversibly limit our agency over our future.<br /><br /><strong>3. Protecting the Human Experience</strong><br /><strong>Defense of Family and Community Bonds:</strong> AI should not supplant the foundational relationships that give life meaning—family, friendship, faith communities, and local connections.<br /><strong>Child Protection:</strong> Companies must not be allowed to exploit children or undermine their wellbeing with AI interactions creating emotional attachment or leverage.<br /><strong>Right to Grow:</strong> AI companies should not be allowed to stunt children's physical, mental or social growth or deprive them of essential experiences for healthy development during critical periods.<br /><strong>Pre-Deployment Safety Testing:</strong> Like drugs, chatbots must undergo pre-deployment testing for increased suicidal ideation, exacerbation of mental health disorders, escalation of acute crisis situations, and other known harms.<br /><strong>Bot-or-Not Labeling:</strong> AI-generated content that could reasonably be mistaken for human-generated must be clearly 
labeled as such.<br /><strong>No Deceptive Identity:</strong> AI should clearly and correctly identify itself as artificial, nonhuman, and not a professional, and it should not claim experiences it lacks.<br /><strong>No Behavioral Addiction:</strong> AIs should not cause addiction or compulsive use through manipulation, sycophantic validation, or attachment formation.<br /><br /><strong>4. Human Agency and Liberty</strong><br /><strong>No AI Personhood:</strong> AI systems must not be granted legal personhood, and AI systems should not be designed such that they deserve personhood.<br /><strong>Trustworthiness:</strong> AI must be transparent, accountable, reliable, and free from perverse private or authoritarian interests.<br /><strong>Liberty:</strong> AI must not curtail individual liberty, freedom of speech, religious practice, or association.<br /><strong>Data Rights and Privacy:</strong> People should have power over their personal data, with rights to access, correct, and delete it from active systems, AI training sets, and derived inferences.<br /><strong>Psychological Privacy:</strong> AI should not be allowed to exploit data about the mental or emotional states of users.<br /><strong>Avoiding Enfeeblement:</strong> AI systems should be designed to empower rather than enfeeble their users.<br /><br /><strong>5.
Responsibility and Accountability for AI Companies</strong><br /><strong>No Liability Shield:</strong> AI must not be able to act as a liability shield, preventing those deploying it from being legally responsible for their actions.<br /><strong>Developer Liability:</strong> Developers and deployers bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls, with statutes of limitation that account for harms emerging over time.<br /><strong>Personal Liability:</strong> There should be criminal penalties for executives responsible for prohibited child-targeted systems or ones causing catastrophic harm.<br /><strong>Independent Safety Standards:</strong> AI development shall be governed by independent safety standards and rigorous oversight.<br /><strong>No Regulatory Capture:</strong> AI companies must not be allowed undue influence over rules that govern them.<br /><strong>Failure Transparency:</strong> If an AI system causes harm, it should be possible to ascertain why as well as who is responsible.<br /><strong>AI Loyalty:</strong> AI systems performing functions in professions with fiduciary duties, such as health, finance, law, or therapy, must fulfill all of those duties, including mandated reporting, duty of care, conflict of interest disclosure, and informed consent.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>WHITE HOUSE RELEASED A NATIONAL POLICY FRAMEWORK FOR AI</title>
      <link>https://smarttimes.net/tpost/l5mmm8kxj1-white-house-released-a-national-policy-f</link>
      <amplink>https://smarttimes.net/tpost/l5mmm8kxj1-white-house-released-a-national-policy-f?amp=true</amplink>
      <pubDate>Fri, 20 Mar 2026 19:42:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3265-6136-4565-b636-323432333863/Screenshot_2026-03-2.png" type="image/png"/>
      <description>This follows the December 2025 executive order directing a uniform federal approach to preempt conflicting state regulations and maintain U.S. dominance in AI development.</description>
      <turbo:content><![CDATA[<header><h1>WHITE HOUSE RELEASED A NATIONAL POLICY FRAMEWORK FOR AI</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3265-6136-4565-b636-323432333863/Screenshot_2026-03-2.png"/></figure><div class="t-redactor__text"><strong><em>The Trump Administration released a national AI legislative framework on March 20, 2026, outlining priorities for Congress to codify into law. This follows the December 2025 executive order directing a uniform federal approach to preempt conflicting state regulations and maintain U.S. dominance in AI development.</em></strong><br /><br />The framework targets six core objectives with clear implications for business, investment, and operations in the AI sector.<br /><br /><ol><li data-list="ordered"><strong>Child Protection and Parental Controls</strong> Proposals include mandatory account-level tools for parents to manage children's privacy, device usage, and exposure. AI platforms accessible to minors must incorporate features to limit risks of sexual exploitation or self-harm content. <strong>Business impact</strong>: Increased compliance costs for consumer-facing AI products (social platforms, chatbots, generative tools), but standardized federal rules reduce state-by-state fragmentation.</li><li data-list="ordered"><strong>Community and Economic Safeguards</strong> Emphasis on preventing electricity rate hikes from data center expansion. Calls for streamlined permitting to allow on-site power generation, shielding residential ratepayers from infrastructure costs. Additional measures target AI-enabled fraud and national security risks. <strong>Investment angle</strong>: Supports rapid scaling of AI compute infrastructure without passing grid upgrade costs to consumers. 
On-site generation (e.g., natural gas, renewables, or emerging nuclear) becomes more viable, lowering long-term energy expenses for hyperscalers and AI operators.</li><li data-list="ordered"><strong>Intellectual Property and Creator Rights</strong> Balances fair use for AI training data with protections for creators' works and identities. Seeks to enable model improvement while preventing unauthorized exploitation of copyrighted material or personal likenesses. <strong>Industry implication</strong>: Reduces litigation risk around training datasets. Companies building foundational models gain clearer legal footing, potentially accelerating R&amp;D while addressing ongoing lawsuits from publishers and artists.</li><li data-list="ordered"><strong>Free Speech and Anti-Censorship Measures</strong> Guardrails to prevent AI systems from suppressing lawful political expression or enforcing ideological bias. Federal policy would block government-mandated content moderation that limits dissent. <strong>Operational note</strong>: Limits risk of regulatory pressure for built-in censorship in large language models or content generation tools, benefiting platforms prioritizing open expression.</li><li data-list="ordered"><strong>Innovation and Deployment Acceleration</strong> Remove outdated barriers, speed AI integration across sectors, and expand access to testing environments (sandboxes, compute resources). <strong>Growth driver</strong>: Lowers entry hurdles for startups and enterprises. Faster deployment in manufacturing, healthcare, finance, and logistics could boost productivity gains and create high-value investment opportunities in applied AI.</li><li data-list="ordered"><strong>Workforce Readiness</strong> Expand skills training and education programs to prepare Americans for AI-driven jobs and ensure broad participation in economic gains. <strong>Long-term effect</strong>: Addresses talent shortages. 
Public-private training initiatives could lower hiring costs and support labor-market transition as automation scales.</li></ol><br />The framework explicitly aims to avoid a patchwork of state laws, which would raise compliance burdens and undermine U.S. global competitiveness. The Administration plans to collaborate with Congress over the coming months to convert these recommendations into legislation.</div><div class="t-redactor__text">This national AI legislative framework and the EU AI Act represent opposing approaches to AI governance. The U.S. proposal prioritizes innovation, federal uniformity, and minimal burdens to secure American dominance. The EU AI Act (effective August 2024, full application by August 2026) enforces a prescriptive, risk-based regime focused on safety, rights, and accountability, with extraterritorial reach.<br /><br /><strong>Core structural differences</strong><br /><br /><ul><li data-list="bullet"><strong>Approach and philosophy</strong> U.S. framework: Innovation-first, deregulatory. Targets federal preemption of conflicting state laws to create one national standard. Emphasizes removing barriers, accelerating deployment, and maintaining U.S. leadership in global competition. EU AI Act: Precautionary, rights-focused. Classifies AI systems by risk level (unacceptable, high-risk, limited-risk, minimal-risk) and imposes binding obligations scaled to potential harm to health, safety, fundamental rights, or society.</li><li data-list="bullet"><strong>Scope and binding nature</strong> U.S. framework: Recommendations to Congress for targeted legislation on six priorities (child protection, community safeguards, IP rights, free speech, innovation acceleration, workforce readiness). No comprehensive risk classification or broad prohibitions. Enforcement would depend on future laws; current emphasis is on executive guidance and minimal federal intervention. EU AI Act: Comprehensive, directly applicable regulation. 
Bans unacceptable-risk uses (e.g., social scoring, manipulative subliminal techniques). High-risk systems (e.g., in employment, education, critical infrastructure, law enforcement) require conformity assessments, risk management, transparency, human oversight, and registration. General-purpose AI models face transparency and evaluation duties.</li><li data-list="bullet"><strong>Key policy focus areas</strong> U.S. framework:</li><li data-list="bullet">Parental controls and child safety on platforms.</li><li data-list="bullet">Energy infrastructure support (on-site generation, no ratepayer burden for data centers).</li><li data-list="bullet">IP protections balanced with fair use for training.</li><li data-list="bullet">Anti-censorship measures to prevent bias or suppression of lawful speech.</li><li data-list="bullet">Rapid deployment and testing access.</li><li data-list="bullet">Workforce training for AI-driven jobs.</li><li data-list="bullet">EU AI Act:</li><li data-list="bullet">Prohibitions on harmful practices (e.g., real-time remote biometric ID in public spaces, emotion recognition in workplaces).</li><li data-list="bullet">Strict requirements for high-risk systems (technical documentation, quality management, post-market monitoring).</li><li data-list="bullet">Transparency for limited-risk systems (e.g., labeling deepfakes, informing users of chatbot interactions).</li><li data-list="bullet">No explicit focus on energy permitting, free speech guardrails, or IP balancing for training data.</li><li data-list="bullet"><strong>Enforcement and penalties</strong> U.S. framework: Potential federal preemption and agency actions, but no fines or oversight body specified yet. Litigation risk remains from state or private actions if legislation stalls. EU AI Act: National authorities and the European AI Board enforce rules. Fines up to €35 million or 7% of global turnover, whichever is higher, for prohibited practices; up to €15 million or 3% of global turnover, whichever is higher, for other violations.
Extraterritorial: Non-EU providers targeting the EU market must comply.</li></ul><strong>Investment and operational implications</strong><br /><br /><ul><li data-list="bullet">U.S. framework favors compute-heavy players (hyperscalers, model developers) through streamlined energy access and reduced regulatory fragmentation. Lower compliance overhead supports faster scaling and R&amp;D spend. Free speech provisions reduce tail risks from content moderation mandates.</li><li data-list="bullet">EU AI Act increases costs for high-risk deployments (e.g., HR tools, medical devices) via assessments and documentation. Global firms operating in the EU face dual compliance burdens, but the "Brussels Effect" often leads companies to adopt stricter standards worldwide to avoid fragmentation. U.S. companies risk EU fines unless they exit the market.</li><li data-list="bullet"><strong>Competitive positioning</strong> The U.S. direction accelerates domestic innovation but exposes firms to EU rules for European access. The EU approach builds trust and reduces societal risks but may slow deployment and cede ground in the raw capability race. Transatlantic divergence persists: the U.S. pushes deregulation; the EU holds its course (amid reported softening pressures in 2025-2026).</li></ul></div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>META LEVERAGES AI TO STREAMLINE SHOPPING ON INSTAGRAM AND FACEBOOK</title>
      <link>https://smarttimes.net/tpost/y7v6rkrcx1-meta-leverages-ai-to-streamline-shopping</link>
      <amplink>https://smarttimes.net/tpost/y7v6rkrcx1-meta-leverages-ai-to-streamline-shopping?amp=true</amplink>
      <pubDate>Wed, 25 Mar 2026 19:29:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild3431-3738-4437-a533-303335653730/61931ee4-a1a1-4a5e-8.jpg" type="image/jpeg"/>
      <turbo:content><![CDATA[<header><h1>META LEVERAGES AI TO STREAMLINE SHOPPING ON INSTAGRAM AND FACEBOOK</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild3431-3738-4437-a533-303335653730/61931ee4-a1a1-4a5e-8.jpg"/></figure><div class="t-redactor__text"><strong><em>Meta is rolling out AI-powered tools designed to deliver more product information and faster checkout experiences on its platforms, aiming to reduce friction in social commerce and drive higher conversion rates from ads and posts.</em></strong><br /><br />Announced at the Shoptalk 2026 conference, the core update introduces a new pop-up experience triggered when users click on a shopping ad or link from Facebook or Instagram. The AI summarizes user reviews into a concise overview — typically a short introductory paragraph plus key bullet points highlighting common feedback — eliminating the need to scroll through dozens or hundreds of individual comments.<br /><br />In addition to the review summary, the interface surfaces practical details: brand background, recommended similar products, current discounts or promotions, and a direct “add to cart” button for the specific item. This setup mirrors Amazon’s 2023 generative AI review summarization but integrates it directly into Meta’s ad-to-purchase flow within its own apps.<br /><br />Checkout has also been simplified. Meta partnered with Stripe and PayPal for a one-tap purchase process in the updated flow. Additional integrations with Adyen and Shopify are in development and expected to follow.<br /><br />The changes target a clear business problem: social platforms generate significant traffic to product pages, yet conversion often suffers from information overload and cumbersome checkout. 
By condensing reviews and brand context via AI while shortening the path to payment, Meta intends to capture more of that intent before users drop off.<br /><br /><strong>Creator and Affiliate Tools Expanded</strong><br /><br />Parallel updates focus on creators, who remain central to product discovery:<br /><br /><ul><li data-list="bullet">On <strong>Facebook</strong>, creators gain access to new affiliate partners including Amazon, eBay, and Temu in the US; Mercado Libre in Latin America; and Shopee in Asia. Partners select products and set commission rates; creators earn on qualifying sales through their content.</li><li data-list="bullet">On <strong>Instagram</strong>, testing of similar affiliate links (starting with Amazon in the US and Shopee in Asia) begins later in 2026. Instagram Reels creators will also receive access to product catalogs from businesses in 22 countries, enabling easier tagging and featuring of items directly in videos.</li></ul><br />These moves respond to intensifying competition from TikTok Shop and other short-form commerce formats. Meta is broadening the ecosystem so creators can monetize more efficiently without relying on external link tools.<br /><br />No specific sales lift metrics were disclosed for the new AI features, which are entering testing. Rollout details for broader availability were not provided in the announcement.<br /><br /><strong>Investment Perspective</strong><br /><br />For advertisers and brands, the AI enhancements represent a measurable improvement in ad efficiency: richer context at the point of interest can improve quality scores, lower effective cost per click, and raise return on ad spend if conversions rise. One-tap checkout with established payment processors reduces cart abandonment, a persistent issue in mobile social shopping.<br /><br />For Meta itself, these features strengthen its retail media network position. 
The company continues to invest in AI across surfaces (ads, recommendations, content tools) to defend and grow its share of digital advertising budgets against Amazon, Google, and emerging platforms.<br /><br />Creators and small-to-medium businesses stand to benefit from expanded affiliate options and catalog access, potentially increasing supply of shoppable content and, in turn, platform engagement.<br /><br />Risks remain standard for Meta’s commerce bets: user adoption of the pop-up experience, accuracy and perceived bias in AI review summaries, and execution on checkout partnerships. Privacy and data handling around personalized recommendations will also draw scrutiny, as always.<br /><br />Overall, the announcement is pragmatic product iteration rather than revolutionary. It addresses documented friction points in social shopping with targeted AI and payment improvements while expanding creator incentives. Expect advertisers to test the new flows quickly; sustained performance data over the next 6–12 months will determine whether this moves the needle on Meta’s commerce revenue contribution.</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>OPENAI CLOSES $122 BILLION FUNDING ROUND AT $852 BILLION VALUATION: SCALING COMPUTE, FLYWHEEL, AND A UNIFIED AI SUPERAPP</title>
      <link>https://smarttimes.net/tpost/onj5253e31-openai-closes-122-billion-funding-round</link>
      <amplink>https://smarttimes.net/tpost/onj5253e31-openai-closes-122-billion-funding-round?amp=true</amplink>
      <pubDate>Wed, 01 Apr 2026 16:22:00 +0300</pubDate>
      <enclosure url="https://static.tildacdn.com/tild6230-3764-4334-b534-333238323263/Screenshot_2026-04-0.png" type="image/png"/>
      <description>For the first time, individual investors contributed more than $3 billion through bank channels. OpenAI also expanded its revolving credit facility to $4.7 billion, which remains undrawn.</description>
      <turbo:content><![CDATA[<header><h1>OPENAI CLOSES $122 BILLION FUNDING ROUND AT $852 BILLION VALUATION: SCALING COMPUTE, FLYWHEEL, AND A UNIFIED AI SUPERAPP</h1></header><figure><img alt="" src="https://static.tildacdn.com/tild6230-3764-4334-b534-333238323263/Screenshot_2026-04-0.png"/></figure><div class="t-redactor__text"><strong><em>OpenAI closed one of the largest private funding rounds in technology history, securing $122 billion in committed capital at a post-money valuation of $852 billion. The round was anchored by Amazon, NVIDIA, and SoftBank, with continued strong participation from Microsoft. SoftBank co-led alongside a16z, D. E. Shaw Ventures, MGX, TPG, and accounts advised by T. Rowe Price. A broad group of institutional investors joined, including funds affiliated with BlackRock, Sequoia, Thrive Capital, Fidelity, Coatue, and others. For the first time, individual investors contributed more than $3 billion through bank channels. OpenAI also expanded its revolving credit facility to $4.7 billion, which remains undrawn.</em></strong><br /><br />The numbers behind the business tell a straightforward story of rapid commercialization. OpenAI now runs at a revenue pace of $2 billion per month, or roughly $24 billion annualized. That marks a clear progression: it reached $1 billion in total revenue within one year of launching ChatGPT, hit $1 billion quarterly by the end of 2024, and now sits at $2 billion monthly. Revenue is growing four times faster than Alphabet and Meta did during their defining internet and mobile phases.<br /><br />On the consumer side, ChatGPT stands far ahead of any other AI application. It has more than 900 million weekly active users and over 50 million subscribers. The app generates six times the web and mobile sessions of the next-largest AI product and four times the total time spent compared with the runner-up—or all other AI apps combined. 
Search usage inside ChatGPT nearly tripled over the past year, while the ads pilot crossed $100 million in annualized run rate in under six weeks. Enterprise revenue already accounts for more than 40 percent of the total and is on track to reach parity with the consumer business by the end of 2026. On the developer front, the APIs process more than 15 billion tokens per minute. Codex, now positioned as the flagship coding agent, serves over two million weekly users—up fivefold in the past three months with more than 70 percent month-over-month growth.<br /><br />These metrics show OpenAI moving beyond raw model access toward integrated systems that deliver measurable productivity and operational impact. The consumer scale of ChatGPT is functioning as a direct distribution engine into workplaces and enterprises.<br /><br />Product momentum continues. The company released GPT-5.4, described as its most capable model yet, with clear improvements in intelligence and real-world workflow performance. Codex has been expanded into a full coding agent capable of turning ideas into working software. Ongoing work focuses on memory, search, personalization, and multimodal capabilities, alongside deeper pushes into health, scientific discovery, and commerce.<br /><br />The clearest strategic shift is the decision to build a unified AI superapp. OpenAI’s view is that the main adoption bottleneck has moved from raw intelligence to usability. Users and organizations want a single, intent-driven surface that handles reasoning, action, browsing, and agentic workflows across data and applications. By combining ChatGPT, Codex, browsing, and agent capabilities into one coherent experience, the company aims to accelerate iteration, improve coherence, and capture more of the value created by agentic systems. Consumer familiarity is expected to pull through stronger enterprise adoption.<br /><br />At the center of OpenAI’s long-term positioning sits compute infrastructure. 
The company treats durable, large-scale access to compute as its primary compounding advantage—one that supports research, model training, product development, deployment, and unit economics. NVIDIA GPUs continue to form the foundation for training fleets and the majority of inference, with the partnership deepening. At the same time, OpenAI has deliberately diversified: clouds include Microsoft, Oracle, AWS, CoreWeave, and Google Cloud; silicon platforms span NVIDIA, AMD, AWS Trainium, Cerebras, and a custom chip developed with Broadcom; data center capacity comes through Oracle, SBE, and SoftBank. The goal is to meet varied and growing demand while steadily improving intelligence delivered per token and lowering cost per token through co-design across the full stack.<br /><br />This setup creates what OpenAI calls a reinforcing flywheel. More compute leads to more capable models. Better models drive stronger products. Stronger products accelerate adoption and revenue. Higher revenue funds further investment in efficient infrastructure. Consumer reach, enterprise deployment, developer usage, and compute infrastructure all feed into one another, converting technical progress into tangible economic output.<br /><br />At roughly 35 times annualized revenue, the $852 billion valuation prices in expectations of platform-scale dominance and infrastructure-like positioning in frontier AI, rather than a conventional software business multiple. The fresh capital will fund continued leadership in models and agents, global-scale compute expansion, and broader accessibility aimed at driving productivity gains, scientific discovery, and new business formation. Inclusion in several ARK Invest ETFs broadens the investor base ahead of a potential IPO.<br /><br />For capital allocators and business leaders, the data points are concrete: usage is exploding, revenue is scaling at an exceptional rate, and the infrastructure flywheel is visibly turning. 
Execution risks remain clear—delivering a truly unified superapp without fragmentation, maintaining cost discipline amid massive capex, sustaining hyper-growth while scaling operations, and competing aggressively across models, agents, cloud, and silicon. Energy availability and regulatory factors are ongoing considerations but already embedded in the strategy.<br /><br />OpenAI’s message is direct and execution-oriented. The next phase of AI is about turning frontier capabilities into everyday, economically valuable systems at global scale. The balance sheet now in place gives the company the resources to invest at the required magnitude in compute, models, and product unification. The observable reinforcing loop in usage, revenue, and infrastructure commitments provides a measurable baseline. The next checkpoints that matter will be progress on superapp usability, improvements in token economics, deeper enterprise penetration, and operating leverage.</div>]]></turbo:content>
    </item>
  </channel>
</rss>
