
Event Report: LF AI & Data Japan RUG Meetup #3

April 9, 2026

From Edge AI to Agentic AI and Global Contribution — The State of Japan’s AI Ecosystem

On February 20, 2026, the third meetup of the Japan Regional User Group (Japan RUG), a Japanese community under the LF AI & Data Foundation, was held at the Fujitsu Kawasaki Tower in Kawasaki City, Kanagawa Prefecture.

Since its launch last year, Japan RUG has been making steady progress. At this event, four companies (LY Corporation, Fujitsu, Hitachi, and Mitsubishi Electric) shared the latest insights on AI from the perspectives of infrastructure, compression, standardization, and community.

Opening: The Landscape of Japan’s OSS AI Industry

(Noriaki Fukuyasu, The Linux Foundation)

Noriaki Fukuyasu from the Linux Foundation shared the following observations on the current state of Japan’s OSS AI industry:

  • Current State of Japan’s AI Market: While investment in infrastructure and model development is active, the expanding “digital deficit” caused by dependence on overseas cloud infrastructure remains a challenge.
  • A Winning Strategy for Japanese Companies: Instead of competing in general-purpose LLMs that require massive investment, Japan should focus on “Small Language Models (SLM)” tailored for specific industrial tasks, leveraging the strength of “on-site data” in sectors like manufacturing.
  • Importance of Agentic AI: Agentic AI technology, which coordinates multiple specialized models and distributed infrastructure, is an essential element for Japan’s industrial structure.
  • Opportunities for the SI Industry: By shifting from general-purpose AI to solving industry-specific challenges (implementation and operation of AI agents), new business opportunities will emerge for the Japanese Systems Integration (SI) industry.
  • Future Strategy: To advance Japan’s unique business model (distributed infrastructure + lightweight models), it is crucial to actively participate in the technical standardization of Agentic AI.

Concluding his remarks, he emphasized the importance of investing in infrastructure (cloud-native technology) as the critical foundation for AI business: “Cloud-Native Investment IS AI Investment!”

New Standards Supporting Agentic AI Interoperability: A2A and MCP

(Satoshi Ito, Hitachi)

Satoshi Ito from Hitachi, Ltd. explained the latest trends in open standards that enable autonomous coordination between AI agents: “A2A Protocol” and “MCP (Model Context Protocol).”

  • A2A Protocol: A standard for communication between agents across different frameworks, contributed to the LF by Google and IBM. It simplifies task delegation through “Agent Cards” that describe an agent’s capabilities.
  • Model Context Protocol (MCP): A common standard—analogous to “USB-C” for connecting AI with external data and tools—proposed by Anthropic and now hosted under the LF’s Agentic AI Foundation.
  • Significance: These advancements in standardization are resolving the “N×M problem” of individual development, building the foundation for autonomous multi-agent systems.
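To make the Agent Card idea concrete, the sketch below shows a simplified card as a Python dictionary. The field names follow the published A2A schema, but the agent name, endpoint URL, and skill are hypothetical placeholders, and many optional fields are omitted; consult the A2A specification for the authoritative schema.

```python
import json

# Simplified, illustrative A2A Agent Card (hedged sketch, not the full schema).
agent_card = {
    "name": "invoice-reviewer",  # hypothetical agent name
    "description": "Reviews invoices and flags anomalies.",
    "url": "https://agents.example.com/invoice-reviewer",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "review-invoice",
            "name": "Review invoice",
            "description": "Checks an invoice against policy rules.",
            "tags": ["finance", "review"],
        }
    ],
}

# A client fetches the card (typically from a well-known URL), inspects the
# advertised skills, and decides whether to delegate a task to this agent.
card_json = json.dumps(agent_card, indent=2)
skill_ids = [skill["id"] for skill in agent_card["skills"]]
print(skill_ids)  # ['review-invoice']
```

Because the card is plain, self-describing JSON, any A2A-compliant client can discover and delegate to the agent without framework-specific glue, which is exactly how the standard sidesteps the N×M integration problem.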

“Conversation Design” Over English Fluency: How to Dive into International Conferences

(Masahiro Hiramori, Mitsubishi Electric)

Masahiro Hiramori of Mitsubishi Electric, sharing his perspective as a PyTorch Ambassador and Apache TVM Committer, discussed his experiences at the PyTorch Conference 2025.

  • Mindset: The true value of international conferences lies not in “attending sessions” but in “on-site conversations and discussions.”
  • Practical Tips: He offered concrete advice, including preparing a 30-second self-introduction template, using poster presentations as a starting point for dialogue, and tips for articulating value in talk proposals.
  • Message: “Even if your English isn’t perfect, you can connect with the world through proper conversation design.” This was a powerful encouragement for Japanese engineers to contribute to the global OSS community.

A New Frontier for Edge AI Opened by “1-bit Quantization”

(Yuma Ichikawa, Fujitsu)

In a session by Yuma Ichikawa from Fujitsu Research, the technical roadmap for “1-bit quantization,” an approach to extreme LLM compression, was unveiled.

  • Strategy: He presented the “Fujitsu approach”: instead of building small models from scratch, it is more advantageous in terms of inference speed, flexibility, and cost to compress (quantize) large, high-performance models.
  • Core Technology: He introduced world-class research results accepted at NeurIPS and ICLR, such as “QEP (Quantization Error Propagation)” to suppress the accumulation of quantization errors and “QQA (Quasi-Quantum Annealing)” for rapid combinatorial optimization.
  • Outlook: He announced plans to release the “OneComp” toolkit as OSS by the end of March 2026, integrating these compression technologies, and expressed a commitment to leading a future where AI resides in every device.
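For readers new to the topic, the textbook baseline behind “1-bit quantization” can be sketched in a few lines: each weight is replaced by its sign, and a single per-row scale (the mean absolute value, which minimizes reconstruction error for sign codes) is kept. This is only the naive baseline; Fujitsu’s QEP and QQA techniques address the error accumulation and combinatorial optimization that this sketch ignores.

```python
# Naive 1-bit weight quantization: signs plus one shared scale per row.
# This is a textbook baseline, not Fujitsu's QEP/QQA method.

def quantize_1bit(row):
    """Binarize a row of weights to {-scale, +scale}."""
    scale = sum(abs(w) for w in row) / len(row)  # optimal scale for sign codes
    signs = [1.0 if w >= 0 else -1.0 for w in row]
    return signs, scale

def dequantize(signs, scale):
    """Reconstruct approximate weights from signs and the shared scale."""
    return [s * scale for s in signs]

weights = [0.8, -0.3, 0.5, -0.9]
signs, scale = quantize_1bit(weights)
restored = dequantize(signs, scale)
# Each weight now needs 1 bit plus one shared scale per row, roughly a
# 16x memory reduction versus fp16 storage.
print(scale)     # 0.625
print(restored)  # [0.625, -0.625, 0.625, -0.625]
```

The gap between `weights` and `restored` is the quantization error that techniques like QEP aim to keep from compounding layer by layer.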

Tackling the “Connectivity Bottleneck” to Protect Large-Scale AI Infrastructure

(Kim Jeongwoo, LY Corporation)

Kim Jeongwoo from LY Corporation discussed the challenges of “connectivity” in the era of AI agents and introduced case studies using “ID-JAG” and “Athenz” as solutions.

  • Challenge: Conventional user-consent-based connectivity creates a “bottleneck” in the autonomous coordination between AI agents.
  • Solution: By implementing the next-generation standard “ID-JAG,” SSO trust is extended to the API level. This enables seamless (zero-friction) interaction by removing cumbersome consent processes while allowing organization-level visibility and immediate revocation.
  • Implementation: He highlighted how integrating “Athenz,” an open-source authentication and authorization platform, with ID-JAG achieves secure control over large-scale AI/ML data infrastructures. 
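The two-step flow described above can be sketched as request payloads. The parameter names follow the OAuth Token Exchange (RFC 8693) and JWT bearer grant (RFC 7523) conventions on which the ID-JAG draft builds, but the endpoints, tokens, and the ID-JAG token-type URN here are illustrative assumptions, not a verified wire format.

```python
# Hedged sketch of the ID-JAG flow: an agent exchanges its SSO identity
# assertion at the enterprise IdP for an identity-chained JWT authorization
# grant, then redeems that grant at the target API's authorization server.
# All tokens and URLs below are placeholders.

id_assertion = "eyJ...sso-id-token"  # placeholder: ID token from the SSO session

# Step 1: token-exchange request to the IdP (RFC 8693 style) asking for an
# identity-chained authorization grant scoped to the downstream API.
exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": id_assertion,
    "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
    "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",  # draft-specific, assumed
    "audience": "https://api.example.com",  # the downstream API's auth server
}

# Step 2: redeem the returned ID-JAG at the API's token endpoint as a JWT
# bearer assertion (RFC 7523) to obtain an access token, with no per-call
# user consent; the IdP retains visibility and can revoke the chain.
redeem_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": "eyJ...id-jag",  # placeholder for the grant from step 1
}

print("audience" in exchange_request)  # True
```

Because trust is anchored in the IdP-issued grant rather than per-interaction consent screens, the consent bottleneck disappears while revocation stays centralized, which is the property the talk highlighted.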

Session Videos

You can view the session recordings from the LF AI & Data Japan RUG Meetup 2026 here. You can also explore the latest videos from across the community on the LF AI & Data YouTube channel.

Closing — The Future of Japan RUG

What this meetup reaffirmed is the importance of Japanese companies moving beyond being mere users of OSS to becoming deeply involved in the global AI ecosystem through “technology contributions,” “participation in standardization,” and “community leadership.” Japan RUG will continue to provide a forum for deep AI discussions in Japanese, supporting the growth of Japanese AI engineers and their ability to share insights with the world.

Event information and the latest news are updated regularly on our official website and social media channels. Please stay tuned for our upcoming events!

 
