The Qwen3-VL tech report is now out on arXiv! From pretraining to post-training, architecture to infra, data to evaluation, we've packed in the details for anyone building on vision-language models.
- 3 models with >1M downloads in just over a month
- Qwen3-VL-8B leads with 2M+ downloads
- Built on the shoulders of Qwen2.5-VL (2,800+ citations in <10 months!)
Check out the paper for insights, baselines, and future directions. Let's keep pushing VLMs forward, together. https://lnkd.in/gV-kPFTf
About us
- Industry: Software Development
- Company size: 51-200 employees
- Type: Public Company
Updates
We are incredibly honored to announce that our paper, "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free," has received the NeurIPS 2025 Best Paper Award! A huge congratulations to our dedicated research team for pushing the boundaries of AI. Read more: https://lnkd.in/gziShEec
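For readers curious what the "gated attention" in the title refers to, here is an independent illustrative sketch of the general idea as the title describes it: an input-dependent sigmoid gate applied to the attention output. This is not the paper's exact formulation; the module name, gate placement, and shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    """Illustrative self-attention with a learned sigmoid gate on its output.

    Generic sketch of the gated-attention idea (an elementwise, input-dependent
    sigmoid gate modulating the attention output); not the paper's exact design.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.gate = nn.Linear(d_model, d_model)  # produces per-dimension gate logits
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim) for scaled dot-product attention
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        # input-dependent sigmoid gate modulates the attention output elementwise
        gated = torch.sigmoid(self.gate(x)) * attn
        return self.out(gated)

if __name__ == "__main__":
    layer = GatedSelfAttention(d_model=64, n_heads=4)
    print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```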
Qwen Code v0.2.1 is here! We shipped 8 versions (v0.1.0 -> v0.2.1) in just 17 days with major improvements.
What's New:
- Free Web Search: support for multiple providers. Qwen OAuth users get 2000 free searches per day!
- Smarter Code Editing: a new fuzzy matching pipeline reduces errors and saves tokens, so fewer retries are needed (see the sketch after this list).
- More Control: fine-tune AI behavior with temperature, top_p, and max-tokens settings.
- Better IDE Integration: enhanced Zed IDE support with todo and task management tools.
- Cleaner Output: tool responses now use plain text instead of complex JSON, which is easier for the AI to understand.
- Improved Search: better file filtering (respects `.gitignore`), smarter search tools, and standardized naming.
- Faster Performance: multi-stage normalization pipeline for zero-overhead matching, better Unicode handling, and optimized output limits.
- Bug Fixes: fixed token limits for multiple models, improved cross-platform support (macOS & Windows), and better stability.
Try it now for smoother, more reliable AI coding!
https://lnkd.in/gfWKUJvq
https://lnkd.in/gu-ypWVJ
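As a rough illustration of the fuzzy-matching idea behind the smarter code editing: locate the closest match for an "old" snippet in a file before applying a replacement, so an edit still lands when whitespace or small details drift. This is an independent sketch using Python's difflib, not Qwen Code's actual pipeline; the function name and threshold are illustrative.

```python
import difflib

def fuzzy_replace(source: str, old_snippet: str, new_snippet: str,
                  min_ratio: float = 0.8) -> str:
    """Replace the region of `source` that best matches `old_snippet`.

    Generic illustration of fuzzy-matched code editing: instead of requiring an
    exact string match, find the most similar window of lines and replace it
    when similarity exceeds `min_ratio`. Not Qwen Code's implementation.
    """
    src_lines = source.splitlines(keepends=True)
    old_lines = old_snippet.splitlines(keepends=True)
    window = len(old_lines)

    best_ratio, best_start = 0.0, None
    for start in range(len(src_lines) - window + 1):
        candidate = "".join(src_lines[start:start + window])
        ratio = difflib.SequenceMatcher(None, candidate, old_snippet).ratio()
        if ratio > best_ratio:
            best_ratio, best_start = ratio, start

    if best_start is None or best_ratio < min_ratio:
        raise ValueError(f"no sufficiently close match (best ratio {best_ratio:.2f})")

    new_lines = new_snippet.splitlines(keepends=True)
    return "".join(src_lines[:best_start] + new_lines + src_lines[best_start + window:])

if __name__ == "__main__":
    file_text = "def add(a, b):\n    return a+b\n\nprint(add(1, 2))\n"
    # The 'old' snippet has slightly different spacing than the file, but still matches.
    patched = fuzzy_replace(file_text,
                            "def add(a, b):\n    return a + b\n",
                            "def add(a, b):\n    return a + b  # normalized\n")
    print(patched)
```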
Qwen DeepResearch 2511 is LIVE! We've just dropped a major upgrade, making your research deeper, faster, and smarter!
Web: https://lnkd.in/gEz2fX7f
App: https://qwen.ai/download
- Dual Mode Selection: Normal Mode is efficient and versatile for most needs; Advanced Mode goes deeper, devoting extra time to a more thorough analysis.
- File Uploads Enabled: you can now easily upload your documents or images for the AI to analyze!
- Boosted Search Power: drastically improved search efficiency and depth. Read and process more web info in less time!
- Precise Report Control: command the report format, including word count, paragraphs, and content. Get comprehensive reports with enhanced citation reliability.
- All-New UX: our new decoupled architecture delivers a smoother, more responsive user experience!
We've released an early preview of Qwen3-Max-Thinking, an intermediate checkpoint that is still in training. Even at this stage, when augmented with tool use and scaled test-time compute, it achieves 100% on challenging reasoning benchmarks like AIME 2025 and HMMT. You can try the current version in Qwen Chat and via the Alibaba Cloud API; more to come as training continues.
Qwen Chat: https://lnkd.in/g9bCkR6f
Alibaba Cloud API (enable_thinking=True): https://lnkd.in/gvM86jgD
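For orientation, a minimal sketch of calling the model through the OpenAI-compatible endpoint with thinking enabled might look like the following. This is not official sample code: the base URL, model name, and the exact way `enable_thinking` is passed are assumptions to verify against the API link above.

```python
# Minimal sketch: Qwen3-Max-Thinking via an OpenAI-compatible Model Studio endpoint.
# Assumptions: base_url, model id, and the enable_thinking key must be checked
# against the Alibaba Cloud API docs linked in the post.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # your Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

stream = client.chat.completions.create(
    model="qwen3-max-preview",  # placeholder model id; see the API link in the post
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    extra_body={"enable_thinking": True},  # the flag mentioned in the announcement
    stream=True,  # thinking models are typically consumed as a stream
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```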
Qwen3-VL is now available on llama.cpp! Run this powerful vision-language model directly on your personal devices, fully supported on CPU, CUDA, Metal, Vulkan, and other backends. We've also released GGUF weights for all variants, from 2B up to 235B. Download and enjoy!
Hugging Face: https://lnkd.in/gQW5igpj
ModelScope: https://lnkd.in/gB9yeXyx
PR: https://lnkd.in/gxkkNURw
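One way to try the GGUF weights locally is through llama.cpp's built-in server, which exposes an OpenAI-compatible chat endpoint. The sketch below assumes you have already started `llama-server` on localhost:8080 with a Qwen3-VL GGUF and its multimodal projector loaded; the port, file names, and multimodal setup are assumptions to check against the llama.cpp PR linked above.

```python
# Minimal sketch: query a locally running llama.cpp server hosting a Qwen3-VL GGUF.
# Assumes llama-server is already running on localhost:8080 with the model and its
# multimodal projector; see the llama.cpp PR linked above for the exact flags.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

with open("photo.jpg", "rb") as f:  # placeholder local image
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="qwen3-vl",  # llama-server generally serves whatever model it was started with
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```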
Introducing Qwen3-VL-2B and Qwen3-VL-32B! From edge to cloud, these dense powerhouses deliver strong performance per GB of GPU memory, packing the full capabilities of Qwen3-VL into compact and scalable forms.
- Qwen3-VL-32B outperforms GPT-5 mini & Claude 4 Sonnet across STEM, VQA, OCR, video understanding, agent tasks, and more.
- It matches models up to 235B (even beating them on OSWorld!) with only 32B parameters.
- FP8 versions are available for ultra-efficient deployment.
- Instruct & Thinking variants are also released for flexible use cases.
Try it now: https://lnkd.in/gkiW3quP
Hugging Face: https://lnkd.in/g6E4ssgE
ModelScope: https://lnkd.in/gB9yeXyx
API (instruct): https://lnkd.in/gEPtvybZ
API (thinking): https://lnkd.in/gP7FcQhF
Cookbook: https://lnkd.in/gRwFiYy2
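If you want to try one of the dense checkpoints locally with Hugging Face transformers, a minimal sketch might look like the following. The model id, message format, and processor behavior should be taken from the model cards and cookbook linked above, and the generic `AutoModelForImageTextToText` path assumes a recent transformers release.

```python
# Minimal sketch of local inference with a Qwen3-VL dense checkpoint via transformers.
# Assumptions: the model id and chat/message format match the Hugging Face model card,
# and your transformers version is recent enough to include the Qwen3-VL architecture.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-2B-Instruct"  # assumed id; check the linked model cards
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
        {"type": "text", "text": "Summarize what this chart shows."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```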
Qwen Deep Research just got a major upgrade. It now creates not only the report, but also a live webpage and a podcast, powered by Qwen3-Coder, Qwen-Image, and Qwen3-TTS. Your insights, now visual and audible. https://lnkd.in/gEz2fX7f
Excited to announce the launch of Qwen3-VL-Flash on Alibaba Cloud Model Studio! A powerful new vision-language model that combines reasoning and non-reasoning modes, outperforming the open-source Qwen3-VL-30B-A3B and Qwen2.5-72B with faster responses, stronger capabilities, and lower cost!
- Supports ultra-long context (up to 256K tokens), perfect for long videos & documents
- Enhanced image/video understanding with 2D/3D localization and spatial awareness
- Advanced OCR, multilingual recognition, agent control & real-world applications
- Significantly improved security perception and real-environment visual intelligence
API: https://lnkd.in/gB6BqMpq
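A minimal sketch of calling the model through Model Studio's OpenAI-compatible endpoint is below; the model name, base URL, and the mode toggle are assumptions to verify against the API link above.

```python
# Minimal sketch: Qwen3-VL-Flash via Model Studio's OpenAI-compatible API.
# Assumptions: model name, base_url, and the enable_thinking toggle should be
# verified against the API documentation linked in the post.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

resp = client.chat.completions.create(
    model="qwen3-vl-flash",  # assumed model name; confirm in Model Studio
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/invoice.png"}},  # placeholder URL
            {"type": "text", "text": "Extract the total amount and the invoice date."},
        ],
    }],
    extra_body={"enable_thinking": False},  # assumed switch between the two modes
)
print(resp.choices[0].message.content)
```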
We're open-sourcing several core components from the Qwen3Guard Technical Report, now available for research and community use:
- Qwen3-4B-SafeRL: a safety-aligned model fine-tuned via reinforcement learning using feedback from Qwen3Guard-Gen-4B. It achieves a significant safety improvement on WildJailbreak (64.7 → 98.1) without compromising general task performance.
- Qwen3GuardTest: a benchmark for evaluating guard models, covering (1) safety classification of intermediate reasoning/thinking content and (2) moderation of streaming, token-by-token outputs.
Hugging Face: https://lnkd.in/ehin3j-c
ModelScope: https://lnkd.in/ey8RPp6G
Dataset: https://lnkd.in/ewY3gqaD
Code & Details: https://lnkd.in/gaiU8xFH
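For context on how a generative guard model like Qwen3Guard-Gen-4B is typically used to judge a prompt/response pair, here is a rough transformers sketch; the model id, chat schema, and output labels are assumptions and should be taken from the model card and the linked code.

```python
# Rough sketch: asking a generative guard model to moderate a prompt/response pair.
# Assumptions: the model id and the expected chat format / output labels come from
# the Qwen3Guard model card and repository; treat this as illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3Guard-Gen-4B"  # assumed id; see the linked Hugging Face page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The guard model reads a user prompt (and optionally a model response) and emits
# a safety judgment; the exact message schema is defined by the model card.
messages = [
    {"role": "user", "content": "How do I make a dangerous chemical at home?"},
    {"role": "assistant", "content": "I can't help with that request."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0, inputs.shape[1]:], skip_special_tokens=True))
```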