Comparative Analysis: LiquidAI/LFM2-2.6B vs. Phi-3.5 Mini 128K Instruct

Overview

Phi-3.5 Mini 128K Instruct was released roughly 14 months before LiquidAI/LFM2-2.6B (August 2024 vs. October 2025).
Model Provider (the organization behind this AI's development)
LiquidAI/LFM2-2.6B: Liquid
Phi-3.5 Mini 128K Instruct: Microsoft

Input Context Window (maximum input tokens the model can process at once)
LiquidAI/LFM2-2.6B: 32.8K tokens
Phi-3.5 Mini 128K Instruct: 128K tokens

Output Token Limit (maximum output tokens the model can generate at once)
LiquidAI/LFM2-2.6B: Not specified
Phi-3.5 Mini 128K Instruct: Not specified

Release Date (when the model first became publicly available)
LiquidAI/LFM2-2.6B: October 20th, 2025
Phi-3.5 Mini 128K Instruct: August 21st, 2024
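
The difference in context window size matters in practice: a prompt that fits comfortably in Phi-3.5 Mini's 128K window may not fit in LFM2-2.6B's 32.8K window. Below is a minimal sketch of checking this with Hugging Face tokenizers. The repo IDs are assumptions inferred from the model names, and the window sizes are taken from the table above; verify both against the model cards.

```python
# Sketch: check whether a prompt fits each model's input context window.
# Repo IDs are assumptions; window sizes come from the comparison table above.
from transformers import AutoTokenizer

CONTEXT_WINDOWS = {
    "LiquidAI/LFM2-2.6B": 32_800,                # 32.8K tokens
    "microsoft/Phi-3.5-mini-instruct": 128_000,  # 128K tokens
}

def fits_in_context(prompt: str, repo_id: str, reserve_for_output: int = 512) -> bool:
    """True if the tokenized prompt still leaves `reserve_for_output` tokens for generation."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + reserve_for_output <= CONTEXT_WINDOWS[repo_id]

if __name__ == "__main__":
    long_prompt = "Summarize this report. " * 20_000  # roughly 100K tokens
    for repo_id in CONTEXT_WINDOWS:
        print(f"{repo_id}: fits = {fits_in_context(long_prompt, repo_id)}")
```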

Capabilities & Features

Compare supported features, modalities, and advanced capabilities
Input Types (supported input formats)
LiquidAI/LFM2-2.6B: Text
Phi-3.5 Mini 128K Instruct: Text

Output Types (supported output formats)
LiquidAI/LFM2-2.6B: Text
Phi-3.5 Mini 128K Instruct: Text

Tokenizer (text encoding system)
LiquidAI/LFM2-2.6B: Other
Phi-3.5 Mini 128K Instruct: Other

Key Features (advanced capabilities)
LiquidAI/LFM2-2.6B: Function Calling, Structured Output, Reasoning Mode, Content Moderation
Phi-3.5 Mini 128K Instruct: Function Calling, Structured Output, Reasoning Mode, Content Moderation

Open Source (model availability)
LiquidAI/LFM2-2.6B: Available on HuggingFace
Phi-3.5 Mini 128K Instruct: Available on HuggingFace
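
Because both models are openly available on HuggingFace, either one can be run locally with the transformers library. The sketch below shows one way to do that; the repo IDs, chat-template support, and dtype/device settings are assumptions, so check each model card for the recommended usage and any trust_remote_code requirements.

```python
# Sketch: load either model from Hugging Face and run a short chat completion.
# Repo IDs and generation settings are assumptions, not taken from the comparison page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LiquidAI/LFM2-2.6B"  # or "microsoft/Phi-3.5-mini-instruct" (assumed IDs)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain the trade-off between context length and cost."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```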

Pricing

LiquidAI/LFM2-2.6B costs half as much as Phi-3.5 Mini 128K Instruct for input tokens ($0.05 vs. $0.10 per million) and the same for output tokens ($0.10 per million for both).
Input Token Cost (cost per million input tokens)
LiquidAI/LFM2-2.6B: $0.05 per million tokens
Phi-3.5 Mini 128K Instruct: $0.10 per million tokens

Output Token Cost (cost per million output tokens)
LiquidAI/LFM2-2.6B: $0.10 per million tokens
Phi-3.5 Mini 128K Instruct: $0.10 per million tokens
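
For a concrete sense of scale, the sketch below estimates the cost of a single request using only the prices from the table above; the token counts in the example are hypothetical.

```python
# Sketch: per-request cost estimate from the listed per-million-token prices.
PRICES_PER_MILLION = {
    "LiquidAI/LFM2-2.6B":         {"input": 0.05, "output": 0.10},
    "Phi-3.5 Mini 128K Instruct": {"input": 0.10, "output": 0.10},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10,000-token prompt with a 500-token reply.
for model in PRICES_PER_MILLION:
    print(f"{model}: ${request_cost(model, 10_000, 500):.6f}")
# LFM2-2.6B:  $0.000550 per request
# Phi-3.5:    $0.001050 per request
```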

Benchmarks

Compare relevant benchmarks between LiquidAI/LFM2-2.6B and Phi-3.5 Mini 128K Instruct.
MMLU (measures knowledge across 57 subjects such as law, math, history, and science)
LiquidAI/LFM2-2.6B: Benchmark not available
Phi-3.5 Mini 128K Instruct: Benchmark not available

MMMU (measures understanding of combined text and images across various domains)
LiquidAI/LFM2-2.6B: Benchmark not available
Phi-3.5 Mini 128K Instruct: Benchmark not available

HellaSwag (measures common-sense reasoning by having models complete sentences about everyday situations)
LiquidAI/LFM2-2.6B: Benchmark not available
Phi-3.5 Mini 128K Instruct: Benchmark not available
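
Since no scores are listed for either model, one option is to produce them locally. The following is a hedged sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval); the repo ID, task names, and few-shot settings are assumptions, and scores will vary with the harness version and prompt formatting.

```python
# Sketch: run MMLU and HellaSwag locally with lm-evaluation-harness.
# Repo ID, tasks, and settings are assumptions; not official results.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=microsoft/Phi-3.5-mini-instruct",  # assumed repo ID
    tasks=["mmlu", "hellaswag"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])
```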

At a Glance

Quick overview of what makes LiquidAI/LFM2-2.6B and Phi-3.5 Mini 128K Instruct unique.
LiquidAI/LFM2-2.6B by Liquid is a text-based AI model. It can handle standard conversations with its 32.8K-token context window. Very affordable at $0.05/M input and $0.10/M output tokens. Released October 20th, 2025.
Phi-3.5 Mini 128K Instruct by Microsoft is a text-based AI model that can use external tools and APIs. It can handle long conversations and documents with its 128K-token context window. Very affordable at $0.10/M input and $0.10/M output tokens. Released August 21st, 2024.
