Midjourney Office Hours Highlights: June 11th

Midjourney is on the brink of what could be its most ambitious release to date: a full-fledged video model, promising to animate its iconic visual style in new, expressive ways. The upcoming updates span style systems, video generation, infrastructure, and the future roadmap, and point to a platform steadily evolving beyond image creation. In short, this isn't just another feature update; it's Midjourney doubling down on creativity and interactivity. While the initial launch won't be flawless, the focus on fun, magic, and creative empowerment signals an exciting new phase in the platform's evolution.
Style Reference (S-Ref) System Gets Overhauled
One of the first things on deck is a significant update to the S-Ref system. This new version, expected to drop imminently, breaks compatibility with older S-ref codes, meaning users will need to append --sv4 or --sv5 to continue using legacy references. The upside? Enhanced mood board capabilities, a more refined URL handling system, and the addition of an S-ref randomization feature. By all internal accounts, this is anticipated to be a "slam dunk" release for creators who rely on aesthetic precision and variation.
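To make that concrete, here is a minimal illustration of appending the new version flag to a prompt that reuses a legacy style reference code. The prompt text and the S-ref code are invented for the example, and the flag spelling simply follows the wording above rather than any confirmed documentation:

```
/imagine prompt: neon-lit street market at dusk --sref 1234567890 --sv4
```

Presumably, leaving the flag off would route the same code through the new S-ref system instead of the legacy behavior.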
Midjourney's Video Model Is Almost Here
Internally described as "extremely beautiful" and "really fun," Midjourney's long-awaited video model is entering its final stages of preparation. Designed for image-to-video generation (no text-to-video, for now), the model will launch with support for visuals from versions V4 to V7, including Niji images. It won't be shipping with high-resolution or extended video length features at launch, in part to preserve server capacity and ensure accessibility.
In a deliberate strategy to prioritize wide user adoption, Midjourney is considering a low-entry pricing model, despite the higher costs involved in running the video model. However, it won't support relaxed mode initially, and early access may be restricted to yearly/standard members or Mega Plan subscribers.
Features and Focus
The launch version emphasizes medium-quality videos that aim to strike a balance between aesthetic fidelity and accessibility. Notably, the video model maintains Midjourney's distinct visual signature more effectively than early text-to-video approaches seen elsewhere. A key highlight is the planned video extension feature, set as a post-launch priority to accommodate user demands for longer clips.
Rating Parties to Tune Quality
Midjourney is involving its community early with "rating parties," which allow users to vote on early video outputs. The initial party deliberately includes broken or quirky clips (think "heads spinning around") to help identify bugs and refine quality thresholds. A second round will focus on honing preferences and defining success metrics. You can participate via the desktop-only ranking page.
Server Hurdles and Strategy
The video model is computationally demanding, potentially doubling the current server load. Midjourney is negotiating with three different providers to scale affordably. Lower costs would enable a broader rollout, while steeper pricing might lead to stricter access controls or tiered pricing in the future.
Anime Fans Rejoice: Niji Video Model Incoming
A Niji-specific video variant, better aligned with anime's stylistic rigors, is planned for release within a month after the main video rollout. Interestingly, this version may also be the first to introduce text-to-video functionality, since anime visuals are more easily trained within the structured domains of video data.
Looking Ahead: V7.1, V8, and Beyond
There's no pause on innovation. V7.1 is expected to borrow learnings from the video model to improve coherence, while V8 is in early development to boost image quality and visual understanding. A new version of Style Explore is also in progress, built on the updated S-ref system, and the notoriously tricky O-ref feature is undergoing fresh experimentation.
Feedback-Driven Development
The Midjourney team is actively sifting through user feedback from feature votes, with character consistency topping the list of priorities. User demand for longer videos and features like angle shifting are also being factored into post-launch development.
TL;DR
Style Ref Update: New S-Ref system drops soon; better mood boards, randomization, improved URLs. Old codes need --sv4 or --sv5.
Video Model: Launching soon with image-to-video only. Beautiful, fun, but no text-to-video (yet). Works with V4–V7 + Niji.
Features: Medium quality. Shorter videos for now. Extensions and enhancements post-launch. Maintains Midjourney's look well.
Rating Parties: First party shows "broken" content to find flaws. Next will refine quality. Participate on desktop.
Server Concerns: Video model needs 2x capacity. Negotiating server deals. Costs will impact pricing and access.
Niji Video Model: Anime-focused model due within a month. Might include text-to-video. Better fit for structured anime styles.
Future Roadmap: V7.1 to integrate video learnings. V8 in early dev. S-Ref and O-ref getting major updates.
Community Feedback: Character consistency is top user ask. Longer video lengths and angle shifting in progress.

Stay tuned for more updates as Midjourney continues to innovate and enhance its platform!
If you want to support me, feel free to buy me a coffee ☕
If you're not subscribed yet to my newsletter Imagine Weekly, I'd be thrilled to welcome you on board!