Imagine Weekly – Issue 40 [June 30th]

Welcome to Issue 40 of Imagine Weekly 👋

It's been a while since we last connected, and I apologize for the absence. I hope the weekly Office Hours digests helped you get through the break. In the meantime, Midjourney has shipped some incredible updates to its website, surpassing many of our expectations. Midjourney Video is outstanding, even though it's "just" a V1, and you may have noticed that V7 is now the default model. On top of that, plenty of other video models have dropped, and it's getting hard to keep track.

How do you like Imagine Weekly? Is there anything I could do differently? Anything you want to see more or less of? Just shoot me an email at [email protected] ✉️

In this issue you’ll find:

  • 📺 Midjourney Video V1

  • 🥊 Midjourney V7 vs V6.1

  • 📖 Office Hours TL;DR

  • 🛠️ Related news & tools to explore

  • 🎨 Some new srefs to tinker with and other sources of inspiration

📺 Midjourney Steps Into Video: First AI Animation Model Now Live

Big news from the world of AI creativity: Midjourney, best known for pushing the boundaries of image generation, has just launched its very first video model. This marks their first step toward a much larger and more ambitious goal—real-time, open-world AI simulations where users can move through 3D spaces, interact with characters, and control dynamic environments on the fly.

For now, the offering is called “Image-to-Video”, and it’s exactly what it sounds like. Users can take any image they’ve created in Midjourney and animate it with just one click. There’s an automatic mode, where the AI decides how the scene should move, as well as a manual mode, where users can write out their own motion prompts to describe the type of animation they want.

Midjourney has also introduced two motion styles: one for subtle, ambient movements (ideal for still-life or atmospheric scenes) and another for high-motion scenarios where both the subject and camera are in motion. Users can even upload images from outside Midjourney and use them as starting frames for animation.
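To make the manual mode concrete, here's the kind of motion prompt you might pair with the high-motion setting. The wording below is my own illustration, not an official Midjourney example:

Motion prompt example: the camera slowly pushes in as she turns toward the window, rain streaking the glass, hair drifting in a light breeze, subtle handheld sway.

In automatic mode, the model would instead infer this kind of movement from the image itself.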

The videos are short—just a few seconds per clip—but users can extend them in small increments, up to 20 seconds total. The pricing is refreshingly accessible: the cost of generating a video works out to about the same as upscaling an image, making it one of the most affordable AI video tools currently available.

For now, the feature is web-only, but Midjourney has hinted at ongoing improvements, including possible relax modes for video generation and expanded functionality in the future.

This is a major milestone—not just for Midjourney, but for the entire AI creative space. It’s exciting to see how quickly the gap between static images and full-motion, interactive AI worlds is starting to close.

If you want to learn more or try it yourself, head over to Midjourney’s official site.

🥊 What’s New in Midjourney V7 vs V6.1?

TL;DR: V7 is faster, smarter, more consistent, and gives you better control over both style and subject.

V7 brings smarter image generation with higher image quality, better prompt understanding, and more coherent compositions. Scenes feel more intentional, and characters and objects stay consistent thanks to the new Omni Reference (--oref) feature.

If speed matters, there's now a Draft mode (--draft) which delivers results up to 10x faster than standard generation—perfect for quick explorations.

For style and mood control, V7 introduces updated sref and moodboard algorithms, giving you more precision and control than ever before.
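To see how these parameters come together, here's an illustrative V7 prompt. The reference URL and sref code are placeholders, and not every parameter combination is supported together, so treat this as a sketch rather than a tested recipe:

Prompt example: a weathered lighthouse keeper reading by lamplight, cinematic shadows, painterly texture --oref https://example.com/keeper.png --sref 1234567890 --ar 3:4

For quick iterations you could drop the references and rough out the same idea with --draft, then re-render at full quality once the composition feels right.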

Finally, personalization profiles have been upgraded. According to Midjourney, 85% of users now prefer the new system for getting results that match their style.

📖 Office Hours TL;DR

📹 Video Infrastructure & Costs: Efforts are underway to reduce video generation costs through server optimizations, aiming for faster and cheaper production.

🕒 Relax Mode Changes: Relax mode is being adjusted because heavy usage by top users is impacting the company's finances.

💼 Business Model & Funding: 80% of revenue is reinvested in models; company is community-funded and exploring AI lab partnerships for resources.

🎥 Video Features in Development: New features like Video Upscaler, Start/End Frame Generation, and Turbo Mode are in development to enhance video capabilities.

🖼️ Image Model Updates: Versions 7.1 and 8 are in progress, with ambitious integration goals and testing underway for moodboard improvements.

🌐 Platform Growth & Stability: Smooth platform launch with a growing team; new user growth steady but limited by paid model.

🔮 Future Outlook: Combining image and video capabilities, exploring mobile features, and two secret projects nearing announcement.

🎨 Inspiration, Prompts and Srefs

Your Midjourney portraits still look fake | Ross Symons

Your Midjourney portraits still look fake. You're not alone. Most creatives hit a wall when it comes to realism: the lighting feels flat, the skin looks plastic, the emotions are missing entirely. Here's the problem: you're describing a face, not a feeling. To create images that feel real, you need to do more than add detail. You need to craft emotion, texture, mood, and atmosphere. Realism isn't in the subject, it's in the story you're telling. Take this prompt, for example:

Prompt example: Close-up portrait of a young woman with damp hair softly clinging to her face, distant gaze, tear-streaked cheeks, soft overcast lighting casting moody shadows, subtle freckles and visible skin pores, cinematic film grain, blurred rainy window in the background with cool blue-gray tones. --ar 3:4 --style raw --quality 2

Why it works:

  • Emotional expression: "distant gaze, tear-streaked cheeks" gives the AI something to feel, not just render.

  • Texture and skin detail: "freckles and pores" break the polished, synthetic look that ruins realism.

  • Lighting for depth: overcast lighting with shadows creates form, contrast, and atmosphere.

  • Environmental storytelling: a rainy window in the background adds mood and narrative context.

  • Photographic imperfection: film grain makes it feel captured, not generated.

If your AI portraits still feel lifeless, it's not your concept, it's your prompt. I break this down step by step in the ZenRobot masterclass. Learn how to go from "not bad" to "wait… is this a photo?" DM me and I'll send you the link.

See you next week 👋🤖

I hope you enjoyed this issue of Imagine Weekly. If you found it helpful, we'd be thrilled if you could share it with your colleagues and friends. Please feel free to reach out with any suggestions or feedback.

We want everyone back to the office 😅

To support me:

☕️ I'd be happy if you bought me a coffee

Buy Me A Coffee

😘 Recommend the newsletter to your friends: it really helps!

📬 If you’re not subscribed yet to my newsletter Imagine Weekly, I’d be thrilled to welcome you on board!