Looking for an AI assistant for code? Consider Supermaven


gornycreative

So I started playing around with Supermaven's free tier. If you are looking for a code assistant for VSCode I would recommend giving it a try.

I started using it after Theo covered it in one of his tool videos, and I can say it probably shaves about 15 minutes of work per hour for me.

It does a reasonably good job of anticipating the structures you are building based on a quick overview of your project's codebase. I haven't extended the codebase to include the entire ProcessWire source in my projects, but even with just the minimal core wire folder modules and a few site module source bits, it has more than enough to cover the bases.

I am using the free tier, which has a smaller token context limit, but the code suggestions are fast, they don't feel inappropriately intrusive, and they don't write ahead too far unless the engine has high confidence in the direction you are trying to go.

When it has high certainty, it opts to give you large code blocks - even entire functions with appropriate substitutions.

It has sped up my RockMigrations script writing considerably - beyond the macros - because it sees other migration functions I have defined and does a very good job of anticipating not only the class structures for custom page classes, etc., but also replacing names for new classes with the proper singular/plural forms.

I'd say it provides the correct code without modification 85% of the time.

https://supermaven.com/

  • Like 7
Link to comment
Share on other sites

  • 3 weeks later...

Interesting. I watched Theo's video but haven't tried it yet, mainly because I'm very happy with https://codeium.com/. It's also very fast, with quite good completions. I've been using it for almost a year now and it has made much progress in quality and features. In addition to code completion you get AI chat, DocBlock creation, refactoring, and much more. It seems to very intelligently grasp the context it needs to provide good completions without you having to configure anything. You can even improve performance by fine-tuning some settings.

I am using this inside the Cursor editor instead of their Copilot++ feature.

Link to comment
Share on other sites

I'm testing Supermaven and Codeium at the moment. What I like about Supermaven is that it is super fast; I don't develop that "Copilot pause" in my typing. Codeium feels slower than Supermaven, but still faster than Copilot.

With Codeium, my problem is that the project context is only available in the chat, not really in the inline mode (in my testing). So that is one reason I tend more toward Supermaven at the moment.

 

(Maybe it is a limitation of the neovim plugin)

  • Like 1
Link to comment
Share on other sites

1 hour ago, Tiberium said:

I'm testing Supermaven and Codeium at the moment. What I like about Supermaven is that it is super fast; I don't develop that "Copilot pause" in my typing. Codeium feels slower than Supermaven, but still faster than Copilot.

With Codeium, my problem is that the project context is only available in the chat, not really in the inline mode (in my testing). So that is one reason I tend more toward Supermaven at the moment.

 

(Maybe it is a limitation of the neovim plugin)

Thank you for the insight. Just today I had the clear proof that Codeium autocomplete was well aware of the context. At least the last edited file (like they say in their docs). Can't say for sure that it also has context from the whole codebase. Need to check that.

Will also give Supermaven a try and see how it compares.

Link to comment
Share on other sites

Yes, to be more precise: it is aware of the edited files it was active in. The chat indexes the whole project (you can see the progress of that). But Supermaven seems to have had context in inline autocomplete even for files I didn't touch (but which are in my project).

Link to comment
Share on other sites

Out of curiosity, I installed Supermaven yesterday, even though I was quite happy with Codeium, and started playing around with it since then.

For now, I am super impressed by its speed, accuracy, and knowledge. While Codeium is more on the PHP side of things, Supermaven seems to know more about, or even understand, ProcessWire.

 

Link to comment
Share on other sites

10 hours ago, wbmnfktr said:

Out of curiosity, I installed Supermaven yesterday, even though I was quite happy with Codeium, and started playing around with it since then.

For now, I am super impressed by its speed, accuracy, and knowledge. While Codeium is more on the PHP side of things, Supermaven seems to know more about, or even understand, ProcessWire.

 

Same for me. Installed it yesterday and went straight for the trial of the Pro plan because of the insane context window of 1 million tokens. If the model sees that much of your codebase, it can make much better suggestions. Even on the free plan you get a 300,000-token context, which is way more than any of the other autocomplete models can digest. The speed is absolutely amazing. Guess I will switch from Codeium.

  • Like 1
Link to comment
Share on other sites

45 minutes ago, gebeer said:

and went straight for the trial

I am still on the free plan for now - might get the Pro for Claude 3.5 Sonnet support in the chat, as that model is a true banger.
Whatever nonsense I throw at it, it gets the gist and gives me at least a solid foundation to work on - sometimes even a completely fine working solution.

Link to comment
Share on other sites

3 hours ago, wbmnfktr said:

might get the Pro for Claude 3.5 Sonnet support in the chat, as that model is a true banger.

I get better results with Claude Sonnet than with GPT-4o. I've been using 3.5 Sonnet since it came out, currently through the Anthropic API with https://www.continue.dev/. In the Continue config I disabled the autocomplete and use Supermaven for that. The Continue chat interface integrates nicely, and I could choose from more models if I desired to do so. With the Supermaven Pro plan, you get $5 of monthly credits for the chat models. I don't know about pricing for additional chat requests above that; I couldn't find it on Supermaven's website. When going through the Anthropic API, the usage cost is very detailed and transparent.
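For reference, a setup like this can be sketched in Continue's `config.json`. Treat this as a hedged example: the field names and model identifier are from memory of Continue's documentation and may have changed between versions, and the API key is a placeholder.

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiKey": "YOUR_ANTHROPIC_API_KEY"
    }
  ],
  "tabAutocompleteOptions": {
    "disable": true
  }
}
```

With Continue's own autocomplete disabled, Supermaven handles the inline completions while Continue provides the chat interface.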

Anyways, exciting to see so much going on in the AI assistant space and great to have so many options.

  • Like 1
Link to comment
Share on other sites

It's interesting to see a confirmation of my testing (Codeium vs. Supermaven), especially for the inline autocomplete function, which is what I mostly use.

 

The 1 million token context window is a fresh update from July 2.

  • Like 1
Link to comment
Share on other sites

  • 1 month later...

It's been two weeks now since I switched to Cursor Pro full-time and I have to say...

"Thank you, Supermaven. It was an awesome time. Farewell, enjoy your life now. It was a pleasure to meet you. 😊"

 

On 7/7/2024 at 4:15 AM, wbmnfktr said:

might get the pro for Claude Sonnet 3.5 support in the chat as that models is a true banger.

I'm there, in a different way than planned, yet I reached that point.
And I am super happy.

Best part... Docs included.

(screenshot attached)

 

Has anyone else moved completely (or at all) to Cursor?

  • Like 1
Link to comment
Share on other sites

15 hours ago, wbmnfktr said:

Has anyone else moved completely (or at all) to Cursor?

Jep, since 12.1.2024 🙂 https://processwire.com/talk/topic/29439-cursor-might-be-my-vscode-replacement/#comment-238405

For me it's absolutely worth the 20€ per month. It helps me a lot with JS and CSS and sometimes also PHP, but the better you are with a language the less often you'll need help from the AI.

6 hours ago, Tiberium said:

But Cursor is a (VS Code-like) editor, not something you can integrate into your editor of choice (as a plugin), correct?

I think so, yes.

  • Like 1
Link to comment
Share on other sites

On 8/28/2024 at 2:00 PM, bernhard said:

the better you are with a language the less often you'll need help from the AI

That's true.

What I love is that it can do the tasks I don't want to do:
split a large file into smaller parts, create some boilerplate, add fake data and content.
Scaffold out the basics so you just have to fill in the blanks.

Even little things you probably have in your snippets like:

  1. give me a foreach loop that iterates over $recipes and updates x, y, and z.
  2. generate 1,000 demo pages, add unique titles, some content, dates starting from 1997 to 2027.

5 seconds. Done.

Link to comment
Share on other sites

Drop all the others (Supermaven, Copilot, Cursor) and go with Sourcegraph Cody. It supports multiple LLMs (Claude, GPT-4o, Mixtral, Gemini, or even local models), integrates smoothly with VSCode, and gets regular updates. Cody is fast, supports custom commands, and has an excellent UI with a well-integrated diff feature, similar to Cursor. It also uses your entire codebase or other public repos as context when answering questions.

IntelliJ-based IDEs are supported too, but the UI isn't as polished yet. Improvements are expected by early September.

I haven't done a direct speed comparison, but Cody feels quick, and I’m not missing anything.

  • Like 2
Link to comment
Share on other sites

Currently, I am using the paid version of Bito.ai in PhpStorm for chatting and Supermaven free for code completion. They work well side by side.

Bito's GUI and the generated results are quite impressive, as they include clear explanations of what was suggested and why, as well as the assumptions made by the AI engine (i.e. the LLM). Even Bito's web account features are nice and informative. Bito indexes the whole codebase and you only need to tell it to use that, but in my prompts I always tell the engine which classes and methods to deal with (if appropriate, of course). Since it looks at the whole codebase, it sometimes creatively picks classes/methods from other off-topic places, but in that case it is usually enough to guide it back on topic.

The only strange thing about Bito is the subscription scheme: a monthly subscription starts at the beginning of the month. Since I subscribed a week ago, I only had to pay for the remaining days of August, which was fine for me, as I did not have to pay for a full month just to test it.

@dotnetic Sourcegraph Cody looks cheap, so thanks for the tip! I will test that as well.

  • Like 2
Link to comment
Share on other sites

17 hours ago, dotnetic said:

Drop all the others (Supermaven, Copilot, Cursor) and go with Sourcegraph Cody. It supports multiple LLMs (Claude, GPT-4o, Mixtral, Gemini, or even local models), integrates smoothly with VSCode, and gets regular updates. Cody is fast, supports custom commands, and has an excellent UI with a well-integrated diff feature, similar to Cursor. It also uses your entire codebase or other public repos as context when answering questions.

IntelliJ-based IDEs are supported too, but the UI isn't as polished yet. Improvements are expected by early September.

I haven't done a direct speed comparison, but Cody feels quick, and I’m not missing anything.

Hmm, I see Neovim support.

Thanks for sharing!

  • Like 1
Link to comment
Share on other sites

I've been trying to get my buddy to field test all of the suggestions you've been making, so I can just wait until a clear winner surfaces after a decent amount of time. 🤣 His feedback after I mentioned Sourcegraph Cody was that Cursor makes it ridiculously easy for the AI to auto-generate and auto-modify multiple files for him. He's committing to Git far more often due to the lack of a single-action, multi-file undo, but otherwise that's the only "gotcha" among all of the suggestions made after Cursor.

The assistants are really coming along blazingly fast.

  • Like 1
Link to comment
Share on other sites

1 minute ago, BrendonKoz said:

single-action, multi-file undo

There is always the option to accept only parts of the code Cursor created/generated.
It's easier to see from the chat than from Composer; even with only the small Composer window open while looking at the files, you can select the changes you want to apply.

(screenshot: accepting partial changes in Cursor's Composer)

  • Like 1
Link to comment
Share on other sites

Watching this Cody video made me crazy 😄 : the guy is generating a unit test with AI from the finished implementation! 😬 This is so wrong.
We should never write a test from an implementation; tests must be written beforehand, because they are meant to validate the implementation. The implementation is a consequence of the tests, not the other way around. If an AI generates a test from a wrong implementation, will the test validate the wrong behavior or the expected one? 😁

For the fun I asked ChatGPT about this technique:

Generating unit tests from the function they are supposed to test might seem efficient for saving time or ensuring that the test covers the existing code, but it has several significant drawbacks.

1. Confirmation Bias:

  • The test may inherit the same errors or biases as the original function. If the function contains a logical error, the test generated from that same logic might not detect it.

2. Lack of Test Case Diversity:

  • A good unit test should verify a function under various conditions, including edge cases or unusual scenarios. If a test is automatically generated from the function, it may not cover all possibilities and might only focus on scenarios the function already handles correctly.

3. Objectivity and Independence:

  • Unit tests are meant to be independent and objective. By generating tests from the function itself, this independence is compromised, which can reduce the tests' ability to identify defects.

4. Lack of Critical Thinking:

  • Manually creating unit tests forces the developer to consider various use cases, possible inputs, and expected outcomes. This helps identify potential flaws in the function's logic. This critical thinking is often lost when tests are generated automatically.

5. Maintenance and Scalability:

  • If the function evolves, the generated tests will also need to be updated. Automatically generated tests may not be as flexible or well-maintained as manually crafted ones.

When Can This Technique Be Useful?

  • Initial Generation: For very large projects where tests are nonexistent, generating basic tests can be a starting point before refining them manually.
  • Quick Coverage: In situations where quick coverage is essential, this technique can help, though it should not replace thoughtful and manual testing.

Conclusion

It is best not to rely solely on this technique. Unit tests should be independent and designed to push the function to its limits, which is difficult to achieve with tests generated from the code they are supposed to test.
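Point 1 (confirmation bias) is easy to demonstrate with a short sketch. This is a hypothetical Python example, not taken from any of the tools discussed: the function has a boundary bug, and a test "generated" from the function's actual behavior encodes that bug as the expected result, so the test passes while the spec is still violated.

```python
# Hypothetical spec: "orders with a total of 100 or more get a flat 10 off".
def discount(total):
    if total > 100:  # bug: the spec says >= 100, so exactly 100 is missed
        return total - 10
    return total

# A test derived from the implementation's actual output mirrors the bug:
# it asserts that discount(100) == 100, while the spec says it should be 90.
def test_discount_generated_from_implementation():
    assert discount(50) == 50     # below the threshold: correct
    assert discount(100) == 100   # the bug, baked in as "expected" behavior
    assert discount(150) == 140   # above the threshold: correct

test_discount_generated_from_implementation()  # passes despite the bug
```

A test written from the spec first (`assert discount(100) == 90`) would have caught the boundary error immediately.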

Edited by da²
  • Like 1
Link to comment
Share on other sites
