
Which model are you running for code assistance?

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,661 (2.34/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 GB G.Skill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabyte M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software Fedora KDE Spin
I have tried deepseek-coder-v2 for chat, and codellama, starcoder2, and qwen2.5-coder 1.5B for code completion, and all of them are horrible compared to straight-up Copilot.
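For anyone wanting to try the same lineup, all of the models named above can be run through a local runner. A minimal sketch assuming Ollama (the runner and the exact model tags are my assumption, not something stated in the thread):

```shell
#!/bin/sh
# Pull the models discussed above and smoke-test one of them.
# Assumes Ollama (https://ollama.com) as the runner; tags follow its naming.
MODELS="deepseek-coder-v2 codellama:7b-code starcoder2:3b qwen2.5-coder:1.5b"

if command -v ollama >/dev/null 2>&1; then
    for m in $MODELS; do
        ollama pull "$m"          # downloads the weights locally
    done
    # One-shot generation from the CLI as a quick sanity check
    ollama run qwen2.5-coder:1.5b "Write a Python function that reverses a string."
else
    echo "ollama not found: install it, or use another runner (llama.cpp, LM Studio)."
fi
```

The 1.5B qwen tag matches the size mentioned in the post; larger variants (7b, 32b) exist if you have the VRAM for them.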
 
Joined
Jan 8, 2017
Messages
9,717 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 MHz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
All the models I tried fail in pretty much the same way, down to the exact same mistakes. I'm pretty sure they're all trained on the same data, so it's kind of pointless trying to find the best one. Copilot is also just a wrapper for ChatGPT.
 

Easy Rhino

Linux Advocate
Staff member
All the models I tried fail in pretty much the same way, down to the exact same mistakes. I'm pretty sure they're all trained on the same data, so it's kind of pointless trying to find the best one. Copilot is also just a wrapper for ChatGPT.

Yea, it is discouraging that the free models are all pretty bad. I mean, none of them are all that great at boilerplate. Copilot recognizes what I am trying to do and adjusts on the fly. Pretty great for getting the boring stuff out of the way.
 
Joined
Aug 20, 2007
Messages
21,836 (3.41/day)
Location
Olympia, WA
System Name Pioneer
Processor Ryzen 9 9950X
Motherboard MSI MAG X670E Tomahawk Wifi
Cooling Noctua NH-D15 + A whole lotta Sunon, Phanteks and Corsair Maglev blower fans...
Memory 128GB (4x 32GB) G.Skill Flare X5 @ DDR5-4000(Running 1:1:1 w/ FCLK)
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 5800X Optane 800GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs, 1x 2TB Seagate Exos 3.5"
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64
Keep in mind Copilot scans and uses your code for model training purposes. That's a nonstarter for me.

The reason it's better is that it was trained on all of public GitHub.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
27,393 (3.84/day)
Location
Alabama
System Name RogueOne
Processor Xeon W9-3495x
Motherboard ASUS w790E Sage SE
Cooling SilverStone XE360-4677
Memory 128GB G.Skill Zeta R5 DDR5 RDIMMs
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 2TB WD SN850X | 2x 8TB GAMMIX S70
Display(s) 49" Philips Evnia OLED (49M2C8900)
Case Thermaltake Core P3 Pro Snow
Audio Device(s) Moondrop S8's on Schiit Gunnr
Power Supply Seasonic Prime TX-1600
Mouse Razer Viper mini signature edition (mercury white)
Keyboard Monsgeek M3 Lavender, Moondrop Luna lights
VR HMD Quest 3
Software Windows 11 Pro Workstation
Benchmark Scores I dont have time for that.
Yea, it is discouraging that the free models are all pretty bad. I mean, none of them are all that great at boilerplate. Copilot recognizes what I am trying to do and adjusts on the fly. Pretty great for getting the boring stuff out of the way.

I have a sub to ChatGPT. I haven't had time to spin up any of the more VRAM-intensive models, but I did try some of the mid-range ones that say they require north of 8 GB. They get better, but some, like deepseek, just aren't it for me. It seems to weigh certain answers above others. I will try to explain.

For example, one of my tests is to ask for a batch file, and I specify batch in my prompt. Deepseek in every case would try to give me PowerShell. PowerShell is more modern, better documented, and preferred today, and the script itself was fine, but I pretty much give it a -100 because I specified BATCH. It would give it to me eventually, but I had to insist a few more times.

The others, starcoder specifically, didn't have that "presumptuous" issue, but it started to fall flat when I asked it more complex things.

I am hoping it improves once I have time to start running the really big models. Llama isn't bad, but it does start to spit out syntax errors when things get hard. It loved putting ";" where it doesn't belong.

That said, I think a lot of these also involve a learning curve for the people using them. You have to know how to "speak" to your model of choice.
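On that note, being very explicit in the prompt itself can help with the "presumptuous" behavior described above. A hypothetical example, again assuming Ollama as the runner (the model tag and prompt wording are mine):

```shell
#!/bin/sh
# Sketch: pinning the output language in the prompt can keep a model from
# "upgrading" a batch request to PowerShell. Assumes Ollama is installed.
if command -v ollama >/dev/null 2>&1; then
    ollama run deepseek-coder-v2 \
      "Write a Windows batch (.bat) script, not PowerShell, that deletes all *.tmp files under C:\temp. Output only the batch script, no explanation."
else
    echo "ollama not installed; see https://ollama.com"
fi
```

Negative constraints ("not PowerShell") plus an output-format constraint ("only the batch script") tend to work better than stating the target language once.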
 
Joined
May 10, 2023
Messages
644 (0.97/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
I have tried deepseek-coder-v2 for chat, and codellama, starcoder2, and qwen2.5-coder 1.5B for code completion, and all of them are horrible compared to straight-up Copilot.
Given that the best models are the bigger ones, and I often don't have that much free RAM/VRAM on a daily basis, I pretty much always use Claude with Cursor.
When offline or using a local model, I often use either codellama or deepseek-coder (the older model), because that's what I have downloaded already.
Keep in mind Copilot scans and uses your code for model training purposes. That's a nonstarter for me.

The reason it's better is that it was trained on all of public GitHub.
You can opt out of that, fwiw.
 
Joined
Aug 20, 2007
Messages
21,836 (3.41/day)
Location
Olympia, WA
You can opt out of that, fwiw.
Aware, but frankly I don't really buy that it works. Once it's in the model, they can't pull it out, and with the default being "on" it's probably scanned nearly immediately.

But maybe it helps with future commits, so I will admit I use that setting for my little bit of open-source projects.
 
Joined
May 10, 2023
Messages
644 (0.97/day)
Location
Brazil
Aware, but frankly I don't really buy that it works.
You can monitor the network calls that are made.
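A quick way to do that on Linux is to watch the editor's open connections. A minimal sketch (the process name "code" is my assumption for VS Code; adapt to your editor):

```shell
#!/bin/sh
# List established TCP connections and flag any owned by a process whose
# name contains "code" (e.g. VS Code). "ss" ships with iproute2 on most distros.
ss -tnp 2>/dev/null | grep -i "code" \
  || echo "no established connections from a 'code' process"

# For a packet-level view of everything leaving the machine over HTTPS,
# run (as root): tcpdump -n -i any port 443
```

This only shows that traffic exists and where it goes, not what is in it; for payload inspection you would need an intercepting proxy such as mitmproxy.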
Once it's in the model, they can't pull it out, and with the default being "on" it's probably scanned nearly immediately.
That's not really how it works. If you opt in, the data is sent back to OpenAI/MS, and they might do further fine-tuning with the new data for newer versions of the model, but it's not "shoved" into the model instantly, nor immediately used to update the model's weights.
 
Joined
Aug 20, 2007
Messages
21,836 (3.41/day)
Location
Olympia, WA
Suffice it to say, my confidence that they'd keep this transparent remains low. But we are getting off-topic; I did not mean to start AI fearmongering here, apologies.
 