=================
== Snappyl.com ==
=================
Welcome to my corner of the internet!

Windows

default

There are so many memes I could easily toss in here to describe my feelings on the current state of Windows

  • What have you done to my boy?
  • I’m not sad - I’m just disappointed
  • I use Arch, btw

But I feel like those don’t even begin to capture just how excruciating it is to use Windows these days. Then again, I won’t pretend I ever really liked Windows. My relationship with it has been more Stockholm syndrome than anything.

Read more...

ChatGPT vs Gemini

AI

Today I decided to see what OpenAI does with my ongoing LLM/AI harm research, so I signed up for the pro plan as you do. Right out the gate, as I get my feet wet with ChatGPT, I see there is a “project” feature. I’m going to be honest: I like it. Google should ape that. One downside that I’ve noticed immediately, however, is that I cannot initiate a deep research query within a project. What the heck?

Read more...

The Meta Paper

default

About a month ago I found a paper on ACM about how Meta uses an automated LLM generation strategy to increase test code coverage for Facebook and Instagram. Work has been a bit gnarly as of late, but I finally got a break! We fixed the glitch (and are now working on removing the Band-Aid and implementing something more permanent), so things are back to normal. With that settled, I was finally able to read the paper, which was actually fairly interesting.

Read more...

Software Development Developments

AI

This last week I was watching some YouTube, as you do, and I came across a video by ThePrimeTime about this very topic I’m researching right now! In it, he goes over some research from GitClear along with the person who wrote the report.

Is it a good report though?

I don’t know if it’s good or not. I’m not familiar with GitClear, and it’s at least interesting that their CEO wrote the report himself. That said, it doesn’t contradict what I’ve seen so far in any significant way. At least not in any way that my stupid brain can recall. So I’d say it’s worth taking note of for further study, at the very least.

Read more...

“Improving Accuracy Tips”

AI

Tips from the DOD!

I was just reading an article from Carnegie Mellon on the DoD’s concerns around software procurement and, more relevantly to this topic, AI/LLM usage. The whole thing is interesting, but for my purposes here I’m most interested in LLM accuracy. There wasn’t much on that topic in the article, but what was there was good.

Source: Perspectives on Generative AI in Software Engineering and Acquisition

Read more...

Lost My Notes

default

So I learned a valuable lesson today: Samba apparently does not support the question mark character in file names, even if the volume it serves does. Also, either I wasn’t paying attention and clicked a “stop bothering me with these file copy errors!” button, or KDE just silently skipped the unsupported files the last time I backed up to my Samba share. It also turns out Obsidian will happily create files with question marks in the filename. And finally, I erased my main computer to migrate from Linux to Windows for a few reasons, and lost those notes in the process. A small setback, unfortunately.

Read more...

Gemini 2.0 Deep Research

AI

So late last week Gemini 2.0 Deep Research got released. I’m trying that out currently in a comparison of AI coding assistants. I haven’t made a lot of progress – had some Helldiving to do with friends. My initial impression is the output is a little better than 1.5 pro, but that’s just vibe based. So I wouldn’t put much stock in that.

Other news!

I’m also getting my internal source control migrated to Gitea and integrating that with Jenkins finally. If you see this page, then my integration worked!

AI Source Preference

AI

Gemini Deep Research Source Selection

So this evening I’m beginning my research on where an LLM might work well and where it might not. Based on my first post on this topic, one might think AI is terrible at everything, but I feel that would be too sweeping a generalization. I have not been impressed with its output for software development, but that doesn’t mean it’s useless. So as I started my research tonight, I went straight to Gemini 1.5 Pro Deep Research, because why not? Obviously its output is going to be considered “sus” until proven otherwise, but it also just barfs a ton of sources at me that I can review.

Read more...

AI Musings

AI

Musings and Thoughts

Since my last post I’ve been thinking about why LLMs work as well or poorly as they do and I think I’ve come up with a working hypothesis. Admittedly, it’s probably obvious, but hey! I never said I was quick!

What Were These Systems Trained On?

Programming

So, in the case of programming, these LLM systems have been trained on things like code on GitHub and posts on Stack Overflow. What are those posts, typically? More often than not, I’d assume they’re amateur programmers asking entry-level questions. Further, the questions likely concern smaller systems or short code snippets, not entire applications. Given that training bias, I would expect an LLM to be well versed in simple programming tasks and less so in more advanced ones. And following that line of reasoning, the kind of output seen in the reports I reviewed in my last post is about what you’d expect.

Read more...

AI Review

AI

Artificial Intelligence Review

Recently I decided to dive into the literature on the current-ish state of Artificial Intelligence (AI), and more specifically large language models (LLMs), to see what the current potential pitfalls may be. I was interested in the accuracy of LLM outputs and what effects that accuracy, or lack thereof, may have. In my review I’ve seen a few interesting things that I’m going to just summarize here – I’m not a writer or journalist, just a software developer, so bear with me.

Read more...