Reading some GitHub Copilot discourse and trying to decide if the thoughts & feelings I have about it are Upton-Sinclair-flavored, based on my salary. Also shiny toy. Also I think it is the thin edge of the wedge of future ML trends and not a flash in the pan like "web3".

I would feel a lot less conflicted about these ML tools if they were way more legible. Like, give me a sidepane annotating the sources from which a suggestion was derived.

Then I could make a better decision with that context. Like, when I google for an algorithm, I can see in the surrounding page whether this is a pretty common construct or whether I'm about to copy/paste from significantly complex code tainted by Oracle.


@lmorchard this is an interesting perspective, thanks for sharing it.

I would imagine the attribution would be a mess. I'm not sure it's possible to draw a line from a particular input to a particular output when using machine learning. Even if you could, it might be something like 1/10,000 from source A, 7/100,000 from source B, and so on for thousands of inputs.
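A toy sketch of how diffuse that could look, with made-up source names and scores (this is purely illustrative; it is not anything Copilot actually exposes):

```python
# Hypothetical illustration only: fake attribution scores for 10,000
# training sources, each contributing a sliver of a single suggestion.
import random

random.seed(0)
scores = [random.random() for _ in range(10_000)]
total = sum(scores)
attribution = {f"source_{i}": s / total for i, s in enumerate(scores)}

top_source, top_share = max(attribution.items(), key=lambda kv: kv[1])
# With contributions spread this thin, even the single largest source
# accounts for only a few hundredths of a percent.
print(f"largest single contribution: {top_source} at {top_share:.4%}")
```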

@bcgoss I'm less versed in ML algorithms than I'd like to be, but I imagine legible attribution might get easier the more specific the code suggestion gets.

Which could also be a warning sign that it's "filing the serial numbers off" something specific, and a reason to decline the suggestion.

That said, making the process legible seems like a non-trivial change from the ground up.

@lmorchard I've made neural nets as a hobbyist, and one attempt at a k-nearest neighbor classifier for work. In both, the algorithm is a complex set of weighted equations. The weights are tuned toward the "right" values by training on examples: given an input and an expected output, the weights are nudged in whatever direction brings the actual output closer to the expected one.

In the end, all you have are the weights and the algorithm, with no real connection back to the individual inputs.
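A minimal sketch of the kind of training loop being described, assuming a one-weight linear model (illustrative only, not the poster's actual code):

```python
# Toy supervised training: inputs x paired with expected outputs y.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0
learning_rate = 0.05
for _ in range(200):
    for x, expected in training_data:
        actual = weight * x
        error = actual - expected
        # Nudge the weight in the direction that shrinks the error.
        weight -= learning_rate * error * x

print(weight)  # converges near 2.0
# All that survives training is this one number; the examples that
# shaped it are not recoverable from it.
```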
