ruby.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
If you are interested in the Ruby programming language, come join us! Tell us about yourself when signing up. If you just want to join Mastodon, another server will be a better place for you.

Server stats: 1.1K active users

#LLM

113 posts · 106 participants · 5 posts today

Efficient use of a #searchEngine 10 years ago:

Coming up with a phrase that would likely be contained in an article that explains the thing I want to read

Efficient use of a search engine now:

Coming up with a phrase that avoids anything triggering #LLM #AI based content

In ancient times, the japes of fools and jesters were heeded as warnings from the gods. I have not spent this much time burnishing my jester credentials for nothing -- dashbots are coming and they will ruin everything. #UXDesign #UX #ProductManagement #LLM #AI #GenAI #B2B

spavel.medium.com/dashbots-the

I am going to create a source of truth that is so single
Medium · “Dashbots” — the inevitable fusion of dashboards and chatbots, by Pavel Samsonov

🌗 GitHub - astronomer/airflow-ai-sdk: an LLM and AI agent SDK for Apache Airflow
➤ Bring the power of LLMs into your Airflow workflows
github.com/astronomer/airflow-
astronomer/airflow-ai-sdk is an SDK built on Pydantic AI, designed to integrate large language models (LLMs) and AI agents into Apache Airflow workflows. Through decorators such as `@task.llm`, `@task.llm_branch`, and `@task.agent`, it simplifies LLM calls and agent orchestration in Airflow pipelines. Users can easily define and run LLM-based tasks in their pipelines while still using Airflow features such as scheduling, error handling, and monitoring.
+ "This SDK makes using LLMs in Airflow super simp
#OpenSourceTools #AI #Airflow #LLM
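A rough sketch of what a pipeline using the `@task.llm` decorator described above might look like; the decorator's parameters (`model`, `system_prompt`) are assumptions based on the post, not the SDK's confirmed API, so check the repo before copying:

```python
# Illustrative sketch only: assumes airflow-ai-sdk exposes @task.llm roughly as described
# in the post above; parameter names (model, system_prompt) are assumptions, not confirmed API.
import pendulum
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def summarize_feedback():

    @task
    def fetch_feedback() -> str:
        # In a real pipeline this would pull text from a database or an API.
        return "The new dashboard is confusing and the export button is hard to find."

    @task.llm(
        model="gpt-4o-mini",                                        # assumed parameter name
        system_prompt="Summarize user feedback in one sentence.",   # assumed parameter name
    )
    def summarize(feedback: str) -> str:
        # The decorated function prepares the LLM input; the model's response flows
        # downstream like any other task result, so normal Airflow scheduling,
        # retries and monitoring apply.
        return feedback

    summarize(fetch_feedback())


summarize_feedback()
```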

In one evening with Cursor I wrote more for one of my pet projects than I had in the last six months to a year 🙄

If it could also be used directly in IDEA, it would be mega-divine. As it is, it's merely divine.

#log #dev #LLM

You know, we invented systems before there were computers.
'Forms' were on paper, rather than on screens.
An 'in tray' was an actual metal wire, or wooden tray, for paper letters, notes, memos and forms.
A database was called a 'filing cabinet'.
An 'interface' was a mail box.
A 'front end' was a person, with a job title like administrator, or clerk.
These systems were described, in excruciating detail, in procedure manuals.
The processes were run not by CPUs, but by people.
'Bugs' were when people made mistakes.

Systems were difficult to understand, even harder to diagnose, and very very hard to fix or change.
To change the way a department worked, e.g. accounts receivable, was so hard that most companies never even tried.

And yet somehow people are under the impression that it is the code that is the difficult bit about modern business systems.
So they try and make the code part easier.
#LowCode #LoCode #NoCode #AI #GenAI #LLM

It was never the code. Code was never the bottleneck.

raganwald.com/2012/01/08/duck-

raganwald.com · Duck Programming

"If I’m 4 years old and my partner is 3x my age – how old is my partner when I’m 20?"
Do you know the answer?

🤥 An older Llama model (by Meta) said 23.
🤓 A newer Llama model said 28 – correct.
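The arithmetic, worked through: it's the age gap that stays fixed, not the 3x ratio.

```python
# Worked arithmetic for the riddle above: the 8-year gap is constant, the 3x ratio is not.
my_age_then = 4
partner_age_then = 3 * my_age_then           # 12
age_gap = partner_age_then - my_age_then     # partner is 8 years older
my_age_later = 20
partner_age_later = my_age_later + age_gap
print(partner_age_later)                     # 28
```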

So what made the difference?

Today I kicked off the 5-day Kaggle Generative AI Challenge.
Day 1: Fundamentals of LLMs, prompt engineering & more.

Three highlights from the session:
☕ Chain-of-Thought Prompting
→ Models that "think" step by step tend to produce more accurate answers. Sounds simple – but just look at the screenshots...

☕ Parameters like temperature and top_p
→ Try this on together.ai: prompt a model with “Suggest 5 colors” – once with temperature 0 and once with 2 (see the sketch after this list).
Notice the difference?

☕ Zero-shot, One-shot, Few-shot prompting
→ The more examples you provide, the better the model understands what you want.
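A minimal sketch of that temperature experiment, assuming together.ai's OpenAI-compatible chat endpoint; the model id is a placeholder and the API key is read from an environment variable, so adjust both before running:

```python
# Minimal sketch of the "Suggest 5 colors" temperature experiment.
# Assumes together.ai's OpenAI-compatible endpoint; the model id below is a placeholder.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

for temperature in (0.0, 2.0):
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # placeholder model id
        messages=[{"role": "user", "content": "Suggest 5 colors"}],
        temperature=temperature,
    )
    # Temperature 0 should give a near-deterministic list; 2 should be far more varied.
    print(f"temperature={temperature}:\n{response.choices[0].message.content}\n")
```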

I entered the #ComputationalLinguistics field in 2018 by enrolling for a Bachelor's degree.

Since then, a lot has changed. Almost all the things we learned about, programmed in practice and did research on are now nearly irrelevant in our day-to-day.

Everything is #LLMs now. Every paper, every course, every student project.

And the newly enrolled students changed, too. They're no longer language nerds, they're #AI bros.

I miss #CompLing before ChatGPT.

Anyone know a good small LLM that is reasonably consistent with rewriting text? The use case is that I want a model that can run pretty well (it's okay if it takes 10 minutes, it becomes a problem if it takes 2 hours) on a regular server without a GPU installed. I want to rewrite static text into something more pleasant to read. Unfortunately, Llama 3 is really inconsistent in the smaller models. I wonder if someone has come across a better option, but maybe it's just not possible yet... #llm
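For what it's worth, a minimal CPU-only rewriting loop of the kind described above, sketched with llama-cpp-python; the GGUF path, thread count and sampling settings are placeholders to tune, not a model recommendation:

```python
# CPU-only rewriting sketch using llama-cpp-python (no GPU required).
# The GGUF path below is a placeholder; point it at whichever small instruct model you test.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-instruct-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,
    n_threads=8,      # tune to the server's core count
    verbose=False,
)

def rewrite(text: str) -> str:
    result = llm.create_chat_completion(
        messages=[
            {
                "role": "system",
                "content": "Rewrite the following text so it is clearer and more "
                           "pleasant to read. Keep the meaning and rough length.",
            },
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # low temperature for more consistent rewrites
    )
    return result["choices"][0]["message"]["content"]

print(rewrite("Static text that should be rewritten into something nicer to read."))
```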

THE AI HYPE BUBBLE IS THE NEW CRYPTO HYPE BUBBLE

"Someday, we’re gonna feel pretty silly about our autocomplete worship"

Anyone who has used the assorted #LLMs has realized that they're not all that good and aren't fooling anyone with what they write & the code they create... yes, it's fancy autocomplete, and yes, that's the reason #OpenAI is on the edge of bankruptcy despite the millions upon millions invested

#Crypto is bullish!t. Don't let anyone tell you different.

doctorow.medium.com/the-ai-hyp

Medium · The AI hype bubble is the new crypto hype bubble, by Cory Doctorow

#AI is the new tobacco: that industry invented scientific controversy around the unquestionable harm of tobacco and lobbied aggressively against any regulation of its sales, just as corporations now peddle tall tales about the utility of AI and lobby for its use everywhere, supercharging #enshittification by normalising #slop.
It took more than half a century to reverse what the tobacco industry had done to public opinion on tobacco, and the damage to public health was irreversible.