#microsoft

525 posts · 357 participants · 34 posts today
Replied in thread
@bboeckel@toot.thoughtworks.com

At work, since quality and long-term maintainability are our competitive advantage, we ended up with a couple of rules for developers who want to use these statistical tools (as opposed to the fully debuggable code-generation tools we also use and sometimes even develop for internal use):

1. All code and documentation suggested by an #LLM must be clearly marked with something like // begin LLM: Copilot and // end LLM: Copilot.

2. Every commit that contains changes suggested by an LLM must have a message starting with [LLM] (a sketch of both conventions follows below).
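
For concreteness, here is a minimal sketch of what the two rules could look like in practice; the file, the parseVersion helper and the commit message are invented for illustration only:

```typescript
// Hypothetical TypeScript file illustrating rule 1: the code between the
// markers is the part that was suggested by the assistant.
export function parseVersion(tag: string): number[] {
  // begin LLM: Copilot
  return tag
    .replace(/^v/, "")                        // strip a leading "v", as in "v1.2.3"
    .split(".")
    .map((part) => Number.parseInt(part, 10));
  // end LLM: Copilot
}

// Rule 2: the commit introducing this change would carry a message such as
// "[LLM] Add parseVersion helper for release tags".
```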

We have a few codebases that have evolved over decades (and were carefully migrated over time across IDEs, framework versions, build systems and so on), with several new developers joining the teams and a few leaving.

Yet, over all these years, it has always been possible to reconstruct how and why a certain change was introduced.

Now, including the output of these mindless statistical agents might easily damage the whole codebase in subtle ways, hurting the trust of our customers and ultimately our edge over alternatives.

We discussed whether it was possible to forbid the use of these tools, mostly because of #copyright and #dataprotection concerns, but ended up realizing that, given how the hype is affecting IDEs, with e.g. #Microsoft pushing its #Copilot down developers' throats, we cannot really forbid them to new devs who are already hooked.

So we are opting for damage reduction, hoping to be able to assess the total value (or damage) these tools actually provide.

Unfortunately these tools are designed to avoid any responsibility, so you cannot create a git account for Copilot's commits. We know that some junior devs might forget to mark the LLM output, but fortunately they pair program with a senior for a while at the beginning.
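
One possible mitigation (just a sketch, assuming a Node/TypeScript runner is available and the script is wired into .git/hooks/commit-msg) would be a small hook that flags forgotten prefixes whenever the staged diff contains LLM markers:

```typescript
// Sketch of a commit-msg hook: git passes the path of the message file as the
// hook's argument; we reject the commit when the staged diff contains LLM
// markers but the message lacks the [LLM] prefix.
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

const messageFile = process.argv[2];                 // message file path passed by git
const message = readFileSync(messageFile, "utf8");
const stagedDiff = execSync("git diff --cached", { encoding: "utf8" });

const hasMarkers = /begin LLM:/.test(stagedDiff);
const hasPrefix = message.startsWith("[LLM]");

if (hasMarkers && !hasPrefix) {
  console.error("Staged changes contain LLM markers, but the commit message lacks the [LLM] prefix.");
  process.exit(1);                                   // block the commit until the message is fixed
}
```

It only catches one direction of mistakes (a missing prefix, not missing markers), which is why the pairing with a senior still matters.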

This, however, shows how pair programming and code reviews serve pretty different purposes, and neither can catch all the issues that would be evident in the other.

@mtorchiano@mastodon.uno

Replay of the DevCon #IA conference (season 2 - March 2025) organized by Programmez!

Opening keynote: what's new in #Surface and #Copilot+PC for developers / Frédéric Wickert (#Microsoft France)

Session 1: Local image generation with #StableDiffusion / Raphaël Semeteys (Worldline)

Session 2: Deliver your #LLM serverless / Guillaume Blaquiere

Session 3: How I replaced myself with my agents (#AgentIA) / Aymeric Weinbach

Session 4: Coding with Agents: Live Demo and Transformation of the Profession / Frédéric Malo (Checkmarx)

Session 5: AI Facing the Diktat of the Short Term / Adel Chaibi (AI engineer at Intel)
programmez.com/actualites/repl

Programmez! · Replay of our DevCon IA conference, season 2: Programmez! offers the replays of the sessions from the March 2025 DevCon conference held at 42.
Replied in thread

@denmanrooke I totally agree. #Alphabet, #Microsoft and #Meta do it on a daily basis, but on an enormous scale. Though they have created a seemingly legal basis to collect, store and process huge amounts of data from their users, the way they tricked (psychologically blackmailed) their existing users into agreeing to the data collection is ethically indefensible. It was always "My way, or the highway". The users had no real choice.

Feeding AI with stolen or forcibly taken data *must* be illegal.