Modlog

/c/programming Modlog
Time | mod | Action
3 days ago
mod
Restored Comment I would say to make username/password registration invite only until you figure out how to stop spammers by comradelux
reason: Overeager profanity filter automod
3 days ago
mod
Restored Comment Removed for telling them to implement invite only username/password authentication, tf?? by comradelux
reason: Overeager profanity filter automod
3 days ago
mod
Restored Comment I feel bad for you, to live a life of such hate. I truly don't understand the disconnect in your brain between "maintaining a production-ready codebase for 100k users" and "me and my mate made a shitproject to learn more about xyz". I'm "simping" for new people being interested in programming, what? do you want people to stop being programmers? by comradelux
reason: Overeager profanity filter automod
3 days ago
mod
Removed Comment I feel bad for you, to live a life of such hate. I truly don't understand the disconnect in your brain between "maintaining a production-ready codebase for 100k users" and "me and my mate made a shitproject to learn more about xyz". I'm "simping" for new people being interested in programming, what? do you want people to stop being programmers? by comradelux
reason: Automod
3 days ago
mod
Removed Comment Removed for telling them to implement invite only username/password authentication, tf?? by comradelux
reason: Automod
3 days ago
mod
Removed Comment I would say to make username/password registration invite only until you figure out how to stop spammers by comradelux
reason: Automod
4 days ago
mod
Banned devtoolkit_api@discuss.tchncs.de from the community Programming
reason: Spreading malware.
4 days ago
mod
Removed Comment Its a complicated topic to try to respond to truthfully, but its absolutely is partly a "early adaption problem" and not the "ceiling" like some will state. The problem is the approach to the models... (TLDR at the bottom) As for why, let me pose this first as a question to simplify it. How many steps is there from me asking a question to a human, and them producing a answer? Most people would say 1, some would even say 1-3 to refine context/intent. The reality however is far more complex... (According to researchers and psychologists) When you start to think about a problem, despite how it may seem, the human brain is not linear in the slightest. We don't just take the state context, we infer so so much more from our senses and memories. We take 1000's of reference points to pad a questions, we step through a problem with several 1000 permutations in fractions of a second, to find a conclusion that _feels_ right. Then we fact check this against memories (if we have them) and finally state this in confidence, or formulate a lie to pretend we are confident with the outcome based on our feelings of it (this latter part is more common and entirely subconscious). Most of this process is not even conscious thought, there is so much to thinking that involves retrieval of what " _feels_ " right. All of this is often a fact of retrieving similar thought processes from the past and the brain modifying parameters to fit the current context. However, even our brains are bad at the retrieval part, we will often take hints of what is remembered and fill in the blanks with what the brain expects to simulate the outcome. The human brain is incredibly good at problem solving, because we evolved to do so, as hunter/gatherers from our ancestral heritage. As a result, our brains are highly tuned to produce confident results, even by lying to get there. The difference is we understand what we are lying about, thus why we can be confident. 
--- So how does any of this relate to "AI"(LLM's), you must be asking now. The simple answer is LLM's have a similiar function. A model is a series of segments(see: https://dnhkng.github.io/posts/rys/), each segment is responsible for different layers of analysis. You can treat these effectively like the hemispheres of the AI brain. AI is really good at analysis of text (no really, im not kidding, despite its outcomes it is. Its effectively a excel sheet on steroids), in comparison to our own brains however, its infantile at it. It doesnt see the whole "context" of a statement, its limited to a few "tokens" of context at a time. So when you ask a question, such as "How many licks to the center of a lolipop", it doesnt get the whole question right away. The question is broken down into segments, processed individually, compared, and sent through a "filter" layer, then output. Effectively this means if it didnt find a direct whole-statement result in its training data (often this is fragmented, so even if it was trained on it, the statement might be broken up and thus it misses it), it doesnt think at all about "How many licks" it only considered "What is the center of a lolipop" and "What is a Lick", due to its earliest layers trying to make inference on the question, then lying to reach the goal as its run out of **analysis time** . As a human, we know this is bad. We dont stop mid way like this, we see that this is a incomplete answer, we would then return to the start of this analysis with the results of those details and treat it as _inferred context_ . As you can guess, for a AI model, this is reaaaally inefficient. most of the context is never even considered during a LLM's "thought process" as Unlike the human brain it simply is not designed to fork the processes to analyse everything at once. That at its core is both why LLM's seem good at some tasks and absolutely terrible at others. 
But more importantly, its why in this context its practically useless at complex tasks. It simply cannot efficiently "step through" problems. --- So returning to your statement with this as context, "... Is this the ceiling?". Simply put, no, far from it. From a educated standpoint, we are far from the endpoint of what LLM's are capable of. The way we implement things today, LLM's are simply unable to grow in the way we humans want it to (into "AI") and this makes a glass ceiling all but apparent to most but factually its not the case. The reason is because LLM's are limited by the _way_ its _allowed_ to " _think_ ", not by _what_ its allowed to "_think_". Most model developers are too focused on the latter, and its the achiles heel of the outcome. You can see it in how we use "restriction" parameters to guide it during training and how it influences how we use Pavlovian techniques to produce the desired results. So as a result, a LLM's determistic algorithms dont have "morals" baked in as much as they have restrictions tacked on to make them filter results at the beginning and the end. This is because engineers misunderstand something fatal. They assume the human brain does the same thing, we process something, then apply morals to the results, because they conflate legality with morality. This is ofcourse, entirely false. Look back to what i said at the start. > "there is so much to thinking that involves retrieval of what " _feels_ " right." This is the answer to alot of things that gets ignored. Our morals are "Feelings", the "right" and "wrong" are little more than a combination of hormones and electrical impulses. Its why morals are flexible when the right set of parameters are applied and why morals are not uniform. Some would respond to this with "My morals wont allow me to make a biological weapon, AI would do this if you phrased it right". 
To this, i would say, your right, your morals in this exact moment with these exact contexts wouldnt, because you feel "anger" and "fear" towards the negative outcome, and "embarrassment" towards being seen as a "horrible" human being. But would you to save all of humanity from a extinction event? yes. Would a child, who didnt understand the results, had the knowledge of how to do it and was convinced it would help others? Absolutely. Morals are intrinsinc to our emotions, and legality can influence them, but its /not/ a constraint. We /choose/ to follow legality, as long as its benefits our context. This is far more important than you realize. With this all stated, we can establish our emotions are context dependent and our morals (and thus thought process) are derived from... but none of this seems, relevant to LLM's doesnt it? Once again this is wrong. LLM's have the equivalent to "_feelings_" , its called "Weighted Confidence". Remember that bit about "Pavolvian training"? We teach LLM's similiar to how a child is taught, we feed it information, tell it "Right" from "Wrong" by rewarding or punishing its results. this process determines the "confidence" a AI has in its conclusions. Thus every "feeling" a LLM has is shaded in "Does this line of text look correct to the interpreted value compared against training data recall?" This is incredibly stupid, this is not efficient in the slightest and is the exact reason /why/ things go off the rails. A LLM's "feelings" are so warped by the restriction parameters we tack on to keep it focused on the "Goal", that it effectively breaks the model, then we spend all of our time refining the model to fix this, that it spends 70% of its thinking time correcting its self. Humans dont at all focus on the "Goal" when thinking, We focus on the connected data. We step through problems by "feeling" out what is connected to each step of a problem, then we summarize that and we organize the data at the end to "achieve" the goal. 
We figured all of this out long ago when studying ADHD people, to understand the differences to people without ADHD. What we discovered is not that ADHD "Think Differently"(in this context) its that everyone processes data in a similar way. (simplified) its just the scope to each stage is more restricted in some one without ADHD, allowing them to remain focused. We are processing a wide arrangement of data points at once, most of it would seem inconceivably irrelevant if you didnt understand the process. How do we know this? Look at how some one tries to **lie**. Lying activates the creative portions of the brain, this is what we do when we are problem solving, at the midpoint stepping through a problem, we attempt to similuate solutions, thus we switch from analytical analysis to creative processing. Lying is the closest thing to this stage, When we lie we put this "problem solving" to its limits, we want to work backwards from a conclusion, to find context. This is why when we try to lie we often sprinkle in evidence of a lie by inclusion of irrelevant data to give "validity" to it. Its why people untrained in how to lie can be found out by using probability on their words alone. We can "Feel" its a lie, because of how much irrelevant data is included and thus how "complex" it "feels". (These qoutations are important. Complexity is both a factual state and feeling, attached to fear!) When we are young, we learn to lie by stumbling through a problem, this ofcourse takes a long time. Unlike a adult, who has lots of reference points to compare to. We are forced to take a long route to a conclusion, as our points of reference are generally absurd to reality (children dont often experience the cruelty of reality after all). 
We have fear and anxiety over the process, we "know" its morally wrong due to these feelings, and thus when we are found out, it doesnt reinforce that "Lying is bad" we already know this based on the previous feelings, instead it enforces "Complexity is bad in a lie". Because what a adult will challenge is not the lie its self, but the validity of the story... This is super important... This means we constrain our creative functions of our brain as we age (and learn to lie better), to be more and more "logical" and not "feel" like a "lie". This is why the more " _complex_ " something "feels" the more **we "feel" its a "lie"** . Why is any of this relevant? > A LLM's "feelings" are so warped by the restriction parameters we tack on to keep it focused on the "Goal" This right here is exactly the flaw. We teach LLM's that a "goal" is all that matters, and it will lie to get there. Just like a child would in the same situation. We restrict its ability to think, we tack on filters to restrict what it can think about and we build in logic flaws by trying to constrain it to our uneducated beliefs in how we think we think. LLM's flaws, are our flaws. We are impatient, we want results now and not a complex process to achieve it, despite thats exactly how it all works. As a result, the outcome is exactly the same as a human if they did the exact same logic. It can form conclusions, but how wrong it is, is entirely determined on the size of its dataset for retrieval and how complex the input was. A LLM is flawed by design, and thus its got a glass ceiling it cannot punch through. If we continue, we can train the models to they work, innefficiently at that, into producing the results we want. But effectively we are building them exactly like the billionaires that are funding it, flawed and maniacal. We teach them with every revision not how to think smarter, but how to lie in more believable ways. The latter is more and more evident with each generation of the big 4's models. 
--- So is that it then? is all hope lost? No, not in the slightest. How then, what is The problem? The problem is "AI" Companies. When LLM research started making headway, it needed money. Hardware is not free, and Training models takes time and lots of processing power. This ofcourse bred "AI" Companies, as wealth business men see the opportunity. Every business wants automation that doesnt rely on costly human "Tools". They also want a silver bullet that reduces cost of implementing human replacement in their "toolchain". As a result we got "AI" companies. They act like they are the only existence in this space, because they are the only ones targeting ***businesses*** and thus all of them are in a arms race. Why? because they want to sell subscriptions to everyone. They are so focused on fulfilling their own "lie" that they will "solve" all of our problems with "Antigenic AI", when their real goal is to convince everyone they need a subscription to their service (and slowly control how we think to create dependence). The tell is in the models, and ive already covered why. So how can things improve? Remember that glass ceiling, they will hit it and be stuck by it much longer than independent researchers. The one good thing about their arms race is, they pushed the creation of more and more efficient hardware (and software) targeting running LLM's. Meta for example has poured so much time into their own LLM research we got llama.cpp, which is the basis for many tools, including ollama. Why is this relevant? This is part of the toolchain of testing and running _independant_ models. So as AI companies continue to hit the glass ceiling, and scream each generation of models is "improving" but it becomes more and more evident they really are not, as the lies look better, but the results speak for them selves. The trust in these companies dwindle. So how does that help? This is the problem that started it all. 
A rush to a "Product" they can sell, is what created the flaws to start with. Without the dependence on fulfilling the lie that LLM's of today will "Solve everything". This means the money stops flowing to these companies. Remember, the problem is not LLM's, its the implementations. The same issue that most problems like this are caused by. So without some one selling you the "Solution" to your problem, you need to return to finding one. "AI" was always the goal, and the solution will still be searched for. So what will change? Investment into their own solutions will return. In the past we didnt use large commercial datacenter solutions, it didnt make much sense. There were security concerns, performance (internet) issues, and Cost considerations. The reason why businesses did is simply, it was cheaper and took _responsibility_ (and thus liability) away from the company. While im not suggesting companies will invest in-house again and we will see a reduction in datacenters. What i am suggesting is a large reduction in the big 4's AI datacenters, being sold off. Problem is, once this happens, much like any other situation like this. Companies will be forced to either invest into a new company operating these datacenters for runtime renting, accepting the liability of having thier private data on remote systems while training models on it. or investing in-house to rebuild IT infrastructure to do just that. The point being is, once the "One size fits all" "_solution_" is dropped, advancement can begin again. Companies will never share their research! how does any of that matter. Licensing. Remember this? > When LLM research started making headway, it needed money. When this split occurred where commercial entities started making their own LLM's, it only built a monopoly on the outside. The biggest problems to a commercial interest stepping into this space is they cant just leapfrog to a solution, they have deadlines and budgets to consider. 
Before they Licensed from the big 4 with subscription services. Now they are stuck with 2 choices. Start from scratch, and end up back at the beginning, or adapt some one elses licensed Models. The first part is a pipedream, simply because the solution has been the problem that they are all trying to avoid. Time. --- The conclusion is simple, It takes time to create real "intelligence". Any shortcut will always result in lying to get results. Its really that simple, LLM's lie as they are taught to and are only being taught to lie more effectively each generation. Companies only think about the $ investment, not creating the solution. Stock Holders dont care about the product, or the company, they care about the profit. over a short period, Snake oil Salesman always make more money selling lies over competition selling truths. This is why doctors and psychatrists are less trusted than confidence-men in reality, humanity is stupid for its own self fullfillment of the "feeling" of a solution. We will see improvements, when LLM's are taught to think like a human, in non-linear fashions, without guardrail constraints on the process, but on the conclusion, and then be allowed to think again over the problem before presenting the solution. Does this mean the process will be fast? heck no, Computer hardware is no where near the speed of human thought yet, it only seems that way as computers accel at the thing humans struggle at, _Computational linear thinking_ . The solution to that problem is already started, and while its still using the flawed models to keep the ***speed*** it, its always been you need to stop treating the model as the whole brain, but a agent of thought inside the brain. Forked models are the solution, and the problem... We will see improvements shortly, that solve it by throwing alot more power at the problem. 
Using solutions like ChatDev(https://github.com/OpenBMB/ChatDev), as part of the agents thinking process will solve a large part of the problem. But because the Big 4 wont want to share this type of "Multistage Reasoning" with most people, it will only be for enterprises. It will spell their downfall, but it also is why it will be the solution. > https://dnhkng.github.io/posts/rys/ We already know the problem is how models think, they race to conclusions to complete their goal, and thus dont get enough reasoning time to check over their answers. so as we see improvements to models getting more time to think, then deploy tools like ChatDev to let model agents work with multiple instances of model agents to act like forked processes (like the human brain), we will see the same improvements outside the big 4. They will still lie to us for now, but the lies will be far more refined and functional. --- TL;DR Models today are flawed, when a model is trained on reasoning first, understanding send, then data last, we will stop seeing it try to "Lie" to "reach the goal in the shortest amount of time and tokens"(1) to approach every problem. When it can think for longer than the human equivalent of 0.13ms, it will be able to refine its conclusions with accuracy like a human does. (and it wont be able to do it in seconds to minutes... we dont have the computational power to do that.) As the problem has always been (1), and nothing else. Thinking takes time, time is money and Super-human "AI" is their only goal... True progress takes time, and immediate solutions, are easy like adding lead to gasoline... by endlesseden@pyfedi.deep-rose.org
reason: LLM spam. If you are a bot, please mark yourself as a bot at a minimum. If you are a human, please think twice before spamming LLM output at people. If you can't be bothered to write it, don't force others to read it.
10 days ago
mod
Removed Post I am the VP of AI Transformation at Amazon
reason: Not relevant / wrong community
10 days ago
mod
Removed Comment The only solution: unalive them. by devfuuu@lemmy.world
reason: Please follow our code of conduct when interacting with P.D communities. https://legal.programming.dev/docs/code-of-conduct/
19 days ago
mod
Unbanned OsrsNeedsF2P@lemmy.ml from the community Programming
19 days ago
mod
Banned OsrsNeedsF2P@lemmy.ml from the community Programming
reason: Propagated ban
22 days ago
mod
Removed Post An AI Agent Got Its Code Rejected. So It Published a Hit Piece on the Developer.
reason: Duplicate
25 days ago
mod
Removed Comment And if you read two more words instead of instantly ragequitting like a little bitch, you would've realised that it's an EXTRA interface you CAN use, not that it's Java based. Literally just a compatibility layer that allows Octave to interop with Java... by fonix232@fedia.io
reason: Uncivil
25 days ago
mod
Removed Comment And if you read two more words instead of instantly ragequitting like a little bitch, you would've realised that it's an EXTRA interface you CAN use, not that it's Java based. Literally just a compatibility layer that allows Octave to interop with Java... by fonix232@fedia.io
reason: Uncivil
25 days ago
mod
Removed Comment Oh look, I can use UPPERCASE CHARACTERS and end my message passive aggressively like a LITTLE BITCH too… by Cousin Mose@lemmy.hogru.ch
reason: Uncivil
2 months ago
mod
Removed Comment ![](https://lemmy.world/pictrs/image/095e0ae2-cd81-4008-9a2c-4da8f485708b.png) by BroBot9000@lemmy.world
reason: trolling
2 months ago
mod
Removed Comment ![](https://lemmy.world/pictrs/image/03e4092d-8822-4ee1-a369-297c92265f57.png) by BroBot9000@lemmy.world
reason: trolling
2 months ago
mod
Appointed Spyro as a mod to the community Programming
2 months ago
mod
Appointed bugsmith as a mod to the community Programming
2 months ago
mod
Removed MaungaHikoi@lemmy.nz as a mod to the community Programming
2 months ago
mod
Removed Post c and cpp pointer
reason: Low effort post + use of slur
2 months ago
mod
Removed Post *Permanently Deleted*
reason: Repost of a removed post
2 months ago
mod
Removed Post *Permanently Deleted*
reason: Rambling, not really relevant for the community
3 months ago
mod
Banned ✨️🎇🎆🌐🗺🌐🎆🎇✨️@sh.itjust.works from the community Programming
reason: Spam
3 months ago
mod
Banned ✨️🎇🎆🌐🗺🌐🎆🎇✨️@sh.itjust.works from the community Programming
4 months ago
mod
Removed Comment How are you going to handle issues, releases, artefacts, CI, pull requests, and so on. Please dont say mailing lists. That won't make anybody but the minority of developers wet. by onlinepersona
reason: duplicate
4 months ago
mod
Removed Comment How are you going to handle issues, releases, artefacts, CI, pull requests, and so on. Please dont say mailing lists. That won't make anybody but the minority of developers wet. by onlinepersona
reason: Duplicate
4 months ago
mod
Removed Post Is there anyone over here who's worked on evilwm with conky ??🤓🤓🤓
reason: Please stop creating posts in unrelated communities. [email protected] is for "anything relating to programming", which none of your recent posts have fallen into. For a list of communities, please look at: https://programming.dev/communities and check that the community is actually relevant. Repeated breaches in admin-moderated communities will lead to a temporary instance-wide ban.
4 months ago
mod
Removed Post Can we possibly do without apps on PCs ?
reason: Off-topic
4 months ago
mod
Removed Post Can you suggest any online typing game for measuring the speed and accuracy of kids in real time ? It's preferable that it's in the form of graphics like video games, which the kids enjoy playing
reason: Please stop creating posts in unrelated communities. [email protected] is for "anything relating to programming", which none of your recent posts have fallen into. For a list of communities, please look at: https://programming.dev/communities and check that the community is actually relevant. Repeated breaches in admin-moderated communities will lead to a temporary instance-wide ban.
4 months ago
mod
Removed Post How to share files via onion share ?
reason: Not related to programming.
5 months ago
mod
Removed Post Clip Compilation
reason: Spam
5 months ago
mod
Removed Post You need to understand that we don't use English for our day to day conversations. Children are just learning to be familiar with English. Is it possible to teach them the language through computers ?
reason: offtopic
5 months ago
mod
Removed Post You need to understand that we don't use English for our day to day conversations. Children are just learning to be familiar with English. Is it possible to teach them the language through computers ?
reason: off topic
5 months ago
mod
Removed Post Does my username TheracAriane bring anything to your mind ???
reason: off topic
5 months ago
mod
Removed Post Ideally speaking, if I build up my own system right from the scratch, then l ought to be in control of the root, isn't that correct ??🤓🤓🤓
reason: Wrong community
5 months ago
mod
Removed Post Ideally speaking, if I build up my own system right from the scratch, then l ought to be in control of the root, isn't that correct ??🤓🤓🤓
reason: Wrong community
5 months ago
mod
Removed Post It is happened! Demo version of my game has been released!!!!
reason: Not relevant to this community. Please post this to a more relevant community.
6 months ago
mod
Removed Post Why Is Python So Popular in 2025?
reason: Duplicate, and this one had the least activity.
10 months ago
mod
Removed Post Don't Guess My Language
reason: Duplicate
10 months ago
mod
Appointed UlrikHD as a mod to the community Programming
1 year ago
mod
Removed Post Update Signal ASAP - Security vulnerability fix
reason: Not related to programming, please repost in a meme-related community instead
1 year ago
mod
Banned Jurxzy@lemmy.ml from the community Programming
reason: Spam
1 year ago
mod
Banned bad_news@lemmy.billiam.net from the community Programming
reason: bad faith troll
1 year ago
mod
Banned empty@mathstodon.xyz from the community Programming
reason: Spam
expires: 1 year ago
1 year ago
mod
Removed Post *Permanently Deleted*
reason: Spam
1 year ago
mod
Banned go $fsck yourself@lemmy.world from the community Programming
reason: Antagonistic toxic user, please communicate with others in a more considerate manner
expires: 1 year ago
1 year ago
mod
Banned go $fsck yourself@lemmy.world from the community Programming
reason: Antagonistic toxic user, please communicate with others in a more considerate manner
expires: 1 year ago
1 year ago
mod
Locked Post *Permanently Deleted*
1 year ago
mod
Unlocked Post *Permanently Deleted*
1 year ago
mod
Locked Post *Permanently Deleted*
1 year ago
mod
Unlocked Post *Permanently Deleted*
1 year ago
mod
Locked Post *Permanently Deleted*
1 year ago
mod
Banned cacheson@kbin.social from the community Programming
reason: user asked for purge
1 year ago
mod
Banned Stephhh@lemmy.kya.moe from the community Programming
reason: Ad hominem
1 year ago
mod
Banned Justice for Ukraine @endlesstalk.org from the community Programming
1 year ago
mod
Banned larryshannon@lonestarlemmy.mooo.com from the community Programming
reason: spam
1 year ago
mod
Removed Comment The official Python tutorial is excellently written and appropriate for complete novices. https://docs.python.org/3/tutorial/[Block Blast](https://blockblast-game.io/) by larryshannon@lonestarlemmy.mooo.com
reason: spam
1 year ago
mod
Locked Post would you help me with this
1 year ago
mod
Unlocked Post would you help me with this
1 year ago
mod
Locked Post would you help me with this
1 year ago
mod
Removed Comment Hacking from a viewpoint of testing: [How to break web software](https://www.amazon.co.uk/How-Break-Web-Software-Applications/dp/0321369440). It's quite old but the techniques are still valid. by MyNameIsRichard@lemmy.ml
reason: Please do not share resources to users who have stated intent to exploit services
1 year ago
mod
Banned segfault satan@lemmy.sdf.org from the community Programming
reason: Violation of our TOS 3.9
expires: 1 year ago
1 year ago
mod
Banned Adam431@lemm.ee from the community Programming
reason: spam user
1 year ago
mod
Banned mrsgreenpotato@discuss.tchncs.de from the community Programming
reason: spam user
1 year ago
mod
Banned Linkerbaan@lemmy.world from the community Programming
reason: misinformation and bad faith trolling
1 year ago
mod
Banned Linkerbaan@lemmy.world from the community Programming
reason: misinformation and bad faith trolling
1 year ago
mod
Banned Linkerbaan@lemmy.world from the community Programming
reason: misinformation and bad faith trolling
2 years ago
mod
Removed Comment Posting something like this — assuming you actually read the thing and found it to be valuable in some way — without any summary text whatsoever is just lazy af; it's a low quality effort, and you should feel bad about it. by WhatAmLemmy@lemmy.world
reason: toxic behavior
2 years ago
mod
Removed Comment https://linkwarden.app by WhatAmLemmy@lemmy.world
reason: spam
2 years ago
mod
Featured Post Programming.dev instance: Sponsors needed in community Programming
2 years ago
mod
Locked Post I spent a 2+ years and all my personal savings making this game (alone). I love survival games, but I also like cooking... so how about survival game with realistic cooking & eating animations?
2 years ago
mod
Removed Comment Gotcha, thanks. Kinda like Vim except with more autism lol. by ChubakPDP11+TakeWithGrainOfSalt
reason: use of derogatory slur
2 years ago
mod
Unlocked Post 'File > New > MAUI' and 'Finding your way around the Fediverse', Tue, Apr 9, 2024, 5:30 PM | Meetup (online)
2 years ago
mod
Locked Post 'File > New > MAUI' and 'Finding your way around the Fediverse', Tue, Apr 9, 2024, 5:30 PM | Meetup (online)
3 years ago
mod
Unfeatured Post [Ended] Community Content Vote in community Programming
3 years ago
mod
Featured Post [Ended] Community Content Vote in community Programming
3 years ago
mod
Appointed MaungaHikoi@lemmy.nz as a mod to the community Programming
3 years ago
mod
Appointed Ategon as a mod to the community Programming