#rust programmer
Text
I got most of the base of the CPU worked out on my NES emulator, using references on 6502 Assembly and Rust binary arithmetic to implement the 6502 opcodes. It was hard because originally I didn't update the flags or define the opcode functions in the right place, since the documentation I was following basically didn't say where to put them >_> But I wrote my tests at the bottom, and in the main.rs file I defined a function that prints out "Hello Rusty NES" if everything on the CPU passes, to show me it works! Today I am going to work a little on RAM management, but @emoryvalentine14 and I are going to kill some Roblox zombies soon, so it will be after a while :D It's going to be a while before the NES emulator goes live, but I plan to host it on itch.io for $2.99 along with some NES games I am personally deving in pure 6502 Assembly :| because I am a glutton for punishment.
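Here's a minimal sketch of what I mean (not my actual emulator code, and the names are made up): the flag update lives in a helper that every opcode handler calls, which is exactly the part I originally had in the wrong place.

```rust
// A minimal sketch, not my real emulator code: hypothetical names,
// just showing where the flag updates have to live.
struct Cpu {
    a: u8,      // accumulator
    status: u8, // processor status flags (NV-BDIZC)
    pc: u16,    // program counter
}

impl Cpu {
    fn new() -> Self {
        Cpu { a: 0, status: 0, pc: 0 }
    }

    // Every opcode that changes a register has to refresh these flags;
    // forgetting this is what broke my first attempt.
    fn update_zero_and_negative_flags(&mut self, value: u8) {
        if value == 0 {
            self.status |= 0b0000_0010; // set Z
        } else {
            self.status &= 0b1111_1101; // clear Z
        }
        if value & 0b1000_0000 != 0 {
            self.status |= 0b1000_0000; // set N
        } else {
            self.status &= 0b0111_1111; // clear N
        }
    }

    fn run(&mut self, program: &[u8]) {
        loop {
            let opcode = program[self.pc as usize];
            self.pc += 1;
            match opcode {
                // 0xA9: LDA immediate -- load the next byte into A.
                0xA9 => {
                    let value = program[self.pc as usize];
                    self.pc += 1;
                    self.a = value;
                    self.update_zero_and_negative_flags(self.a);
                }
                0x00 => return, // BRK ends the sketch
                _ => todo!("the other ~150 opcodes"),
            }
        }
    }
}

#[test]
fn lda_zero_sets_zero_flag() {
    let mut cpu = Cpu::new();
    cpu.run(&[0xA9, 0x00, 0x00]); // LDA #$00, then BRK
    assert_eq!(cpu.status & 0b0000_0010, 0b0000_0010);
}
```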
Also the Discord is now at 70+ members and we are still a very positive community! Loving it. If you want to join shoot me a comment or a message!
Alright folks, I am going to sign off! I'll post the next update on the NES soon.
#Nes emulator#emulator#NES#Nintendo#Nintendo Entertainment System#Nintendo SNES#Super Nintendo#game emulator#gaming#rust programming#rust language#rust lang#rust programmer#systems development#technology#programming#programmer#programmers#development#developer
7 notes
Text
Fuck it, I'm tired of C's bullshit. I'm gonna learn Zig.
29 notes
Text
i desperately WISH that reading the phrase “[x] is among the few who understands how to write C programs with effectively zero memory bugs” didn’t instantaneously put me through a demented minor-key magical girl transformation sequence in which i became a screeching thirteen-headed cockatrice, belching fire and ichor and rust macros
but UNFORTUNATELY FOR ME i’ve looked at these fuckers’ code, so,,,,
#look i find the Rust Evangelism Task Force vibe annoying too#they do not solve all problems and have done things that annoyed me and made my job harder#but that is a DROP IN THE BUCKET compared to every quote-unquote elite C programmer that has made software actively worse ahgleiahg
9 notes
Text
I'm gearing up for Advent of Code 2023 (while trying to be realistic about time constraints with self-care, work, and relationships)!
Who wants to do it with me? I'll mostly be coding in R (in base, tidyverse, and other random paradigms/package bundles), Python, Excel, and (maybe) Rust.
Let's do this work-life-programming balance. (I am a very rusty programmer since I mostly do statistical work; I have the aptitude for it, but I'm not very efficient at writing optimized solutions.) I'll post my solutions here: https://github.com/pritikadasgupta/adventofcode
#advent of code#adventofcode#programming#r language#r#rstudio#python#excel#rust#r programmer#r programming#statistician#dataviz#data scientist#work-life-programming balance
3 notes
Text
The real socks for
Crabs Everywhere Socks • $10.00
It's my pledge that in 2023 we are going to have crabs everywhere. On your devices, on your feet; there'll be a crab for anything and everybody.
75% Cotton, 21% Nylon, 4% Lycra
Crew Length
Fits Most Feet
Men’s Shoe Size 7 – 12
Women’s Shoe Size 6 – 11
9K notes
Text
This system has very strong opinions about programming languages it does not know how to write in.
#this is about Rust#we dislike it immensely#safety is the programmer's job#not the language's#the language's job is let you tell the computer what to do#if you fuck that up it's on you
1 note
Text
This Rust joke encapsulates the basic concept of karma through a simple program that interacts with the user by accepting input and providing output based on the given input. Here's how it aligns with the idea of karma, which often implies that the energy or actions one puts out into the world will come back to them in some form:
Input as Action: The function give() takes user input, representing an action or thought put out into the universe. This act of giving an input parallels the concept of performing an action or harboring a thought in real life.
Output as Reaction: The function get(value: String) then processes this input. If the input is "love," it returns "1," and if the input is "fear," it returns "0." For any other input, it simply returns the input itself. This mechanism is a direct metaphor for karma, where the nature of what you put out—represented here by the specific strings "love" or "fear"—determines what you get back.
Love and Fear as Metaphors: The choice of "love" and "fear" as inputs with specific outputs ("1" for love and "0" for fear) symbolizes the idea that positive actions or thoughts (love) lead to positive outcomes, while negative actions or thoughts (fear) lead to negative or less desirable outcomes. The binary nature of the outputs (1 and 0) can be seen as a nod to the binary outcomes in life: positive or negative, good or bad karma.
Looping Nature: The program's looping nature, prompting the user to continuously enter thoughts and see the karma returned, illustrates the ongoing cycle of actions and consequences, a core tenet of the karma concept. It suggests that for every action (input), there is a reaction (output), and this cycle continues indefinitely.
In essence, this Rust program is a playful representation of the karma concept, using code to illustrate the philosophical idea that the quality of what you put out into the world (your actions, thoughts, or intentions) directly influences what comes back to you.
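The post above describes the joke without quoting it, so here is a minimal reconstruction in Rust based purely on that description — give() and get(value: String) are the names given above, while the prompt wording is a guess:

```rust
use std::io::{self, Write};

// "Input as Action": read one line of user input, the thought
// put out into the universe.
fn give() -> String {
    print!("Enter a thought: ");
    io::stdout().flush().unwrap();
    let mut input = String::new();
    io::stdin().read_line(&mut input).unwrap();
    input.trim().to_string()
}

// "Output as Reaction": love comes back as "1", fear as "0",
// and anything else comes back exactly as you gave it.
fn get(value: String) -> String {
    match value.as_str() {
        "love" => "1".to_string(),
        "fear" => "0".to_string(),
        _ => value,
    }
}

fn main() {
    // The looping nature: the cycle of action and consequence
    // continues indefinitely.
    loop {
        let thought = give();
        println!("karma: {}", get(thought));
    }
}
```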
0 notes
Text
Sorry for lack of posting
As you all know I recently started the System Development Discord and it has taken off quite a bit. I think we're at nearly 30 members now! I have been very busy: yesterday my wife and I spent time with my Grandma and Grandpa, and then today I spent the day with my wife for our early Valentine's date! I got her some chocolates, a teddy bear, a rose, and a card, as well as a Freddy Fazbear's Pizzeria shirt and a prompt journal! Then we spent the day sculpting and hanging out, which was really nice! Tomorrow I am going to be developing a kernel in Rust with my friend Avi. We are all learning Rust for systems development and it's a lot of fun! I also broke my wisdom tooth in half on a peppermint, so I may be hurting by tomorrow :/ What is worse is that I swallowed half the tooth! But anyways, I am going to go lay down because I am tired, then I plan on getting up and learning more Rust! Also thanks to @xiacodes @xiabablog for being an awesome friend. I think she is going to be coding along with us tomorrow on the OS!! :D Goodnight all, stay toasty!
#systems development#rust language#rust#programming#programmer#programmers#rust programmer#rust programming#technology#valentines day#valentines#date night#fnaf#sculpting#art#artists#artist#discord#operating systems#operating system#coding#coders#discord server#tech industry
7 notes
Text
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2x2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low-value and very risk-tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk-tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk-tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals a tool that reduces the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
Text
Lots of people are talking in general terms, but to your specific point: if you are new to programming in general, I would stick to only one, and since you're taking a class that will use Python, learn Python. You will be able to pick up C# very quickly later.
If you know programming generally, sure, learn both languages at the same time. They're so different I don't imagine any confusion, especially because one is statically typed with semicolons and the other is (an absolute garbage language that I desperately want to abandon, but my industry adopted it) dynamically typed with enforced whitespace, so visually it will be easy to center yourself in one vs the other. At least that's true for me when I switch between them.
Question regarding languages: Python & C#
Is it possible to learn both languages at the same time? Or will it be easy to get confused between the two? I thought I would come to the lovely codeblr people who have a vast amount of experience for advice. C# is something I *want* to learn, but I *have* to learn Python for my course that starts in January... I just want to see if it is plausible to learn both at the same time or if it will mess up my learning if I try and learn both. I'd love to hear about your experiences and what your first languages were when you first started out in the world of coding!!
#i just really hate python#but don't let that influence you lots of people love it#and lots of programmers hate the language they use the most#probably why rust is so popular#no one uses it
47 notes
Text
C is now illegal by order of President Biden; all C programmers report to the nearest FEMA camp for your mandatory thigh-highs and estrogen injections as you begin your new life as a Rust developer.
441 notes
Text
Ad | Humble Bundle April 2024
Hi folks, here's a few bundles that some of you might be interested in this month.
For the aspiring programmers - the Code like a Pro bundle supports Girls Who Code.
If you lean more towards 3D modelling and design then the Blender Core Skills bundle has loads of resources on mesh modelling, rigging, shading and lighting. It also raises money for One Tree Planted.
I know a bunch of you love a good TTRPG, there's a solid Pathfinder Second Edition - Guns of Alkenstar Bundle available. A portion of the money goes towards Endometriosis UK - a charity very close to my heart.
147 notes
Text
so I'm reading Gankra's "Learn Rust With Entirely Too Many Linked Lists" and the introduction feels like
STOP USING LINKED LISTS
DATA ELEMENTS WERE NOT SUPPOSED TO BE GIVEN POINTERS
YEARS OF COMPUTER SCIENCE yet NO REAL-WORLD USE FOUND for using anything other than Vec
Want to add and remove elements from the front and back just for a laugh? We have a tool for that: it's called "VecDeque"
"It might take a long time to look at any element but I'll make it up with all the merges, inserts, and splits I'll be doing" - Statements dreamed up by the utterly Deranged
LOOK at what Functional Programmers have been demanding your Respect for all this time, with all the LISP machines & tape readers we built for them (This is REAL Computer Science, done by REAL Computer Scientists)
???????????
"Hello I would like element.next.next.next.next.next.next.next.next please"
They have played us for absolute fools
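In fairness to the meme, the tool it's yelling about is real and does exactly what it says; a tiny illustration:

```rust
use std::collections::VecDeque;

fn main() {
    // Add and remove elements from the front and back, just for a laugh.
    let mut deque: VecDeque<i32> = VecDeque::new();
    deque.push_back(1);  // [1]
    deque.push_back(2);  // [1, 2]
    deque.push_front(0); // [0, 1, 2]

    assert_eq!(deque.pop_front(), Some(0));
    assert_eq!(deque.pop_back(), Some(2));

    // Amortized O(1) at both ends, no .next.next.next required.
    println!("{:?}", deque); // [1]
}
```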
165 notes
Text
lmao (from this CTF writeup):
The final step, emitting the target language, which is nowadays often NOT C, is our greatest weakness in 2024. A new generation of engineers and systems folk have discovered the fruits of Chris Lattner's labor and staked their claim on today's software landscape. Unfortunately for reverse engineers, we continue to deal with the Cambrian explosion in binary diversity without commensurate improvements in tools. We eat shit reading worsening pseudo-C approximations of things that are not C. This problem will probably not get solved in the near future.
There is no market for a high-quality Rust decompiler. First, no one writes exploits or malware in languages like Rust or Haskell. Unlike C/C++/Obj-C, the Rust/Haskell/etc ecosystems are predominantly open-source, further decreasing the need for reverse engineering. Lastly, improved source control and the ready availability of managed enterprise services (i.e. GitHub) make first-party loss of source code much rarer nowadays. So like, no one really cares about decompiling Rust other than unfortunate CTF players.
Golang is a notable exception. Golang is like, the language for writing malware--great standard library, good cross-platform support, brain-dead easy concurrency, easy cross-compilation, fully static linking, and designed with junior programmers in mind. You could shit out a Golang SSH worm in like 200 LoC crushing carts and ketamine no problem. People worry about AGI Skynet hacking the Pentagon to trigger a nuclear holocaust but really it's more gonna be like eastern European dudes rippin' it with some hella gang weed ChatGPT ransomware. So maybe we'll get a good Golang decompiler first?
32 notes