#that's like. arbitrary code execution at this point
Text
So @kaiasky you asked for me to write something if I figure out more about how video game hacks work, and I did some more research.
At its most basic, if bytes are written on your machine, you can modify them. So if you have a binary on your machine, you can disassemble it, modify the assembly, reassemble it, and run it.
Except, wait, that sucks.
1. You have to know assembly well, which is miserable
2. It's really easy to brick a program that you do this to because of offsets
3. A lot of programs have checksums and other failsafes to detect direct modification
So that's not going to work very well.
Instead, you can ride along in the process that's executing the binary and execute your code there. For Windows machines, it seems like the easiest way is DLL injection.
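To make that concrete, here's a minimal sketch in C of the classic CreateRemoteThread + LoadLibrary injection pattern. The process ID and DLL path are placeholders you'd supply yourself, and a real loader would do far more error handling (and evasion) than this; it's only meant to show the shape of the technique.

// inject.c - minimal sketch of classic DLL injection on Windows.
// The target pid and the DLL path come from the command line; the DLL
// itself (your own code) is assumed to already exist on disk.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc < 3) {
        printf("usage: inject <pid> <full path to dll>\n");
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);
    const char *dll_path = argv[2];
    SIZE_T len = strlen(dll_path) + 1;

    // Open the target process with enough rights to allocate and write memory.
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc) {
        printf("couldn't open process %lu\n", pid);
        return 1;
    }

    // Allocate a buffer inside the target and copy the DLL's path into it.
    LPVOID remote = VirtualAllocEx(proc, NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(proc, remote, dll_path, len, NULL);

    // kernel32 is mapped at the same address in every process, so the address
    // of LoadLibraryA in *our* process is also valid inside the target.
    LPTHREAD_START_ROUTINE load = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

    // Start a thread in the target that calls LoadLibraryA(remote), which
    // pulls our DLL into its address space and runs the DLL's entry point.
    HANDLE thread = CreateRemoteThread(proc, NULL, 0, load, remote, 0, NULL);
    if (thread) {
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
    }
    CloseHandle(proc);
    return 0;
}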
Now we get to use C instead of assembly (thank god) and we have a lot more flexibility. We don't want to touch the underlying binary because of (2) and (3), but since we're in the same address space as the program, we can write to the addresses that the program accesses. If we discover the location of a variable, we can overwrite its value with whatever we want.
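For what that looks like in practice: from inside the injected DLL, changing a value you've located is just a pointer write. This is a sketch under the assumption that you've already worked out the value sits at a fixed offset from the game executable's base; the 0x1A2B3C offset and the idea that it holds the player's money are made up for illustration.

// Somewhere inside the injected DLL, after discovering the variable's offset.
#include <windows.h>
#include <stdint.h>

void set_money(int new_value) {
    // Compute the address relative to the module base instead of hardcoding an
    // absolute address, since ASLR can move the module around between runs.
    uintptr_t base = (uintptr_t)GetModuleHandleA(NULL);  // base of the game's EXE
    int *money = (int *)(base + 0x1A2B3C);               // hypothetical offset
    *money = new_value;  // same address space, so this is just an ordinary write
}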
We can abuse this further by messing with function pointers - if we can overwrite a value containing the location of another function within the binary, we can point it to our code instead, achieving arbitrary code execution. Yay! I think there are some countermeasures to this that cheat developers have to watch out for, but this is just a broad overview anyway.
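And redirecting a stored function pointer looks roughly like this, again with a made-up offset and signature. The one real wrinkle is that the page holding the pointer may be read-only, so you flip its protection with VirtualProtect before writing.

// Hooking a function pointer the game keeps in its own memory.
// The 0x40F000 offset and the take_damage signature are hypothetical.
#include <windows.h>
#include <stdint.h>

typedef void (*damage_fn)(int amount);
static damage_fn original_take_damage;

static void my_take_damage(int amount) {
    (void)amount;  // swallow the damage entirely...
    // ...or call original_take_damage(amount / 2) to just soften it.
}

void install_hook(void) {
    uintptr_t base = (uintptr_t)GetModuleHandleA(NULL);
    damage_fn *slot = (damage_fn *)(base + 0x40F000);  // where the pointer lives

    DWORD old_protect;
    VirtualProtect(slot, sizeof(*slot), PAGE_READWRITE, &old_protect);
    original_take_damage = *slot;  // keep the original so it can still be called
    *slot = my_take_damage;        // the game now calls our code instead
    VirtualProtect(slot, sizeof(*slot), old_protect, &old_protect);
}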
Okay, great, but how do we figure out what to modify? We could read the entire binary in assembly, but I think I'd rather kill myself. Instead we have to use some tool to spy on the program. I remember using Cheat Engine when I was little to cheat in Flash games, and it turns out it still works pretty well: if you can repeatedly change a value in-game, you can (usually) use the tool's scans to narrow down its memory address.
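Under the hood, that scan-change-rescan workflow is (roughly) just reading the other process's memory and filtering for matches. Here's a stripped-down sketch of a first-pass scan for a 4-byte value, assuming the process handle was opened with read access; a real scanner also handles other value types, other page protections, and the follow-up filtering scans.

// Very simplified "first scan": report every aligned 4-byte match in the target.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

void scan_for_int(HANDLE proc, int target) {
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = 0;

    // Walk the target's address space one memory region at a time.
    while (VirtualQueryEx(proc, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        // Only bother with committed, plain read-write pages (a real tool accepts more).
        if (mbi.State == MEM_COMMIT && mbi.Protect == PAGE_READWRITE) {
            unsigned char *buf = malloc(mbi.RegionSize);
            SIZE_T got = 0;
            if (buf && ReadProcessMemory(proc, mbi.BaseAddress, buf, mbi.RegionSize, &got)) {
                for (SIZE_T i = 0; i + sizeof(int) <= got; i += sizeof(int)) {
                    if (*(int *)(buf + i) == target)
                        printf("match at %p\n", (void *)((unsigned char *)mbi.BaseAddress + i));
                }
            }
            free(buf);
        }
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
}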
You can even trace back a pointer chain (ie, an attribute contained within a player object contained within a game object, but usually with many more layers) to its root and find a way to do location discovery entirely automatically, though this seems a little tricky sometimes.
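Once a tool has coughed up a pointer chain, resolving it from your injected code is just a series of dereferences with offsets. Everything below (the offsets, the idea that the chain ends at a health value) is invented for the example; real chains are whatever the pointer scan finds.

// Following a pointer chain of the form [[module + 0x1000] + 0x30] + 0x8.
#include <windows.h>
#include <stdint.h>

int *resolve_health(void) {
    uintptr_t addr = (uintptr_t)GetModuleHandleA(NULL) + 0x1000;  // static root address
    addr = *(uintptr_t *)addr;           // root -> game object
    addr = *(uintptr_t *)(addr + 0x30);  // game object -> player object
    return (int *)(addr + 0x8);          // player object -> health field
}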
Anyway. This is a very broad overview that raises as many questions as it answers for me, but I'm going to try playing around with some tools and seeing if I can get anywhere.
Note
I have a question, if a character can only be seen via glitches, would it work for a Ryu Number? Like, a glitch being able to make an unused character appear that could otherwise only be accessed by hacking (for instance, there's a bug in Earthbound that allows you to get to the Debug Menu, where Kirby is used to navigate said menus)
I have actually thought about this a bit ever since a glitch was discovered in 3rd Strike that allowed players to actually see the otherwise hidden Doraemon in one of the stages.
I'm inclined to say no, and vaguely gesture toward developer intention and the existence of arbitrary code execution, but I fully recognize that the former is only so relevant up to a certain point and is not infrequently in conflict with the player experience, and the latter can be well-defined for exclusion so as to still allow for less involved glitches. (I think.)
In short, I guess, but I would probably be compelled to include another route without glitched appearances.
Text
Why Should We Play Glitchy Games?
How many times has this happened to you? You boot up a new game you got day one, it has some glitches, and you aren't too fond of the graphics, so you stop playing it and don't really come back to it. You go back to whatever you were playing before, and whenever your friends talk about it with you, you remark on how you couldn't get past the way the graphics were. Some time passes, you decide to watch a video on it, and you realize you missed out.
I think we've all been there, and that's nothing to be ashamed of; lots of people judge books by their covers or just by the first few pages. But it doesn't always have to be this way. Even if a game is unpolished, has poor graphics, or is glitchy, you should stick with it, if only for the sake of seeing what's on the other side.
It's also easy to get caught up in the details when talking about video games. I've seen so much discourse about graphics and how powerful the consoles are that it's almost like the games themselves stop mattering at a certain point. It's become more of "how close can we get to a movie without just making a movie?" Granted, at multiple Game Awards shows, Geoff Keighley and other speakers have talked about hoping to see video games get even closer to being like movies. I'm all for advancements, but, and granted this is mostly about PlayStation and Microsoft's AAA output, these games all play so close to one another, and when that's not happening, PlayStation is remaking The Last of Us again.
So, what's the point? All these games look the same, and while there's depth to them, there's a lack of variety in the actual genres; so many of these games have become samey. So much of the conversation has become about graphics and less about the games themselves. The discussion, and the focus, should start to veer toward the actual content of these games, not just the performance and graphics.
Now, sometimes the contents of the games aren't completely polished, but I've seen so many stories of people playing these unpolished games and having a great time. A big example is Super Smash Bros. Melee, a game whose glitches and exploits, such as wavedashing and L-canceling, along with its unbalanced roster, have made it an enduring classic for two decades. Or Pokemon Red and Blue, games that, while a complete experience, are held together with duct tape and a dream, allowing for arbitrary code execution, where precisely crafted inputs rewrite the game while it's being played thanks to weaknesses in the game's code, which has kept new discoveries coming and kept the games being played for decades. So much so that people are still finding new techniques, and still debating the characters' competitive hierarchy, to this day. Along with games that haven't aged well or were unpolished, some games are made with glitches in mind, whether for comedy, like Goat Simulator, or because a glitch inspires a whole new concept. One of the most influential examples is Street Fighter II, where the designers found that attacks could be strung into each other before their animations finished, allowing the concept of combos to be born and essentially creating the basis for all modern fighting games.
Now, some may say that playing unpolished games leaves you with a frustrating experience, with constant crashes and choppy, inconsistent frame rates. An example I can think of is the Steam version of Fallout: New Vegas, a game widely considered one of the best in the Fallout series, if not its genre. But because it was developed for the PS3 and Xbox 360 first rather than PC, because its development lasted only 18 months, and because of how old it is, the Steam version is riddled with bugs and frequently crashes. Despite this, the game still sits at an overall review average of "Very Positive" on its store page. Why is that? Despite all the issues, players still love the game and found ways to work around the crashes, whether by toughing it out or by modding the game to tighten up the experience and stop the crashes from happening. This even happens now with online games like Fortnite and Fallout 76: for as popular as they are, new additions always come with glitches that get fixed over time, and they release with those glitches. People still play them. Even a game that came out three days ago, The Legend of Zelda: Echoes of Wisdom, has framerate drops and a strange lack of anti-aliasing, but guess what? I'm already halfway through it, and I believe it'll get fixed in time, whether through a patch or because it was meant to run better on whatever the Switch successor turns out to be. The point is, even if these games have these problems, a lot of the people who pull through usually end up liking them.
Of course, modding like that isn't really an option on consoles, but the console versions usually don't have issues that severe.
All in all, even if a game isn't polished or has glitches, it may still be worth playing. That doesn't mean you have to excuse extremely poor performance, but you might find that you can still get enjoyment out of these games, and that they can still be worth your time.
Text
Undead Unluck ch.226 thoughts
[Closing the Book]
(Topics: character analysis - Apocalypse/Juiz/Julia, thematic analysis - Rules/Unjustice, predictions - Unjustice vs. the Master Rules)
Apocalypse
I've heard of books making people cry, but this is ridiculous!
Ever since Juiz said she considered Apocalypse a member of the Union, I figured he'd get a big moment sooner or later, but I didn't think he'd die! I didn't even think he could die! I guess it makes sense, I just never really thought about it...
This definitely helps build on an idea I lightly touched on last week, that someone had to die to raise the stakes of this arc and make it feel less like it was a flawless, easy victory, but I stopped short at Juiz, who was already dead, not returning. I should have considered how much more impactful it would be for someone to die in real-time than to simply realize that someone else just wasn't coming back
With that in mind, Apocalypse is unfortunately the perfect character to do that with. He's been built up for a long time now to be less of an antagonistic force than he presented himself as, but ultimately, what would his place in the world be once the game concluded? Would Julia/Juiz just carry him around everywhere she went? Would he be condemned to sit in place in the Roundtable Room forevermore without any real purpose? Would he go back to sleep and be buried again? Or would he just fade away with the Gods and UMA?
This way, that question is resolved and Apocalypse is given a fitting send-off, where we remember him as more of a hardass taskmaster than a petulant villain. This little redemption concludes his role as quest giver while also giving closure to his friendship with Juiz, fully rounding out his character and making him much more memorable than his initial role as a talking MacGuffin
Furthermore, it also provides the perfect impetus for Julia to awaken Unjustice
Rules Are Made to be Followed
I've always wondered why Unjustice was one of the first two Negators to manifest. Undead makes sense, the whole point of the game is to go on for eternity, so having someone who can be present through the whole thing makes sense, but Unjustice always seemed like an arbitrary choice beyond just being a cool character concept
I now realize though that I am a fool, and Unjustice is the perfect Negator to be present in a game all about Rules. Justice, morality, chivalry - codes of conduct, laws, rules meant to guide how the individual and society interact, to maintain order and balance. Of course the Rules that govern the world, that dictate the very definition of right and wrong, are opposed by the vessel of Unjustice, who questions that definition
While UMA Justice is one of the current enemies, he hasn't at all been established as any kind of foil or parallel to Unjustice. Apocalypse, however, is quite literally the rulebook for the Gods' game, the judge, commissioner and referee for how the quests are conducted. UMA Justice likely upholds a sense of duty and chivalry in God's favor, but Apocalypse is, despite all appearances, a neutral party. He acts indignant when Quests are completed and becomes upset when they're skipped, but so long as the rules of the game are being followed, he accepts the outcome, as that's how it's meant to be
For a God or even a Master Rule to interfere with the proceedings of the game, to prevent the players from participating, to outright say "nevermind the rules of the game," clearly goes against the very purpose of Apocalypse's being. He was trying to fulfill the purpose he was created for, and right at the end he was told to betray that purpose under threat of death. Soul's retaliation against Apocalypse was, without exaggeration, an execution for the crime of staying true to the rules. In other words...
An Unjust death
Whatever Juiz's tragedy was, whatever Julia sees in Apocalypse's memories, it's plenty clear that Julia already understands the pain that Apocalypse felt being manipulated as he was, and the injustice that his death implies. The memories she's about to see will simply give her context and understanding, but the death of a friend is the necessary tragedy to trigger Unjustice's manifestation
Even more fitting, then, that this death was the final straw in a long line of injustices
It's Not Cheating When I Do It
"You're a UMA, and you can't even follow the Rules?"
"You're going to let [Sick] escape? Why are you interfering in the Sacred Quests?!" "[Because] Undead is boring."
"I'm calling the shots here. If you won't do what I ask, I'll just make you do it."
The "Rules" have been interfering, lying and cheating from the beginning. Any opportunity they get, they find a loophole to take advantage of a situation to get what they want and leave anyone and everyone else out to dry. The "game" has only ever mattered to Apocalypse; the Negators wanted out no matter what, and the Rules wanted to win no matter what. The Negators had the excuse of being made to defy the Rules, but the Rules themselves were always secretly designed to be able to bend in whatever way they needed to justify that they're always right
With that in mind, the role of the Negators isn't to demonstrate that Rules aren't necessary, but that the Rules as they exist are fundamentally wrong. Rules and order are necessary, even Negators have to follow rules, but the purpose of Rules needs to be the maintenance of order, not the retroactive justification of means. Rules are supposed to protect the ruled, not empower the rulers
This entire series, the entire power system, is an explicit analysis, deconstruction and critique of that aspect of our society. I'm not politically minded enough to suggest that UU proposes some kind of anarchist government system, but philosophically it certainly seems to advocate for a greater balance between freedom and order (another parallel to One Piece, I might add!!!). The inclusion of Unjustice is, in and of itself, a direct refutation of the current prevailing definition of justice, one that calls not for justice as a whole to be discarded, but examined and reconstructed
I think in a way this is reflected in Juiz's narrative as well
The Times Are A-Changing
Juiz spent eons developing her specific vision of justice, but in the end it never amounted to anything in its own time. The longer she maintained that vision, the more she lost sight of it, becoming entrenched in a distorted and arguably corrupted version of it until it finally resulted in the people she thought she could trust most to see her vision through betraying her for their own, incompatible visions. The things she was willing to do to make her vision come true ran counter to the original vision's purpose, and it was only when she found a new vision in someone else that she was able to finally let go of her old ways, embrace change, and rest
Julia, a young and naive version of Juiz, represents the entrustment of Juiz's justice with Fuuko, and the faith that Fuuko's vision of justice would create the legacy that Juiz had truly strived for since the beginning. No longer carrying the weight of the world on her own and insisting that her way is best, Juiz's new, refreshed self acts to ease the burden of another and believes in visions beyond her own
In L100, Juiz's inability to see from the perspective of others caused her to misread both Victor and Billy, costing her her most valuable ally and her arm respectively. Juiz believed that Victor wanted to die and couldn't see that he wanted her to be happy; Juiz believed that Billy wanted to betray the Union and couldn't see that it pained him to make enemies of them. Ostensibly she had faith in him, hence why Billy couldn't use Unjustice, but her faith was that he was trying to save the world, not that he was still an ally, and she paid dearly for that misapprehension
Julia did not make that same mistake
Julia believed in Apocalypse's justice, in his commitment to the game and desire to make the Union strong enough to create a truly equal clash between Sun and Luna's supposed ideologies, so while the same arm that Juiz lost was damaged, Julia was otherwise unharmed, and managed to bring her "betrayer" back to his senses and her side
In other words, everything that Juiz lost because of her justice, Julia was able to retain and regain on her behalf. Thanks to Fuuko's guidance, Julia became an embodiment of Juiz's redemption, and in turn became an agent of Apocalypse's
The question now, then, is what will be the outcome of Julia's newfound vision of justice
Rules Are Made to be Broken
It's clear that the idea of Julia's awakening is to turn the tide against Sun and the Master Rules all at once with Unjustice, but is it that simple? In the past, Unjustice's effect on the Master Rules was to prompt them to self-destruct, but like I said last week, would that be satisfying?
Soul was upset with Luna's interference before, but now he's going so far as to say the game doesn't matter. Is he simply accepting the situation and trying to win with the hand he's been dealt, or is there something more going on here? Is Soul being influenced by Luna the same way that he's influencing Apocalypse?
And if that's the case, are all of the Master Rules being influenced? Do any of them want the fight to happen this way? Luck clearly wants the Union to use Remember to make things more fair, but is that because Luck thinks it would be more fun, or is it because that little bit of encouragement is the best that Luck can do to defy Luna's will?
When Unjustice fully awakens, will history repeat with the Master Rules removing themselves from the equation, or will Julia, with her fresh eyes and new take on Unjustice's definition, free the Master Rules from their unjust masters and create the aforementioned balance between freedom and order?
As things stand, it doesn't seem like Tozuka intends to delve into the individual fights between the Union and Master Rules as I initially projected. Julia's role as the new Unjustice is absolutely going to shape what happens next, for better or worse, and in my opinion, forcing a team-up would be far more interesting than just singlehandedly ending all of the other fights. This would allow us to get the moments we were hoping to see from the match-ups while both keeping things fresh and reinforcing the themes of the story
Of course, it's also possible that Soul's ability will prevent Julia from taking out everyone else at once and she'll need to focus on him alone, or she really will clear the entire battlefield save for the Gods. Either way, with such a strong chapter like this, I'm more assured than ever that Tozuka knows exactly what he's doing
If nothing else, I will always have faith in an author who cries when drawing the death of one of their characters. Oda is on record as crying at Merry and Ace's deaths, so Tozuka doing the same for Apocalypse very clearly demonstrates the love he feels for his cast and story. If that doesn't convince readers that this is the story Tozuka wants to tell, I don't know what will
Until next time, let's enjoy life!
Text
Flash Was Killed Because It Was Objectively Dangerous
I get it, I get the Flash nostalgia and the fondness for old Flash games. I was big on Neopets before they decided to ruin the art and make all the pets samey paper dolls to play dressup with (completely ruining the point of the far more expensive "redraw" colors like Mutant and Faerie and Desert). I have fond memories of Newgrounds games and I even managed to take a class for a semester in high school where I could learn flash.
But I also remember how terrible it was. And you should too.
Leaving aside all of the issues involving performance and inaccessibility (such as being easily broken by bog-standard browser actions like the back button, and its ability to modify web code AND OS code in real time likely broke a lot of accessibility tech too), Flash was legitimately one of the most dangerous web technologies for the end user. An end-user is you, or more specifically back then, child-you.
According to Wikipedia and its sources, Flash Player has over a thousand known, listed vulnerabilities, and more than 800 of them lead to arbitrary code execution.
What is arbitrary code execution? It's when someone can run any commands they want on a machine or in a program that never intended to allow it. A fun way to see this is in this infamous Pokemon tool-assisted speedrun where they manage to get an SNES to show the host's Twitch chat in real time. It's not so fun, though, when it's someone stealing all the files on your computer, grabbing your credentials so they can clean out your Neopets account (yes, really, it was a pretty common concern at the time), and other nefarious work. There was also a time when it allowed people to spy on you with your webcam and microphone.
Oh, and on top of all of this, Flash had its own "flash cookies," which could not be cleared by ordinary means and thus could be used to track users indefinitely, at least until Adobe slapped a bandaid over it by introducing yet another settings screen an ordinary person wouldn't know to use. (I assume this is how the infamous Neopets "cookie grabbers" worked to get into your account; this is mainly what I remember about using Flash back in the early 2000s lol.) So it wasn't just a "stranger taking over your machine" concern, but a bog-standard privacy concern too, arguably a precursor to our current panopticon internet landscape, where greedy websites would track you because they could and maybe get some money out of it, facilitated by this technology.
When Apple decided to block it, it wasn't out of greed; Steve Jobs cited its abysmal performance and security record, among other issues such as an inherent lack of touchscreen support, and Apple cited specific vulnerability use-cases when blocking specific versions before they nuked it entirely. When Mozilla, who makes Firefox, decided to block it, it's not like they would've gotten money out of doing so, or by offering an alternative; they did so because it is fucking dangerous.
Your ire and nostalgia are misplaced. Flash was not killed by our current shitty web practices that ruin unique spaces and fun games. Flash was killed because both Macromedia (its original developer) and Adobe were incapable of making it safe, if that was even possible, and it was killed after third parties, in an unprecedented gesture, collectively threw their hands up and said enough.
Well, that and HTML5 being developed and becoming more widespread, able to do everything Flash could do without being a pox on technology. One could argue that you should bemoan the lack of Flash-to-HTML5 conversion efforts, but that requires asking a lot of effort of people who would have to do that shit for free... and if they have to run Flash to do so, they'd be opening themselves up to some of the nastiest exploits on the internet.
Nostalgia is a fucking liar. The games themselves I think are worth having nostalgia over (look, I still find myself pining for that one bullet hell Neopets made and Hannah and the Pirate Caves), but Flash itself deserves none of that, and absolutely deserved to be put in the fucking ground. You're blaming the wrong causes. It was terrible.
(specifics and sources found via its wikipedia page, which has a lot more than is mentioned here. and also my own opinions and experiences back then. lol)
#flash#nostalgia really is a liar#don't trust it#technology#yet another instance of my unfettered autism#adobe flash#macromedia flash#the old web#I was there gandalf three thousand years ago lmao#personal context: I am now a software QA that tests web apps#and when I was a child I was absolutely a neopets addict and am on Subeta TO THIS DAY#I learned HTML and CSS when I was 12#largely to spruce up my Neopets profile#I have been on the internet A While now#(I understand how ironic it is given that my tumblr layout is kind of shit; I will fix it soon)
Note
(I was reading the Bulbapedia page for Bad Eggs and
Oh my god. Arbitrary Code Execution. Ace.)
[YEAH OKAY SO. THATS SOMETHING WE THINK ABOUT A LOT ACTUALLY
would you believe me if i told you that wasnt originally on purpose??? i dont remember the original reasoning other than "Ace Maple is a nice sounding name"
we've had ace around as a concept for. years now? they're OLD but have always been "pokemon processor specializing in 'anomalies.'" (fun fact anomalies originally moreso meant Pokepastas and evolved into being more about Glitches!) (this is an excuse to point at old art- the very first concept we have of ace! they've changed so much yet so little lol)

all things considered though. this coincidence has only become more and more fitting for them as they developed more. we're excited for you guys to get to know them more! ^^
we're really glad people have been enjoying the original aspects of this story, and not just the more familiar pokepasta stuff <:) we all really hope you'll continue to enjoy things like this to come!]
#ik this was a bit of a tangent but we're just very excited. and we think about ace a lot HAFSJJGD#mn QnA
Text
I wrote a quine, without strings, in a calculator
Okay, so I should probably clarify some things. The calculator in question (dc) is more of a "calculating tool": it's a command line tool that's built into most Linux distributions. I should also clarify "without strings", because dc itself does support strings, and I do actually use strings; however, I do not use string literals (I'll explain that more later), and I only use strings that are at most 1 character long.
So first of all, why did I decide to do this? Well, it all started when I found a neat quine for dc:
[91Pn[dx]93Pn]dx
If you're curious about how this works, and what I turned it into, it'll be under the cut. More technical people can skip or skim the first text block; after that is when it gets interesting.
So first of all, what is a quine? A quine is a computer program that outputs its own source code. This is easier said than done; the major problem is one of information. Executing source code normally means a lot of code for a little output, but for a quine you want the exact same amount of code and output.

Next, let's explain dc "code" itself, and then this example. Dc uses reverse Polish notation, and is stack-based and arbitrary precision. The nerds reading this already understand all of that; for everybody else's benefit, let's start at the beginning. Reverse Polish notation means that what you'd normally write as 1+1 (infix notation) is instead written as 1 1 +. This seems weird, but for computers it makes a lot of sense: you need to tell it the numbers first, and then what you want to do with them. Arbitrary precision is quite easy to explain: it means dc can handle numbers as big, or as small, or with as many decimal places as you want, it will just get slower the more complex things get. Most calculators are fixed precision; have you ever done a calculation so large you get "Infinity" out the other end? That just means the calculator can't handle a bigger number and wants to tell you that in an easy to understand way: big number = infinity.

Now, as for stack based, you can think of a stack a bit like a pile of stuff: if you take something off, you're probably taking it off the top, and if you put something on, you're probably also putting it on top. So here you can imagine a tower of numbers. When I write 1 1 +, what I'm actually doing is throwing 1 onto the tower, twice, and then the + symbol says "hey, take 2 numbers off the top, add them, and throw the result back on." So the stack will look like: 1, then 1 1, then during the add it has nothing, then it has a 2.

I'm going to start speeding up a bit here. Most of dc works this way: you have commands that deal with the stack itself, commands that do maths, and commands that do "side things". Most* of these are 1 letter long. For example, what if I want to write the 1+1 example a little differently? I could do 1d+: this puts 1 on the stack (the pile of numbers), then duplicates it, so you have two 1s now, and then adds those. Simple enough. Let's move onto something a little more complex, let's multiply. If I take 10 10 *, well, I get 100 on the stack, like you may expect, but this isn't output yet. We can print it with p, and sure enough we see the 100. I can print the entire stack with f, which is just 100 too for now. I can print it slightly differently with n, which I'll get into later, or I can print with P, which gives... "d". What happened there? Well, you see, d is character 100 in ASCII. If you don't know what exactly ASCII is, don't worry, just think of it as a big list of letters with corresponding numbers. And the final piece of knowledge here will be: what is a string? Well, it's basically just some text, like this post! Although normally a lot shorter, and without all the fancy formatting. Now, with all that out of the way, how does the quine I started with actually work?
From here it's going to get more technical; if you're lost, don't worry, it will get even more technical later :). So, in dc you make a string with [text], and if we look at the example again, pasted here for your convenience
[91Pn[dx]93Pn]dx
it makes one long string at the start. This string goes onto the stack and then gets duplicated, so it's on the stack twice; then it's executed as a macro. In technical language this is really just an eval; in less technical language, it just means "take that text and treat it like more commands". So, as you may see, it starts with 91P: 91 is the ASCII character code for [, which then gets printed out, and not coincidentally, that's the start of the program itself. Now, the "n" that comes afterwards is, as I said earlier, a special type of print, meaning print without a newline (P doesn't use newlines either), which means we can keep printing without having to worry about everything ending up on separate lines. And what is it printing? Whatever's on top of the stack, and oh look, it's the copy of the entire string, which, once again not coincidentally, is the entire inside of the brackets. So now we've already printed out the majority of the program. Next, dx is thrown on the stack, which as you may notice is the ending of the program, but we won't print it yet; we'll first print 93 as a character, which is "]", and then print dx, and this completes the quine. The output is now exactly the same as the input.

Now, I found this some time ago and uncovered it again in my command history. It's interesting, sure, but you may notice it's not very... complicated. The majority of the program is just stored as a string, so it already has access to 90% of itself from the start, and just has to do some extra odd jobs to become a full quine. I wanted to make this worse. I started modifying it, doing some odd things which I won't go into; I wanted to remove the numbers, replacing them entirely with calculations from numbers I already have access to, like the length of a string. That wasn't so hard, but then I hit on what this post is about: "can I make this without using string literals?"
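As an aside, the same shape shows up in plenty of other languages. Here's a little C version of the trick (not part of the dc work, just an illustration of the structure): the whole body lives in one string, and printf fills in the quote characters (34) and the newlines (10) the same way 91P and 93P fill in the brackets above. No comments in this one, since they'd have to reproduce themselves too.

#include <stdio.h>
char*s="#include <stdio.h>%cchar*s=%c%s%c;%cint main(){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(){printf(s,10,34,s,34,10,10);return 0;}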
Can I make this without using string literals?
Yes, I can! And it took a whole day. I'll start by explaining what a string literal is, but this will largely be the end of my explaining; from here it's about to get so technical, and I don't want to spend all day explaining things and make this post even longer than it's already going to be.

A string literal is basically just the [text] you saw earlier: making a string by just writing out the string. In dc there's only one other way to make a string, the "a" command, which converts a number into a 1-character string, using the number as an ASCII character code. Strings in dc are immutable: you can only print them, execute them, and move them around with the usual stack operations. You cannot concatenate, you cannot modify them in any way. The only other things you can do with a string are grab the first character or count the characters, but as I just explained, our only way to make strings creates a 1-character string which cannot be extended, so the first character is just the entire thing, and the length is always 1; neither of these is useful to us.

So, now that we understand what the restriction of no string literals really means (there are more knock-on restrictions I'll bring up later), let's get into the meat of it: how I did it.
So I've just discussed the way I'll be outputting the text (this quine will need text, since all the outputting commands are text!): with the "a" command and the single-character strings it produces. Let's now figure out some more restrictions. Any programmers reading this are going to be horrified by what I'm about to say: if I remove string literals, dc is no longer Turing complete. I am trying to write a quine in a language (subset) that is not Turing complete and can only output 1 character at a time**.

You can't loop in dc, but you can recurse, with macros, which are effectively just evaling a string. You can recurse because macros still operate on the main stack, registers, arrays, etc; they can't be passed anything or return anything, but this doesn't matter. Except I cannot do this, because if I only have 1-character strings via "a", then I can't create a macro that does useful work and executes something, since that would require more than 1 command in it. So I am limited to only linear execution***.

Now let's get into the architecture of this quine, and finally address all these asterisks, since they're about to be relevant. I started with a lot of ideas for how I'd architect this, which I call, very creatively, by their command structure: dScax/dSax, rotate-based execution, all-at-once stack flipping, or the worst of them all, LdzRz1-RSax (this one is just an extension of rotate-based execution). I won't bother explaining these, since they're all failed ideas, although if anybody is really curious I might explain them some other time. For now, I'll focus on the one that worked: K1+dk: ; ;ax, or if you really want to shoehorn in a name, Kdkax execution. Anybody intimately familiar with dc will probably be going "what the fuck are you doing", and rightly so, so now let's finally address the asterisks and get into what Kdkax execution actually means, and how I used it.
*"Most commands are 1 character long, but there are exceptions, S, L, s, l, :, ; and comparisons, only : and ; are relevant here, so I won't bother with the rest, although some of the previous architectures used S and L as you may have seen. : and ; are the array operations, there are 256 arrays in dc, each one named after a character, if I want to store into array "a" I will write :a, a 2 character sequence, same for loading from array "a" ;a, I'll get into exactly how these work later **I can only output 1 character at a time with p, P, and n, but f can output multiple characters, the only catch being it puts a newline between each element of the stack, and because I can only put 1 character into each stack element, it's a newline between each character for me (except for numbers). I'll get into what this means exactly later ***I can do non-linear execution, and in fact, it was required to make this work, but I can only do this via single character macros, which is, quite the restriction to put it lightly
So I feel like I've been dancing around it now: what does my quine actually look like? Well, I wanted to keep things similar to the original, where I write a program, store it, then output it verbatim, with some cleanup work. However, I can't store the program as strings, or even characters; I instead need to store it as numbers, and the easiest way to do that is to store it as the character codes for dc commands. So if I want to execute my 1d+ example from before, I instead store it as 49 100 43, which, when you convert the numbers back to characters and then execute them in sequence, does the same thing, except I can store them, which means I can output them again without needing to re-create them. This will come in handy later.

So, how do I execute them? Well, ax is the sequence that really matters here, and it's something all my architectures have in common: it converts a number to a character, then executes it, in that order. Not so hard, except I'm not storing them anymore. If you're familiar with dc, you might come across my first idea, dScax, which, for reasons you will understand later, became dSax. This comes close to working; it does store the numbers in a register and execute them, but it didn't really end up working so well.

I think the next most important thing to discuss, though, is how I'm outputting. As I mentioned earlier, "f" will be my best friend: it outputs the entire stack, and it's basically the whole reason this quine is possible. It's my only way of outputting more characters than the program itself takes up, since I can't loop or recurse, and f is the only command that outputs more than 1 stack element at once. It is my ticket to outputting more than I'm inputting, and thereby "catching up" on all the characters "wasted" on setup work.

Now, as I explained earlier, f prints a newline between each stack element, and I can only create 1-character stack elements. Because in a quine the output must equal the input, the input must also equal the output, and because I just discovered an outputting quirk, my input must also match this quirk if I want this to be a quine. So my input is limited to 1 character, or 1 number, per line, since this is the layout my stack will take, and therefore the layout of my output.

So what does this actually mean? I originally thought I couldn't use arrays at all, but this isn't true: the array operations are multi-character sequences, yes, but it turns out there actually are multiple characters per line, because there's also the linefeed character. And since there is an array per ASCII character, I am simply going to store everything in "array linefeed"! So now, with all of this in mind, what does the program actually look like?
Let's take a really simple example, even simpler than earlier: let's simply store 1 and then print it. This seems simple enough, 1p does it fine, but let's convert it to my format. It's going to get quite long already, so to keep it from getting even longer, I'll use spaces instead of newlines; just keep in mind they're newlines in the actual program:
112 49 0 k K 1 + d k : K 1 + d k : 0 k K 1 + d k ; K 1 + d k ; 0 k K 1 + d k ; a x K 1 + d k ; a x
Now, what the fuck is going on here? First of all, I took "1p" and converted both characters into their character codes, "49 112", and then flipped them backwards (dw about it). Then I run them through the Kdkax architecture. What happens is: I initialise the decimal points of precision to 0, then I increment it and put it back, but keep a copy, and then run the array store. Keep in mind this is storing in array linefeed, but what and where is it storing? Its index is the copy of the decimal points of precision I just made, and the data it's storing at that index is whatever comes before that on the stack, which, not coincidentally, is 49, the character code for the digit "1". Then I do the same process again, but this time the decimal points of precision is 1, not 0, and the stack is 1 shorter, so now I store 112 (the character code for p) in index 2 of array linefeed. You may notice the array now looks exactly the same as the original program I wanted to run, but in character code form: it is effectively storing "1p", but as numbers in an array instead of characters in a string.

I then reset the precision with 0k and start again, this time with the load command, which loads everything back out, except now flipped. The stack originally read 49 112, since that's the order I put them on (the top is 49, the last thing I put on), but after putting them into the array and taking them back out, I'm now putting 112 on last instead, so the stack reads 112 49, which happens to be the exact start of the code. This will be important later. For now, the important part is that the numbers are still in the array; taking them out just makes a copy. So this time I take them out again, but rather than just storing them, I convert each to a character and then execute it: 49 -> 1 -> 1 on the stack, 112 -> p -> print the stack, and I get 1 printed out with the final x. This may not seem very significant, but this is how everything is going to be done from here on out.
So, what do I do next? Well, now it's time to start on the quine itself. You may have noticed in the last example that I mentioned how, at one point, the stack exactly resembles the program itself, or at least the start of it. This is hopefully suspicious to you. So now you may wonder: what if my program starts with "f" to print out the entire stack? Well, I get all the numbers back, i.e. I get the start of the file printed out, and this will happen no matter how many numbers (commands) I include. Now we're getting somewhere: if I write fc at the start of my program (converted into character codes and then newline-separated), then include enough copies of the whole Kdkax stuff to actually store, load, and execute it, then I can execute whatever I want, and I'll get back everything except the Kdkax stuff itself. Awesome! So now we come onto: how do I get back the "Kdkax stuff", and more importantly, what are my limitations executing things like this? Can I just do anything?
Well, put simply, no. I cannot use multi-character sequences, and this time I actually can't, because everything is being executed as single-character macros; I don't have a newline to save me, and I just get an error back. So, okay, that's disappointing. This multi-character-sequence rule means I also can't input numbers bigger than 1 digit, because remember, the numbers get converted into characters and then executed. Luckily, executing a number just means throwing it on the stack, so I'm good for single-digit numbers. Then, in terms of math (I know, this is a post about a calculator and only now is the maths starting), I can't do anything that produces decimals, since the digits of precision are constantly being toyed with, and I can't use the digits of precision as a storage method either, because they're in use. I can actually use the main stack though! It's thankfully left untouched (through a lot of effort), so I'm fine on that front. Other multi-character sequences include negative numbers, strings (so I can't cheese it, even here), and conditionals.
It was somewhere around here that I started to rely on a Python script I wrote for some of the earlier testing, and I modified it for this new Kdkax architecture when I was confident this was the way forwards. It converts each character into a character code, throws that at the start, and then throws in as many copies of the store, load, and execute logic as I need to execute the entire thing afterwards. This lets me feed in (mostly) normal dc as the input, just keeping in mind that any multi-character sequences will be split up.

So now I can really get going, and I'll speed up from here. Effectively, what I need to do is write a dc program that can output "0 k", then "K 1 + d k :" repeated as many times as there are characters in my program, then "0 k" again, then "K 1 + d k ;" repeated just as many times, then "0 k" again, then "K 1 + d k ; a x" also repeated just as many times, without using strings, multi-character sequences, loops, branches, recursion, or any non-integer maths, and with a newline instead of a space in every sequence above. Doable.

The program starts with fc, like I mentioned; this prints out all the numbers at the start and leaves us with a clean stack. I'll explain in detail how I output the "0 k" at the start, and leave the rest as an exercise to the reader. I want to do this by printing the entire stack, so I want to put it on backwards, k first. k is character code 107 in decimal, and I can't input that directly, because I can't do anything other than single-digit numbers, so maths it is. Here I abuse the O command, which loads the output base, which is 10 by default: I write "OO*7+a", which is effectively character((10*10)+7) written in a more normal syntax. This creates "k" on the stack, and then I can move on to 0, for which I just write "0", since a number puts itself on the stack; no need to create it via a character code, I can just throw it on there. Keep in mind this will all get converted to 79 79 42 55 43 97 48, but the Python script handles that for me, and I don't need to think about it.

The stack now reads "0 k" and I can output it with f and clear the stack. I then do the same deal for "K 1 + d k :", the next "0 k", and "K 1 + d k ;", but here I do something a little different. Because I want to output "K 1 + d k ; a x" next (after the "0 k" again), I don't clear the stack after outputting "K 1 + d k ;"; instead, I put "a x" on the stack and then use the rotate-stack commands to "slot it into place" at the end. This is a neat trick that saves some extra effort; it makes printing the "0 k" in between more difficult, but I won't get into that. For now, the important part is that the output of my program now looks something like this: "(copy of input numbers) 0 k K 1 + d k : 0 k K 1 + d k ; 0 k K 1 + d k ; a x". This is amazing; this would be the correct output if my program were only 1 character long at this point. Now, keep in mind I'm writing non-chronologically, so my program never actually looked like this, but if you're following along at home you should have this at this point:
fcOO*7+a0fcaO5*8+aOO*7+aOO*aO4*3+aO4*9+a355**afcOO*7+aOO*aO4*3+aO4*9+a355**af0nOanOO*7+anOanOO*2O*+aOO*3-a08-R08-Rf
Definitely longer than 1 character. You might think at this point that it's just a matter of spamming "f" until you get there, but unfortunately, you'll never get there: every extra "f" you add requires an extra copy of the store, load, execute block in the program, so you're outpaced 3 to 1. So what do you do about this? You print 4 at once! I want the stack to look like "K 1 + d k : K 1 + d k : K 1 + d k : K 1 + d k :", and similarly for the other steps, and then I can spam f with greater efficiency! This was somewhat trivial for the first 2, but for the ax, because I'm using the rotate to push it onto the end, I need to do this 4 times too, with different rotate widths; not too hard. And now I can finally get there, but how many times do I spam f? Until my program is exactly 3/4s printing on repeat, which makes sense if you think about it. And below is, finally, the program I ended up with:
fcOO*7+a0fcO5*8+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*8+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*8+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*8+aOO*7+aOO*aO4*3+aO4*9+a355**affffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffcOO7+a0fcO5*9+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*9+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*9+aOO*7+aOO*aO4*3+aO4*9+a355**aO5*9+aOO*7+aOO*aO4*3+aO4*9+a355**affffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0nOanOO*7+anOanOO*2O*+aOO*3-a08-R08-ROO*2O*+aOO*3-a082-R082-ROO*2O*+aOO*3-a083-R083-ROO*2O*+aOO*3-a084-R084*-Rffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
I say finally, but this is actually pre-python script! The final program I actually ended up with will instead be included in a reblog, because it really needs its own cut. But anyway, this was how I wrote a quine, for a calculator, without using string literals.
#programming#quine#linux#dc calculator#computing#linux utils#program#quine programming#coding#python#there was only brief use of python in here#and I didn't even include the code for that#but whatever#this took me a whole day to make#and I am so so proud of it
Text
since I've been talking about FFXIV modding, lately, a caveat on risks.
you may have heard about an infamous event in ffxiv modding where the author of GShade, a widely popular ReShade fork, got pissy about people forking their code and decided to demonstrate the risks of installing untrusted code on your computer... by pushing an update that makes GShade automatically shut down users' computers. basically torching their own mod and ejecting themselves from the mod community to make a point.
the impact on the modding scene was ultimately minor. almost all GShade shaders could be ported straight back to ReShade (the main thing that GShade added was support for reading the depth buffer, which can be turned on in ReShade). but it does illustrate that a mod scene is, well, literally downloading code written by strangers on the internet and running it, with all the dangers that entails.
one thing I like a lot about the FFXIV mod scene is that a whole lot of the development is open source - indeed, a ton of the infrastructure for distributing mods is straight up built on GitHub. the level of technical knowledge is some of the highest I've seen in a mod scene. this doesn't rule out attacks! simply being open source doesn't mean the level of code scrutiny that you would get in a high-profile open source project, with many mods being the work of a solo dev or a tiny team of very passionate nerds. it would not be difficult for a mod author to do what the author of GShade did, and push an update with an attack. but it is at least a check, and a general incentive to cooperate.
the risks are increased further by mods like Mare Synchronos which allow other people to give you a list of mods to download. Mare Synchronos does not allow arbitrary code execution, it is limited to replacing game assets and certain specific tricks like skeleton modification, but the attack surface is still present - if there's any problem in the way the asset replacer mods modify the game's memory at runtime, I'm sure it would be easy to break out and take full control of the game process.
from there, you're still limited by the privileges of the user account - FFXIV does not run as administrator. but there's still considerable scope for shenanigans.
for this reason it's kind of surprising to me that some mod authors do not publish source code. e.g. for PuppetMaster and MidiBard, a combo which is widely used by in-game performers, the github repo is literally just metadata and binaries. like, guys. it's in everyone's interest to be able to know exactly what the code is doing; if you refuse to show the source that is immediately sus. but when i started talking to modders in the scene to ask for some advice on getting started on my pie in the sky animation mod project, I was warned that people can be quite protective over the tricks they've learned, and to back up any resources I use in case they disappear at random. (be assured I am going to devlog this project in great detail like I always do lol.)
I can't really tell you where to set your acceptable risk level. for me, since everything is backed up locally and remotely, and my sensitive data is encrypted, I figure the outside risk of attacks is worth it for the chance to see peoples' custom character designs and add another layer to the game. I think the risks are considerably less than, say, downloading cracked games. but it is an attack surface you are exposing so you gotta make that call! don't go in unaware of it.
Text
The implication in the comment isn't correct. Consider the following game:
Player 1 picks a nondeterministic Turing machine
Player 2 picks an execution of that Turing machine
If it terminates with an empty tape, player 1 wins. If it terminates with a non-empty tape, player 2 wins. (If the Turing machine doesn't terminate, then neither does the game.)
This is obviously Turing complete, but there's also an obvious winning strategy of just picking a machine that immediately halts.
There's maybe a related but correct statement, but I'm not quite sure what it would be. I don't think it even implies that a brute-force algorithm to find a winning strategy wouldn't work, since the algorithm only needs to find a single winning solution, not all possible games, so it could just interleave the execution of possibly non-terminating states until one of them wins.
I guess this could fail if, as part of all winning strategies, the opponent of the ultimate winner is able to force the execution of an arbitrary Turing machine along the way, but in that case they could force the execution of a non-terminating TM, and thus avoid losing. (I'm assuming that if the game reaches a non-terminating state, that means neither player can win, so it counts as a draw. I don't know what the actual rule is.)
I guess it implies that checking whether there's a winning strategy from an arbitrary state is uncomputable at least.
Also, in a paper I found on the topic, they conjecture that it isn't even computable to determine what counts as a legal move (Conjecture 2), in which case you kind of can't even enter the game into an algorithm for finding a winning strategy in the first place.
I had assumed that the "running DOOM" thing would leave you tediously executing DOOM by hand, but the paper points out that you're allowed to use arbitrary shortcuts in MTG games, which implies that if you beat your opponent down enough to be able to basically do what you like, you could choose to tediously set up a bootloader of some sort with your Magic cards, convince your opponent that the card-based computer you've set up is equivalent to a silicon computer you have handy, enter the DOOM code into it, then run that, and it would count as a valid game of Magic: The Gathering. If you have your opponent playing DOOM, though, they'd still have the right to slow it down to make it easier. To avoid the issue of relying on timing like DOOM does, you could turn your Magic game into a non-real-time game instead, like chess. Or recurse, and run Magic in Magic. (Technically an actual Turing machine, like the one they describe, accepts all its inputs at the start rather than as it runs, but I assume that's fixable.)
why is chess the Big Boy Smart Brain game?? there’s been no advances in strategy for 70 years. you can sit down and teach yourself the five winning moves in like a day. show me a computer that can flawlessly win any game of MTG and then ill be impressed.
Text
How to Prevent File Inclusion Vulnerabilities in Laravel (2025)
File Inclusion Vulnerabilities in Laravel: What You Need to Know
Laravel is one of the most popular PHP frameworks for building web applications, known for its elegant syntax and rich set of features. However, like any other web framework, Laravel is susceptible to security vulnerabilities. One such vulnerability is File Inclusion (specifically Local File Inclusion, or LFI), which can allow attackers to pull in files the application never intended to include, leading to code execution, information disclosure, or even Remote File Inclusion (RFI).

In this post, we'll dive into what File Inclusion vulnerabilities are, how they affect Laravel applications, and how you can fix them using best practices. Moreover, we'll guide you on how our free Website Security Checker can help you identify and protect your site from these vulnerabilities.
What is File Inclusion?
File Inclusion vulnerabilities occur when an application includes a file without properly validating the file path. This could allow malicious users to include arbitrary files from the server, which could lead to serious security breaches.
In Laravel, this typically happens when user input is passed directly to a file inclusion function like include(), require(), or file_get_contents() without proper sanitization.
How File Inclusion Vulnerabilities Work in Laravel
Laravel’s architecture is designed to be secure out of the box. However, improper use of dynamic file inclusion can introduce risks. Here’s a simple example where File Inclusion vulnerabilities could occur:
<?php
// Vulnerable to LFI
$file = $_GET['page']; // User input directly from the URL
include($file . '.php');
?>
In this code, the value passed through the $_GET['page'] parameter is directly included as a PHP file. Without validation, an attacker could potentially manipulate this input to include malicious files, such as:
http://yourwebsite.com/?page=../../etc/passwd
Depending on the PHP version and configuration, tricks like this can let an attacker traverse the filesystem and read sensitive files on the server, or pull unintended code into execution.
How to Fix File Inclusion Vulnerabilities in Laravel
To protect your Laravel application from File Inclusion vulnerabilities, follow these best practices:
Avoid User Input in File Inclusion: Never allow user input to determine the file path. Always validate and sanitize any user input before using it in file inclusion functions.
<?php
$validPages = ['home', 'about', 'contact']; // Whitelist valid page names
$page = $_GET['page'];
if (in_array($page, $validPages)) {
    include($page . '.php');
} else {
    echo "Invalid page request!";
}
?>
Use Laravel’s Built-in Routing: Instead of relying on dynamic file inclusion, use Laravel’s routing system to map requests to controllers or views. This eliminates the need for file inclusion altogether.
Disable allow_url_include in PHP: Make sure the allow_url_include directive is disabled in your PHP configuration to prevent Remote File Inclusion (RFI) attacks.
How Our Free Website Security Checker Can Help
Our Free Website Security Scanner tool can scan your Laravel site for File Inclusion vulnerabilities and other security issues. By using the tool to test your website's security for free, you can get a detailed vulnerability report that will help you protect your Laravel application.
Here’s a screenshot of our tool in action:

Vulnerability Assessment Report Example
Once you run the security check, our tool generates a vulnerability report that shows potential issues with your site. Here’s an example of a vulnerability assessment report:

By reviewing these reports, you can identify weak points in your site and take immediate steps to fix any File Inclusion vulnerabilities or other security threats.
Conclusion
File Inclusion vulnerabilities can be a major threat to Laravel applications if not properly handled. By following best practices like input sanitization, using Laravel’s routing system, and utilizing our free Website Security Checker, you can ensure your application is secure against these threats.
Start using our free tool today and take proactive steps to protect your Laravel site from file inclusion and other security risks!
#cyber security#cybersecurity#data security#pentesting#security#the security breach show#laravel#inclusion
1 note
·
View note
Text
information about the main dungeon party at least the members i have at this point because i havent talked abt this story very much and now i want to
as the plates of the steppe continue to get pushed around by the rapidly expanding dungeon, as well as the continual displacement of underground communities, a certain party is no longer looking for the assistance of "dark magic" hidden within the depths - their goal is to kill the dungeon itself once and for all.
meros arrogant and temperamental prince from a far northern kingdom, having transformed into a rabbit beastman and fled south in search of the steppe dungeon's dark magic, in hopes it would return him to normal. for a time he disguised himself as a merchant girl and white mage, but later accepted his state and joined the main party as a dragoon. doesn't like healing anymore because nobody likes how direct and painful (but extremely efficient) his healing is. only sheridaen knows that he's a prince.
sheridaen "High Elf" and polygonal lady knight pretending to be a runaway princess. really from the clergy of a horrible and powerful church, exiled and burned with blue fire. now a lordless paladin with an absent god that she believes owes her a personal favor. only meros knows that she was from the church, and lets only him do intensive heals on her even though it hurts like a bitch. extremely annoyed by meros but considers him her lord to protect.
nyssa pagan human pediatric nurse spirited away from the earthly midwest into the steppe - now hearing voices and extending her heart in ways she could not before, she dances with fairies and deities on the astral plane and calls upon them as a summoner. she is also the party's main healer, able to harness the usual wheelhouse of cast heals as well as raising and barrier spells. she's a much more gentle healer than meros but her magic takes longer to set.
unnamed werewolf guy i just gave a mental form (luiz) nyssa's friend, also from earth but they only met once they were in sey and started traveling together. Deaf werewolf guy with a combat style somewhere between bard trickster and mimic. he was also a speedrunner back on earth and has somehow figured out how to harness this physically within the dungeon... does crazy movement tech so hard that it can launch him into the acuity "code" between the walls of the dungeon. he can do this to get into rooms with no entrance or exit as well as through doors that unlock from the other side. he doesnt know quite what hes doing and thus isnt able to see the arbitrary acuity execution thats keeping the dungeon going.
other non-members but important characters include the dog woman, a mysterious older adventurer from the distant west who hangs around for a short while to investigate the dungeon. nobody has seen her fight but she carries strange circular weapons. (dancer? monk?) leihn gar, quarter kobold steppe tailor & sweetiepie. friend of the party. also deaf. not a fighter but hoping that the party will be able to take down the dungeon for good.
0 notes
Text
Charity Majors, CTO & Co-Founder at Honeycomb – Interview Series
New Post has been published on https://thedigitalinsider.com/charity-majors-cto-co-founder-at-honeycomb-interview-series/
Charity is an ops engineer and accidental startup founder at Honeycomb. Before this she worked at Parse, Facebook, and Linden Lab on infrastructure and developer tools, and always seemed to wind up running the databases. She is the co-author of O’Reilly’s Database Reliability Engineering, and loves free speech, free software, and single malt scotch.
You were the Production Engineering Manager at Facebook (Now Meta) for over 2 years, what were some of your highlights from this period and what are some of your key takeaways from this experience?
I worked on Parse, which was a backend for mobile apps, sort of like Heroku for mobile. I had never been interested in working at a big company, but we were acquired by Facebook. One of my key takeaways was that acquisitions are really, really hard, even in the very best of circumstances. The advice I always give other founders now is this: if you’re going to be acquired, make sure you have an executive sponsor, and think really hard about whether you have strategic alignment. Facebook acquired Instagram not long before acquiring Parse, and the Instagram acquisition was hardly bells and roses, but it was ultimately very successful because they did have strategic alignment and a strong sponsor.
I didn’t have an easy time at Facebook, but I am very grateful for the time I spent there; I don’t know that I could have started a company without the lessons I learned about organizational structure, management, strategy, etc. It also lent me a pedigree that made me attractive to VCs, none of whom had given me the time of day until that point. I’m a little cranky about this, but I’ll still take it.
Could you share the genesis story behind launching Honeycomb?
Definitely. From an architectural perspective, Parse was ahead of its time — we were using microservices before there were microservices, we had a massively sharded data layer, and as a platform serving over a million mobile apps, we had a lot of really complicated multi-tenancy problems. Our customers were developers, and they were constantly writing and uploading arbitrary code snippets and new queries of, shall we say, “varying quality” — and we just had to take it all in and make it work, somehow.
We were on the vanguard of a bunch of changes that have since gone mainstream. It used to be that most architectures were pretty simple, and they would fail repeatedly in predictable ways. You typically had a web layer, an application, and a database, and most of the complexity was bound up in your application code. So you would write monitoring checks to watch for those failures, and construct static dashboards for your metrics and monitoring data.
This industry has seen an explosion in architectural complexity over the past 10 years. We blew up the monolith, so now you have anywhere from several services to thousands of application microservices. Polyglot persistence is the norm; instead of “the database” it’s normal to have many different storage types as well as horizontal sharding, layers of caching, db-per-microservice, queueing, and more. On top of that you’ve got server-side hosted containers, third-party services and platforms, serverless code, block storage, and more.
The hard part used to be debugging your code; now, the hard part is figuring out where in the system the code is that you need to debug. Instead of failing repeatedly in predictable ways, it’s more likely the case that every single time you get paged, it’s about something you’ve never seen before and may never see again.
That’s the state we were in at Parse, on Facebook. Every day the entire platform was going down, and every time it was something different and new; a different app hitting the top 10 on iTunes, a different developer uploading a bad query.
Debugging these problems from scratch is insanely hard. With logs and metrics, you basically have to know what you’re looking for before you can find it. But we started feeding some data sets into a FB tool called Scuba, which let us slice and dice on arbitrary dimensions and high cardinality data in real time, and the amount of time it took us to identify and resolve these problems from scratch dropped like a rock, like from hours to…minutes? seconds? It wasn’t even an engineering problem anymore, it was a support problem. You could just follow the trail of breadcrumbs to the answer every time, clicky click click.
It was mind-blowing. This massive source of uncertainty and toil and unhappy customers and 2 am pages just … went away. It wasn’t until Christine and I left Facebook that it dawned on us just how much it had transformed the way we interacted with software. The idea of going back to the bad old days of monitoring checks and dashboards was just unthinkable.
But at the time, we honestly thought this was going to be a niche solution — that it solved a problem other massive multitenant platforms might have. It wasn’t until we had been building for almost a year that we started to realize that, oh wow, this is actually becoming an everyone problem.
For readers who are unfamiliar, what specifically is an observability platform and how does it differ from traditional monitoring and metrics?
Traditional monitoring famously has three pillars: metrics, logs and traces. You usually need to buy many tools to get your needs met: logging, tracing, APM, RUM, dashboarding, visualization, etc. Each of these is optimized for a different use case in a different format. As an engineer, you sit in the middle of these, trying to make sense of all of them. You skim through dashboards looking for visual patterns, you copy-paste IDs around from logs to traces and back. It’s very reactive and piecemeal, and typically you refer to these tools when you have a problem — they’re designed to help you operate your code and find bugs and errors.
Modern observability has a single source of truth; arbitrarily wide structured log events. From these events you can derive your metrics, dashboards, and logs. You can visualize them over time as a trace, you can slice and dice, you can zoom in to individual requests and out to the long view. Because everything’s connected, you don’t have to jump around from tool to tool, guessing or relying on intuition. Modern observability isn’t just about how you operate your systems, it’s about how you develop your code. It’s the substrate that allows you to hook up powerful, tight feedback loops that help you ship lots of value to users swiftly, with confidence, and find problems before your users do.
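For a concrete (and entirely made-up) picture of what "arbitrarily wide structured log events" means, here's a sketch of a single event written as a Python dict; the field names are invented for illustration and are not Honeycomb's actual schema. The point is that one record per unit of work carries system, application, and business context together, so metrics, traces, and log-style detail can all be derived from the same record.

# One hypothetical "wide event" emitted for a single request.
event = {
    "timestamp": "2024-05-01T12:34:56Z",
    "service.name": "checkout",
    "trace.trace_id": "a1b2c3",        # ties the event into a trace view
    "trace.span_id": "d4e5f6",
    "duration_ms": 87.3,               # latency metrics can be derived from this
    "http.route": "/cart/submit",
    "http.status_code": 200,
    "db.query_count": 4,
    "user.id": "u-48913",              # high-cardinality business context
    "cart.id": "c-77120",
    "app.build_id": "2024.04.30-rc7",
    "error": None,
}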
You’re known for believing that observability offers a single source of truth in engineering environments. How does AI integrate into this vision, and what are its benefits and challenges in this context?
Observability is like putting your glasses on before you go hurtling down the freeway. Test-driven development (TDD) revolutionized software in the early 2000s, but TDD has been losing efficacy the more complexity is located in our systems instead of just our software. Increasingly, if you want to get the benefits associated with TDD, you actually need to instrument your code and perform something akin to observability-driven development, or ODD, where you instrument as you go, deploy fast, then look at your code in production through the lens of the instrumentation you just wrote and ask yourself: “is it doing what I expected it to do, and does anything else look … weird?”
Tests alone aren’t enough to confirm that your code is doing what it’s supposed to do. You don’t know that until you’ve watched it bake in production, with real users on real infrastructure.
This kind of development — that includes production in fast feedback loops — is (somewhat counterintuitively) much faster, easier and simpler than relying on tests and slower deploy cycles. Once developers have tried working that way, they’re famously unwilling to go back to the slow, old way of doing things.
What excites me about AI is that when you’re developing with LLMs, you have to develop in production. The only way you can derive a set of tests is by first validating your code in production and working backwards. I think that writing software backed by LLMs will be as common a skill as writing software backed by MySQL or Postgres in a few years, and my hope is that this drags engineers kicking and screaming into a better way of life.
You’ve raised concerns about mounting technical debt due to the AI revolution. Could you elaborate on the types of technical debts AI can introduce and how Honeycomb helps in managing or mitigating these debts?
I’m concerned about both technical debt and, perhaps more importantly, organizational debt. One of the worst kinds of tech debt is when you have software that isn’t well understood by anyone. Which means that any time you have to extend or change that code, or debug or fix it, somebody has to do the hard work of learning it.
And if you put code into production that nobody understands, there’s a very good chance that it wasn’t written to be understandable. Good code is written to be easy to read and understand and extend. It uses conventions and patterns, it uses consistent naming and modularization, it strikes a balance between DRY and other considerations. The quality of code is inseparable from how easy it is for people to interact with it. If we just start tossing code into production because it compiles or passes tests, we’re creating a massive iceberg of future technical problems for ourselves.
If you’ve decided to ship code that nobody understands, Honeycomb can’t help with that. But if you do care about shipping clean, iterable software, instrumentation and observability are absolutely essential to that effort. Instrumentation is like documentation plus real-time state reporting. Instrumentation is the only way you can truly confirm that your software is doing what you expect it to do, and behaving the way your users expect it to behave.
How does Honeycomb utilize AI to improve the efficiency and effectiveness of engineering teams?
Our engineers use AI a lot internally, especially CoPilot. Our more junior engineers report using ChatGPT every day to answer questions and help them understand the software they’re building. Our more senior engineers say it’s great for generating software that would be very tedious or annoying to write, like when you have a giant YAML file to fill out. It’s also useful for generating snippets of code in languages you don’t usually use, or from API documentation. Like, you can generate some really great, usable examples of stuff using the AWS SDKs and APIs, since it was trained on repos that have real usage of that code.
However, any time you let AI generate your code, you have to step through it line by line to ensure it’s doing the right thing, because it absolutely will hallucinate garbage on the regular.
Could you provide examples of how AI-powered features like your query assistant or Slack integration enhance team collaboration?
Yeah, for sure. Our query assistant is a great example. Using query builders is complicated and hard, even for power users. If you have hundreds or thousands of dimensions in your telemetry, you can’t always remember offhand what the most valuable ones are called. And even power users forget the details of how to generate certain kinds of graphs.
So our query assistant lets you ask questions using natural language. Like, “what are the slowest endpoints?”, or “what happened after my last deploy?” and it generates a query and drops you into it. Most people find it difficult to compose a new query from scratch and easy to tweak an existing one, so it gives you a leg up.
Honeycomb promises faster resolution of incidents. Can you describe how the integration of logs, metrics, and traces into a unified data type aids in quicker debugging and problem resolution?
Everything is connected. You don’t have to guess. Instead of eyeballing that this dashboard looks like it’s the same shape as that dashboard, or guessing that this spike in your metrics must be the same as this spike in your logs based on time stamps….instead, the data is all connected. You don’t have to guess, you can just ask.
Data is made valuable by context. The last generation of tooling worked by stripping away all of the context at write time; once you’ve discarded the context, you can never get it back again.
Also: with logs and metrics, you have to know what you’re looking for before you can find it. That’s not true of modern observability. You don’t have to know anything, or search for anything.
When you’re storing this rich contextual data, you can do things with it that feel like magic. We have a tool called BubbleUp, where you can draw a bubble around anything you think is weird or might be interesting, and we compute all the dimensions inside the bubble vs outside the bubble, the baseline, and sort and diff them. So you’re like “this bubble is weird” and we immediately tell you, “it’s different in xyz ways”. SO much of debugging boils down to “here’s a thing I care about, but why do I care about it?” When you can immediately identify that it’s different because these requests are coming from Android devices, with this particular build ID, using this language pack, in this region, with this app id, with a large payload … by now you probably know exactly what is wrong and why.
It’s not just about the unified data, either — although that is a huge part of it. It’s also about how effortlessly we handle high cardinality data, like unique IDs, shopping cart IDs, app IDs, first/last names, etc. The last generation of tooling cannot handle rich data like that, which is kind of unbelievable when you think about it, because rich, high cardinality data is the most valuable and identifying data of all.
How does improving observability translate into better business outcomes?
This is one of the other big shifts from the past generation to the new generation of observability tooling. In the past, systems, application, and business data were all siloed away from each other into different tools. This is absurd — every interesting question you want to ask about modern systems has elements of all three.
Observability isn’t just about bugs, or downtime, or outages. It’s about ensuring that we’re working on the right things, that our users are having a great experience, that we are achieving the business outcomes we’re aiming for. It’s about building value, not just operating. If you can’t see where you’re going, you’re not able to move very swiftly and you can’t course correct very fast. The more visibility you have into what your users are doing with your code, the better and stronger an engineer you can be.
Where do you see the future of observability heading, especially concerning AI developments?
Observability is increasingly about enabling teams to hook up tight, fast feedback loops, so they can develop swiftly, with confidence, in production, and waste less time and energy.
It’s about connecting the dots between business outcomes and technological methods.
And it’s about ensuring that we understand the software we’re putting out into the world. As software and systems get ever more complex, and especially as AI is increasingly in the mix, it’s more important than ever that we hold ourselves accountable to a human standard of understanding and manageability.
From an observability perspective, we are going to see increasing levels of sophistication in the data pipeline — using machine learning and sophisticated sampling techniques to balance value vs cost, to keep as much detail as possible about outlier events and important events and store summaries of the rest as cheaply as possible.
AI vendors are making lots of overheated claims about how they can understand your software better than you can, or how they can process the data and tell your humans what actions to take. From everything I have seen, this is an expensive pipe dream. False positives are incredibly costly. There is no substitute for understanding your systems and your data. AI can help your engineers with this! But it cannot replace your engineers.
Thank you for the great interview, readers who wish to learn more should visit Honeycomb.
#acquisitions#Advice#ai#AI-powered#android#API#APIs#APM#app#apps#author#AWS#Best Of#bugs#Building#Business#change#Charity#chatGPT#code#Collaboration#complexity#Containers#course#CTO#dashboard#data#data pipeline#Database#databases
0 notes
Text
Quantum Momentum
I was watching [this video] by Sabine Hossenfelder when the idea finally clicked in my head. I'm not entirely sure how to phrase this idea that seems to need phrasing, though.
I don't know exactly what Sabine is asking in the video; I'm going based on my understanding of the concept: how do you measure the thing without affecting the measurement of the thing?
And the obvious answer is: you can't. But a lot of physicists have been framing this [ping-pong] idea lately, which I also haven't quite understood, because the phrasing is always slightly off.
Well. To me anyway.
So, in the experiment described by Sabine: if you measure one half of the box and the particle isn't there, you know it's in the other half of the box.
However, if you measure it AND you see the particle, you've affected the momentum of the particle, and thus affected the measurement.
Which means, if this is how you're measuring, you can only really get a pseudo-accurate measurement when you don't see it; and when you *DO* see it, the experiment is over, because you've changed all future measurements.
One solution has been entanglement. Every *EXTRA* entangled particle means you can take a measurement by discarding an entangled particle, until you're left with a single particle, and when you measure *that* particle, again the experiment is over; you cannot take any future measurements.
*UNLESS* you take into account the effect that your measurements have had on said particle.
Unfortunately, that takes a usable axis *out* of the particle, but it allows you to sustain the *experiment* for as long as possible.
So, if you create a closed system where this particle is bouncing off of an arbitrary number of measuring devices, you can know how you affect the particle AND retain some arbitrary data of some kind on the particle.
And thus, you can know all the information stored on the qBit.
As a little post-note here:
A qBit has a few things: a pitch (the initial data fed to the particle) and a catch (reading that particle at some arbitrary point).
Entangled particles have the ability to be caught *multiple* times.
And a process, or code execution, is kind of like a batter who affects the ball mid-execution and is caught in the outfield somewhere.
But if the system has multiple catchers throwing to each other, who can also affect the calculation, then you have the ability to sustain that pitch indefinitely.
Or until entropy catches up with the calculation.
Thus we have two ways to use a qBit: one with a batter and an end point, and one that's a circle of catchers.
0 notes
Note
This one is actually not so accidental, depending on how "generous" you want to be with the term programming language. One could definitely argue that an uberstate -> seed format is a programming language of sorts, just one that has limited utility.
There are absolutely a bunch of ways you can accidentally create programming languages though, even Turing complete ones. Pretty much whenever you have something operate on input, it's sort of a programming language.
Surely that's an exaggeration, right? Not really. Consider a basic interpreter for something like Brainfuck. All it does is step through the program character by character and do something depending on the character. Brainfuck is Turing complete and certainly a programming language.
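As a rough illustration of how little machinery that takes, here's a compact Brainfuck-style interpreter sketch in Python (unoptimized, and it assumes well-formed programs with matched brackets). The whole "language implementation" is one loop and one dispatch on the current character.

# Compact Brainfuck-style interpreter sketch (assumes matched brackets).
def run(program, read_char=lambda: "\0"):
    tape = [0] * 30000                 # working memory
    ptr = 0                            # data pointer
    pc = 0                             # program counter
    out = []
    while pc < len(program):
        c = program[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(read_char()[:1] or "\0")
        elif c == "[" and tape[ptr] == 0:
            depth = 1                  # skip forward to the matching "]"
            while depth:
                pc += 1
                depth += {"[": 1, "]": -1}.get(program[pc], 0)
        elif c == "]" and tape[ptr] != 0:
            depth = 1                  # jump back to the matching "["
            while depth:
                pc -= 1
                depth += {"]": 1, "[": -1}.get(program[pc], 0)
        pc += 1
    return "".join(out)

# "++++++++[>++++++++<-]>+." leaves 65 in the second cell and prints "A".
print(run("++++++++[>++++++++<-]>+."))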
When you take in input and do something based on it, the user can control what the program does through that input. If they are particularly clever, they may be able to get the program handling their input to perform a computation on their behalf.
This seems goofy but actually has security implications. In cybersecurity there's the concept of weird machines. There are many ways in which they can arise as a security problem, but generally if the user is able to corrupt the state of a program based on their input there's a good chance that they can then control the corrupted state using an unspecified secret latent programming language. This secret language that nobody intentionally wrote is a consequence of how the program was made and, depending on the circumstances, may allow for anything from exfiltrating secret variables and denial of service to complete arbitrary code execution.
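Here's a deliberately toy illustration of that idea in Python (the names are hypothetical, and this is nowhere near a real weird machine, which usually lives in memory-corruption territory): an input format that was only ever meant to set a couple of display preferences quietly doubles as a little language for writing arbitrary pieces of program state.

# Hypothetical example: the "preferences" format secretly controls more
# program state than the author ever intended.
class Session:
    def __init__(self):
        self.theme = "light"
        self.font_size = 12
        self.is_admin = False          # never meant to be user-controlled

def apply_prefs(session, text):
    # Intended input: lines like "theme=dark" or "font_size=14".
    for line in text.splitlines():
        key, _, value = line.partition("=")
        setattr(session, key.strip(), value.strip())   # no whitelist of keys

s = Session()
apply_prefs(s, "theme=dark\nis_admin=yes")   # attacker-chosen "preferences"
print(s.is_admin)                            # "yes" is truthy, so admin checks pass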
Weird machines are squarely in the "accidentally made programming language" territory, but going back to the original example, there are many similar cases where a programmatic system is extended just slightly to the point of (at least "practical") completeness.
In C there's this function called printf, which prints output to the terminal based on a format string. Format strings look like "age: %d\n", with the %d being a placeholder for a number. There are a lot of different placeholders, including ones for decimals, strings, single characters, numbers in hexadecimal, etc. Particularly notable is the placeholder %n, which doesn't actually change what gets printed; instead, it writes the number of bytes printed so far to the location given by a corresponding pointer argument.
The intended use case was to see how much space parts of a string took up, for alignment or similar. This simple intention combined with the overall behavior of C allows you to do pretty much anything using just looped printf, like implementing a tic-tac-toe game.
It doesn't have to be so explicitly unintentional, either. SQL started out as a database query language, and is still mostly used for that purpose. As time went on more capabilities were added to the conditions on which data could be retrieved, and eventually a full programming language specification was added to SQL. A lot of domain specific languages go down this route, as can be seen with the seed language.
As someone who does a lot of informal research on systems in theoretical computer science, I can tell you it's very easy to create a system which just so happens to be Turing complete. Hell, just 6 arbitrary-precision integer variables in a loop with + - * and floor division are Turing complete; it's called blindfolded arithmetic. Even many systems which are not Turing complete can still do an impressive amount of computation.
If you're a programmer, keep this in mind whenever handling user input. Whether the input is a file, a network request, or typed text, make sure to think about what the user could potentially put in there and how your program might behave when particular kinds of data come in.
How do you *accidentally* make a programming language?
Oh, it's easy! You make a randomizer for a game, and because you're doing any% development, you set up the seed file format such that each line of the file defines an event listener for a value change of an uberstate (which is an entry of the game's built-in serialization system for arbitrary data that should persist when saved).
You do this because it's a fast hack that lets you trigger pickup grants on item finds, since each item find always will correspond with an uberstate change. This works great! You smile happily and move on.
There's a small but dedicated subgroup of users who like using your randomizer as a canvas! They make what are called "plandomizer seeds" ("plandos" for short), which are seed files that have been hand-written specifically to give anyone playing them a specific curated set of experiences, instead of something random. These have a long history in your community, in part because you threw them a few bones when developing your last randomizer, and they are eager to see what they can do in this brave new world.
A thing they pick up on quickly is that there are uberstates for lots more things than just item finds! They can make it so that you find double jump when you break a specific wall, or even when you go into an area for the first time and the big splash text plays. Everyone agrees that this is neat.
It is in large part for the plando authors' sake that you allow multiple line entries for the same uberstate that specify different actions - you have the actions run in order. This was a feature that was hacked into the last randomizer you built later, so you're glad to be supporting it at a lower level. They love it! It lets them put multiple items at individual locations. You smile and move on.
Over time, you add more action types besides just item grants! Printing out messages to your players is a great one for plando authors, and is again a feature you had last time. At some point you add a bunch for interacting with player health and energy, because it'd be easy. An action that teleports the player to a specific place. An action that equips a skill to the player's active skill bar. An action that removes a skill or ability.
Then, you get the brilliant idea that it'd be great if actions could modify uberstates directly. Uberstates control lots of things! What if breaking door 1 caused door 2 to break, so you didn't have to open both up at once? What if breaking door 2 caused door 1 to respawn, and vice versa, so you could only go through 1 at a time? Wouldn't that be wonderful? You test this change in some simple cases, and deploy it without expecting people to do too much with it.
Your plando authors quickly realize that when actions modify uberstates, the changes they make can trigger other actions, as long as there are lines in their files that listen for those. This excites them, and seems basically fine to you, though you do as an afterthought add an optional parameter to your uberstate modification action that can be used to suppress the uberstate change detector, since some cases don't actually want that behavior.
(At some point during all of this, the plando authors start hunting through the base game and cataloging unused uberstates, to be used as arbitrary variables for their nefarious purposes. You weren't expecting that! Rather than making them hunt down and use a bunch of random uberstates for data storage, you sigh and add a bunch of explicitly-unused ones for them to play with instead.)
Then, your most arcane plando magician posts a guide on how to use the existing systems to set up control flow. It leverages the fact that setting an uberstate to a value it already has does not trigger the event listener for that uberstate, so execution can branch based on whether or not a state has already been set to a specific value!
Filled with a confused mixture of pride and fear, you decide that maybe you should provide some kind of native control flow structure that isn't that? And because you're doing a lot of this development underslept and a bit past your personal Ballmer peak, the first idea that you have and implement is conditional stops, which are actions that halt processing of a multiple-action-chain if an uberstate is [less than, equal to, greater than] a given value.
The next day, you realize that your seed specification format now can, while executing an action chain, read from memory, write to memory, branch based on what it finds in memory, and loop. It can simulate a Turing machine, using the uberstates as tape. You set out to create a format by which your seed generator could talk to your client mod, and have ended up with a Turing-complete programming language. You laugh, and laugh, and laugh.
#as a sidenote I'd like to see the proof of completeness for the seed language#I have my suspicions that it might be bounded-complete#which are certainly capable#but also certainly not Turing complete#esolang#esolangs#programming#coding#cybersecurity#software engineering
2K notes
·
View notes
Note
hiya foone! i'm working on the surprisingly lofty task of modding barbie fashion show 2004 and i've been told twice to ask if you have any leads on how to get to the game files. i don't know how to simplify it because i'm so in over my head at this point. here is the thread
okay so here's how you reverse engineer an arbitrary game, the quick version:
Research. Who made the game? what else did they make? Maybe they made a game with the same engine, and someone already figured out that one? (not that I saw on a quick look, but you may be able to dig deeper) Also, look in the game files. There's a PowerRender.dll and a sipEngine.bc file. Nothing for sipEngine, but PowerRender has a hit on the internet archive, maybe that download includes some info on how it encodes files?
Look at the files (with a hex editor, like HxD). KAR files seem to be the main storage mechanism, and they've got a RIFF header. RIFF is a standard, though they're not using it exactly. But this might help. Another thing you can spot in the KAR files is a bunch of English strings (CreditsTb.kar is lousy with them). That's a good sign: it means the files aren't compressed, so you don't have to figure out the compression method. (There's a quick script sketch for this step after the list.)
Static analysis of the EXE. Get Ghidra and load up the EXE. Find where it opens files (CreateFileA/CreateFileW on Windows), trace back from there. Check the strings. Hey look, function FUN_004e6260 is called with "KAResource.kar", so FUN_004e6260 is probably a function to load arbitrary resource files. Dig through that, figure out how it works.
Dynamic analysis of the EXE. Stick it in a debugger and see what it does. Set a breakpoint on CreateFileA/W and follow the execution. I don't have a good recommendation for what tool to use here, I'm from the past. I've used Ollydbg a lot but it hasn't been updated in 9 years.
Hijack the EXE and make it do your work for you. One thing I noticed while looking around was references to Python. This game apparently embeds a Python interpreter, version 2.2. Maybe you can find where it loads the code from, or inject your own code?
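For step 2, here's a rough Python sketch of a first-look script (assumptions: the .kar files really do start with a RIFF-style header as described above, and CreditsTb.kar is sitting in the current directory; adjust paths and thresholds to taste). It prints the header fields and dumps any printable ASCII runs, which is a quick way to spot filenames, credits text, and other hints about the format.

# First-look script for a .kar file (assumes a RIFF-style header).
import re
import sys

def inspect(path, min_len=4):
    with open(path, "rb") as f:
        data = f.read()

    # Standard RIFF layout: 4-byte tag, 4-byte little-endian size, 4-byte form type.
    print("magic bytes:", data[:4])
    if data[:4] == b"RIFF":
        size = int.from_bytes(data[4:8], "little")
        print("declared size:", size, "| actual file size:", len(data))
        print("form type:", data[8:12])

    # Dump printable ASCII runs: useful for spotting filenames, credits, etc.
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        print(hex(m.start()), m.group().decode("ascii"))

if __name__ == "__main__":
    inspect(sys.argv[1] if len(sys.argv) > 1 else "CreditsTb.kar")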
Anyway those are some introductory ideas. feel free to ask any follow-up questions, but this hopefully gives you some idea of where to start?
Good luck!
446 notes
·
View notes