jstnlowe
jst a blog
35 posts
do what you can, with what you have, where you are
jstnlowe · 3 months ago
Always down for some interesting code. Tonight's snippet brought to you by one of the top teams competing in jailCTF 2024. Arguably the most important bit of CTF prep you can do is to come up with a good team name. Here, the team went with chr(sum(range(ord(min(str(not())))))), which, in Python, evaluates to ඞ.
Wait, what?
I'll admit that this took me way too long to work out because I am not nearly clever enough. Much Googling was done. And also this only works with Python 3+ because unicode.
Working from the inside out:
not() is applying the unary not operator to an empty tuple. In Python, certain objects are "falsy", meaning they evaluate to the Boolean False, and an empty sequence or collection, such as an empty tuple, is one of those falsy objects - something I don't think I had ever explicitly realized. So not () is equivalent to not False, which returns True.
str() is converting True to the actual string of characters: "True"
min() is returning the character in "True" with the lowest unicode codepoint, which is "T" (84). So here it's useful that capital letters come before lowercase and thus have lower codepoint values.
ord() is returning the integer that represents "T", which is 84.
range() takes that integer and returns the sequence of integers from 0 up to (but not including) 84 - a lazy range object in Python 3, not a tuple. This works because if you only give range() a single argument (the integer to stop at), it uses the defaults for start (0) and step (1).
sum() is adding up all of the items in that sequence (0+1+2+…+83), resulting in 3486. In my opinion, sum(range(84)) is probably the cleverest bit, because 3486 is (conveniently) a triangular number: the sum of the natural numbers 1 through 83.
chr() is returning the Unicode character at that codepoint (3486), which corresponds to the Sinhala character ඞ (U+0D9E).
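The whole chain is easy to verify one call at a time in a Python REPL (the stepN names are just for illustration):

```python
# chr(sum(range(ord(min(str(not())))))) evaluated one call at a time:
step1 = not ()        # an empty tuple is falsy, so this is True
step2 = str(step1)    # "True"
step3 = min(step2)    # "T" -- uppercase letters have lower codepoints
step4 = ord(step3)    # 84
step5 = range(step4)  # 0, 1, ..., 83
step6 = sum(step5)    # 3486, the 83rd triangular number
step7 = chr(step6)    # "\u0d9e"

print(step7)  # ඞ
assert step7 == chr(sum(range(ord(min(str(not()))))))
```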
jstnlowe · 3 months ago
Birthday card for a friend and a Thank You card for attendees of a recent equestrian event. Done with a combination of things - Krita, Rebelle, Affinity Photo, and a few different SDXL models using Invoke. Still figuring out workflows to get ideas onto paper.
jstnlowe · 7 months ago
Some event posters for an upcoming series of equestrian clinics with Jo Aston. Recently switched from Adobe products over to Affinity Photo and Affinity Designer, as Adobe's licensing costs just aren't justifiable when the work is (mostly) pro bono.
jstnlowe · 2 years ago
Another example of how controlling the aspect ratio in a Midjourney prompt affects the composition. This time with seamless tiles. Here, the same prompt was used with three different aspect ratios: square (1:1), portrait (1:2), and landscape (2:1).
flowering dogwood by william morris --tile --v 5
jstnlowe · 2 years ago
Experimenting with Midjourney v5 prompts and aspect ratios. These all use the same prompt and the same seed, only the AR varied. The default AR is square (1:1), so this illustrates how important it can be to consider AR when constructing a prompt.
jstnlowe · 2 years ago
Working with Midjourney & GPT4 - Part II
Again, using GPT4 to generate concise prompts for specific concepts.
GPT4 Prompt: In the form of a 100-word prompt suitable for use with an AI image generator: describe the exterior architecture and features of an ideal small, modern cabin in the woods.
GPT4 Response: Design a cozy, modern cabin nestled in a lush forest. Incorporate a minimalist exterior with clean lines, floor-to-ceiling windows, and natural materials like wood and stone. Include a sloping green roof, an outdoor deck, and subtle exterior lighting. Situate the cabin on a serene clearing surrounded by tall trees, with a meandering footpath leading to its entrance.
The above was used as the Midjourney Prompt for v5. Here are a few of the results:
jstnlowe · 2 years ago
Working with Midjourney & GPT4
One of a series of little experimental projects that I have been doing with Midjourney v5 and GPT4 (via OpenAI ChatGPT Plus). Sometimes it's tough to find the right words to convey an idea. So why not leverage an LLM that knows all the words and ask it to help us talk to the other LLM? It's pretty rad.
GPT4 Prompt: Give me a 10-15 word sentence summarizing the compositions and subjects of La Città Nuova by Antonio Sant'Elia, followed by 10 to 15 short phrases, comma-separated, describing the style and aesthetics, then a list of the methods and materials used by the artist in creating the works.
GPT4 Response: […] depicts innovative urban structures featuring interconnected, multi-level living spaces. Futuristic, dynamic, geometric, bold lines, vertical, industrial aesthetic, monumental, integrated infrastructure, elevated transportation, modern metropolis. Ink on paper, architectural drawings, perspective sketching.
Midjourney Prompt: innovative urban structures featuring interconnected, multi-level living spaces. Futuristic, dynamic, geometric, bold lines, vertical, industrial aesthetic, monumental, integrated infrastructure, elevated transportation, modern metropolis. Sepia, ink on paper, architectural drawing, perspective sketch, white background --v 5
Midjourney gave these as the responses:
For comparison, here are two of Sant'Elia's drawings from La CittĂ  Nuova:
jstnlowe · 4 years ago
Build Your Own Braindance
Cyberpunk 2077 has this in-game concept of a “braindance” where special implants record every sensory input (and some that are extra-sensory, such as thermal infrared) that a person is experiencing. The recording can then be played back by another person, allowing them to experience the events from a first-person perspective or even explore the recorded environment as if they were actually there. This is pretty much directly inspired by the SQUIDs in the movie Strange Days and there are a lot of references to the movie throughout the game.
We actually have a primitive form of this available to us today and some aspects of it can be seen in CP2077's interpretation of the technology. It’s a process called photogrammetry and it’s surprisingly easy to get started.
Photogrammetry software allows us to take photos or video of environments or objects, and through a semi-automated process, generate 3D models. There are several free photogrammetry applications out there. This particular workflow uses AliceVision Meshroom and is accessible to anyone with a CUDA-capable Nvidia GPU.
With a Pixel 3A, I shot a quick video walking into my kitchen and then around my kitchen table, which I had piled with whatever I could find to make things more interesting. The video was a 4K 30 fps MP4, recorded vertically - the idea being that orienting the video perpendicular to the plane of motion would capture as much information about the environment as possible by reducing the overlap between frames as I moved horizontally through the space. I ended up with about 50 seconds of footage (1500 frames) weighing in at 330MB.
From that video I extracted roughly every 10th frame leaving me with just over 160 images, each 2160x3840. The images were then imported into Meshroom and processed with the default pipeline. This took a little over half an hour on an Intel i7-8700K with an Nvidia RTX 2080 and 32GB of memory.
In this video you can see the Meshroom interface - the source images on the left, the resulting point cloud and position of the camera in each source image on the right, and the pipeline below.
From that 330MB video I ended up with a 1.3-million-triangle model weighing in at just over 100MB with the texture (also generated by Meshroom). The final result is unrefined and a little rough, but not bad for an hour of R&D and thirty minutes of processing. With some experimentation it should be possible to get smoother surfaces and fewer gaps in the mesh.
I also did a quick walkaround of my dog while he was sleeping on the bed. This model used 25 photos instead of a video and took about four minutes to process.
jstnlowe · 4 years ago
The human brain’s capacity doesn’t change from one year to the next, so the insights from studying human behavior have a very long shelf life. What was difficult for users twenty years ago continues to be difficult today.
Steve Krug, Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability
jstnlowe · 5 years ago
Another quick event poster for the Wentworth Hunt - their annual “Blessing of the Hounds” ceremony. Featured in the artwork is Sue Levy, MFH riding Mo and hounds Acorn (left) and Archer (right). Illustrator and Photoshop, as usual.
jstnlowe · 5 years ago
Two posters made for an upcoming Wentworth Hunt event: left is the advertisement, right is the notice to be posted by landowners. Photoshop and Illustrator.
jstnlowe · 5 years ago
Quick mock-up comparing the sizes of the Onyx Boox Note Pro and the Nova Pro. I like the idea of an e-reader with a stylus, especially one that runs Android and sports a Wacom digitizer. Trying to decide if the Nova would be too small - and if the Note is really worth the premium price. Both are awesome devices; I just need to choose portability versus ergonomics.
jstnlowe · 5 years ago
Verifying Exported SSRS Report Contents
When it comes to the generation, export, and/or printing of reports, there are times when it is useful to be able to quickly determine the contemporaneity and integrity of a report - that is, to determine at a glance whether the contents of the report are recent and whether any changes have been made to them since generation.
There are much more powerful and comprehensive ways of ensuring report/data integrity, however the following applies to situations where "casual" inspection of a report is sufficient. What I’ll cover here is the use of a timestamp and the generation and use of a hash value.
Time Generated
Including a timestamp of report generation is a very common approach and, depending on how often the report is generated or its content is updated, displaying just the date may be sufficient. This can be accomplished through the SSRS built-in global ExecutionTime (referenced in an expression as Globals!ExecutionTime). I almost always include this in the footer of the reports I design, typically in a smaller font so that it does not distract the user from the contents of the report.
Content Hash
When it comes to validating the contents of the report, the approach that I tend to use is to call SQL Server's HASHBYTES() function, passing it values from the report. While in many cases you may be able to pass the full contents of the report into the function, the input is limited to 8000 bytes (prior to SQL Server 2016), so you may end up having to choose a subset of values. An added bonus of using a subset is that, as long as you do not indicate which values were used, it makes attempts to falsify a hash all the more difficult (not that this is a practical concern for most applications).
The output of the function is a value that is (in practice) unique to the input, which allows for the verification of the contents of the report: the hash on a previously-generated copy could be used to very quickly determine if the contents of the copy had been modified externally after generation, or if the values in the database had changed since its generation.
As of SQL Server 2016, the weaker hash algorithms have been deprecated, leaving us with SHA2_256 and SHA2_512, both of which produce output far too long to display comfortably on a report. To address this, it's perfectly acceptable to truncate the hash to a more reasonable length for our purposes.
Implementation
Let’s say that we have a report that displays the contents of a single personnel record. The dataset query looks like this:
SELECT [Id],[FirstName],[LastName],[Username],[HireDate],[AccessLevel],[Notes] FROM [Employees] WHERE [Id] = @userid
Now, let's say that our primary concern is the integrity of the [AccessLevel] and [Notes]. We'll include the following line in the SELECT statement to generate the hash using SHA256, convert it to varchar, and truncate it to the first sixteen characters:
LEFT(CONVERT(VARCHAR(64),HASHBYTES('SHA2_256',[AccessLevel]+[Notes]),2),16) AS [ReportHash]
The output could then be displayed in the footer of the report alongside the generation timestamp and would look something like this:
3A250D547F4A209A 06/03/2019 19:01:35
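For what it's worth, the same truncate-a-hash idea can be sketched outside of SQL. The Python snippet below mirrors the HASHBYTES/CONVERT/LEFT chain; the report_hash function and sample values are made up for illustration, and it assumes nvarchar columns (UTF-16LE bytes), so the output only matches SQL byte-for-byte if the encodings line up:

```python
import hashlib

def report_hash(access_level: str, notes: str, length: int = 16) -> str:
    # Mirror LEFT(CONVERT(VARCHAR(64), HASHBYTES('SHA2_256', ...), 2), 16):
    # SHA-256 over the concatenated values, hex-encoded, then truncated.
    # UTF-16LE is assumed to match how SQL Server stores nvarchar data.
    digest = hashlib.sha256((access_level + notes).encode("utf-16-le")).hexdigest()
    return digest[:length].upper()

h = report_hash("Admin", "Badge issued 2019-05-01")
print(h, len(h))  # 16 hex characters
# Any change to the hashed values produces a completely different hash:
print(report_hash("User", "Badge issued 2019-05-01") == h)  # False
```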
jstnlowe · 6 years ago
VMWare VM Doesn't Accept Keyboard Input after Conversion
Recently used vCenter Converter to create a VM from a Lenovo ThinkPad and on booting the VM afterwards, discovered that it would not accept keyboard input. Not even from the “Send Ctrl+Alt+Del” button. The mouse was fine, so I was able to use Windows’ on-screen keyboard to do some digging.
Device manager indicated that the keyboard device threw an error:
Windows cannot start this hardware device because its configuration information (in the Registry) is incomplete or damaged.
The issue is that the registry entry carries some deprecated values for things like Lenovo’s UltraNav and Synaptics devices. So we need to track it down and clean it up. Use RegEdit and navigate to the following key:
HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e96b-e325-11ce-bfc1-08002be10318}
Edit UpperFilters so that the only entry in it is kbdclass.
Reboot.
jstnlowe · 6 years ago
Converting Perfmon BLG with Relog
Relog is a great, probably underutilized utility that ships with Windows (\Windows\System32\). It lets you take just about any system performance log file and convert it to a number of formats (including dumping them into SQL). It also supports resampling, however the resampling (with -t) does not involve any sort of statistical operation; it just takes every nth sample. For example, a 1-min interval log with -t 60 specified would take every 60th sample in the log, effectively giving you a 1-hour interval.
Converting perfmon blg files to csv:
relog logfile.blg -f csv -o logfile.csv
Importing to a SQL store requires first that you create a SQL Server ODBC DSN:
relog logfile.blg -f sql -o SQL:!
Taking a logfile with 1 minute sample frequency and resample to a 1 hour frequency csv:
relog logfile.blg -f csv -o logfile.csv -t 60
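Relog's -t behavior - take every nth sample, no averaging - is easy to illustrate in a few lines of Python. This is a toy sketch, not a substitute for the real utility; the resample_every_nth name and the tiny CSV are invented for the example:

```python
import csv
import io

def resample_every_nth(csv_text: str, n: int) -> str:
    # Keep the header row plus every nth sample. Like relog -t, this is
    # plain decimation: no averaging or other statistical operation.
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, samples = rows[0], rows[1:]
    out = io.StringIO()
    csv.writer(out).writerows([header] + samples[::n])
    return out.getvalue()

# Three hours of 1-minute samples resampled to a 1-hour interval:
log = "time,cpu\n" + "\n".join(f"t{i},{i}" for i in range(180)) + "\n"
print(resample_every_nth(log, 60).splitlines())
# ['time,cpu', 't0,0', 't60,60', 't120,120']
```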
jstnlowe · 6 years ago
Retrieving Values from Nested Unnamed JSON Arrays
Came across an interesting case where the JSON was an unnamed array of unnamed arrays and the user needed to retrieve a value from within one of those inner arrays.
Here’s the JSON (which we’ll store in the variable @json):
DECLARE @json nvarchar(MAX) = N'[
  [
    {"name": "Supplier", "text": "Northstar"},
    {"name": "Product", "text": "NS40"},
    {"name": "Material", "text": "Steel"}
  ],
  [
    {"name": "Supplier", "text": "Qualiton"},
    {"name": "Product", "text": "VB-2-AC"},
    {"name": "Material", "text": "Aluminum"}
  ]
]'
Let’s say we want to grab “Northstar” using JSON_VALUE(). Unfortunately, we can’t just use JSON_VALUE() on its own because it can’t deal with the path required, so the first thing to do is to use JSON_QUERY() to retrieve the inner array we’re interested in:
SELECT JSON_QUERY(@json,'$[0]')
Which returns:
[ {"name": "Supplier","text": "Northstar"}, {"name": "Product","text": "NS40"}, {"name": "Material","text": "Steel"} ]
With the above, we can use JSON_VALUE() to grab the value we’re interested in:
SELECT JSON_VALUE(JSON_QUERY(@json,'$[0]'),'$[0].text')
Which returns: “Northstar”.
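For comparison, here's the same lookup done in Python with the standard json module - the list indexes mirror the SQL paths ('$[0]' becomes data[0], '$[0].text' becomes [0]["text"]):

```python
import json

# The same nested, unnamed arrays from the example:
raw = """
[
  [ {"name": "Supplier", "text": "Northstar"},
    {"name": "Product",  "text": "NS40"},
    {"name": "Material", "text": "Steel"} ],
  [ {"name": "Supplier", "text": "Qualiton"},
    {"name": "Product",  "text": "VB-2-AC"},
    {"name": "Material", "text": "Aluminum"} ]
]
"""

data = json.loads(raw)
inner = data[0]           # equivalent to JSON_QUERY(@json, '$[0]')
value = inner[0]["text"]  # equivalent to JSON_VALUE(..., '$[0].text')
print(value)              # Northstar
```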
jstnlowe · 6 years ago
Three-Pronged Approach to Phishing Emails
There has been a dramatic increase in phishing attempts over the past few years, with the most common tactic being to send out emails with spoofed "from" addresses that contain malicious attachments or instructions. At a glance, the better-crafted emails might seem legitimate, and attackers have gotten very good at exploiting human weaknesses. Educating users just is not enough. Authenticating emails is somewhat complex and relies heavily on the sender for proper implementation and the recipient for proper scrutiny.
In its current form, the best authentication approach uses three protocols, but before we go over them it is important to understand why we need them to begin with.
A Tale of Two Addresses
Email relies on something called the Simple Mail Transfer Protocol (SMTP) which is a standard that allows any computer to transmit an email message to any other computer. SMTP uses two "from" addresses: the "envelope from" and the "header from".
The "envelope from" is hidden in the email header and tells the recipient's mail server where the email should be returned ("bounced") to - like the return address on the outside of a physical piece of mail. The "header from" address is what you see displayed in the "From:" field in your email client - like the header you might see on the letter that was mailed inside an envelope. These two addresses do not have to be the same.
Unfortunately, both of these addresses can be spoofed; a malicious third party can craft an email to a recipient that appears to have come from a legitimate sender. Sometimes it can be difficult for a recipient to determine if the email is legitimate. Together, the three authentication protocols allow the recipient's email server to make that determination and handle the email appropriately.
Sender Policy Framework (SPF)
SPF is a "path-based" protocol that allows the recipient to verify that the email is indeed from the domain that it claims to be sent from. This is accomplished through the use of DNS records on the sender's domain that specify a whitelist of IP addresses that are allowed to send mail from that domain. If the email claims to have been sent by [email protected] but the email headers indicate that it was sent from a computer whose IP address does not appear on that whitelist, the recipient can treat the email accordingly. This check is performed before the recipient receives the contents of the email, allowing them to reject malicious emails without being exposed to anything that they might have contained. However, SPF only checks the "envelope from" address, meaning that the "header from" address can still be spoofed.
DomainKeys Identified Mail (DKIM)
DKIM is an "identity-based" protocol and forms the second layer. This protocol is a little more complicated. It essentially allows a sender to electronically sign emails in a way that can be verified by the recipient. This is done through the use of an asymmetric cryptographic algorithm and a set of two keys, one of which is known only to the sender (the "private" key) and the other is provided in a DNS record for the recipient to use (the "public" key). The cryptographic process is outside the scope of this article, but it essentially takes the email (the header, the content, or both), computes a special value called a "hash", and signs it with the private key in such a way that there is no way to back-calculate the private key from the signature and the email contents. This signature is sent along with the email. The public key is the other half of this unique key pair, which allows the recipient to verify that the email was indeed signed by someone who holds the private key.
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
DMARC is the final piece of this process and uses something called "alignment" to close any holes in the SPF and DKIM authentications. DMARC alignment checks that the "header from" domain matches the "envelope from" domain used by SPF, and that the "header from" domain matches the domain name used by the DKIM signature. To pass DMARC, an email has to pass at least one of SPF or DKIM with an aligned domain. DMARC also allows senders to tell email providers how to handle anything that fails authentication - to allow the email through (monitor), to send the email to a spam folder (quarantine), or to refuse delivery entirely (reject). DMARC also has a reporting piece which allows email providers to send reports back to the owner of a domain, providing aggregate and forensic information about the emails the provider received claiming to be sent from that domain.
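The alignment logic can be sketched in a few lines of Python. This is a simplified illustration, not a real DMARC implementation: the function names are invented, org_domain naively takes the last two labels (real implementations consult the Public Suffix List), and relaxed alignment is assumed:

```python
def org_domain(domain: str) -> str:
    # Naive organizational-domain extraction: keep the last two labels.
    # Real DMARC implementations use the Public Suffix List instead.
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_pass(header_from: str, envelope_from: str, spf_passed: bool,
               dkim_domain: str, dkim_passed: bool) -> bool:
    # Relaxed alignment: organizational domains must match.
    hf = org_domain(header_from)
    spf_aligned = spf_passed and org_domain(envelope_from) == hf
    dkim_aligned = dkim_passed and org_domain(dkim_domain) == hf
    # DMARC passes if at least one of SPF or DKIM passes *and* aligns.
    return spf_aligned or dkim_aligned

# Legitimate mail: SPF passed and the envelope domain aligns.
print(dmarc_pass("example.com", "mail.example.com", True, "other.net", False))  # True
# Spoofed header-from: neither passing check aligns with example.com.
print(dmarc_pass("example.com", "attacker.net", True, "attacker.net", True))    # False
```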