giffing with screencaps is kind of miserable, how do you guys do this
#usually i import video into layers but that method doesn't work with my iwtv s1 download#i am suffering#why are there so many steps#screencap with mpv -> delete duplicates -> load files into stack which ends up taking forever AND THEN i can gif as normal#the deleting duplicates thing is what gets me LIKE WHAT DO YOU MEAN MPV DUPLICATES FRAMES#lmao i'm whining bc i'm not used to it but i'm sure any gifmakers reading this are scoffing rn#tbd#daphne.txt
My GIF Making process: Screen capturing using MPV player, Organizing files, 3 Sharpening settings, Basic Coloring PSD + Actions set
This is a very long post so heads up.
I'll try to be as thorough and true as possible to the way I make my gifs (I already use Photoshop Actions which I've long since set up, but for this tutorial I'm reviewing them to show you the exact steps I've learned to create my gifs) and present them to you in a semi-coherent way. Also, please bear with me since English is my second language.
First things first. Below are the things and tools we need to do this:
Downloaded 4K or 1080p quality videos (let's all assume we know where to get these, especially for high definition movies and tv series, so this post doesn't get removed, okay?)
Adobe Photoshop CC. The CS versions can work as well, but full disclosure: I haven't created gifs using the CS versions since 2020. I'm currently using Adobe Photoshop 2024.
mpv player. Use mpv player (or any other video player with a screen grabber feature) to get those frames/screenshots. I used Adapter for the longest time, but I've switched to mpv because the press-to-screenshot feature while the video is playing has been a game changer, not to mention the ultimate time saver for me. For Adapter you need to play the video in another player (like VLC) to get the start and end timestamps of the scene you want to gif, which took me ages before I could even open Photoshop.
Anyway! Please stop reading this post for a moment and head over to this amazing tutorial by kylos. She perfectly explains how to install and use mpv player, both for Mac and Windows users.
One thing I have to share though: I had a tough time when I updated my macOS to Sonoma, since mpv suddenly either duplicates frames or, when I delete the duplicates, seems to skip frames :/ I searched and found a solution here, though it didn't work for me lol. My workaround in the meantime is decreasing the playback speed to 0.70 and then starting to screenshot. It's not the same as pre-Sonoma, but it works, so I'll accept it rather than have jumpy looking gifs.
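Since the duplicated frames have to be weeded out by hand anyway, here's a possible shortcut. This isn't part of the original workflow, just a sketch: byte-for-byte duplicate screenshots can be deleted with a small Python script. It assumes mpv saved numbered .png files into one folder; the function name and layout are mine.

```python
import hashlib
import os

def remove_duplicate_screenshots(folder):
    """Delete consecutive byte-identical screenshots.

    A sketch: assumes numbered .png screenshots in `folder`, so that
    sorting by filename gives playback order.
    """
    files = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    last_hash = None
    removed = []
    for name in files:
        path = os.path.join(folder, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest == last_hash:
            os.remove(path)  # identical to the previous frame, drop it
            removed.append(name)
        else:
            last_hash = digest
    return removed
```

Note this only catches frames that are exactly identical on disk; near-duplicates from re-encoding would need an image-diff approach instead.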
Now, after this part of kylos' tutorial:
you can continue reading the following sections of my gif tutorial below.
I want to share this little tip (sorry, this will only cater to Mac users) that I hope will be helpful for organizing the screenshots that mpv saved to the folder you selected. Because believe me, you don't want to go through 1k+ screenshots to select just 42-50 frames for your gif.
The Control + Command + N shortcut
This shortcut allows you to create a new folder from files you have pre-selected. As you can see below I have already created a couple of folders, and inside each folder I have selected screenshots that I want to include in one single gif. It's up to you how you want to divide yours, assuming you intend to create and post a Tumblr gifset rather than just one gif.
Another tip is making use of tags. Most, if not all, of the time I make supercorp gifs, so I tag blue for Kara and red (or green) for Lena, just being ridiculously on brand and all that.
Before we finally open Photoshop, there's one more thing I want to say. I know, please bear with me for the third? fourth? time.
It's helpful to organize everything into their respective folders so you know the total number of items/frames you have. This way, you can add or delete excess or unnecessary shots before loading them into Photoshop.
For example, below there are 80 screenshots of Kara inside this folder. For a 1:1 (540 x 540 px) Tumblr gif, Photoshop can only work with around 42-50 frames with color adjustments applied before the file exceeds Tumblr's 10 MB size limit.
Sometimes I skip this step because it can be exhausting (haha) and include everything, so I can decide visually which frames to keep. You'll understand what I mean later on. But it's important to keep Tumblr's 10 MB file size limit in mind: fewer frames, or just the right amount, is better.
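If you want to automate that pre-check, here's a tiny sketch. The 50-frame ceiling is just the number from this post, and the folder layout is assumed; adjust to taste.

```python
import os

def count_frames(folder, max_frames=50):
    """Warn if a screenshot folder holds more frames than the rough
    42-50 limit suggested for a 540 x 540 Tumblr gif (illustrative)."""
    frames = [f for f in os.listdir(folder) if f.lower().endswith(".png")]
    excess = len(frames) - max_frames
    if excess > 0:
        print(f"{len(frames)} frames: consider deleting {excess} before importing")
    return len(frames)
```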
So, with the screenshot organization out of the way, let's finally head over to Photoshop.
Giffing in Photoshop, yay!
Let's begin by navigating to File > Scripts > Load Files into Stack…
The Load Layers window will appear. Click the Browse button next.
Find your chosen screenshots folder, press Command + A to select all files from that folder then click Open. Then click OK.
After importing and stacking your files, Photoshop should display the following view:
By the way, I'll be providing the clip I've used in this tutorial, so if you want to use it to follow along, be my guest :)
If you haven't already opened your Timeline panel, navigate to Window > Timeline.
Now, let's focus on the Timeline panel for the next couple of steps.
Click Create Video Timeline, then you'll have this:
Now click the menu icon on the top right corner then go to Convert Frames > Make Frames from Clips
Still working on the Timeline panel, click the bottom left icon this timeāthe icon with the three tiny boxesāto Convert to Frame Animation
Select Make Frames From Layers from the top right corner menu button.
So now you have this:
Go and click the top right menu icon again to Select All Frames
Then click the small dropdown icon to set another value for Frame Delay. Select Other…
The best for me, and for most, is 0.05 (about 20 frames per second), but you can always play around and see what works for you.
Click the top right menu icon again to Reverse Frames.
I think Photoshop has long since fixed this issue. Usually the first animation frame is empty, so I just delete it, but going through all these steps now there seems to be none of that. Anyway, the delete icon is the last one in the row of buttons at the bottom of the Timeline panel.
Yay, now we can have our first proper GIF preview of a thirsty Lena
Press spacebar to watch your gif play for the very first time! After an hour and a half of selecting and cutting screenshots! Play it some more. No really, I'm serious. I do this so that even as early (lol) as this part of the gif making process, I can see which frames I can/should delete to stay within the 10 MB file size limit. You can also do it at the end, of course.
Now, let's go to the next important steps of this tutorial post, which I've numbered below.
Crop and resize to meet Tumblr's required dimensions. The width value should be either 540px, 268px, or 177px.
Convert the gif to a Smart Object for sharpening.
Apply lighting and basic color adjustments before the heavy coloring. I will be sharing the base adjustment layers I use for my gifs.
1. Crop and Resize
Click on the Crop tool (shortcut: the C key)
I like my GIFs big so I always set this to 1:1 ratio if the scene allows it. Press the Enter key after selecting the area of the frame that you want to keep.
Side note: If you find that after cropping, you want to adjust the image to the left or another direction, simply unselect the Delete Cropped Pixels option. This way, you will still have the whole frame area available to crop again as needed and as you prefer.
Now we need to resize our gif, and the shortcut for that is Command + Opt + I. Type in 540 as the width, then the height will automatically change to follow the ratio you've set while cropping.
540 x 540 px for 1:1
540 x 405 px for 4:3
540 x 304 px for 16:9
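For what it's worth, those heights are just the 540 px width divided by the aspect ratio, rounded to the nearest pixel (16:9 gives 303.75, hence 304):

```python
# Height for a given width and aspect ratio, rounded to whole pixels.
def height_for(width, ratio_w, ratio_h):
    return round(width * ratio_h / ratio_w)

print(height_for(540, 1, 1))   # 540
print(height_for(540, 4, 3))   # 405
print(height_for(540, 16, 9))  # 304
```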
For the Resample value I prefer Bilinear, but you can always select the other options to see what you like best.
Click OK. Then Command + 0 and Command + - to properly view those 540 pixels.
Now we get to the exciting part :) the sharpen settings!
2. Sharpen
First we need to have all these layers "compressed" into a single smart object to which we can apply filters.
Select this little button on the bottom left corner of the Timeline panel.
Select > All Layers
Then go to Filter > Convert for Smart Filters
Just click OK when a pop-up shows up.
Now you should have this view on the Layers panel:
Now I have 3 sharpen settings to share, but I'll have download links to the Action packs at the end of this long ass tutorial, so if you want to skip ahead, feel free to do so.
Sharpen v1
Go to Filter > Sharpen > Smart Sharpenā¦
Below are my settings. I don't adjust anything under Shadows/Highlights.
Amount: 500
Radius: 0.4
Click OK then do another Smart Sharpen but this time with the below adjustments.
Amount: 12
Radius: 10.0
As you can see, Lena's beautiful eyes are "popping out" now with these filters applied. Click OK.
Now we need to Convert to Frame Animation. Follow the steps below.
Click on the menu icon at the top right corner of the Timeline panel, then click Convert Frames > Flatten Frames into Clips
Then Convert Frames > Convert to Frame Animation
One more click to Make Frames From Layers
Delete the first frame then Select All then Set Frame Delay to 0.05
and there you have it! Play your GIF and make sure it's just around 42-50 frames. This is the time to select and delete.
To preview and save your GIF go to File > Export > Save for Web (Legacy)…
Below are my Export settings. Make sure the file size lands around 9.2-9.4 MB at most, not right at the 10 MB limit.
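If you'd rather not eyeball the exported size, here's a quick sketch of the check. The 9.4 MB ceiling is the number from this post, and the file path is illustrative.

```python
import os

def gif_fits_tumblr(path, limit_bytes=int(9.4 * 1024 * 1024)):
    """Report an exported gif's size and whether it fits under the
    ~9.4 MB ceiling (Tumblr's hard limit is 10 MB)."""
    size = os.path.getsize(path)
    print(f"{path}: {size / 1024 / 1024:.2f} MB")
    return size <= limit_bytes
```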
This time I got away with 55 frames, but that's because I haven't applied lighting and color adjustments yet, and the smart sharpen settings aren't too heavy, so let's take that into consideration.
Sharpen v1 preview:
Sharpen v2
Go back to this part of the tutorial and apply the v2 settings.
Smart Sharpen 1:
Amount: 500
Radius: 0.3
Smart Sharpen 2:
Amount: 20
Radius: 0.5
We're adding a new type of filter, Reduce Noise (Filter > Noise > Reduce Noise...), with the below settings.
Then one last Smart Sharpen:
Amount: 500
Radius: 0.3
Your Layers panel should look like this:
Then do the Convert to Frame Animation section again and see the preview below.
Sharpen v2 preview:
Sharpen v3:
Smart Sharpen 1:
Amount: 500
Radius: 0.4
Smart Sharpen 2:
Amount: 12
Radius: 10.0
Reduce Noise:
Strength: 5
Preserve Details: 50%
Reduce Color Noise: 0%
Sharpen Details: 50%
Sharpen v3 preview:
And here they are next to each other with coloring applied:
v1
v2
v3
Congratulations, you've made it to the end of the post
As promised, here is the download link to all the files I used in this tutorial which include:
supercorp 2.05 Crossfire clip
3 PSD files with sharpen settings and basic coloring PSD
Actions set
As always, if you're feeling generous, here's my Ko-fi link :) Thank you guys, and I hope this tutorial helps you and makes you love gif making.
P.S. In the next post I'll be sharing more references I found helpful especially with coloring. I just have to search and gather them all.
-Jill
#tutorial#gif tutorial#photoshop tutorial#gif making#sharpening#sharpening tutorial#photoshop#photoshop resources#psd#psd coloring#gif coloring#supercorp#supercorpedit#lena luthor#supergirl#my tutorial#this has been a long time coming#guys. i'm BEGGING you. use the actions set - it was a pain doing all this manually again ngl LMAO#i've been so used to just playing the actions#so this has been a wild refresher course for me too
Version 422
youtube
windows
zip
exe
macOS
app
linux
tar.gz
It was hydrus's birthday this week!
I had a great week. I mostly fixed bugs and improved quality of life.
tags
It looks like when I optimised tag autocomplete around v419, I accidentally broke the advanced 'character:*'-style lookups (which you can enable under tags->manage tag display and search). I regret this is not the first time these clever queries have been broken by accident. I have fixed them this week and added several sets of unit tests to ensure I do not repeat this mistake.
These expansive searches should also run faster and cancel faster, and there are a few neat new cache optimisations that check whether an expensive search's results for 'char' or 'character:' can quickly provide results for a later 'character:samus'. Overall, these queries should be a bit better all around. Let me know if you have any more trouble.
The single-tag right-click menu now always shows sibling and parent data, and for all services. Each service stacks siblings/parents into tall submenus, but the tall menu feels better to me than nested, so we'll see how that works out IRL. You can click any sibling or parent to copy to clipboard, so I have retired the 'copy' menu's older and simpler 'siblings' submenu.
misc
Some websites have a 'redirect' optimisation where if a gallery page has only one file, it moves you straight to the post page for that file. This has been a problem for hydrus for some time, and particularly affected users who were doing md5: queries on certain sites, but I believe the downloader engine can now handle it correctly, forwarding the redirect URL to the file queue. This is working on some slightly shaky tech that I want to improve more in future, but let me know how you get on with it.
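To illustrate the idea (this is not hydrus's actual code, just the decision logic described above): if a gallery fetch comes back as a 3XX redirect to a post-style URL, the redirected URL is forwarded to the file queue instead of being parsed as a gallery.

```python
# Sketch of the gallery-redirect handling described above. All names
# are illustrative; `is_post_url` stands in for real URL-class matching.
def handle_gallery_response(status, location, is_post_url):
    """Return ('queue_file', url) when a gallery URL redirected to a
    post page, else ('parse_gallery', None) to parse as normal."""
    if status in (301, 302, 303, 307, 308) and location is not None:
        if is_post_url(location):
            return ("queue_file", location)
    return ("parse_gallery", None)
```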
The UPnPc executables (miniupnp, here https://miniupnp.tuxfamily.org/) are no longer bundled in the 'bin' directory. These files were a common cause of anti-virus false positives every few months, and are only used by a few advanced users to set up servers and hit network->data->manage upnp, so I have decided that new users will have to install it themselves going forward. Trying to perform a UPnP operation when the exe cannot be found now gives a popup message talking about the situation and pointing to the new readme in the bin directory.
After working with a user, it seems that some clients may not have certain indices that speed up sibling and parent lookups. I am not totally sure if this was due to hard drive damage or broken update logic, but the database now looks for and heals this problem on every boot.
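The healing trick works because SQLite's CREATE INDEX IF NOT EXISTS is a no-op when the index is already present, so it can safely run on every boot. A minimal sketch; the table and index names here are invented for illustration, not hydrus's real schema:

```python
import sqlite3

def heal_indices(conn):
    """Recreate lookup indices if missing. Safe to run every boot:
    CREATE INDEX IF NOT EXISTS does nothing when the index exists.
    Table/index names are illustrative."""
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_siblings_bad ON tag_siblings (bad_tag_id)"
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_parents_child ON tag_parents (child_tag_id)"
    )
    conn.commit()
```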
parsing (advanced)
String converters can now encode or decode by 'unicode escape characters' ('\u0394'-to-'Δ') and 'html entities' ('&amp;'-to-'&'). Also, when you tell a json formula to fetch 'json' rather than 'string', it no longer escapes unicode.
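For reference, the two new conversion types correspond roughly to these Python standard library operations:

```python
import html

# 'unicode escape characters': the six-character string \u0394 <-> Δ
escaped = "\\u0394"
decoded = escaped.encode("ascii").decode("unicode_escape")
print(decoded)  # Δ

# 'html entities': &amp; <-> &
print(html.unescape("&amp;"))  # &
print(html.escape("&"))        # &amp;
```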
The hydrus downloader system no longer needs the borked 'bytes' decode for a 'file hash' content parser! These content parsers now have a 'hex'/'base64' dropdown in their UI, and you just deliver that string. This ugly situation was a legacy artifact of python2, now finally cleared up. Existing string converters now treat 'hex' or 'base64' decode steps as a no-op, and existing 'file hash' content parsers should update correctly to 'hex' or 'base64' based on what their string converters were doing previously. The help is updated to reflect this. hex/base64 encodes are still in as they are used for file lookup script hash initialisation, but they will likely get similar treatment in future.
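The hex/base64 choice only affects the string form; both decode to the same raw bytes, which is all the 'file hash' content parser ultimately needs. A sketch with an illustrative, made-up hash:

```python
import base64

# A made-up sha256-sized hash: 64 hex characters = 32 raw bytes.
hex_hash = "2e1b0c6f" * 8
raw = bytes.fromhex(hex_hash)

# The same bytes in base64 string form.
b64_hash = base64.b64encode(raw).decode("ascii")

# Either string representation decodes back to identical raw bytes.
assert base64.b64decode(b64_hash) == raw
print(len(raw))  # 32
```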
birthday
On December 14th, 2011, the first non-experimental beta of hydrus was released. This week marks nine years. It has been a lot of work and a lot of fun.
Looking back on 2020, we converted a regularly buggy and crashy new Qt build to something much faster and nicer than we ever had with wx. Along with that came mpv and smooth video and finally audio playing out of the client. The PTR grew to a billion mappings(!), and with that came many rounds of database optimisation, speeding up many complicated tag and file searches. You can now save and load those searches, and most recently, search predicates are now editable in-place. Siblings and parents were updated to completely undoable virtual systems, resulting in much faster boot time and thumbnail load and greatly improved tag relationship logic. Subscriptions were broken into smaller objects, meaning they load and edit much faster, and several CPU-heavy routines no longer interrupt or judder browsing. And the Client API expanded to allow browsing applications and easier login solutions for difficult sites.
There are still a couple thousand things I would like to do, so I hope to keep going into 2021. I deeply appreciate the feedback, help, and support over the years. Thank you!
If you would like to further support my work and are in a position to do so, my simple no-reward Patreon is here: https://www.patreon.com/hydrus_dev
full list
advanced tags:
fixed the search code for various 'total' autocomplete searches like '*' and 'namespace:*', which were broken around v419's optimised regular tag lookups. these search types also have a round of their own search optimisations and improved cancel latency. I am sorry for the trouble here
expanded the database autocomplete fetch unit tests to handle these total lookups so I do not accidentally kill them due to typo/ignorance again
updated the autocomplete result cache object to consult a search's advanced search options (as under _tags->manage tag display and search_) to test whether a search cache for 'char' or 'character:' is able to serve results for a later 'character:samus' input
optimised file and tag search code for cases where someone might somehow sneak an unoptimised raw '*:subtag' or 'namespace:*' search text in
updated and expanded the autocomplete result cache unit tests to handle the new tested options and the various 'total' tests, so they aren't disabled by accident again
cancelling an autocomplete query with a gigantic number of results should now cancel much quicker when you have a lot of siblings
the single-tag right-click menu now shows siblings and parents info for every service, and will work on taglists in the 'all known tags' domain. clicking on any item will copy it to clipboard. this might result in megatall submenus, but we'll see. tall seems easier to use than nested per-service for now
the more primitive 'siblings' submenu on the taglist 'copy' right-click menu is now removed
right-click should no longer raise an error on esoteric taglists (such as tag filters and namespace colours). you might get some funky copy strings, which is sort of fun too
the copy string for the special namespace predicate ('namespace:*anything*') is now 'namespace:*', making it easier to copy/paste this across pages
.
misc:
the thumbnail right-click 'copy/open known urls by url class' commands now exclude those urls that match a more specific url class (e.g. /post/123456 vs /post/123456/image.jpg)
miniupnpc is no longer bundled in the official builds. this executable is only used by a few advanced users and was a regular cause of anti-virus false positives, so I have decided new users will have to install it manually going forward.
the client now looks for miniupnpc in more places, including the system path. when missing, its error popups have better explanation, pointing users to a new readme in the bin directory
UPnP errors now have more explanation for 'No IGD UPnP Device' errortext
the database's boot-repair function now ensures indices are created for: non-sha256 hashes, sibling and parent lookups, storage tag cache, and display tag cache. some users may be missing indices here for unknown update logic or hard drive damage reasons, and this should speed them right back up. the boot-repair function now broadcasts 'checking database for faults' to the splash, which you will see if it needs some time to work
the duplicates page once again correctly updates the potential pairs count in the 'filter' tab when potential search finishes or filtering finishes
added the --boot_debug launch switch, which for now prints additional splash screen texts to the log
the global pixmaps object is no longer initialised in client model boot, but now on first request
fixed type of --db_synchronous_override launch parameter, which was throwing type errors
updated the client file readwrite lock logic and brushed up its unit tests
improved the error when the client database is asked for the id of an invalid tag that collapses to zero characters
the qss stylesheet directory is now mapped to the static dir in a way that will follow static directory redirects
.
downloaders and parsing (advanced):
started on better network redirection tech. if a post or gallery URL is 3XX redirected, hydrus now recognises this, and if the redirected url is the same type and parseable, the new url and parser are swapped in. if a gallery url is redirected to a non-gallery url, it will create a new file import object for that URL and say so in its gallery log note. this tentatively solves the 'booru redirects one-file gallery pages to post url' problem, but the whole thing is held together by prayer. I now have a plan to rejigger my pipelines to deal with this situation better, ultimately I will likely expose and log all redirects so we can always see better what is going on behind the scenes
added 'unicode escape characters' and 'html entities' string converter encode/decode types. the former does '\u0394'-to-'Δ', and the latter does '&amp;'-to-'&'
improved my string converter unit tests and added the above to them
in the parsing system, decoding from 'hex' or 'base64' is no longer needed for a 'file hash' content type. these string conversions are now no-ops and can be deleted. they converted to a non-string type, an artifact of the old way python 2 used to handle unicode, and were a sore thumb for a long time in the python 3 parsing system. 'file hash' content types now have a 'hex'/'base64' dropdown, and do decoding to raw bytes at a layer above string parsing. on update, existing file hash content parsers will default to hex and attempt to figure out if they were a base64 (however if the hex fails, base64 will be attempted as well anyway, so it is not critically important here if this update detection is imperfect). the 'hex' and 'base64' _encode_ types remain as they are still used in file lookup script hash initialisation, but they will likely be replaced similarly in future. hex or base64 conversion will return in a purely string-based form as technically needed in future
updated the make-a-downloader help and some screenshots regarding the new hash decoding
when the json parsing formula is told to get the 'json' of a parsed node, this no longer encodes unicode with escape characters (\u0394 etc...)
duplicating or importing nested gallery url generators now refreshes all internal reference ids, which should reduce the likelihood of accidentally linking with related but differently named existing GUGs
importing GUGs or NGUGs through Lain easy import does the same, ensuring the new objects 'seem' fresh to a client and should not incorrectly link up with renamed versions of related NGUGs or GUGs
added unit tests for hex and base64 string converter encoding
next week
Last week of the year. I could not find time to do the network updates I wanted to this week, so that would be nice. Otherwise I will try and clean and fix little things before my week off over Christmas. The 'big thing to work on next' poll will go up next week with the 423 release posts.
Version 499
youtube
windows
Qt5 zip
Qt6 zip
Qt6 exe
macOS
Qt5 app
Qt6 app
linux
Qt5 tar.gz
Qt6 tar.gz
I had a great week catching up on bug reports.
highlights
The changelog is quite long this week. It is mostly smaller bug fixes that aren't interesting to read. Some users have reported crashes in Qt6, and I got one crash IRL myself, which is very rare for me. I am not totally sure what has been causing these, but I fixed some suspects this week, so let me know if things are better.
I am also updating mpv for Windows users this week. It is a big jump, about a year's worth of updates, and I feel like it is a little smoother. It is also more stable and supports weirder files.
And thanks to a user's work, there is an expansion to the twitter downloader. You can now download a twitter user's likes, and from twitter lists, and--if you can find any--twitter collections.
If you are a big 'duplicates' user, there is a subtle change in the potential duplicates search this week. Check the changelog for full details, but I think I've fixed the 'I set "must be pixel dupes" but I saw a pair that wasn't' issue.
full list
mpv:
updated the mpv version for Windows. this is more complicated than it sounds and has been fraught with difficulty at times, so I do not try it often, but the situation seems to be much better now. today we are updating about twelve months. I may be imagining it, but things seem a bit smoother. a variety of weird file support should be better--an old transparent apng that I know crashed older mpv no longer causes a crash--and there's some acceleration now for very new CPU chipsets. I've also insisted on precise seeking (rather than keyframe seeking, which some users may have defaulted to). mpv-1.dll is now mpv-2.dll
I don't have an easy Linux testbed any more, so I would be interested in a Linux 'running from source' user trying out a similar update and letting me know how it goes. try getting the latest libmpv1 and then update python-mpv to 1.0.1 on pip. your 'mpv api version' in _help->about_ should now be 2.0. this new python-mpv seems to have several compatibility improvements, which is what has plagued us before here
mpv on macOS is still a frustrating question mark, but if this works on Linux, it may open another door. who knows, maybe the new version doesn't crash instantly on load
.
search change for potential duplicates:
this is subtle and complicated, so if you are a casual user of duplicates, don't worry about it. duplicates page = better now
for those who are more invested in dupes, I have altered the main potential duplicate search query. when the filter prepares some potential dupes to compare, or you load up some random thumbs in the page, or simply when the duplicates processing page presents counts, this all now only tests kings. previously, it could compare any member of a duplicate group to any other, and it would nominate kings as group representatives, but this led to some odd situations where if you said 'must be pixel dupes', you could get two low quality pixel dupes offering their better king(s) up for actual comparison, giving you a comparison that was not a pixel dupe. same for the general searching of potentials, where if you search for 'bad quality', any bad quality file you set as a dupe but didn't delete could get matched (including in 'both match' mode), and offer a 'nicer' king as tribute that didn't have the tag. now, it only searches kings. kings match searches, and it is those kings that must match pixel dupe rules. this also means that kings will always be available on the current file domain, and no fallback king-nomination-from-filtered-members routine is needed any more
the knock-on effect here is minimal, but in general all database work in the duplicate filter should be a little faster, and some of your numbers may be a few counts smaller, typically after discounting weird edge case split-up duplicate groups that aren't real/common enough to really worry about. if you use a waterfall of multiple local file services to process your files, you might see significantly smaller counts due to kings not always being in the same file domain as their bad members, so you may want to try 'all my files' or just see how it goes--might be far less confusing, now you are only given unambiguous kings. anyway, in general, I think no big differences here for most users except better precision in searching!
but let me know how you get on IRL!
.
misc:
thanks to a user's hard work, the default twitter downloader gets some upgrades this week: you can now download from twitter lists, a twitter user's likes, and twitter collections (which are curated lists of tweets). the downloaders still get a lot of 'ignored' results for text-only tweets, but this adds some neat tools to the toolbox
thanks to a user, the Client API now reports brief caching information and should boost Hydrus Companion performance (issue #605)
the simple shortcut list in the edit shortcut action dialog now no longer shows any duplicates (such as 'close media viewer' in the dupes window)
added a new default reason for tag petitions, 'clearing mass-pasted junk'. 'not applicable' is now 'not applicable/incorrect'
in the petition processing page, the content boxes now specifically say ADD or DELETE to reinforce what you are doing and to differentiate the two boxes when you have a pixel petition
in the petition processing page, the content boxes now grow and shrink in height, up to a max of 20 rows, depending on how much stuff is in them. I _think_ I have pixel perfect heights here, so let me know if yours are wrong!
the 'service info' rows in review services are now presented in nicer order
updated the header/title formatting across the help documentation. when you search for a page title, it should now show up in results (e.g. you type 'running from source', you get that nicely at the top, not a confusing sub-header of that article). the section links are also all now capitalised
misc refactoring
.
bunch of fixes:
fixed a weird and possible crash-inducing scrolling bug in the tag list some users had in Qt6
fixed a typo error in file lookup scripts from when I added multi-line support to the parsing system (issue #1221)
fixed some bad labels in 'speed and memory' that talked about 'MB' when the widget allowed setting different units. also, I updated the 'video buffer' option on that page to a full 'bytes value' widget too (issue #1223)
the 'bytes value' widget, where you can set '100 MB' and similar, now gives the 'unit' dropdown a little more minimum width. it was getting a little thin on some styles and not showing the full text in the dropdown menu (issue #1222)
fixed a bug in similar-shape-search-tree-rebalancing maintenance in the rare case that the queue of branches in need of regeneration become out of sync with the main tree (issue #1219)
fixed a bug in archive/delete filter where clicks that were making actions would start borked drag-and-drop panning states if you dragged before releasing the click. it would cause warped media movement if you then clicked on hover window greyspace
fixed the 'this was a cloudflare problem' scanner for the new 1.2.64 version of cloudscraper
updated the popupmanager's positioning update code to use a nicer event filter and gave its position calculation code a quick pass. it might fix some popup toaster position bugs, not sure
fixed a weird menu creation bug involving a QStandardItem appearing in the menu actions
fixed a similar weird QStandardItem bug in the media viewer canvas code
fixed an error that could appear on force-emptied pages that receive sort signals
next week
We are due a cleanup week, and I think I need one. Just some simple background work for a bit so I can rest, and I'll fit in some other small work like this week. As usual, I don't have anything exciting planned for the anniversary of v500. I was thinking about making that the week for the exclusive switch to Qt6, but with some users still getting crashes, I am less keen.
Thanks everyone!
Version 451
youtube
windows
zip
exe
macOS
app
linux
tar.gz
I had a great week cleaning code and fixing bugs. If you have a big database, it will take a minute to update this week.
all misc this week, most bug fixes
I fixed a critical bug in tag siblings. It was causing some tag siblings to be forgotten, particularly on local tag services, and particularly when a tag had a sibling removed and added (e.g. 'delete A->B, then add A->C') in the same transaction. I will keep working here and will trigger a new sibling reprocess in the future just as 450 did so we can fix more PTR issues.
The new content-based processing tracking had a couple more issues. Some Linux users, in particular, it just broke for, due to a SQLite version issue. I have fixed that, and I have fixed some issues it caused for IPFS. There are new unit tests to make sure this won't happen again.
I fixed an issue with sessions recently not saving thumbnail order correctly!
I fixed issues with some of the new Client API file search parameters not working right!
Big video files should import a bit faster, and will show more status updates as they do their work.
We have had more anti-virus problems in recent weeks. We'd hoped building on github would eliminate them, but it hasn't completely. A user helped me trace one issue to the Windows installer. We have 'corrected' it (believe it or not, the fix was removing the 'open help' checkbox on the final page of the install wizard, and yes the reasons for why this was causing problems are ridiculous), so with luck there will be fewer problems. Thank you for the reports, and let me know if you have more trouble!
full list
stupid anti-virus thing:
we have had several more anti-virus false positives just recently. we discovered that at least one testbed used by these companies was testing the 'open html help' checkbox in the installer, which then launched Edge on the testbed, and then launched the Windows Update process for Edge and Skype, which was somehow interacting with UAC and thus considered suspicious activity owned by the hydrus installer process, lmao. thereafter, it seems the installer exe's DNS requests were somehow being cross-connected with the client.exe scan as that was identified as connected with the installer. taking that checkbox out as a test produced a much cleaner scan. there is a limit to how much of this nonsense I will accommodate, but this week we are trying a release without that 'open help' link in the installer, let's see how it goes
semi-related, I brushed up the install path message in the installer and clarified help->help will open the help in the first-start welcome popup message
.
misc:
I fixed a critical bug in tag sibling storage when a 'bad' tag's mapping is removed (e.g. 'delete A->B') and added ('add A->C') in the same transaction, and in a heap of fun other situations besides, that mostly resulted in the newly added sibling being forgotten. the bug was worse when this was on a local tag service via the manage siblings dialog. this problem is likely the cause of some of our weird sibling issues on clients that processed certain repository updates extremely quickly. I will keep investigating here for more issues and trigger another sibling reset for everyone in the future
the 'show some random pairs' button on the duplicates page is nicer--the 'did not find any pairs' notification is a popup rather than an annoying error dialog, and when there is nothing found, it also clears the page of thumbs. it also tries to guess if you are at the end of the current search, and if so, it will not do an auto re-fetch and will clear the page without producing the popup message
fixed a bug that meant file order was not being saved correctly in sessions! sorry for the trouble!
import of videos is now a little faster, as the output of the ffmpeg call that checks resolution and duration is now retained and reused to check for the presence of an audio channel
when files are imported, the status messages are now much more granular. large and CPU-heavy files should move noticeably from hash generation to filetype calculation to metadata to actual file copying
fixed a database query bug in the new processing progress tracking code that was affecting some (perhaps older) versions of sqlite
when you trash/untrash/etc... a file in the media viewer, the top hover text now updates to show the file location change
fixed a typo bug in the new content type tracking that broke ipfs pinning yet again, sorry for the trouble! (issue #955)
I fleshed out my database pending and num_pending tests significantly. now all uploadable content types are tested, so ipfs should not break at the _db_ level again
the page tab menu now clumps the 'close x pages' into a dynamic submenu when there are several options and excludes duplicates (e.g. 'close others' and 'close to the left' when you right-click the rightmost page)
the page tab menu also puts the 'move' actions under a submenu
the page tab menu now has 'select' submenu for navigating home/left/right/end like the shortcuts
fixed some repository content type checking problems: showing petition pages when the user has moderation privileges on a repository, permission check when fetching number of petitions, and permissions check when uploading files
fixed a typo in the 'running in wine' html that made the whole document big and bold
across the program, a 'year' for most date calculations like 'system:time imported: more than a year ago' is now 365 days (up from 12 x 30-day months). these will likely be calendar-calculated correctly in future, but for now we'll stick with something simple that is just a bit more accurate
fixed a bug in mpv loop-seek when the system lags for a moment just when the user closes the media viewer and the video loops back to start
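The 'year is now 365 days' change in the list above boils down to a simple timestamp comparison. A minimal sketch, with illustrative names rather than hydrus's real ones:

```python
import time

SECONDS_IN_DAY = 86400
# a 'year' is now 365 days, up from 12 x 30-day months (360 days)
SECONDS_IN_YEAR = 365 * SECONDS_IN_DAY

def imported_more_than_a_year_ago(import_timestamp, now=None):
    """Sketch of a 'system:time imported: more than a year ago' test."""
    if now is None:
        now = time.time()
    return (now - import_timestamp) > SECONDS_IN_YEAR

# 364 days ago is not yet a year; 366 days ago is more than a year
now = 1_700_000_000
assert not imported_more_than_a_year_ago(now - 364 * SECONDS_IN_DAY, now)
assert imported_more_than_a_year_ago(now - 366 * SECONDS_IN_DAY, now)
```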
.
client api:
expanded my testing system to handle more 'read' database parameter testing, and added some unit tests for the new client api file search code
fixed the 'file_sort_asc' in the new client api file search call. it was a stupid untested typo, thank you for the reports (issue #959)
fixed 'file_service_name' and 'tag_service_name' when they are GET parameters in the client api
I fleshed out the file search sort help to say what ascending/descending means for each file sort type
.
boring database cleanup:
to cut down on redundant spam, the new query planner profile mode only plans each unique query text once per run of the mode
also fixed an issue in the query planner with multiple-row queries with an empty list argument
refactored the tag sibling and parent database storage and lookup code out to separate db modules
untangled and optimised a couple of sibling/parent lookup chain regeneration calls
moved more sibling and parent responsibility to the new modules, clearing some inline hardcoding out of the main class
cleaned up a bunch of sibling, parent, and display code generally, and improved communication between these modules, particularly in regards to update interactions and display sync
the similar files data tables are migrated to more appropriate locations. previously, they were all in client.caches.db, now the phash definition and file mapping tables are in master, and the similar files search record is now in main
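The 'plan each unique query text once' dedupe from the cleanup list above is essentially memoisation on the query string. A sketch, assuming SQLite's `EXPLAIN QUERY PLAN` and with names invented for illustration:

```python
import sqlite3

class QueryPlannerProfiler:
    """Only fetch a query plan the first time each unique query text runs."""

    def __init__(self, conn):
        self._conn = conn
        self._seen_queries = set()
        self.plans = []

    def execute(self, query, params=()):
        if query not in self._seen_queries:
            # plan this query text once per run of the profile mode
            self._seen_queries.add(query)
            plan_rows = self._conn.execute(
                'EXPLAIN QUERY PLAN ' + query, params).fetchall()
            self.plans.append((query, plan_rows))
        return self._conn.execute(query, params)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE files ( hash_id INTEGER PRIMARY KEY )')
profiler = QueryPlannerProfiler(conn)
for hash_id in range(3):
    profiler.execute('SELECT 1 FROM files WHERE hash_id = ?', (hash_id,))
print(len(profiler.plans))  # 1 -- the repeated query text is only planned once
```

Keying on the raw query text (with `?` placeholders, before parameter substitution) is what collapses the multi-row spam into one plan per statement.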
next week
I've managed to catch up on some critical issues, and some IRL stuff also eased up, so I have some breathing room. I want to put some more time into multiple local file services, which has been delayed--likely starting with some way to search over previously deleted files.
0 notes
Text
Version 399
youtube
windows
zip
exe
macOS
app
linux
tar.gz
source
tar.gz
I had a great week tidying up smaller issues before my vacation.
all small items this week
You can now clear a file's 'viewing stats' back to zero from their right-click menus. I expect to add an edit panel here in future. Also, I fixed an issue where duplicate filters were still counting viewing time even when set in the options not to.
When I plugged the new shortcuts system's mouse code into the media viewer last week, it accidentally worked too well--even clicks were being propagated from the hover windows to the media viewer! This meant that simple hover window clicks were triggering filter actions. It is fixed, and now only keyboard shortcuts will propagate. There are also some mouse wheel propagation fixes here, so if you wheel over the taglist, it shouldn't send a wheel (i.e. previous/next media) event up once you hit the end of the list, but if you wheel over some hover window greyspace, it should.
File delete and undelete are now completely plugged into the shortcut system, with the formerly hardcoded delete key and shift+delete key moved to the 'media' shortcut set by default. Same for the media viewer's zoom_in and zoom_out and ctrl+mouse wheel, under the 'media viewer - all' set. Feel free to remap them.
The new tag autocomplete options under services->tag display and search now allow you to also search namespaces with a flat 'namespace:', no asterisk. The logic here is improved as well, with the 'ser'->'series:metroid' search type automatically assuming the 'namespace:' and 'namespace:*' options, with the checkboxes updating each other.
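The matching logic above can be sketched as a small predicate. The option names here are made up for illustration; hydrus's real implementation lives in its tag search code:

```python
def tag_matches_search(search_text, tag,
                       search_namespaces_into_full_tags=False,
                       allow_bare_namespace=False):
    # tag is 'namespace:subtag', or just 'subtag' with no colon
    namespace, _, subtag = tag.partition(':')
    if search_text.endswith(':'):
        # flat 'series:' with no asterisk -- match every tag in that namespace
        if allow_bare_namespace or search_namespaces_into_full_tags:
            return namespace == search_text[:-1]
        return False
    if search_namespaces_into_full_tags:
        # 'ser' -> 'series:metroid': namespace prefixes count as matches too
        if namespace.startswith(search_text):
            return True
    return tag.startswith(search_text) or subtag.startswith(search_text)

assert tag_matches_search('ser', 'series:metroid',
                          search_namespaces_into_full_tags=True)
assert tag_matches_search('series:', 'series:metroid',
                          allow_bare_namespace=True)
assert not tag_matches_search('series:', 'character:samus',
                              allow_bare_namespace=True)
```

Note how the 'search namespaces into full tags' option also satisfies the bare-namespace case, mirroring the way one checkbox implies the other.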
I fixed an issue created by the recent page layout improvements where the first page of a session load would have a preview window about twenty pixels too tall, which for some users' workflows was leading to slowly growing preview windows as they normally used and restarted the program. A related issue with pages nested inside 'page of pages' having too-short preview windows is also fixed. This issue may happen once more, but after one more restart, the client will fix the relevant option here.
If you have had some normal-looking files fail to import, with 'malformed' as the reason, but turning off the decompression bomb check allowed them, this issue is now fixed. The decomp bomb test was itself throwing an error in this case, which is now caught and ignored. I have also made the decomp bomb test more lax, and default off for new users--this thing has always caught more false positives than true, so I am now making it more an option for users who need it due to memory limitations than a safeguard for all.
advanced parsing changes
The HTML and JSON parsing formulae can now do negative indexing. So, if you need to select the '2nd <a> tag from the end of the list', you can now set -2 as the index to select. Also, the JSON formula can now index on JSON Objects (the key->value dictionaries), although due to technical limitations the list of keys is sorted before indexing, rather than selecting the data as-is in the JSON document.
Furthermore, JSON formulae that are set to get strings no longer pull a 'null' value as the (python) string 'None'. These entries are now ignored.
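The three JSON formula behaviours above -- negative indexing, indexing into an Object by sorted key, and ignoring nulls when fetching strings -- can be sketched like this (function names are illustrative only):

```python
import json

def index_json(node, index):
    if isinstance(node, dict):
        # technical limitation: Object keys are sorted before indexing
        keys = sorted(node.keys())
        return node[keys[index]]
    return node[index]  # lists support negative indices natively

def get_strings(nodes):
    # a null value is ignored rather than becoming the string 'None'
    return [str(n) for n in nodes if n is not None]

doc = json.loads('{"b": "beta", "a": "alpha", "c": null}')
print(index_json(doc, -1))              # key 'c' is last in sorted order -> None
print(index_json(['x', 'y', 'z'], -2))  # 'y'
print(get_strings([doc['a'], doc['c']]))  # ['alpha'] -- the null is dropped
```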
I fixed an annoying issue when hitting ok on 'fixed string' String Matches. When I made the widgets hide and not overwrite the 'example string' input last week, I forgot to update the ok validation code. This is now fixed.
full list
improvements:
the media viewer and thumbnail _right-click->manage_ menus now have a _viewing stats->clear_ action, which does a straight-up delete of all viewing stats record for the selected files. 'edit' will be added to this menu in future
extended the tag autocomplete options with a checkbox to allow 'namespace:' to match all tags, without the explicit asterisk
tag autocomplete options now permit namespace searches if the 'search namespaces into full tags' option is set
the tag autocomplete options panel now disables and checks the namespace checkboxes when one option overrules another
cleaned up some tag search logic to recognise and deal with 'namespace:' as a query
added some more unit tests for tag autocomplete options
the html and json parsing formulae now support negative indexing, to select the nth last item from a list
extended the '1 -> "1st"' ordinal string conversion code to deal with negative indices
the 'hide tag' taglist menu actions are now wrapped in yes/no dialogs
reduced the activation-to-click-accept time that the shortcuts handler uses to ignore activating clicks from 100ms to 17ms
clicking the media viewer's top hover window's zoom buttons now forces the 'media viewer center' zoom centerpoint, so if you have the mouse centerpoint set, it won't zoom around the button where you are clicking!
added a simple 8chan.moe watcher to the defaults, all users will get it on update
the default bandwidth rules for download pages, subs, and watchers are now more liberal. only new users will get these. various improvements to db and ui update pipeline mean the enforced breaks are less needed
when a manage tags dialog moves to another media, if it has a 'recent tags' suggestion list with a selection, the selection now resets to the top item in the list
the mpv player now tracks when a video is fully loaded and only reports seek bar info and allows seeks when this is so (this should fix some seekbar errors on broken/slow-loading vids)
added 'undelete_file' to media shortcut commands
file delete and undelete are no longer hardcoded in the media viewer and media thumbnail grid. these actions are now handled entirely in the media shortcut set, and added to all clients by default (this defaults to (shift +) delete key, and also backspace on macos, so likely no changes)
ctrl+mouse wheel is no longer hardcoded to zoom in the media browser. these actions are now handled entirely in the 'all' media viewer shortcut set (this defaults to ctrl+wheel or +/-, so likely no changes)
deleted some old shortcut processing code
tightened up some update timers to better halt work while the client is minimised to system tray. this _may_ improve some users' restore hanging issues
as Qt is happier than wx about making pages on a non-visible client, subscriptions and various url import operations are now permitted to create pages while the client is minimised to taskbar or system tray. if this applies to your situation, please let me know how you get on here, as this may relieve some restore hanging as the pending new-file jobs are no longer queued up
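The ordinal string conversion mentioned in the list above, extended for negative indices, might look like this. A sketch only -- the rendering of negatives as 'nth last' is my assumption about the UI wording:

```python
def to_ordinal(n):
    """1 -> '1st', 2 -> '2nd', 11 -> '11th', 23 -> '23rd', ..."""
    if 10 <= n % 100 <= 20:
        suffix = 'th'  # 11th, 12th, 13th, not 11st/12nd/13rd
    else:
        suffix = {1: 'st', 2: 'nd', 3: 'rd'}.get(n % 10, 'th')
    return f'{n}{suffix}'

def index_description(index):
    # 1-based from the start; negative counts from the end
    if index < 0:
        return f'{to_ordinal(-index)} last'
    return to_ordinal(index)

print(index_description(1))   # '1st'
print(index_description(11))  # '11th'
print(index_description(-2))  # '2nd last'
```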
.
fixes:
clicks on hover window greyspace should no longer propagate up to the media viewer. this was causing weird archive/delete filter actions
mouse scroll on hover window taglist should no longer propagate up to the media viewer when the taglist has no more to scroll in that direction
fixed an issue that meant preview windows were initialising about twenty pixels too short for the first page loaded in a session, and also pages created within nested page of pages. also cleaned up some logic for unusual situations like hidden preview windows. one more cycle of closing and reopening the client will fix the option value here
cleaned and unified some page sash setting code, also improving the 'hide preview window' option reliability for advanced actions
fixed a bug that meant file viewtime was still being recorded on the duplicate filter when the special exception option was off
reduced some file viewtime manager overhead
fixed an issue with database repair code when local_tags_cache is missing
fixed an issue where updating a very old db would not recognise that local_tags_cache legitimately did not yet exist, and would try to repair it before the update code ran
fixed the annoying issue introduced in the recent string match overhaul where a 'fixed character' string match edit panel would not want to ok if the (now hidden) example string input did not have the same fixed char data. it now validates no matter what is in the hidden input
potentially important parsing fix: JSON parsing, when set to get strings, no longer converts a 'null' value to 'None'
the JSON parsing formula now allows you to select the nth indexed item of an Object (a JSON key->value dictionary). due to technical limitations, it alphabetises the keys, not selecting them as-is in the JSON itself
images that do not load in PIL no longer cause mime exceptions if they are run through the decompression bomb check
.
misc:
boosted the values of the decompression bomb check anyway, to reduce false positives. it now generally only objects to images that would decompress to a bitmap larger than 1GB in memory
by default, new file import options now start with decompression bombs allowed. this option is being reduced to a stopgap for users with less memory
'MimeException' is renamed to 'UnsupportedFileException'
added 'DamagedOrUnusualFileException' to handle normally supported files that cannot be parsed or loaded
'SizeException' is split into 'TagSizeException' and 'FileSizeException'
improved some file exception inheritance
removed the 'experimental' label from sub-gallery page url type in parsing system
updated some advanced help regarding bad files
misc help updates
updated cloudscraper to 1.2.40
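The relaxed decompression bomb test described in the list above amounts to estimating the decompressed bitmap size from the image header before actually decoding. A rough sketch; the threshold and the 32-bit byte depth are assumptions for illustration:

```python
ONE_GB = 1024 ** 3
BYTES_PER_PIXEL = 4  # assume 32-bit RGBA

def looks_like_decompression_bomb(width, height):
    # a small compressed file can still decode to an enormous bitmap,
    # so test the decoded size, not the file size
    return width * height * BYTES_PER_PIXEL > ONE_GB

print(looks_like_decompression_bomb(8000, 8000))    # ~256MB -> False
print(looks_like_decompression_bomb(20000, 20000))  # ~1.6GB -> True
```

Because the check now only trips on genuinely enormous bitmaps, it catches far fewer false positives, which is why it can default off for new users.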
next week
I am taking next week off. Normally I'd be shitposting E3, but instead I think I am going to finally get around to listening to the Ring Cycle through and giving Kingdom Come - Deliverance a go.
v400 will therefore be on the 10th of June. I hope to have the final part of the subscription data overhaul done, which will mean subscriptions load in less than a second, reducing how much data it needs to read and write and ultimately be more accessible for the Client API and things like right-click->add this query to subscription "blahbooru artists".
Thanks everyone!
0 notes