Parler: Less Free Speech, More Analytics
The free speech social media platform that disallows dissenting opinions and promises to farm your data
In the summer of 2020, YouTubers started bringing up the social media platform Parler as a new alternative to Twitter and Facebook, after many of the same creators also promoted Minds, Gab.io, and Candid (the platform that was allegedly a front for running analytics on users deemed likely to be troubling). The appeal: free speech. Free to say anything you want and defend your ideas.
Truth be told, I've been to these sites, and I've been disappointed by every one of them. I support free speech, but I have no patience for a platform where the majority of users are explicitly trolls or seemingly crazy people. These platforms have a habit of rapidly devolving into Holocaust and world-order conspiracy theories. It's a fine thing to let everyone say their piece, but I think appealing to the people who normally can't stay civil on major platforms is a recipe for disaster. Parler, however, has moderation, which seems a bit counterintuitive for free speech, but it offers a clean image for new members. Digging a bit further reveals that there's a lot more going on than a bunch of conservatives and Trumpettes getting a platform to say their tagline of the week.
My Views on the Relationship between Privacy and Free Speech
This is important, as I am often seen as trying to get away with saying my own crazy spew and not answering for it. That is not my intention. Today the public forum is used by special interest groups for unethical studies on users and as a targeting platform for retaliation. All I want is to separate speech from identity and livelihood. The express purpose of doing so is to let people know what is being said and argue with the ideas while avoiding violence and the distraction of ad hominem. I do not support the use of social media bots at all, and I appreciate removing harassing content, spam, and obvious trolling from a forum. I do not appreciate removing a person's sincere, sound-minded opinion or tracking them across platforms. It does everyone a disservice to either hide things from them about a person and their beliefs, or to read too far into their behavior and even predict their real-life behavior, which puts many individuals at risk of violence.
Their Problems with Privacy
When you sign up for Parler, just like Twitter, you have to provide a phone number. This phone number is attached to your account, and by extension, your activity. While this is a way to ensure that people are not easily making replacement, spam, or bot accounts, it's also a bit of you that they get to market. You likely use your cell phone number for other social media, and it's used for a lot of services like shopping rewards programs as well.
Who's interested in your phone number and why? Well, we can take a look at Parler's own Privacy Policy. They want to market things to you, identify you along with more personal details if you want to be a part of their influencer network, and sell your data as part of their company to whomever that may be. They also allow for 3rd party analytics, just like Facebook allowed Cambridge Analytica to view users on their platform.
Now, depending on how you connect to the platform, either by their webpage or their app, you can expect more information to be taken about your device. If you're using their app, their Privacy Policy specifically states that they will collect your contacts if you permit them. That permission is already required when you install the app, so by installing it, you have already permitted them. As for the app store listing: on Android, they request to read, modify, and delete the contents of your SD card and to take pictures and video with your camera. While these permissions could be implemented selectively in the code for uploading videos and pictures to your posts, it's concerning given their other behaviors, such as requesting the list of other applications that you have installed.
Regardless of whether you're using the web or the app, you can expect that 3rd party cookies like those from Google, Amazon, and Facebook will be used to track you while you use the website. This, along with information about what posts you view, searches you make on the site, times that you're online and active, and the people you follow, makes a nice package for people interested in your data, such as Google and Facebook, meaning the same exact companies may still be able to track you and affect your experience browsing online through ad services.
Overall this Privacy Policy leaves a lot to the imagination but still makes clear that they will collect data on you, monetize it as an asset, and share it with 3rd party research and advertising analytics. These are the same problems as Google, Facebook, and Twitter, but now with a neatly controlled group of a mostly conservative user base. This, in the wrong hands, might be an interesting petri dish for highly targeted political research.
Just My Privacy? Is that so bad?
Their ToS is a garbage fire, and I highly encourage everyone to read it just for the audacity of what it says outright, and what it carefully leaves out.
The Censorship-Free Twitter Alternative: Now with Censorship!
Probably the goofiest thing to come out of Parler is all of the stories of people's accounts getting deleted for sharing their opinions. To add to this, I was having trouble getting my account removed (more on that later), so I opted instead to use the trending hashtags and tag a few popular users in a post where I stated that the website had all of the hallmarks of being shady. I waited over two days to have my account deleted the normal way, but within ten minutes of posting that Parley, I was banned. Amazing. But don't take my word for it: they explicitly tell you that if they don't like you, they'll ban you, in section 9 of their ToS!
Coming soon: Worthless Microtransactions!
Section 6 of their ToS describes their virtual items. Interestingly, they outright deny you the right to trade or sell any of the items on the site without their permission. This is interesting not only because they are explicitly enforcing the worthlessness of their virtual items, but also because it precludes anyone from exchanging their account, and thus all the associated virtual items, for money, goods, or services. This means if you grew an engaging account on the platform and a company is interested in buying access to it, you have to ask Parler's permission, and they may only allow it contingent upon you giving more personal information, such as (in their own example of them buying items back from you) your social security number.
Old Issues: The Deleted Sections
Very recently, the ToS have been changed. As you can see in this reddit post from the time of Parler's launch, any user of their platform was legally bound to be ready to defend and indemnify Parler in court for actions taken on the platform, and to pay their court fees when defending themselves against them or anyone responsible for Parler. You also were not allowed to sue them or be a beneficiary of a class-action lawsuit against them.
Final Thoughts
Parler is yet another alternative social media site which has attracted the worst users from other sites right away. This makes the platform less attractive to "normal" users. Interestingly, their banning practices seem to indicate that they only want the conservative, but not too edgy, crowd, the kind that is of real social and political importance right now: the middle-of-the-road, fly-over-state, blue-collar family type that got excited about Trump because of the chants and rallies without really understanding the greater policies.
Now, I'm not going to sit here and describe Parler's intentions like I know them, because I don't. But I know that if I wanted to do market and polling research on the group of people in Europe and America that fit this general trend of hyper-politics, I would curate similarly to Parler, protect myself from litigation by the users, collect as much information as I could on them, and share that information with other websites to get a holistic picture of the users. I would make their usage of the platform unempowering and worthless to see what they were willing to do for minimal incentive. I would attract A- and B-list figures within the different movements that have supported the shift in politics and have them promote it for me, and I would get the alternative media sites to run gushing admiration articles on it over and over while more generally well-regarded sites scoff and criticize it, all to get this particular subset of users into one place where I can observe them.
Bottom line: this website's policies and behaviors are antithetical to free speech. You cannot advocate for free speech and be so anti-privacy, in my view. You cannot claim to be a legitimate alternative to other sites when you are curating an environment for a specific group. You cannot be against censorship and then censor users for the most mundane posts that go against your image. This website is DOA, worse than the ones that came before it, because whereas the others had a hope of being normal that simply ran out, this place squashes it right away. Parler is an exclusive right-wing platform, and my personal opinion is that it is also a petri dish for analytics on this political persuasion.
#conservatism#trump#parler#social media#why are people so into this site#i was on it for two days and it was just like wow we have ted cruz#ted cruz#is a wax golem#politics#privacy#digital rights#free speech#long post#very long post#news#tech#technology
Version 330
youtube · windows (zip, exe) · os x (app, tar.gz) · linux (tar.gz) · source (tar.gz)
I had a great week. There are some more login scripts and a bit of cleanup and speed-up.
The poll for what big thing I will work on next is up! Here are the poll + discussion thread:
https://www.poll-maker.com/poll2148452x73e94E02-60
https://8ch.net/hydrus/res/10654.html
login stuff
The new 'manage logins' dialog is easier to work with. It now shows when it thinks a login will expire, permits you to enter 'empty' credentials if you want to reset/clear a domain, and has a 'scrub invalid' button to reset a login that fails due to server error or similar.
After tweaking for the problem I discovered last week, I was able to write a login script for hentai foundry that uses username and pass. It should inherit the filter settings in your user profile, so you can now easily exclude the things you don't like! (the click-through login, which hydrus has been doing for ages, sets the filters to allow everything every time it works) Just go into manage logins, change the login script for www.hentai-foundry.com to the new login script, and put in some (throwaway) credentials, and you should be good to go.
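If you are curious what a username/password login script boils down to under the hood, here is a minimal, hypothetical sketch using Python's requests library--the URL, form field names, and cookie name are made up for illustration and are not hydrus's actual login system:

```python
import requests

def login_and_get_session(username, password):
    # hypothetical example: POST credentials to a login form, then reuse the
    # session so the returned cookies authenticate later gallery/file requests
    session = requests.Session()
    response = session.post(
        'https://www.example-booru.net/site/login',  # made-up URL
        data={'username': username, 'password': password},
    )
    response.raise_for_status()
    # a real login script would then verify that an expected session cookie was set
    if not any(cookie.name == 'sessionid' for cookie in session.cookies):
        raise RuntimeError('login did not set the expected cookie')
    return session
```

Hydrus's own login scripts are managed in the ui rather than written as code, but the general flow is the same.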
I am also rolling out login scripts for shimmie, sankaku, and e-hentai, thanks to Cuddlebear (and possibly other users) on the github (which, reminder, is here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Download%20System ).
Pixiv seem to be changing some of their login rules, as many NSFW images now work for a logged-out hydrus client. The pixiv parser handles 'you need to be logged in' failures more gracefully, but I am not sure if that even happens any more! In any case, if you discover some class of pixiv URLs are giving you 'ignored' results because you are not logged in, please let me know the details.
Also, the Deviant Art parser can now fetch a sometimes-there larger version of images and only pulls from the download button (which is the 'true' best, when it is available) if it looks like an image. It should no longer download 140MB zips of brushes!
other stuff
Some kinds of tag searches (usually those on clients with large inboxes) should now be much faster!
Repository processing should also be faster, although I am interested in how it goes for different users. If you are on an HDD or have otherwise seen slow tag rows/s, please let me know if you notice a difference this week, for better or worse. The new system essentially opens the 'new tags m8' firehose pretty wide, but if that pressure is a problem for some people, I'll give it a more adaptable nozzle.
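As a rough illustration of the 'firehose' idea (not the actual hydrus pipeline), processing repository rows in bigger chunks trades responsiveness for throughput, and the chunk size is the 'nozzle' you would tune:

```python
def process_in_chunks(rows, process_chunk, chunk_size=50_000):
    # bigger chunks mean less per-chunk overhead, so higher rows/s overall,
    # but each chunk blocks for longer--a slow HDD might want a smaller size
    for start in range(0, len(rows), chunk_size):
        process_chunk(rows[start:start + chunk_size])
```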
Many of the various 'select from a list of texts' dialogs across the program will now size themselves bigger if they can. This means, for example, that the gallery selector should now show everything in one go! The manage import/export folder dialogs are also moved to the new panel system, so if you have had trouble with these and a small screen, let me know how it looks for you now.
The duplicate filter page now has a button to edit your various duplicate merge options. The small button on the viewer was too-easily missed, so this should make it a bit easier!
full list
login:
added a proper username/password login script for hentai foundry--double-check your hf filters are set how you want in your profile, and your hydrus should inherit the same rules
fixed the gelbooru login script from last week, which typoed safebooru.com instead of .org
fixed the pixiv login 'link' to correctly say nsfw rather than everything, which didn't go through right last week
improved the pixiv file page api parser to veto on 'could not access nsfw due to not logged in' status, although in further testing, this state seems to be rarer than previously/completely gone
added login scripts from the github for shimmie, sankaku, and e-hentai--thanks to Cuddlebear and any other users who helped put these together
added safebooru.donmai.us to danbooru login
improved the deviant art file page parser to get the 'full' embedded image link at higher preference than the standard embed, and only get the 'download' button if it looks like an image (hence, deviant art should stop getting 140MB brush zips!)
the manage logins panel now says when a login is expected to expire
the manage logins dialog now has a 'scrub invalidity' button to 'try again' a login that broke due to server error or similar
entering blank/invalid credentials is now permitted in the manage logins panel, and if entered on an 'active' domain, it will additionally deactivate it automatically
the manage logins panel is better at figuring out and updating validity after changes
the 'required cookies' in login scripts and steps now use string match names! hence, dynamically named cookies can now be checked! all existing checks are updated to fixed-string string matches
improved some cookie lookup code
improved some login manager script-updating code
deleted all the old legacy login code
misc login ui cleanup and fixes
.
other:
sped up tag searches in certain situations (usually huge inbox) by using a different optimisation
increased the repository mappings processing chunk size from 1k to 50k, which greatly increases processing in certain situations. let's see how it goes for different users--I may revisit the pipeline here to make it more flexible for faster and slower hard drives
many of the 'select from a list of texts' dialogs--such as when you select a gallery to download from--are now on the new panel system. the list will grow and shrink depending on its length and available screen real estate
.
misc:
extended my new dialog panel code so it can ask a question before an OK happens
fixed an issue with scanning through videos that have non-integer frame-counts due to previous misparsing
fixed an issue where file import objects that had been removed from the list but were still lingering in the list ui were not rendering their (invalid) index correctly
when export folders fail to do their work, the error is now presented in a better way and all export folders are paused
fixed an issue where the export files dialog could not boot if the most previous export phrase was invalid
the duplicate filter page now has a button to more easily edit the default merge options
increased the sibling/parent refresh delay from 1s to 8s
hydrus repository sync failures due to network login issues or manual network user cancellation will now be caught properly and a reasonable delay added
additional errors on repository sync will cause a reasonable delay on future work but still elevate the error
converted import folder management ui to the new panel system
refactored import folder ui code to ClientGUIImport.py
converted export folder management ui to the new panel system
refactored export folder ui code to the new ClientGUIExport.py
refactored manual file export ui code to ClientGUIExport.py
deleted some very old imageboard dumping management code
deleted some very old contact management code
did a little prep work for a 'show background image behind thumbs' feature, including the start of a bitmap manager. I'll give it another go later
next week
I have about eight jobs left on the login manager, which is mostly a manual 'do login now' button on manage logins and some help on how to use the system and make new scripts for it. I feel good about it overall and am thankful it didn't explode completely. Beyond finishing this off, I plan to continue doing small work like ui improvement and cleanup until the 12th December, when I will take about four weeks off over the holiday to update to python 3. In the new year, I will begin work on what gets voted on in the poll.
NEW SERVER, Explanations, Some(brero) Update
Downtime Again??

This will be a bit technical; if you're not too bothered about what happened, please skip to the next header.

On Friday 5th May at approximately 7am server time, Lioden started lagging very heavily. We initially thought it was just a small blip and that it would all sort itself out. Upon further investigation we discovered that our database had corrupted pretty badly, which we believe was caused by the disruption earlier in the week with the physical data centre migration. The admin team very quickly went into alert mode; we discussed a plan of action. We weighed up whether it would be worth a 24 hour downtime for DNS propagation or whether it would be quicker to wait for our host to diagnose and repair whatever was going on. We also had the option of a new database server. The database server was sold to us under the promise that it would take 8 hours - we asked to be fast-tracked as we wanted to get the game back up ASAP. We were then told 4 hours (this was after 4 hours of downtime already, after first thinking it was a temporary blip and then having to diagnose the problem ourselves). 20 hours later, we finally got the message that the server was up and running. Throughout this time we took turns sleeping and staying up to see if the server would come up, so the communication between admins was unfortunately very disjointed for a time. Had we known sooner that our total downtime would be upwards of 25 hours, we would have switched hosts immediately. We are extremely unhappy with the customer service we received while trying to get our server fixed/moved. Since we are on a new server, expect some lag for the first 24-36 hours of your gameplay as your data is being cached.

What will happen next?

We will be moving host. We do not want to give this company any more money. We are temporarily staying put until we can set up a server with another company. We will then be requesting a refund for all of our pre-paid services (we like to pay a year in advance). We will schedule game maintenance for later this month, on a week day, early in the morning (server time) so that we can switch hosts. In 5 years of Lioden's life (and in 15 years of having our websites hosted) we have never experienced something like this that was also handled so poorly by the hosting company, and we aim never to experience it again.
I lost out on time playing!

If you go to the main News page, we have set up a Downtime Care Package for you! This bundle gives Food & Toy bundles and Nesting material based on the number of lionesses you own, along with a bunch more goodies (some GB items as well). We can't apologise enough for this downtime - it has been completely out of our control. Your lions should also be maxed out on happiness and hunger. This item will be available until Tuesday.
Have you lost our messages again? >_>

Thankfully we were able to retrieve some messages. We only have data from Tuesday, when we came back up, up to a certain time on Thursday evening, but we have been able to restore at least a partial backup this time. If you purchased GB and your new GB are no longer on your account, please send a PM to Abbey #1 with your PayPal transaction ID and we will refund it for you.

BACKUP RESTORATION TIMES: Rough estimates on backup times, as different Lioden features are stored at different hours: 4 and a half hours before rollover for main backups, 3 hours 50 minutes before rollover for lions, 4 hours 15 minutes for Hoard. This means userlogs, inboxes, lions and other things can be restored from different times and there might be some discrepancies. If you notice "ghost notifications", please just delete them; we tried to restore inboxes but the data was still a bit corrupted.

What about the May Event and our lost progress? What about the storyline?

If you struggle with the event tiers due to the downtime, we'll see about tweaking them as the month goes! The May Storyline will help you gather some Tokens and Manticora Beetles! Unfortunately we were unable to code or work on it ... again. So we might launch it within a day or two and include it in the May news post along with a notification that it was launched.
Friday Update
Unfortunately thanks to the downtime our Friday update was cut short. We were able to implement a couple of nice moderating tools, including a tool to make moderator posts stand out more when they want to get your attention ;)
We will be working very hard to bring you a much bigger coding update next week.

Bug Fixes
* Roasted Lamb notice showing on all lions when applied to king has been fixed - it will now only show on your king.
* When cubs are born, the cub number given by the ultrasound bat will be removed - you will need to visit the bat for each new pregnancy to find out the accurate number of cubs.
* Patrolling males are no longer chaseable, killable or reserve-able.
* We have attempted a fix of aging lionesses dropping to negative fertility - please let us know if you still experience this bug.
* We have also attempted a fix of lionesses not going into heat immediately for the tutorial.

Terms of Service Change
We have made a minor change to the Terms of Service based on a confusing situation that has arisen lately. In order to address this properly, here is the new rule regarding referrals which will apply from now. "While the referral rewards and monthly referral contests exist, referring a friend to the game should not be incentivised in any way. This includes, but is not limited to, offering in-game or off-site rewards for signing up and rolling over enough to earn the referral points. Anyone found to be incentivising referrals will have any referral points revoked."
Remember that the event ends on May 31st at 11:59 pm, and all your currency will be stored in your account until next year. There are new avatar Tags made by Shad, for May and Misc, check them out as always!
Raffle Lioness
We had to run the Raffle again as the progress from yesterday was wiped. Our apologies. Congrats Khorshid (#8241)! You have won the last raffle lady! The new lady with the Sunset over Entabeni BG is up for impressing in the Special Lioness area in Explore or in the NEWS section under News Post List! ;D She has the first Margay marking!
Polls
Poll LINK - We plan to add a new Mutie on Demand item for 2017's Black Friday sale (don't worry, we're not replacing the previous MoD, we're just adding new stuff). We had some ideas, so maybe you can help us decide which exactly would go well on Lioden?
Twin MoD - Another 6 piebald designs, completely new and unique to the new item. We'd call it MoD: More Piebald or something.
Somatic Black Patches - basically what I'm linking below - would look like somatic breaks that you can see on horses etc. I am open to suggestions on their shapes. Visually it'd look a bit like Melanism cutouts. Real life black patch on a lion - LINK - Front leg :D
Version 324
youtube · windows (zip, exe) · os x (app, tar.gz) · linux (tar.gz) · source (tar.gz)
I had a great week. The downloader overhaul is almost done.
pixiv
Just as Pixiv recently moved their art pages to a new phone-friendly, dynamically drawn format, they are now moving their regular artist gallery results to the same system. If your username isn't switched over yet, it likely will be in the coming week.
The change breaks our old html parser, so I have written a new downloader and json api parser. The way their internal api works is unusual and over-complicated, so I had to write a couple of small new tools to get it to work. However, it does seem to work again.
All of your subscriptions and downloaders will try to switch over to the new downloader automatically, but some might not handle it quite right, in which case you will have to go into edit subscriptions and update their gallery manually. You'll get a popup on updating to remind you of this, and if any don't line up right automatically, the subs will notify you when they next run. The api gives all content--illustrations, manga, ugoira, everything--so there unfortunately isn't a simple way to refine to just one content type as we previously could. But it does neatly deliver everything in just one request, so artist searching is now far faster.
Let me know if pixiv gives any more trouble. Now that we can parse their json, we might be able to reintroduce the arbitrary tag search, which broke some time ago due to the same move to javascript galleries.
twitter
In a similar theme, given our fully developed parser and pipeline, I have now wangled a twitter username search! It should be added to your downloader list on update. It is a bit hacky and may ultimately be fragile if they change something on their end, but it otherwise works great. It discounts retweets and fetches 19/20 tweets per gallery 'page' fetch. You should be able to set up subscriptions and everything, although I generally recommend you go at it slowly until we know this new parser works well. BTW: I think twitter only 'browses' 3200 tweets into the past, anyway. Note that tweets with no images will be 'ignored', so any typical twitter search will end up with a lot of 'Ig' results--this is normal. Also, if the account ever retweets more than 20 times in a row, the search will stop there, due to how the clientside pipeline works (it'll think that page is empty).
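To illustrate why a run of 20+ retweets halts things, here is a hypothetical sketch of that kind of clientside pagination loop (the fetch function and its behaviour are assumptions for illustration, not the actual hydrus downloader code):

```python
def fetch_all_image_tweets(fetch_page, max_pages=160):
    # fetch_page(page_index) is assumed to return the image posts found on that
    # 'page' of ~20 tweets, after retweets and imageless tweets are discarded
    results = []
    for page_index in range(max_pages):
        page = fetch_page(page_index)
        if not page:
            # an empty page looks like the end of the gallery, so the search
            # stops here--which is why 20+ retweets in a row ends the walk
            break
        results.extend(page)
    return results
```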
Again, let me know how this works for you. This is some fun new stuff for hydrus, and I am interested to see where it does well and badly.
misc
In order to be less annoying, the 'do you want to run idle jobs?' on shutdown dialog will now only ask at most once per day! You can edit the time unit under options->maintenance and processing.
Under options->connection, you can now change max total network jobs globally and per domain. The defaults are 15 and 3. I don't recommend you increase them unless you know what you are doing, but if you want a slower/more cautious client, please do set them lower.
The new advanced downloader ui has a bunch of quality of life improvements, mostly related to the handling of example parseable data.
full list
downloaders:
after adding some small new parser tools, wrote a new pixiv downloader that should work with their new dynamic gallery's api. it fetches all an artist's work in one page. some existing pixiv download components will be renamed and detached from your existing subs and downloaders. your existing subs may switch over to the correct pixiv downloader automatically, or you may need to manually set them (you'll get a popup to remind you).
wrote a twitter username lookup downloader. it should skip retweets. it is a bit hacky, so it may collapse if they change something small with their internal javascript api. it fetches 19-20 tweets per 'page', so if the account has 20 rts in a row, it'll likely stop searching there. also, afaik, twitter browsing only works back 3200 tweets or so. I recommend proceeding slowly.
added a simple gelbooru 0.1.11 file page parser to the defaults. it won't link to anything by default, but it is there if you want to put together some booru.org stuff
you can now set your default/favourite download source under options->downloading
.
misc:
the 'do idle work on shutdown' system will now only ask/run once per x time units (including if you say no to the ask dialog). x is one day by default, but can be set in 'maintenance and processing'
added 'max jobs' and 'max jobs per domain' to options->connection. defaults remain 15 and 3
the colour selection buttons across the program now have a right-click menu to import/export #FF0000 hex codes from/to the clipboard
tag namespace colours and namespace rendering options are moved from 'colours' and 'tags' options pages to 'tag summaries', which is renamed to 'tag presentation'
the Lain import dropper now supports pngs with single gugs, url classes, or parsers--not just fully packaged downloaders
fixed an issue where trying to remove a selection of files from the duplicate system (through the advanced duplicates menu) would only apply to the first pair of files
improved some error reporting related to too-long filenames on import
improved error handling for the folder-scanning stage in import folders--now, when it runs into an error, it will preserve its details better, notify the user better, and safely auto-pause the import folder
png export auto-filenames will now be sanitized of \, /, :, *-type OS-path-invalid characters as appropriate as the dialog loads (a rough sketch of this kind of sanitization follows this list)
the 'loading subs' popup message should appear more reliably (after 1s delay) if the first subs are big and loading slow
fixed the 'fullscreen switch' hover window button for the duplicate filter
deleted some old hydrus session management code and db table
some other things that I lost track of. I think it was mostly some little dialog fixes :/
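as a rough idea of what the filename sanitization above involves, here is a generic sketch (not the actual hydrus code):

```python
import re

def sanitize_filename(name, replacement='_'):
    # swap out characters that are invalid in Windows/Unix paths: \ / : * ? " < > |
    return re.sub(r'[\\/:*?"<>|]', replacement, name)

sanitize_filename('gallery: cool/stuff*2018.png')  # -> 'gallery_ cool_stuff_2018.png'
```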
.
advanced downloader stuff:
the test panel on pageparser edit panels now has a 'post pre-parsing conversion' notebook page that shows the given example data after the pre-parsing conversion has occurred, including error information if it failed. it has a summary size/guessed type description and copy and refresh buttons.
the 'raw data' copy/fetch/paste buttons and description are moved down to the raw data page
the pageparser now passes up this post-conversion example data to sub-objects, so they now start with the correctly converted example data
the subsidiarypageparser edit panel now also has a notebook page, also with brief description and copy/refresh buttons, that summarises the raw separated data
the subsidiary page parser now passes up the first post to its sub-objects, so they now start with a single post's example data
content parsers can now sort the strings their formulae get back. you can sort strict lexicographic or the new human-friendly sort that does numbers properly (a sketch of this kind of sort follows this list), and of course you can go ascending or descending--if you can get the ids of what you want but they are in the wrong order, you can now easily fix it!
some json dict parsing code now iterates through dict keys lexicographically ascending by default. unfortunately, due to how the python json parser I use works, there isn't a way to process dict items in the original order
the json parsing formula now uses a string match when searching for dictionary keys, so you can now match multiple keys here (as in the pixiv illusts|manga fix). existing dictionary key look-ups will be converted to 'fixed' string matches
the json parsing formula can now get the content type 'dictionary keys', which will fetch all the text keys in the dictionary/Object, if the api designer happens to have put useful data in there, wew
formulae now remove newlines from their parsed texts before they are sent to the StringMatch! so, if you are grabbing some multi-line html and want to test for 'Posted: ' somewhere in that mess, it is now easy.
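for reference, the 'human-friendly' sort mentioned above is essentially a natural sort; a minimal sketch of that kind of key (not the exact hydrus implementation):

```python
import re

def human_sort_key(s):
    # split the string into digit and non-digit runs so embedded numbers compare
    # numerically: 'page2' sorts before 'page10', unlike strict lexicographic
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', s)]

sorted(['page10', 'page2', 'page1'], key=human_sort_key)
# -> ['page1', 'page2', 'page10']
```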
next week
After slaughtering my downloader overhaul megajob of redundant and completed issues (bringing my total todo from 1568 down to 1471!), I only have 15 jobs left to go. It is mostly some quality of life stuff and refreshing some out of date help. I should be able to clear most of them out next week, and the last few can be folded into normal work.
So I am now planning the login manager. After talking with several users over the past few weeks, I think it will be fundamentally very simple, supporting any basic user/pass web form, and will relegate complicated situations to some kind of improved browser cookies.txt import workflow. I suspect it will take 3-4 weeks to hash out, and then I will be taking four weeks to update to python 3, and then I am a free agent again. So, absent any big problems, please expect the 'next big thing to work on poll' to go up around the end of October, and for me to get going on that next big thing at the end of November. I don't want to finalise what goes on the poll yet, but I'll open up a full discussion as the login manager finishes.
Version 374
youtube · windows (zip, exe) · macOS (app) · linux (tar.gz) · source (tar.gz)
I had a great week. A ton of Qt problems are fixed, and a macOS App is ready. If you were waiting for a cleaner release, I would recommend this for all Windows and Linux users.
Qt
I mostly worked this week on Qt bugs. I appreciate all the reports everyone sent in. I have fixed a whole lot, mostly bringing things back in line to where the wx build was. The whole list is in the changelog, but the highlights are:
Fixed the cursor not unhiding in the media viewer as long as it was over a media.
Fixed the resizing texts that were causing subscription popups and others to bounce around.
Fixed pages auto-resorting media on creation when not desired.
Fixed tab drag and drop when 'do not follow' (+shift) mode is on.
File drag and drop to discord should be fixed on Windows if the options->gui BUGFIX is set.
Added basic high DPI scaling support--all feedback appreciated. -- EDIT: It looks like high dpi scaling is making thumbs and media viewer unintentionally scale up in a pixelly way on some machines. I will put time into it this week and see if I can get them looking better for 375.
Fixed some memory leaks.
There is still some stuff I didn't have time to get to--some layout/sizing problems for dialogs, some splitter/sash positioning issues, focus sometimes not transferring when requested, and some weirder stuff like html rendering as a web document in text labels rather than displaying literally. I'll keep going, but the vast bulk of the work is done.
macOS and Linux builds
I fixed a critical issue in the macOS build and believe I have it working as well as the others. If you are a macOS user who has a backup, please give it a go and let me know what you get. If you want a confirmed clean release, please wait a week. Some things like maximised media viewers are a little janky, so I'm interested in feedback on what you see and what you would like it to do instead, if anything. I have disabled borderless fullscreen for now for similar reasons. Also, the centered tab bar for regular pages has thrown off all the tab drag and drop position calculations, so tab rearrange and intra-client file drag and drops are disabled--I'll put some time into it next week and see if I can fix it.
The Linux build now has some common library files removed. This should improve font compatibility for some users by causing hydrus to rely on your higher compatibility system libraries rather than the ones on my dev machine. If you use the built release and have the wrong font or other UI jank, please try doing a semi 'clean' install this week by deleting all the .so files in your base install directory (the ones beside the 'client' executable) before extracting as you normally would. If you try this, make sure you have a backup beforehand, just in case you accidentally delete the wrong thing.
Arch users who run from source may have been unable to run the client due to 'shiboken' issues. This is because Arch recently updated to Python 3.8, where PySide2 (a Qt wrapper) does not currently work! Some updated running-from-source instructions on how to use PyQt5 instead are now here: https://hydrusnetwork.github.io/hydrus/help/running_from_source_linux_packages.txt
full list
qt environment/build:
macOS build is useable! tab drag and drop position calculation doesn't work yet, so intra-client file DnDs and tab rearrange DnDs are disabled for now. borderless fullscreen is also disabled, feedback on this vs maximise would be appreciated
fixed a critical bug in the macOS release that was resulting in 100% CPU repaint loop for the canvas viewer when media was loaded (wew). this may have affected certain other platforms in some situations
the linux build has a variety of common library files removed, letting your OS rely on higher compatibility system defaults. this _should_ clean up font and other issues for users running on very new/old system libraries. if you cannot run 374, please let me know your distro and version and any error messages
the special linux running from source document is updated, including info about Arch and PyQt5
fixed a windows build issue that meant some animated gifs were not able to load and render correctly
fixed a precise time fetching issue for users running from source with python 3.8
high dpi scaling should have improved support. please report on bad layout issues and other artifacts
fixed creating a serialised object png when using PyQt5
fixed file save dialogs with filetype filters when using PyQt5
fixed an important menubar related memory leak
_seem_ to have fixed an important media viewer memory leak
.
qt ui fixes:
fixed pages not collecting and sorting on creation if they do not have to, which restores the 'preserve flat unsorted order' behaviour of session loads and file drag and drop page tab creations
fixed the cursor not unhiding on move in the media viewer when over an animation or static image
fixed the issue where a new thumbnail panel would double-up with the old one for half a second if a menu caused the panel swap
reworked the elided (text that cuts off...) label code to more reliably work on single lines, which fits our purposes. the network job control (especially on subscription popups) and top hover window should now show their long statuses without changing their parent panel's layout
updated a variety of old text-wrap-width wx-hacks texts to instead auto-fill available space
the various downloaders should now be careful about handling large status texts. if a multiline error or html page slips in to a status somewhere, your download pages' lists should no longer go nuts with very tall spam-filled status cells
hydrus->discord drag and drop should be fixed if the BUGFIX is on!
fixed page tab drag and drop to do live drag selection with 'do not follow' behaviour (this is switched by holding down shift during drag), and, in this case, got it to return to the original page's neighbour/parent once the drop is complete
fixed 'center' dialogs positioning on the center of their parent windows, rather than the center of the primary screen
fixed the hover windows not passing shortcuts up to the media viewer when not consumed
fixed some misc 'can I consume a shortcut' focus/active checking code
fixed the various hide/parents/siblings tag menu items for tags with counts
fixed the main gui and other non-dialog windows remembering their pre-maximise/fullscreen sizes if set to remember size and previously closing while maximised/fullscreened
menubar menus should now show description text in the main gui statusbar on mouseover of their items
fixed a bad menu initialisation in the canvas preview panel
fixed a little page splitter bork and improved size of preview window on initial boot
fixed the edit notes dialog when launched from the media viewer
fixed a couple of text edit issues in edit url class panel
fixed page up/down scroll for taglists
fixed page down scroll for thumbnail grid, and fixed page up/down distance
fixed thumbnails not scrolling into view if they are keyboard-selected slightly off screen but within the scroll option percentage threshold
misc layout and style cleanup
misc refactoring
.
misc:
you can now set the maximum size of duplicate filter pair batches (default 250) under options->duplicates
when an ipfs service fails to pin a file and returns no hash or the empty multihash, this is now recognised, info dumped to log, a simple popup message sent, and the job continued. this is just a patch--better error handling here will come later
if the client or server are launched with a custom temp_dir that does not exist, it will now attempt to create it (previously errored out)
fixed a clean exit after certain client boot fail error handling, and repeated cleaner exit for the server
added some new memory profiling actions to the help->debug menu
parallel subscriptions should now initialise with less of an aggressive CPU spike
if the client or server crash before the application can be launched, the crash log is now called hydrus_crash.log. if the db dir is not yet established, it will now try to find and put it in your desktop and, failing that, your user dir (a sketch of this fallback order follows this list)
the client no longer prints 'booting db' twice
a variety of misc code cleanup and fixes
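the crash log fallback above is just a cascade of location checks; a tiny hypothetical sketch (function name made up):

```python
import os

def crash_log_path(db_dir=None):
    # prefer the established db dir, then the user's desktop, then the home dir
    candidates = [db_dir, os.path.join(os.path.expanduser('~'), 'Desktop'), os.path.expanduser('~')]
    for directory in candidates:
        if directory and os.path.isdir(directory):
            return os.path.join(directory, 'hydrus_crash.log')
    raise RuntimeError('no obvious place to write the crash log')
```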
next week
I was going to take an easy week, but I ended up crunching again, ha ha, so I'll for real take it a bit easier this coming week. There's still some Qt stuff to fix, and doubtless a few more reports to come in, and Qt css theming to add, but I'd also really like to get back to doing some normal work. There's a thousand things to do, so one part of it will just be going through my list and triaging the most important stuff.
As we get to the end of the year, I would like to finish some tag work like namespace siblings and tag filters on uploading and tag processing, and also try getting this mpv window embedded into the media viewer so we have full-featured video and legit audio support. My ideal is to have the 'emergency' tag work complete for February so I can put up a new 'big job to work on next' poll.
Version 350
youtube · windows (zip, exe) · os x (app, tar.gz) · linux (tar.gz) · source (tar.gz)
I had an ok week. Some IRL things cut into my hydrus time, but I got some good work done. Some bugs in the new duplicate search system are fixed, and I improved advanced file delete and export handling.
The poll for the next 'big job' is up here: https://8ch.net/hydrus/res/12358.html The direct link is https://www.poll-maker.com/poll2331269x9ae447d5-67
duplicate filter
The search addition to the duplicate filter went fairly well, but there were a couple of significant bugs. The 'ghost pair' issue--where a queue would sometimes have a final pair that would never display and lead to high CPU until the filter was closed--is fixed, and safeguards added to catch similar issues in future. The issue with undercounting on large search domains (typically where the dupe page's file query was non-system:everything and covered >10,000 files) is also fixed, but giving the filter 500,000 custom files to work with can be really quite slow. I will keep working here to see if I can speed up big searches like this without compromising accuracy, but if you find your dupe searches are working too slow, try adding a creator: tag to bring the search size down--it works well.
The duplicate filtering workflow itself is still a v1.0 pain. I expect to put some work in here in the coming weeks (and doubly so if dupe work wins the big poll), likely highlighting the differences between the two images with an always-on-top panel and better at-a-glance decision-making for easy comparisons.
advanced delete
This week brings an optional advanced file delete dialog. You can turn it on under options->files and trash and also set custom file delete 'reasons' to assign. Once it is switched on, any manual file delete from the thumbnail grid, media viewer, or duplicate filter will instead give a richer dialog that lets you physically delete files immediately (i.e. skipping the trash) or delete files without leaving a delete 'record' (which is useful if you want to easily reimport those files later on).
You can also choose from your set reasons or type a completely custom one. These saved reasons reappear when that deleted file fails to import in future (due to being recognised as previously deleted), so if you are interested in tracking why you have previously deleted files, please check this out.
the rest
Export folders now have more timing options, basically the same as import folders. You can pause them individually, tell them not to run periodically, and force them to run manually under a new submenu under file->import and export folders->run export folder now.
The Artstation downloader seems to be not working due to a Cloudflare issue. I am leaving the downloader in for now, in case it starts working again, but if you have a subscription for it, I recommend you pause it for now. The default downloader for new users (and you, if you have it still set as the default) is now safebooru.
If you use the Dolphin file manager, check out this add-on that uses the Client API: https://gitgud.io/prkc/dolphin-hydrus-actions
Advanced users only: Missing file folder recovery is better--if a file folder (one of the subfolders, like 'f39' or 'ta2') is missing on boot but found in another 'known' location, the client will detect the moved folder and propose an auto-update of its records rather than forcing you to go through the complicated manual recovery dialog. If for some reason it is complicated or slow for you to move your folders from A to B through the migrate database dialog, you can now shut down your client, move the folders manually in explorer or wherever, and then reboot the client and be up and working in one click! Just make sure that hydrus knows about B in its locations under migrate database beforehand, and it will do the rest.
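As a rough sketch of what that auto-detection amounts to (hypothetical names, not the actual client code), the client only has to check its other known base locations for the missing subfolder:

```python
import os

def find_relocated_subfolder(subfolder_name, known_base_locations):
    # a file subfolder like 'f39' or 'ta2' is missing from its recorded location;
    # look for it under every other base location the client already knows about
    for base in known_base_locations:
        candidate = os.path.join(base, subfolder_name)
        if os.path.isdir(candidate):
            return candidate  # propose updating the db record to this new path
    return None  # nothing found--fall back to the manual recovery dialog
```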
full list
the duplicate filter no longer applies the implicit system limit (which defaults to 10,000 files) on its search domains, which solves the undercounting issue on large search domains. duplicate operations will be appropriately slower, so narrow your duplicate file queries (adding a creator: tag works great) if they take too long
fixed the duplicate pairs filter's 'ghost pair' issue. it was failing, when 'both files' was unchecked, to remove pairs that included one file that was non-local. this accidental inclusion resulted in incorrect non-zero count and filter/random pairs that could not display correctly
insulated against potential future iterations of this problem (likely that one of the pair was deleted by another process while a filter is ongoing), with a notification and graceful exiting of the duplicate filter while saving progress
the 'show random duplicates' button now puts the 'base' of the group (to which all the others are potentially matched) as the first thumbnail
added a new 'advanced file deletion' section to 'files and trash' options page to turn on a new advanced dialog and set custom file deletion reasons
if this new dialog is turned on, a delete event from thumbnail grid, regular media viewer, or the duplicate filter's manual delete will launch it. it permits you to delete physically (skipping trash) in one step or delete physically without leaving a deletion record (for easier later re-import) and choose one of the deletion reasons in the new options panel or set a one-time custom reason
export folders now have more run-controls: 'run regularly', 'paused', and 'run now'
the file menu now has a 'run export folder now' submenu just like for import folders-- it is simple now to set up an export folder that only runs when you tell it to
updated the on-boot missing file folder recovery process to automatically 'heal' file location mappings when a missing folder is actually in a location that is known (essentially, you can now manually move a bunch of folders from one known location to another while the client is off and it will recover automatically now). error dialogs will appear in this case summarising the problem and proposed fixes with a chance to bail out if you do not want it to happen
added a new frame type to 'gui' options page called 'regular_center_dialog' for yes/no style dialogs that are better in the center of the parent window
the custom web browser launch path and file type 'open externally' paths are moved from 'files and trash' to a new 'external programs' options page
as the superior '--temp_path' program launch parameter now exists for both client and server, I have removed the limited 'BUGFIX: temp folder override' option from the client's 'files and trash' page and use in the actual code. if this option was important to you, please migrate to the --temp_path launch parameter, which covers temp usage more comprehensively
as the artstation downloader is now non-functional, apparently by a cloudflare issue, the default gug for new users (and anyone with artstation set atm) is now safebooru
added dolphin file manager add-on link to the client api help
some misc file metadata fetching cleanup
next week
Next week is a cleanup week. Beyond boring ongoing code cleanup and rewrites, I would like to move some 'default' image parsing to preference OpenCV over PIL uniformly, including for the server (which currently does not need OpenCV). Having image work preference one library over the other at different times causes problems when they disagree about some image metadata (e.g. whether a file is rotated). OpenCV is fairly easy to get for all platforms now, and it generally runs faster and better, so I feel good about adding it to the server and making it the primary choice in all situations.
We breezed past 500 million mappings on the PTR in the past couple of weeks! This is great, and I am really thankful for everyone's contributions, but this growth is putting pressure on several different areas. You may have noticed we finally hit my 256GB/month bandwidth limit on the shared PTR account at the end of April. Regular PTR-syncing client databases are also getting huge, about 18GB. I have multiple plans for how to deal with the various issues here and expect to chip away at them throughout 2019. I may reserve a month or two of future 'big job' time to handle this work.
Version 349
youtube · windows (zip, exe) · os x (app, tar.gz) · linux (tar.gz) · source (tar.gz)
I had a great week. The duplicate filter work went really well, and the manage tags dialog has a neat new button for fixing siblings and parents.
The poll for the next 'big job' is live here https://8ch.net/hydrus/res/12358.html ! The direct link is https://www.poll-maker.com/poll2331269x9ae447d5-67
duplicate filter
The duplicate filter page has its old 'file domain' button swapped for a full file search context. This allows you to see 'potential pair' counts, show some random pairs, and launch the full duplicate filter on just a subset of your dupes! For instance, you might want only to filter from a certain creator, or only on very small jpgs. You can also check a box to set whether both of the pair must match your search or only one.
The duplicate page has a significant relayout as well. It now has two tabs--one for the maintenance/search side, and one for the actual filtering--and shows its data in a clearer and less laggy way. The new search data is saved with your page, and on a per-page basis, so if you like you can set up several duplicate processing pages for your different queues.
This system works faster and better than I had expected. Please check it out and let me know what you think. It is usually fairly fast unless you give it a very slow search to work with.
One bug I just noticed with my IRL client is that if you give it a specific search with more than 10,000 files (for me, system:archive), the duplicate count will be too low and will change slightly on refreshes. This is because of the 'implicit' system:limit on all queries, which caps search results to 10k files (or whatever you have it set to in the options), and causes the duplicate filter to instead be sampling the potential search domain on each query. This is not true in this case for system:everything, which here uses a search optimisation to get the full count. So, I suggest you throw in a <10k tag for this week, and I will see if I can tackle this problem for 350.
I will update the help for this once I know the ui is settled.
manage tags
The manage tags dialog has a new 'siblings and parents' button that auto-hard-replaces siblings and adds missing parents! It even works on multiple file selections, so if you have a bunch you want to fix, you can just ctrl+a and hit the button and it will sort it out for each file. It gives you a little summary yes/no dialog before it fires just to make sure everything looks sensible.
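Conceptually (this is a simplified sketch with made-up example tags, not the real dialog code), the hard-replace works over two mappings--a sibling map from 'bad' tags to their ideal form, and a parent map from tags to the parents they imply:

```python
def fix_siblings_and_parents(tags, siblings, parents):
    # siblings: e.g. {'lotr': 'series:lord of the rings'}
    # parents:  e.g. {'character:legolas': ['series:lord of the rings']}
    fixed = {siblings.get(tag, tag) for tag in tags}  # hard-replace bad siblings
    for tag in list(fixed):
        fixed.update(parents.get(tag, []))  # add any missing parents
    return fixed
```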
Beyond some additional sibling/parent code cleanup on that dialog, there is also a new 'cog' option to stop triggering 'remove' actions if you 'enter' a tag that already exists. If you turn this on, only a double-click or delete-key press on the list will make for a 'remove' result.
the rest
I spent some time cleaning up file import and thumbnail generation, particularly for videos. Videos have improved metadata parsing, and thumbnail generation should be much faster. A variety of unusual webms should now be importable and have correct frame counts. Thumbnail right-click->regen commands also work faster and update thumbnail metadata more cleanly and reliably.
If you would like pages to always focus their 'input' boxes when you switch to them, there is now an option for this under the 'gui pages' options panel.
full list
duplicate filter:
the duplicate filter page now has a full-on real-deal file search object to narrow down the duplicate filter, potential duplicate count, and 'show some random dupes' search domains! it also has a 'both files match' checkbox that determines if one or both files of the potential pairs should match the search!
the duplicate filter page has multiple layout changes as a result:
the main management area is now split into two pages--'preparation', for doing maintenance and discovery work, and 'filtering', for actioning the potential dupe pairs found
the 'filtering' page will be selected by default, but if 'preparation' needs work, its name will become 'preparation (needs work)'
the 'filtering' page now has file search ui and the 'both files' checkbox instead of the file domain button. this search data is saved on a per-page basis
the two pages' status texts are now updated on separate calls which have been rewritten to be asynchronous (with 'updating...' text while they work). both now have explicit refresh buttons to force them to update
the additional non-unknown pair counts listed on the filter area, which were irrelevant to filtering and sometimes confusing, are now gone. it only lists the 'unknown' pair number
the duplicate filter page's help button no longer has the awful 'simple help' entry. the full html help will get a pass in the coming weeks to reflect the new search changes
the duplicate file db code received significant refactoring and improvement to support searching the potential dupe space while cross-referencing the new file search context (and still falling back to the fast code when the search is just blank/system:everything)
misc duplicate file db code cleanup and refactoring
while in advanced mode, you can no longer select 'all known files' file domain for an export folder (and now the duplicate filter page) search context
making a file delete action in the duplicate filter (by hitting delete key or the button on the top hover window, which both trigger a dialog asking to delete one or both) now auto-skips the current pair
.
manage tags:
the manage tags dialog has a new 'siblings and parents' button that will auto-replace incorrect siblings and auto-add missing parents! it works on multi-file selections as well! it gives you a summary yes/no dialog before it fires
the manage tags dialog has a little logic cleanup r.e. siblings and parents and their cog auto-apply options. the auto-application now only applies on add/pend actions
the manage tags dialog has a new cog button option to not trigger 'remove' actions from an autocomplete dropdown or suggested tag input action when the tag already exists
.
the rest:
gave video metadata parsing another pass--it now detects 'hidden' incorrect framerates due to advanced 'skip frame' codec settings and is more accurate at determining frame count and duration, including fixes to some offset calculations that were sometimes adding or dropping a few frames
manual video frame count, when needed, is now faster and produces better error text
fixed a critical bug in thumbnail regen that could get stuck looping regen on files with unusual rotation exif information
significant improvements to how the client file manager handles thumbnail identifier information, saving a great deal of time for file import and thumbnail regeneration code of videos
fixed an issue where regenerated file metadata was not propagating up to the ui level in real time
cleaned up some thumbnail cache initialisation code
the 'generate video thumbs this % in' option is moved from the 'media' to 'thumbnails' options page
to simplify code, and in prep for the idle-maintenance-rewrite of this system, the database->regen->thumbnails call is now removed
all three fields of text on serialised pngs now wrap, and they pad a little better as well
added a new option to the 'gui pages' options page to force input text box focus on page changes
fixed a small type issue with the server's session cookie code and some new library versions
next week
I've felt increasingly ill this week, so I am going to take a day or two completely off. Otherwise, next week will be catch-up for smaller jobs and bug reports and other messages, which I have fallen behind on. Please have a think about the big jobs poll, and let me know if you have any questions. Thank you for your support!
0 notes
Text
Version 347
youtube
windows
zip
exe
os x
app
linux
tar.gz
source
tar.gz
I had a good week. OR search is essentially finished, and I cleaned up and fixed a variety of other things.
A new 'big thing to work on next' poll will be going up soon. If you are interested, please check out the discussion thread here:
https://8ch.net/hydrus/res/12152.html
or search
As previously discussed, I have moved OR predicate construction to the standard dropdown list below the tag input. It now appears as the top result, where you can hit enter on it to submit it as-is. Also, while an OR predicate is under construction, a 'cancel' button and a 'rewind' button (to remove the most recent OR-term added) will appear on the same panel. You can also hit Esc to cancel a currently under-construction OR predicate.
As a reminder, hold shift when you enter a tag to start an OR chain. Further shift+enter events will append new tags to the chain, and a bare enter will cap it off. I will write out some proper help for this.
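If it helps to picture the flow, here is a toy model of the enter/shift+enter behaviour. It is only a sketch to show the keyboard logic, not the client's real predicate objects:

```python
class OrChainBuilder:
    # a toy model of the enter/shift+enter flow, not the client's real classes

    def __init__(self):
        self._terms = []

    def enter(self, tag, shift_held):
        if shift_held:
            # shift+enter: append this tag to the under-construction OR chain
            self._terms.append(tag)
            return None
        if self._terms:
            # bare enter with a chain in progress: cap it off as one OR predicate
            self._terms.append(tag)
            predicate = ' OR '.join(self._terms)
            self._terms = []
            return predicate
        # no chain in progress, so this is a normal single-tag predicate
        return tag

    def rewind(self):
        # the 'rewind' button: drop the most recent OR-term added
        if self._terms:
            self._terms.pop()

    def cancel(self):
        # the 'cancel' button / Esc: throw the whole chain away
        self._terms = []

builder = OrChainBuilder()
builder.enter('blue eyes', shift_held=True)
print(builder.enter('green eyes', shift_held=False))  # blue eyes OR green eyes
```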
OR search is basically finished as a v1.0 now. I still have some last tidy-up jobs to do, but I am overall happy with it.
the rest
I built on the past weeks' thumbnail experiments and have written a two-stage thumbnail rendering system that gets thumbs on screen faster (even if they are the wrong size and so look fuzzy) and then regenerates any needed clearer versions in the background, replacing them in-place on screen over the following seconds. It is much smoother and faster than before, and it is pretty neat to see a fuzzy thumb suddenly fade into a clearer version, but I still have a little work to do here.
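The basic two-stage idea looks roughly like this. It is a simplified sketch, with PIL and a plain thread standing in for the client's actual caches and job scheduler:

```python
import threading
from PIL import Image

def present_thumbnail(stored_thumb_path, source_path, target_size, draw_callback):
    # stage 1: blow the existing (possibly too-small) stored thumb up to the
    # target size and draw it immediately, accepting some fuzziness
    rough = Image.open(stored_thumb_path).resize(target_size, Image.BILINEAR)
    draw_callback(rough)

    # stage 2: regenerate a clean thumbnail from the original file in the
    # background and draw it over the fuzzy one once it is ready
    def regenerate():
        clean = Image.open(source_path)
        clean.thumbnail(target_size, Image.LANCZOS)
        draw_callback(clean)

    threading.Thread(target=regenerate, daemon=True).start()
```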
Now, when you trash a file, a context-appropriate 'deletion reason', such as 'Deleted from Media Page.', will be saved. These statements are mostly trivial, but duplicate filter actions will specify a bit more about the duplicate processing action type. This text will be recovered in an import status window for 'deleted' status results, just as a help if you want to investigate more closely (e.g. perhaps you are not sure why a particular file failed to import, and then you see the reason is that you already decided you have a better duplicate version of it). Any files deleted before this system will just give "Unknown deletion reason."
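Conceptually, the reason is just an extra text field stored alongside the deletion record, with the generic fallback for anything older. A toy sqlite sketch follows; the table and the exact reason strings here are made up, not the client's real schema:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE deleted_files ( hash TEXT PRIMARY KEY, reason TEXT );')

def record_trash(file_hash, reason=None):
    # fall back to the generic text for deletions that predate the reason system
    con.execute(
        'INSERT OR REPLACE INTO deleted_files ( hash, reason ) VALUES ( ?, ? );',
        (file_hash, reason or 'Unknown deletion reason.')
    )

record_trash('abcd1234', 'Deleted from Media Page.')
record_trash('beef5678')  # an older-style deletion with no reason recorded
```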
Adding OR search caused a couple of search flaws: bare system:rating searches were delivering since-deleted files, and some searches combining OR predicates with regular tags were delivering subsets of the real results. I believe I have fixed both of these, and now many previously slow OR searches should run quite a bit faster, especially when accompanied by non-OR predicates.
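The speed-up comes from letting the cheaper non-OR predicates narrow the candidate pool before the OR work happens. Here is a toy set-based model of that ordering; it is an illustration only, and the real work happens in SQLite with very different code:

```python
def search(file_ids_by_tag, predicates):
    # file_ids_by_tag maps tag -> set of file ids; each predicate is one of
    # ('tag', t), ('not', t) or ('or', [t1, t2, ...])
    plain = [p for p in predicates if p[0] == 'tag']

    if plain:
        # start from the smallest plain-tag result set to keep the working pool tiny
        results = min((file_ids_by_tag.get(p[1], set()) for p in plain), key=len).copy()
    else:
        results = set().union(*file_ids_by_tag.values())

    for kind, value in predicates:
        if kind == 'tag':
            results &= file_ids_by_tag.get(value, set())
        elif kind == 'not':
            results -= file_ids_by_tag.get(value, set())
        elif kind == 'or':
            matched = set().union(*(file_ids_by_tag.get(t, set()) for t in value))
            results &= matched

    return results
```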
I gave export folders a pass and fixed several bugs and inefficiencies, particularly for 'synchronised' folders that produce subdirectories from their filenames, which were often deleting those subdirectories. Also, an export folder or manual export event that attempts to produce a file path above the base export directory (e.g. if the generated filename begins with ..\ or ../) will now fail with some error text to explain what happened. If you use export folders a lot, particularly 'synchronised' ones, please let me know if you still get any unusual behaviour.
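The new path check is the usual 'resolve, then confirm it is still inside the base directory' pattern. A minimal sketch, assuming a simple filename string; the function name is made up for illustration:

```python
import os

def resolve_export_path(export_dir, filename):
    # join and normalise, then confirm the result is still inside export_dir
    base = os.path.realpath(export_dir)
    candidate = os.path.realpath(os.path.join(base, filename))

    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(
            'Refusing to export "{}": the generated path escapes {}'.format(filename, export_dir)
        )

    return candidate

print(resolve_export_path('/tmp/export', 'safe/name.png'))
# resolve_export_path('/tmp/export', '../escape.png') would raise ValueError
```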
full list
or search:
under construction OR predicates now present at the top of the regular tag results list, prepended with 'OR: ', and skipping default selection
this new OR line is enter-able, which will submit it as-is, rather than adding new preds
hitting escape on a 'search' tag input box that is empty but has an under-construction OR predicate will cancel the OR pred
hitting escape on a 'search' tag input box otherwise should more reliably kill its focus when the dropdown is a float window
improved OR search efficiency significantly with dynamic OR search triggering based on other search predicates. OR searches including negated '-tag' components should be massively faster when paired with non-OR tag or file search predicates
I believe I fixed a search issue that would sometimes return insufficient results when OR preds are mixed with certain other combinations of tags
improved reliability of some thumbnail refresh calls
cleaned up a bunch of OR handling ui code
.
the rest:
after previous weeks' experiments, wrote new double-layer thumbnail loading system--now too-small thumbs will quickly scale up fuzzily straight to screen, and then in the coming seconds, the nice regenerated full-size thumb will be made and drawn in place as ready. it presents much faster and looks better, but there is some cleanup to do here that I will tackle next week
all local file trashing events now record a context-appropriate deletion statement such as "Deleted from Media Viewer." this value is recovered in 'deleted' import status 'notes'. You will mostly see 'Unknown deletion reason.', for files deleted before this new system, but it will populate with appropriate info over time
fixed a search optimisation that was not cross-referencing with file domain, meaning for instance that bare system:rating calls were returning since-deleted files
upnp management window now uses new listctrl
cleaned up some old custom page-naming code
added a 'data' debug call to clear out all cached thumbnails and force an instant ui thumb reload
fixed the trash bmp misalignment, ha ha
removed e-hentai login script from the defaults, since this testing script is not appropriate for new users
dejanked some media viewer video transitions by cleaning up animation bar rendering and smoothing out video buffer initialisation
cleaned out some surplus subprocess wait calls that were hanging some systems on various 'open externally' calls
fixed multiple syncing problems with 'synchronise' export folders that produce files with subdirectories. subdirectory structures should now be synced correctly and empty folders deleted
export folders that collapse multiple file results to the same duplicated name should, after the next run, overwrite that shared name less often
if an export folder or the regular export dialog makes a file destination path that is above the chosen directory (e.g. if the path starts with ../ or ..\), the export job will error out with an explanation
big manual file exports _should_ be politer to the ui and cause fewer hangs
doing page tab drag and drops may have less post-drop ui jank on linux, continued feedback would be appreciated
moved 'reason' handling for all content updates to its own area, which neatens many content update data handling issues
fixed petitioning a tag via a shortcut, which had bad reason handling
fixed an issue with committing pending ipfs items that was overchecking service permissions
fixed some remaining bad wx code in the unit tests
misc file status reporting cleanup
next week
I'll tidy up some last OR search stuff and clear out some small jobs. I would like to reduce some lag when the client file manager has a lot of competing access (e.g. when lots of new thumbnails need to be generated), and I would also like to improve some Linux stability with some unified bitmap management.
0 notes