#how to disable safe mode in mysql
techygeeks · 7 months ago
Disable safe mode in MySQL
Disabling safe mode in MySQL is a significant action that affects the security and integrity of your database system. Safe mode, in this context the restriction controlled by the "secure-file-priv" option, is a feature designed to enhance security by restricting certain file operations that could be exploited by malicious users.
However, there might be scenarios where you need to disable safe mode, but it’s crucial to understand the implications and risks involved.
Here’s a detailed explanation of how to disable safe mode in MySQL and the considerations you should keep in mind:
1. Understanding Safe Mode in MySQL:
Safe mode is a MySQL server option that restricts certain operations for security reasons. One of its primary functions is to limit the locations from which files can be loaded or written by the server.
By default, MySQL’s safe mode sets the “secure-file-priv” option to a specific directory, typically a location that’s considered secure. This prevents users from loading or writing files from arbitrary locations on the filesystem, which could be exploited for malicious purposes.
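Before touching the configuration, it helps to confirm how the restriction is currently set. This is a minimal check; the value returned depends on your platform and packaging defaults:

```sql
-- Inspect the current secure_file_priv setting from any MySQL client.
--   ''       -> no restriction (file operations allowed anywhere)
--   a path   -> LOAD DATA / SELECT ... INTO OUTFILE limited to that directory
--   NULL     -> file import/export operations disabled entirely
SHOW VARIABLES LIKE 'secure_file_priv';
```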
2. Disabling Safe Mode:
To disable safe mode in MySQL, you typically need to modify the server configuration file (my.cnf or my.ini, depending on your system).
Locate the configuration file. This file is often located in the MySQL installation directory or in a configuration directory specified during installation.
Open the configuration file in a text editor.
Search for the “secure-file-priv” option. This option specifies the directory from which MySQL can load or write files.
Set "secure-file-priv" to an empty string (secure_file_priv = ""). This explicitly disables the restriction. Note that simply commenting out or removing the line may not have the same effect, because on some builds the server falls back to a compiled-in default directory when the option is absent.
Save the changes to the configuration file.
Restart the MySQL server for the changes to take effect. This can typically be done with a command like sudo service mysql restart or sudo systemctl restart mysql.
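Taken together, the steps above might look like this on a typical Linux installation. The file path below is an assumption that varies by distribution:

```ini
# /etc/mysql/my.cnf (or /etc/my.cnf, or my.ini on Windows)
# Setting the option to an empty string removes the restriction explicitly;
# merely deleting the line can fall back to a compiled-in default directory
# on some builds.
[mysqld]
secure_file_priv = ""
```

After saving, restart the server (for example `sudo systemctl restart mysql`) and verify the new value with `SHOW VARIABLES LIKE 'secure_file_priv';`.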
3. Considerations and Risks:
Disabling safe mode removes a layer of security from your MySQL server. Without safe mode, MySQL may allow file operations from any location on the filesystem, potentially exposing your system to security vulnerabilities.
Only disable safe mode if you have a specific requirement that cannot be fulfilled while safe mode is enabled, and you understand the risks involved.
If you do disable safe mode, ensure that your MySQL server is properly secured through other means, such as firewall rules, user permissions, and access controls.
Regularly monitor your MySQL server for any suspicious activity or unauthorized access attempts, especially after disabling safe mode.
4. Alternative Solutions:
If you need to load or write files in a specific directory while safe mode is enabled, consider changing the “secure-file-priv” option to point to that directory instead of disabling safe mode entirely. This maintains a level of security while allowing the necessary operations.
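For instance, a sketch of that safer alternative (the directory path is illustrative; make sure the MySQL server user can read and write it):

```ini
# my.cnf / my.ini
# Allow file import/export only inside one dedicated directory
# instead of disabling the restriction altogether.
[mysqld]
secure_file_priv = "/var/lib/mysql-files"
```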
Explore other methods or tools that can accomplish your task without requiring you to disable safe mode.
brainloading223 · 3 years ago
Desperados 3 Mac Download
Desperados III
Developer(s): Mimimi Games
Publisher(s): THQ Nordic
Director(s): Dominik Abé
Artist(s): Bianca Dörr
Writer(s): Martin Hamberger
Composer(s): Filippo Beck Peccoz
Series: Desperados
Engine: Unity
Platform(s): Microsoft Windows, PlayStation 4, Xbox One, macOS, Linux
Release: 16 June 2020
Genre(s): Real-time tactics
Mode(s): Single-player
Desperados III is a real-time tactics video game developed by Mimimi Games and published by THQ Nordic. The first installment in the Desperados series since the 2007 spin-off title Helldorado, it was released for Microsoft Windows, PlayStation 4, Xbox One, macOS, and Linux.(1)(2)
Desperados III is a story-driven, hardcore tactical stealth game, set in a ruthless Wild West scenario. Play smart if you want to succeed. A good plan can make the difference between survival and finding yourself at the business end of a pistol.
Gameplay
Desperados III is a real-time tactics stealth video game. The game features five playable characters, each with access to unique weapons and abilities. Players can play the game as a stealth game, in which they assassinate enemies silently or disguise kills as accidental deaths. It is possible for players to complete missions without killing anyone by knocking out and tying up enemies. Bodies of incapacitated enemies need to be hidden, or else other enemies patrolling the area will discover them and trigger an alarm that calls for reinforcements.(3) The cones of vision of all enemies are displayed, allowing players to navigate the map without alerting them.(4)
Players can also play the game as an action game and utilize the showdown mode to temporarily stop the game, allowing them to coordinate and chain the actions of the player's squad. In showdown mode, players can issue commands to each of the characters in the party. When the player exits showdown mode, the characters execute the commands issued by the player simultaneously.(5)
Story
The story is a prequel to Desperados: Wanted Dead or Alive, the first game in the series, and explores the origin of the series' protagonist John Cooper.(6) The game is set in the Wild West in the 1870s and features various locations including Colorado, Louisiana and Mexico. In addition to John Cooper, the game also includes Hector Mendoza, Doc McCoy, Isabelle Moreau, and Kate O'Hara as playable characters.
The story follows bounty hunter John Cooper as he pursues Frank, a notorious bandit leader responsible for killing John's father, James Cooper. Along the way, Cooper meets Doctor McCoy, who was hired by the DeVitt Company, a wealthy corporation, to defend the train Cooper was taking on his way to the town of Flagstone. Once in Flagstone, Cooper learns from his friend Hector Mendoza that Frank is at the mansion of the soon-to-be-married local mayor. In the meantime, the mayor's prospective bride, Kate O'Hara, finds out that her betrothed has sold her family's ranch to DeVitt. In the escalating altercation, O'Hara shoots the mayor as John Cooper walks in, seeking Frank. The newly met pair promptly escape the mansion and head for the O'Hara ranch to defend it from the attacking DeVitt company men. The defense is successful, but Kate's uncle Ian perishes in the battle.
The group eventually gets captured on their way to New Orleans, where Frank, who is working for DeVitt, is located. A Voodoo practitioner called Isabelle Moreau rescues them. Together, they set out to find her partner, Marshall Wayne, who disappeared while investigating DeVitt. Frank's gang have imprisoned many people out in the Louisiana wetlands, to be shipped off to work in DeVitt's mines. Once they rescue Wayne, the group sets fire to the old riverboat that functions as a headquarters for Frank's people. This act puts Frank on alert, and he locks down the city. Having sneaked past the roadblocks and guards, Cooper asserts that he wants to face Frank alone, to which Kate and Hector object. At Hector's mention of James Cooper's fate, John snaps and shoots Hector in the arm. Alone, he proceeds onto a docked freight ship, where he and Frank duel. Cooper is outdrawn and wounded.
The entire group gets captured again and sent to DeVitt's mines as slaves. They eventually escape after a week, but McCoy cuts his losses and abandons them. The others undertake Wayne's commission to abduct DeVitt himself from a lavish party at his mansion. They manage to spirit DeVitt out, but at the last moment their captive outwits them and holds them at gunpoint, only to be disabled by the returning McCoy. With the group back together, they hunt down Frank at Devil's Canyon, where James Cooper and a young John pursued Frank years ago. Frank and John have another stand-off, watched over by Frank's lieutenants. The rest of the group overpowers Frank's posse, while John outdraws and finishes off Frank.
Development
The game was developed by German studio Mimimi Games, the developer of Shadow Tactics: Blades of the Shogun, whose gameplay mechanics were similar to this game's. THQ Nordic, which acquired the rights to the franchise from Atari in 2013, served as the game's publisher.(7) Since the last game in the series had been released more than a decade earlier, the team made Desperados III a prequel story so it would be accessible to players new to the franchise or to the genre. To achieve this, the team ensured that the game features an adequate tutorial system that teaches the player the gameplay foundations, and implemented gamepad controls for players who use a controller.(8)(6) The game's showdown mode, which allows players to pause time completely, was created after receiving player feedback about the limitations of Shadow Tactics's 'shadow mode'. Unlike Shadow Tactics, the game features a more playful tone, with characters bantering with each other more frequently.(9)
The game was officially announced by THQ Nordic in August 2018.(10) Initially set to be released in 2019, the game was released on 16 June 2020 for Microsoft Windows, PlayStation 4 and Xbox One.(11)
Updates and Expansions
In July 2020, Mimimi and THQ Nordic began supplying free updates for the game, which include additional mission scenarios. The first updates entail a loose frame story, titled The Baron's Challenge, in which the main characters are hired by an enigmatic figure, known simply as the Baron, to undertake certain missions for the entertainment of his patrons. Each mission can be unlocked by successfully completing one or several levels in the main game. While the settings are basically the same as in the main story, each of the 14 new missions includes a different objective, sometimes with the characters' in-play options restricted. In one example the player is required to eliminate certain enemies using environmental kills only, meaning that their other weapons are locked down for the scenario's duration.(12)(13)
Between September and November 2020, Mimimi and THQ Nordic also began publishing a purchasable three-part DLC story expansion, titled 'Money for the Vultures'. The plot is set three months after the events in the main game; Rosie, an NPC previously met in Baton Rouge (Mission 7), hires Cooper's group to hunt for the hidden wealth of Vincent DeVitt.(14)(15)
In December 2020, two new updates were provided: The 'Veteran Bounty Hunter Mode', which allows the player to optionally add the other protagonists to a level where any of them were originally not available (this option does not exist for the Baron's Challenges), and the 'Level Editor Light', a cheat which allows (in the PC version only) the complete rearrangement of a mission map's characters and items.(16)(17)
Reception
Aggregate scores:
Metacritic: (PC) 86/100(18), (PS4) 82/100(19), (XONE) 85/100(20)
Review scores:
GameSpot: 9/10(21)
GameStar: 88/100(22)
Hardcore Gamer: 4.5/5(23)
IGN: 8/10(24)
PC Gamer (US): 86/100(26)
PC Games: 9/10(25)
Push Square(27)
Desperados III received 'generally favorable' reviews, according to review aggregator Metacritic.(18)(19)(20)
It was nominated for the category of Best Sim/Strategy game at The Game Awards 2020.(28)
References
^Wales, Matt (19 February 2020). 'Wild West tactical stealth sequel Desperados 3 now due this summer'. Eurogamer. Retrieved 22 February 2020.
^O'Connor, Alice (2020-09-02). 'Desperados 3 is now on Mac and Linux, and its first DLC is out'. Rock, Paper, Shotgun. Retrieved 2020-10-26.
^Moyse, Chris (22 May 2020). 'Desperados III trailer tells greenhorns all they need to know'. Destructoid. Retrieved 14 June 2020.
^Walker, Alex (4 March 2020). 'Desperados 3 Is More Stealth Tactics Done Well'. Kotaku. Retrieved 14 June 2020.
^Morton, Lauren (20 May 2020). 'Desperados 3 gets a lengthy gameplay trailer before launch next month'. Rock, Paper, Shotgun. Retrieved 14 June 2020.
^ abTakahashi, Dean (10 June 2019). 'Desperados III: Why THQ Nordic is making a prequel for the stealth tactics series'. VentureBeat. Retrieved 14 June 2020.
^Sarker, Samit (24 June 2013). 'Nordic Games acquires rights to Atari's Desperados and Silver'. Polygon. Retrieved 14 June 2020.
^Bishop, Sam (30 August 2019). 'Desperados 3 'the perfect entry point for new players''. Gamereactor. Retrieved 14 June 2020.
^Cox, Matt (19 June 2019). 'Desperados III is Shadow Tactics wearing a lovely cowboy coat that lets you pause'. Rock, Paper, Shotgun. Retrieved 14 June 2020.
^Horti, Samuel (21 August 2018). 'Desperados 3 announced, led by Shadow Tactics developer Mimimi'. PC Gamer. Retrieved 14 June 2020.
^Wakeling, Richard (21 April 2020). 'Desperados 3 Release Date Announced'. GameSpot. Retrieved 14 June 2020.
^Su, Jake (24 July 2020). 'Desperados 3 Update Adds More Fun Challenges to the Wild, Wild West'. PC Invasion. Retrieved 18 October 2020.
^Su, Jake (20 August 2020). 'Desperados III Update Adds Four More Baron's Challenges'. EGM. Retrieved 18 October 2020.
^Binsack, Tom (2 September 2020). 'Desperados 3 The First Story DLC Money for the Vultures is Out Now'. Games Guides. Retrieved 18 October 2020.
^Sinha, Ravi (2 September 2020). 'Desperados 3 – Money for the Vultures Part 1 DLC Out Now'. Gaming Bolt. Retrieved 18 October 2020.
^Romano, Sal (9 December 2020). 'Desperados III 'Bounty Mode' update now available'. Gematsu. Retrieved 9 December 2020.
^'Desperados III: 'Level Editing Cheats''. Mimimi Games. Retrieved 10 December 2020.
^ ab'Desperados III for PC Reviews'. Metacritic. Retrieved 20 June 2020.
^ ab'Desperados III for PlayStation 4 Reviews'. Metacritic. Retrieved 20 June 2020.
^ ab'Desperados III for Xbox One Reviews'. Metacritic. Retrieved 20 June 2020.
^Wildgoose, David (4 August 2020). 'Desperados 3 Review - Revolvers And Redos'. GameSpot. Retrieved 4 August 2020.
^Deppe, Martin (12 June 2020). 'Desperados 3 in the test: The best real-time tactical game since Commandos 2'. GameStar. Retrieved 12 June 2020.
^Estrada, Marcus (19 June 2020). 'Review: Desperados III'. Hardcore Gamer. Retrieved 19 June 2020.
^Ogilvie, Tristan (12 June 2020). 'Desperados 3 Review'. IGN. Retrieved 12 June 2020.
^Schutz, Felix (20 June 2020). 'Desperados 3 put to the test: Wild West tactics at its best (update)'. PC Games. Retrieved 20 June 2020.
^Brown, Fraser (12 June 2020). 'Desperados 3 review'. PC Gamer. Retrieved 12 June 2020.
^McCormick, John Cal (12 June 2020). 'Desperados III Review (PS4)'. Push Square. Retrieved 12 June 2020.
^Tassi, Paul (December 11, 2020). 'Here's The Game Awards 2020 Winners List With A Near-Total 'Last Of Us' Sweep'. Forbes.
External links
Desperados III at MobyGames
Retrieved from 'https://en.wikipedia.org/w/index.php?title=Desperados_III&oldid=1014359412'
globalmediacampaign · 4 years ago
Support for Percona XtraDB Cluster in ProxySQL (Part One)
In recent times I have been designing several solutions focused on High Availability and Disaster Recovery. Some of them use Percona Server for MySQL with Group Replication, some use Percona XtraDB Cluster (PXC). What many of them had in common was the use of ProxySQL for the connection layer, because I consider a layer 7 proxy preferable given the advantages it provides in read/write splitting and SQL filtering.

The other positive aspect of ProxySQL, at least for Group Replication, is the native support, which allows very quick resolution of possible node failures. ProxySQL has Galera support as well, but in the past that had shown itself to be pretty unstable, and the old method of using the scheduler was still the best way to go. After Percona Live Online 2020 I decided to try it again and see if at least the basics were now working fine.

What I Have Tested

I was not looking for complicated tests that would have included different levels of transaction isolation. I was instead interested in the simpler, more basic ones. My scenario was:

1 ProxySQL node v2.0.15 (192.168.4.191)
1 ProxySQL node v2.1.0 (192.168.4.108)
3 PXC 8.20 nodes (192.168.4.22/23/233) with internal network (10.0.0.22/23/33)

ProxySQL was freshly installed. All the commands used to modify the configuration are here. Tests were done first using ProxySQL v2.0.15, then v2.1.0. Only if results diverge will I report the version and results.

PXC Failover Scenario

As mentioned above, I am going to focus on the failover needs, period.
I will have two different scenarios:

Maintenance
Node crash

From the ProxySQL point of view I will have three scenarios, always with a single Primary:

Writer is NOT a reader (options 0 and 2)
Writer is also a reader

The configuration for the native support will be:

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.22',100,3306,10000,2000,'DC1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.22',101,3306,100,2000,'DC1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.23',101,3306,10000,2000,'DC1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.233',101,3306,10000,2000,'DC1');

Galera host groups:

Writer: 100
Reader: 101
Backup_writer: 102
Offline_hostgroup: 9101

Before going ahead, let us analyze the mysql_servers settings. As you can notice, I am using the weight attribute to tell ProxySQL which is my preferred writer. But I also use weight for the READ host group to indicate which servers should be used and how. Given that, we have:

Write
192.168.4.22 is the preferred Primary
192.168.4.23 is the first failover
192.168.4.233 is the last chance

Read
192.168.4.233/23 have the same weight, and load should be balanced between the two of them
192.168.4.22, given it is the preferred writer, should NOT receive the same load in reads and has a lower weight value.
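For completeness, a sketch of how these definitions are activated, assuming the standard ProxySQL admin interface (port 6032):

```sql
-- Connect with: mysql -u admin -padmin -h 127.0.0.1 -P 6032
-- After the INSERTs above, push the configuration to runtime and persist it:
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```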
The Tests

First Test

The first test is to see how the cluster will behave in the case of 1 writer and 2 readers, with the option writer_is_also_reader = 0. To achieve this, the settings for ProxySQL will be:

insert into mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values (100,102,101,9101,1,1,0,10);

As soon as I load this to runtime, ProxySQL should move the nodes to the relevant host groups. But this is not happening; instead, it keeps the readers in the writer HG and SHUNs them.

+--------+-----------+---------------+----------+---------+
| weight | hostgroup | srv_host      | srv_port | status  |
+--------+-----------+---------------+----------+---------+
| 10000  | 100       | 192.168.4.233 | 3306     | ONLINE  |
| 10000  | 100       | 192.168.4.23  | 3306     | SHUNNED |
| 10000  | 100       | 192.168.4.22  | 3306     | SHUNNED |
| 10000  | 102       | 192.168.4.23  | 3306     | ONLINE  |
| 10000  | 102       | 192.168.4.22  | 3306     | ONLINE  |
+--------+-----------+---------------+----------+---------+

This is, of course, wrong. But why does it happen? The reason is simple: ProxySQL expects to see all nodes in the reader group with the READ_ONLY flag set to 1.

In the ProxySQL documentation we can read: "writer_is_also_reader=0: nodes with read_only=0 will be placed either in the writer_hostgroup and in the backup_writer_hostgroup after a topology change, these will be excluded from the reader_hostgroup."

This is conceptually wrong. A PXC cluster is a tightly coupled replication cluster, with virtually synchronous replication. One of its benefits is to have the nodes "virtually" aligned with respect to the data state. In this kind of model, the cluster is data-centric, and each node shares the same data view. What it also means is that, if correctly set up, the nodes will be fully consistent in data READs. The other characteristic of the cluster is that ANY node can become a writer at any time.
While best practices indicate that it is better to use one writer at a time as Primary to prevent certification conflicts, this does not mean that the nodes not currently elected as Primary should be prevented from becoming a writer. Which is exactly what the READ_ONLY flag does if activated.

Not only that: the need to have READ_ONLY set means that we must change it BEFORE the node can become a writer in case of failover. This, in short, means we need either a topology manager or a script that will do that, with all the relative checks and logic to be safe. At failover time this adds delay and complexity when it's not really needed, and it goes against the concept of the tightly coupled cluster itself. Given the above, we can say that this ProxySQL method related to writer_is_also_reader = 0, as it is implemented today for Galera, is at best useless.

Why is it working for Group Replication? That is easy: because Group Replication, when used in single-Primary mode, internally uses a mechanism to lock/unlock the nodes when non-primary. That internal mechanism was implemented as a security guard to prevent random writes on multiple nodes, and it also manages the READ_ONLY flag.

Second Test

Let us move on and test with writer_is_also_reader = 2. Again from the documentation: "writer_is_also_reader=2: Only the nodes with read_only=0 which are placed in the backup_writer_hostgroup are also placed in the reader_hostgroup after a topology change i.e. the nodes with read_only=0 exceeding the defined max_writers."
Given the settings as indicated above, my layout before enabling Galera support is:

+--------+-----------+---------------+----------+--------+
| weight | hostgroup | srv_host      | srv_port | status |
+--------+-----------+---------------+----------+--------+
| 10000  | 100       | 192.168.4.22  | 3306     | ONLINE |
| 10000  | 101       | 192.168.4.233 | 3306     | ONLINE |
| 10000  | 101       | 192.168.4.23  | 3306     | ONLINE |
| 100    | 101       | 192.168.4.22  | 3306     | ONLINE |
+--------+-----------+---------------+----------+--------+

After enabling Galera support:

+--------+-----------+---------------+----------+---------+
| weight | hostgroup | srv_host      | srv_port | status  |
+--------+-----------+---------------+----------+---------+
| 10000  | 100       | 192.168.4.233 | 3306     | ONLINE  |
| 10000  | 100       | 192.168.4.23  | 3306     | SHUNNED |
| 10000  | 100       | 192.168.4.22  | 3306     | SHUNNED |
| 10000  | 101       | 192.168.4.23  | 3306     | ONLINE  |
| 10000  | 101       | 192.168.4.22  | 3306     | ONLINE  |
| 10000  | 102       | 192.168.4.23  | 3306     | ONLINE  |
| 10000  | 102       | 192.168.4.22  | 3306     | ONLINE  |
+--------+-----------+---------------+----------+---------+

So node ending with 22 (the elected Primary) is not in the reader pool. Which can be OK, I assume. But what is not OK at all is that the READERS now have completely different weights. Nodes x.23 and x.233 are NOT balancing the load any longer, because the weight is not the same or the one I defined. It is instead copied over from the WRITER settings. Well, of course this is wrong and not what I want. Anyhow, let's test the READ failover.
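To observe the layout the way these tables show it, one option is to query the ProxySQL stats tables. This is a sketch; the outputs in this post also include the weight column, which comes from runtime_mysql_servers:

```sql
-- Against the ProxySQL admin interface:
SELECT hostgroup, srv_host, srv_port, status, ConnUsed
FROM stats_mysql_connection_pool
ORDER BY hostgroup, srv_host;
```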
I will use sysbench read-only:

sysbench ./src/lua/windmills/oltp_read.lua --mysql-host=192.168.4.191 --mysql-port=6033 --mysql-user=app_test --mysql-password=test --mysql-db=windmills_s --db-driver=mysql --tables=10 --table_size=10000 --rand-type=zipfian --rand-zipfian-exp=0.5 --skip_trx=true --report-interval=1 --mysql_storage_engine=innodb --auto_inc=off --histogram --table_name=windmills --stats_format=csv --db-ps-mode=disable --point-selects=50 --range-selects=true --threads=50 --time=2000 run

mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 100
backup_writer_hostgroup: 102
       reader_hostgroup: 101
      offline_hostgroup: 9101
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 10
                comment: NULL
1 row in set (0.01 sec)

Test running:

+--------+-----------+---------------+----------+---------+----------+
| weight | hostgroup | srv_host      | srv_port | status  | ConnUsed |
+--------+-----------+---------------+----------+---------+----------+
| 100    | 100       | 192.168.4.233 | 3306     | SHUNNED | 0        |
| 1000   | 100       | 192.168.4.23  | 3306     | SHUNNED | 0        |
| 10000  | 100       | 192.168.4.22  | 3306     | ONLINE  | 0        |
| 100    | 101       | 192.168.4.233 | 3306     | ONLINE  | 1        |
| 1000   | 101       | 192.168.4.23  | 3306     | ONLINE  | 51       |
| 100    | 102       | 192.168.4.233 | 3306     | ONLINE  | 0        |
| 1000   | 102       | 192.168.4.23  | 3306     | ONLINE  | 0        |
+--------+-----------+---------------+----------+---------+----------+

As indicated above, the reads are not balanced.
Removing node x.23 using wsrep_reject_queries=all:

+--------+--------------+---------------+---------+----------+
| weight | hostgroup_id | srv_host      | status  | ConnUsed |
+--------+--------------+---------------+---------+----------+
| 100    | 100          | 192.168.4.233 | SHUNNED | 0        |
| 10000  | 100          | 192.168.4.22  | ONLINE  | 0        |
| 100    | 101          | 192.168.4.233 | ONLINE  | 48       |
| 100    | 102          | 192.168.4.233 | ONLINE  | 0        |
+--------+--------------+---------------+---------+----------+

The remaining node x.233 is taking all the reads, good. If I set wsrep_reject_queries=all also on x.233:

+--------+--------------+---------------+---------+
| weight | hostgroup_id | srv_host      | status  |
+--------+--------------+---------------+---------+
| 10000  | 100          | 192.168.4.22  | ONLINE  |
| 100    | 9101         | 192.168.4.233 | SHUNNED |
| 10000  | 9101         | 192.168.4.23  | ONLINE  |
+--------+--------------+---------------+---------+

And the application failed:

FATAL: mysql_drv_query() returned error 9001 (Max connect timeout reached while reaching hostgroup 101 after 10000ms) for query 'SELECT id, millid, date,active,kwatts_s FROM windmills2 WHERE id=9364'

Now, this may be like this by design, but I have serious difficulties understanding the reasoning here, given we allow the platform to fail serving while we still have a healthy server. Last but not least, I am not allowed to decide WHICH nodes the backup_writers are; ProxySQL will choose them from my writer list of servers. So why not also include the one I have declared as Primary, at least in case of need? ¯\_(ツ)_/¯

Third Test

OK, last try, with writer_is_also_reader = 1.

mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1.
row ***************************
       writer_hostgroup: 100
backup_writer_hostgroup: 102
       reader_hostgroup: 101
      offline_hostgroup: 9101
                 active: 1
            max_writers: 1
  writer_is_also_reader: 1
max_transactions_behind: 10
                comment: NULL
1 row in set (0.01 sec)

And now I have:

+--------+--------------+---------------+---------+----------+
| weight | hostgroup_id | srv_host      | status  | ConnUsed |
+--------+--------------+---------------+---------+----------+
| 100    | 100          | 192.168.4.233 | SHUNNED | 0        |
| 1000   | 100          | 192.168.4.23  | SHUNNED | 0        |
| 10000  | 100          | 192.168.4.22  | ONLINE  | 0        |
| 100    | 101          | 192.168.4.233 | ONLINE  | 0        |
| 1000   | 101          | 192.168.4.23  | ONLINE  | 0        |
| 10000  | 101          | 192.168.4.22  | ONLINE  | 35       |
+--------+--------------+---------------+---------+----------+

http://www.tusacentral.com/joomla/index.php/mysql-blogs/230-support-for-percona-xtradb-cluster-in-proxysql-part-one
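The node "removal" used in the read-failover tests above relies on a PXC/Galera server variable rather than on ProxySQL itself. As a sketch, run this on the PXC node you want to drain:

```sql
-- Reject all queries on this node (ProxySQL will then shun it):
SET GLOBAL wsrep_reject_queries = ALL;

-- Put the node back into rotation afterwards:
SET GLOBAL wsrep_reject_queries = NONE;
```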
php-sp · 5 years ago
Mighty URL Shortener | Short URL Script
LIVE PREVIEW | Get it now for only $35
Mighty URL Shortener is a PHP script that takes long URLs and squeezes them into fewer characters like bit.ly and goo.gl. Our script has mighty features like Advanced Analytics, Smart targeting, Featured Administration Panel, Unlimited Members Plans, Custom Redirect Page, Password Protect, Social Media Counts, Bundles, Comments System, Edit Created Links, Unlimited Pages, Advanced API System and more…
The script works on shared, VPS and dedicated hosting plans just check if your hosting company meets the script system requirements listed here. Also, you can find a list of recommended shared hosting companies here.
Demo Account
Frontend: https://mightyurl.mightyscripts.xyz/ Administration Panel: https://mightyurl.mightyscripts.xyz/auth/signin Username: demoaccount Password: password
Short URL with Custom Page: https://mightyurl.mightyscripts.xyz/FtbHAB Short URL Mighty Info Page: https://mightyurl.mightyscripts.xyz/FtbHAB/info
Mighty Features
Advanced Analytics & Reports
Mighty URL Shortener helps you and your members to get advanced reports & analytics for your visitors like the following:
Analytics by continents
Analytics by countries
Analytics by states
Analytics by cities
Analytics by platforms
Analytics by device brand
Analytics by device name
Analytics by referrers
Analytics by browsers
Analytics by social media counts
Smart Targeting
You no longer have to create multiple links for a single link or product. Smart Targeting allows you to create a single link that routes every user to the right place based on the following:
Visitor country
Visitor operating system like Android, iOS, Mac, Windows, Windows Mobile, Linux and BlackBerry
Visitor device type like Desktop, Smart Phone and Tablet
Visitor device model like ipad, iphone, ipod, Kindle and Kindle fire
Unlimited Members Plans
Mighty URL Shortener helps you to create unlimited number of membership plans and you can control features for each plan like:
Maximum number of shortened URLs per day
Maximum number of shortened URLs per month
On/off option to allow the short link creator to edit his links, but without editing the long URL.
On/off option to allow the short link creator to change the long URL for his links
On/off option to allow the short link creator to add a custom alias when shortening a URL.
On/off option to allow the short link creator to protect short links with a password.
On/off option to allow the short link creator to delete his links.
On/off option to allow the short link creator to manage bundles (folders) and choose one when shortening a link.
On/off option to show/remove ads from short link page
Control the countdown time
On/off option to allow short link creator to display a comment box so visitors can leave their comments into the short link page.
On/off option to control which API tools displayed for each plan like:
Quick Link Tool
Mass Shrinker Tool
Full Page Script Tool
Bookmarklet Tool
Developers API Tool
Ability to enable a “Hidden” option for plans, which means only admins can see hidden plans and assign them to users; users will not see them in the member area.
Payment Gateways
You can accept payments for membership plans via different gateways like:
PayPal
Stripe
Payza
Skrill
Bitcoin – Coinbase
Bitcoin – CoinPayments
WebMoney
Perfect Money
PAYEER
Wallet Money
Bank Transfer
And more is coming ….
API Tools
Quick Link: Everyone can use the shortest way to shorten links with AdLinkFly.
Mass Shrinker: Enter up to 20 URLs (one per line) to be shrunk and added to your account
Full Page Script: If you have a website with hundreds or thousands of links you want to convert to short links, this tool will be helpful for you.
Developers API: For developers, AdLinkFly provides an API which returns responses in JSON format.
Bookmarklet Tool: Shorten links more easily. Click and drag the following link to your links toolbar.
Captcha System
Three captcha systems:
reCAPTCHA
Invisible reCAPTCHA
Solve Media captcha
Enable/Disable Captcha
Enable/Disable on anonymous short link box
Enable/Disable on Signup Form
Enable/Disable on Forgot Password Form
Enable/Disable on contact Form
Multi domains
Mighty URL Shortener allows you to add an unlimited number of domains, so your members can choose from them while shortening a link.
Featured Administration Panel
Control all of the features from the administration panel with a click of a button.
Custom Redirect Page
You can customize your redirect page to feel like your website by adding your logo and colors.
Password Protect
Set a password to protect your links from unauthorized access.
Social Media Counts
Display social media counts for most popular networks like Facebook, Google+, Pinterest, LinkedIn, StumbleUpon & Reddit
Bundles
Bundle your links for easy access and share them with the public.
Display Website Articles
Connect your website with the custom redirect page by displaying your articles.
Comments System
The Comments box lets people comment on your links.
Blog System
Admin can write posts with tips on how to use the system, or announcements to website visitors.
Announcements System
Admin can write announcements that will appear only on the members' dashboard.
Pages
You can add unlimited pages with the ability to edit and delete.
Translation Ready
Easily translate AdLinkFly to the language of your choice.
Multilingual Ready
Visitors can choose their language from the dropdown.
General
Scan added links with Google Safe Browsing Protection
Scan added links with PhishTank Protection
Testimonials system
Support form for member area
Ajax contact form.
Copy button (no Flash required anymore) for shortened links
Administration Panel
Ability to close registration
Ability to make your website private
Easily accessible & make users admins
View site statistics on the dashboard
Change website name & description
Change default site language and timezone
Add your website logo in two versions
Enable/Disable Account Activation by Email
Enable/Disable advertising features
Ability to add Head Code into front area pages
Ability to add Head Code into Auth pages like signin, sinup, forgot password pages
Ability to add Head Code into member area
Ability to add Head Code into admin area
Disallow certain domains from being shortened
Ability to prevent any link containing banned words from being shortened, by checking the destination URL title and description
Change alias min length & max length
Set Mass Shrinker Limit
Admin can add ads into various positions
Ability to change currency code
Ability to change Currency symbol
Ability to add Facebook Page URL
Ability to add Twitter Profile URL
Ability to add Google Plus URL
SMTP email support
Ability to filter users
Ability to filter links
Overview Video
Screenshots
Check script screenshots from here.
System Requirements
PHP>= 5.6.0
mod_rewrite module enabled
PDO extension
OpenSSL extension
intl extension
cURL extension
mbstring extension
DOM extension
MySQL 5.1.10 or greater
Change Log
Version 3.5.0 - (23 October 19)
- Added: Integrate the Invisible reCAPTCHA
- Added: Integrate Paytm payment method
- Added: Integrate Paystack payment method
- Added: Remember me checkbox while logging in
- Added: Option to enable captcha on the login form
- Added: Allow wildcard for subdomains for disallowed domains
- Added: Trial membership plan
- Added: Wildcard for subdomains for full-page script
- Added: GUI for the full page script
- Added: Maintenance mode
- Added: Sitemap
- Added: SEO fields for pages & blog
- Added: Now users can select multiple bundles for the same link
- Added: Database port number while installing
- Added: New PHP mail alternative method
- Added: Favicon URL
- Added: Assets CDN URL
- Enhancement: Short link process performance
- Enhancement: More compatibility with Cloudflare Flexible SSL
- Enhancement: GDPR compliance
- Enhancement: Revamp the social networks login
- Fixed: Update Coinbase integration to use the new API
- Fixed: Daily & monthly short links limit not applied on Mass Shrinker tool
- Fixed: User `urls` count is not increasing with Mass Shrinker tool
- Fixed: Alias min. & max. length is not correct
- Fixed: Invoice 404 error on member area
- Other improvements and minor bug fixes

Version 3.2.1 - (29 April 18)
- Improvements and minor bug fixes

Version 3.2.0 - (29 April 18)
- Add: Multilingual ability for plan title and description
- Add: HTTPS option for short links
- Fix: Countries don't appear on the "/info" page
- Fix: Language dropdown on the short link doesn't work
- Fix: Replace api.webthumbnail.org with api.miniature.io
- Fix: Language is always "Others" in the statistics table
- Fix: Many redirects error when changing the language from member area
- Improve: Update geoip2 database
- Improve: Update Payza links
- Improve: Sometimes link password doesn't save
- Improve: False positive results for PhishTank
- Other improvements and minor bug fixes

Version 3.1.0 (2 January 18)
- Fix: View invoice 404 error
- Fix: Bookmarklet 404 error
- Fix: Comments box has an error when disabled
- Fix: Copy button not working
- Fix: Invisible reCAPTCHA not functioning on Auth pages
- Fix: Error while editing short links
- Improve: Disqus multi-lingual
- Improve: Social login email not saved
- Improve: Change PayPal IPN
- Improve: Robots.txt file update
- Other improvements and minor bug fixes

Version 3.0.0 (13 September 17)
- NEW: Smart Targeting:
  > targeting based on country
  > targeting based on operating system
  > targeting based on device type
  > targeting based on device model
- New: Bookmarklet tool to shorten links via the browser's toolbar
- New: Stripe payment gateway
- New: CoinPayments payment gateway
- New: Perfect Money payment gateway
- New: Payeer payment gateway
- New: Add on/off for bundles into plans
- New: Add on/off for deleting short links into plans
- New: Add number of URLs that can be shortened for each plan
- New: Private service option
- New: Integrate Invisible reCAPTCHA
- New: "Banned Words" option to prevent any link containing banned words from being shortened, by checking the destination URL title and description
- New: Hidden Plans option; only admins can see hidden plans and assign them to users, but users will not see them in the member area
- New: Filter users by register and login IPs
- New: Admin can resend activation emails for pending users
- New: Admin can send a direct message to a user
- New: fr_FR language
- Fix: Admin can't edit users
- Fix: Edit buttons on admin and member dashboards are not working
- Fix: Empty Year-Month dropdown
- Improve: Add IP for users registering and logging in via social networks
- Improve: Rebuild "Full page script" tool for better results
- Improve: Rebuild "Quick Link" tool for better results
- Improve: Hide Target Link from the "/info" page
- Improve: Add bundle select dropdown when editing a short link
- Improved: Use suitable Google login scope
- Improved: Localize Year-Month dropdown
- Other improvements and minor bug fixes

Version 2.0.1 (5 April 17)
- Fixed: Error "Unknown column 'api_token' in 'field list'"

Version 2.0.0 (31 March 17)
- Rebuilt the script from scratch using modern technologies
- New: Paid membership plans
- New: PayPal payment gateway for premium membership plans
- New: Payza payment gateway for premium membership plans
- New: Skrill payment gateway for premium membership plans
- New: Webmoney payment gateway for premium membership plans
- New: Bitcoin payment gateway for premium membership plans
- New: Bank Transfer payment for premium membership plans
- New: Multi domains for short links
- New: Translation Ready
- New: Multilingual Ready
- New: Advanced API Tools:
  - Full page script
  - Quick Link
  - Mass shrinker
  - Developer API (JSON & Text formats)
- New: SSL Integration
- New: Google Safe Browsing Protection for added links
- New: PhishTank Protection for added links
- New: Testimonials system
- New: Solvemedia Captcha
- New: Blog System
- New: Announcements system
- New: Re-shorten URLs again without refreshing the page
- New: Captcha for anonymous short link box
- New: Support form for logged-in users within their member area
- Added: Option to add head code into member area
- Added: Option to add head code into admin area
- Added: Option to add head code into auth area
- Added: Option to enable/disable email activation
- Added: More and more features, please check the demo to test it

Version 1.1.0 (07-02-2016)
- Added: Add Vkontakte social network
- Added: Add SMTP email support
- Improved: Replace allow_url_fopen with curl extension
- Improved: Make index.php the default directory index page
- Improved: Short link appears after member adds a link
- Fixed: Typo fix

Version 1.0.1 (06-14-2016)
- Fixed: Contact form is not working
- Added: Visual effects to homepage

Version 1.0.0 (06-13-2016)
- First release
LIVE PREVIEW | Get it now for only $35
webdesignersolutions · 6 years ago
Site Admin demo • Source
16 years ago I stumbled into hosting with Ensim WEBppliance, which was a clusterfuck of a control panel necessitating a bunch of bugfixes. Those bugfixes spawned a control panel, apnscp (Apis Networks Control Panel), that I’ve continued to develop to this day. v3 is the first public release of apnscp and to celebrate I’m giving away 400 free lifetime licenses on r/webhosting each good for 1 server.
Visit apnscp.com/activate/webhosting-lt to get started customizing the installer. Database + PHP are vendor agnostic. apnscp supports any-version Node/Ruby/Python/Go. I’m interested in feedback, if not bugs then certainly ideas for improvement.
apnscp ships with integrated Route 53/CF DNS support in addition to Linode, DO, and Vultr. Additional providers are easy to create. apnscp includes 1-click install/updates for WordPress, Drupal, Laravel, Ghost, Discourse, and Magento. Enabling Passenger, provided you have at least 2 GB memory, opens the door to use any-version Ruby, Node, and Python on your server.
Minimum requirements
2 GB RAM
20 GB disk
CentOS 7.4
xfs or ext4 filesystem
Containers not supported (OpenVZ, Virtuozzo)
Features
100% self-hosted, no third-party agents required
1-click installs/automatic updates for WordPress, Drupal, Ghost, Discourse, Laravel, Magento
Let’s Encrypt issuance, automatic renewals
Resource enforcement via cgroups
Read-only roles for PHP
Integrated DNS for AWS, CF, Digital Ocean, Linode, and Vultr
Multi-tenancy, each account exists in a synthetic root
Any-version Node, Ruby, Python, Go
Automatic system/panel updates
OS checksums, perform integrity checks without RPM hell
Push monitoring for services
SMTP policy controls with rspamd
Firewall, brute-force restrictions on all services including HTTP with a rate-limiting sieve
Malware scrubbing
Multi-server support
apnscp won’t fix all of your woes; you still need to be smart about whom you host and what you host, but it is a step in the right direction. apnscp is not a replacement for a qualified system administrator. It is however a much better alternative to emerging panels in this market.
Installation
Use apnscp Customizer to configure your server as you’d like. See INSTALL.md for installation + usage.
Monitoring installation

apnscp will provision your server and this takes around 45 minutes to 2 hours to complete the first time. You can monitor installation real-time from the terminal:
tail -f /root/apnscp-bootstrapper.log
Post Install

If you entered an email address while customizing (apnscp_admin_email) and the server isn't in a RBL, then you will receive an email with your login information. If you don't get an email after 2 hours, log into the server and check the status:
tail -n30 /root/apnscp-bootstrapper.log
The last line should be similar to:

2019-01-30 18:39:02,923 p=3534 u=root | localhost : ok=3116 changed=1051 unreachable=0 failed=0
If failed=0, everything is set! You can reset the password and refer back to the login information to access the panel or reset your credentials. Post-install will welcome you with a list of helpful commands to get started as well. You may want to change -n30 to -n50!
If failed=n where n > 0, send me a PM, email ([email protected]), get in touch on the forums, or Discord.
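Assuming the Ansible recap line keeps the format shown above, checking the outcome can also be scripted. This is just a sketch: on a live server you would feed it the real log tail rather than the hard-coded sample line used here for illustration.

```shell
# Parse the failure count out of the bootstrapper recap line.
# On a real server: line=$(tail -n1 /root/apnscp-bootstrapper.log)
line='2019-01-30 18:39:02,923 p=3534 u=root | localhost : ok=3116 changed=1051 unreachable=0 failed=0'
failed=$(printf '%s\n' "$line" | grep -o 'failed=[0-9]*' | cut -d= -f2)
if [ "$failed" -eq 0 ]; then
  echo "bootstrap OK"
else
  echo "bootstrap reported $failed failure(s)"
fi
```

The same one-liner drops neatly into a cron job or monitoring check.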
Shoot me a PM if you have a question or hop on Discord chat. Either way feedback makes this process tick. Enjoy!
Installation FAQ
Is a system hostname necessary?
No. It can be set at a later date with cpcmd config_set net.hostname new.host.name. A valid hostname is necessary for mail to relay reliably and for valid SSL issuance. apnscp can operate without either.
Do you support Ubuntu?
No. This is a highly specialized platform. Red Hat has a proven track record of honoring its 10-year OS lifecycles, and from experience businesses like to migrate every 5-7 years. Moreover, certain facilities like tuned, used to dynamically optimize your server, are unique to Red Hat and its derivatives. As an aside, apnscp also provides a migration facility for seamless zero-downtime migrations.
How do I update the panel?
It will update automatically unless disabled. cpcmd config_set apnscp.update-policy major will set the panel to update up to major version changes. cpcmd config_set system.update-policy default will set the OS to update packages as they’re delivered. These are the default panel settings. Supported Web Apps will update within 24 hours of a major version release and every Wednesday/Sunday for asset updates (themes/plugins). An email is sent to the contact assigned for each site (siteinfo,email service variable).
If your update policy is set to “false” in apnscp-vars.yml, then you can manually update the panel by running upcp and OS via yum update -y. If you’ve opted out of 1-click updates, then caveat emptor.
Mail won’t submit from the server on 25/587 via TCP.
This is by design. Use sendmail to inject into the mail queue via binary or authenticate with a user account to ensure ESMTPA is used. Before disabling, and as one victimized by StealRat, I’d urge caution. Sockets are opaque: it’s impossible to discern the UID or PID on the other end.
To disable:
cpcmd config_set apnscp.bootstrapper postfix_relay_mynetworks true
upcp -sb mail/configure-postfix
config_set manages configuration scopes. Scopes are discussed externally. upcp is a wrapper to update the panel, reset the panel (--reset), run integrity checks (-b) with optional tags. -s skips migrations that are otherwise compulsory if present during a panel update; you wouldn’t want an incomplete platform!
My connection is firewalled and I can’t send mail directly!
apnscp provides simple smart host support via configuration scope.
How do I uninstall MySQL or PostgreSQL?
Removing either would render the platform inoperable. Do not do this. PostgreSQL handles mail, long-term statistics, and backup account metadata journaling; MySQL handles everything else, including panel data.
Oof. apnscp is taking up 1.5 GB of memory!
There are two important tunables, has_low_memory and clamav_enabled. has_low_memory is a macro that disables several components including:
clamav_enabled => false
passenger_enabled => false
variety of rspamd performance enhancements (redis, proxy worker, neural) => false
MAKEFLAGS=-j1 (non-parallelized build)
dovecot_secure_mode => false (High-security mode)
Switches multi-threaded job daemon Horizon to singular “queue”
clamav_enabled disables ClamAV as well as upload scrubbing and virus checks via Web > Web Apps. This is more of a final line of defense. So long as you are the only custodian of sites on your server, it’s safe to disable.
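As a sketch of how these could be set, assuming the tunables live in apnscp-vars.yml as the FAQ above implies (only the variable names come from this post; the file layout shown here is an assumption), a low-memory profile would look like:

```yaml
# Hypothetical excerpt of apnscp-vars.yml for a memory-constrained server.
# Variable names are the tunables discussed above; placement is illustrative.
has_low_memory: true    # macro: disables Passenger, rspamd extras, parallel builds, etc.
clamav_enabled: false   # also skips upload scrubbing and virus checks in Web Apps
```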
Resources
apnscp documentation
v3 release notes
Adding sites, logging in
Customizing apnscp
CLI helpers
Knowledgebase – focused on end-users. Administration is covered under hq.apnscp.com
Scopes – simplify complex tasks
License information
Licenses are tied to the server but may be transferred to a new server. Once transferred, apnscp will become deactivated on the old server, which means your sites will continue to operate but apnscp can no longer help you manage your server or deploy automatic updates. A copy of the license can be made either by copying /usr/local/apnscp/config/license.pem or via License > Download License in the top-right corner. Likewise, to install the license on a new machine, just replace config/license.pem with your original copy.
Submitted February 17, 2019 at 05:14PM by tsammons https://www.reddit.com/r/webhosting/comments/arqya9/built_a_control_panel_over_16_years_free_lifetime/?utm_source=ifttt
from Blogger http://webdesignersolutions1.blogspot.com/2019/02/built-control-panel-over-16-years-free.html via IFTTT
globalmediacampaign · 4 years ago
Support for Percona XtraDB Cluster in ProxySQL (Part One)
How native ProxySQL stands in failover support (both v2.0.15 and v2.1.0)

In recent times I have been designing several solutions focused on High Availability and Disaster Recovery, some of them using Percona Server for MySQL with Group Replication, some using Percona XtraDB Cluster (PXC). What many of them had in common was the use of ProxySQL for the connection layer, because I consider the use of a layer 7 proxy preferable, given the possible advantages provided in Read/Write split and SQL filtering. The other positive aspect provided by ProxySQL, at least for Group Replication, is the native support, which allows us to have a very quick resolution of possible node failures. ProxySQL has Galera support as well, but in the past that had shown to be pretty unstable, and the old method of using the scheduler was still the best way to go. After Percona Live Online 2020 I decided to try it again and see if at least the basics were now working fine.

What I Have Tested

I was not looking for complicated tests that would have included different levels of transaction isolation; I was instead interested in the more simple and basic ones. My scenario was:

1 ProxySQL node v2.0.15 (192.168.4.191)
1 ProxySQL node v2.1.0 (192.168.4.108)
3 PXC 8.20 nodes (192.168.4.22/23/233) with internal network (10.0.0.22/23/33)

ProxySQL was freshly installed. All the commands used to modify the configuration are here. Tests were done first using ProxySQL v2.0.15, then v2.1.0. Only if results diverge will I report the version and results.

PXC Failover Scenario

As mentioned above, I am going to focus on the fail-over needs, period.
I will have two different scenarios:

Maintenance
Node crash

From the ProxySQL point of view I will have three scenarios, always with a single Primary:

Writer is NOT a reader (option 0 and 2)
Writer is also a reader

The configuration of the native support will be:

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.22',100,3306,10000,2000,'Preferred writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.23',100,3306,1000,2000,'Second preferred');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.233',100,3306,100,2000,'Last chance');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.22',101,3306,100,2000,'last reader');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.23',101,3306,10000,2000,'reader1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.233',101,3306,10000,2000,'reader2');

Galera host groups:
Writer: 100
Reader: 101
Backup_writer: 102
Offline_hostgroup: 9101

Before going ahead, let us analyze the mysql_servers settings. As you can notice, I am using the weight attribute to indicate to ProxySQL which is my preferred writer, but I also use weight for the READ host group to indicate which servers should be used and how. Given that, we have:

Write
192.168.4.22 is the preferred Primary
192.168.4.23 is the first failover
192.168.4.233 is the last chance

Read
192.168.4.233/23 have the same weight and load should be balanced between the two of them
192.168.4.22, given it is the preferred writer, should NOT receive the same load in reads and has a lower weight value.
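As a rough back-of-the-envelope illustration (plain shell arithmetic, not ProxySQL code), if reads were split proportionally to the reader weights defined above, each node's share would be:

```shell
# Hypothetical illustration: proportional read share implied by the reader
# weights configured above (x.23 = 10000, x.233 = 10000, x.22 = 100).
total=$(( 10000 + 10000 + 100 ))
for w in 10000 10000 100; do
  echo "weight $w -> $(( w * 100 / total ))% of reads"
done
```

So the two dedicated readers should each serve roughly half of the read traffic, while the preferred writer gets well under 1%.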
The Tests

First Test

The first test is to see how the cluster will behave in the case of 1 writer and 2 readers, with the option writer_is_also_reader = 0. To achieve this, the settings for ProxySQL will be:

insert into mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values (100,102,101,9101,1,1,0,10);

As soon as I load this to runtime, ProxySQL should move the nodes to the relevant host groups. But this is not happening; instead, it keeps the readers in the writer HG and SHUNs them.

+---------+--------------+---------------+---------+
| weight  | hostgroup_id | srv_host      | status  |
+---------+--------------+---------------+---------+
| 100     | 100          | 192.168.4.233 | SHUNNED |
| 1000    | 100          | 192.168.4.23  | SHUNNED |
| 10000   | 100          | 192.168.4.22  | ONLINE  |
| 100     | 102          | 192.168.4.233 | ONLINE  |
| 1000    | 102          | 192.168.4.23  | ONLINE  |
+---------+--------------+---------------+---------+

This is, of course, wrong. But why does it happen? The reason is simple: ProxySQL is expecting to see all nodes in the reader group with the READ_ONLY flag set to 1. In the ProxySQL documentation we can read:

writer_is_also_reader=0: nodes with read_only=0 will be placed either in the writer_hostgroup and in the backup_writer_hostgroup after a topology change, these will be excluded from the reader_hostgroup.

This is conceptually wrong. A PXC cluster is a tightly coupled replication cluster, with virtually synchronous replication. One of its benefits is to have the nodes “virtually” aligned with respect to the data state. In this kind of model, the cluster is data-centric, and each node shares the same data view. What it also means is that, if correctly set, the nodes will be fully consistent in data reads.
The other characteristic of the cluster is that ANY node can become a writer at any time. While best practices indicate that it is better to use one writer at a time as Primary to prevent certification conflicts, this does not mean that the nodes not currently elected as Primary should be prevented from becoming a writer, which is exactly what the READ_ONLY flag does if activated. Moreover, the need to have READ_ONLY set means that we must change it BEFORE the node is able to become a writer in case of fail-over. This, in short, means the need to have either a topology manager or a script that will do that, with all the relative checks and logic to be safe. In time of fail-over, that adds time and complexity when it's not really needed, and it goes against the concept of the tightly-coupled cluster itself. Given the above, we can say that this ProxySQL method related to writer_is_also_reader=0, as it is implemented today for Galera, is, at best, useless.

Why is it working for Group Replication? That is easy: because Group Replication internally uses a mechanism to lock/unlock the nodes when non-primary, when using the cluster in single Primary mode. That internal mechanism was implemented as a security guard to prevent random writes on multiple nodes, and it also manages the READ_ONLY flag.

Second Test

Let us move on and test with writer_is_also_reader = 2. Again from the documentation:

writer_is_also_reader=2: Only the nodes with read_only=0 which are placed in the backup_writer_hostgroup are also placed in the reader_hostgroup after a topology change, i.e. the nodes with read_only=0 exceeding the defined max_writers.
Given the settings as indicated above, my layout before using Galera support is:

+---------+--------------+---------------+--------+
| weight  | hostgroup_id | srv_host      | status |
+---------+--------------+---------------+--------+
| 100     | 100          | 192.168.4.233 | ONLINE |
| 1000    | 100          | 192.168.4.23  | ONLINE |
| 10000   | 100          | 192.168.4.22  | ONLINE |
| 10000   | 101          | 192.168.4.233 | ONLINE |
| 10000   | 101          | 192.168.4.23  | ONLINE |
| 100     | 101          | 192.168.4.22  | ONLINE |
+---------+--------------+---------------+--------+

After enabling Galera support:

+--------+-----------+---------------+----------+---------+
| weight | hostgroup | srv_host      | srv_port | status  |
+--------+-----------+---------------+----------+---------+
| 100    | 100       | 192.168.4.233 | 3306     | SHUNNED |
| 1000   | 100       | 192.168.4.23  | 3306     | SHUNNED |
| 10000  | 100       | 192.168.4.22  | 3306     | ONLINE  |
| 100    | 101       | 192.168.4.233 | 3306     | ONLINE  |
| 1000   | 101       | 192.168.4.23  | 3306     | ONLINE  |
| 100    | 102       | 192.168.4.233 | 3306     | ONLINE  |
| 1000   | 102       | 192.168.4.23  | 3306     | ONLINE  |
+--------+-----------+---------------+----------+---------+

So the node ending with 22 (the Primary elected) is not in the reader pool, which can be OK, I assume. But what is not OK at all is that the READERS now have a completely different weight. Nodes x.23 and x.233 are NOT balancing the load any longer, because the weight is not the same, nor the one I defined: it is instead copied over from the WRITER settings. Well, of course this is wrong and not what I want. Anyhow, let's test the READ failover.
I will use sysbench read-only:

sysbench ./src/lua/windmills/oltp_read.lua --mysql-host=192.168.4.191 --mysql-port=6033 --mysql-user=app_test --mysql-password=test --mysql-db=windmills_s --db-driver=mysql --tables=10 --table_size=10000 --rand-type=zipfian --rand-zipfian-exp=0.5 --skip_trx=true --report-interval=1 --mysql_storage_engine=innodb --auto_inc=off --histogram --table_name=windmills --stats_format=csv --db-ps-mode=disable --point-selects=50 --range-selects=true --threads=50 --time=2000 run

mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 100
backup_writer_hostgroup: 102
       reader_hostgroup: 101
      offline_hostgroup: 9101
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 10
                comment: NULL

Test Running

+--------+-----------+---------------+----------+---------+----------+
| weight | hostgroup | srv_host      | srv_port | status  | ConnUsed |
+--------+-----------+---------------+----------+---------+----------+
| 100    | 100       | 192.168.4.233 | 3306     | SHUNNED | 0        |
| 1000   | 100       | 192.168.4.23  | 3306     | SHUNNED | 0        |
| 10000  | 100       | 192.168.4.22  | 3306     | ONLINE  | 0        |
| 100    | 101       | 192.168.4.233 | 3306     | ONLINE  | 1        |
| 1000   | 101       | 192.168.4.23  | 3306     | ONLINE  | 51       |
| 100    | 102       | 192.168.4.233 | 3306     | ONLINE  | 0        |
| 1000   | 102       | 192.168.4.23  | 3306     | ONLINE  | 0        |
+--------+-----------+---------------+----------+---------+----------+

As indicated above, the reads are not balanced.
Removing node x.23 using wsrep_reject_queries=all:

+---------+--------------+---------------+---------+----------+
| weight  | hostgroup_id | srv_host      | status  | ConnUsed |
+---------+--------------+---------------+---------+----------+
| 100     | 100          | 192.168.4.233 | SHUNNED | 0        |
| 10000   | 100          | 192.168.4.22  | ONLINE  | 0        |
| 100     | 101          | 192.168.4.233 | ONLINE  | 48       |
| 100     | 102          | 192.168.4.233 | ONLINE  | 0        |
+---------+--------------+---------------+---------+----------+

The remaining node x.233 is taking all the reads, good. If I set wsrep_reject_queries=all also on x.233:

+---------+--------------+---------------+---------+
| weight  | hostgroup_id | srv_host      | status  |
+---------+--------------+---------------+---------+
| 10000   | 100          | 192.168.4.22  | ONLINE  |
| 100     | 9101         | 192.168.4.233 | SHUNNED |
| 10000   | 9101         | 192.168.4.23  | ONLINE  |
+---------+--------------+---------------+---------+

And the application failed:

FATAL: mysql_drv_query() returned error 9001 (Max connect timeout reached while reaching hostgroup 101 after 10000ms) for query ‘SELECT id, millid, date,active,kwatts_s FROM windmills2 WHERE id=9364’

Now, this may be by design, but I have serious difficulties understanding what the reasoning is here, given we allow a platform to fail serving while we still have a healthy server. Last but not least, I am not allowed to decide WHICH nodes the backup_writers are; ProxySQL will choose them from my writer list of servers. So why not also include the one I have declared as Primary, at least in case of need? ¯\_(ツ)_/¯

Third Test

OK, last try with writer_is_also_reader = 1.

mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 100
backup_writer_hostgroup: 102
       reader_hostgroup: 101
      offline_hostgroup: 9101
                 active: 1
            max_writers: 1
  writer_is_also_reader: 1
max_transactions_behind: 10
                comment: NULL
1 row in set (0.01 sec)

And now I have:

+---------+--------------+---------------+---------+----------+
| weight  | hostgroup_id | srv_host      | status  | ConnUsed |
+---------+--------------+---------------+---------+----------+
| 100     | 100          | 192.168.4.233 | SHUNNED | 0        |
| 1000    | 100          | 192.168.4.23  | SHUNNED | 0        |
| 10000   | 100          | 192.168.4.22  | ONLINE  | 0        |
| 100     | 101          | 192.168.4.233 | ONLINE  | 0        |
| 1000    | 101          | 192.168.4.23  | ONLINE  | 0        |
| 10000   | 101          | 192.168.4.22  | ONLINE  | 35       |
+---------+--------------+---------------+---------+----------+

https://www.percona.com/blog/2020/11/30/support-for-percona-xtradb-cluster-in-proxysql-part-one/
globalmediacampaign · 4 years ago
Migrate from on premise MySQL to MySQL Database Service
If you are running MySQL on premise, it's maybe the right time to think about migrating your lovely MySQL database somewhere where the MySQL Team has prepared a comfortable place for it to stay running and safe. This awesome place is MySQL Database Service (MDS) in OCI. For more information about what MDS is and what it provides, please check this blog from my colleague Airton Lastori.

One important word that should come to your mind when we talk about MDS is SECURITY! Therefore, the MDS endpoint can only be a private IP in OCI. This means you won't be able to expose your MySQL database publicly on the Internet. Now that we are aware of this, if we want to migrate an existing database to MDS, we need to take care of that.

What is my case?

When somebody needs to migrate their actual MySQL database, the first question that needs to be answered is: can we afford a large downtime?

If the answer is yes, then the migration is easy:

you stop your application(s)
you dump MySQL
you start your MDS instance
you load your data into MDS

and that's it!

In case the answer is no, things are of course more interesting, and this is the scenario I will cover in this post. Please note that the application is not covered in this post; of course, it's also recommended to migrate it to the cloud, in a compute instance of OCI for example.

What's the plan?

To successfully migrate a MySQL database from on premise to MDS, these are the actions I recommend:

create a VCN with two subnets, the public and the private one
create a MDS instance
create a VPN
create an Object Storage Bucket
dump the data to be loaded in MDS
load the data in MDS
create an in-bound replication channel in MDS

The architecture will look like this:

Virtual Cloud Network

First thing to do when you have your OCI access is to create a VCN from the dashboard.
If you have already created some compute instances, these steps are not required anymore. You can use Start VCN Wizard, but I will cover the VPN later in this article, so let's just use Create VCN. We need a name and a CIDR block; we use 10.0.0.0/16. This is what it looks like:

Now we click on the name (lefred_vcn in my case) and we need to create two subnets. We will create the public one on 10.0.0.0/24 and the private one on 10.0.1.0/24. After these two steps, we have something like this:

MySQL Database Service Instance

We can create an MDS instance and just follow the creation wizard, which is very simple. It's very important to create an admin user (the name can be whatever you want), and don't forget the password. We put our service in the private subnet we just created. The last screen of the wizard is related to the automatic backups. The MDS instance will be provisioned after a short time and you can see that in its detailed view.

VPN

OCI allows you to very easily create IPsec VPNs with all the enterprise-level hardware used in the industry. Unfortunately I don't have such an opportunity at home (and no need for it), so I will use another supported solution that is more appropriate for domestic usage: OpenVPN. If you are able to deploy the IPsec solution, I suggest you use it. On that new page, you have a link to the Marketplace where you can deploy a compute instance to act as an OpenVPN server. You need to follow the wizard and make sure to use the VCN we created and the public subnet. The compute instance will be launched by Terraform.
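Before moving on, it can help to sanity-check the addressing plan: both subnets must fall inside the VCN's CIDR and must not overlap each other. A quick sketch with Python's standard ipaddress module, using the CIDRs chosen above:

```python
import ipaddress

# The VCN and the two subnets created in the wizard above.
vcn = ipaddress.ip_network("10.0.0.0/16")
public = ipaddress.ip_network("10.0.0.0/24")
private = ipaddress.ip_network("10.0.1.0/24")

# Every subnet must live inside the VCN's CIDR block...
assert public.subnet_of(vcn) and private.subnet_of(vcn)
# ...and subnets must not overlap one another.
assert not public.overlaps(private)
print("addressing plan OK")
```

This is just a local check of the CIDR arithmetic; OCI performs the same validation when you create the subnets.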
When done, we will be able to reach the OpenVPN web interface on the public IP that was assigned to this compute instance, using the credentials we entered in the wizard. In case you lost those logs, the IP is available in the Compute->Instances page.

As soon as the OpenVPN instance is deployed, we can go to the web interface and set up OpenVPN. As we want to be able to connect from our MDS instance to our on-premise MySQL server for replication, we will need to set up our VPN to use Routing instead of NAT. We also specify two ranges, as we really want to have a static IP for our on-premise MySQL instance; otherwise, the IP might change the next time we connect to the VPN.

The next step is the creation of a user we will use to connect to the VPN. The settings are very important. Save the settings and click on the banner to restart OpenVPN. Now, we connect using the user we created to download their profile. That client.ovpn file needs to be copied to the on-premise MySQL server. If OpenVPN is not yet installed on the on-premise MySQL server, it's time to install it (yum install openvpn). Now, we copy client.ovpn to /etc/openvpn/client/ and we call it client.conf:

# cp client.ovpn /etc/openvpn/client/client.conf

We can start the VPN:

# systemctl start openvpn-client@client
Enter Auth Username: lefred
Enter Auth Password: ******

We can verify that the VPN connection is established:

# ifconfig tun0
tun0: flags=4305  mtu 1500
        inet 172.27.232.134  netmask 255.255.255.0  destination 172.27.232.134
        inet6 fe80::9940:762c:ad22:5c62  prefixlen 64  scopeid 0x20
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 1218  bytes 102396 (99.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1287  bytes 187818 (183.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

systemctl status openvpn-client@client can also be called to see the status.

Object Storage

To transfer our data to the cloud, we will use Object Storage.
And we create a bucket.

Dump the Data

To dump the data of our on-premise MySQL server, we will use MySQL Shell, which since 8.0.21 has the capability to dump and load large datasets in a way that is optimized and compatible with OCI. Please check these links for more details:

https://docs.cloud.oracle.com/en-us/iaas/mysql-database/doc/importing-and-exporting-databases.html
https://mysqlserverteam.com/mysql-shell-dump-load-part-1-demo/
https://mysqlserverteam.com/mysql-shell-dump-load-part-2-benchmarks/
https://mysqlserverteam.com/mysql-shell-dump-load-part-3-load-dump/
https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

OCI Config

The first step is to create an OCI config file that will look like this:

[DEFAULT]
user=ocid1.user.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=/home/lefred/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
compartment=ocid1.compartment.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region=us-ashburn-1

The user information and key can be found under the Identity section. Please refer to this manual page to generate a PEM key.

Now that we have an OCI config file (called oci.config in my case), we need to verify that our current MySQL server is using GTID:

on-premise mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| OFF         |
+-------------+
1 row in set (0.00 sec)

By default GTID mode is disabled, and we need to enable it.
To be able to perform this operation without restarting the MySQL instance, this is how to proceed:

on-premise mysql> SET PERSIST server_id=1;
on-premise mysql> SET PERSIST enforce_gtid_consistency=true;
on-premise mysql> SET PERSIST gtid_mode=off_permissive;
on-premise mysql> SET PERSIST gtid_mode=on_permissive;
on-premise mysql> SET PERSIST gtid_mode=on;
on-premise mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| ON          |
+-------------+

Routing & Security

We need to add some routing and firewall rules to our VCN to allow the traffic from and to the VPN.

Now that we have dealt with routing and security, it's time to dump the data to Object Storage by connecting MySQL Shell to our on-premise server and using util.dumpInstance():

$ mysqlsh
MySQL JS > \c root@localhost
[...]
MySQL localhost:33060+ ssl JS > util.dumpInstance('onpremise', {ociConfigFile: "oci.config",
    osBucketName: "lefred_bucket", osNamespace: "xxxxxxxxxxxx", threads: 4,
    ocimds: true, compatibility: ["strip_restricted_grants", "strip_definers"]})

You can also find more info on this MDS manual page.

Load the data in MDS

The data is now in the cloud and we need to load it into our MDS instance. We first connect to our MDS instance using Shell. We could use a compute instance in the public subnet or the VPN we created. I will use the second option:

MySQL localhost:33060+ ssl JS > \c admin@10.0.1.11
Creating a session to 'admin@10.0.1.11'
Fetching schema names for autocompletion… Press ^C to stop.
Closing old connection…
Your MySQL connection id is 283 (X protocol)
Server version: 8.0.21-u1-cloud MySQL Enterprise - Cloud
No default schema selected; type \use <schema> to set one.

It's time to load the data from Object Storage into MDS:

MySQL 10.0.1.11:33060+ ssl JS > util.loadDump('onpremise', {ociConfigFile: "oci.config",
    osBucketName: "lefred_bucket", osNamespace: "xxxxxxxxxxxx", threads: 4})
Loading DDL and Data from OCI ObjectStorage bucket=lefred_bucket, prefix='onpremise' using 4 threads.
Target is MySQL 8.0.21-u1-cloud. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects…
Executing common preamble SQL
Executing DDL script for schema employees
Executing DDL script for employees.departments
Executing DDL script for employees.salaries
Executing DDL script for employees.dept_manager
Executing DDL script for employees.dept_emp
Executing DDL script for employees.titles
Executing DDL script for employees.employees
Executing DDL script for employees.current_dept_emp
Executing DDL script for employees.dept_emp_latest_date
[Worker002] employees@dept_emp@@0.tsv.zst: Records: 331603  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] employees@dept_manager@@0.tsv.zst: Records: 24  Deleted: 0  Skipped: 0  Warnings: 0
[Worker003] employees@titles@@0.tsv.zst: Records: 443308  Deleted: 0  Skipped: 0  Warnings: 0
[Worker000] employees@employees@@0.tsv.zst: Records: 300024  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] employees@departments@@0.tsv.zst: Records: 9  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] employees@salaries@@0.tsv.zst: Records: 2844047  Deleted: 0  Skipped: 0  Warnings: 0
Executing common postamble SQL
6 chunks (3.92M rows, 141.50 MB) for 6 tables in 1 schemas were loaded in 5 min 28 sec (avg throughput 431.39 KB/s)
0 warnings were reported during the load.

We still need to set the GTID purged information from when the dump was taken. In MDS, this operation can be achieved by calling a dedicated procedure, sys.set_gtid_purged(). Now let's find the value we need to set there. The value of GTID executed from the dump is written in the file @.json. This file is located in Object Storage and we need to retrieve it. When you have the value of gtidExecuted from that file, you can set it in MDS:

MySQL 10.0.1.11:33060+ ssl SQL > call sys.set_gtid_purged("ae82914d-e096-11ea-8a7a-08002718d305:1")

In-bound Replication

Before stopping our production server running MySQL on premise, we need to resync the data.
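The @.json metadata file is plain JSON, so once it is downloaded from the bucket, the gtidExecuted value can be extracted with a few lines of Python. A small sketch (the sample content below is a hypothetical, abridged version of what MySQL Shell writes; only the gtidExecuted field is taken from the article):

```python
import json

# Hypothetical, abridged @.json content as downloaded from Object Storage.
sample = '{"gtidExecuted": "ae82914d-e096-11ea-8a7a-08002718d305:1", "version": "8.0.21"}'

metadata = json.loads(sample)
gtid = metadata["gtidExecuted"]

# Print the statement to run against the MDS instance.
print('call sys.set_gtid_purged("%s")' % gtid)
```

Running this against the real file saves you from copy/pasting the GTID set by hand.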
We also need to be sure we have moved everything we need to the cloud (applications, etc.) and certainly run some tests. This can take some time, and during that time we want to keep the data up to date. We will therefore use replication from on premise to MDS.

Replication user creation

On the production MySQL server (the one still running on premise), we need to create a user dedicated to replication:

mysql> CREATE USER 'repl'@'10.0.1.%' IDENTIFIED BY 'C0mpl1c4t3d!Paddw0rd' REQUIRE SSL;
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.1.%';

Creation of the replication channel

We go back to OCI's dashboard, and on our MDS instance's details page we click on Channels. We now create a channel and follow the wizard. We use the credentials we just created, and as hostname we put the IP of our OpenVPN client: 172.27.232.134. After a little while, the channel will be created, and in MySQL Shell, when connected to your MDS instance, you can see that replication is running.

Wooohooo it works! o/

Conclusion

As you can see, transferring the data and creating a replication channel from on-premise to MDS is easy. The most complicated part is the VPN and dealing with the network, but that is straightforward for a sysadmin. This is a task that you have to do only once, and it's the price to pay for a more secure environment.

https://blogs.oracle.com/mysql/migrate-from-on-premise-mysql-to-mysql-database-service
globalmediacampaign · 4 years ago
Text
MySQL Deadlocks with INSERT
Support Channel. “Hi, I am getting deadlocks in the database and they occur when I have to roll back the transactions, but if we don’t have to roll back, all transactions get executed.” Wait, what? After some back and forth it becomes clear that the Dev experiences deadlocks and has data:

mysql> pager less
mysql> show engine innodb status\G
...
MySQL thread id 142531, OS thread handle 139990258222848, query id 4799571 somehost.somedomain someuser update
INSERT into sometable (identifier_id, currency, balance )
VALUES ('d4e84cb1-4d56-4d67-9d16-1d548fd26b55', 'EUR', '0')

*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 3523 page no 1106463 n bits 224 index PRIMARY of table `somedb`.`sometable` trx id 9843342279 lock mode S locks gap before rec

and that is weird because of the lock mode S locks gap in the last line. We get the exact same statement with the exact same value on the second thread, but with lock mode X locks gap. Both transactions have an undo log entry of length 1 - one row, a single insert - and the insert has an S-lock.

A mystery INSERT and opaque code

Many questions arise:

- how can an INSERT have an S-lock?
- how can a single INSERT transaction deadlock?
- what does the originating code look like?

The last question can actually be answered by the developer, but because they are using Java, in true Java fashion it is almost - but not quite - useless to a database person.

@Transactional(propagation = Propagation.REQUIRES_NEW,
    timeout = MYSQL_TRANSACTION_TIMEOUT,
    rollbackFor = { BucketNotFoundException.class,
                    DuplicateTransactionException.class,
                    BucketBalanceUpdateException.class },
    isolation = Isolation.SERIALIZABLE
)
public void initiateBucketBalanceUpdate(Transaction transaction)
        throws BucketBalanceUpdateException, DuplicateTransactionException {
    this.validateAndInsertIdempotencyKey(transaction);
    this.executeBucketBalanceUpdateFlow(transaction);
    this.saveTransactionEntries(transaction);
}

So, where is the SQL?
This is often a problem - Object Relational Mappers encapsulate the things that go on in the database so much that it is really hard for anybody - Developers, DBAs and everybody else - to understand what actually happens, and they make debugging quite painful. Or, if people do understand what goes on in the database, it is hard to map that back to the code.

TRANSACTION ISOLATION LEVEL SERIALIZABLE

In this case it is solvable, though. The isolation = Isolation.SERIALIZABLE is the culprit here. When we spoke about transactions and isolation levels previously, I made the decision to leave the fourth and most useless isolation level out of the picture: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE. The manual says:

SERIALIZABLE
This level is like REPEATABLE READ, but InnoDB implicitly converts all plain SELECT statements to SELECT ... FOR SHARE if autocommit is disabled.

It then goes on to explain how SERIALIZABLE does nothing when there is no explicit transaction going on. It does not explain what it is good for (mostly: shooting yourself in the foot) and when you should use it (mostly: don’t). It does answer the question of “Where do the S-Locks come from?”, though. The SERIALIZABLE isolation mode turns a normal SELECT statement into a Medusa’s freeze ray that shoots S-Locks all over the tables onto everything it looks at, preventing other threads from changing these things until we end our transaction and drop our locks (and that is why you should not use it, and why I personally believe that your code is broken if it needs it).

A broken RMW and lock escalation

So instead of a regular Read-Modify-Write

Session1> START TRANSACTION READ WRITE;
Session1> SELECT * FROM sometable WHERE id=10 FOR UPDATE;
-- X-lock granted on rec or gap
-- ... Application decides INSERT or UPDATE
Session1> INSERT INTO sometable (id, ...) VALUES ( 10, ... );
Session1> COMMIT;

we get the following broken Read-Modify-Write, minimum:

Session1> START TRANSACTION READ WRITE;
Session1> SELECT * FROM sometable WHERE id=10 FOR SHARE;
-- S-lock granted on rec or gap
-- ... Application decides INSERT or UPDATE
Session1> INSERT INTO sometable (id, ...) VALUES ( 10, ... );
-- lock escalation to X
Session1> COMMIT;

The LOCK IN SHARE MODE or equivalent FOR SHARE is not in the code; it is added implicitly by the isolation level SERIALIZABLE. We get an S-Lock, which is not good for writing. Our transaction did not get the locks required for writing at the start of the transaction, because the later INSERT requires an X-lock, like any write statement would. The database needs to acquire the X-lock, that is, it needs to upgrade the S-lock to an X-lock. If at that point in time another thread tries to run the exact same statement, which is what happens here, it already holds a second S-lock, preventing the first thread from completing its transaction (it is waiting until the second thread drops the S-lock, or it times out). And then that second thread also tries to upgrade its S-lock into an X-lock, which it can’t do, because the first thread is trying to do the same thing, and we have the deadlock and a rollback.

Reproduction of the problem

We can easily reproduce this.

Session1> set transaction isolation level serializable;
Session1> start transaction read write;
Query OK, 0 rows affected (0.00 sec)

Session1> select * from kris where id = 10;
+----+-------+
| id | value |
+----+-------+
| 10 |    10 |
+----+-------+

Session1> select * from performance_schema.data_locks\G
...
LOCK_TYPE: TABLE
LOCK_MODE: IS
LOCK_STATUS: GRANTED
LOCK_DATA: NULL
...
LOCK_TYPE: RECORD
LOCK_MODE: S,REC_NOT_GAP
LOCK_STATUS: GRANTED
LOCK_DATA: 10
...

We change the isolation level to SERIALIZABLE and start a transaction (because, as stated in the manual, autocommit does nothing).
We then simply look at a single row, and check PERFORMANCE_SCHEMA.DATA_LOCKS afterwards. Lo and behold, S-Locks as promised by the manual. Now, the setup for the deadlock with a second session, by doing the same thing:

Session2> set transaction isolation level serializable;
Session2> start transaction read write;
Query OK, 0 rows affected (0.00 sec)

Session2> select * from kris where id = 10;
+----+-------+
| id | value |
+----+-------+
| 10 |    10 |
+----+-------+

Checking the data_locks table, we now see two sets of IS- and S-Locks belonging to two different threads. We go for an UPDATE here, because we chose existing rows and row locks, instead of non-existing rows and gap locks:

Session1> update kris set value=11 where id =10;
... hangs ...

and in the other connection:

Session2> update kris set value=13 where id =10;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Coming back to the first session, this now reads

Session1> update kris set value=11 where id =10;
... hangs ...
Query OK, 1 row affected (2.43 sec)
Rows matched: 1  Changed: 1  Warnings: 0

The timing given is the time I took to switch between terminals and to type the commands.

Resolution

Coming back to the support case, the Dev analyzed their code and found out that what they are emitting is actually the sequence

Session1> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Session1> START TRANSACTION READ WRITE;
Session1> SELECT * FROM sometable WHERE id=10;
-- implied S-lock granted on rec or gap
-- ... Application decides INSERT or UPDATE
Session1> SELECT * FROM sometable WHERE id=10 FOR UPDATE;
-- lock escalation to X
Session1> INSERT INTO sometable (id, ...) VALUES ( 10, ... );
Session1> COMMIT;

so their code is already almost correct. They do not need the double read, and they also do not need the isolation level SERIALIZABLE. This is an easy fix for them; the deadlocks are gone, and the world is safe again.
So many things to learn from this:

- You won’t need SERIALIZABLE unless your code is broken. Trying to use it is a warning sign.
- A deadlock with an S-lock escalation means you need to check the isolation level.
- In SERIALIZABLE it is totally possible to deadlock yourself with a simple invisible SELECT and a lone INSERT or UPDATE.
- The ORM will remove you quite a lot from the emitted SQL. Do you know how to trace your ORM and get the actual SQL generated? If not, go and find out.
- A server side trace will not save you - the server is a busy beast. It also cannot see your stackframes, so it can’t link your SQL to the line in your code that called the ORM. Yes, in the client side SQL trace, ideally you also want the tracer to bubble up the stack and give you the first line outside of the ORM, to identify what is causing the SQL to be emitted and where in the code that happens.
- The deadlock information in SHOW ENGINE INNODB STATUS is painfully opaque, but learning to read it is worthwhile. In reproduction, using performance schema is much easier and makes the sequence of events much easier to understand.
- The server is not very good at explaining the root cause of deadlocks to a developer in the error messages and warnings generated.

https://isotopp.github.io/2020/08/02/mysql-deadlocks-with-insert.html
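The S-to-X upgrade deadlock described above can even be modeled without a server. The sketch below is a deliberately simplified lock table (a hypothetical model, not InnoDB's actual lock manager): S locks are mutually compatible, any pair involving X conflicts, so two sessions that both hold an S lock on the same row block each other's upgrade attempt.

```python
# Toy shared/exclusive lock table. A request is GRANTED unless another
# transaction holds an incompatible lock on the same row, in which case
# the requester must WAIT (in a real server, the deadlock detector would
# then pick a victim and roll it back).
class LockTable:
    def __init__(self):
        self.locks = {}  # row -> {txn: mode}

    def request(self, txn, row, mode):
        held = self.locks.setdefault(row, {})
        for other, other_mode in held.items():
            if other == txn:
                continue  # our own earlier lock never blocks us
            # S is compatible with S; any combination involving X conflicts.
            if mode == "X" or other_mode == "X":
                return "WAIT"
        held[txn] = mode
        return "GRANTED"

lt = LockTable()
# Both sessions read row 10 under SERIALIZABLE: implicit S locks, both granted.
assert lt.request("session1", 10, "S") == "GRANTED"
assert lt.request("session2", 10, "S") == "GRANTED"
# Both then try to write: each needs an X lock, but the other's S lock
# blocks the upgrade. Each session waits on the other: the deadlock.
assert lt.request("session1", 10, "X") == "WAIT"
assert lt.request("session2", 10, "X") == "WAIT"
print("both sessions waiting on each other: deadlock")
```

Note that a lone session upgrading its own S lock to X succeeds in this model, just as in the real server; it is the second S holder that turns the upgrade into a deadlock.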
globalmediacampaign · 4 years ago
Text
MySQL Shell 8.0.21 for MySQL Server 8.0 and 5.7 has been released
Dear MySQL users,

MySQL Shell 8.0.21 is a maintenance release of the MySQL Shell 8.0 series (a component of the MySQL Server). The MySQL Shell is provided under Oracle’s dual license. MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and 5.7. Please upgrade to MySQL Shell 8.0.21.

MySQL Shell is an interactive JavaScript, Python and SQL console interface, supporting development and administration for the MySQL Server. It provides APIs implemented in JavaScript and Python that enable you to work with MySQL InnoDB cluster and use MySQL as a document store.

The AdminAPI enables you to work with MySQL InnoDB cluster and InnoDB ReplicaSet, providing integrated solutions for high availability and scalability using InnoDB based MySQL databases, without requiring advanced MySQL expertise. For more information about how to configure and work with MySQL InnoDB cluster and MySQL InnoDB ReplicaSet, see https://dev.mysql.com/doc/refman/en/mysql-innodb-cluster-userguide.html

The X DevAPI enables you to create “schema-less” JSON document collections and perform Create, Update, Read, Delete (CRUD) operations on those collections from your favorite scripting language. For more information about how to use MySQL Shell and the MySQL Document Store support, see https://dev.mysql.com/doc/refman/en/document-store.html For more information about the X DevAPI, see https://dev.mysql.com/doc/x-devapi-userguide/en/

If you want to write applications that use the CRUD based X DevAPI, you can also use the latest MySQL Connectors for your language of choice. For more information about Connectors, see https://dev.mysql.com/doc/index-connectors.html

For more information on the APIs provided with MySQL Shell, see https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/ and https://dev.mysql.com/doc/dev/mysqlsh-api-python/8.0/

Using MySQL Shell’s SQL mode you can communicate with servers using the legacy MySQL protocol.
Additionally, MySQL Shell provides partial compatibility with the mysql client by supporting many of the same command line options.

For full documentation on MySQL Server, MySQL Shell and related topics, see https://dev.mysql.com/doc/mysql-shell/8.0/en/

For more information about how to download MySQL Shell 8.0.21, see the “General Availability (GA) Releases” tab at http://dev.mysql.com/downloads/shell/

We welcome and appreciate your feedback and bug reports, see http://bugs.mysql.com/

Enjoy and thanks for the support!

Changes in MySQL Shell 8.0.21 (2020-07-13, General Availability)

     * AdminAPI Added or Changed Functionality
     * AdminAPI Bugs Fixed
     * Functionality Added or Changed
     * Bugs Fixed

AdminAPI Added or Changed Functionality

     * A new user configurable tag framework has been added to the metadata, to allow specific instances of a cluster or ReplicaSet to be marked with additional information. Tags can be any ASCII character and provide a namespace. You set tags for an instance using the setInstanceOption() operation. In addition, AdminAPI and MySQL Router 8.0.21 support specific tags, which enable you to mark instances as hidden and remove them from routing. MySQL Router then excludes such tagged instances from the routing destination candidates list. This enables you to safely take a server instance offline, so that applications and MySQL Router ignore it, for example while you perform maintenance tasks, such as server upgrade or configuration changes. To bring the instance back online, use the setInstanceOption() operation to remove the tags, and MySQL Router adds the instance back to the routing destination candidates list, and it becomes online for applications. For more information, see Tagging the Metadata (https://dev.mysql.com/doc/refman/8.0/en/admin-api-tagging.html).
AdminAPI Bugs Fixed

     * Important Change: Previously, Group Replication did not support binary log checksums, and therefore one of the requirements for instances in InnoDB cluster was that binary log checksums were disabled by having the binlog_checksum system variable set to NONE. AdminAPI verified the value of binlog_checksum during the dba.checkInstanceConfiguration() operation and disallowed creating a cluster or adding an instance to a cluster that did not have binary log checksums disabled. In version 8.0.21, Group Replication has lifted this restriction, therefore InnoDB cluster now permits instances to use binary log checksums, with binlog_checksum set to CRC32. The setting for binlog_checksum does not have to be the same for all instances. In addition, sandboxes deployed with version 8.0.21 and later do not set the binlog_checksum variable, which defaults to CRC32. (Bug #31329024)

     * Adopting a Group Replication setup as a cluster can be performed when connected to any member of the group, regardless of whether it is a primary or a secondary. However, when a secondary member was used, super_read_only was being incorrectly disabled on that instance. Now, all operations performed during an adoption are done using the primary member of the group. This ensures that no GTID inconsistencies occur and that super_read_only is not incorrectly disabled on secondary members. (Bug #31238233)

     * Using the clusterAdmin option to create a user which had a netmask as part of the host resulted in an error when this user was passed to the dba.createCluster() operation.
Now, accounts that specify a netmask are treated as accounts with wildcards, meaning that further checks to verify if the account accepts remote connections from all instances are skipped. (Bug #31018091)

     * The check for instance read-only compatibility was using a wrong MySQL version as the base version. The cross-version policies were added to Group Replication in version 8.0.17, but the check was considering instances running 8.0.16. This resulted in a misleading warning message indicating that the added instance was read-only compatible with the cluster, when this was not true (only for instances 8.0.16). The fix ensures that the check to verify if an instance is read-compatible or not with a cluster is only performed if the target instance is running version 8.0.17 or later. (Bug #30896344)

     * The maximum number of instances in an InnoDB cluster is 9, but AdminAPI was not preventing you from trying to add more instances to a cluster, and the resulting error message was not clear. Now, if a cluster has 9 instances, Cluster.addInstance prevents you adding more instances. (Bug #30885157)

     * Adding an instance with a compatible GTID set to an InnoDB cluster or InnoDB ReplicaSet on which provisioning is required should not require any interaction, because this is considered a safe operation. Previously, in such a scenario, when MySQL Clone was supported MySQL Shell still prompted to choose between cloning or aborting the operation. Now, the operation proceeds with cloning, because this is the only way to provision the instance. Note: instances with an empty GTID set are not considered to have a compatible GTID set when compared with the InnoDB cluster or InnoDB ReplicaSet.
Such scenarios are considered to be unknown, therefore MySQL Shell prompts to confirm which action should be taken. (Bug #30884590)

     * The Group Replication system variables (prefixed with group_replication) do not exist if the plugin has not been loaded. Even if the system variables are persisted to the instance’s option file, they are not loaded unless the Group Replication plugin is also loaded when the server starts. If the Group Replication plugin is installed after the server starts, the option file is not reloaded, so all system variables have default values. Instances running MySQL 8.0 do not have a problem because SET PERSIST is used. However, on instances running MySQL 5.7, the dba.rebootCluster() operation could not restore some system variables if the Group Replication plugin was uninstalled. Now, the dba.configureInstance() operation persists the Group Replication system variables to configuration files with the loose_ prefix. As a result, once the Group Replication plugin is installed, on instances running 5.7 the persisted values are used instead of the default values. (Bug #30768504)

     * The updateTopologyMode option has been deprecated and the behavior of Cluster.rescan() has been changed to always update the topology mode in the Metadata when a change is detected. MySQL Shell now displays a message whenever such a change is detected. (Bug #29330769)

     * The cluster.addInstance() and cluster.rejoinInstance() operations were not checking for the full range of settings which are required for an instance to be valid for adding to the cluster. This resulted in attempts to use instances which run on different operating systems failing.
For example, a cluster running on two instances that were hosted on a Linux based operating system would block the addition of an instance running Microsoft Windows. Now, the cluster.addInstance() and cluster.rejoinInstance() operations validate the instance and prevent adding or rejoining an instance to the cluster if the value of the lower_case_table_names, group_replication_gtid_assignment_block_size or default_table_encryption of the instance are different from the ones on the cluster. (Bug #29255212)

Functionality Added or Changed

     * MySQL Shell now has an instance dump utility, dumpInstance(), and schema dump utility, dumpSchemas(). The new utilities support the export of all schemas or a selected schema from an on-premise MySQL server instance into an Oracle Cloud Infrastructure Object Storage bucket or a set of local files. The schemas can then be imported into a MySQL Database Service DB System using MySQL Shell’s new dump loading utility. The new utilities provide Oracle Cloud Infrastructure Object Storage streaming, MySQL Database Service compatibility checks and modifications, parallel dumping with multiple threads, and file compression.

     * MySQL Shell’s new dump loading utility, loadDump(), supports the import of schemas dumped using MySQL Shell’s new instance dump utility and schema dump utility into a MySQL Database Service DB System. The dump loading utility provides data streaming from remote storage, parallel loading of tables or table chunks, progress state tracking, resume and reset capability, and the option of concurrent loading while the dump is taking place.
     * The X DevAPI implementation now supports JSON schema validation, which enables you to ensure that your documents have a certain structure before they can be inserted or updated in a collection. To enable or modify JSON schema validation you pass in a JSON object like:

{
    validation: {
      level: "off|strict",
      schema: "json-schema"
    }
}

Here, validation is a JSON object which contains the keys you can use to configure JSON schema validation. The first key is level, which can take the value strict or off. The second key, schema, is a JSON schema, as defined at http://json-schema.org. If the level key is set to strict, documents are validated against the json-schema when they are added to the collection, or when an operation updates the document. If the document does not validate, the server generates an error and the operation fails. If the level key is set to off, documents are not validated against the json-schema. You can pass a validation JSON object to the schema.createCollection() operation, to enable JSON schema validation, and the schema.modifyCollection() operation, to change the current JSON schema validation, for example to disable validation. For more information, see JSON Schema Validation (https://dev.mysql.com/doc/x-devapi-userguide/en/collection-validation.html).

Bugs Fixed

     * MySQL Shell plugins now support the use of the **kwargs syntax in functions defined in Python that are made available by the plugin. Using **kwargs in a function definition lets you call the function using a variable-length list of keyword arguments with arbitrary names. If the function is called from MySQL Shell’s JavaScript mode, MySQL Shell passes the named arguments and their values into a dictionary object for the Python function.
MySQL Shell first tries to associate a keyword        argument passed to a function with any corresponding        keyword parameter that the function defines, and if there        is none, the keyword argument is automatically included        in the **kwargs list. As a side effect of this support,        any API function called from Python in MySQL Shell that        has a dictionary of options as the last parameter        supports defining these options using named arguments.        (Bug #31495448)      * When switching to SQL mode, MySQL Shell queries the SQL        mode of the connected server to establish whether the        ANSI_QUOTES mode is enabled. Previously, MySQL Shell        could not proceed if it did not receive a result set in        response to the query. The absence of a result is now        handled appropriately. (Bug #31418783, Bug #99728)      * In SQL mode, when the results of a query are to be        printed in table format, MySQL Shell buffers the result        set before printing, in order to identify the correct        column widths for the table. With very large result sets,        it was possible for this practice to cause an out of        memory error. MySQL Shell now buffers a maximum of 1000        rows for a result set before proceeding to format and        print the table. Note that if a field in a row after the        first 1000 rows contains a longer value than previously        seen in that column in the result set, the table        formatting will be misaligned for that row. (Bug        #31304711)      * Context switching in MySQL Shell’s SQL mode has been        refactored and simplified to remove SQL syntax errors        that could be returned when running script files using        the source command. (Bug #31175790, Bug #31197312, Bug        #99303)      * The user account that is used to run MySQL Shell’s        upgrade checker utility checkForServerUpgrade()        previously required ALL privileges. 
The user account now        requires only the RELOAD, PROCESS, and SELECT privileges.        (Bug #31085098)      * In Python mode, MySQL Shell did not handle invalid UTF-8        sequences in strings returned by queries. (Bug #31083617)      * MySQL Shell’s parallel table import utility importTable()        has a new option characterSet, which specifies a        character set encoding with which the input data file is        interpreted during the import. Setting the option to        binary means that no conversion is done during the        import. When you omit this option, the import uses the        character set specified by the character_set_database        system variable to interpret the input data file. (Bug        #31057707)      * On Windows, if the MySQL Shell package was extracted to        and used from a directory whose name contained multi-byte        characters, MySQL Shell was unable to start. MySQL Shell        now handles directory names with multi-byte characters        correctly, including when setting up Python, loading        prompt themes, and accessing credential helpers. (Bug        #31056783)      * MySQL Shell’s JSON import utility importJSON() now        handles UTF-8 encoded files that include a BOM (byte mark        order) at the start, which is the sequence 0xEF 0xBB        0xBF. As a workaround in earlier releases, remove this        byte sequence, which is not needed. (Bug #30993547, Bug        #98836)      * When the output format was set to JSON, MySQL Shell’s        upgrade checker utility checkForServerUpgrade() included        a description and documentation link for a check even if        no issues were found. These are now omitted from the        output, as they are with the text output format. (Bug        #30950035) On Behalf of Oracle/MySQL Release Engineering Team, Sreedhar S https://insidemysql.com/mysql-shell-8-0-21-for-mysql-server-8-0-and-5-7-has-been-released/
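The **kwargs behavior described in the plugin fix above can be sketched in plain Python. The function below is a hypothetical plugin-style function for illustration, not part of the MySQL Shell API:

```python
# Hypothetical plugin-style function: "name" is an explicit keyword
# parameter; any other keyword arguments are collected into **kwargs,
# mirroring how MySQL Shell forwards named arguments to Python plugin
# functions.
def report(name, **kwargs):
    # Explicit parameters are matched first; leftover keyword
    # arguments land in the kwargs dictionary.
    return {"name": name, "extras": kwargs}

# "limit" and "verbose" have no matching parameter, so they are
# automatically included in kwargs.
result = report(name="sessions", limit=10, verbose=True)
```

Calling `report("sessions", limit=10, verbose=True)` yields `{"name": "sessions", "extras": {"limit": 10, "verbose": True}}`, which is the dictionary-of-options pattern the release note describes.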
webdesignersolutions · 6 years ago
Built a control panel over 16 years, free lifetime release
Site Admin demo • Source
16 years ago I stumbled into hosting with Ensim WEBppliance, which was a clusterfuck of a control panel necessitating a bunch of bugfixes. Those bugfixes spawned a control panel, apnscp (Apis Networks Control Panel), that I've continued to develop to this day. v3 is the first public release of apnscp and to celebrate I'm giving away 400 free lifetime licenses on r/webhosting each good for 1 server.
Visit apnscp.com/activate/webhosting-lt to get started customizing the installer. Database + PHP are vendor agnostic. apnscp supports any-version Node/Ruby/Python/Go. I'm interested in feedback, if not bugs then certainly ideas for improvement.
apnscp ships with integrated Route 53/CF DNS support in addition to Linode, DO, and Vultr. Additional providers are easy to create. apnscp includes 1-click install/updates for Wordpress, Drupal, Laravel, Ghost, Discourse, and Magento. Enabling Passenger, provided you have at least 2 GB memory, opens the door to use any-version Ruby, Node, and Python on your server.
Minimum requirements
2 GB RAM
20 GB disk
CentOS 7.4
xfs or ext4 filesystem
Containers not supported (OpenVZ, Virtuozzo)
Features
100% self-hosted, no third-party agents required
1-click installs/automatic updates for Wordpress, Drupal, Ghost, Discourse, Laravel, Magento
Let's Encrypt issuance, automatic renewals
Resource enforcement via cgroups
Read-only roles for PHP
Integrated DNS for AWS, CF, Digital Ocean, Linode, and Vultr
Multi-tenancy, each account exists in a synthetic root
Any-version Node, Ruby, Python, Go
Automatic system/panel updates
OS checksums, perform integrity checks without RPM hell
Push monitoring for services
SMTP policy controls with rspamd
Firewall, brute-force restrictions on all services including HTTP with a rate-limiting sieve
Malware scrubbing
Multi-server support
apnscp won't fix all of your woes; you still need to be smart about whom you host and what you host, but it is a step in the right direction. apnscp is not a replacement for a qualified system administrator. It is however a much better alternative to emerging panels in this market.
Installation
Use apnscp Customizer to configure your server as you'd like. See INSTALL.md for installation + usage.
Monitoring installation
apnscp will provision your server, which takes around 45 minutes to 2 hours to complete the first time. You can monitor installation in real time from the terminal:
tail -f /root/apnscp-bootstrapper.log
Post Install
If you entered an email address while customizing (apnscp_admin_email) and the server isn't in an RBL, you will receive an email with your login information. If you don't get an email after 2 hours, log into the server and check the status:
tail -n30 /root/apnscp-bootstrapper.log
The last line should be similar to:
2019-01-30 18:39:02,923 p=3534 u=root | localhost : ok=3116 changed=1051 unreachable=0 failed=0
If failed=0, everything is set! You can reset the password and refer back to the login information to access the panel or reset your credentials. Post-install will welcome you with a list of helpful commands to get started as well. You may want to change -n30 to -n50!
If failed=n where n > 0, send me a PM, email ([email protected]), get in touch on the forums, or reach out on Discord.
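For scripted checks, the recap line can be parsed directly. This is a hedged sketch: the helper below is not an apnscp tool, and the line format is assumed from the sample output above.

```python
import re

# Parse an Ansible-style recap line like the one shown above and
# return the numeric counters as integers. Hypothetical helper, not
# part of apnscp; the format is inferred from the sample output.
def parse_recap(line):
    return {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", line)}

sample = ("2019-01-30 18:39:02,923 p=3534 u=root | localhost : "
          "ok=3116 changed=1051 unreachable=0 failed=0")
counters = parse_recap(sample)
# counters["failed"] == 0 means the bootstrap completed cleanly.
```

A wrapper script could tail the log, feed the last line through `parse_recap`, and alert when `failed` is non-zero.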
Shoot me a PM if you have a question or hop on Discord chat. Either way feedback makes this process tick. Enjoy!
Installation FAQ
Is a system hostname necessary?
No. It can be set at a later date with cpcmd config_set net.hostname new.host.name. A valid hostname is necessary for mail to relay reliably and for valid SSL issuance. apnscp can operate without either.
Do you support Ubuntu?
No. This is a highly specialized platform. Red Hat has a proven track record of honoring its 10-year OS lifecycles; in my experience, businesses like to migrate every 5-7 years. Moreover, certain facilities like tuned, used to dynamically optimize your server, are unique to Red Hat and its derivatives. As an aside, apnscp also provides a migration facility for seamless zero-downtime migrations.
How do I update the panel?
It will update automatically unless disabled. cpcmd config_set apnscp.update-policy major will set the panel to update up to major version changes. cpcmd config_set system.update-policy default will set the OS to update packages as they're delivered. These are the default panel settings. Supported Web Apps will update within 24 hours of a major version release and every Wednesday/Sunday for asset updates (themes/plugins). An email is sent to the contact assigned for each site (siteinfo,email service variable).
If your update policy is set to "false" in apnscp-vars.yml, then you can manually update the panel by running upcp and OS via yum update -y. If you've opted out of 1-click updates, then caveat emptor.
Mail won't submit from the server on 25/587 via TCP.
This is by design. Use sendmail to inject into the mail queue via binary or authenticate with a user account to ensure ESMTPA is used. Before disabling, and as one victimized by StealRat, I'd urge caution. Sockets are opaque: it's impossible to discern the UID or PID on the other end.
To disable:
cpcmd config_set apnscp.bootstrapper postfix_relay_mynetworks true
upcp -sb mail/configure-postfix
config_set manages configuration scopes. Scopes are discussed externally. upcp is a wrapper to update the panel, reset the panel (--reset), run integrity checks (-b) with optional tags. -s skips migrations that are otherwise compulsory if present during a panel update; you wouldn't want an incomplete platform!
My connection is firewalled and I can't send mail directly!
apnscp provides simple smart host support via configuration scope.
How do I uninstall MySQL or PostgreSQL?
Removing either would render the platform inoperable. Do not do this. PostgreSQL handles mail, long-term statistics, and backup account metadata journaling. MySQL handles everything else, including panel data.
Oof. apnscp is taking up 1.5 GB of memory!
There are two important tunables, has_low_memory and clamav_enabled. has_low_memory is a macro that disables several components including:
clamav_enabled => false
passenger_enabled => false
variety of rspamd performance enhancements (redis, proxy worker, neural) => false
MAKEFLAGS=-j1 (non-parallelized build)
dovecot_secure_mode => false (High-security mode)
Switches multi-threaded job daemon Horizon to singular "queue"
clamav_enabled disables ClamAV as well as upload scrubbing and virus checks via Web > Web Apps. This is more of a final line of defense. So long as you are the only custodian of sites on your server, it's safe to disable.
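The cascade described above can be modeled as a plain mapping. This is purely an illustrative sketch of which flags has_low_memory flips, with names taken from the list above; the function and the rspamd flag names are assumptions, not apnscp's actual implementation:

```python
# Illustrative model of the has_low_memory macro: given the base
# settings, it forces several memory-hungry components off. The flag
# names follow the list above; the function itself is hypothetical.
def apply_low_memory(settings):
    overrides = {
        "clamav_enabled": False,       # also disables upload scrubbing
        "passenger_enabled": False,
        "rspamd_redis": False,         # rspamd performance extras off
        "rspamd_proxy_worker": False,
        "rspamd_neural": False,
        "makeflags": "-j1",            # non-parallelized builds
        "dovecot_secure_mode": False,  # high-security mode off
        "horizon_driver": "queue",     # single-threaded job daemon
    }
    merged = dict(settings)
    merged.update(overrides)           # macro wins over base settings
    return merged

# clamav_enabled is forced off regardless of the base setting.
cfg = apply_low_memory({"has_low_memory": True, "clamav_enabled": True})
```

The point of the model: the macro is an override layer, so re-enabling an individual component means unsetting has_low_memory and toggling that component's flag directly.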
Resources
apnscp documentation
v3 release notes
Adding sites, logging in
Customizing apnscp
CLI helpers
Knowledgebase - focused for end-users. Administration is covered under hq.apnscp.com
Scopes - simplify complex tasks
License information
Licenses are tied to the server but may be transferred to a new server. Once transferred, apnscp becomes deactivated on the old server: your sites continue to operate, but apnscp can no longer help you manage the server or deploy automatic updates. A copy of the license can be made either by copying /usr/local/apnscp/config/license.pem or via License > Download License in the top-right corner. Likewise, to install the license on a new machine, just replace config/license.pem with your original copy.
Submitted February 17, 2019 at 05:14PM by tsammons https://www.reddit.com/r/webhosting/comments/arqya9/built_a_control_panel_over_16_years_free_lifetime/