#tag docker image after build
hydrus · 1 year ago
Text
Version 556
youtube · windows (zip, exe) · macOS (app) · linux (tar.zst)
I had an ok week. I fixed some bugs and added a system to force-set filetypes.
You will be asked on update if you want to regenerate some animation thumbnails. The popup explains the decision, I recommend 'yes'.
full changelog
forced filetype
The difference between a zip and an Ugoira and a cbz is not perfectly clear cut. I am happy with the current filetype scanner--and there are a couple more improvements this week--but I'm sure there will always be some fuzziness in the difficult cases. This also applies to some clever other situations, like files that secretly store a zip concatenated after a jpeg. You might want that file to be considered something other than hydrus thinks it technically is.
So, on any file selection, you can now hit right-click->manage->force filetype. You can set any file to be seen as any other file. The changes take effect immediately, are reflected in presentation and system:filetype searches, and the files themselves will be renamed on disk (to aid 'open externally'). The original filetype is remembered, and everything is easily undoable through the same dialog.
Also added is 'system:has forced filetype', under the 'system:file properties' entry, if you'd like to find what you have set one way or the other.
This is experimental, and I don't recommend it for the most casual users, but if you are comfortable, have a play with it. I still need to write better error handling for complete nonsense cases (e.g. calling a webm a krita is probably going to raise an error somewhere), but let me know how you get on.
other highlights
I fixed some dodgy numbers in Mr. Bones (deleted file count) and the file history chart (inbox/archive count). If you have had some whack results here, let me know if things are any better! If they aren't, does changing the search to something more specific than 'all my files'/'system:everything' improve things?
Some new boot errors, mostly related to missing database components, are now handled with nicer dialog prompts, and even interactive console prompts serverside.
I _may_ have fixed/relieved the 'program is hung when restored from minimise to system tray' issue, but I am not confident. If you still have this, let me know how things are now. If you still get a hang, more info on what your client was doing during the minimise would help--I just cannot reproduce this problem reliably.
Thanks to a user who figured out all the build script stuff, the Docker package is now based on Alpine 3.19, which should bring newer libraries and broader file support.
birthday and year summary
The first non-experimental beta of hydrus was released on December 14th, 2011. We are now going on twelve years.
Like many, I had an imperfect 2023. I've no complaints, but IRL problems from 2022 cut into my free time and energy, and I regret that it impacted my hydrus work time. I had hoped to move some larger projects forward this year, but I was mostly treading water with little features and optimisations. That said, looking at the changelog for the year reveals good progress nonetheless, including: multiple duplicate search and filter speed and accuracy improvements, and the 'one file in this search, the other in this search' system; significant Client API expansions, in good part thanks to a user, including the duplicates system, more page inspections, multiple local file domains, and http headers; new sidecar datatypes and string processing tools; improvements to 'related tags' search; much better transparency support, including 'system:has transparency'; more program stability, particularly with mpv; much much faster tag autocomplete results, and faster tag and file search cancelling; the inc/dec rating service; better file timestamp awareness and full editing capability; the SauceNAO-style image search under 'system:similar files'; blurhashes; more and better system predicate parsing, and natural system predicate parsing in the normal file search input; a background database table delete system that relieves huge jobs like 'delete the PTR'; more accurate Mr. Bones and File History, and both windows now taking any search; and multiple new file formats, like HEIF and gzip and Krita, and thumbnails and full rendering for several like PSD and PDF, again in good part thanks to a user, and then most recently the Ugoira and CBZ work.
I'm truly looking forward to the new year, and I plan to keep working and putting out releases every week. I deeply appreciate the feedback and help over the years. Thank you!
next week
I have only one more week in the year before my Christmas holiday, so I'll just do some simple cleanup and little fixes.
this-week-in-rust · 1 year ago
Text
This Week in Rust 516
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Updates from Rust Community
Official
Announcing Rust 1.73.0
Polonius update
Project/Tooling Updates
rust-analyzer changelog #202
Announcing: pid1 Crate for Easier Rust Docker Images - FP Complete
bit_seq in Rust: A Procedural Macro for Bit Sequence Generation
tcpproxy 0.4 released
Rune 0.13
Rust on Espressif chips - September 29 2023
esp-rs quarterly planning: Q4 2023
Implementing the #[diagnostic] namespace to improve rustc error messages in complex crates
Observations/Thoughts
Safety vs Performance. A case study of C, C++ and Rust sort implementations
Raw SQL in Rust with SQLx
Thread-per-core
Edge IoT with Rust on ESP: HTTP Client
The Ultimate Data Engineering Chadstack. Running Rust inside Apache Airflow
Why Rust doesn't need a standard div_rem: An LLVM tale - CodSpeed
Making Rust supply chain attacks harder with Cackle
[video] Rust 1.73.0: Everything Revealed in 16 Minutes
Rust Walkthroughs
Let's Build A Cargo Compatible Build Tool - Part 5
How we reduced the memory usage of our Rust extension by 4x
Calling Rust from Python
Acceptance Testing embedded-hal Drivers
5 ways to instantiate Rust structs in tests
Research
Looking for Bad Apples in Rust Dependency Trees Using GraphQL and Trustfall
Miscellaneous
Rust, Open Source, Consulting - Interview with Matthias Endler
Edge IoT with Rust on ESP: Connecting WiFi
Bare-metal Rust in Android
[audio] Learn Rust in a Month of Lunches with Dave MacLeod
[video] Rust 1.73.0: Everything Revealed in 16 Minutes
[video] Rust 1.73 Release Train
[video] Why is the JavaScript ecosystem switching to Rust?
Crate of the Week
This week's crate is yarer, a library and command-line tool to evaluate mathematical expressions.
Thanks to Gianluigi Davassi for the self-suggestion!
Please submit your suggestions and votes for next week!
Call for Participation
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Ockam - Make ockam node delete (no args) interactive by asking the user to choose from a list of nodes to delete (tuify)
Ockam - Improve ockam enroll --help text by adding doc comment for identity flag (clap command)
Ockam - Enroll "email: '+' character not allowed"
If you are a Rust project owner and are looking for contributors, please submit tasks here.
Updates from the Rust Project
384 pull requests were merged in the last week
formally demote tier 2 MIPS targets to tier 3
add tvOS to target_os for register_dtor
linker: remove -Zgcc-ld option
linker: remove unstable legacy CLI linker flavors
non_lifetime_binders: fix ICE in lint opaque-hidden-inferred-bound
add async_fn_in_trait lint
add a note to duplicate diagnostics
always preserve DebugInfo in DeadStoreElimination
bring back generic parameters for indices in rustc_abi and make it compile on stable
coverage: allow each coverage statement to have multiple code regions
detect missing => after match guard during parsing
diagnostics: be more careful when suggesting struct fields
don't suggest nonsense suggestions for unconstrained type vars in note_source_of_type_mismatch_constraint
dont call mir.post_mono_checks in codegen
emit feature gate warning for auto traits pre-expansion
ensure that ~const trait bounds on associated functions are in const traits or impls
extend impl's def_span to include its where clauses
fix detecting references to packed unsized fields
fix fast-path for try_eval_scalar_int
fix to register analysis passes with -Zllvm-plugins at link-time
for a single impl candidate, try to unify it with error trait ref
generalize small dominators optimization
improve the suggestion of generic_bound_failure
make FnDef 1-ZST in LLVM debuginfo
more accurately point to where default return type should go
move subtyper below reveal_all and change reveal_all
only trigger refining_impl_trait lint on reachable traits
point to full async fn for future
print normalized ty
properly export function defined in test which uses global_asm!()
remove Key impls for types that involve an AllocId
remove is global hack
remove the TypedArena::alloc_from_iter specialization
show more information when multiple impls apply
suggest pin!() instead of Pin::new() when appropriate
make subtyping explicit in MIR
do not run optimizations on trivial MIR
in smir find_crates returns Vec<Crate> instead of Option<Crate>
add Span to various smir types
miri-script: print which sysroot target we are building
miri: auto-detect no_std where possible
miri: continuation of #3054: enable spurious reads in TB
miri: do not use host floats in simd_{ceil,floor,round,trunc}
miri: ensure RET assignments do not get propagated on unwinding
miri: implement llvm.x86.aesni.* intrinsics
miri: refactor dlsym: dispatch symbols via the normal shim mechanism
miri: support getentropy on macOS as a foreign item
miri: tree Borrows: do not create new tags as 'Active'
add missing inline attributes to Duration trait impls
stabilize Option::as_(mut_)slice
reuse existing Somes in Option::(x)or
fix generic bound of str::SplitInclusive's DoubleEndedIterator impl
cargo: refactor(toml): Make manifest file layout more consistent
cargo: add new package cache lock modes
cargo: add unsupported short suggestion for --out-dir flag
cargo: crates-io: add doc comment for NewCrate struct
cargo: feat: add Edition2024
cargo: prep for automating MSRV management
cargo: set and verify all MSRVs in CI
rustdoc-search: fix bug with multi-item impl trait
rustdoc: rename issue-\d+.rs tests to have meaningful names (part 2)
rustdoc: Show enum discriminant if it is a C-like variant
rustfmt: adjust span derivation for const generics
clippy: impl_trait_in_params now supports impls and traits
clippy: into_iter_without_iter: walk up deref impl chain to find iter methods
clippy: std_instead_of_core: avoid lint inside of proc-macro
clippy: avoid invoking ignored_unit_patterns in macro definition
clippy: fix items_after_test_module for non root modules, add applicable suggestion
clippy: fix ICE in redundant_locals
clippy: fix: avoid changing drop order
clippy: improve redundant_locals help message
rust-analyzer: add config option to use rust-analyzer specific target dir
rust-analyzer: add configuration for the default action of the status bar click action in VSCode
rust-analyzer: do flyimport completions by prefix search for short paths
rust-analyzer: add assist for applying De Morgan's law to Iterator::all and Iterator::any
rust-analyzer: add backtick to surrounding and auto-closing pairs
rust-analyzer: implement tuple return type to tuple struct assist
rust-analyzer: ensure rustfmt runs when configured with ./
rust-analyzer: fix path syntax produced by the into_to_qualified_from assist
rust-analyzer: recognize custom main function as binary entrypoint for runnables
Rust Compiler Performance Triage
A quiet week, with few regressions and improvements.
Triage done by @simulacrum. Revision range: 9998f4add..84d44dd
1 Regressions, 2 Improvements, 4 Mixed; 1 of them in rollups
68 artifact comparisons made in total
Full report here
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
RFCs
[disposition: merge] RFC: Remove implicit features in a new edition
Tracking Issues & PRs
[disposition: merge] Bump COINDUCTIVE_OVERLAP_IN_COHERENCE to deny + warn in deps
[disposition: merge] document ABI compatibility
[disposition: merge] Broaden the consequences of recursive TLS initialization
[disposition: merge] Implement BufRead for VecDeque<u8>
[disposition: merge] Tracking Issue for feature(file_set_times): FileTimes and File::set_times
[disposition: merge] impl Not, Bit{And,Or}{,Assign} for IP addresses
[disposition: close] Make RefMut Sync
[disposition: merge] Implement FusedIterator for DecodeUtf16 when the inner iterator does
[disposition: merge] Stabilize {IpAddr, Ipv6Addr}::to_canonical
[disposition: merge] rustdoc: hide #[repr(transparent)] if it isn't part of the public ABI
New and Updated RFCs
[new] Add closure-move-bindings RFC
[new] RFC: Include Future and IntoFuture in the 2024 prelude
Call for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:
No RFCs issued a call for testing this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Upcoming Events
Rusty Events between 2023-10-11 - 2023-11-08 🦀
Virtual
2023-10-11 | Virtual (Boulder, CO, US) | Boulder Elixir and Rust
Monthly Meetup
2023-10-12 - 2023-10-13 | Virtual (Brussels, BE) | EuroRust
EuroRust 2023
2023-10-12 | Virtual (Nuremberg, DE) | Rust Nuremberg
Rust NĂĽrnberg online
2023-10-18 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
Operating System Primitives (Atomics & Locks Chapter 8)
2023-10-18 | Virtual (Vancouver, BC, CA) | Vancouver Rust
Rust Study/Hack/Hang-out
2023-10-19 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2023-10-19 | Virtual (Stuttgart, DE) | Rust Community Stuttgart
Rust-Meetup
2023-10-24 | Virtual (Berlin, DE) | OpenTechSchool Berlin
Rust Hack and Learn | Mirror
2023-10-24 | Virtual (Washington, DC, US) | Rust DC
Month-end Rusting—Fun with 🍌 and 🔎!
2023-10-31 | Virtual (Dallas, TX, US) | Dallas Rust
Last Tuesday
2023-11-01 | Virtual (Indianapolis, IN, US) | Indy Rust
Indy.rs - with Social Distancing
Asia
2023-10-11 | Kuala Lumpur, MY | GoLang Malaysia
Rust Meetup Malaysia October 2023 | Event updates Telegram | Event group chat
2023-10-18 | Tokyo, JP | Tokyo Rust Meetup
Rust and the Age of High-Integrity Languages
Europe
2023-10-11 | Brussels, BE | BeCode Brussels Meetup
Rust on Web - EuroRust Conference
2023-10-12 - 2023-10-13 | Brussels, BE | EuroRust
EuroRust 2023
2023-10-12 | Brussels, BE | Rust Aarhus
Rust Aarhus - EuroRust Conference
2023-10-12 | Reading, UK | Reading Rust Workshop
Reading Rust Meetup at Browns
2023-10-17 | Helsinki, FI | Finland Rust-lang Group
Helsinki Rustaceans Meetup
2023-10-17 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig
SIMD in Rust
2023-10-19 | Amsterdam, NL | Rust Developers Amsterdam Group
Rust Amsterdam Meetup @ Terraform
2023-10-19 | Wrocław, PL | Rust Wrocław
Rust Meetup #35
2023-10-25 | Dublin, IE | Rust Dublin
Biome, web development tooling with Rust
2023-10-26 | Augsburg, DE | Rust - Modern Systems Programming in Leipzig
Augsburg Rust Meetup #3
2023-10-26 | Delft, NL | Rust Nederland
Rust at TU Delft
2023-11-07 | Brussels, BE | Rust Aarhus
Rust Aarhus - Rust and Talk beginners edition
North America
2023-10-11 | Boulder, CO, US | Boulder Rust Meetup
First Meetup - Demo Day and Office Hours
2023-10-12 | Lehi, UT, US | Utah Rust
The Actor Model: Fearless Concurrency, Made Easy w/Chris Mena
2023-10-13 | Cambridge, MA, US | Boston Rust Meetup
Kendall Rust Lunch
2023-10-17 | San Francisco, CA, US | San Francisco Rust Study Group
Rust Hacking in Person
2023-10-18 | Brookline, MA, US | Boston Rust Meetup
Boston University Rust Lunch
2023-10-19 | Mountain View, CA, US | Mountain View Rust Meetup
Rust Meetup at Hacker Dojo
2023-10-19 | Nashville, TN, US | Music City Rust Developers
Rust Goes Where It Pleases Pt2 - Rust on the front end!
2023-10-19 | Seattle, WA, US | Seattle Rust User Group
Seattle Rust User Group - October Meetup
2023-10-25 | Austin, TX, US | Rust ATX
Rust Lunch - Fareground
2023-10-25 | Chicago, IL, US | Deep Dish Rust
Rust Happy Hour
Oceania
2023-10-17 | Christchurch, NZ | Christchurch Rust Meetup Group
Christchurch Rust meetup meeting
2023-10-26 | Brisbane, QLD, AU | Rust Brisbane
October Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
The Rust mission -- let you write software that's fast and correct, productively -- has never been more alive. So next Rustconf, I plan to celebrate:
All the buffer overflows I didn't create, thanks to Rust
All the unit tests I didn't have to write, thanks to its type system
All the null checks I didn't have to write thanks to Option and Result
All the JS I didn't have to write thanks to WebAssembly
All the impossible states I didn't have to assert "This can never actually happen"
All the JSON field keys I didn't have to manually type in thanks to Serde
All the missing SQL column bugs I caught at compiletime thanks to Diesel
All the race conditions I never had to worry about thanks to the borrow checker
All the connections I can accept concurrently thanks to Tokio
All the formatting comments I didn't have to leave on PRs thanks to Rustfmt
All the performance footguns I didn't create thanks to Clippy
– Adam Chalmers in their RustConf 2023 recap
Thanks to robin for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.
Email list hosting is sponsored by The Rust Foundation
Discuss on r/rust
codeonedigest · 2 years ago
Text
Docker Tag and Push Image to Hub | Docker Tagging Explained and Best Practices
Full Video Link: https://youtu.be/X-uuxvi10Cw Hi, a new #video on #DockerImageTagging is published on @codeonedigest #youtube channel. Learn TAGGING docker image. Different ways to TAG docker image #Tagdockerimage #pushdockerimagetodockerhubrepository #
The next step after building a Docker image is to tag it. Image tagging is important for uploading the image to a registry such as Docker Hub, Azure Container Registry, or Elastic Container Registry. There are different ways to tag a Docker image. How do you tag a Docker image? What are the best practices for Docker image tagging? How do you tag a Docker container image? How do you tag and push a Docker…
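The embedded video walks through the actual commands; as a rough sketch of the workflow described here (image, account, and registry names are placeholders, not taken from the post):
docker build -t myapp:latest .
# tag the locally built image for your Docker Hub account
docker tag myapp:latest mydockerid/myapp:1.0.0
docker login
docker push mydockerid/myapp:1.0.0
# the same local image can be tagged for another registry, e.g. Azure Container Registry
docker tag myapp:latest myregistry.azurecr.io/myapp:1.0.0
docker push myregistry.azurecr.io/myapp:1.0.0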
globalmediacampaign · 4 years ago
Text
How to set up command-line access to Amazon Keyspaces (for Apache Cassandra) by using the new developer toolkit Docker image
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and fully managed Cassandra-compatible database service. Amazon Keyspaces helps you run your Cassandra workloads more easily by using a serverless database that can scale up and down automatically in response to your actual application traffic. Because Amazon Keyspaces is serverless, there are no clusters or nodes to provision and manage. You can get started with Amazon Keyspaces with a few clicks in the console or a few changes to your existing Cassandra driver configuration. In this post, I show you how to set up command-line access to Amazon Keyspaces by using the keyspaces-toolkit Docker image. The keyspaces-toolkit Docker image contains commonly used Cassandra developer tooling. The toolkit comes with the Cassandra Query Language Shell (cqlsh) and is configured with best practices for Amazon Keyspaces. The container image is open source and also compatible with Apache Cassandra 3.x clusters. A command line interface (CLI) such as cqlsh can be useful when automating database activities. You can use cqlsh to run one-time queries and perform administrative tasks, such as modifying schemas or bulk-loading flat files. You also can use cqlsh to enable Amazon Keyspaces features, such as point-in-time recovery (PITR) backups and assign resource tags to keyspaces and tables. The following screenshot shows a cqlsh session connected to Amazon Keyspaces and the code to run a CQL create table statement. Build a Docker image To get started, download and build the Docker image so that you can run the keyspaces-toolkit in a container. A Docker image is the template for the complete and executable version of an application. It’s a way to package applications and preconfigured tools with all their dependencies. To build and run the image for this post, install the latest Docker engine and Git on the host or local environment. The following command builds the image from the source. docker build --tag amazon/keyspaces-toolkit --build-arg CLI_VERSION=latest https://github.com/aws-samples/amazon-keyspaces-toolkit.git The preceding command includes the following parameters: –tag – The name of the image in the name:tag Leaving out the tag results in latest. –build-arg CLI_VERSION – This allows you to specify the version of the base container. Docker images are composed of layers. If you’re using the AWS CLI Docker image, aligning versions significantly reduces the size and build times of the keyspaces-toolkit image. Connect to Amazon Keyspaces Now that you have a container image built and available in your local repository, you can use it to connect to Amazon Keyspaces. To use cqlsh with Amazon Keyspaces, create service-specific credentials for an existing AWS Identity and Access Management (IAM) user. The service-specific credentials enable IAM users to access Amazon Keyspaces, but not access other AWS services. The following command starts a new container running the cqlsh process. docker run --rm -ti amazon/keyspaces-toolkit cassandra.us-east-1.amazonaws.com 9142 --ssl -u "SERVICEUSERNAME" -p "SERVICEPASSWORD" The preceding command includes the following parameters: run – The Docker command to start the container from an image. It’s the equivalent to running create and start. –rm –Automatically removes the container when it exits and creates a container per session or run. -ti – Allocates a pseudo TTY (t) and keeps STDIN open (i) even if not attached (remove i when user input is not required). 
amazon/keyspaces-toolkit – The image name of the keyspaces-toolkit. us-east-1.amazonaws.com – The Amazon Keyspaces endpoint. 9142 – The default SSL port for Amazon Keyspaces. After connecting to Amazon Keyspaces, exit the cqlsh session and terminate the process by using the QUIT or EXIT command. Drop-in replacement Now, simplify the setup by assigning an alias (or DOSKEY for Windows) to the Docker command. The alias acts as a shortcut, enabling you to use the alias keyword instead of typing the entire command. You will use cqlsh as the alias keyword so that you can use the alias as a drop-in replacement for your existing Cassandra scripts. The alias contains the parameter –v "$(pwd)":/source, which mounts the current directory of the host. This is useful for importing and exporting data with COPY or using the cqlsh --file command to load external cqlsh scripts. alias cqlsh='docker run --rm -ti -v "$(pwd)":/source amazon/keyspaces-toolkit cassandra.us-east-1.amazonaws.com 9142 --ssl' For security reasons, don’t store the user name and password in the alias. After setting up the alias, you can create a new cqlsh session with Amazon Keyspaces by calling the alias and passing in the service-specific credentials. cqlsh -u "SERVICEUSERNAME" -p "SERVICEPASSWORD" Later in this post, I show how to use AWS Secrets Manager to avoid using plaintext credentials with cqlsh. You can use Secrets Manager to store, manage, and retrieve secrets. Create a keyspace Now that you have the container and alias set up, you can use the keyspaces-toolkit to create a keyspace by using cqlsh to run CQL statements. In Cassandra, a keyspace is the highest-order structure in the CQL schema, which represents a grouping of tables. A keyspace is commonly used to define the domain of a microservice or isolate clients in a multi-tenant strategy. Amazon Keyspaces is serverless, so you don’t have to configure clusters, hosts, or Java virtual machines to create a keyspace or table. When you create a new keyspace or table, it is associated with an AWS Account and Region. Though a traditional Cassandra cluster is limited to 200 to 500 tables, with Amazon Keyspaces the number of keyspaces and tables for an account and Region is virtually unlimited. The following command creates a new keyspace by using SingleRegionStrategy, which replicates data three times across multiple Availability Zones in a single AWS Region. Storage is billed by the raw size of a single replica, and there is no network transfer cost when replicating data across Availability Zones. Using keyspaces-toolkit, connect to Amazon Keyspaces and run the following command from within the cqlsh session. CREATE KEYSPACE amazon WITH REPLICATION = {'class': 'SingleRegionStrategy'} AND TAGS = {'domain' : 'shoppingcart' , 'app' : 'acme-commerce'}; The preceding command includes the following parameters: REPLICATION – SingleRegionStrategy replicates data three times across multiple Availability Zones. TAGS – A label that you assign to an AWS resource. For more information about using tags for access control, microservices, cost allocation, and risk management, see Tagging Best Practices. Create a table Previously, you created a keyspace without needing to define clusters or infrastructure. Now, you will add a table to your keyspace in a similar way. A Cassandra table definition looks like a traditional SQL create table statement with an additional requirement for a partition key and clustering keys. 
These keys determine how data in CQL rows are distributed, sorted, and uniquely accessed. Tables in Amazon Keyspaces have the following unique characteristics: Virtually no limit to table size or throughput – In Amazon Keyspaces, a table’s capacity scales up and down automatically in response to traffic. You don’t have to manage nodes or consider node density. Performance stays consistent as your tables scale up or down. Support for “wide” partitions – CQL partitions can contain a virtually unbounded number of rows without the need for additional bucketing and sharding partition keys for size. This allows you to scale partitions “wider” than the traditional Cassandra best practice of 100 MB. No compaction strategies to consider – Amazon Keyspaces doesn’t require defined compaction strategies. Because you don’t have to manage compaction strategies, you can build powerful data models without having to consider the internals of the compaction process. Performance stays consistent even as write, read, update, and delete requirements change. No repair process to manage – Amazon Keyspaces doesn’t require you to manage a background repair process for data consistency and quality. No tombstones to manage – With Amazon Keyspaces, you can delete data without the challenge of managing tombstone removal, table-level grace periods, or zombie data problems. 1 MB row quota – Amazon Keyspaces supports the Cassandra blob type, but storing large blob data greater than 1 MB results in an exception. It’s a best practice to store larger blobs across multiple rows or in Amazon Simple Storage Service (Amazon S3) object storage. Fully managed backups – PITR helps protect your Amazon Keyspaces tables from accidental write or delete operations by providing continuous backups of your table data. The following command creates a table in Amazon Keyspaces by using a cqlsh statement with customer properties specifying on-demand capacity mode, PITR enabled, and AWS resource tags. Using keyspaces-toolkit to connect to Amazon Keyspaces, run this command from within the cqlsh session. CREATE TABLE amazon.eventstore( id text, time timeuuid, event text, PRIMARY KEY(id, time)) WITH CUSTOM_PROPERTIES = { 'capacity_mode':{'throughput_mode':'PAY_PER_REQUEST'}, 'point_in_time_recovery':{'status':'enabled'} } AND TAGS = {'domain' : 'shoppingcart' , 'app' : 'acme-commerce' , 'pii': 'true'}; The preceding command includes the following parameters: capacity_mode – Amazon Keyspaces has two read/write capacity modes for processing reads and writes on your tables. The default for new tables is on-demand capacity mode (the PAY_PER_REQUEST flag). point_in_time_recovery – When you enable this parameter, you can restore an Amazon Keyspaces table to a point in time within the preceding 35 days. There is no overhead or performance impact by enabling PITR. TAGS – Allows you to organize resources, define domains, specify environments, allocate cost centers, and label security requirements. Insert rows Before inserting data, check if your table was created successfully. Amazon Keyspaces performs data definition language (DDL) operations asynchronously, such as creating and deleting tables. You also can monitor the creation status of a new resource programmatically by querying the system schema table. Also, you can use a toolkit helper for exponential backoff. Check for table creation status Cassandra provides information about the running cluster in its system tables. 
With Amazon Keyspaces, there are no clusters to manage, but it still provides system tables for the Amazon Keyspaces resources in an account and Region. You can use the system tables to understand the creation status of a table. The system_schema_mcs keyspace is a new system keyspace with additional content related to serverless functionality. Using keyspaces-toolkit, run the following SELECT statement from within the cqlsh session to retrieve the status of the newly created table. SELECT keyspace_name, table_name, status FROM system_schema_mcs.tables WHERE keyspace_name = 'amazon' AND table_name = 'eventstore'; The following screenshot shows an example of output for the preceding CQL SELECT statement. Insert sample data Now that you have created your table, you can use CQL statements to insert and read sample data. Amazon Keyspaces requires all write operations (insert, update, and delete) to use the LOCAL_QUORUM consistency level for durability. With reads, an application can choose between eventual consistency and strong consistency by using LOCAL_ONE or LOCAL_QUORUM consistency levels. The benefits of eventual consistency in Amazon Keyspaces are higher availability and reduced cost. See the following code. CONSISTENCY LOCAL_QUORUM; INSERT INTO amazon.eventstore(id, time, event) VALUES ('1', now(), '{eventtype:"click-cart"}'); INSERT INTO amazon.eventstore(id, time, event) VALUES ('2', now(), '{eventtype:"showcart"}'); INSERT INTO amazon.eventstore(id, time, event) VALUES ('3', now(), '{eventtype:"clickitem"}') IF NOT EXISTS; SELECT * FROM amazon.eventstore; The preceding code uses IF NOT EXISTS or lightweight transactions to perform a conditional write. With Amazon Keyspaces, there is no heavy performance penalty for using lightweight transactions. You get similar performance characteristics of standard insert, update, and delete operations. The following screenshot shows the output from running the preceding statements in a cqlsh session. The three INSERT statements added three unique rows to the table, and the SELECT statement returned all the data within the table.   Export table data to your local host You now can export the data you just inserted by using the cqlsh COPY TO command. This command exports the data to the source directory, which you mounted earlier to the working directory of the Docker run when creating the alias. The following cqlsh statement exports your table data to the export.csv file located on the host machine. CONSISTENCY LOCAL_ONE; COPY amazon.eventstore(id, time, event) TO '/source/export.csv' WITH HEADER=false; The following screenshot shows the output of the preceding command from the cqlsh session. After the COPY TO command finishes, you should be able to view the export.csv from the current working directory of the host machine. For more information about tuning export and import processes when using cqlsh COPY TO, see Loading data into Amazon Keyspaces with cqlsh. Use credentials stored in Secrets Manager Previously, you used service-specific credentials to connect to Amazon Keyspaces. In the following example, I show how to use the keyspaces-toolkit helpers to store and access service-specific credentials in Secrets Manager. The helpers are a collection of scripts bundled with keyspaces-toolkit to assist with common tasks. 
By overriding the default entry point cqlsh, you can call the aws-sm-cqlsh.sh script, a wrapper around the cqlsh process that retrieves the Amazon Keyspaces service-specific credentials from Secrets Manager and passes them to the cqlsh process. This script allows you to avoid hard-coding the credentials in your scripts. The following diagram illustrates this architecture. Configure the container to use the host’s AWS CLI credentials The keyspaces-toolkit extends the AWS CLI Docker image, making keyspaces-toolkit extremely lightweight. Because you may already have the AWS CLI Docker image in your local repository, keyspaces-toolkit adds only an additional 10 MB layer extension to the AWS CLI. This is approximately 15 times smaller than using cqlsh from the full Apache Cassandra 3.11 distribution. The AWS CLI runs in a container and doesn’t have access to the AWS credentials stored on the container’s host. You can share credentials with the container by mounting the ~/.aws directory. Mount the host directory to the container by using the -v parameter. To validate a proper setup, the following command lists current AWS CLI named profiles. docker run --rm -ti -v ~/.aws:/root/.aws --entrypoint aws amazon/keyspaces-toolkit configure list-profiles The ~/.aws directory is a common location for the AWS CLI credentials file. If you configured the container correctly, you should see a list of profiles from the host credentials. For instructions about setting up the AWS CLI, see Step 2: Set Up the AWS CLI and AWS SDKs. Store credentials in Secrets Manager Now that you have configured the container to access the host’s AWS CLI credentials, you can use the Secrets Manager API to store the Amazon Keyspaces service-specific credentials in Secrets Manager. The secret name keyspaces-credentials in the following command is also used in subsequent steps. docker run --rm -ti -v ~/.aws:/root/.aws --entrypoint aws amazon/keyspaces-toolkit secretsmanager create-secret --name keyspaces-credentials --description "Store Amazon Keyspaces Generated Service Credentials" --secret-string "{"username":"SERVICEUSERNAME","password":"SERVICEPASSWORD","engine":"cassandra","host":"SERVICEENDPOINT","port":"9142"}" The preceding command includes the following parameters: –entrypoint – The default entry point is cqlsh, but this command uses this flag to access the AWS CLI. –name – The name used to identify the key to retrieve the secret in the future. –secret-string – Stores the service-specific credentials. Replace SERVICEUSERNAME and SERVICEPASSWORD with your credentials. Replace SERVICEENDPOINT with the service endpoint for the AWS Region. Creating and storing secrets requires CreateSecret and GetSecretValue permissions in your IAM policy. As a best practice, rotate secrets periodically when storing database credentials. Use the Secrets Manager helper script Use the Secrets Manager helper script to sign in to Amazon Keyspaces by replacing the user and password fields with the secret key from the preceding keyspaces-credentials command. docker run --rm -ti -v ~/.aws:/root/.aws --entrypoint aws-sm-cqlsh.sh amazon/keyspaces-toolkit keyspaces-credentials --ssl --execute "DESCRIBE Keyspaces" The preceding command includes the following parameters: -v – Used to mount the directory containing the host’s AWS CLI credentials file. –entrypoint – Use the helper by overriding the default entry point of cqlsh to access the Secrets Manager helper script, aws-sm-cqlsh.sh. 
keyspaces-credentials – The key to access the credentials stored in Secrets Manager. –execute – Runs a CQL statement. Update the alias You now can update the alias so that your scripts don’t contain plaintext passwords. You also can manage users and roles through Secrets Manager. The following code sets up a new alias by using the keyspaces-toolkit Secrets Manager helper for passing the service-specific credentials to Secrets Manager. alias cqlsh='docker run --rm -ti -v ~/.aws:/root/.aws -v "$(pwd)":/source --entrypoint aws-sm-cqlsh.sh amazon/keyspaces-toolkit keyspaces-credentials --ssl' To have the alias available in every new terminal session, add the alias definition to your .bashrc file, which is executed on every new terminal window. You can usually find this file in $HOME/.bashrc or $HOME/bash_aliases (loaded by $HOME/.bashrc). Validate the alias Now that you have updated the alias with the Secrets Manager helper, you can use cqlsh without the Docker details or credentials, as shown in the following code. cqlsh --execute "DESCRIBE TABLE amazon.eventstore;" The following screenshot shows the running of the cqlsh DESCRIBE TABLE statement by using the alias created in the previous section. In the output, you should see the table definition of the amazon.eventstore table you created in the previous step. Conclusion In this post, I showed how to get started with Amazon Keyspaces and the keyspaces-toolkit Docker image. I used Docker to build an image and run a container for a consistent and reproducible experience. I also used an alias to create a drop-in replacement for existing scripts, and used built-in helpers to integrate cqlsh with Secrets Manager to store service-specific credentials. Now you can use the keyspaces-toolkit with your Cassandra workloads. As a next step, you can store the image in Amazon Elastic Container Registry, which allows you to access the keyspaces-toolkit from CI/CD pipelines and other AWS services such as AWS Batch. Additionally, you can control the image lifecycle of the container across your organization. You can even attach policies to expiring images based on age or download count. For more information, see Pushing an image. Cheat sheet of useful commands I did not cover the following commands in this blog post, but they will be helpful when you work with cqlsh, AWS CLI, and Docker. --- Docker --- #To view the logs from the container. Helpful when debugging docker logs CONTAINERID #Exit code of the container. Helpful when debugging docker inspect createtablec --format='{{.State.ExitCode}}' --- CQL --- #Describe keyspace to view keyspace definition DESCRIBE KEYSPACE keyspace_name; #Describe table to view table definition DESCRIBE TABLE keyspace_name.table_name; #Select samples with limit to minimize output SELECT * FROM keyspace_name.table_name LIMIT 10; --- Amazon Keyspaces CQL --- #Change provisioned capacity for tables ALTER TABLE keyspace_name.table_name WITH custom_properties={'capacity_mode':{'throughput_mode': 'PROVISIONED', 'read_capacity_units': 4000, 'write_capacity_units': 3000}} ; #Describe current capacity mode for tables SELECT keyspace_name, table_name, custom_properties FROM system_schema_mcs.tables where keyspace_name = 'amazon' and table_name='eventstore'; --- Linux --- #Line count of multiple/all files in the current directory find . -type f | wc -l #Remove header from csv sed -i '1d' myData.csv About the Author Michael Raney is a Solutions Architect with Amazon Web Services. 
https://aws.amazon.com/blogs/database/how-to-set-up-command-line-access-to-amazon-keyspaces-for-apache-cassandra-by-using-the-new-developer-toolkit-docker-image/
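The walkthrough above assumes service-specific credentials already exist for an IAM user. If memory serves, generating them looks roughly like the following AWS CLI call (the user name is a placeholder; verify the exact syntax against the AWS documentation):
# create Amazon Keyspaces service-specific credentials for an existing IAM user
aws iam create-service-specific-credential --user-name my-keyspaces-user --service-name cassandra.amazonaws.com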
linuxtech-blog · 5 years ago
Text
How to Install and Use Docker on CentOS
Docker is a containerization technology that allows you to quickly build, test and deploy applications as portable, self-sufficient containers that can run virtually anywhere. 
A container is a way to package software along with the binaries and settings required to run it, isolated within an operating system.
In this tutorial, we will install Docker CE on CentOS 7 and explore the basic Docker commands  and concepts. Let’s GO !
Prerequisites
Before proceeding with this tutorial, make sure that you have a CentOS 7 server installed (you may want to see our tutorial: How to install a CentOS 7 image).
Install Docker on CentOS
Although the Docker package is available in the official CentOS 7 repository, it may not always be the latest version. The recommended approach is to install Docker from the Docker’s repositories.
To install Docker on your CentOS 7 server follow the steps below:
1. Start by updating your system packages and install the required dependencies:
sudo yum update
sudo yum install yum-utils device-mapper-persistent-data lvm2
2. Next, run the following command which will add the Docker stable repository to your system:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Now that the Docker repository is enabled, install the latest version of Docker CE (Community Edition) using yum by typing:
sudo yum install docker-ce
4. Once the Docker package is installed, start the Docker daemon and enable it to automatically start at boot time:
sudo systemctl start docker
sudo systemctl enable docker
5. To verify that the Docker service is running type:
sudo systemctl status docker
6. The output should look something like this:
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-31 08:51:20 UTC; 7s ago
     Docs: https://docs.docker.com
 Main PID: 2492 (dockerd)
   CGroup: /system.slice/docker.service
           ├─2492 /usr/bin/dockerd
           └─2498 docker-containerd --config /var/run/docker/containerd/containerd.toml
At the time of writing, the current stable version of Docker is 18.06.1. To print the Docker version, type:
docker -v
Docker version 18.06.1-ce, build e68fc7a
Executing the Docker Command Without Sudo
By default, managing Docker requires administrator privileges. If you want to run Docker commands as a non-root user without prepending sudo, you need to add your user to the docker group, which is created during the installation of the Docker CE package. You can do that by typing:
sudo usermod -aG docker $USER
$USER is an environment variable that holds your username.
Log out and log back in so that the group membership is refreshed.
To verify Docker is installed successfully and that you can run docker commands without sudo, issue the following command which will download a test image, run it in a container, print a “Hello from Docker” message and exit:
docker container run hello-world
The output should look like the following:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:f5233545e43561214ca4891fd1157e1c3c563316ed8e237750d59bde73361e77
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
Docker command line interface
Now that we have a working Docker installation, let’s go over the basic syntax of the docker CLI.
The docker command line takes the following form:
docker [option] [subcommand] [arguments]
You can list all available commands by typing docker with no parameters:
docker
If you need more help on any [subcommand], just type:
docker [subcommand] --help
Docker Images
A Docker image is made up of a series of layers, each representing an instruction in the image's Dockerfile, that together form an executable software application. An image is an immutable binary file including the application and all other dependencies such as binaries, libraries, and instructions necessary for running the application. In short, a Docker image is essentially a snapshot of a Docker container.
Docker Hub is a cloud-based registry service which, among other functionalities, is used for keeping Docker images in either public or private repositories.
To search the Docker Hub repository for an image just use the search subcommand. For example, to search for the CentOS image, run:
docker search centos
The output should look like the following:
NAME                              DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
centos                            The official build of CentOS.                   4257    [OK]
ansible/centos7-ansible           Ansible on Centos7                              109                [OK]
jdeathe/centos-ssh                CentOS-6 6.9 x86_64 / CentOS-7 7.4.1708 x86_…   94                 [OK]
consol/centos-xfce-vnc            Centos container with "headless" VNC session…   52                 [OK]
imagine10255/centos6-lnmp-php56   centos6-lnmp-php56                              40                 [OK]
tutum/centos                      Simple CentOS docker image with SSH access      39
As you can see, the search results print a table with five columns: NAME, DESCRIPTION, STARS, OFFICIAL and AUTOMATED. An official image is an image that Docker develops in conjunction with upstream partners.
If we want to download the official build of CentOS 7, we can do that by using the image pull subcommand:
docker image pull centos
Using default tag: latest
latest: Pulling from library/centos
469cfcc7a4b3: Pull complete
Digest: sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
Status: Downloaded newer image for centos:latest
Depending on your Internet speed, the download may take a few seconds or a few minutes. Once the image is downloaded we can list the images with:
docker image ls
The output should look something like the following:
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              e38bc07ac18e        3 weeks ago         1.85kB
centos              latest              e934aafc2206        4 weeks ago         199MB
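As mentioned earlier, an image is made up of layers; a quick way to inspect those layers (not shown in the original tutorial) is the history subcommand. The output will differ on your system:
docker image history centos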
If for some reason you want to delete an image you can do that with the image rm [image_name] subcommand:
docker image rm centos
Untagged: centos:latest
Untagged: centos@sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
Deleted: sha256:e934aafc22064b7322c0250f1e32e5ce93b2d19b356f4537f5864bd102e8531f
Deleted: sha256:43e653f84b79ba52711b0f726ff5a7fd1162ae9df4be76ca1de8370b8bbf9bb0
Docker Containers
An instance of an image is called a container. A container represents a runtime for a single application, process, or service.
It may not be the most appropriate comparison, but if you are a programmer you can think of a Docker image as a class and a Docker container as an instance of that class.
We can start, stop, remove and manage a container with the docker container subcommand.
The following command will start a Docker container based on the CentoOS image. If you don’t have the image locally, it will download it first:
docker container run centos
At first sight, it may seem to you that nothing happened at all. Well, that is not true. The CentOS container stops immediately after booting up because it does not have a long-running process and we didn’t provide any command, so the container booted up, ran an empty command and then exited.
The switch -it allows us to interact with the container via the command line. To start an interactive container type:
docker container run -it centos /bin/bash
As you can see from the output, once the container is started the command prompt changes, which means that you're now working from inside the container:
[root@719ef9304412 /]#
To list running containers, type:
docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
79ab8e16d567        centos              "/bin/bash"         22 minutes ago      Up 22 minutes                           ecstatic_ardinghelli
If you don’t have any running containers the output will be empty.
To view both running and stopped containers, pass it the -a switch:
docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
79ab8e16d567        centos              "/bin/bash"              22 minutes ago      Up 22 minutes                                   ecstatic_ardinghelli
c55680af670c        centos              "/bin/bash"              30 minutes ago      Exited (0) 30 minutes ago                       modest_hawking
c6a147d1bc8a        hello-world         "/hello"                 20 hours ago        Exited (0) 20 hours ago                         sleepy_shannon
To delete one or more containers just copy the container ID (or IDs) from above and paste them after the container rm subcommand:
docker container rm c55680af670c
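The docker container subcommand also covers stopping containers, which is mentioned above but never shown; assuming the running container ID from the listing, it would look like:
docker container stop 79ab8e16d567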
Conclusion
You have learned how to install Docker on your CentOS 7 machine and how to download Docker images and manage Docker containers. 
This tutorial barely scratches the surface of the Docker ecosystem. In some of our next articles, we will continue to dive into other aspects of Docker. To learn more about Docker check out the official Docker documentation.
If you have any questions or remarks, please leave a comment.
docker-commands-cheat-shehp · 2 years ago
Text
Docker commands cheat sheet
docker exec -it <container> /bin/sh
docker restart <container>
Show running container stats: docker stats

Docker Cheat Sheet
All commands below are called as options to the base docker command. Use docker [subcommand] --help for more information on a particular command.
To enter a running container, attach a new shell process to a running container called foo, use: docker exec -it foo /bin/bash
Images are just…

Reference - Best Practices
VOLUME: Creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.
FROM: FROM can appear multiple times within a single Dockerfile in order to create multiple images. The tag or digest values are optional. If you omit either of them, the builder assumes latest by default. The builder returns an error if it cannot match the tag value.
CMD: Normal shell processing does not occur when using the exec form. There can only be one CMD instruction in a Dockerfile. If the user specifies arguments to docker run then they will override the default specified in CMD.
LABEL: To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing.
ENV: The environment variables set using ENV will persist when a container is run from the resulting image.
Match rules.
Prepend exec to get around this drawback.
It can be used multiple times in the one Dockerfile.
ARG: Multiple variables may be defined by specifying ARG multiple times. It is not recommended to use build-time variables for passing secrets like GitHub keys, user credentials, etc. Build-time variable values are visible to any user of the image with the docker history command.
ONBUILD: The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile. Any build instruction can be registered as a trigger. Triggers are inherited by the "child" build only. In other words, they are not inherited by "grand-children" builds.
HEALTHCHECK: After a certain number of consecutive failures, the container becomes unhealthy. If a single run of the check takes longer than timeout seconds then the check is considered to have failed. It takes retries consecutive failures of the health check for the container to be considered unhealthy. The command's exit status indicates the health status of the container.
SHELL: Allows an alternate shell to be used, such as zsh, csh, tcsh, powershell, and others.
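The instructions above are described only in prose; the following is a minimal, hypothetical Dockerfile sketch that combines several of them (base image, file names, port and health-check URL are illustrative assumptions, not taken from this cheat sheet):
FROM python:3.11-slim
# build-time variable; not suitable for secrets (visible via docker history)
ARG APP_VERSION=1.0.0
LABEL maintainer="you@example.com" version="$APP_VERSION"
# persists in containers run from the resulting image
ENV APP_HOME=/app
WORKDIR $APP_HOME
# hypothetical application files
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# mount point for externally mounted volumes
VOLUME /app/data
# use an explicit shell for subsequent shell-form instructions
SHELL ["/bin/sh", "-c"]
# container is marked unhealthy after 3 consecutive failed checks
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/')" || exit 1
# only one CMD per Dockerfile; exec form skips normal shell processing
CMD ["python", "app.py"]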
computingpostcom · 2 years ago
Text
In this tutorial, I will show you how to set up a Docker environment for your Django project while it is still in the development phase. Although I am using Ubuntu 18.04, the steps remain the same for whatever Linux distro you use, with the exception of the Docker and docker-compose installation. In any typical software development project, we usually have a flow that starts on the developer's machine, goes to a VCS, and ends in production. We will try to stick to that flow so that you understand where each DevOps tool comes in. DevOps is all about eliminating the yesteryears phrase: "But it works on my laptop!". Docker containerization creates the production environment for your application so that it can be deployed anywhere. Develop once, deploy anywhere!
Install Docker Engine
You need a Docker runtime engine installed on your server/desktop. Our Docker installation guides should be of great help: How to install Docker on CentOS / Debian / Ubuntu.
Environment Setup
I have set up a GitHub repo for this project called django-docker-dev-app. Feel free to fork it or clone/download it. First, create the folder to hold your project. My folder workspace is called django-docker-dev-app. cd into the folder, then open it in your IDE.
Dockerfile
In a containerized environment, all applications live in a container. Containers themselves are built from images. You can create your own image or use other images from Docker Hub. A Dockerfile is a text document that Docker reads to automatically create/build an image. Our Dockerfile will list all the dependencies required by our project. Create a file named Dockerfile:
$ nano Dockerfile
In the file, type the following:
#base image
FROM python:3.7-alpine
#maintainer
LABEL Author="CodeGenes"
# The environment variable ensures that the python output is sent straight
# to the terminal without buffering it first
ENV PYTHONUNBUFFERED 1
#copy requirements file to image
COPY ./requirements.txt /requirements.txt
#let pip install required packages
RUN pip install -r requirements.txt
#directory to store app source code
RUN mkdir /app
#switch to /app directory so that everything runs from here
WORKDIR /app
#copy the app code to image working directory
COPY ./app /app
#create user to run the app (it is not recommended to use root)
#we create a user called user with -D -> meaning no need for home directory
RUN adduser -D user
#switch from root to user to run our app
USER user
Create the requirements.txt file and add your project's requirements (for example Django>=2.1.3). Then create a docker-compose.yml file for the project; note the YAML file format!!! We have one service called ddda (Django Docker Dev App), which runs the development server with sh -c "python manage.py runserver 0.0.0.0:8000"; see the sketch after these steps.
We are now ready to build our service:
$ docker-compose build
After a successful build you see:
Successfully tagged djangodockerdevapp_ddda:latest
It means that our image is called djangodockerdevapp_ddda:latest and our service is called ddda. Now note that we have not yet created our Django project! If you followed the above procedure successfully then congrats, you have set up a Docker environment for your project! Next, start your development by doing the following:
Create the Django project:
$ docker-compose run ddda sh -c "django-admin.py startproject app ."
To run your container:
$ docker-compose up
Then visit http://localhost:8000/
Make migrations:
$ docker-compose exec ddda python manage.py migrate
From now onwards you will run Django commands like the above command, adding your command after the service name ddda. That's it for now.
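For reference, a minimal docker-compose.yml consistent with the ddda service described above could look like this; the build context and the ./app volume mapping are assumptions, so adjust them to your project layout:
version: '3'
services:
  ddda:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: sh -c "python manage.py runserver 0.0.0.0:8000"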
I know it is simple and there are a lot of features we have not covered, like a database, but this is a start! The project is available on GitHub here.
this-week-in-rust · 1 year ago
Text
This Week in Rust 516
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Updates from Rust Community
Official
Announcing Rust 1.73.0
Polonius update
Project/Tooling Updates
rust-analyzer changelog #202
Announcing: pid1 Crate for Easier Rust Docker Images - FP Complete
bit_seq in Rust: A Procedural Macro for Bit Sequence Generation
tcpproxy 0.4 released
Rune 0.13
Rust on Espressif chips - September 29 2023
esp-rs quarterly planning: Q4 2023
Implementing the #[diagnostic] namespace to improve rustc error messages in complex crates
Observations/Thoughts
Safety vs Performance. A case study of C, C++ and Rust sort implementations
Raw SQL in Rust with SQLx
Thread-per-core
Edge IoT with Rust on ESP: HTTP Client
The Ultimate Data Engineering Chadstack. Running Rust inside Apache Airflow
Why Rust doesn't need a standard div_rem: An LLVM tale - CodSpeed
Making Rust supply chain attacks harder with Cackle
[video] Rust 1.73.0: Everything Revealed in 16 Minutes
Rust Walkthroughs
Let's Build A Cargo Compatible Build Tool - Part 5
How we reduced the memory usage of our Rust extension by 4x
Calling Rust from Python
Acceptance Testing embedded-hal Drivers
5 ways to instantiate Rust structs in tests
Research
Looking for Bad Apples in Rust Dependency Trees Using GraphQL and Trustfall
Miscellaneous
Rust, Open Source, Consulting - Interview with Matthias Endler
Edge IoT with Rust on ESP: Connecting WiFi
Bare-metal Rust in Android
[audio] Learn Rust in a Month of Lunches with Dave MacLeod
[video] Rust 1.73.0: Everything Revealed in 16 Minutes
[video] Rust 1.73 Release Train
[video] Why is the JavaScript ecosystem switching to Rust?
Crate of the Week
This week's crate is yarer, a library and command-line tool to evaluate mathematical expressions.
Thanks to Gianluigi Davassi for the self-suggestion!
Please submit your suggestions and votes for next week!
Call for Participation
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Ockam - Make ockam node delete (no args) interactive by asking the user to choose from a list of nodes to delete (tuify)
Ockam - Improve ockam enroll --help text by adding doc comment for identity flag (clap command)
Ockam - Enroll "email: '+' character not allowed"
If you are a Rust project owner and are looking for contributors, please submit tasks here.
Updates from the Rust Project
384 pull requests were merged in the last week
formally demote tier 2 MIPS targets to tier 3
add tvOS to target_os for register_dtor
linker: remove -Zgcc-ld option
linker: remove unstable legacy CLI linker flavors
non_lifetime_binders: fix ICE in lint opaque-hidden-inferred-bound
add async_fn_in_trait lint
add a note to duplicate diagnostics
always preserve DebugInfo in DeadStoreElimination
bring back generic parameters for indices in rustc_abi and make it compile on stable
coverage: allow each coverage statement to have multiple code regions
detect missing => after match guard during parsing
diagnostics: be more careful when suggesting struct fields
don't suggest nonsense suggestions for unconstrained type vars in note_source_of_type_mismatch_constraint
dont call mir.post_mono_checks in codegen
emit feature gate warning for auto traits pre-expansion
ensure that ~const trait bounds on associated functions are in const traits or impls
extend impl's def_span to include its where clauses
fix detecting references to packed unsized fields
fix fast-path for try_eval_scalar_int
fix to register analysis passes with -Zllvm-plugins at link-time
for a single impl candidate, try to unify it with error trait ref
generalize small dominators optimization
improve the suggestion of generic_bound_failure
make FnDef 1-ZST in LLVM debuginfo
more accurately point to where default return type should go
move subtyper below reveal_all and change reveal_all
only trigger refining_impl_trait lint on reachable traits
point to full async fn for future
print normalized ty
properly export function defined in test which uses global_asm!()
remove Key impls for types that involve an AllocId
remove is global hack
remove the TypedArena::alloc_from_iter specialization
show more information when multiple impls apply
suggest pin!() instead of Pin::new() when appropriate
make subtyping explicit in MIR
do not run optimizations on trivial MIR
in smir find_crates returns Vec<Crate> instead of Option<Crate>
add Span to various smir types
miri-script: print which sysroot target we are building
miri: auto-detect no_std where possible
miri: continuation of #3054: enable spurious reads in TB
miri: do not use host floats in simd_{ceil,floor,round,trunc}
miri: ensure RET assignments do not get propagated on unwinding
miri: implement llvm.x86.aesni.* intrinsics
miri: refactor dlsym: dispatch symbols via the normal shim mechanism
miri: support getentropy on macOS as a foreign item
miri: tree Borrows: do not create new tags as 'Active'
add missing inline attributes to Duration trait impls
stabilize Option::as_(mut_)slice
reuse existing Somes in Option::(x)or
fix generic bound of str::SplitInclusive's DoubleEndedIterator impl
cargo: refactor(toml): Make manifest file layout more consitent
cargo: add new package cache lock modes
cargo: add unsupported short suggestion for --out-dir flag
cargo: crates-io: add doc comment for NewCrate struct
cargo: feat: add Edition2024
cargo: prep for automating MSRV management
cargo: set and verify all MSRVs in CI
rustdoc-search: fix bug with multi-item impl trait
rustdoc: rename issue-\d+.rs tests to have meaningful names (part 2)
rustdoc: Show enum discrimant if it is a C-like variant
rustfmt: adjust span derivation for const generics
clippy: impl_trait_in_params now supports impls and traits
clippy: into_iter_without_iter: walk up deref impl chain to find iter methods
clippy: std_instead_of_core: avoid lint inside of proc-macro
clippy: avoid invoking ignored_unit_patterns in macro definition
clippy: fix items_after_test_module for non root modules, add applicable suggestion
clippy: fix ICE in redundant_locals
clippy: fix: avoid changing drop order
clippy: improve redundant_locals help message
rust-analyzer: add config option to use rust-analyzer specific target dir
rust-analyzer: add configuration for the default action of the status bar click action in VSCode
rust-analyzer: do flyimport completions by prefix search for short paths
rust-analyzer: add assist for applying De Morgan's law to Iterator::all and Iterator::any
rust-analyzer: add backtick to surrounding and auto-closing pairs
rust-analyzer: implement tuple return type to tuple struct assist
rust-analyzer: ensure rustfmt runs when configured with ./
rust-analyzer: fix path syntax produced by the into_to_qualified_from assist
rust-analyzer: recognize custom main function as binary entrypoint for runnables
Rust Compiler Performance Triage
A quiet week, with few regressions and improvements.
Triage done by @simulacrum. Revision range: 9998f4add..84d44dd
1 Regressions, 2 Improvements, 4 Mixed; 1 of them in rollups
68 artifact comparisons made in total
Full report here
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
RFCs
[disposition: merge] RFC: Remove implicit features in a new edition
Tracking Issues & PRs
[disposition: merge] Bump COINDUCTIVE_OVERLAP_IN_COHERENCE to deny + warn in deps
[disposition: merge] document ABI compatibility
[disposition: merge] Broaden the consequences of recursive TLS initialization
[disposition: merge] Implement BufRead for VecDeque<u8>
[disposition: merge] Tracking Issue for feature(file_set_times): FileTimes and File::set_times
[disposition: merge] impl Not, Bit{And,Or}{,Assign} for IP addresses
[disposition: close] Make RefMut Sync
[disposition: merge] Implement FusedIterator for DecodeUtf16 when the inner iterator does
[disposition: merge] Stabilize {IpAddr, Ipv6Addr}::to_canonical
[disposition: merge] rustdoc: hide #[repr(transparent)] if it isn't part of the public ABI
New and Updated RFCs
[new] Add closure-move-bindings RFC
[new] RFC: Include Future and IntoFuture in the 2024 prelude
Call for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:
No RFCs issued a call for testing this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Upcoming Events
Rusty Events between 2023-10-11 - 2023-11-08 🦀
Virtual
2023-10-11 | Virtual (Boulder, CO, US) | Boulder Elixir and Rust
Monthly Meetup
2023-10-12 - 2023-10-13 | Virtual (Brussels, BE) | EuroRust
EuroRust 2023
2023-10-12 | Virtual (Nuremberg, DE) | Rust Nuremberg
Rust NĂĽrnberg online
2023-10-18 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
Operating System Primitives (Atomics & Locks Chapter 8)
2023-10-18 | Virtual (Vancouver, BC, CA) | Vancouver Rust
Rust Study/Hack/Hang-out
2023-10-19 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2023-10-19 | Virtual (Stuttgart, DE) | Rust Community Stuttgart
Rust-Meetup
2023-10-24 | Virtual (Berlin, DE) | OpenTechSchool Berlin
Rust Hack and Learn | Mirror
2023-10-24 | Virtual (Washington, DC, US) | Rust DC
Month-end Rusting—Fun with 🍌 and 🔎!
2023-10-31 | Virtual (Dallas, TX, US) | Dallas Rust
Last Tuesday
2023-11-01 | Virtual (Indianapolis, IN, US) | Indy Rust
Indy.rs - with Social Distancing
Asia
2023-10-11 | Kuala Lumpur, MY | GoLang Malaysia
Rust Meetup Malaysia October 2023 | Event updates Telegram | Event group chat
2023-10-18 | Tokyo, JP | Tokyo Rust Meetup
Rust and the Age of High-Integrity Languages
Europe
2023-10-11 | Brussels, BE | BeCode Brussels Meetup
Rust on Web - EuroRust Conference
2023-10-12 - 2023-10-13 | Brussels, BE | EuroRust
EuroRust 2023
2023-10-12 | Brussels, BE | Rust Aarhus
Rust Aarhus - EuroRust Conference
2023-10-12 | Reading, UK | Reading Rust Workshop
Reading Rust Meetup at Browns
2023-10-17 | Helsinki, FI | Finland Rust-lang Group
Helsinki Rustaceans Meetup
2023-10-17 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig
SIMD in Rust
2023-10-19 | Amsterdam, NL | Rust Developers Amsterdam Group
Rust Amsterdam Meetup @ Terraform
2023-10-19 | Wrocław, PL | Rust Wrocław
Rust Meetup #35
2023-10-25 | Dublin, IE | Rust Dublin
Biome, web development tooling with Rust
2023-10-26 | Augsburg, DE | Rust - Modern Systems Programming in Leipzig
Augsburg Rust Meetup #3
2023-10-26 | Delft, NL | Rust Nederland
Rust at TU Delft
2023-11-07 | Brussels, BE | Rust Aarhus
Rust Aarhus - Rust and Talk beginners edition
North America
2023-10-11 | Boulder, CO, US | Boulder Rust Meetup
First Meetup - Demo Day and Office Hours
2023-10-12 | Lehi, UT, US | Utah Rust
The Actor Model: Fearless Concurrency, Made Easy w/Chris Mena
2023-10-13 | Cambridge, MA, US | Boston Rust Meetup
Kendall Rust Lunch
2023-10-17 | San Francisco, CA, US | San Francisco Rust Study Group
Rust Hacking in Person
2023-10-18 | Brookline, MA, US | Boston Rust Meetup
Boston University Rust Lunch
2023-10-19 | Mountain View, CA, US | Mountain View Rust Meetup
Rust Meetup at Hacker Dojo
2023-10-19 | Nashville, TN, US | Music City Rust Developers
Rust Goes Where It Pleases Pt2 - Rust on the front end!
2023-10-19 | Seattle, WA, US | Seattle Rust User Group
Seattle Rust User Group - October Meetup
2023-10-25 | Austin, TX, US | Rust ATX
Rust Lunch - Fareground
2023-10-25 | Chicago, IL, US | Deep Dish Rust
Rust Happy Hour
Oceania
2023-10-17 | Christchurch, NZ | Christchurch Rust Meetup Group
Christchurch Rust meetup meeting
2023-10-26 | Brisbane, QLD, AU | Rust Brisbane
October Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
The Rust mission -- let you write software that's fast and correct, productively -- has never been more alive. So next Rustconf, I plan to celebrate:
All the buffer overflows I didn't create, thanks to Rust
All the unit tests I didn't have to write, thanks to its type system
All the null checks I didn't have to write thanks to Option and Result
All the JS I didn't have to write thanks to WebAssembly
All the impossible states I didn't have to assert "This can never actually happen"
All the JSON field keys I didn't have to manually type in thanks to Serde
All the missing SQL column bugs I caught at compiletime thanks to Diesel
All the race conditions I never had to worry about thanks to the borrow checker
All the connections I can accept concurrently thanks to Tokio
All the formatting comments I didn't have to leave on PRs thanks to Rustfmt
All the performance footguns I didn't create thanks to Clippy
– Adam Chalmers in their RustConf 2023 recap
Thanks to robin for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.
Email list hosting is sponsored by The Rust Foundation
Discuss on r/rust
paradisetechsoftsolutions · 3 years ago
Text
What is Docker CE? | Learn how to install Docker
What is Docker
Docker is a program that provides a running environment for all kinds of applications, whether they come from Docker Hub or were created locally in Docker. It creates an image of your application and stores all required files in a container. Whenever we want to run a Docker application on any system, we only have to run a single artifact without providing any other requirements.
Docker is easy to use on Ubuntu. It also supports the Windows and Mac operating systems. On Windows, it runs on Windows 10 Pro/Enterprise only; to use it on Windows 7/8/8.1 or Windows 10 Home, you should use Docker Toolbox.
There are two kinds of Docker software for programmers.
Docker CE: the free Community Edition. This is open source software.
Docker EE: Docker Enterprise Edition. This is paid software designed for enterprise development and IT teams who build, ship, and run business-critical applications in production.
Requirements:
Operating system (Ubuntu)
Docker
Steps to install Docker on Ubuntu. 1. Open a terminal and follow these commands to install Docker.
Just type docker and check if docker is in your system or not. $ docker
2. Check the version of the operating system. To install Docker CE, we need the 64-bit version of one of these Ubuntu releases: Cosmic 18.10, Bionic 18.04 (LTS), or Xenial 16.04 (LTS). $ lsb_release -a
3. Update the apt package index. $ sudo apt-get update
4. If required, then install. $ sudo apt-get install
5. If docker is not in your system then install it. $ sudo apt-get install docker.io
6. Now check the status of docker. $ sudo systemctl status docker
Steps to add a user to the docker group. 1. Why sudo: we have to use the 'sudo' command to run docker commands because the Docker daemon runs as the 'root' user. Once your user joins the docker group, you can run docker commands without sudo.
2. 'USER' is your system username; the commands to add the user are listed below ($USER picks up the current system user). 1. $ sudo groupadd docker 2. $ sudo gpasswd -a $USER docker 3. $ newgrp docker
3. Second way to add a user to the docker group: 1. $ sudo groupadd docker 2. $ sudo usermod -aG docker $USER
4. After adding the 'USER' to the docker group, log out and back in (or restart) so that you can run docker commands without 'sudo'.
5. Command to uninstall docker. $ sudo apt-get remove docker docker-engine docker.io containerd runc
Docker commands 1.To check Docker version  $ docker --version
2. To check Docker and containers info $ docker info
3. Find out which users are in the docker group and who is allowed to start docker containers.  1.  $ getent group sudo
 2. $ getent group docker
4. 'pull' command fetch the 'name_of_images' image from the 'Docker registry' and saves it to our system. $ docker pull busybox (busybox is name of image)
5. You can use the 'docker images' command to see a list of all images on your system. $ docker images
6. To find the location of the images on the system, run $ docker info and look for the line "Docker Root Dir: /var/lib/docker".
Commands to check the images:-
$ cd /var/lib/docker
$ ls
pardise@pardise-MS-7817:/var/lib/docker$ cd image
bash: cd: image: Permission denied
Permission denied for all users
$ sudo su
$ root@pardise-MS-7817:/var/lib/docker# ls
Now docker info command will provide all details about images and containers
$root@pardise-MS-7817:/var/lib/docker/image/overlay2# docker info
7. Now run a Docker container based on this image. When you call run, the Docker client finds the image (busybox in this case), loads up the container and then runs a command in that container. $ docker run busybox
8. Now the Docker client ran the 'echo' command in our busybox container and then exited it. $ docker run busybox echo "hello from busybox"
9. Command to shows you all containers that are currently running. $ docker ps
10. List all containers, including those that have stopped. Notice that the STATUS column shows that these containers exited a few minutes ago. $ docker ps -a
CONTAINER ID – Unique ID given to all the containers.
IMAGE – Base image from which the container has been started.
COMMAND – Command which was used when the container was started.
CREATED – Time at which the container was created.
STATUS – The current status of the container (Up or Exited).
PORTS – Port numbers, if any, forwarded to the Docker host for communicating with the external world.
NAMES – The container name; you can specify your own name.
11. To start Container $ docker start (container id)
12. To login in Container $ docker attach (container id)
13. To stop container $ docker stop (container id)
Difference between images and containers
A Docker image is a set of files which has no state, whereas a Docker container is an instantiation of a Docker image. In other words, a Docker container is the runtime instance of an image.
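As a quick illustration of that difference (the container names c1 and c2 are arbitrary examples), a single image can back several containers:
$ docker pull busybox
$ docker run --name c1 busybox echo "first container"
$ docker run --name c2 busybox echo "second container"
$ docker ps -a      # two containers, both created from the same busybox image
$ docker images     # still only one busybox image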
Remove images and containers 1. Docker containers are not automatically removed; first stop them, then use the docker rm command with the container ID. $ docker rm 419600f601f9 (container_id)
2. Command to deletes all containers that have a status of exited. -q flag, only returns the numeric IDs and -f filters output based on conditions provided. $ docker rm $(docker ps -a -q -f status=exited)
3. Command to delete all container. $ docker container prune
4. Command to delete all images. To remove all images which are not referenced by any existing container, not just dangling ones, use the -a flag: $ docker image prune -a
A dangling image is an image that is not tagged and is not used by any container. To remove dangling images, type:
$ docker image prune
$ docker rmi image_id image_id......
5. Removing all unused objects. This will remove all stopped containers, all dangling images, and all unused networks. To also remove all images which are not referenced by any existing container, use the -a flag: $ docker system prune -a
You can follow us and our code at our GitHub repository: https://github.com/amit-kumar001/
roomfox981 · 3 years ago
Text
Start Docker In Ubuntu
A Linux dev environment on Windows with WSL 2 and Docker Desktop: see the Docker docs on the Docker Desktop WSL 2 backend. The note below is valid only for WSL 1: it seems that Docker cannot run inside WSL 1 itself, and what is proposed instead is to connect WSL to your Docker Desktop running in Windows (see Setting Up Docker for Windows and WSL). By removing /etc/docker you will lose all images and data. You can check logs with journalctl -u docker.service, and restart the daemon with systemctl daemon-reload && systemctl enable docker && systemctl start docker. This worked for me.
$ docker images
REPOSITORY TAG ID
ubuntu 12.10 b750fe78269d
me/myapp latest 7b2431a8d968
docker-compose start / docker-compose stop
After installing the Nvidia Container Toolkit, you'll need to restart the Docker daemon in order to let Docker use your Nvidia GPU (sudo systemctl restart docker); then change the docker-compose.yml to let the Jellyfin container make use of the Nvidia GPU, as covered in the Nvidia section below.
Complete Docker CLI
Container Management CLIs
Inspecting The Container
Interacting with Container
Image Management Commands
Image Transfer Commands
Builder Main Commands
The Docker CLI
Manage images
docker build
Create an image from a Dockerfile.
docker run
Run a command in an image.
Manage containers
docker create
Example
Create a container from an image.
docker exec
Example
Run commands in a container.
docker start
Start/stop a container.
docker ps
Manage containers using ps/kill.
Images
docker images
Manages images.
docker rmi
Deletes images.
Also see
Getting Started(docker.io)
Inheritance
Variables
Initialization
Onbuild
Commands
Entrypoint
Configures a container that will run as an executable.
This will use shell processing to substitute shell variables, and will ignore any CMD or docker run command line arguments.
Metadata
See also
Basic example
Commands
Reference
Building
Ports
Commands
Environment variables
Dependencies
Other options
Advanced features
Labels
DNS servers
Devices
External links
Hosts
services
To view the list of all the services running in the swarm
To see all running services
To see all services' logs
To scale services quickly across qualified nodes
clean up
To clean or prune unused (dangling) images
To remove all images which are not used by containers, add -a
To prune your entire system
To leave the swarm
To remove the swarm (deletes all volume data and database info)
To kill all running containers
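Representative commands for the operations listed above; these are standard Docker commands rather than the exact listing from the original cheat sheet, and service names are placeholders:
docker service ls                   # list services running in the swarm
docker service logs <service>       # see a service's logs
docker service scale <service>=3    # scale a service across qualified nodes
docker image prune                  # remove unused (dangling) images
docker image prune -a               # also remove images not used by any container
docker system prune                 # prune stopped containers, dangling images and unused networks
docker swarm leave                  # leave the swarm
docker swarm leave --force          # force-leave on the last manager, removing the swarm
docker kill $(docker ps -q)         # kill all running containers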
Contributor -
Sangam biradar - Docker Community Leader
The Jellyfin project and its contributors offer a number of pre-built binary packages to assist in getting Jellyfin up and running quickly on multiple systems.
Container images
Docker
Windows (x86/x64)
Linux
Linux (generic amd64)
Debian
Ubuntu
Container images
Official container image: jellyfin/jellyfin.
LinuxServer.io image: linuxserver/jellyfin.
hotio image: hotio/jellyfin.
Jellyfin distributes official container images on Docker Hub for multiple architectures. These images are based on Debian and built directly from the Jellyfin source code.
Additionally the LinuxServer.io project and hotio distribute images based on Ubuntu and the official Jellyfin Ubuntu binary packages, see here and here to see their Dockerfile.
Note
For ARM hardware and RPi, it is recommended to use the LinuxServer.io or hotio image since hardware acceleration support is not yet available on the native image.
Docker
Docker allows you to run containers on Linux, Windows and MacOS.
The basic steps to create and run a Jellyfin container using Docker are as follows.
Follow the official installation guide to install Docker.
Download the latest container image.
Create persistent storage for configuration and cache data.
Either create two persistent volumes:
Or create two directories on the host and use bind mounts:
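A sketch of both options, using the volume names and placeholder paths from the surrounding text:
docker volume create jellyfin-config
docker volume create jellyfin-cache
# or create host directories for bind mounts:
mkdir /path/to/config
mkdir /path/to/cache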
Create and run a container in one of the following ways.
Note
The default network mode for Docker is bridge mode. Bridge mode will be used if host mode is omitted. Use host mode for networking in order to use DLNA or an HDHomeRun.
Using Docker command line interface:
Using host networking (--net=host) is optional but required in order to use DLNA or HDHomeRun.
Bind Mounts are needed to pass folders from the host OS to the container OS whereas volumes are maintained by Docker and can be considered easier to backup and control by external programs. For a simple setup, it's considered easier to use Bind Mounts instead of volumes. Replace jellyfin-config and jellyfin-cache with /path/to/config and /path/to/cache respectively if using bind mounts. Multiple media libraries can be bind mounted if needed:
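A sketch of the docker run invocation being described; the host networking flag and the media path are assumptions to adapt to your setup:
docker run -d \
 --name jellyfin \
 --net=host \
 --volume jellyfin-config:/config \
 --volume jellyfin-cache:/cache \
 --volume /path/to/media:/media \
 jellyfin/jellyfin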
Note
There is currently an issue with read-only mounts in Docker. If there are submounts within the main mount, the submounts are read-write capable.
Using Docker Compose:
Create a docker-compose.yml file with the following contents:
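A minimal sketch along those lines; the host paths are placeholders:
version: "3"
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    network_mode: "host"
    volumes:
      - /path/to/config:/config
      - /path/to/cache:/cache
      - /path/to/media:/media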
Then, while in the same folder as the docker-compose.yml, run docker-compose up.
To run the container in background add -d to the above command.
You can learn more about using Docker by reading the official Docker documentation.
Hardware Transcoding with Nvidia (Ubuntu)
You are able to use hardware encoding with Nvidia, but it requires some additional configuration. These steps require basic knowledge of Ubuntu but nothing too special.
Adding Package Repositories. First off, you'll need to add the Nvidia package repositories to your Ubuntu installation. This can be done by running the following commands:
Installing the Nvidia Container Toolkit. Next, we'll need to install the Nvidia container toolkit. This can be done by running the following commands:
After installing the Nvidia Container Toolkit, you'll need to restart the Docker Daemon in order to let Docker use your Nvidia GPU:
Changing the docker-compose.yml. Now that all the packages are in order, let's change the docker-compose.yml to let the Jellyfin container make use of the Nvidia GPU. The following lines need to be added to the file:
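One common form of those additions for a version 2.3 compose file (an assumption based on the legacy nvidia runtime, not the original snippet) is to give the jellyfin service the nvidia runtime and the NVIDIA environment variables:
runtime: nvidia
environment:
  - NVIDIA_VISIBLE_DEVICES=all
  - NVIDIA_DRIVER_CAPABILITIES=all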
Your completed docker-compose.yml file should look something like this:
Note
For Nvidia Hardware encoding the minimum version of docker-compose needs to be 2. However we recommend sticking with version 2.3 as it has proven to work with nvenc encoding.
Unraid Docker
An Unraid Docker template is available in the repository.
Open the unRaid GUI (at least unRaid 6.5) and click on the 'Docker' tab.
Add the following line under 'Template Repositories' and save the options.
Click 'Add Container' and select 'jellyfin'.
Adjust any required paths and save your changes.
Kubernetes
A community project to deploy Jellyfin on Kubernetes-based platforms exists at their repository. Any issues or feature requests related to deployment on Kubernetes-based platforms should be filed there.
Podman
Podman allows you to run containers as non-root. It's also the officially supported container solution on RHEL and CentOS.
Steps to run Jellyfin using Podman are almost identical to Docker steps:
Install Podman:
Download the latest container image:
Create persistent storage for configuration and cache data:
Either create two persistent volumes:
Or create two directories on the host and use bind mounts:
Create and run a Jellyfin container:
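A sketch of the podman run command being described; volume names and the media path follow the placeholders used above:
podman run -d \
 --name jellyfin \
 -p 8096:8096 \
 --volume jellyfin-config:/config \
 --volume jellyfin-cache:/cache \
 --volume /path/to/media:/media \
 jellyfin/jellyfin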
Note that Podman doesn't require root access and it's recommended to run the Jellyfin container as a separate non-root user for security.
If SELinux is enabled you need to use either --privileged or supply z volume option to allow Jellyfin to access the volumes.
Replace jellyfin-config and jellyfin-cache with /path/to/config and /path/to/cache respectively if using bind mounts.
To mount your media library read-only append ':ro' to the media volume:
To run as a systemd service see Running containers with Podman and shareable systemd services.
Cloudron
Cloudron is a complete solution for running apps on your server and keeping them up-to-date and secure. On your Cloudron you can install Jellyfin with a few clicks via the app library and updates are delivered automatically.
The source code for the package can be found here. Any issues or feature requests related to deployment on Cloudron should be filed there.
Windows (x86/x64)
Windows installers and builds in ZIP archive format are available here.
Warning
If you installed a version prior to 10.4.0 using a PowerShell script, you will need to manually remove the service using the command nssm remove Jellyfin and uninstall the server by remove all the files manually. Also one might need to move the data files to the correct location, or point the installer at the old location.
Warning
The 32-bit or x86 version is not recommended. ffmpeg and its video encoders generally perform better as a 64-bit executable due to the extra registers provided. This means that the 32-bit version of Jellyfin is deprecated.
Install using Installer (x64)
Install
Download the latest version.
Run the installer.
(Optional) When installing as a service, pick the service account type.
If everything was completed successfully, the Jellyfin service is now running.
Open your browser at http://localhost:8096 to finish setting up Jellyfin.
Update
Download the latest version.
Run the installer.
If everything was completed successfully, the Jellyfin service is now running as the new version.
Uninstall
Go to Add or remove programs in Windows.
Search for Jellyfin.
Click Uninstall.
Manual Installation (x86/x64)
Install
Download and extract the latest version.
Create a folder jellyfin at your preferred install location.
Copy the extracted folder into the jellyfin folder and rename it to system.
Create jellyfin.bat within your jellyfin folder containing:
To use the default library/data location at %localappdata%:
To use a custom library/data location (Path after the -d parameter):
To use a custom library/data location (Path after the -d parameter) and disable the auto-start of the webapp:
Run
Open your browser at http://<--Server-IP-->:8096 (if auto-start of webapp is disabled)
Update
Stop Jellyfin
Rename the Jellyfin system folder to system-bak
Download and extract the latest Jellyfin version
Copy the extracted folder into the jellyfin folder and rename it to system
Run jellyfin.bat to start the server again
Rollback
Stop Jellyfin.
Delete the system folder.
Rename system-bak to system.
Run jellyfin.bat to start the server again.
MacOS
MacOS Application packages and builds in TAR archive format are available here.
Install
Download the latest version.
Drag the .app package into the Applications folder.
Start the application.
Open your browser at http://127.0.0.1:8096.
Upgrade
Download the latest version.
Stop the currently running server either via the dashboard or using the application icon.
Drag the new .app package into the Applications folder and click yes to replace the files.
Start the application.
Open your browser at http://127.0.0.1:8096.
Uninstall
Stop the currently running server either via the dashboard or using the application icon.
Move the .app package to the trash.
Deleting Configuration
This will delete all settings and user information. This applies for the .app package and the portable version.
Delete the folder ~/.config/jellyfin/
Delete the folder ~/.local/share/jellyfin/
Portable Version
Download the latest version
Extract it into the Applications folder
Open Terminal and type cd followed with a space then drag the jellyfin folder into the terminal.
Type ./jellyfin to run jellyfin.
Open your browser at http://localhost:8096
Closing the terminal window will end Jellyfin. Running Jellyfin in screen or tmux can prevent this from happening.
Upgrading the Portable Version
Download the latest version.
Stop the currently running server either via the dashboard or using CTRL+C in the terminal window.
Extract the latest version into Applications
Open Terminal and type cd followed with a space then drag the jellyfin folder into the terminal.
Type ./jellyfin to run jellyfin.
Open your browser at http://localhost:8096
Uninstalling the Portable Version
Stop the currently running server either via the dashboard or using CTRL+C in the terminal window.
Move /Application/jellyfin-version folder to the Trash. Replace version with the actual version number you are trying to delete.
Using FFmpeg with the Portable Version
The portable version doesn't come with FFmpeg by default, so to install FFmpeg you have three options.
use the package manager homebrew by typing brew install ffmpeg into your Terminal (here's how to install homebrew if you don't have it already),
download the most recent static build from this link (compiled by a third party; see this page for options and information), or
compile from source available from the official website
More detailed download options, documentation, and signatures can be found on the official website.
If using static build, extract it to the /Applications/ folder.
Navigate to the Playback tab in the Dashboard and set the path to FFmpeg under FFmpeg Path.
Linux
Linux (generic amd64)
Generic amd64 Linux builds in TAR archive format are available here.
Installation Process
Create a directory in /opt for jellyfin and its files, and enter that directory.
Download the latest generic Linux build from the release page. The generic Linux build ends with 'linux-amd64.tar.gz'. The rest of these instructions assume version 10.4.3 is being installed (i.e. jellyfin_10.4.3_linux-amd64.tar.gz). Download the generic build, then extract the archive:
Create a symbolic link to the Jellyfin 10.4.3 directory. This allows an upgrade by repeating the above steps and enabling it by simply re-creating the symbolic link to the new version.
Create four sub-directories for Jellyfin data.
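A consolidated sketch of these steps, using the 10.4.3 archive name from the example above (download the archive from the release page first):
sudo mkdir /opt/jellyfin
cd /opt/jellyfin
# extract the downloaded generic build
sudo tar xvzf jellyfin_10.4.3_linux-amd64.tar.gz
sudo ln -s jellyfin_10.4.3 jellyfin
sudo mkdir data cache media log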
If you are running Debian or a derivative, you can also download and install an ffmpeg release built specifically for Jellyfin. Be sure to download the latest release that matches your OS (4.2.1-5 for Debian Stretch assumed below).
If you run into any dependency errors, run this and it will install them and jellyfin-ffmpeg.
Due to the number of command line options that must be passed, it is easiest to create a small script to run Jellyfin.
Then paste the following commands and modify as needed.
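A sketch of such a startup script; the directory layout follows the /opt/jellyfin example above and the jellyfin-ffmpeg path is an assumption:
#!/bin/bash
JELLYFINDIR="/opt/jellyfin"
FFMPEGDIR="/usr/share/jellyfin-ffmpeg"

$JELLYFINDIR/jellyfin/jellyfin \
 -d $JELLYFINDIR/data \
 -C $JELLYFINDIR/cache \
 -l $JELLYFINDIR/log \
 --ffmpeg $FFMPEGDIR/ffmpeg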
Assuming you desire Jellyfin to run as a non-root user, chown all files and directories to your normal login user and group. Also make the startup script above executable.
Finally you can run it. You will see lots of log information when run, this is normal. Setup is as usual in the web browser.
Portable DLL
Platform-agnostic .NET Core DLL builds in TAR archive format are available here. These builds use the binary jellyfin.dll and must be loaded with dotnet.
Arch Linux
Jellyfin can be found in the AUR as jellyfin, jellyfin-bin and jellyfin-git.
Fedora
Fedora builds in RPM package format are available here for now but an official Fedora repository is coming soon.
You will need to enable rpmfusion as ffmpeg is a dependency of the jellyfin server package
Note
You do not need to manually install ffmpeg, it will be installed by the jellyfin server package as a dependency
Install the jellyfin server
Install the jellyfin web interface
Enable jellyfin service with systemd
Open jellyfin service with firewalld
Note
This will open the following ports:
8096 TCP - used by default for HTTP traffic; you can change this in the dashboard
8920 TCP - used by default for HTTPS traffic; you can change this in the dashboard
1900 UDP - used for service auto-discovery; this is not configurable
7359 UDP - used for auto-discovery; this is not configurable
Reboot your box
Go to localhost:8096 or ip-address-of-jellyfin-server:8096 to finish setup in the web UI
CentOS
CentOS/RHEL 7 builds in RPM package format are available here and an official CentOS/RHEL repository is planned for the future.
The default CentOS/RHEL repositories don't carry FFmpeg, which the RPM requires. You will need to add a third-party repository which carries FFmpeg, such as RPM Fusion's Free repository.
You can also build Jellyfin's version on your own. This includes gathering the dependencies and compiling and installing them. Instructions can be found at the FFmpeg wiki.
Debian
Repository
The Jellyfin team provides a Debian repository for installation on Debian Stretch/Buster. Supported architectures are amd64, arm64, and armhf.
Note
Microsoft does not provide a .NET for 32-bit x86 Linux systems, and hence Jellyfin is not supported on the i386 architecture.
Install HTTPS transport for APT as well as gnupg and lsb-release if you haven't already.
Import the GPG signing key (signed by the Jellyfin Team):
Add a repository configuration at /etc/apt/sources.list.d/jellyfin.list:
Note
Supported releases are stretch, buster, and bullseye.
Update APT repositories:
Install Jellyfin:
Manage the Jellyfin system service with your tool of choice:
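For example, with systemd (one of several service managers you might be using):
sudo systemctl start jellyfin
sudo systemctl enable jellyfin
sudo systemctl status jellyfin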
Packages
Raw Debian packages, including old versions, are available here.
Note
The repository is the preferred way to obtain Jellyfin on Debian, as it contains several dependencies as well.
Download the desired jellyfin and jellyfin-ffmpeg.deb packages from the repository.
Install the downloaded .deb packages:
Use apt to install any missing dependencies:
Manage the Jellyfin system service with your tool of choice:
Ubuntu
Migrating to the new repository
Previous versions of Jellyfin included Ubuntu under the Debian repository. This has now been split out into its own repository to better handle the separate binary packages. If you encounter errors about the ubuntu release not being found and you previously configured an ubuntujellyfin.list file, please follow these steps.
Remove the old /etc/apt/sources.list.d/jellyfin.list file:
Proceed with the following section as written.
Ubuntu Repository
The Jellyfin team provides an Ubuntu repository for installation on Ubuntu Xenial, Bionic, Cosmic, Disco, Eoan, and Focal. Supported architectures are amd64, arm64, and armhf. Only amd64 is supported on Ubuntu Xenial.
Note
Microsoft does not provide a .NET for 32-bit x86 Linux systems, and hence Jellyfin is not supported on the i386 architecture.
Install HTTPS transport for APT if you haven't already:
Enable the Universe repository to obtain all the FFMpeg dependencies:
Note
If the above command fails, you will need to install the package software-properties-common. This can be achieved with the following command: sudo apt-get install software-properties-common
Import the GPG signing key (signed by the Jellyfin Team):
Add a repository configuration at /etc/apt/sources.list.d/jellyfin.list:
Note
Supported releases are xenial, bionic, cosmic, disco, eoan, and focal.
Update APT repositories:
Install Jellyfin:
Manage the Jellyfin system service with your tool of choice:
Ubuntu Packages
Raw Ubuntu packages, including old versions, are available here.
Note
The repository is the preferred way to install Jellyfin on Ubuntu, as it contains several dependencies as well.
Enable the Universe repository to obtain all the FFMpeg dependencies, and update repositories:
Download the desired jellyfin and jellyfin-ffmpeg.deb packages from the repository.
Install the required dependencies:
Install the downloaded .deb packages:
Use apt to install any missing dependencies:
Manage the Jellyfin system service with your tool of choice:
Migrating native Debuntu install to docker
It's possible to map your local installation's files to the official docker image.
Note
You need to have exactly matching paths for your files inside the docker container! This means that if your media is stored at /media/raid/ this path needs to be accessible at /media/raid/ inside the docker container too - the configurations below do include examples.
To guarantee proper permissions, get the uid and gid of your local jellyfin user and jellyfin group by running the following command:
You need to replace the <uid>:<gid> placeholder below with the correct values.
Using docker
Using docker-compose
loadingtax915 · 4 years ago
Text
Python Docx
When you ask someone to send you a contract or a report, there is a high probability that you'll get a DOCX file. Whether you like it or not, it makes sense considering that 1.2 billion people use Microsoft Office, although a definition of "use" is quite vague in this case. DOCX is a binary file which is, unlike XLSX, not famous for being easy to integrate into your application. PDF is much easier when you care more about how a document is displayed than about its abilities for further modifications. Let's focus on that.
Python-docx versions 0.3.0 and later are not API-compatible with prior versions. Python-docx is hosted on PyPI, so installation is relatively simple, and just depends on what installation utilities you have installed. Python-docx may be installed with pip if you have it available.
Installing the Python-Docx Library. Several libraries exist that can be used to read and write MS Word files in Python. However, we will be using the python-docx module owing to its ease of use. Execute the following pip command in your terminal to download the python-docx module: pip install python-docx
Python has a few great libraries to work with DOCX (python-docx) and PDF files (PyPDF2, pdfrw). Those are good choices and a lot of fun to read or write files. That said, I know I'd fail miserably trying to achieve 1:1 conversion.
Release v0.8.10 (Installation). python-docx is a Python library for creating and updating Microsoft Word (.docx) files.
Looking further, I came across unoconv. Universal Office Converter is a library that converts any document format supported by LibreOffice/OpenOffice. That sounds like a solid solution for my use case, where I care more about quality than anything else. As execution time isn't my problem, I was only concerned with whether it's possible to run LibreOffice without an X display. Apparently, LibreOffice can be run in headless mode and supports conversion between various formats, sweet!
I'm grateful to unoconv for the idea and a great README explaining multiple problems I could come across. At the same time, I'm put off by the number of open issues and abandoned pull requests. If I get versions right, how hard can it be? Not hard at all, with a few caveats though.
Testing converter
LibreOffice is available on all major platforms and has an active community. It's not as active as new-hot-js-framework-active, but still with plenty of good reads and support. You can get your copy from the download page. Be a good user and go with an up-to-date version. You can always downgrade in case of any problems, and feedback on the latest release is always appreciated.
On macOS and Windows the executable is called soffice, and libreoffice on Linux. I'm on macOS; the soffice executable isn't available in my PATH after the installation, but you can find it inside LibreOffice.app. To test how LibreOffice deals with your files you can run:
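For example (the output directory and file name are placeholders):
/Applications/LibreOffice.app/Contents/MacOS/soffice --headless --convert-to pdf --outdir /tmp/out document.docx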
In my case results were more than satisfying. The only problem I saw was a misalignment in a file when the alignment was done with spaces, sad but true. This problem was caused by missing fonts and different width of 'replacements' fonts. No worries, we'll address this problem later.
Setup I
While reading unoconv issues I've noticed that many problems arise from version mismatches. I'm going with Docker so I can have a pretty stable setup and so I can be sure that everything works.
Let's start by defining a simple Dockerfile, just with the dependencies, and ADD one DOCX file just for testing:
Let's build an image:
After the image is created, we can run the container and convert the file inside it:
Running LibreOffice as a subprocess
We want to run LibreOffice converter as a subprocess and provide the same API for all platforms. Let's define a module which can be run as a standalone script or which we can later import on our server.
The required arguments which convert_to accepts are the folder to which we save the PDF and a path to the source file. Optionally, we specify a timeout in seconds. I'm saying optional, but consider it mandatory: we don't want a process to hang too long in case of any problems, and we may want to limit the computation time we are able to give away to each conversion. The LibreOffice executable location and name depend on the platform, so edit libreoffice_exec to support the platform you're using.
subprocess.run doesn’t capture stdout and stderr by default. We can easily change the default behavior by passing subprocess.PIPE. Unfortunately, in the case of the failure, LibreOffice will fail with return code 0 and nothing will be written to stderr. I decided to look for the success message assuming that it won’t be there in case of an error and raise LibreOfficeError otherwise. This approach hasn’t failed me so far.
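A sketch of such a module, following the description above; the success-message pattern and the macOS executable path are assumptions, so adjust them to your installation:
import re
import sys
import subprocess


class LibreOfficeError(Exception):
    def __init__(self, output):
        self.output = output


def libreoffice_exec():
    # executable name and location depend on the platform
    if sys.platform == 'darwin':
        return '/Applications/LibreOffice.app/Contents/MacOS/soffice'
    return 'libreoffice'


def convert_to(folder, source, timeout=None):
    args = [libreoffice_exec(), '--headless', '--convert-to', 'pdf', '--outdir', folder, source]
    # capture stdout/stderr; LibreOffice may exit with code 0 even on failure
    process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=timeout)
    # look for the success message instead of trusting the return code
    match = re.search(r'-> (.*?) using filter', process.stdout.decode())
    if match is None:
        raise LibreOfficeError(process.stdout.decode())
    return match.group(1)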
Uploading files with Flask
Converting using the command line is ok for testing and development but won't take us far. Let's build a simple server in Flask.
We'll need a few helper functions to work with files and a few custom errors for handling error messages. The upload directory path is defined in config.py. You can also consider using flask-restplus or flask-restful, which makes handling errors a little easier.
The server is pretty straightforward. In production, you would probably want to use some kind of authentication to limit access to uploads directory. If not, give up on serving static files with Flask and go for Nginx.
An important takeaway from this example is that you want to tell your app to be threaded so one request won't prevent other routes from being served. However, the WSGI server included with Flask is not production ready and focuses on development. In production, you want to use a proper server with automatic worker process management, like gunicorn. Check the docs for an example of how to integrate gunicorn into your app. We are going to run the application inside a container, so the host has to be set to the publicly visible 0.0.0.0.
Setup II
Now that we have a server, we can update the Dockerfile. We need to copy our application source code to the image filesystem and install the required dependencies.
In docker-compose.yml we want to specify the port mapping and mount a volume. If you followed the code and tried running the examples, you have probably noticed that we were missing a way to tell Flask to run in debugging mode. Defining an environment variable without a value causes that variable to be passed to the container from the host system. Alternatively, you can provide different config files for different environments.
Supporting custom fonts
I've mentioned a problem with missing fonts earlier. LibreOffice can, of course, make use of custom fonts. If you can predict which fonts your users might be using, there's a simple remedy. Add the following line to your Dockerfile.
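One way to do it (the fonts directory name is an assumption; any location that fontconfig scans, such as a subdirectory of /usr/share/fonts, will work):
COPY fonts /usr/share/fonts/truetype/custom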
Now, when you put custom font files in the fonts directory in your project, rebuild the image. From now on you support custom fonts!
Summary
This should give you the idea how you can provide quality conversion of different documents to PDF. Although the main goal was to convert a DOCX file you should be fine with presentations, spreadsheets or images.
Further improvements could be providing support for multiple files, the converter can be configured to accept more than one file as well.
Photo by Samuel Zeller on Unsplash.
Did you enjoy it? Follow me@MichalZalecki on Twitter, where I share some interesting, bite-size content.
Extract content from docx files
Project description
Extract docx headers, footers, text, footnotes, endnotes, properties, and images to a Python object.
The code is an expansion/contraction of python-docx2txt (Copyright (c) 2015 Ankush Shah). The original code is mostly gone, but some of the bones may still be here.
shared features:
extracts text from docx files
extracts images from docx files
no dependencies (docx2python requires pytest to test)
additions:
extracts footnotes and endnotes
converts bullets and numbered lists to ascii with indentation
converts hyperlinks to <a href='http:/...'>link text</a>
retains some structure of the original file (more below)
extracts document properties (creator, lastModifiedBy, etc.)
inserts image placeholders in text ('----image1.jpg----')
inserts plain text footnote and endnote references in text ('----footnote1----')
(optionally) retains font size, font color, bold, italics, and underscore as html
extract user selections from checkboxes and dropdown menus
full test coverage and documentation for developers
subtractions:
no command-line interface
will only work with Python 3.4+
Installation
Use
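A minimal usage sketch (the file name is a placeholder):
from docx2python import docx2python

# extract docx content
result = docx2python('path/to/file.docx')
print(result.text)   # all text as one string
print(result.body)   # nested list: [table][row][cell][paragraph]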
Note on html feature:
font size, font color, bold, italics, and underline supported
hyperlinks will always be exported as html (<a href='http:/...'>link text</a>), even if export_font_style=False, because I couldn't think of a more cononical representation.
every tag open in a paragraph will be closed in that paragraph (and, where appropriate, reopened in the next paragraph). If two subsequent paragraphs are bold, they will be returned as <b>paragraph 1</b>, <b>paragraph 2</b>. This is intentional, to make each paragraph its own entity.
if you specify export_font_style=True, > and < in your docx text will be encoded as > and <
Return Value
Function docx2python returns an object with several attributes.
header - contents of the docx headers in the return format described herein
footer - contents of the docx footers in the return format described herein
body - contents of the docx in the return format described herein
footnotes - contents of the docx in the return format described herein
endnotes - contents of the docx in the return format described herein
document - header + body + footer (read only)
text - all docx text as one string, similar to what you'd get from python-docx2txt
properties - docx property names mapped to values (e.g., {'lastModifiedBy': 'Shay Hill'})
images - image names mapped to images in binary format. Write them to the filesystem as sketched below.
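A small sketch, where result is the object returned by docx2python:
for name, image in result.images.items():
    with open(name, 'wb') as f:
        f.write(image)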
Return Format
Some structure will be maintained. Text will be returned in a nested list, with paragraphs always at depth 4 (i.e., output.body[i][j][k][l] will be a paragraph).
If your docx has no tables, output.body will appear as one table with all contents in one cell:
Table cells will appear as table cells. Text outside tables will appear as table cells.
To preserve the even depth (text always at depth 4), nested tables will appear as new, top-level tables. This is clearer with an example:
becomes ...
This ensures text appears
only once
in the order it appears in the docx
always at depth four (i.e., result.body[i][j][k][l] will be a string).
Working with output
This package provides several documented helper functions in the docx2python.iterators module. Here are a few recipes possible with these functions:
Some fine print about checkboxes:
MS Word has checkboxes that can be checked at any time, and others that can only be checked when the form is locked. The former print as u2610 (open checkbox) or u2612 (crossed checkbox); with this module, the latter will too. I gave checkboxes a bailout value of ----checkbox failed---- if the xml doesn't look like I expect it to, because I don't have several-thousand test files with checkboxes (as I did with most of the other form elements). Checkboxes should work, but please let me know if you encounter any that do not.
Release history
1.27.1
1.27
1.26
1.25
1.24
1.23
1.22
1.21
1.19
1.18
1.17
1.16
1.15
1.14
1.13
1.12
1.11
1.2
1.1
1.0
0.1
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Files for docx2python, version 1.27.1:
docx2python-1.27.1-py3-none-any.whl (22.9 kB) - Wheel, Python py3
docx2python-1.27.1.tar.gz (33.3 kB) - Source
Hashes for docx2python-1.27.1-py3-none-any.whl
SHA256: 51f6f03149efff07372ea023824d4fd863cb70b531aa558513070fe60f1c420a
MD5: 4b0ee20fed4a8cb0eaba8580c33f946b
BLAKE2-256: e7d5ff32d733592b17310193280786c1cab22ca4738daa97e1825d650f55157c
Hashes for docx2python-1.27.1.tar.gz
SHA256: 6ca0a92ee9220708060ece485cede894408588353dc458ee5ec17959488fa668
MD5: 759e1630c6990533414192eb57333c72
BLAKE2-256: 84783b70aec51652a4ec4f42aa419a8af18d967b06390764527c81f183d1c02a
globalmediacampaign · 4 years ago
Text
Supercharge your knowledge graph using Amazon Neptune, Amazon Comprehend, and Amazon Lex
Knowledge graph applications are one of the most popular graph use cases being built on Amazon Neptune today. Knowledge graphs consolidate and integrate an organization’s information into a single location by relating data from structured systems (e.g., e-commerce, sales records, CRM systems) and unstructured systems (e.g., text documents, email, news articles) in a way that makes it readily available to users and applications. In reality, data rarely exists in a format that allows us to easily extract and connect relevant elements. In this post we’ll build a full-stack knowledge graph application that demonstrates how to provide structure to unstructured and semi-structured data, and how to expose this structure in a way that’s easy for users to consume. We’ll use Amazon Neptune to store our knowledge graph, Amazon Comprehend to provide structure to semi-structured data from the AWS Database Blog, and Amazon Lex to provide a chatbot interface that answers natural language questions, as illustrated below.

Deploy the application

Let’s discuss the overall architecture and implementation steps used to build this application. If you want to experiment, all the code is available on GitHub. We begin by deploying our application and taking a look at how it works. Our sample solution includes the following:

A Neptune cluster
Multiple AWS Lambda functions and layers that handle the reading and writing of data to and from our knowledge graph
An Amazon API Gateway that our web application uses to fetch data via REST
An Amazon Lex chatbot, configured with the appropriate intents, which our web application interacts with
An Amazon Cognito identity pool required for our web application to connect to the chatbot
Code that scrapes posts from the AWS Database blog for Neptune, enhances the data, and loads it into our knowledge graph
A React-based web application with an AWS Amplify chatbot component

Before you deploy our application, make sure you have the following:

A machine running Docker, either a laptop or a server, for running our web interface
An AWS account with the ability to create resources

With these prerequisites satisfied, let’s deploy our application:

Launch our solution using the provided AWS CloudFormation template in your desired Region (us-east-1 or us-west-2). Costs to run this solution depend on the Neptune instance size chosen, with a minimal cost for the other services used.
Provide the desired stack name and instance size.
Acknowledge the capabilities and choose Create stack. This process may take 10–15 minutes.
When it’s complete, the CloudFormation stack’s Outputs tab lists the following values, which you need to run the web front end: ApiGatewayInvokeURL and IdentityPoolId.
Run the following command to create the web interface, providing the appropriate parameters:

docker run -td -p 3000:3000 -e IDENTITY_POOL_ID= -e API_GATEWAY_INVOKE_URL= -e REGION= public.ecr.aws/a8u6m715/neptune-chatbot:latest

After this container has started, you can access the web application on port 3000 of your Docker server (http://localhost:3000/). If port 3000 is in use on your current server, you can alter the port by changing -p :3000.

Use the application

With the application started, let’s try out the chatbot integration using the following phrases:

Show me all posts by Ian Robinson
What has Dave Bechberger written on Amazon Neptune?
Have Ian Robinson and Kelvin Lawrence worked together?
Show me posts by Taylor Rigg (This should prompt for Taylor Riggan; answer “Yes”.)
Show me all posts on Amazon Neptune

Refreshing the browser clears the canvas and chatbox. Each of these phrases provides a visual representation of the contextually relevant connections within our knowledge graph.

Build the application

Now that we know what our application is capable of doing, let’s look at how the AWS services are integrated to build a full-stack application. We built this knowledge graph application using a common paradigm for developing these types of applications known as ingest, curate, and discover. This paradigm begins by first ingesting data from one or more sources and creating semi-structured entities from it. In our application, we use Python and Beautiful Soup to scrape the AWS Database blog website and generate semi-structured data, which is stored in our Neptune-powered knowledge graph. After we extract these semi-structured entities, we curate and enhance them with additional meaning and connections. We do so using Amazon Comprehend to extract named entities from the blog post text. We connect these extracted entities within our knowledge graph to provide more contextually relevant connections within our blog data. Finally, we create an interface to allow easy discovery of our newly connected information. For our application, we use a React application, powered by Amazon Lex and Amplify, to provide a web-based chatbot interface that returns contextually relevant answers to the questions asked. Putting these aspects together gives the following application architecture.

Ingest the AWS Database blog

The ingest portion of our application uses Beautiful Soup to scrape the AWS Database blog. We don’t examine it in detail, but knowing the structure of the webpage allows us to create semi-structured data from the unstructured text. We use Beautiful Soup to extract several key pieces of information from each post, such as author, title, tags, and images:

{
  "url": "https://aws.amazon.com/blogs/database/configure-amazon-vpc-for-sparql-1-1-federated-query-with-amazon-neptune/",
  "img_src": "https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2020/10/02/VPCforSPARQL1.1.FederatedQueryNeptune2.png",
  "img_height": "395",
  "img_width": "800",
  "title": "Configure Amazon VPC for SPARQL 1.1 Federated Query with Amazon Neptune",
  "authors": [
    {
      "name": "Charles Ivie",
      "img_src": "https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2020/08/06/charles-ivie-100.jpg",
      "img_height": "132",
      "img_width": "100"
    }
  ],
  "date": "12 OCT 2020",
  "tags": [
    "Amazon Neptune",
    "Database"
  ],
  "post": ""
}

After we extract this information for all the posts in the blog, we store it in our knowledge graph. The following figure shows what this looks like for the first five posts. Although this begins to show connections between the entities in our graph, we can extract more context by examining the data stored within each post. To increase the connectedness of the data in our knowledge graph, let’s look at additional methods to curate this semi-structured data.

Curate our semi-structured data

To extract additional information from our semi-structured data, we use the DetectEntities functionality in Amazon Comprehend. This feature takes a text input and looks for unique names of real-world items such as people, places, items, or references to measures such as quantities or dates. By default, the types of entities returned are labeled COMMERCIAL_ITEM, DATE, EVENT, LOCATION, ORGANIZATION, OTHER, PERSON, QUANTITY, or TITLE.
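As a rough illustration of that call through boto3 (the sample text and Region are made up, and error handling is omitted):

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')  # Region is an assumption

response = comprehend.detect_entities(
    Text='Charles Ivie shows how to configure Amazon VPC for SPARQL 1.1 Federated Query.',
    LanguageCode='en',
)
for entity in response['Entities']:
    # each entity carries a confidence score, a type label, the matched text, and offsets
    print(entity['Type'], entity['Text'], entity['Score'])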
To enhance our data, the input is required to be UTF-encoded strings of up to 5,000-byte chunks. We do this by dividing each post by paragraph and running each paragraph through the batch_detect_entities method in batches of 25. For each entity that’s detected, the score, type, text, as well as begin and end offsets are returned, as in the following example:

{
  "Score": 0.9177741408348083,
  "Type": "TITLE",
  "Text": "SPARQL 1.1",
  "BeginOffset": 13,
  "EndOffset": 23
}

Associating each of these detected entities with our semi-structured data in our knowledge graph shows even more connections, as seen in a subset in the following graph. When we compare this to our previous graph, we see that a significant number of additional connections have been added. These connections not only link posts together, they allow us to provide additional relevant answers by linking posts based on contextually relevant information stored within the post. This brings us to the final portion of this sample application: creating a web-based chatbot to interact with our new knowledge graph.

Discover information in our knowledge graph

Creating a web application to discover information in our knowledge graph has two steps: defining our chatbot and integrating the chatbot into a web application.

Defining our chatbot

Our application’s chatbot is powered by Amazon Lex, which makes it easy to build conversational interfaces for applications. The building block for any bot built with Amazon Lex is an intent. An intent is an action that responds to natural language user input. Our bot has four different intents specified:

CoAuthorPosts
PostByAuthor
PostByAuthorAndTag
PostByTag

Each intent is identified by defining a variety of short training phrases for that intent, known as utterances. Each utterance is unique, and when a user speaks or types that phrase, the associated intent is invoked. The phrases act as training data for the Lex bot to identify user input and map it to the appropriate intent. For example, the PostsByAuthor intent has a few different utterances that can invoke it, as shown in the following screenshot. The best practice is to use 15–20 sample utterances to provide the necessary variations for the model to perform with optimum accuracy.

One or more slot values are within each utterance, identified by the curly brackets, which represent input data that is needed for the intent. Each slot value can be required or not, has a slot type associated with it, and has a prompt that is used by the chatbot for eliciting the slot value if it’s not present. In addition to configuring the chatbot to prompt the user for missing slot values, you can specify a Lambda function to provide a more thorough validation of the inputs (against values available in the database) or return potential options back to the user to allow them to choose. For our PostsByAuthor intent, we configure a validation check to ensure that the author entered is a valid author in our knowledge graph.

The final piece is to define the fulfillment action for the intents. This is the action that occurs after the intent is invoked and all required slots are filled or validation checks have occurred. When defining the fulfillment action, you can choose from either invoking a Lambda function or returning the parameters back to the client. After you define the intent, you build and test it on the provided console. Now that the chatbot is functioning as expected, let’s integrate it into our web application.
Integrate our chatbot

Integration of our chatbot into our React application is relatively straightforward. We use the familiar React paradigms of components and REST calls, so let’s highlight how to configure the integration between React and Amazon Lex. Amazon Lex supports a variety of deployment options natively, including Facebook, Slack, Twilio, and Amplify. Our application uses Amplify, which is an end-to-end toolkit that allows us to construct full-stack applications powered by AWS. Amplify offers a set of component libraries for a variety of front-end frameworks, including React, which we use in our application.

Amplify provides an interaction component that makes it simple to integrate a React front end with our Amazon Lex chatbot. To accomplish this, we need some configuration information, such as the chatbot name and a configured Amazon Cognito identity pool ID. After we provide these configuration values, the component handles the work of wiring up the integration with our chatbot. In addition to the configuration, we need an additional piece of code (available on GitHub) to handle the search parameters returned as the fulfillment action from our Amazon Lex intent:

useEffect(() => {
  const handleChatComplete = (event) => {
    const { data, err } = event.detail;
    if (data) {
      props.setSearchParameter(data["slots"]);
    }
    if (err) console.error("Chat failed:", err);
  };
  const chatbotElement = document.querySelector("amplify-chatbot");
  chatbotElement.addEventListener("chatCompleted", handleChatComplete);
  return function cleanup() {
    chatbotElement.removeEventListener("chatCompleted", handleChatComplete);
  };
}, [props]);

With the search parameters for our intent, we can now call our API Gateway as we would for other REST-based calls.

Clean up your resources

To clean up the resources used in this post, use either the AWS Command Line Interface (AWS CLI) or the AWS Management Console. Delete the CloudFormation template that you used to configure the remaining resources generated as part of this application.

Conclusion

Knowledge graphs, especially enterprise knowledge graphs, are efficient ways to access vast arrays of information within an organization. However, doing this effectively requires the ability to extract connections from within a large amount of unstructured and semi-structured data. This information then needs to be accessible via a simple and user-friendly interface. NLP and natural language search techniques such as those demonstrated in this post are just one of the many ways that AWS powers an intelligent data extraction and insight platform within organizations.

If you have any questions or want a deeper dive into how to leverage Amazon Neptune for your own knowledge graph use case, we suggest looking at our Neptune Workbench notebook (01-Building-a-Knowledge-Graph-Application). This notebook uses the same AWS Database Blog data used here but provides additional details on the methodology, data model, and queries required to build a knowledge graph based search solution. As with other AWS services, we’re always available through your Amazon account manager or via the Amazon Neptune Discussion Forums.

About the author

Dave Bechberger is a Sr. Graph Architect with the Amazon Neptune team. He used his years of experience working with customers to build graph database-backed applications as inspiration to co-author “Graph Databases in Action” by Manning.
https://aws.amazon.com/blogs/database/supercharge-your-knowledge-graph-using-amazon-neptune-amazon-comprehend-and-amazon-lex/
computingpostcom · 2 years ago
Text
How to install Podman on Ubuntu? Podman (Pod Manager) is a tool used to create and maintain containers. It is part of the libpod library. The Red Hat team has been working on a set of tools for running containers without a daemon (you can’t run Docker containers without the Docker Engine daemon). The following set of tools work together to power the use of containers without an always-running daemon process:

Buildah to facilitate building of OCI images
Skopeo for sharing/finding container images on Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories
Podman for running containers without the need for a daemon

Buildah’s commands replicate all of the commands that are found in a Dockerfile. Buildah containers are just created to allow content to be added back to the container image. See: How To Build OCI & Docker Container Images With Buildah. Podman gives you all the commands and functions required to maintain and modify OCI images, such as pulling and tagging. It also allows you to create, run, and maintain containers created from those images.

Install Podman on Ubuntu 22.04|20.04|18.04

On Ubuntu 20.04 and 18.04 the podman package lives in a Kubic project repository, which needs to be added prior to installation; on Ubuntu 22.04 it is available from the default repositories. Start a new terminal session on your Ubuntu machine and run the commands below.

Install Podman on Ubuntu 22.04

On Ubuntu 22.04 you can install Podman from the OS default APT repositories, as they contain the package:

sudo apt update
sudo apt install podman

Install Podman on Ubuntu 20.04 / Ubuntu 18.04

Add the Kubic project repository to your Ubuntu 20.04|18.04 system:

. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/Release.key" | sudo apt-key add -

Once the repository is added, proceed to install Podman:

sudo apt update
sudo apt -y install podman

After the installation, you can display the Podman version and information pertaining to the host, current storage stats, and build of podman:

$ podman --version
podman version 3.4.2
$ podman info

Test Podman on Ubuntu 22.04|20.04|18.04

Pull the Alpine image:

$ podman pull alpine
Trying to pull docker.io/library/alpine…Getting image source signatures
Copying blob 8e402f1a9c57: 2.63 MiB / 2.63 MiB [=======================] 5s
Copying config 5cb3aa00f899: 1.48 KiB / 1.48 KiB [=====================] 0s
Writing manifest to image destination
Storing signatures
5cb3aa00f89934411ffba5c063a9bc98ace875d8f92e77d0029543d9f2ef4ad0

List downloaded images:

$ podman images
REPOSITORY                      TAG      IMAGE ID      CREATED       SIZE
docker.io/library/alpine        latest   5cb3aa00f899  3 days ago    5.79 MB
docker.io/library/hello-world   latest   fce289e99eb9  2 months ago  5.62 kB

Podman’s local repository is in /var/lib/containers. Run a container with the run command; the options are similar to docker:
$ podman run -it --rm docker.io/library/alpine /bin/sh
/ # apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
v3.9.2-1-g592d872fb8 [http://dl-cdn.alpinelinux.org/alpine/v3.9/main]
v3.9.2-2-ge7dc3349a9 [http://dl-cdn.alpinelinux.org/alpine/v3.9/community]
OK: 9754 distinct packages available
/ # apk add vim
(1/5) Installing lua5.3-libs (5.3.5-r1)
(2/5) Installing ncurses-terminfo-base (6.1_p20190105-r0)
(3/5) Installing ncurses-terminfo (6.1_p20190105-r0)
(4/5) Installing ncurses-libs (6.1_p20190105-r0)
(5/5) Installing vim (8.1.0630-r0)
Executing busybox-1.29.3-r10.trigger
OK: 40 MiB in 19 packages
/ # exit

Show running containers:

$ podman ps
CONTAINER ID  IMAGE                            COMMAND  CREATED        STATUS            PORTS  NAMES
1eb35f1b7de8  docker.io/library/alpine:latest  /bin/sh  4 seconds ago  Up 4 seconds ago         pedantic_roentgen
ec9c5b12db46  docker.io/library/alpine:latest  /bin/sh  5 minutes ago  Up 5 minutes ago         ecstatic_wiles

For more on usage, check: How To run Docker Containers using Podman and Libpod. To set up a private registry, check out: Setup Docker Container Registry with Podman & Let’s Encrypt SSL.

Conclusion

Podman is shaping up to be a replacement for Docker and other container management tools that require a daemon to work. It is still young and in early stages of development, so it is too early to tell a lot. In RHEL 8, the officially supported tools for managing containers are Podman and Buildah.
opsmxspinnaker · 4 years ago
Link
Spinnaker is a Continuous Delivery (CD) platform that was developed at Netflix, where it was used to perform a high number of deployments (8,000+/day). Later they made it available as an open-source tool. Previously, enterprise release cycles used to stretch over 7-8 months. But with the availability of the Spinnaker CD tool, enterprises have been able to shorten release cycles from months to weeks to days (even multiple releases a day).
There are several other CD tools available in the market, so what makes Spinnaker so special?
Spinnaker Features:
Multicloud Deployments
It includes support for deployment to multiple cloud environments like Kubernetes (K8s), OpenShift, AWS, Azure, GCP, and so on. It abstracts the cloud environment so that deployments can be worked on and managed easily.
Automated releases
Spinnaker allows you to create and configure CD pipelines that can be triggered manually or by events. Thus the entire release process is automated end-to-end.
Safe Deployments
With a high number of release deployments, it is hard to know if some unwanted or bad release that should have failed has made it into production. The built-in rollback mechanisms in Spinnaker allow you to test and quickly roll back a deployment, letting the application go back to its earlier state.
Maintain Visibility & Control
This feature in Spinnaker allows you to monitor your application across different cloud providers without needing to log in to multiple accounts.
So Spinnaker is a foundational platform for Continuous Delivery (CD) that can be quite easily extended to match your deployment requirements.
Overview of Spinnaker’s Application Management & Deployment Pipelines Functionality
Spinnaker supports application management. In the Spinnaker UI, an application is represented as an inventory of all the infrastructure resources – clusters/server-groups, load balancers, firewalls, functions (even serverless functions) that are part of your application.
You can manage the same application deployed to different environments like AWS, GCP, Kubernetes, and so on from the Spinnaker UI itself. Spinnaker supports access control for multiple accounts. For example, users like developers or testers with permission can deploy to Dev or Stage environments, whereas only the Ops people get to deploy the application into production. You can view and manage the different aspects of the application – like scaling the application, viewing the health of the different Kubernetes pods that are running, and seeing the performance and output of those pods.
Spinnaker pipelines let you get all of your application’s infrastructure up and running. You can define your deployment workflow and configure your pipeline as code (JSON), which enables GitHub-style operations such as reviewing and versioning pipeline definitions; a minimal sketch follows the list below.
Spinnaker pipelines allow you to configure:
Execution options– flexibility to run fully automatically or have manual interventions
Automated triggers– the capability to trigger your workflows through Jenkins jobs, webhooks, etc
Parameters– ability to define parameters which can also be accessed dynamically during pipeline execution
Notifications– to notify stakeholders about the status of pipeline execution
As part of the pipeline, you can configure and create automated triggers. These triggers can be fired based on events like a code check-in to the github repository or a new image being published to a Docker repository. You can have them scheduled to run at frequent intervals. You can pass different parameters to your pipeline so that you can use the same pipeline to deploy to different stages just by varying the parameters. You can set up notifications for integrations with different channels like slack or email.
After configuring the setup, you can add different stages, each of which is responsible for a different set of actions like calling a Jenkins job, deploying to Kubernetes, and so on. All these stages are first-class, built-in objects or actions, which allows you to build a pretty complex pipeline. Spinnaker allows you to extend these pipelines easily and also do release management.
Once you run the Spinnaker pipeline, you can monitor the deployment progress. You can view and troubleshoot if something goes wrong, such as a Jenkins build failure. After a successful build, the build number is passed along and tagged onto the build image, which is then used in subsequent stages to deploy that image.
You can see the results of the deployment, such as which yaml got deployed. Spinnaker adds a lot of extra annotations to the yaml so that it can manage the resources. As mentioned earlier, you can check all aspects (status of the deployment, the health of infrastructure, traffic, etc) of the associated application resources from the UI.
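For illustration, the annotations Spinnaker adds look roughly like this on a deployed manifest (a trimmed, hypothetical example; exact keys vary by Spinnaker version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
  annotations:
    moniker.spinnaker.io/application: sampleapp       # which Spinnaker application owns this resource
    artifact.spinnaker.io/name: sampleapp-deployment   # which artifact produced it
  labels:
    app.kubernetes.io/managed-by: spinnaker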
So we can summarize that Spinnaker displays the inventory of your application, i.e., it shows all the infrastructure behind that application, and it has pipelines for you to deploy that application in a continuous fashion.
Problems with other CD tools
Each organization is at a different maturity level in its release cycles. Today’s fast-paced business environment may mandate some of them to push code checked in by developers to production in a matter of hours, if not minutes. So the questions that developers or DevOps managers ask themselves are:
What if I want to choose what features to promote to the next stage?
What if I want to plan and schedule a release?
What if I want different stakeholders (product managers/QA leads) to sign off (approve) before I promote?
For all the above use cases, Spinnaker is an ideal CD tool of choice, as it does not require lots of custom scripting to orchestrate these tasks. Although there are many solutions in the marketplace that can orchestrate the business processes associated with software delivery, they lack interoperability: the ability to integrate with existing tools in the ecosystem.
Can I include the steps to deploy the software also in the same tool?
Can the same tool be used by the developers, release managers, operations teams to promote the release?
The cost of delivery is pretty high when you have broken releases. Without end-to-end integration of the delivery stages, the deployment process often results in broken releases: for example, raising a Jira ticket for one stage, letting custom scripting be done for that stage, and passing on to the next stage in a similar fashion. An end-to-end pipeline should instead let you:
Use BOM (bill-of-materials) to define what gets released
Integrate with your existing approval process in the delivery pipeline
Do the actual task of delivering the software
Say your release manager decides that, out of ten release items, items A and B (i.e., components of the software) will be released. Then all the approvals (from testers/DevOps/project managers/the release manager) need to be integrated into the deployment process for these releases. And all this can be achieved using a Spinnaker pipeline.
Example of a Spinnaker pipeline
The BOM application configuration (example below) is managed in some source control repository. Once you make any change and commit it, it triggers the pipeline that deploys the specified versions of the services. Under the hood, Spinnaker reads the file from the repository, injects it into the pipeline, deploys the different versions of the services, validates the deployment, and promotes it to the next stage.
Example of a BOM
A BOM can have a list of services that have been installed. You may not install or promote all of the services in a release. So you declare whether each service is being released or not, and the version of the release or image that is going to be published. Here in this example, we are doing it with a Kubernetes application. You can also input different parameters that are going to be part of the release, e.g., release it in the US region only.
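A hypothetical BOM along those lines, shown here as JSON (service names, image tags, and parameters are all made up):

{
  "release": "2021.04",
  "environment": "production",
  "parameters": { "regions": ["us-east-1"] },
  "services": [
    { "name": "service-a", "release": true,  "image": "registry.example.com/service-a:1.4.2" },
    { "name": "service-b", "release": true,  "image": "registry.example.com/service-b:2.0.1" },
    { "name": "service-c", "release": false, "image": "registry.example.com/service-c:0.9.7" }
  ]
}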
So the key features of this release process are:
Source Controlled
Versioned (know what got released and when)
Approved (being gated makes the release items the source of truth; once merged into the main branch, the release is ready to be deployed)
Auditable (being source-controlled, it carries the audit history of who made the change and what changes were made)
Some interesting ways to enforce approvals
Integrations with Jira, ServiceNow
Policy checks for release conformance
Manual Judgment
Approvals can include integrations with Jira and ServiceNow, and policy checks for release conformance, e.g., before you release any release items, their SonarQube coverage for static analysis of code quality and security vulnerabilities must be at 80%. Finally, if you are not ready to automatically promote the release to production, you can make a manual judgment and promote it yourself.
Spinnaker supports managing releases, giving you control over which versions of the different services get deployed and released. So not every version needs to flow through continuous delivery; they can follow a planned release instead. It lets you plan releases, determine which releases get promoted, and promote them through the whole process in an automated manner.
OpsMx is a leading provider of Continuous Delivery solutions that help enterprises safely deliver software at scale and without any human intervention. We help engineering teams take the risk and manual effort out of releasing innovations at the speed of modern business.