#with that i retire to writing my discussion post that was due 5 hours ago :')
quatregats · 1 year ago
Text
By the way, apropos of nothing, but when I say "heigho it'll go without these oddments" I am in fact referencing a panel in Tintin in the Land of the Soviets which 12-year-old me decided to adopt as a catchphrase before I knew what a meme was.
z-h-i-e · 11 months ago
Text
New Year Fanfic Asks (3/5)
11. Would you like to try any new fanfic genres or tropes this year?
It will be difficult to find something I haven’t done. However, there are many things I have written in the key of two Imladris Advisers, that I have not tried out with the Bling Kings, so perhaps that.
12. Will you change anything about the way you edit or rewrite this year?
Nope, I’ll still be a chaotic buttercup who posts in the middle of the night and sometimes has someone look over my shoulder and make suggestions while I’m writing, but unless there’s a ‘use a beta’ notation, these stories get posted as they are, we die like Men of Gondor, and then like fifteen years later I’ll do a revision and correct the additional typos then.
13. Aside from fanfic, are there any other fan works you’d like to try creating? Fanart, or fanvids, gifsets, or podfic? 
I’ve done fanart, will probably do more. (As of last year, I have publicly displayed artwork with one of those fancy engraved nameplates and the description of the work attached to the wall next to it, so I think more art will happen in more than one way.) Videos, unsure. I unearthed some old animations I did for the fandom a long while ago; if you’re in a discord with me you may have seen those shares. Unlikely on gifsets. Podfic – I’m working through logistics.  I do a fair amount of live reading, and I have a good microphone (with pretty colors) now, so I really should. I need people to poke me about those. 
14. Have you ever lost large chunks of your work in the past, due to not backing up your work? Will you change your methods this year?
The year was…well, it had to be either 2002 or 2003. I'm working part time, and I carry my disks with me (yes, disks) that have fanfics on them. I've got a few of them with me at work, including the one with the original Little Balrog. A student worker picked them up from the desk and set them onto the desensitiser unit – the one that demagnetizes things. I immediately said, 'Well, you just ruined all of my disks,' and we had a lesson on why one does not move things that are not theirs, and why they absolutely put nothing but books on the desensitiser (we had a different unit for the VHS tapes, because it could mess those up, too). So I did rewrite Little Balrog, and it turned out a little different; there's a whole part in the beginning with Elrond, Erestor, and Glorfindel having a long discussion that I just skipped rewriting. That means I have been a back-up-the-backup-of-the-backup person ever since – I have so many flash drives of everything, and paper copies, and copies in the cloud, and I haven't trusted technology since, so it's been a good 20-year run of tech not failing. Now, whether I can FIND the flash drive, the disk, the CD… >.> They're around here somewhere… I also did a fair amount of sending fics to Yahoo!Groups and then not archiving them, so that is what I'm working on now: getting things archived on AO3.
15. Do you foresee any personal or professional obstacles this year, that would keep you from creating fanworks? 
Yeah, I’m currently trying to make a huge decision between staying in the librarian & professor camp or shifting into management. Pros and cons to both.  I have about 11 years left in my plans of how long I anticipate working before retirement (I have been in the workforce since 1995; 40 years of working seems like a fair amount in my opinion). I waffle on this on a daily basis, whether I want to continue what I’m doing for the duration, or to try one more adventure. Strangely enough, I think the latter would actually mean more free time, because what I do now is work a full time job plus a part time job, and that can be 60 or more hours a week, especially during midterms and finals.
cllrnat · 7 years ago
Text
My first attempt at writing:
Chapter 1
Major Berenice Griselda Wolfe, World Renowned Trauma Surgeon for the RAMC, was up to her elbows in viscera due to the continuing trauma cases pouring into the South Sinai Multinational Forces and Observers (MFO) base. She had been sent to South Sinai to update the medical staff in new trauma skills, and also to refurbish and re-establish their new trauma unit.
The whole area had once again become a disputed zone, despite the beautiful resorts of Na’ama Bay and Sharm El Sheikh being full of tourists, and situated just 5 minutes away.
What had upset Major Wolfe the most was that the majority of the casualties were children, blown up on their way to school. The local, peaceful Bedouin were also being targeted, but for much worse reasons: their land was valuable, and the greedy developers and corrupt officials were determined to move them further away from civilisation and totally isolate them.
To top it all off, after months of peace and relaxation as she started the final leg of her Military Command before retirement, she received communication that a delegation of Doctors and Surgeons was on its way for a 3-month secondment to observe and learn from the best there is – The World Renowned Trauma Surgeon – Major Bernie Wolfe.
Her legendary taciturn abruptness reared its ugly head, as it always did when she was upset. Not one to show emotions in front of her subordinates, family or friends, she stomped her way around the base for the remainder of the day, puffing away on a cigarette – a sure sign to her colleagues that she was really pissed off but too frustrated to speak about it.
Major Wolfe was well known for bottling her emotions up and carrying on with the British stiff upper lip. She laughed and let out a derisive snort at her own thoughts: "friends and family", there's a bloody laugh. She had mucked up every friendship ever offered to her; afraid of getting too close to what she actually wanted and of getting hurt, she ended everything before it could happen. Running away was Bernie's favourite pastime – anything rather than face personal issues.
Family, hah, she’d even married a man just to prevent gossip in med school and to ensure there was no risk to her military career. He, Marcus, had given her 2 beautiful children, but the distance between them was vast due to 25 years in the Army. She did love them, Cameron and Charlotte, but couldn’t show it, especially when Marcus was around. He would continually bring up her absences, despite knowing she had a military life all planned out when they had met, following in her father’s footsteps.
Marcus Dunn, now there was another matter altogether. He had a career in Orthopaedics, which came with the typical 'Orthopods' attitude. He also hated being known as 'Major Wolfe's Husband'. When they'd attended functions, he'd sneer at her for attending in Uniform instead of a ballgown, like the other clones. He also detested it when his colleagues and peers cornered her to discuss her breakthrough surgical techniques and how exciting and rewarding her work must be.
Afterwards he would make her feel small and humiliated by muttering, for her ears only, that she may be a famous surgeon but she was a shit wife and mother. He knew just the right way to piss her off, so her scowl for the remainder of any evening after his snideness had more than a few people nervous in her Company, thanks to his attempts to demean her career and make himself look glowing. Conceited Orthopaedic prick, this thought made her snigger to herself as she puffed away on the cigarettes he hated so much.
Major Berenice Wolfe (oh yeah, he also hated that she refused to become Mrs Dunn when they were married) was shy, awkward and totally lacking in the social graces, unless she was scrubbed and gowned up in theatre, crawling through the sand and dirt at the roadside, dodging missiles or caressing a beautiful woman's body. Then, she was all confidence and passion. Yes, Major Wolfe was gay. She had finally accepted this at the age of 50 and was now going to live life to the max. Sod what other people thought; she had been pleasing others all her life, now it was me time.
Bloody good job she had asked for a divorce. It had all been going amicably until they had gotten drunk during negotiations and he had said he was glad, because now he could find a real doctor's wife, who could make him happy and satisfy him sexually.
Since they were divorcing and drunk, she thought she'd finally let him know. "So can I," she said, lowering her head to hide behind her unruly fringe as she always did. Marcus, being the typical arrogant arse that he was, went very still and quiet. He stared at her through squinted dark lashes, his eyebrows almost joining in the middle from the frown.
Bernie started giggling as she looked at his face and pictured him in a cartoon with steam coming out of his nostrils and ears; her unique laugh filled the house.
She slowly stood and started to leave the room; then, remembering who she was, she straightened her shoulders and became 'The Major'. She slowly turned and in a calm and quiet voice informed him again, "So can I. And don't you dare try playing the hurt husband with me. I know you have been playing around for years and turning my children against me, and before you deny it, I have proof of your infidelities, so should you try to manipulate anything, I will show the children and the world what a manipulative sleaze bag you truly are."
With that final arrow to his heart, she marched upstairs, packed her Army rucksack with all the items she thought she’d need, and left.
On returning to the base early, she was called into a meeting by her Superior Officer, who explained the position they required her to fill at the MFO base, along with their personal request for her presence.
She quickly explained her predicament concerning Marcus and their imminent divorce. He told her he was shocked her marriage had lasted so long. He quickly opened his computer and asked her who she banked with. Her confused look caused him to laugh out loud.
“Wolfe are you really that naive? That slime ball you’ve been married to for 25 years will no doubt try to freeze and strip your assets and take all your hard earned money, as soon as the bank opens tomorrow. Been there, done that, got the t-shirt Wolfe. He will leave you broke in an attempt to force you back. Well, he should have realised decades ago that the military take care of their own”.
“Got your bank account details Wolfe?”
With that, he made a quick call to payroll, opened her a military account and via online banking transferred all her savings and half their joint account in minutes. She looked dumbfounded, he just smirked and spoke softly to her, “Bernie do you think you’re the only one this has ever happened to? This is now my speciality, stopping my soldiers from being shafted by angry ex’s”.
“What about the property Wolfe? Who owns that? Don’t worry, our Solicitors will ensure the Land Registry is checked and that he won’t be able to sell out from under you. We’ll handle that Wolfe, now off you go Major, oh and enjoy the Desert won’t you.”
“I’d truly love to be a fly on the wall tomorrow Wolfe”.
They both laughed and he could visibly see her shoulders relax as she saluted, thanked him and went to pack her bag ready for deployment to Sinai.
Within an hour she was ready, bags packed, and just like that Major Berenice Griselda Wolfe was getting divorced, once more had her own money, was forcing the sale of the family house to ensure she got her share, and was boarding the plane that was leaving with the British UN Peacekeeping forces. She was given her own pale blue UN beret, since she was going to be part of that unit for the remainder of her military service. While she was in the cargo plane she texted Cameron and Charlotte to explain what had happened and said she would wait to hear from them.
That was one month ago – no wonder she was stomping around. She finished her soul searching, crushed the cigarette end under her boot and continued her post-surgery analysis, along with the analysis of her pitiful life.
She looked down over Na’ama Bay and a broad smile changed her whole visage. At least she had the pleasure of diving. Bernie had been diving since she had discovered the relaxing isolation of the sport about 25 years ago.
Unknown to her comrades, Bernie had purchased a newly built home overlooking the Red Sea and had also invested in a SCUBA Diving Centre along with her close Egyptian friends. This allowed her free access to the sport anytime she liked – her friend Akram was fronting the business – and it was also an investment for her savings rather than just wasting them as she normally would have. At least this was a bit that was hers, Bernie's. Something that neither Marcus nor her family could touch. A small smile appeared on her lips and a sparkle came to her eyes. Bernie was sort of happy and content for a moment. This would all change in a few days – for the better, definitely – if only she knew it.
Serena Wendy Campbell – a well respected Vascular Surgeon, Deputy CEO, mother of a delinquent and ex-wife to a misogynistic, alcoholic prick who had played around since they were married and blamed everyone else for his own shortcomings – had recently lost her mother to Dementia, found out she had a sister (now deceased) and was once again a source of the vicious Holby Gossip Mill perpetuated by the porters and nurses.
All because she had been attempting to keep her mother's dignity intact while struggling to care for her alone – that was, until she could no longer cope or take the abuse her mother was doling out. Serena took it all on the chin as usual, taking the criticism thrown at her by her supposed friends and colleagues. They actually thought she was abusing her mother, rather than the other way around. That was until her mother had to be admitted to AAU urgently; then they witnessed the vileness this horrible illness caused. They saw how her mother beat her, did not recognise her and accused her of things, and in typical Serena style she took it all. They saw the scars on her back, left untreated because Serena didn't want anyone to know she wasn't coping and couldn't ask for treatment and help.
Now she found she had a challenging Nephew with Asperger's, and all she really wanted was 'a bloody rest'.
As if he could read her mind and right on cue, Henrik Hansen appeared with an offer she wasn’t allowed to refuse.
"What? You want me to go where?" exclaimed Serena.
“Egypt, Ms Campbell, I thought I made myself quite clear on that point”.
"Well yes, but why me?" Serena asked, exasperated.
“Why not you? You run AAU, plus I want, no I need you to see how ‘The Major’ works and runs things”, Hansen said calmly. “I have a feeling you will be very grateful to me for choosing you in the long run”. With that he gave her a small smile and that was that.
Ms Campbell was going to be away for 3 months.
Serena got home and before she knew it, she was getting her suitcases out and all her Summer clothes.
Hansen had arranged for her to stay in a very nice Hotel near the base; she was not going to be staying in any military base and sleeping in a bunk, no thank you, that was not how Ms Serena Campbell rolled. At least he had booked the flight so she had a few days to relax and settle in before having to meet the others and the Military Staff she was expected to work with for the next few months.
Well, at least I have a week of sunbathing and relaxing before having to get all military, she thought as she packed suntan lotion and an assortment of sunglasses. Serena was one of those women who pack for every eventuality that could happen in a single day; she was actually surprised there were any of her clothes left in the house by the time she heard the taxi honking its horn outside, ready to take her to the airport for her flight.
She relaxed in the seat, dreaming of sipping a lovely Shiraz while travelling in luxury to Cairo – or so she assumed.
jmtapio · 6 years ago
Link
Backups!
After my last post, didn’t feel like writing for a while. But now I found a good subject: backups. Ah, backups…
I’ve run my current home-grown backup solution for a long time. Git history says at least since mid-2007 (so 11½ years), but the scripts didn’t start in Git, so 12 years is a fair assessment.
It’s a simple solution, based on incremental filesystem dumps, and back to level 0 periodically. I did use my backups to recover files (around once per year, I think), so it works, but it’s clunky. The biggest deficiencies are:
I don't have enough space to back up everything I want to back up, if I want long-term history (since the full dumps every N units of time are costly).
Since the dump utility I use is limited to 9 levels, it also creates a limitation on how often I can make backups, which leads to too coarse backup granularity (and large at-risk intervals).
Since the dump is incremental, one needs to restore the correct archives in the correct order to get to the file… urgh!
Clearly I’m using technology from the stone-age, so this week I took a look at what’s available to replace my home-grown stuff.
But let’s make it clear first: I’m not interested in cloud-based backups as main solution. They might suit as an N+M (where M > 2) option, but not as primary/only solution. Plus, where’s the fun in delegating the technical stuff to others?
Various options
rsnapshot
The first thing I looked at, because it was on the back of my mind for a while, was rsnapshot. Its simplicity is very appealing, as well as its nice file-based deduplication, but a quick look at the current situation is not very encouraging:
it seems half-orphaned; not a very dire situation, but despite much discussion on that bug, it never got a clear closure; activity is low, the last official release was in 2015 and there have been only a few commits since then;
low activity wouldn't be a problem, but there are quite a few bugs filed that point to potential data loss, for example issue 141: "At certain conditions rsnapshot removes old backups without make new ones";
Looking especially at the mentioned issue 141 made me realise that the use of relative (e.g. hourly.N, etc.) timestamps is what leads to fragility in the script. Ideally the actual directories would be absolute-timestamp-based (e.g. 2018-12-09T15:45:44), and there would be just helpful symlinks (hourly.0) to these. Sure, there is the "sync_first" mode which seems safer, but it still doesn't guarantee the correct transition, since the various rotate calls are independent from each other and from the sync action itself.
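To make the idea concrete, here is a minimal Python sketch of that absolute-timestamp layout: snapshot directories named by their creation time, with the familiar hourly.N names kept only as convenience symlinks. The root path and helper names are made up for illustration; this is not rsnapshot code.

import os
from datetime import datetime

SNAP_ROOT = "/backups/snapshots"  # hypothetical backup root

def create_snapshot_dir():
    # the directory itself is named by an absolute timestamp
    name = datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
    path = os.path.join(SNAP_ROOT, name)
    os.makedirs(path)
    return path

def update_symlinks(period="hourly", keep=4):
    # the relative names (hourly.0, hourly.1, ...) are just symlinks
    # pointing at the newest timestamped directories
    snaps = sorted(d for d in os.listdir(SNAP_ROOT)
                   if not os.path.islink(os.path.join(SNAP_ROOT, d)))
    for i, snap in enumerate(reversed(snaps[-keep:])):
        link = os.path.join(SNAP_ROOT, "%s.%d" % (period, i))
        if os.path.lexists(link):
            os.remove(link)
        os.symlink(snap, link)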
Speaking of the rotate calls, the whole cron story (“create a cron entry for each cycle, and make sure to run the greater period ones before the lower periods”) points to more issues regarding the architecture of the rotation.
The conclusion was that at best, this would be a small improvement on my current solution. And since rsnapshot itself is a 4K LOC Perl script, I’m unlikely to contribute significantly to it; also, my desired changes would change the use of it significantly.
So, if this doesn’t work, what about other solutions?
borg backup
A tool very highly spoken of in the DIY/self-hosting backups is borgbackup. A quick look at it shows many advantages over rsnapshot:
space efficient storage, due to chunk-based (variable-size chunks? it's not entirely clear what the criteria for chunk length are) deduplication, even across source filesystems/source machines/etc.
data encryption, yay!
customisable compression
It also can do off-site backups, of course, also requiring SSH access; and if the tool is also installed remotely, it’s much more efficient.
Something not clearly spoken about in the readme is the “correct” (IMHO) handling of repository maintenance: since archives are time-based and not relative, you declare pruning much more logically, along the lines of “keep only N backups older than T”. And it’s pruning, not rotation, which is very good.
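As a rough illustration of what such declarative retention could look like (a hedged Python sketch over timestamp-named snapshot directories, not borg's actual prune implementation or CLI):

import os
import shutil
from datetime import datetime, timedelta

SNAP_ROOT = "/backups/snapshots"  # hypothetical backup root

def prune(keep_last=7, keep_days=30):
    # keep the newest `keep_last` snapshots plus anything newer than
    # `keep_days` days, and delete the rest -- pruning, not rotation
    snaps = sorted(os.listdir(SNAP_ROOT), reverse=True)  # newest first
    cutoff = datetime.now() - timedelta(days=keep_days)
    for i, snap in enumerate(snaps):
        taken = datetime.strptime(snap, "%Y-%m-%dT%H:%M:%S")
        if i < keep_last or taken >= cutoff:
            continue  # retained by the policy
        shutil.rmtree(os.path.join(SNAP_ROOT, snap))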
Add on top the better handling of multiple filesystems/areas to be backed up, all in a single repository, and at first glance everything looks good. But a bit deeper look make me worried about a few things.
Reliability: On one hand, the archives are mountable. Which seems fancy. But it also means that without the tool working, and the metadata in good shape, you can't access the data. A quick look at the design shows significant complexity, which means likely bugs, in the whole archive/database/chunk handling. If this were the only way to get space-efficient storage, all would be good; but if you're willing to give up encryption (at least for local backups this can be an acceptable trade-off), then rsnapshot plus a tool like duperemove, which can do block-based deduplication (yes, it will kill performance on HDDs), seems a much simpler way to get the same result, and without the potential problems of a repository that consists of opaque blobs.
Of course, having significant internal state, there are tools to support this, like borg check and borg recreate, but the existence of these tools in itself confirms to me that there’s an inherent risk in such a design. A rsnapshot directory can be deleted, but it’s hard to get it corrupted.
Speaking of mounting archives, it also means that getting to your files from a few hours ago is not as trivial as in rsnapshot's case, which is simply cp /snapshots/hourly.3/desired/path/file . with no mounting and no need to come up with the right permissions to allow unprivileged users to do it, etc.
Security: The promise of isolating clients from bad servers and vice versa is good indeed. But it also has a few issues, of which the most important for my use case is the following: in order to allow clients to only push new archives, but not delete/break old ones (i.e. append-only mode), one can set a per-connection (via SSH keys' forced command args) append-only mode: you just need to set --append-only for that client. The documentation gives a nice example, but it ends up with:
As data is only appended, and nothing removed, commands like prune or delete won’t free disk space, they merely tag data as deleted in a new transaction.
Be aware that as soon as you write to the repo in non-append-only mode (e.g. prune, delete or create archives from an admin machine), it will remove the deleted objects permanently (including the ones that were already marked as deleted, but not removed, in append-only mode).
So basically, the append only mode is not “reject other actions” (and ideally alert on this), but rather “postpone modifications until later”, which makes it IMHO useless.
Conclusion: borg backup is useful if you want a relatively hands-off, works well solution, but it has corner cases that kind of nullify its space savings advantage, depending on your trade-offs. So, not for me.
What would my ideal solution be?
After thinking on it, these are the important trade-offs:
File or block/chunk-based deduplication? Given native (file-system-level) block-based deduplication, a “native” (in the backup tool) seems preferred for local backups; for remote backups, of course it’s different, but then deduplication with encryption is its own story
File storage: native (1:1) or bundled (needs an extraction step)? I personally would take native again, just to ensure I can get access to the files without the tool or its internal state needing to work.
Per-file-system or global repository: ideally global, so that different file-systems don’t require separate handling/integration.
This leans more towards a rsnapshot-like solution… And then there are additional bonus points (in random order):
facilitating secure periodic snapshots to offline media
facilitating secure remote backups on dumb storage (not over SSH!) so that cloud-based backups can be used if desired
native support for redundancy in terms of Reed-Solomon error correction so that small blocks being lost don’t risk losing an entire file
ideally good customisation for the retention policy
ideally good exclusion rules (i.e. needing to add manually /home/*/.mozilla/cache is not “good”)
That’s a nice list, and from my search, I don’t think there is something like that.
Which makes me worried that I’ll start another project I won’t have time to properly maintain…
Next steps
Well, at least the next step is to get bigger hard drives for my current backup solution ☺ I'm impressed by the ~64K hours (7+ years) Power_On_Hours of my current HDDs, and it makes me feel good about choosing the right hardware way back, but I can now buy hard drives that are 5× bigger or more, which will allow more retention and more experiments. I was hoping I could retire my HDDs completely and switch to SSDs only, but that's still too expensive, and nothing can beat the density and price of 10TB+ HDDs…
Comments and suggestions are very welcome! In the meantime, I’m shopping for hardware :-P
via Planet Debian
stephaniefchase · 7 years ago
Text
Bajan Newscap 10/23/2017
Good Morning #realdreamchasers. Here is your daily news cap for Monday 23rd October, 2017. There is a lot to read and digest so take your time. Remember you can read full articles via Barbados Today (BT), or by purchasing Daily Nation Newspaper (DN).
MARA THOMPSON NOT IN SUPPORT OF GUTTER POLITICS – With talk of gutter politics assuming prominence ahead of a general election constitutionally due in the coming months, Member of Parliament (MP) Mara Thompson has urged members of the ruling Democratic Labour Party (DLP) not to go that route. In fact, the wife of the late former Prime Minister David Thompson urged the current leader of the DLP Freundel Stuart to rise above attacks thrown by the opposition Barbados Labour Party (BLP) and take the high road during the upcoming election campaign. “I accept that this party is not immune to these vagaries but I urge my party leader to guard against the political cannibalism as we must be distinguishable from the other side,” she said this morning at the annual David Thompson Memorial Service held at St Margaret’s Anglican Church. “David was known to have a sharp tongue and a gift for not mincing his words, but he always treated his opponents as humans. It saddens me today that our next election has the ability to divide this country even more. “As a party and a Government, we must try to resist the temptation to arm our words that they so detonate to cause persons harm. This is not the DLP that I was introduced to,” Thompson added. She accused the BLP of weaponizing and cultivating a new level of corrosive politics. “I want to appeal to all within my hearing. We cannot create a post-independence Barbados where we are normalizing mediocrity undergirded by opposing for opposing sake,” she said. (BT)
MORE BARBADIANS LOOKING TO FOREIGN UNIVERSITIES FOR THEIR TERTIARY EDUCATION – More Barbadian students are considering overseas universities for their tertiary education. Speaking Saturday at an education fair held at the Lloyd Erskine Sandiford Centre, Legal Counsel and Director of Communications at The Student Centre Kimberley Alleyne-Pinder said that in recent years, there has been an increase in the number of undergraduate and postgraduate students venturing to countries such as Canada to further their education. The curious parents and students discussed their options with recruiters from 21 universities and colleges at the education fair, hosted by the Student Centre and the Canadian High Commission. The Embassy also facilitated seminars to inform patrons about student visas and immigration requirements as well as scholarships offered by the Canadian Government. Political Counsellor of the High Commission of Canada, Agnes Pust, also mentioned that there has been an increase in the number of students interested in studying in Canada. She attributed this to the dissemination of information from past Barbados students who immensely enjoyed their experiences studying in Canada. (BT)
DRY TAP BLUES – Thousands of Barbadians across the island found themselves with dry taps yesterday. And even as the Barbados Water Authority (BWA) worked steadily to rectify the issue, for some that water outage could continue for some time. From early yesterday morning, the BWA’s Rapid Response Unit was called into action servicing communities in portions of St James, St Andrew, St Thomas, St Michael and Christ Church. They were providing much needed relief for those who experienced dry taps as several BWA pumping stations reeled from the knock-on effects of a Barbados Light & Power (BL&P) electrical outage on Saturday. (DN)
BELLE PUMPING STATION SHUT DOWN - The Barbados Water Authority (BWA) wishes to inform the public that in order to increase the water supply at the Belle Pumping Station, that station was shut down from 3 to 6 p.m. today. For the last few days the Belle station has not been pumping at full capacity, leading to low water pressure or outages in parts of Christ Church and parts of St Michael. The three-hour shut down was expected to rectify this problem. In addition, there has also been low water pressure and outages in the north of the island in areas along Highway 2A such as Bagatelle, Redmans Village, Prior Park, Melrose and surrounding districts. This is primarily due to a power outage across the island yesterday. After power was restored the BWA stations went back on line but the reservoirs remained low. A subsequent power outage last night sent the Haymans and Whim stations in the north off line again, leading to intermittent outages; water levels at those stations are slowly returning to normal. The BWA is also investigating a suspected burst main which may be impacting parts of Chalky Mount and Cambridge, St Andrew and surrounding districts. Meanwhile, investigations are continuing into outages at Gibbons and St Christopher, Christ Church. The BWA apologises for any inconvenience caused to its valued customers. (DN)
BWA ‘FLUSHING’ IN WORTHING – Traversing along Worthing, Christ Church was a decidedly unpleasant experience yesterday evening. It was due to the recurring problem of spilling sewage. One woman who was holding her nose as she passed told the DAILY NATION: “That was so for two months now and you can smell it from here all the way down to the gas station. When it rains, it does be even worse and then you got people driving by and splashing that nasty water all in your face.” A motorist said she noticed the problem two weeks ago, telling the photographer: “You don’t need a camera, you need a smell metre.” Another pedestrian remarked: “That smell is really nasty, although today is the first time I’ve seen it in a long time. The last time it was bad was when we had all that rain.” (DN)
FOGGING SCHEDULE – The Ministry of Health will continue its efforts to rid the island of dengue and other mosquito borne diseases by fogging a number of districts across the island. On Monday, October 23, the following districts in St Philip will be sprayed: Apple Hall, Inchcape Terrace, Ocean View with Avenues, Harrismith, Shrewsbury, Castle Heights, Bequest Tenantry, Harmony Hall, Eastbourne, Lynches, Lynches Tenantry, Casuarina Development, Merricks, East Point, Sealy Hill, St Catherine’s and environs. The following day, Tuesday, October 24, the team will be in St Lucy and St Peter and will carry out fogging operations at Oxford, Mount Gay, Mount Gay Village, Church Hill, Alleynesdale, Reservoir Road, Millionaire Road, Station Hill, Around-the-Town, Golden Mile, Douglas Development, Church Hill Road, Tom Bend Road, Mile-and-a-Quarter and neighbouring districts. The Vector Control Team will be in St Michael on Wednesday, October 25, and will fog: Grazettes Industrial Park, Piedmont Park, Long Gap, Field Side Drive, Hope Land, Spooners Hill, White Hall, Austin Drive, Medford Road, White Hall Road, White Hall New Development, Green Hill and surrounding districts.  On Thursday, October 26, the team will be in St Joseph and St John and will fog St Elizabeth, St Elizabeth New Road, Phillips Road, Bathsheba, Bathsheba Road, Hillswick, Pool Land, Four Roads, Cherry Grove, Henley Road, The Glen, Roebuck Street and neighbouring districts. The Vector Team will return to St Michael on Friday, October 27, and will spray Parkinson Field, Field Road, Rock Avenue, Rock Close, Bottom Close, Factory Avenue, Pine East West Boulevard, Phillips Road, Midway Lane, Golden Road, Smith Road, Princess Royal Avenue, Blackman Field, Hilltop and environs. The fogging operations run from 4:30 to 7:30 p.m. each day, and residents are reminded to open their windows and doors to allow the fog to penetrate. Parents are reminded that children should be prevented from playing in the fog or running behind the fogging machine. (DN)
PSV OWNERS WANT $3 FARES – Owners of public service vehicles (PSVs) are still hoping to see their fares increased to $3. But that number, according to president of the Alliance Owners of Public Transport (AOPT), Roy Raphael, still might not be enough to effectively cover their growing costs. Speaking to DAILY NATION yesterday, Raphael highlighted a number of issues, which he said were impacting on owners’ bottom lines. These include bad roads that were sending them speeding to mechanics, and high fuel costs. “When we did our analysis, it really showed that bus fare should really be increased [to $5]. Unlike the Transport Board that gets subsidised, we don’t get any subsidy from Government. But we are prepared to give some consideration if the need arises,” he said.  (DN)
TAXI MEN GET GATE SURPRISE – New gate, new rules. That’s what the majority of the island’s over 3 000 taxi operators are encountering after the Bridgetown Port Inc. opened its new gate earlier this month. But an official at the port has suggested the entity is open to dialogue on the matter and that aggrieved taxi operators could send their concerns in writing to the port. Just over a week ago, after a few months’ construction, the gate was reopened and a new feature, a small turning circle, was put in place where passengers have to be dropped off outside the port if they were not with a registered seaport operator. Long-time taxi operator Carl Hinds, of CJ’s Tours and Taxi Service, said for the past week he has had to drop off passengers outside the port even if it was raining or if they were wheelchair users. (DN)
WINDOW PAIN – A historic church which survived the fury of a massive hurricane of 1831, fell victim to a major act of vandalism yesterday. And now shocked officials at St Mary’s Anglican Church in The City are contemplating how they are going to find the money to replace 19 of the 34 original Georgian windows, including stained glass ones, that have been smashed. “It is really sad that society is of such that people no longer respect the church,” retired senior superintendent of police and now church warden, Jeddar Robinson, told the DAILY NATION. “Once upon a time we used to leave the gates open so that people could come in. But then the vagrants came in and dirtied the place; left the place so dirty that we now have to lock the gates,” he explained.  (DN)  
SILVER HILL SHOOTING VICTIM DIES - One of the victims of Saturday’s shooting at Silver Hill, Christ Church has died. Andrew Prescod, 34, of Welches, Oistins, Christ Church, succumbed to his injuries sometime around 5:20 p.m. today. Prescod and two females, 21 and 29 years old respectively, were shot while liming with other persons. Prescod was shot in the head, the 21-year-old female was shot in her right knee and the 29-year-old female was grazed by a bullet on her chin. As police continue their investigations, they are asking anyone who can provide any information to assist them to contact the Oistins Police Station at 418-2612 or 4182658, police emergency number “211”, Crime Stoppers at 1-800 –TIPS (8477) or the nearest police station. (DN)
HIGH WORDS – The battle for the presidency has quickly become a war of words. Outgoing Barbados Olympic Association head Steve Stoute isn’t going away quietly, having engaged in a back-and-forth exchange with BOA director and presidential candidate Lieutenant Colonel Trevor Browne ahead of Wednesday’s elections. Expressing a clear difference of opinion over the direction of the body, the two board members sent out some strongly-worded emails to the BOA’s membership directed towards each other. The issue appears to stem from Browne’s decision to invite the membership and candidates to an unofficial impromptu meeting tonight to discuss concerns surrounding the upcoming general elections. (DN)
BISHOO BAGS 5 TO PUT WI IN CONTROL – Leg-spinner Devendra Bishoo’s fourth five-wicket haul in Tests handed West Indies a precious 60-run first innings lead, and the Caribbean side then batted themselves into control of the opening cricket Test against Zimbabwe on the second day here yesterday. Exploiting the turning pitch at Queens Sports Club, the 31-year-old picked up five for 79 as Zimbabwe, responding to West Indies’ disappointing 219, were restricted to a paltry 159 in their first innings. Pacers Kemar Roach (2-23) and Jason Holder (2-25) both supported with two wickets apiece to ensure there was no revival by the hosts. Opener Hamilton Masakadza top-scored with 42, Craig Ervine got 39 and Solomon Mire chipped in with 27, but the hosts lost their way from their lunch time position of 101 three – their last seven wickets tumbling for a mere 58 runs. (DN)
NORTH AND SOUTH SIDES VICTORIOUS - The David Thompson Memorial Constituency Councils’ Classic is proving to be a happy hunting ground for former Barbados footballers and promising juniors. When the action continued on Friday night, former Barbados winger Jason Blackman engineered St James North’s 6-0 thrashing of St Joseph at Speightstown while talented T’Shane Lorde led St Philip North to a 4-1 triumph over St Michael West Central at Friendship with a brace. Blackman opened the scoring in the 19th minute and added a second in the 59th minute in a second half goal feast by St James North, which saw Javan Best making a double strike in the 76th and 80th minutes. Rashad Blackman (50th) and Omar Archer (64th) also found the nets. In the other game at Speightstown, St George North defeated St Peter 2-0, courtesy goals by Jakobi Boucher (9th) and Shaquille Bailey (68th).  (DN)
RICHER LIVES – For valedictorian Rhyesa Joseph, graduating from the University of the West Indies (UWI) Cave Hill was like a crossroad between reward and responsibility. The St Lucian, who gained first class honours with a Bachelor of Science degree in political science with international relations, made the point while addressing UWI’s evening graduation ceremony at Cave Hill on Saturday. She received her scroll from acting campus registrar Rommel Carter. Basing her speech on an excerpt from a poem entitled Love After Love, penned by her fellow countryman and Nobel Laureate Derek Walcott who died this year, Joseph gave colleagues several pointers as they prepared for the world beyond university life. Using an allegory of a feast, she advised the Social Science class of 2017 that while their degrees were valuable assets, they needed to be further enriched with more than just that. (DN)
That’s all for today folks there are 70 days left in the year Shalom! #thechasefiles #dailynewscaps Follow us on Twitter, Facebook & Instagram for your daily news. #bajannewscaps #newscapsbystephaniefchase
chaj · 8 years ago
Link
via www.pyimagesearch.com
My Uncle John is a long haul tractor trailer truck driver.
For each new assignment, he picks his load up from a local company early in the morning and then sets off on a lengthy, enduring cross-country trek across the United States that takes him days to complete.
John is a nice, outgoing guy, who carries a smart, witty demeanor. He also fits the “cowboy of the highway” stereotype to a T, sporting a big ole’ trucker cap, red-checkered flannel shirt, and a faded pair of Levi’s that have more than one splotch of oil stain from quick and dirty roadside fixes. He also loves his country music.
I caught up with John a few weeks ago during a family dinner and asked him about his trucking job.
I was genuinely curious — before I entered high school I thought it would be fun to drive a truck or a car for a living (personally, I find driving to be a pleasurable, therapeutic experience).
But my question was also a bit self-motivated as well:
Earlier that morning I had just finished writing the code for this blog post and wanted to get his take on how computer science (and more specifically, computer vision) was affecting his trucking job.
The truth was this:
John was scared about his future employment, his livelihood, and his future.
The first five sentences out of his mouth included the words:
Tesla
Self-driving cars
Artificial Intelligence (AI)
Many proponents of autonomous, self-driving vehicles argue that the first industry that will be completely and totally overhauled by self-driving cars/trucks (even before consumer vehicles) is the long haul tractor trailer business.
If self-driving tractor trailers become a reality in the next few years, John has good reason to be worried — he’ll be out of a job, one that he’s been doing his entire life. He’s also getting close to retirement and needs to finish out his working years strong.
This isn’t speculation either: NVIDIA recently announced a partnership with PACCAR, a leading global truck manufacturer. The goal of this partnership is to make self-driving semi-trailers a reality.
After John and I were done discussing self-driving vehicles, I asked him the critical question that this very blog post hinges on:
Have you ever fallen asleep at the wheel?
I could tell instantly that John was uncomfortable. He didn’t look me in the eye. And when he finally did answer, it wasn’t a direct one — instead he recalled a story about his friend (name left out on purpose) who fell asleep after disobeying company policy on maximum number of hours driven during a 24 hour period.
The man ran off the highway, the contents of his truck spilling all over the road, blocking the interstate almost the entire night. Luckily, no one was injured, but it gave John quite the scare as he realized that if it could happen to other drivers, it could happen to him as well.
I then explained to John my work from earlier in the day — a computer vision system that can automatically detect driver drowsiness in a real-time video stream and then play an alarm if the driver appears to be drowsy.
While John said he was uncomfortable being directly video-monitored while driving, he did admit that the technique would be helpful in the industry and would ideally reduce the number of fatigue-related accidents.
Today, I am going to show you my implementation of detecting drowsiness in a video stream — my hope is that you’ll be able to use it in your own applications.
To learn more about drowsiness detection with OpenCV, just keep reading.
Drowsiness detection with OpenCV
Two weeks ago I discussed how to detect eye blinks in video streams using facial landmarks.
Today, we are going to extend this method and use it to determine how long a given person’s eyes have been closed for. If their eyes have been closed for a certain amount of time, we’ll assume that they are starting to doze off and play an alarm to wake them up and grab their attention.
To accomplish this task, I’ve broken down today’s tutorial into three parts.
In the first part, I’ll show you how I setup my camera in my car so I could easily detect my face and apply facial landmark localization to monitor my eyes.
I’ll then demonstrate how we can implement our own drowsiness detector using OpenCV, dlib, and Python.
Finally, I’ll hop in my car and go for a drive (and pretend to be falling asleep as I do).
As we’ll see, the drowsiness detector works well and reliably alerts me each time I start to “snooze”.
Rigging my car with a drowsiness detector
Figure 1: Mounting my camera to my car dash for drowsiness detection.
The camera I used for this project was a Logitech C920. I love this camera as it:
Is relatively affordable.
Can shoot in full 1080p.
Is plug-and-play compatible with nearly every device I’ve tried it with (including the Raspberry Pi).
I took this camera and mounted it to the top of my dash using some double-sided tape to keep it from moving around during the drive (Figure 1 above).
The camera was then connected to my MacBook Pro on the seat next to me:
Figure 2: I’ll be using my MacBook Pro to run the actual drowsiness detection algorithm.
Originally, I had intended on using my Raspberry Pi 3 due to (1) form factor and (2) the real-world implications of building a driver drowsiness detector using very affordable hardware; however, as last week’s blog post discussed, the Raspberry Pi isn’t quite fast enough for real-time facial landmark detection.
In a future blog post I’ll be discussing how to optimize the Raspberry Pi along with the dlib compile to enable real-time facial landmark detection. However, for the time being, we’ll simply use a standard laptop computer.
With all my hardware setup, I was ready to move on to building the actual drowsiness detector using computer vision techniques.
The drowsiness detector algorithm
The general flow of our drowsiness detection algorithm is fairly straightforward.
First, we’ll setup a camera that monitors a stream for faces:
Figure 3: Step #1 — Look for faces in the input video stream.
If a face is found, we apply facial landmark detection and extract the eye regions:
Figure 4: Step #2 — Apply facial landmark localization to extract the eye regions from the face.
Now that we have the eye regions, we can compute the eye aspect ratio (detailed here) to determine if the eyes are closed:
Figure 5: Step #3 — Compute the eye aspect ratio to determine if the eyes are closed.
If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long enough amount of time, we’ll sound an alarm to wake up the driver:
Figure 6: Step #4 — Sound an alarm if the eyes have been closed for a sufficiently long enough time.
In the next section, we’ll implement the drowsiness detection algorithm detailed above using OpenCV, dlib, and Python.
Building the drowsiness detector with OpenCV
To start our implementation, open up a new file, name it detect_drowsiness.py, and insert the following code:
# import the necessary packages
from scipy.spatial import distance as dist
from imutils.video import VideoStream
from imutils import face_utils
from threading import Thread
import numpy as np
import playsound
import argparse
import imutils
import time
import dlib
import cv2
Lines 2-12 import our required Python packages.
We’ll need the SciPy package so we can compute the Euclidean distance between facial landmarks points in the eye aspect ratio calculation (not strictly a requirement, but you should have SciPy installed if you intend on doing any work in the computer vision, image processing, or machine learning space).
We’ll also need the imutils package, my series of computer vision and image processing functions to make working with OpenCV easier.
If you don’t already have imutils installed on your system, you can install/upgrade imutils via:
$ pip install --upgrade imutils
We’ll also import the Thread class so we can play our alarm in a separate thread from the main thread to ensure our script doesn’t pause execution while the alarm sounds.
In order to actually play our WAV/MP3 alarm, we need the playsound library, a pure Python, cross-platform implementation for playing simple sounds.
The playsound library is conveniently installable via pip:
$ pip install playsound
However, if you are using macOS (like I did for this project), you’ll also want to install pyobjc, otherwise you’ll get an error related to AppKit when you actually try to play the sound:
$ pip install pyobjc
I only tested playsound on macOS, but according to both the documentation and Taylor Marks (the developer and maintainer of playsound), the library should work on Linux and Windows as well.
Note: If you are having problems with playsound, please consult their documentation as I am not an expert on audio libraries.
To detect and localize facial landmarks we’ll need the dlib library which is imported on Line 11. If you need help installing dlib on your system, please refer to this tutorial.
Next, we need to define our sound_alarm function, which accepts a path to an audio file residing on disk and then plays the file:
def sound_alarm(path):
    # play an alarm sound
    playsound.playsound(path)
We also need to define the eye_aspect_ratio function, which is used to compute the ratio of distances between the vertical eye landmarks and the distances between the horizontal eye landmarks:
def eye_aspect_ratio(eye):
    # compute the euclidean distances between the two sets of
    # vertical eye landmarks (x, y)-coordinates
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])

    # compute the euclidean distance between the horizontal
    # eye landmark (x, y)-coordinates
    C = dist.euclidean(eye[0], eye[3])

    # compute the eye aspect ratio
    ear = (A + B) / (2.0 * C)

    # return the eye aspect ratio
    return ear
The return value of the eye aspect ratio will be approximately constant when the eye is open. The value will then rapidly decrease towards zero during a blink.
If the eye is closed, the eye aspect ratio will again remain approximately constant, but will be much smaller than the ratio when the eye is open.
To visualize this, consider the following figure from Soukupová and Čech’s 2016 paper, Real-Time Eye Blink Detection using Facial Landmarks:
Figure 7: Top-left: A visualization of eye landmarks when the eye is open. Top-right: Eye landmarks when the eye is closed. Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio indicates a blink (Figure 1 of Soukupová and Čech).
On the top-left we have an eye that is fully open with the eye facial landmarks plotted. Then on the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time.
As we can see, the eye aspect ratio is constant (indicating the eye is open), then rapidly drops to zero, then increases again, indicating a blink has taken place.
In our drowsiness detector case, we’ll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the person has closed their eyes.
You can read more about blink detection and the eye aspect ratio in my previous post.
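To make the arithmetic concrete, here is a small worked example that feeds the eye_aspect_ratio function above with made-up (x, y) coordinates: one roughly "open" eye shape and one nearly "closed" one (the numbers are purely illustrative, not real landmark output):

# hypothetical landmark coordinates, ordered p1 through p6
open_eye = np.array([(0, 3.0), (2, 4.0), (5, 4.0), (7, 3.0), (5, 2.0), (2, 2.0)])
closed_eye = np.array([(0, 3.0), (2, 3.2), (5, 3.2), (7, 3.0), (5, 2.8), (2, 2.8)])

# open:   A = B = 2.0, C = 7.0  ->  EAR = (2.0 + 2.0) / (2 * 7.0) ~= 0.29
# closed: A = B = 0.4, C = 7.0  ->  EAR = (0.4 + 0.4) / (2 * 7.0) ~= 0.06
print(eye_aspect_ratio(open_eye))
print(eye_aspect_ratio(closed_eye))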
Next, let’s parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to facial landmark predictor")
ap.add_argument("-a", "--alarm", type=str, default="",
    help="path alarm .WAV file")
ap.add_argument("-w", "--webcam", type=int, default=0,
    help="index of webcam on system")
args = vars(ap.parse_args())
Our drowsiness detector requires one command line argument followed by two optional ones, each of which is detailed below:
--shape-predictor: This is the path to dlib’s pre-trained facial landmark detector. You can download the detector along with the source code to this tutorial by using the “Downloads” section at the bottom of this blog post.
--alarm: Here you can optionally specify the path to an input audio file to be used as an alarm.
--webcam: This integer controls the index of your built-in webcam/USB camera.
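For example, assuming the facial landmark model and an alarm sound are sitting in the working directory, an invocation could look like this (the file names here are placeholders for whatever you downloaded):

$ python detect_drowsiness.py --shape-predictor shape_predictor_68_face_landmarks.dat \
    --alarm alarm.wav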
Now that our command line arguments have been parsed, we need to define a few important variables:
# define two constants, one for the eye aspect ratio to indicate
# blink and then a second constant for the number of consecutive
# frames the eye must be below the threshold for to set off the
# alarm
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 48

# initialize the frame counter as well as a boolean used to
# indicate if the alarm is going off
COUNTER = 0
ALARM_ON = False
Line 48 defines the EYE_AR_THRESH. If the eye aspect ratio falls below this threshold, we’ll start counting the number of frames the person has closed their eyes for. If the number of frames the person has closed their eyes in exceeds EYE_AR_CONSEC_FRAMES (Line 49), we’ll sound an alarm.
Experimentally, I’ve found that an EYE_AR_THRESH of 0.3 works well in a variety of situations (although you may need to tune it yourself for your own applications).
I’ve also set the EYE_AR_CONSEC_FRAMES to be 48, meaning that if a person has closed their eyes for 48 consecutive frames, we’ll play the alarm sound.
You can make the drowsiness detector more sensitive by decreasing the EYE_AR_CONSEC_FRAMES — similarly, you can make the drowsiness detector less sensitive by increasing it.
Line 53 defines COUNTER, the total number of consecutive frames where the eye aspect ratio is below EYE_AR_THRESH.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES, then we’ll update the boolean ALARM_ON (Line 54).
The dlib library ships with a Histogram of Oriented Gradients-based face detector along with a facial landmark predictor — we instantiate both of these in the following code block:
# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
The facial landmarks produced by dlib are an indexable list, as I describe here:
Figure 8: Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset.
Therefore, to extract the eye regions from a set of facial landmarks, we simply need to know the correct array slice indexes:
# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
Using these indexes, we’ll easily be able to extract the eye regions via an array slice.
We are now ready to start the core of our drowsiness detector:
# start the video stream thread
print("[INFO] starting video stream thread...")
vs = VideoStream(src=args["webcam"]).start()
time.sleep(1.0)

# loop over frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize
    # it, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame
    rects = detector(gray, 0)
On Line 69 we instantiate our VideoStream using the supplied --webcam index.
We then pause for a second to allow the camera sensor to warm up (Line 70).
On Line 73 we start looping over frames in our video stream.
Line 77 reads the next frame, which we then preprocess by resizing it to have a width of 450 pixels and converting it to grayscale (Lines 78 and 79).
Line 82 applies dlib’s face detector to find and locate the face(s) in the image.
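As an aside (this variation is not in the script above): the second argument to dlib’s HOG face detector is the number of times to upsample the image before running detection, so passing 1 instead of 0 can help recover smaller or more distant faces at the cost of extra computation per frame. The one-line sketch below reuses the detector and gray variables from the script.

# Hypothetical variation, not part of the original script: upsample the
# frame once before running dlib's HOG face detector so that smaller
# faces are more likely to be found (slower per frame).
rects = detector(gray, 1)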
The next step is to apply facial landmark detection to localize each of the important regions of the face:
    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy
        # array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)

        # average the eye aspect ratio together for both eyes
        ear = (leftEAR + rightEAR) / 2.0
We loop over each of the detected faces on Line 85. In our implementation (specifically related to driver drowsiness), we assume there is only one face, the driver, but I left this for loop in here just in case you want to apply the technique to videos with more than one face.
For each of the detected faces, we apply dlib’s facial landmark detector (Line 89) and convert the result to a NumPy array (Line 90).
Using NumPy array slicing we can extract the (x, y)-coordinates of the left and right eye, respectively (Lines 94 and 95).
Given the (x, y)-coordinates for both eyes, we then compute their eye aspect ratios on Lines 96 and 97.
Soukupová and Čech recommend averaging both eye aspect ratios together to obtain a better estimation (Line 100).
We can then visualize each of the eye regions on our frame by using the cv2.drawContours function below. This is often helpful when we are trying to debug our script and want to ensure that the eyes are being correctly detected and localized:
        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
Finally, we are now ready to check to see if the person in our video stream is starting to show symptoms of drowsiness:
        # check to see if the eye aspect ratio is below the blink
        # threshold, and if so, increment the blink frame counter
        if ear < EYE_AR_THRESH:
            COUNTER += 1

            # if the eyes were closed for a sufficient number of
            # frames, then sound the alarm
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                # if the alarm is not on, turn it on
                if not ALARM_ON:
                    ALARM_ON = True

                    # check to see if an alarm file was supplied,
                    # and if so, start a thread to have the alarm
                    # sound played in the background
                    if args["alarm"] != "":
                        t = Thread(target=sound_alarm,
                            args=(args["alarm"],))
                        # note: corrected from `t.deamon` so the
                        # daemon flag is actually set on the thread
                        t.daemon = True
                        t.start()

                # draw an alarm on the frame
                cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

        # otherwise, the eye aspect ratio is not below the blink
        # threshold, so reset the counter and alarm
        else:
            COUNTER = 0
            ALARM_ON = False
On Line 111 we make a check to see if the eye aspect ratio is below the “blink/closed” eye threshold, EYE_AR_THRESH.
If it is, we increment COUNTER, the total number of consecutive frames where the person has had their eyes closed.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES (Line 116), then we assume the person is starting to doze off.
Another check is made on Lines 118 and 119 to see if the alarm is already on; if it’s not, we turn it on.
Lines 124-128 handle playing the alarm sound, provided an --alarm path was supplied when the script was executed. We take special care to create a separate thread responsible for calling sound_alarm to ensure that our main program isn’t blocked until the sound finishes playing.
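Note that the original listing read t.deamon = True, which silently creates a new attribute rather than setting the thread’s daemon flag; the corrected spelling, t.daemon, is used in the code block above. As a minimal sketch of the same non-blocking pattern (it reuses sound_alarm from the script; the helper name fire_alarm_async is hypothetical):

from threading import Thread

def fire_alarm_async(alarm_path):
    # run sound_alarm in a background daemon thread so the main
    # frame-processing loop keeps running while the sound plays
    t = Thread(target=sound_alarm, args=(alarm_path,))
    t.daemon = True  # don't keep the process alive just for the alarm
    t.start()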
Lines 131 and 132 draw the text DROWSINESS ALERT! on our frame. Again, this is often helpful for debugging, especially if you are not using the playsound library.
Finally, Lines 136-138 handle the case where the eye aspect ratio is larger than EYE_AR_THRESH, indicating the eyes are open. If the eyes are open, we reset COUNTER and ensure the alarm is off.
The final code block in our drowsiness detector handles displaying the output frame to our screen:
        # draw the computed eye aspect ratio on the frame to help
        # with debugging and setting the correct eye aspect ratio
        # thresholds and frame counters
        cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # show the frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
To see our drowsiness detector in action, proceed to the next section.
Testing the OpenCV drowsiness detector
To start, make sure you use the “Downloads” section below to download the source code + dlib’s pre-trained facial landmark predictor + example audio alarm file utilized in today’s blog post.
I would then suggest testing the detect_drowsiness.py script on your local system in the comfort of your home/office before you start to wire up your car for driver drowsiness detection.
In my case, once I was sufficiently happy with my implementation, I moved my laptop + webcam out to my car (as detailed in the “Rigging my car with a drowsiness detector” section above), and then executed the following command:
$ python detect_drowsiness.py \
    --shape-predictor shape_predictor_68_face_landmarks.dat \
    --alarm alarm.wav
I have recorded my entire drive session to share with you — you can find the results of the drowsiness detection implementation below:
Note: The actual alarm.wav file came from this website, credited to Matt Koenig.
As you can see from the screencast, once the video stream was up and running, I carefully started testing the drowsiness detector in the parking garage by my apartment to ensure it was indeed working properly.
After a few tests, I then moved on to some back roads and parking lots where there was very little traffic (it was a major holiday in the United States, so there were very few cars on the road) to continue testing the drowsiness detector.
Remember, driving with your eyes closed, even for a second, is dangerous, so I took extra special precautions to ensure that the only person who could be harmed during the experiment was myself.
As the results show, our drowsiness detector is able to detect when I’m at risk of dozing off and then plays a loud alarm to grab my attention.
The drowsiness detector is even able to work in a variety of conditions, including direct sunlight when driving on the road and low/artificial lighting while in the concrete parking garage.
Summary
In today’s blog post I demonstrated how to build a drowsiness detector using OpenCV, dlib, and Python.
Our drowsiness detector hinged on two important computer vision techniques:
Facial landmark detection
Eye aspect ratio
Facial landmark prediction is the process of localizing key facial structures on a face, including the eyes, eyebrows, nose, mouth, and jawline.
Specifically, in the context of drowsiness detection, we only needed the eye regions (I provide more detail on how to extract each facial structure from a face here).
Once we have our eye regions, we can apply the eye aspect ratio to determine if the eyes are closed. If the eyes have been closed for a sufficiently long period of time, we can assume the user is at risk of falling asleep and sound an alarm to grab their attention. More details on the eye aspect ratio and how it was derived can be found in my previous tutorial on blink detection.
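For reference, the quantity computed by eye_aspect_ratio above corresponds to the following expression, where p_1 through p_6 are the six eye landmarks in order (this is a restatement of the formula the code implements, not a new result):

\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}

The numerator measures the vertical opening of the eye while the denominator normalizes by its width, so the ratio stays roughly constant while the eye is open and falls toward zero as it closes.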
If you’ve enjoyed this blog post on drowsiness detection with OpenCV (and want to learn more about computer vision techniques applied to faces), be sure to enter your email address in the form below — I’ll be sure to notify you when new content is published here on the PyImageSearch blog.
Downloads:
If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!
The post Drowsiness detection with OpenCV appeared first on PyImageSearch.
0 notes
mavwrekmarketing · 8 years ago
Link
The apparent coziness between the Trump administration and the Russian government could be the nation’s biggest scandal since Watergate, legendary TV journalist Dan Rather warned Tuesday.
“Watergate is the biggest political scandal of my lifetime, until maybe now,” Rather wrote in a Facebook message that quickly went viral.
“On a 10 scale of armageddon for our form of government, I would put Watergate at a 9,” added Rather, who won acclaim for his coverage of Watergate as a White House correspondent. “This Russia scandal is currently somewhere around a 5 or 6, in my opinion, but it is cascading in intensity seemingly by the hour.”
Rather’s analysis was a response to the resignation of President Donald Trump’s national security adviser, Mike Flynn, late Monday and the questions it has raised about the Trump administration’s ties to Russia.
Flynn resigned after a series of reports revealed he had not been honest when he denied discussing sanctions on a December phone call with the Russian ambassador to the United States. The Washington Post on Monday, in an article that prompted his resignation, reported that then-acting Attorney General Sally Yates had informed the White House weeks ago that Flynn had not been honest with senior administration officials about his conversation with the Russian ambassador and may have made himself vulnerable to extortion by the Russian government.
The disclosures provide fodder for Trump critics like Rather, who are concerned that the new administration is too chummy with the Russian government to properly advocate for American interests. U.S. intelligence agencies believe that Russia ordered the hacking of Democratic campaign emails with the goal of electing Trump last year. As president, Trump has refused to criticize Russian President Vladimir Putin and indicated that he would like to mend relations with the country.
On Tuesday, Rather was one of many journalists asking how long Trump had known about Flynn’s deception and why he did not act sooner.
“We are still less than a month into the Trump Presidency, and many are asking that question made famous by Tennessee Senator Howard Baker those many years ago: What did the President know, and when did he know it?” Rather wrote, quoting Baker’s question about President Richard Nixon during the Senate’s investigation of the 1972 break-in at Democratic National Committee headquarters in the Watergate building.
Keith Bedford / Reuters
Broadcast journalist Dan Rather has developed a new following on Facebook thanks to his writings about the Trump presidency.
Trump did not discipline Flynn sooner because he wanted to give the retired lieutenant general due process, and the White House counsel maintained that Flynn had not violated the law, White House Press Secretary Sean Spicer said Tuesday. Trump ultimately demanded Flynn’s resignation over Flynn’s denial to Vice President Mike Pence that he had discussed sanctions on the call, which led Pence to repeat the lie on television, according to Spicer.
Rather called for an independent investigation of the matter, arguing that the administration and the Republican Congress have forfeited their right to be trusted on these matters.
“Damn the lies, full throttle forward on the truth,” he concluded.
Rather, best known for serving as host of CBS Evening News from 1981 to 2005, has developed a large and devoted Facebook following for his ominous screeds about the dangers of the Trump presidency. Rather also hosts an interview show on cable channel AXS TV.
Read more: http://ift.tt/2lJ9x1W
    The post Dan Rather: Trump’s Russia Scandal Could Rival Watergate appeared first on MavWrek Marketing by Jason
http://ift.tt/2lOKvdu
0 notes