An attempt at the Ghidorahs was made. Part of @flamingkingoftheskies ' PJO/Godzilla AU Pssst, by the way, check 'em out. Their stuff is cool
#But there can only be two prae-#Shut up and vote for us.#godzilla#pjo#i know it has less sketches than the previous sketch buuut#i did this at class and Im currently a bit too tired to draw more#also the golden fleece jackets are what gives them their regenerating powers#ichi and ni are probably dying cuz they decided to wear long sleeved jackets in freaking California#but like hell are they gonna take 'em off#for once Kevin/San has the brain cells#headcanon number two: All three of them are running together for praetor
WWE Vengeance 2011
Date: October 23, 2011.
Location: AT&T Center in San Antonio, Texas.
Attendance: 8,000.
Commentary: Michael Cole, Jerry Lawler, and Booker T.
Results:
1. WWE Tag Team Championship Match: Air Boom (Kofi Kingston and Evan Bourne) (champions) defeated Dolph Ziggler and Jack Swagger (with Vickie Guerrero).
2. WWE United States Championship Match: Dolph Ziggler (champion) (with Jack Swagger and Vickie Guerrero) defeated Zack Ryder.
3. WWE Divas Championship Match: Beth Phoenix (champion) defeated Eve Torres.
4. Sheamus defeated Christian.
5. The Miz and R-Truth defeated Triple H and CM Punk.
6. Randy Orton defeated Cody Rhodes.
7. WWE World Heavyweight Championship Match: Mark Henry (champion) fought Big Show to a no contest after the ring collapsed.
8. Last Man Standing Match for the WWE Championship: Alberto Del Rio (champion) (with Ricardo Rodriguez) defeated John Cena.
My Review
With the trauma and depression brought on by quarantine, I’ve been unsure if I wanted to write up a review this month. Can you blame me? These times have been taxing on all of us. Just getting out of bed in the morning feels like I’m Ric Flair trying to go the distance at the 1992 Royal Rumble. But since writing has always been my outlet, however difficult it may be to conjure up the words, I’ve decided to press forward. Plus, I have plenty of free time right now, so why not watch and review more wrestling? Unfortunately, I’ve chosen to review Vengeance 2011, a completely unremarkable show that’s difficult to write enough words about. Oh joy.
In the early 2010s, WWE put on several pay-per-views that wound up as one-off events, like Fatal 4 Way or Capitol Punishment. For reasons beyond me, it felt like they were trying to break from the yearly PPV calendar of the previous decade, which included shows like Judgment Day and No Mercy. In October 2011, they put on two pay-per-views, the first being Hell in a Cell. The second brought back Vengeance, a relic B show from the 2000s. It was a weird decision considering the last Vengeance in 2007 holds a degree of infamy, if only because it took place on the same weekend when one of their performers committed a horrific double murder-suicide. And then the 2011 show came and went, and Vengeance disappeared off WWE’s PPV calendar once again. Weird, indeed.
Vengeance 2011 isn’t a bad show; it’s just a wholly forgettable one. It screams of WWE trying to milk more money out of its fans’ pockets than anything else. If it were the mid ‘90s, it would be a streamlined In Your House that nobody ever talked about again. I think the main reason it fails to stand out is that it seems stuck in the middle of a transition in focus. The biggest storyline in the months before this show was the Summer of Punk, but it’s late October now, and that storyline has since spiraled into some nonsense about Triple H and Kevin Nash. Punk’s game-changing pipe bomb back in July seems like a distant memory. In fact, Punk seems like a supporting character in his own storyline. Focus is now shifting toward John Cena’s battles with the team of the Miz and R-Truth, which will parlay into Cena’s WrestleMania-headlining feud with The Rock. As a result, there isn’t a really “big” moment for either storyline that sticks with you. That much of the mid-card fails to stand out in any way doesn’t help matters either.
If the show is memorable for anything, it’s definitely for its last two matches. Mark Henry and Big Show have a surprisingly great match which highlights the growth of each performer. Think about it: in a previous era, Henry vs. Show would’ve been a throwaway match on Raw is War circa 1999. The match ends in a ring collapse spot, which has been done several times before, but there’s a twist here: it sets up the next match between Cena and Alberto Del Rio. It’s a nifty way of adding another dimension to what could’ve been a pretty standard Last Man Standing match. I mean, can you think of any main events held in a collapsed ring? I certainly can’t, though I’m sure some fucked-up indy may be happy to claim that. Cena vs. Del Rio is pretty fun, too. At the least, it gives us the image of Cena getting slammed through the big letter V in the entranceway, so job well done.
It’s a shame the rest of this show doesn’t stand out, though. I literally just watched it and feel like I dreamt half of it. If you’re in quarantine like me, perhaps you’ll find the curiosity and dredge up this show from the dusty corners of the WWE Network archive. Hey, as long as you’re staying safe, right?
My Random Notes
So weird to think there was a point within the last decade where Triple H still had hair, but that’s also because the image of Paul Levesque, bald NXT dad, has been burned so deeply into my brain by now.
Speaking of hair, CM Punk is sporting his slicked-back look here. Such a handsome devil. I still mourn over how he dropped it in favor of looking like knockoff Wolverine.
Right out of the gate, I’m reminded heel Michael Cole is still a thing by the time of this show. How long did that last, by the way? Early 2010s WWE is such a blur to me. You could tell me his heel schtick lasted for years and I’d believe you.
Okay, so the Triple H walkout storyline: Even with the nice little video package, I’m so lost on what the hell it’s supposed to be about. Is Triple H suddenly a sympathetic babyface now? How did they move on from Cena/Punk/Vince to this? I’m sure I’m not the only one who feels this way, but still.
I had the unfortunate timing of watching this around the same time Zack Ryder was released by the WWE. On this show, he’s just beginning his initial push that will infamously turn to shit, so it’s really bittersweet to see him look so excited here knowing how that push will eventually turn out.
It was much to my surprise to learn that one of Cody Rhodes’ bag boys here is Oney Lorcan. He has not aged a day since then. He really has Arn Anderson syndrome of perennially looking close to 40.
Completely forgot Air Boom was ever a thing. So much of Kofi’s pre-New Day career escapes my memory, to be honest. The time between his feud with Orton and the start of New Day all just blends into one.
Mark Henry is in the midst of his highly praised “Hall of Pain” run here. I love the part after his match where he’s stumbling around in a daze, yelling out to no one in particular how he’s still the champion. It’s good shit, pal.
Startup Stories: Revolutionizing Driving AI with Netradyne
David Julian has quite the resume. Once an engineer at NASA—more specifically, a communication systems engineer—and a signal specialist in the mobile phone division at Qualcomm, he now serves as the enthusiastic CTO of San Diego startup Netradyne. We were curious how Julian has navigated the tech startup world, so we spent an hour parsing his inspiration, his strategy, and his vision for new technologies—most notably artificial intelligence (AI). He also offers budding tech entrepreneurs some sage advice as they launch in the sea of 2018 tech innovation.
Your personal experience in technology is deep, and for most people, too technical to understand fully. Can you boil down your resume into a digestible summary?
In short, I create algorithms to solve hard problems—mostly by capturing raw signals and processing them into sophisticated output.
As a graduate student, I worked at NASA’s Jet Propulsion Laboratory, which is primarily responsible for unmanned deep-space missions. I also worked on the Galileo and Cassini missions to Jupiter and Saturn—mostly the communication of data sent back to NASA stations on Earth.
From there, I got my PhD in information theory from Stanford. That discipline tentacled out into a lot of different fields, underscored by how you can collect information and use it for best possible outcomes. As a result, I’ve worked in video compression, communication, deep learning, and AI.
After getting my PhD, I went to work at Qualcomm, where I was involved in the mobile phone revolution. We were working on building sophisticated functions “under the hood” that made things simpler for the end user. It was all about creating a powerful, seamless experience.
It was at Qualcomm that you started considering new applications of AI and signal processing, right?
Yes. During my last years at Qualcomm, I spent a lot of time on deep learning, looking at ways to automate the cell phone experience. But my colleagues and I also noticed that cameras were becoming ubiquitous and cheap—while also very powerful. So we asked ourselves how we could combine deep learning with powerful, accessible cameras.
That really spells out the genesis of Netradyne. What was the leap from that early ideation to actual problem-solving and company-building?
When we first started out, we were looking at a number of ways to combine AI and camera technology. We looked at drones for roof inspections, for example, and the possibilities of the same technologies at use in autonomous driving. But we felt that real use cases for cameras and AI in autonomous driving were probably 10-20 years out and would progress slowly. What we saw instead was that commercial driver safety was an area that really needed some immediate attention. The existing approaches were very manual—people in a command or call center watching videos recorded using rudimentary hard-stop and hard-start triggers. It was tedious and time-consuming, and productivity gains were clearly needed.
David Julian, CTO of Netradyne
The platform you created to solve that problem is called Driveri. How does it differ from the recording process that was relatively standard in commercial driving fleets?
When we first started talking to fleet managers, it was clear that they were only capturing a few minutes here and there. We wanted to capture continuous video—both as a way to document safety issues more effectively, and to recognize drivers for good performance.
What did your competition look like at launch?
Our competition consisted of two main companies—the ones who had created the technology around hard-start and hard-stop recording. But we didn’t see them as competition as much as we did a boon for Netradyne. They educated the industry about what cameras could do for commercial driving, which primed the market for our entry in 2015.
By the time we entered the market, many fleets realized they needed a better recording solution. While most managers were primarily interested in safety, we saw Driveri as a way to improve their retention as well. Not many people know this, but commercial fleets suffer from an 87% turnover rate, so retention of good drivers is critical. With continuous recording, Driveri gave fleet managers the opportunity to reward high-performing safe drivers and keep them in the fleet.
All of this awareness was possible because of the two companies that came before us.
Let’s talk more about the AI component of Driveri. What exactly does this technology do with the camera hardware?
First of all, let me say that AI is very broad. It encompasses both what we’re seeing in technology today and a lot of functionality that’s aspirational. We’re using it primarily for signal processing—taking raw signals, in our case video, and outputting processes that follow from an interpretation of those raw signals.
The AI—or algorithm—we’ve created is able to capture and recognize things like lanes, painted lines, car movement, environmental obstacles, and other impactful elements in a driver’s surroundings. But it can also tell when a driver is distracted or drowsy. That information can then be passed on to a command center where managers can take action.
What’s even more incredible is that the continuous recording allows the AI to recognize entire contexts for traffic decisions. For instance, a quick video that captures a commercial truck tailgating is likely to get a driver dinged for poor driving. But what a few minutes of video doesn’t capture is the car that cut off that truck right before recording began—an indication that the driver was not at fault, but had to compensate in the moment for someone else’s bad driving decision.
You’ve used the term “deep learning” before as well. What does that mean?
Deep learning is akin to a brain’s neural network. It allows a platform, like Driveri, to improve its understanding of signals to a point that surpasses even human analysis. For example, standard programming could “teach” a platform to recognize one kind of car in one specific setting. But what about the effects of changing light or weather? Deep learning allows a platform to recognize a defined object even with hundreds of variables.
That’s impressive, but how do you “teach” a platform to do that?
You put a lot of images or video into the platform, after programming in a definition for the object you want it to identify. Then, you ask it to tell you which of the images or video scenes include that object. Initially, it will be correct maybe 15% or 20% of the time. But as you correct it, it learns to recognize the object in a variety of contexts.
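The correction loop Julian describes is classic supervised learning: the model guesses, a human corrects it, and each correction nudges the model toward the right answer. A minimal, self-contained sketch of that idea—a toy perceptron on synthetic 2-D points standing in for images; the model, data, and learning rate are all illustrative assumptions, not Netradyne's actual pipeline:

```python
import random

random.seed(0)

def label(x, y):
    """Ground truth: is the 'object' present? (a toy linear rule)"""
    return 1 if 2 * x + y > 1 else 0

# Synthetic "images" as 2-D points; drop points too close to the
# decision boundary so the toy model can cleanly converge.
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(300)]
samples = [(x, y) for x, y in samples if abs(2 * x + y - 1) > 0.3]

# Naive starting model: zero weights, so it guesses "absent" every time.
w0, w1, b = 0.0, 0.0, 0.0

def predict(x, y):
    return 1 if w0 * x + w1 * y + b > 0 else 0

def accuracy():
    return sum(predict(x, y) == label(x, y) for x, y in samples) / len(samples)

before = accuracy()

# "Correct it": every wrong answer nudges the weights toward the true
# label -- the repeated human-feedback loop described above.
for _ in range(300):
    for x, y in samples:
        err = label(x, y) - predict(x, y)
        w0 += 0.1 * err * x
        w1 += 0.1 * err * y
        b += 0.1 * err

after = accuracy()
print(f"accuracy before corrections: {before:.0%}, after: {after:.0%}")
```

The real systems use deep neural networks on video rather than a two-weight linear model, but the feedback dynamic—poor at first, accurate after enough corrections—is the same.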
One of the reasons we’re able to do this is because of the availability of data. There’s a lot of video from commercial fleets already available, and it’s growing every day. The more data we get, the smarter our algorithm becomes. Plus, it’s connected constantly to the cloud via cellular networks, so it’s continuously learning.
How do you determine a baseline for recognition before rolling a feature out to a fleet?
It depends. In many cases, we get to about 90-95% precision before we make new functionality live for our customers. Then, we ask for their feedback as they use it in their fleets. Eventually, we get that precision up to 99% or more—far better than any human being could do.
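The 90-95% figure is a precision gate: of everything the system flags, what fraction turns out to be correct on human review. A quick illustrative check—the batch of labels and the rollout threshold here are made up for the example, not Netradyne's actual numbers:

```python
def precision(predictions, truths):
    """Precision = true positives / all positive predictions."""
    tp = sum(p and t for p, t in zip(predictions, truths))
    predicted_pos = sum(predictions)
    return tp / predicted_pos if predicted_pos else 0.0

# Hypothetical review batch: 1 = "event flagged" / "event was real".
preds  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
truths = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

p = precision(preds, truths)
print(f"precision: {p:.0%}")  # 9 of the 10 flagged events were real -> 90%

ROLLOUT_THRESHOLD = 0.90  # illustrative gate, per the 90-95% figure above
print("ready for fleet rollout" if p >= ROLLOUT_THRESHOLD else "keep training")
```

Note that precision ignores missed events (the `1` in `truths` that was never flagged); a full evaluation would track recall alongside it.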
What about concerns around cybersecurity and data sharing?
We take that very seriously. All of our devices securely communicate with the cloud to ensure data is safe.
We have had to address privacy issues with our inward-facing cameras, however. In addition to cameras that record traffic and road conditions, Driveri also leverages internal cameras to watch drivers. But we make the collected video data as anonymous as possible, and we do have auto shut-off features for long-haul drivers who sleep in their cabs.
I spoke with leadership guru Kevin Kruse a while ago, and asked what challenges we would be facing in American business over the next 10 years. He called out AI as a particularly troubling development—primarily because it threatened to make human workers superfluous. Can you speak to that?
There will be implications along those lines. But in my mind, what we’re creating is AI functionality that automates mundane, tedious tasks that people don’t want to do—or can’t do as effectively as a platform that doesn’t get tired or distracted.
It’s important to note that Netradyne is absolutely pro-driver. Our technology is designed to give them the capabilities to improve their safety and make the commercial driver environment more comfortable and effective.
You’ve used the term “Industrial Internet of Things (IIoT)” in your description of Driveri. People are generally familiar with IoT in the form of home-linked speakers and thermostats and such, but how is IIoT playing out right now? Where can we see evidence of this?
There are a ton of industrial applications—temperature sensors and manufacturing sensors; robots on assembly lines; and parking meters are all examples of this kind of technology at play.
I have noticed that IoT suffers from competition. Companies like Google and Apple want to control your house entirely, so their technologies are intentionally incompatible; the hope is that you buy all Apple or all Google products to complete your IoT environment. Does IIoT suffer from the same pitfalls?
With industrial scale, there’s more planned deployment, so there’s more collaboration. We want to create a completely curated, seamless experience. Companies involved in IIoT know that, and are therefore more willing to collaborate on a solution for a customer.
Alright, you promised us some tips for tech and AI startups, so let’s dig into those. What wisdom would you offer 2018 tech entrepreneurs?
1. Get A-team players. All of our team members came to the table with a lot of experience, and they fit together well—in all of the areas that are central to Netradyne’s mission. Without them, our work would not be possible.
2. Know the problem before you create a solution. Rather than jumping into creating right away, we talked to commercial fleets about the problems they were facing. There was no sense building a solution when we didn’t know what it should do. Many eager tech startups fall into the innovation trap—building a product first and forcing it into a solution. That never works.
3. As you experiment and test, be pragmatic in your use of materials. We used off-the-shelf components for solution-testing early on. That made it cost-effective and easy to experiment while we were still learning what worked and what didn’t.
4. Use your investment dollars wisely, especially when hiring your team. Netradyne hired a team that was split between the U.S. and India, which was less expensive than hiring a completely U.S.-based team. We also do a lot of multitasking; everyone here wears multiple hats. There’s an added benefit to this, too: everyone has a much wider view of company strategy and direction, instead of being tunnel-focused on a single problem.
5. Distribute information carefully, consistently, and clearly to the entire company. Not only do we have frequent meetings to discuss projects, but we are very clear about who should be involved in which phases of development. That ensures that the right people—and ONLY the right people—are involved in key decisions.
6. Automate wherever possible. For example, instead of paying five people to handle our billing, we invested in an enterprise-level Salesforce subscription, which handles all of our order management, invoicing, and so on. This is shared across several dashboards (again, distributing information clearly) so that everyone who needs to know the status of our sales and finances does.
7. Invest in (good) PR. Especially in the tech world, it’s very easy for those on the front line to confuse the public—and potential customers—with too much jargon. Hire trusted PR teams to build lasting relationships in the right industries, especially where relationships and trust are key to closing sales.
8. Get the customer involved. There’s no better way to create an advocate for your brand, product, or platform than to give a customer the opportunity to weigh in on development and execution. Get their fingerprints on your work and they will not only be educated about how you develop, but will be more excited about your brand. Without a doubt, they will become willing advocates and spread the word about what you’re doing.
9. Intellectual property is important, so make sure it’s protected. File patents quickly, and make sure contracts with outside sources (i.e., tech collaborators) are set so you can iterate quickly.
10. Be patient. You’re going to trip and stumble. Celebrate the wins and learn from the losses. As quickly as tech is moving these days, it’s important to take your time and do things right. Keep your focus on what you want to improve, and see value from the customer’s perspective. That’s the only way to build a product or a platform that will have lasting impact.
11. Be intentional about building company culture. This goes for any company, but tech startups are not all about the product. The people are what make your company successful. At Netradyne, we tried to keep this in mind from the very first day. We intentionally created an open office environment to encourage collaboration; we implement fun days for relaxation and team interaction; and we promote continuing education by encouraging team members to attend conferences and share research in weekly roundtables.
Last, but certainly not least: What are you most proud of so far in your work at Netradyne?
I think there are several things. Our mission to focus on the positive aspects of commercial driving has been rewarding. I’m happy to be serving that underserved population, especially since the retention issue is so big.
Also, I’m proud of the speed at which we’ve actually put together a world-class solution that spans different technologies, from AI to our cameras and cloud structure. It’s amazing how much our team has done in such a short time to create visible change.
For more information about the impactful work of Netradyne, visit netradyne.com.
The post Startup Stories: Revolutionizing Driving AI with Netradyne appeared first on Early To Rise.