#also i used lens blur in camera raw filter not bad not bad
elderwisp · 7 months ago
Text
𝔄𝔫𝔡 𝔪𝔶 𝔩𝔬𝔳𝔢 𝔦𝔰 𝔫𝔬 𝔤𝔬𝔬𝔡 𝔄𝔤𝔞𝔦𝔫𝔰𝔱 𝔱𝔥𝔢 𝔣𝔬𝔯𝔱𝔯𝔢𝔰𝔰 𝔱𝔥𝔞𝔱 𝔦𝔱 𝔪𝔞𝔡𝔢 𝔬𝔣 𝔶𝔬𝔲 𝔅𝔩𝔬𝔬𝔡 𝔦𝔰 𝔯𝔲𝔫𝔫𝔦𝔫𝔤 𝔡𝔢𝔢𝔭 𝔖𝔬𝔯𝔯𝔬𝔴 𝔱𝔥𝔞𝔱 𝔶𝔬𝔲 𝔨𝔢𝔢𝔭
303 notes · View notes
jennaoliver · 4 years ago
Text
Landscape Photography Research
In order to create our persona's content we must be able to work like they do. To do this I began some research into how I could become a better landscape photographer. I found these tips really useful and insightful, as they cover things I definitely would not have picked up on myself. Reading them has made me excited to try and experiment, especially with lens filters, which I didn't even know existed.
Location
You should always have a clear idea of where you are planning to go, and at what time of the day you will be able to capture the best photograph.
Have patience
The key is to always allow yourself enough time at a location, so that you are able to wait if you need to. Forward planning can also help you hugely, so make sure to check weather forecasts before leaving, maximizing your opportunity for the weather you require.
Don't be lazy
Don’t rely on easily accessible viewpoints, that everyone else can just pull up to and see. Instead, look for those unique spots (providing they are safe to get to) that offer amazing scenes, even if they require determination to get there.
Use the best light
The best light for landscape photography is early in the morning or late in the afternoon, with the midday sun offering the harshest light. But part of the challenge of landscape photography is being able to adapt and cope with different lighting conditions; great landscape photos can be captured even on stormy or cloudy days. The key is to use the best light as much as possible, and to adapt the look and feel of your photos to the light you actually have.
Carry a tripod 
Photography in low light conditions (e.g. early morning or early evening) without a tripod requires an increase in ISO to avoid camera shake, which in turn means more noise in your images. If you want to capture a scene using a slow shutter speed or long exposure (for example, to capture the movement of clouds or water), then without a tripod you simply won't be able to hold the camera steady enough to avoid blurred images from camera shake.
Maximize your depth of field 
Usually landscape photos require the vast majority of the photo to be sharp (the foreground and background) so you need a deeper depth of field (f/16-f/22) than if you are taking a portrait of someone. A shallower depth of field can also be a powerful creative tool if used correctly, as it can isolate the subject by keeping it sharp, while the rest of the image is blurred. As a starting point, if you are looking to keep the majority of the photo sharp, set your camera to Aperture Priority (A or Av) mode, so you can take control of the aperture. Start at around f/8 and work up (f/11 or higher) until you get the desired effect. 
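To make the aperture/sharpness trade-off more concrete, it can be worked out with the hyperfocal distance: focus there and everything from roughly half that distance to infinity stays acceptably sharp. Below is a quick Python sketch I could use to check this; the 0.03 mm circle of confusion is a common full-frame assumption, not something taken from the tips themselves.

```python
def hyperfocal_distance_m(focal_length_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance in metres: H = f^2 / (N * c) + f."""
    h_mm = focal_length_mm ** 2 / (aperture * coc_mm) + focal_length_mm
    return h_mm / 1000.0

# A 24 mm lens stopped down from f/8 to f/16:
for n in (8, 11, 16):
    h = hyperfocal_distance_m(24, n)
    print(f"f/{n}: focus at {h:.2f} m keeps ~{h / 2:.2f} m to infinity sharp")
```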
Think about composition 
There are several techniques that you can use to help your composition (such as the rule of thirds), but ultimately you need to train yourself to be able to see a scene, and analyze it in your mind. With practice this will become second nature, but the important thing is to take your time.
Use neutral density and polarizing filters
A polarizing filter can help by minimizing the reflections and also enhancing the colours (greens and blues). But remember, polarizing filters often have little or no effect on a scene if you’re directly facing the sun, or it’s behind you. For best results position yourself between 45° and 90° to the sun.
One of the other big challenges is getting a balanced exposure between the foreground, which is usually darker, and a bright sky. Graduated ND filters help to compensate for this by darkening the sky, while keeping the foreground brighter. This can be replicated in post-production, but it is always best to try and capture the photo as well as possible in-camera.
Use the histogram 
A histogram is a simple graph that shows the tonal distribution of your image. The left side of the graph represents dark tones and the right side represents bright tones.
If you find that the majority of the graph is shifted to one side, this is an indication that your photo is too light or dark (overexposed or underexposed). This isn’t always a bad thing, and some images work perfectly well either way. However, if you find that your graph extends beyond the left or right edge, this shows that you have parts of the photo with lost detail (pure black areas if the histogram extends beyond the left edge and pure white if it extends beyond the right edge). 
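As a rough illustration of what the camera is doing, here is a small Python/NumPy sketch that builds a histogram from an 8-bit image and flags clipped shadows or highlights; the 1% threshold is my own illustrative choice, not a camera standard.

```python
import numpy as np

def exposure_report(gray, clip_fraction=0.01):
    """Summarise an 8-bit grayscale image's histogram and flag clipping."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    shadows_clipped = hist[0] / total       # fraction of pure black pixels
    highlights_clipped = hist[255] / total  # fraction of pure white pixels
    return {
        "mean_level": float(gray.mean()),
        "shadow_clipping": shadows_clipped > clip_fraction,
        "highlight_clipping": highlights_clipped > clip_fraction,
    }

# Example: a synthetic, mostly overexposed frame
frame = np.clip(np.random.normal(230, 40, (1000, 1500)), 0, 255).astype(np.uint8)
print(exposure_report(frame))
```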
Shoot in RAW format 
RAW files contain much more detail and information, and give far greater flexibility in post-production without losing quality.
Use a wide angle lens
Wide-angle lenses are preferred for landscape photography because they can show a broader view, and therefore give a sense of wide open space. They also give a greater apparent depth of field at any given aperture. Taking an image at f/16 will make both the foreground and background sharp.
Capture movement
If you are working with moving water you can create a stunning white-water effect by choosing a long exposure. In bright daylight you must use an ND filter to reduce the amount of light reaching the sensor, so the camera can use a longer shutter time. Always use a tripod for this kind of shot so that the rest of your image remains sharp.
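The arithmetic behind this is simple: each stop of neutral density doubles the shutter time needed for the same exposure. A quick Python sketch with illustrative numbers (not from the source tips):

```python
def nd_exposure_time(base_shutter_s, nd_stops):
    """Each stop of ND doubles the exposure time needed for the same brightness."""
    return base_shutter_s * (2 ** nd_stops)

# A daylight exposure of 1/125 s behind 6-stop and 10-stop ND filters:
for stops in (6, 10):
    t = nd_exposure_time(1 / 125, stops)
    print(f"{stops}-stop ND: {t:.2f} s")  # ~0.51 s and ~8.19 s
```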
______________
References:
Dadfar, Kav. "12 Tips to Help You Capture Stunning Landscape Photos." Digital Photography School. Last modified June 13, 2016. https://digital-photography-school.com/12-tips-to-help-you-capture-stunning-landscape-photos/.
Kun, Attila. "Landscape Photography Tips." ExposureGuide.com. Last modified November 25, 2019. https://www.exposureguide.com/landscape-photography-tips/.
0 notes
imageclippinglove · 5 years ago
Text
Discover Why You Shouldn't Consider Noise Your Enemy, and Why You Might Even Add It to Your Pictures
Whenever we talk about noise we usually talk in pejorative terms; we treat noise as our enemy, something ugly to be avoided. This hatred of noise can make us obsessive about it, chasing shots that are completely free of noise. I'm sorry to break it to you, but a shot without noise does not exist. Noise is an inherent part of photography.
But there is no need to worry! What we must learn is to control the noise, to handle it as we please and, why not, to add it as an aesthetic motif in our photographs.
Remembering What Noise Is
We already explained what noise is in our article "ISO in Photography: What It Is and How It Is Used", but let's refresh your memory a bit.
As a general rule, the higher the ISO you shoot at, the greater the amount of noise that appears in your picture. But what exactly is noise? Noise is that kind of grain that appears mostly in the darkest areas of the photo. To understand noise and how it is generated, we first have to understand how image capture works in our camera.
The sensor of our camera is composed of a mesh of millions of photosensitive cells that receive the light entering through the lens. When light hits them, these cells generate an electrical signal, which is processed by the camera and converted into digital data. Each of these cells produces one pixel of the final photograph.
However, this electrical signal carries not only the data of the captured image but also random fluctuations generated by the electronics themselves. These random fluctuations show up in the image as noise.
When we increase the ISO we are amplifying the electrical signal read from the photosensitive cells, but at the same time we are also amplifying that random data. That's why the higher we raise the ISO, the more noise appears in our picture.
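To picture that, here is a small NumPy sketch that models each photosite as a true signal plus random read noise and then applies a gain for the ISO increase; the noise figures are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(true_signal, iso_gain, read_noise=2.0):
    """Simulate one exposure: signal plus random sensor noise, both scaled by the ISO gain."""
    noise = rng.normal(0.0, read_noise, size=true_signal.shape)
    return (true_signal + noise) * iso_gain

scene = np.full(100_000, 10.0)           # a dim, uniform patch
low_iso = capture(scene, iso_gain=1.0)   # e.g. base ISO
high_iso = capture(scene, iso_gain=8.0)  # e.g. three stops higher

print("noise (std) at low ISO :", round(low_iso.std(), 2))   # ~2
print("noise (std) at high ISO:", round(high_iso.std(), 2))  # ~16: the random data is amplified too
```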
Learning to Control Noise
Knowing how noise is generated in the camera, you might think it would be better to underexpose a photograph rather than increase the ISO, so as not to generate noise. Well, you would be very wrong.
A poorly exposed photograph will always have lower quality than a well exposed one. Brightening it later in editing programs like Photoshop or Lightroom will always generate more noise than exposing it correctly when shooting, even if that meant raising the ISO. So raising the ISO is not bad; you just have to control it and know how far your camera can go.
An underexposed photo that was brightened later in editing shows more noise, despite having been shot at a lower ISO.
Each camera handles noise differently, so you should work out how far you can raise the ISO on your camera while still getting acceptable noise. We can always reduce noise a little in editing programs, but we lose definition, and if we overdo it we can smear the picture so much that it ends up looking like an oil painting. So, unfortunately, noise is something we will always have to deal with.
However, we do not always have to treat it as "our enemy". Today's taste for vintage nostalgia has led many photographers to add noise to their photographs on purpose. Adding noise can help you emulate the characteristic grain of analog photographs, or give your images a cinematic look. Ultimately, you are the one who decides how much noise works best for your photographs.
Remember that noise tolerance is subjective, so we cannot speak of a universal "ISO limit". In addition, as I said before, each camera handles noise differently. Therefore, it should be you who sets the maximum ISO for your camera at which the noise that appears is acceptable to you. Do tests with your camera at different ISOs and in different situations to get to know it thoroughly and learn how far you can push it.
Adding Noise to Your Photographs as an Aesthetic Choice
If you look closely at almost any award-winning analog photograph, you will discover that it has noise. Or grain, as it is also called. Noise and grain are the same thing; it's just that the term "noise" has acquired such a negative connotation that when we add it on purpose for aesthetic reasons we usually call it grain.
Call it what you like, the truth is that this grain does not hurt the photograph; it is that analog, vintage or cinematic touch that makes us look at the image in a different way. That appeal is what many photographers look for in their work, and it's why many of them add noise to their shots on purpose.
You can choose to add the noise at the moment of shooting, of course. You only have to raise the ISO and compensate for the brighter exposure by increasing the shutter speed and closing down the aperture, following the law of reciprocity.
However, if you are still experimenting with this style and are not sure you will like the result, or if you prefer to control exactly how much noise you add, it is best to shoot as clean as possible and add the grain later in post-processing.
How to Add Grain to your Photos in Lightroom
Adding grain to your photos in Lightroom is very easy. In the Develop module, go to the Effects panel. In the Grain section you will find three sliders that let you configure the amount and appearance of the grain to your liking (a rough sketch of what they do follows this list):
Amount: How much grain you want to add.
Size: The size of the grain. The bigger it is, the more it will be perceived.
Roughness: The distribution of the grain. With low roughness the grain is spread more evenly, while raising the roughness distributes it more randomly.
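As a rough idea of what those three sliders do, here is a minimal NumPy sketch that adds monochrome grain to a grayscale image. The way amount, size and roughness map onto the maths is my own approximation for illustration, not Adobe's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_grain(gray, amount=25.0, size=2, roughness=0.5):
    """Add monochrome grain to a float grayscale image (values 0-255).

    amount    -- strength of the grain in tonal levels
    size      -- grain cell size in pixels (bigger = coarser, more visible)
    roughness -- 0 gives smooth, evenly weighted grain; 1 gives patchier grain
    """
    h, w = gray.shape
    # Coarse noise field, then nearest-neighbour upscale so each grain spans `size` pixels.
    coarse = rng.normal(0.0, 1.0, (h // size + 1, w // size + 1))
    grain = np.repeat(np.repeat(coarse, size, axis=0), size, axis=1)[:h, :w]
    # Roughness: modulate the grain strength with a second random field.
    weight = 1.0 - roughness + roughness * rng.random((h, w))
    return np.clip(gray + amount * grain * weight, 0, 255)

image = np.full((400, 600), 128.0)   # flat mid-grey test image
grainy = add_grain(image, amount=20, size=3, roughness=0.7)
```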
How to Add Grain to your Photos in Photoshop
There are two different ways to add grain to your photos in Photoshop.
One is to add it from Adobe Camera Raw, which opens automatically when you open a RAW file in Photoshop; you can also reach it via the Filter / Camera Raw Filter menu. Since this tool is a "mini Lightroom" built into Photoshop, adding grain works exactly as described in the previous section: you will find the same sliders in the Effects (fx) tab.
The other way is to use Photoshop's own noise filter. You will find it under Filter / Noise / Add Noise. In this dialog you will find three options to customize the noise (see the sketch after this list):
Amount: How much noise you want to add.
Distribution: Choose between Uniform, which adds more evenly distributed noise, and Gaussian, which adds noise with a more random distribution.
Monochromatic: If you leave this box unchecked, the added noise will be chromatic (that is, it will have color); if you check it, the noise will be pure luminance grain.
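For comparison, here is a minimal NumPy sketch approximating those three options on an RGB image; again, this is my own approximation for illustration, not Photoshop's actual code.

```python
import numpy as np

rng = np.random.default_rng(7)

def add_noise(rgb, amount=15.0, distribution="gaussian", monochromatic=False):
    """Approximate Filter / Noise / Add Noise on a float RGB image (values 0-255)."""
    h, w, _ = rgb.shape
    # Monochromatic: one value shared by all channels (pure grain); otherwise per-channel colour noise.
    shape = (h, w, 1) if monochromatic else (h, w, 3)
    if distribution == "uniform":
        noise = rng.uniform(-amount, amount, shape)
    else:  # "gaussian"
        noise = rng.normal(0.0, amount, shape)
    return np.clip(rgb + noise, 0, 255)

image = np.full((300, 450, 3), 100.0)
colour_noise = add_noise(image, 12, "gaussian", monochromatic=False)
pure_grain = add_noise(image, 12, "uniform", monochromatic=True)
```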
0 notes
photographerguide-blog · 6 years ago
Text
The future of photography is code
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
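One way to picture that rolling capture is as a fixed-size ring buffer that the camera app keeps topped up while it is open. A minimal Python sketch; the 60-frame figure comes from the paragraph above, everything else is illustrative:

```python
from collections import deque

class FrameStream:
    """Keep only the most recent frames, as a camera app might while it is open."""

    def __init__(self, max_frames=60):
        self.buffer = deque(maxlen=max_frames)  # old frames fall off the far end

    def on_new_frame(self, frame):
        self.buffer.append(frame)

    def capture(self):
        """'Taking a picture' = picking frames out of the stream already in memory."""
        return list(self.buffer)

stream = FrameStream(max_frames=60)
for i in range(500):                  # the sensor never stops producing frames
    stream.on_new_frame(f"frame-{i}")
print(len(stream.capture()), stream.capture()[-1])  # 60 frame-499
```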
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
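A crude version of that merge can be sketched as a per-pixel weighted average: each frame is normalised by its exposure time, and pixels near clipping are trusted less than well-exposed ones. This is only a toy illustration in NumPy, not Google's or Apple's pipeline:

```python
import numpy as np

def merge_brackets(frames, exposure_times):
    """Merge bracketed 8-bit exposures into one radiance-like image."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weights = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        # Trust mid-tones; distrust pixels near pure black or pure white.
        w = 1.0 - np.abs(f - 127.5) / 127.5
        acc += w * (f / t)            # normalise each frame by its exposure time
        weights += w
    return acc / np.maximum(weights, 1e-6)

# Three simulated exposures of the same scene at 1/250, 1/60 and 1/15 s:
rng = np.random.default_rng(3)
scene = rng.uniform(0, 4000, (100, 150))  # "true" scene radiance
times = [1 / 250, 1 / 60, 1 / 15]
frames = [np.clip(scene * t * 255 / 16, 0, 255).astype(np.uint8) for t in times]
hdr = merge_brackets(frames, times)
```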
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
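The "shortcut" version of this effect, which the article contrasts with physically modelled bokeh further down, amounts to blurring everything the subject mask doesn't cover. A minimal sketch, assuming the subject mask has already been produced by stereo depth or a segmentation model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_mode(rgb, subject_mask, blur_sigma=8.0):
    """Gaussian-blur the background, keep the masked subject sharp.

    rgb          -- float image, shape (H, W, 3)
    subject_mask -- float mask in [0, 1], 1 where the subject is (from depth or ML)
    """
    blurred = np.stack(
        [gaussian_filter(rgb[..., c], blur_sigma) for c in range(3)], axis=-1
    )
    mask = subject_mask[..., None]              # broadcast over colour channels
    return mask * rgb + (1.0 - mask) * blurred  # sharp subject, soft background

# Toy example: a rectangular "subject" in the middle of the frame.
image = np.random.default_rng(5).uniform(0, 255, (240, 320, 3))
mask = np.zeros((240, 320))
mask[80:160, 120:200] = 1.0
portrait = fake_portrait_mode(image, mask)
```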
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
A system to tell good fake bokeh from bad
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
Read more: https://techcrunch.com
0 notes
slrlounge1 · 6 years ago
Text
Sony A7RIII Review: Officially The Best Pro Full-Frame Mirrorless Camera On The Market
When I test a new camera, I usually have an idea of how the review might go, but there are always some things that are a complete unknown, and a few things that totally surprise me.
I know better than to pass judgment on the day a camera is announced. The images, and the user experience are what matter. If a camera has all the best specs but lacks reliability or customizability, it’s a no-deal for me.
So, you can probably guess how this review of the Sony A7RIII is going to turn out. Released in October of 2017 at $3200, the camera has already had well over a year of real-world field use by working professionals and hobbyists alike. It also now faces full-frame competition from both Canon and Nikon, in the form of Canon's new RF mount and Nikon's Z mount, although these systems currently have only two bodies and three or four native lenses each.
As a full-time wedding and portrait photographer, however, I can’t just jump on a new camera the moment it arrives. Indeed, the aspects of reliability and sheer durability are always important. And, considering the track record for reliability (buggy-ness) of the Sony A7-series as a whole since its birth in October of 2013, I waited patiently for there to be a general consensus about this third generation of cameras.
Indeed, the consensus has been loud: third time’s a charm.
Although, I wouldn’t exactly call the A7RIII a charming little camera. Little, sure, professional, absolutely! But, it has taken a very long time to become truly familiar and comfortable with it.
Before you comment, “oh no, not another person complaining about how hard it is to ‘figure out’ a Sony camera” …please give me a chance to thoroughly describe just how incredibly good of a camera the A7RIII is, and tell you why you should (probably) get one, in spite of (or even because of) its complexity.
Sony A7RIII (mk3) Specs
42-megapixel full-frame BSI CMOS sensor (7952 x 5304 pixels)
ISO 100-32000 (expandable to ISO 50 and ISO 102400)
5-axis sensor-based image stabilization (IBIS)
Hybrid autofocus, 399 phase-detect, 425 contrast-detect AF points
3.6M dot Electronic Viewfinder, 1.4M dot 3″ rear LCD
10 FPS (frames per second) continuous shooting, with autofocus
Dual SD card slots (one UHS-II)
4K @ 30, 24p video, 1080 @ 120, 60, 30, 24p
Metal frame, weather-sealed body design
Sony A7R3, Sony 16-35mm f/2.8 GM | 1/80 sec, f/11, ISO 100
100% crop from the above image, fine-radius sharpening applied
Sony A7R3 Pros
The pros are going to far outweigh the cons for this camera, and that should come as no surprise to anyone who has paid attention to Sony’s mirrorless development over the last few years. Still, let’s break things down so that we can discuss each one as it actually relates to the types of photography you do.
Because, even though I’ve already called the A7RIII the “best pro full-frame mirrorless camera”, there may still be at least a couple other great choices (spoiler: they’re also Sony) for certain types of working pro photographers.
Sony A7R3, Sony 16-35mm f/2.8 GM, 3-stop Polar Pro NDPL Polarizer filter 2 sec, f/10, ISO 100
Image Quality
The A7RIII's image quality is definitely a major accomplishment. The 42-megapixel sensor was already a milestone in overall image quality when it first appeared in the A7RII, with its incredible resolution and impressive high-ISO image quality. This next-generation sensor is yet another (incremental) step forward.
Compared to its predecessor, by the way, at the lowest ISOs (mostly ISO 100) you can expect slightly lower noise and higher dynamic range. At the higher ISOs, you can expect roughly the same (awesome) image quality.
Also, when scaled back down to the resolution of its 12-24 megapixel Sony siblings, let alone the competition, it’s truly impressive to see what the A7R3 can output.
Speaking of its competitors' sensors, the Sony either leaves them in the dust (for example, versus a Canon sensor's low-ISO shadow recovery) or roughly matches their performance (for example, versus Nikon's 36- and 45-megapixel sensors' resolution and dynamic range).
Sony A7R3, 16-35mm f/2.8 GM | 31mm, 1/6 sec, f/10, ISO 100, handheld
100% crop from the above image – IBIS works extremely well!
I'll be honest, though: I reviewed this camera from the perspective of a serious landscape photographer. If you're a very casual photographer, whether you shoot nature, portraits, travel, or action sports, then literally any camera made in the last 5+ years will deliver more image quality than you'll ever need.
The Sony A7R3’s advantages come into play when you start to push the envelope of what is possible in extremely difficult conditions. Printing huge. Shooting single exposures in highly dynamic lighting conditions, especially when you have no control over the light/conditions. Shooting in near-pitch-dark conditions, by moonlight or starlight… You name it; the A7RIII will match, or begin to pull ahead of, the competition.
Personally, as a landscape, nightscape, and time-lapse photographer, I couldn’t ask for a better all-around sensor and image quality than this. Sure, I would love to have a native ISO 50, and I do appreciate the three Nikon sensors which offer a base ISO of 64, when I’m shooting in conditions that benefit from it.
Still, as a sensor that lets me go from ISO 100 to ISO 6400 without thinking twice about whether or not I can make a big print, the A7RIII’s images have everything I require.
Sony Color Science
Before we move on, we must address the common stereotype about dissatisfaction with “Sony colors”. Simply put, it takes two…Actually, it takes three! The camera manufacturer, the raw processing software, and you, the artist who wields those two complex, advanced tools.
Sony A7R3, Sony 16-35mm f/2.8 GM | 6 sec, f/3.5, ISO 100 | Lightroom CC
The truth is that, in my opinion, Adobe is the most guilty party when it comes to getting colors “right”, or having them look “off”, or having muddy tones in general.
Why do I place blame on what is by far the most ubiquitous, and in popular opinion the absolute best, raw processing software? My feelings are indeed based on facts, and not the ambiguous “je ne sais quoi” that some photographers try to complain about:
1.) If you shoot any camera in JPG, whether Sony, Nikon, or Canon, they are all capable of beautiful skin tones and other colors. Yes, I know, serious photographers all shoot RAW. However, looking at a JPG is the only way to fairly judge the manufacturer's intended color science. And in that regard, Sony's colors are not bad at all.
2.) If you use another raw processing engine, such as Capture One Pro, you get a whole different experience with Sony .ARW raw files, both with regard to tonal response and color science. The contrast and colors both look great. Different, yes, but still great.
I use Adobe’s camera profiles when looking for punchy colors from raw files
Again, I’ll leave it up to you to decide which is truly better in your eyes as an artist. In some lighting conditions, I absolutely love Canon, Nikon, and Fuji colors too. However, in my experience, it is mostly the raw engine, and the skill of the person using it, that is to blame when someone vaguely claims, “I just don’t like the colors”…
Disclaimer: I say this as someone who worked full-time in post-production for many years, and who has post-produced over 2M Canon CR2 files, 2M Nikon NEF files, and over 100,000 Sony ARW files.
Features
There is no question that the A7R3 shook up the market with its feature set, regardless of the price point. This is a high-megapixel full-frame mirrorless camera with enough important features that any full-time working pro could easily rely on the camera to get any job done.
Flagship Autofocus
The first problem that most professional photographers had with mirrorless technology was that it just couldn’t keep up with the low-light autofocus reliability of a DSLR’s phase-detect AF system.
This line has been blurred quite a bit from the debut of the Sony A7R2 onward, however, and with this current-generation, hybrid on-sensor phase+contrast detection AF system, I am happy to report that I’m simply done worrying about autofocus. Period. I’m done counting the number of in-focus frames from side-by-side comparisons between a mirrorless camera and a DSLR competitor.
In other words, yes, there could be a small percent difference in tack-sharp keepers between the A7R3’s autofocus system and that of, say, a Canon 5D4 or a Nikon D850. In certain light, with certain AF point configurations, the Canon/Nikon do deliver a few more in-focus shots, on a good day. But, I don’t care.
Sony A7R3, 16-35mm f/2.8 GM | 1/160 sec, f/2.8, ISO 100
Why? Because the A7R3 is giving me a less frustrating experience overall due to the fact that I’ve completely stopped worrying about AF microadjustment, and having to check for front-focus or back-focus issues on this-or-that prime lens. If anything, the faster the aperture, the better the lens is at nailing focus in low light. That wasn’t the case with DSLRs; usually, it was the 24-70 and 70-200 2.8’s that were truly reliable at focusing in terrible light, and most f/1.4 or f/1.8 DSLR primes were hit-or-miss. I am so glad those days are over.
Now, the A7R3 either nails everything perfectly or, when the light gets truly terrible, it still manages to deliver about the same number of in-focus shots as I’d be getting out of my DSLRs anyways.
Eye Autofocus and AF customization
Furthermore, the A7R3 offers a diverse variety of focus point control and operation. And, with new technologies such as face-detection and Eye AF, the controls really do need to be flexible! Thanks to the level of customizability offered in the A7R3, I can do all kinds of things, such as:
Quickly change from a designated, static AF point to a dynamic, adaptable AF point. (C1 or C2 button, you pick which based on your own dexterity and muscle memory)
Easily switch face-detection on and off. (I put this in the Fn menu)
Designate the AF-ON button to perform traditional autofocus, while the AEL button performs Eye-AF autofocus. (Or vice-versa, again depending on your own muscle memory and dexterity)
Switch between AF-S and AF-C using any number of physical customizations. (I do wish there was a physical switch for this, though, like the Sony A9 has.)
Oh, and it goes without saying that a ~$3,000 camera gets a dedicated AF point joystick, although I must say I'm partial to touchscreen AF point control now that there are literally hundreds of AF points to choose from.
In short, this is one area where Sony did almost everything right. They faced a daunting challenge of offering ways to implement all these useful technologies, and they largely succeeded.
This is not just professional-class autofocus, it’s a whole new generation of autofocus, a new way of thinking about how we ensure that each shot, whether portrait or not, is perfectly focused exactly how we want it to be, even with ultra-shallow apertures or in extremely low light.
Dual Card Slots
Like professional autofocus, dual card slots are nothing new in a ~$3,000 camera. Both the Nikon D850 (and D810, etc.) and the Canon 5D4 (and 5D3) have had them for years. (Although, notably, the $3400 Nikon Z7 does not; it opted for a single XQD slot instead. Read our full Nikon Z7 review here.)
Unlike those DSLRs, however, the Sony A7R3 combines the professional one-two punch of pro AF and dual card slots with other things such as the portability and general benefits of mirrorless, as well as great 4K video specs and IBIS. (By the way, no, IBIS and 4K video aren't exclusive to mirrorless; many DSLRs have 4K video now, and Pentax has had IBIS in its traditional DSLRs for many years too.)
One of my favorite features: Not only can the camera be charged via USB, it can operate directly from USB power!
Sony A7R3 Mirrorless Battery Life
One of the last major drawbacks of mirrorless systems, and the nemesis of Sony's earlier A7-series in particular, was battery life. The operative word being: WAS. Now, the Sony NP-FZ100 battery allows the A7R3 to last just as long as, or in some cases even longer than, a DSLR with comparable specs (such as lens-based stabilization or 4K video).
Oh, and Sony’s is the only full-frame mirrorless platform that allows you to directly run a camera off USB power without a “dummy” battery, as of March 2019. This allows you to shoot video without ever interrupting your clip/take to swap batteries, and capture time-lapses for innumerable hours, or, just get through a long wedding day without having to worry about carrying more than one or two fully charged batteries.
By the way, for all you marathon-length event photographers and videographers out there: a spare Sony NP-FZ100 battery will set you back $78, while an Anker 20,100 mAh USB battery goes for just $49. So, no matter your budget, your battery life woes are officially over.
Durability
This is one thing I don’t like to speak about until the gear I’m reviewing has been out in the real world for a long time. I’ve been burned before, by cameras that I rushed to review as soon as they were released, and I gave some of them high praise even, only to discover a few weeks/months later that there’s a major issue with durability or functionality, sometimes even on the scale of a mass recall. (*cough*D600*cough*)
Thankfully, we don’t have that problem here, since the A7RIII has been out in the real world for well over a year now. I can confidently report, based on both my own experience and the general consensus from all those who I’ve talked to directly, that this camera is a rock-solid beast. It is designed and built tough, with good overall strength and extensive weather sealing.
It does lack one awesome feature that the Canon EOS R offers, which is the simple but effective use of the mechanical shutter to cover and protect the sensor whenever the camera is off, or when changing lenses. Because, if I'm honest, the Sony A7R3 sensor is a dust magnet, and the built-in sensor cleaning doesn't usually do more than shake off one or two of the three or five specks of dust that land on the sensor after just a half-day of periodically swapping lenses, especially in drier, static-prone environments.
Value
Currently, at just under $3200, and sometimes on sale for less than $2800, there's no dispute: this is the best value around, if you actually need the specific things that the A7RIII offers compared to your other options.
But, could there be an even better camera out there, for you and your specific needs?
If you don’t plan to make giant prints, and you rarely ever crop your images very much, then you just don’t need 42 megapixels. In fact, it’s actually going to be quite a burden on your memory cards, hard drives, and computer CPU/RAM, especially if you decide to shoot uncompressed raw and rack up a few thousand images each time you take the camera out.
Indeed, the 24 megapixels of the A7III is currently (and will likely remain) the goldilocks resolution for almost all amateurs and many types of working pros. Personally, as a wedding and portrait photographer, I would much rather have “just” 24 megapixels for the long 12-14+ hour weddings that I shoot. It adds up to many terabytes at the end of the year. Especially if you shoot the camera any faster than its slowest continuous drive mode. (You better buy some 128GB SD cards!)
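To put that storage burden in rough numbers (my own back-of-the-envelope figures, not from this review): an uncompressed 42-megapixel ARW runs on the order of 80 MB per frame, and it adds up quickly.

```python
uncompressed_raw_mb = 80    # rough size of one uncompressed 42 MP ARW (assumption)
frames_per_wedding = 3000
weddings_per_year = 30

per_wedding_gb = uncompressed_raw_mb * frames_per_wedding / 1024
per_year_tb = per_wedding_gb * weddings_per_year / 1024
print(f"~{per_wedding_gb:.0f} GB per wedding, ~{per_year_tb:.1f} TB per year")
```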
As a landscape photographer, of course, I truly appreciate the A7RIII’s extra resolution. I would too if I were a fashion, commercial, or any other type of photographer whose priority involved delivering high-res imagery.
We’ll get deeper into which cameras are direct competition or an attractive alternative to the A7RIII later. Let’s wrap up this discussion of value with a quick overview of the closest sibling to the A7RIII, which is of course the A7III.
The differences between them go beyond just a sensor. The A7III has a slightly newer AF system, with just a little bit more borrowed technology from the Sony A9. But, it also has a slightly lower resolution EVF and rear LCD, making the viewfinder shooting experience just a little bit more digital looking. Lastly, partly thanks to its lower megapixel count, and lower resolution screens, the A7III gets even better battery life than the A7RIII. (It goes without saying that you’ll save space on your memory cards and hard drives, too.)
So, it's not cut-and-dried at all. You might even decide that the A7III is actually a better camera for you and what you shoot. Personally, I might well prefer the $1998 A7III if I shot action sports, wildlife, journalism, weddings, or certainly nightscapes, especially if I wasn't going to be making huge prints in any of those photography genres.
Or, if you're a serious pro, you need a backup camera anyway, and since they're physically identical, buy both! The A7III and A7RIII are the best two-camera kit ever conceived. Throw one of your f/2.8 zooms on the A7III and your favorite prime on the A7RIII. As a bonus, you can program "Super-35 Mode" onto one of your remaining customization options (I like C4 for this), and you've got two primes in one!
Sony A7R3, Sony FE 16-35mm f/2.8GM | 1/4000 sec, f/10, ISO 100 (Extreme dynamic range processing applied to this single file)
Cons
This is going to be a short list. In fact, I’ll spoil it for you right now: If you’re already a (happy) Sony shooter, or if you have tried a Sony camera and found it easy to operate, there are essentially zero cons about this camera, aside from the few aforementioned reasons which might incline certain photographers to get an A7III instead.
A very not-so-helpful notification that is often seen on Sony cameras. I really do wish they could have taken the time to write a few details for all function incompatibilities, not just some of them!
Ergonomics & Menus
I’ll get right to the point: as someone who has tested and/or reviewed almost every DSLR camera (and lots of mirrorless cameras) from the last 15 years, from some of the earliest Canon Rebels to the latest 1D and D5 flagships, I have never encountered a more complex camera than the A7R3.
Sony, I suspect in their effort to make the camera attractive to both photographers and videographers alike, has made the camera monumentally customizable.
We’ll get to the sheer learning curve and customizations of the camera in a bit, but first, a word on the physical ergonomics: Basically, Sony has made it clear that they are going to stay focused on compactness and portability, even if it’s just not a comfortable grip for anyone with slightly larger hands.
The argument seems to be clearly divided among those who prefer the compact design, and those who dislike it.
The dedicated AF-ON button is very close to three other main controls, the REC button for video, the rear command dial, and the AF point joystick. With large thumbs, AF operation just isn’t as effortless and intuitive as it could be. Which is a shame, because I definitely love the customizations that have given me instant access to multiple AF modes. I just wish the AF-ON button, and that whole thumb area, was designed better. My already minor fumbling will wane even further with familiarity, but that doesn’t mean it is an optimal design.
By the way, I’m not expecting Sony to make a huge camera that totally defeats one of the main purposes of the mirrorless format. In fact, in my Nikon D850 review, I realized that the camera was in fact too big and that I’m already accustomed to a smaller camera, something along the size of a Nikon D750, or a Canon EOS R.
Speaking of the Canon EOS R, I think all full-frame cameras ought to have a grip like that one. It is a perfect balance between portability and grip comfort. After you hold the EOS R, or even the EOS RP, you’ll realize that there’s no reason for a full-frame mirrorless camera not to have a perfect, deep grip.
As another example, while I applaud Sony for putting the power switch in the right spot (come on, Canon!), I strongly dislike their placement of the lens release button. If the lens release button were on the other side, where it normally is on Canon and Nikon bodies, then maybe we could have custom function buttons similar to Nikon's. Those buttons are perfectly positioned for my grip fingers while holding the camera naturally, so I find them effortless to reach compared to Sony's C1 and C2 buttons.
As I hinted earlier, I strongly suspect that a lot of this ergonomic design is meant to be useful to both photographers and videographers alike. And videographers, more often than not, simply aren't shooting with the camera hand-held up to their eye; instead, the camera is on a tripod, monopod, slider, or gimbal. In that shooting scenario, buttons are accessed in a totally different way, and in fact, the controls of the latest Sony bodies all make more sense.
It's a shame, because for this reason I feel compelled to point out that if you don't shoot video at all, you may find that Nikon and/or Canon ergonomics are significantly more user-friendly, whether you're working with their DSLRs or their mirrorless bodies. (And yes, I actually like the Canon "touch dial". Read my full Canon EOS R review here.)
Before we move on, though, I need to make one thing clear: if a camera is complicated but it's the best professional option on the market, then the responsible thing for a pro to do is to suck it up and master the camera. I actually love such a challenge, because it's my job and because I'm a camera geek, but I absolutely don't hold it against a professional landscape photographer for going with a Nikon Z7, or a professional portrait photographer for going with a Canon EOS R. (Single SD card slot aside.)
Yet another quick-access menu. However, this one cannot be customized.
Customizability
This is definitely the biggest catch-22 of the whole story. The Sony A7R3 is very complex to operate, and even more complex to customize. Of course, it has little choice in the matter, as a pioneer of so many new features and functions. For example, I cannot fault a camera for offering different bitrates for video compression, just because it adds one more item to a menu page. In fact, this is a huge bonus, just like the ability to shoot both uncompressed and compressed .ARW files.
By the way, the “Beep” settings are called “Audio signals”
There are 35 pages of menu items with nearly 200 items total, plus five pages available for custom menu creation, a 2×6 grid of live/viewfinder screen functions, and approximately a dozen physical buttons can be re-programmed or totally customized.
I actually love customizing cameras, and it’s the very first thing I do whenever I pick up a new camera. I go over every single button, and every single menu item, to see how I can set up the camera so that it is perfect for me. This is a process I’ve always thoroughly enjoyed, that is until the Sony A7-series came along. When I first saw how customizable the camera was, I was grinning. However, it took literally two whole weeks to figure out which button ought to perform which function, and which arrangement was best for the Fn menu, and then last but not least, how to categorize the remaining five pages of menu items I needed to access while shooting. Because even if I memorized all 35 pages, it still wouldn’t be practical to go digging through them to access the various things I need to access in an active scenario.
Then, I started to notice that not every function or setting could be programmed to just any button or Fn menu. Despite offering extensive customization options (some of which run to 22 pages of choices), there are still a few things that just can't be done.
"Shoot Mode" is how you change the exposure mode when the camera's mode dial is set to video mode. Which is useful if you shoot a lot of video…
For example, it's not easy to change the exposure mode when the camera's mode dial itself is set to video mode. You can't just program "Shoot Mode" to one of the C1/C2 buttons; it can only go in the Fn or custom menu.
As another example, for some reason, you can’t program both E-shutter and Silent Shooting to the Fn menu, even though these functions are so similar that they belong next to each other in any shooter’s memory.
Lastly, because the camera relies so heavily on customization, you may find that you run out of buttons when trying to figure out where to put common things that used to have their own buttons, such as Metering, White Balance, AF Points. Not to mention the handful of new bells and whistles that you might want to program to a physical button, such as switching IBIS on/off, or activating Eye AF.
All in all, the camera is already extremely complex, and yet I feel like it could also use an extra 2-3 buttons, and even more customization for the existing buttons. Which, again, leads me to the conclusion that if you’re looking for an intuitive camera that is effortless to pick up and shoot with, you may have nightmares about the user manual for this thing. And if you don’t even shoot video at all, then like I said, you’re almost better off going with something simpler.
But again, just to make sure we’re still on the same page here: If you’re a working professional, or a serious hobbyist even, you make it work. It’s your job to know your tools! (The Apollo astronauts didn’t say, “ehh, no thanks” just because their capsule was complicated to operate!)
Every camera has quirks. But, not every camera offers the images and a feature set like the A7R3 does. As a camera geek, and as someone who does shoot a decent amount of both photo and video, I’d opt for the A7R3 in a heartbeat.
Sony A7R3, Sony FE 16-35mm f/2.8 GM, PolarPro ND1000 filter 15 sec, f/14, ISO 100
The A7RIII’s Competition & Alternatives
Now that it's early 2019, we finally have Canon and Nikon competition in the market of full-frame mirrorless camera platforms. (Not to mention Panasonic, Sigma, Leica…)
So, where does that put this mk3 version of the Sony A7R, a third-generation camera which is part of a system that is now over 5 years old?
Until more competition enters the market, this section of our review can be very simple, thankfully. I’ll be blunt and to the point…
First things first: the Sony A7R3 has them all beat in terms of overall features and value. You just can't get a full-frame mirrorless body with this many features, for this price, anywhere else. Not only does Sony have the market cornered, it has three options with roughly the same level of professional features when you count the A7III and the A9.
Having said that, here’s the second thing you should know: Canon and Nikon’s new full-frame mirrorless mounts are going to try as hard as they can to out-shine Sony’s FE lens lineup, as soon as possible. Literally the first thing Canon did for its RF mount was a jaw-droppingly good 50mm f/1.2, and of course the massive beast that is the 28-70mm f/2. Oh, and Nikon announced that they’d be resurrecting their legendary “Noct” lens nomenclature, for an absurdly fast 58mm f/0.95.
If you’re at all interested in this type of exotic, high-end glass, the larger size diameters and shorter flange distances of Canon and Nikon’s new FF mounts may prove to have a slight advantage over Sony’s relatively modest E mount.
However, as Sony has already proven, its mount is nothing to scoff at, and is entirely capable of amazing glass with professional results. Two of their newest fast-aperture prime lenses, the 135mm f/1.8 G-Master and the 24mm f/1.4 G-Master, prove this. Both lenses are almost optically flawless, and ready to easily resolve the 42 megapixels of this generation A7R-series camera, and likely the next generation too even if it has a 75-megapixel sensor.
This indicates that although Canon's and Nikon's new mounts may have an advantage when it comes to the upper limits of what is possible with new optics, Sony's FE lens lineup will be more than enough for most pros.
Sony A7R3, Sony FE 70-300mm G OSS |  128mm, 1/100 sec, f/14, ISO 100
Sony A7RIII Review Verdict & Conclusion
There is no denying that Sony has achieved a huge milestone with the A7R mk3, in every single way. From its price point and feature set to its image quality and durable body, it is quite possibly the biggest threat that its main competitors, Canon and Nikon, face.
So, the final verdict for this review is very simple: If you want the most feature-rich full-frame camera (and system) that $3,200 can buy you, (well, get you started in) …the best investment you can make is the Sony A7RIII.
(By the way, it is currently just $2798, as of March 2019, and if you missed this particular sale price, just know that the camera might go on sale for $400 off, sooner or later.)
Sony A7R3, Sony FE 70-300mm G OSS | 1/400 sec, f/10, ISO 100
Really, the only major drawback for the “average” photographer is the learning curve, which even after three generations still feels like a sheer cliff when you first pick up the camera and look through its massive menu interface and customizations. The A7R3 body (and the A9 and A7III, for that matter) is not for the “casual” shooter who wants to just leave the camera in “green box mode” and expect it to be simple to operate. I’ve been testing and reviewing digital cameras for over 15 years now, and the A7RIII is by far the most complex camera I’ve ever picked up.
That shouldn’t be a deterrent for the serious pro, because these cameras are literally the tools of our trade. We don’t have to get a degree in electrical engineering or mechanical engineering in order to be photographers, we just have to master our camera gear, and of course the creativity that happens after we’ve mastered that gear.
However, a serious pro who is considering switching from Nikon or Canon should still be aware that not everything you’re used to with those camera bodies is possible, let alone effortless, on this Sony. The sheer volume of functionality related to focusing alone will require you to spend many hours learning how the camera works, and then customizing its different options to the custom buttons and custom menus so that you can achieve something that mimics simplicity and effortless operation.
Sony A7R3, Sony 16-35mm f/2.8 GM | 1/4 sec, f/14, ISO 100
Personally, I’m always up for a challenge. It took me a month of learning, customizing, and re-customizing this mk3-generation of Sony camera bodies, but I got it the way I want it, and now I get the benefits of having both the witchcraft/magic that is Eye-AF and the traditional “old-school” AF methods at my fingertips. As a working pro who shoots in active conditions, from portraits and weddings to action sports and stage performance, it has been absolutely worth it to tackle the steepest learning curve of my entire career. I have confidence that you’re up to the task, too.
from SLR Lounge https://www.slrlounge.com/sony-a7riii-review-best-full-frame-pro-mirrorless-camera/ via IFTTT
0 notes
theinvinciblenoob · 6 years ago
Link
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
See the new iPhone’s ‘focus pixels’ up close
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
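To put the sensor-size gap in concrete terms, the arithmetic takes only a couple of lines. This is just a back-of-the-envelope sketch in Python using the dimensions quoted above:

```python
# Rough light-gathering comparison, using the sensor dimensions quoted above.
aps_c_area = 23 * 15        # mm^2 for an APS-C DSLR sensor -> 345
iphone_area = 7 * 5.8       # mm^2 for the iPhone XS sensor -> ~40.6

ratio = aps_c_area / iphone_area
print(f"APS-C: {aps_c_area} mm^2, iPhone XS: {iphone_area:.1f} mm^2")
print(f"APS-C collects roughly {ratio:.1f}x as much light per exposure")
# -> about 8.5x, i.e. close to an order of magnitude (roughly three stops)
```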
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
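As a rough mental model of that rolling buffer, here is a minimal Python sketch; the class name, the 60-frame capacity, and the frame source are all hypothetical illustrations, not any vendor's actual implementation:

```python
from collections import deque

class FrameRingBuffer:
    """Keep only the most recent N frames, discarding the oldest automatically."""
    def __init__(self, capacity=60):
        self.frames = deque(maxlen=capacity)  # old frames fall off the left end

    def push(self, frame):
        self.frames.append(frame)             # called for every sensor readout

    def snapshot(self):
        # When the shutter is "pressed", hand the recent history to the
        # processing pipeline (HDR merge, noise reduction, etc.).
        return list(self.frames)

# Hypothetical usage: feed it low-resolution preview frames as they arrive.
buffer = FrameRingBuffer(capacity=60)
for frame in range(200):       # stand-in for a live sensor stream
    buffer.push(frame)
print(len(buffer.snapshot()))  # -> 60, only the latest frames are retained
```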
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
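For readers who want to see the bracketed-merge idea in code, here is a minimal sketch using OpenCV's exposure-fusion tools. The file names are placeholders, and this is a desktop approximation of the idea rather than what Google or Apple actually ship:

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same scene (file names are placeholders).
exposures = [cv2.imread(p) for p in ["under.jpg", "normal.jpg", "over.jpg"]]

# Align the frames first; handheld brackets never line up perfectly.
cv2.createAlignMTB().process(exposures, exposures)

# Exposure fusion: weight each pixel by how well exposed it is in each frame.
fused = cv2.createMergeMertens().process(exposures)   # float image in [0, 1]
cv2.imwrite("fused_hdr.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```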
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
A system to tell good fake bokeh from bad
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
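To make that contrast concrete, here is roughly what the "shortcut" version looks like: a minimal sketch that fakes background blur with a subject mask and a Gaussian blur. The file names are placeholders, and the mask is assumed to come from stereo disparity or a segmentation model; this is exactly the kind of approximation the physically modeled approach goes beyond.

```python
import cv2
import numpy as np

# Hypothetical inputs: a photo and a rough subject mask (255 = person, 0 = background),
# which on a real phone would come from stereo disparity or a segmentation model.
image = cv2.imread("portrait.jpg")
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(image, (51, 51), 0)        # crude stand-in for real bokeh
alpha = (mask.astype(np.float32) / 255.0)[..., None]  # soft edges avoid a cut-out look

# Keep the subject sharp, replace the background with the blurred version.
composite = (image * alpha + blurred * (1 - alpha)).astype(np.uint8)
cv2.imwrite("fake_bokeh.jpg", composite)
```

Shortcuts like this are cheap to compute, which is also why they show their limits so quickly compared with simulating the real optics.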
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
via TechCrunch
0 notes
joannemaly · 7 years ago
Text
How I Created a 16-Gigapixel Photo of Quito, Ecuador
A few years ago, I flew out to Ecuador to create a high-resolution image of the capital city of Quito. The final image turned out to be 16 gigapixels in size and at a printed size of over 25 meters (~82 feet), it allows people see jaw-dropping detail even when viewed from a few inches away.
I’ve always thought that gigapixel technology was amazing since I first saw it around 8 or 9 years ago. It combines everything that I like about photography: the adventure of trying to capture a complex image in challenging conditions as well as using high tech equipment, powerful computers, and advanced image processing software to create the final image.
I’ve been doing this for a while now, so I thought that I would share some of my experiences with you all so that you can make your own incredible gigapixel image as well.
The Gist
The picture was made with the 50-megapixel Canon 5DS R and a 100-400mm lens. It consists of 912 photos, each with a RAW file size of over 60MB, captured with a robotic camera mount and then combined into a single high-resolution picture with digital stitching software.
With a resolution of 300,000×55,313 pixels, the picture is the highest-resolution photo of Quito ever taken. This allows you to instantly view and explore a high-resolution image that is several gigabytes in size.
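The headline numbers are easy to sanity-check; a quick back-of-the-envelope calculation in Python using the figures above:

```python
# Sanity-checking the headline figures quoted above.
width, height = 300_000, 55_313
print(f"{width * height / 1e9:.1f} gigapixels")             # -> ~16.6 gigapixels

frames, raw_mb = 912, 60
print(f"~{frames * raw_mb / 1024:.0f} GB of RAW captures")  # -> ~53 GB of source data
```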
Site Selection
The first step in taking the photo is site selection. I went around Quito and viewed several different sites. Some of the sites, I felt, were too low to the ground and didn’t give the wide panorama that I was looking for. Other sites were difficult to access, or were high up but still unable to give the wide panoramic view that I wanted.
I finally settled on taking the image from near the top of the Pichincha Volcano. Pichincha is classified as a stratovolcano and its peak is over 15,000ft high. I was able to access the spot via a cable car, and it gave a huge panoramic vista of the entire city as well as all the volcanoes that surround Quito.
The only drawback that I saw to the site is that I felt it was a little too far away from the city, and I didn’t think people would be able to see any detail in the city when they zoomed in. To fix this I decided to choose a site a bit further down from the visitor center. That meant that we would have to carry all the equipment there (which isn’t easy at high altitudes), but I felt it would give the best combination of a great panoramic view while still being close enough to the city for detail to be captured.
The Setup
The site was surrounded by very tall grass as well as a little bit of a hill that could block the complete view so I decided to set up three levels of scaffolding and shoot from the top of that. There wasn’t any power at the site since it was on the side of a volcano so we had to bring a small generator with us.
I ran extension cords from the generator up to the top of the scaffolding, where they powered the panorama head as well as my computer. I didn’t plug the camera in because I would be able to easily change the batteries if they ran out.
Atmospheric Conditions
Anything that affects the light rays on their path to the camera’s sensor will affect the ultimate sharpness of the image. Something that is rarely mentioned is the effects of the atmosphere on high-resolution photos. Two factors are used to define atmospheric conditions: seeing and visibility.
Seeing is the term astronomers use to describe the sky’s atmospheric conditions. The atmosphere is in continual motion due to changing temperatures, air currents, weather fronts and dust particles. These factors are what cause the star images to twinkle. If the stars are twinkling considerably we have “poor” seeing conditions and when the star images are steady we have “good” seeing conditions.
Have you ever seen a quarter lying on the bottom of a swimming pool? The movement of the water makes it look like the quarter is moving around and maybe a little bit blurry. Just as the movement of water moves an image, atmospheric currents can blur a terrestrial image. These effects can be seen in terrestrial photography as the mirage effect, which is caused by heat currents and also as a wavy image due to windy conditions. It’s interesting to note that seeing can be categorized according to the Antoniadi scale.
The scale is a five-point system, with 1 being the best seeing conditions and 5 being the worst. The actual definitions are as follows:
Perfect seeing, without a quiver.
Slight quivering of the image with moments of calm lasting several seconds.
Moderate seeing with larger air tremors that blur the image.
Poor seeing, constant troublesome undulations of the image.
Very bad seeing, hardly stable enough to allow a rough sketch to be made.
(Note that the scale is usually indicated by use of a Roman numeral or an ordinary number.)
Visibility: The second factor that goes into atmospheric conditions is visibility, also called visible range, which is a measure of the distance at which an object or light can be clearly discerned. Mist, fog, haze, smoke, dust and even volcanic ash can all affect visibility.
The clear high-altitude air of Quito made for some amazing visibility the day of the shoot. The only things that affected it that day were a few small grass fires in the city. The Cotopaxi volcano was also giving off smoke and ash, but it didn’t seem to be a problem since it was blowing away from the city. There also weren’t any clouds in the sky, so the exposure wouldn’t be affected by clouds blocking out the sun.
Equipment
Camera: I decided to use a 50 megapixel Canon 5DS R. The 5DS R is an amazing camera that is designed without a low-pass filter which enables it to get amazing pixel-level detail and image sharpness.
Lens: A Canon 100-400mm f/5.6 II lens was used to capture the image. Several factors went into the decision to use this lens, such as size, weight and focal length. The 100-400mm was small and light and would allow the robotic pano head to function with no problems. It also has a good maximum focal length of 400mm, which would allow for some nice detail to be captured.
I thought about using a 400mm DO and a 400mm f/2.8, but each had its own drawbacks. The 400mm DO didn’t have a zoom, and I wanted to be able to change the focal length for different types of captures if I had any problems; the 400mm f/2.8 was too big and heavy to be used properly in the pano head. I have a Canon 800mm f/5.6 which I would have loved to use, but it was also too heavy to be used with the robotic pano head (humble brag).
Another interesting factor that went into my decision to use the 100-400mm f/5.6 is that the diameter of the front lens element was small enough that atmospheric distortion wouldn’t be too much of a problem. I have spent a lot of time experimenting with astrophotography, and the larger the front lens element is, the more atmospheric distortion or “mirage effect” will be picked up, resulting in a blurring of the photo.
Pano Head: I used a GigaPan Epic Pro for the image capture. The GigaPan is an amazing piece of equipment which automates the image capture process. The GigaPan equipment is based on the same technology employed by the Mars Rovers, Spirit, and Opportunity, and is actually a spin-off of a research collaboration between a team of researchers at NASA and Carnegie Mellon University.
To use a GigaPan you first need to set it up for the focal length of the lens that you are using. You then tell it where the upper-left-hand corner of the image is located and where the bottom-right-hand corner of the image is. It then divides the image into a series of frames and automatically begins scanning across the scene triggering the camera at regular intervals until the scene is completely captured.
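If you want a feel for the grid the GigaPan works out for you, the math is straightforward: the lens’s angular field of view per frame, minus some overlap for stitching, determines the columns and rows. Here is a rough Python sketch; the panorama angles, overlap and full-frame sensor dimensions are illustrative assumptions, not the exact values used for the Quito shoot:

```python
import math

def frames_needed(pano_h_deg, pano_v_deg, focal_mm,
                  sensor_w_mm=36.0, sensor_h_mm=24.0, overlap=0.3):
    """Estimate the grid of shots a robotic pano head has to capture."""
    fov_h = math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))
    fov_v = math.degrees(2 * math.atan(sensor_h_mm / (2 * focal_mm)))
    cols = math.ceil(pano_h_deg / (fov_h * (1 - overlap)))
    rows = math.ceil(pano_v_deg / (fov_v * (1 - overlap)))
    return cols, rows, cols * rows

# Hypothetical example: a 160 x 30 degree vista shot at 400mm with 30% overlap.
cols, rows, total = frames_needed(160, 30, 400)
print(f"{cols} columns x {rows} rows = {total} frames")
```

At 400mm the per-frame field of view on full frame is only about 5 x 3.4 degrees, which is why the frame count climbs into the hundreds so quickly.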
There are several other brands of panorama heads out there including Nodal Ninja and Clauss-Rodeon but I like the GigaPan the best since it is automated, simple and reliable. The GigaPan is also able to be connected to an external power source so the battery won’t run out during large image capture sequences.
Computer: I didn’t think that the memory card would be large enough for all the images to be stored on it, especially since I was going to be making multiple attempts at capturing the image. I decided to shoot with the camera tethered to a MacBook Pro via Canon’s EOS Utility. This software not only allowed me to write the images directly to my hard drive, it also allowed me to zoom into the image in live view to get critical focus. Just in case something went wrong, I simultaneously wrote the images to an external hard drive as a backup.
Camera Settings
Aperture: I set the aperture to f/8. This was done for a couple of reasons. The first was to increase the resolution of the image. Although the Canon 100-400mm f/5.6 II is a very sharp lens shooting wide open, stopping down the lens a little bit increases its sharpness. Stopping down the lens also reduces vignetting, which is a darkening of the edges and corners of the image.
Although the vignetting is minimal on the lens, I have found out that even the slightest amount of vignetting on the frame will result in dark vertical bands being shown in the final stitched image. I didn’t want to stop down the aperture too much because I was worried about diffraction reducing the resolution of the image.
Focal Length: I shot at 400mm so I could capture as much detail in the city as possible. I could have used a 2x teleconverter, but there was so much wind at the site that I was afraid the camera would move around too much and the image would come out blurry.
ISO: I shot at an ISO of 640 due to all the wind at the site. I knew that using a high ISO would increase my shutter speed and reduce the chance of vibrations from the wind blurring the photo.
Shutter speed: All of these factors combined gave me a final shutter speed of 1/2700.
RAW: I shot in .RAW (actually .CR2) to get the maximum resolution in the photos.
Live View: I used the camera’s live view function via Canon’s EOS Utility to raise and lock the mirror during the capture sequence. This reduced the chance of mirror slap vibrating the camera.
GigaPan Settings
The GigaPan has a lot of different settings for the capture sequence of the images. One can shoot in columns from left to right, or in rows from the top down and left to right, or any combination thereof. I chose to shoot the image in rows, from the top down, going from left to right. Even though the image capture sequence would only take an hour or so, I have found that shooting in this sequence makes for a more natural-looking image in case of any change in lighting conditions. I also included a 1-second pause between the GigaPan head moving and the triggering of the camera to reduce any shake that may have been present from the pano head moving.
Image Capture
I had to go at it a few times but the final image was taken with 960 photos with each one having a .RAW file size of over 60MB.
Image Processing
Two Image Sets: Each day of the shoot presented different problems. One day the city was clear, but the horizon and volcanoes were obscured with clouds. On another day the horizon was totally clear. I decided to create two different image sets and combine them together to make the final image. One large image set was used for the clear sky and volcanoes, and another image set was used for the city.
Pre-Processing: For the horizon and volcanoes, I selected an image that I felt represented an average exposure of the sky, brought it into Photoshop, and corrected it to remove any vignetting.
For the image set of the city, I found an exposure of the city and color corrected and sharpened it the way I wanted before bringing the images into the stitching software. I recorded the image adjustments that I made and created a Photoshop droplet with them. I then dragged and dropped all the files onto the droplet and let it run, automatically correcting each image of the photo sequence. It took a long time, but it worked.
Autopano Giga: After the images were captured I put all of them into Autopano Giga. Autopano is a program that uses something called a scale-invariant feature transform (SIFT) algorithm to detect and describe local features in images. These features are then matched with features in other frames and the images are combined or stitched together. The software is pretty straightforward but I did a few things to make the final image.
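As a rough illustration of what that feature-matching step does, here is a minimal two-frame sketch in Python with OpenCV; the file names are placeholders, and Autopano’s actual pipeline is far more involved than this:

```python
import cv2
import numpy as np

# Two neighbouring frames from the panorama grid (placeholder file names).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in each frame.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors, keeping only matches clearly better than the runner-up.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate how one frame maps onto the other; the stitcher does this for every pair.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{len(good)} good matches")
```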
Anti-ghosting: Autopano has a feature called “anti-ghosting”, which is designed to avoid blending pixels that don’t match. This is useful for removing half cars or half people that could show up in the image due to the movement of objects between frames.
Exposure blending: Just in case of any vignetting or differences in the lighting, I used the exposure blend function in the software to even out the exposures and make a nice blend.
.PSB: .PSB stands for Photoshop Big. The format is almost identical to Photoshop’s more common PSD format except that PSB supports significantly larger files, both in image dimension and overall size.
More specifically, PSB files can be used with images that have a height and width of up to 300,000 pixels. PSDs, on the other hand, are limited to 2 GB and image dimensions of 30,000 pixels. This 300,000-pixel limit is the reason why the final image has a 300,000-pixel width. I could have made the image a little bigger but I would have had to use a .kro format and I’m not sure that I would have been able to successfully blend the two images (one for the horizon and one for the city) together.
Computer: To stitch the .PSB together I used a laptop. I was worried that my laptop wouldn’t have enough horsepower to get the job done but it worked. The computer I used had the following specs: MacBook Pro (Retina, 15-inch, Mid 2015), 2.8 GHz Intel Core i7, 16GB 1600 Mhz DDR3, AMD Radeon R9 M370X 2048MB.
Hard Drive: The important thing to know when processing gigapixel images is that due to the large sizes of the images the processor speeds and RAM don’t really matter that much.
Since the processor cache and RAM fill up pretty quickly when processing an image of that size, the software directs everything to the hard drive, where it creates something called a “page file” or “swap file”. A page/swap file is a reserved portion of a hard disk that is used as an extension of random access memory (RAM) for data in RAM that hasn’t been used recently. By using a page/swap file, a computer can use more memory than what is physically installed in the computer. However, if the computer is low on drive space, the computer can run slower because of the inability of the swap file to grow.
Since everything is happening on the hard disk, it is really important to have a hard drive that is not only fast but also has a lot of space, since it fills up quickly and the software won’t process the image if there isn’t enough space available; the swap file size can get gigantic. To process the Quito image I first tried a fast PCI SSD that had around 500GB of space, but the drive filled up. I took the computer back and got one with a 1TB PCI SSD, and it was able to process the image.
Photoshop: I had to stitch one image for the horizon and another image for the background. Once these were done, I opened them up in Photoshop and used the eraser tool, set to a large diameter, to manually blend them together. I then flattened the image and saved it as a .PSB file.
Image Tiling: I used a program called KRPano to tile the image. If I uploaded the resulting .PSB file to the internet, it would take forever to load before people could see it. KRPano divides up the image into layers of small tiles. Each image you see is made up of low-resolution tiles. As you zoom into the image, different small image tiles are quickly loaded and displayed, which allows people to quickly view and explore the image without having to load the entire image. About 174,470 tiles were created for this image.
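The tile count comes straight from a simple image pyramid: each zoom level is half the size of the one below it, and each level is cut into fixed-size tiles. Here is a rough Python sketch; the 512-pixel tile size is an assumption, and KRPano’s own level structure differs, which is why this estimate won’t exactly reproduce the 174,470 figure:

```python
import math

def count_tiles(width, height, tile=512):
    """Count tiles in a multi-resolution pyramid like the one KRPano builds."""
    total = 0
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        total += cols * rows
        if cols == 1 and rows == 1:
            break
        w, h = math.ceil(w / 2), math.ceil(h / 2)  # each level is half the size
    return total

# Using the final image dimensions quoted above and an assumed 512px tile size.
print(count_tiles(300_000, 55_313))
```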
Image Upload
Once all the image tiles were created I compressed them into a .zip file. I felt that it would be easier to upload one large file instead of over 174,000 separate small files. The image upload went fine and I manually unzipped the image inside of the Hostgator server using FileZilla. It is good to check with the hosting company to make sure that they allow files to be unzipped inside their servers.
Website
Once the image was created, tiled and uploaded, I made a simple website and embedded the .html file into an iframe so it could be displayed.
You can view the photo through an interactive viewer on Quito Gigapixel.
Closing
I hope that this little guide proves helpful for all of you. Gigapixel technology is really interesting and fun to try out. I have done quite a few gigapixel images but am by no means an expert and am always interested in learning more.
About the author: Jeff Cremer is a Lima, Peru-based photographer who works in the Amazon. You can find more of his work on Rainforest Expeditions and on Twitter and Instagram.
[Read More ...]
The following blog post was republished from: https://www.proton-pack.com/
How I Created a 16-Gigapixel Photo of Quito, Ecuador was originally posted by https://www.proton-pack.com
0 notes
gryphon1911 · 7 years ago
Link
© Ricoh Imaging
Background
There are TONS of cameras out there...some you might not even realize exist. One of these relative unknowns is from a relatively well-known camera maker, Pentax: the diminutive Q series cameras. At the time of release, the Q series camera was the smallest interchangeable lens system on the market. Not sure if that is true anymore...but this camera system is tiny. I have seen a few online friends that used this system and was impressed by the IQ coming from its small 1/1.7" sensor. The initial cost of these cameras new precluded me from experimenting then...but with some online sleuthing and patience, you can now get this system for a fraction of what it cost new. I was lucky enough to find a whole system for sale. The Pentax Q7 in black and silver with 4 lenses will be the focus of this review. Follow along to see what we thought of the IQ, the handling, and the good/bad of this little system. Here we go!
© Ricoh Imaging
Handling/Weight/Size
This is a small camera. Captain Obvious has just entered the room. We already covered that in the opening of this article...but it bears mentioning again. I've got some pretty meaty hands and I need something to grip. My smallest camera that I feel I can hold comfortably for serious photography is an Olympus EM5 style body. So, even though I knew there was a risk that I might not be able to handle this camera body well, I gave it a try.
Yes, it is very small. My basic grip on this camera body is index finger on the shutter release with the thumb on the small bump on the back. My middle finger fits into 90% of the front grip; the ring finger barely sits at the bottom and most of the time slips off. I wish they made an add-on grip for it. Another inch or two at the bottom would be just awesome. It's almost too small to hold with one hand if you needed to do that. I end up using a wrist strap and my left hand.
Even with the small camera, the buttons are in a good location. The rear dial is easy to get to with the thumb. The mode dial has enough resistance that you will most likely not bump it out of position. I never did throughout my testing.
Here are some camera dimensions for you:
Pentax Q7 - Body dimensions: 4x2.3x1.3 inches; Weight: 7.1 ounces (200g) with battery
Olympus EM5 Mk II - Body dimensions: 4.9x3.4x1.8 inches; Weight: 16.5 ounces (469g) with battery
Nikon D500 - Body dimensions: 5.79x4.53x3.19 inches; Weight: 30.34 ounces (860g) with battery
front view of cameras (camerasize.com)
Top view with kit lenses (camerasize.com) Pentax 5-15/2.8-4.5, Olympus 14-42, Nikon 18-55 AF-P
Pentax had a great idea when it came to the battery and SD card door.  They are both on the side of the camera, each on the opposite side of the body.  SD card door is on the right, battery on the left.
Notable Features 1 The Front Dial
Similar to what you might see on the Olympus PEN-F, there is a front dial with 5 positions.
The front dial can be used for a few functions. You can have custom image modes like B&W, portrait, bright, etc. set to the 4 positions. If you create a user-defined custom image mode (you can save 3), you can use them here too. Other options are toggling the built-in ND filter, aspect ratio, focus method, and focus peaking. Unfortunately, those options are not allowed to be mixed together. For example, I cannot use the first 2 positions for custom image and the last 2 to toggle the ND filter off/on. It's either all custom image settings or all ND toggles. It's a shame that you are limited in this way. A great idea, but not taken to a logical conclusion. Not 100% sure what I will settle on for this feature.
Notable Features 2 Bokeh Control
On the mode dial, there is the lettering "BC".  This is for bokeh control.  There are 3 levels of control, each blurring more and more.   I found through my cursory testing that 2 and 3 are way too much.  1 worked just fine.
Basically you have 24-211mm field of view covered here in this little kit.
Notable Features 3 RAW In The Buffer When Shooting JPG
This one shocked and delighted me. WHY IS EVERYONE NOT DOING THIS?! Basically, when shooting JPG only in camera, the Q keeps the last image's RAW data in the buffer. It is accessible to you, so you can go in, save it, or do in-camera RAW processing on it. Think about how awesome and cool that is. Say you are shooting JPG and the last shot you took is pushing the capabilities of the sensor and JPG bit depth. Press the image review button to check out the shot. Look to the right and you'll see that by pressing the exposure comp button, you have the option to save the RAW file.
Image Quality
The 1/1.7" sensor can get a lot of heat from some people. Yes...it is small. Yes, it doesn't have the dynamic range...it is "only" a 12mp sensor. However, Pentax has done quite a good job in processing from this little camera. Shooting JPG and for color images, I don't like going over ISO 1600. For monochrome, I'm OK all the way up to ISO 3200. Even with that, the processing that Pentax does is not really for me. The colors are not to my liking, and even with sharpness turned down some, there is some artifacting that just looks bad. RAW gives you a lot more latitude, and you can run the Adobe DNG files in Lightroom or your processor of choice and really get the most out of those files. I experimented quite a bit and found a good recipe in Lightroom that I feel gives me superior IQ over the in-camera JPG engine. We'll provide plenty of sample images in the lens section below. Bottom line - I'll be shooting this camera in RAW all the time. It's the best way to get the quality that appeals to me. You can get way more quality out of this camera than it has any business producing.
Shake Reduction
This tiny little camera has in body stabilization.  Pentax calls it SR for shake reduction.  It works fairly well.  Not in the same league as an Olympus 5 axis IBIS or Nikon's newest VR....but it will save your bacon in a pinch.  It's just nice to see it included. I'm not sure if it only kicks in when you depress the shutter or not.   On longer lenses, the LCD seems shakier than I think it should...but I'll have to do more research into it.
Auto Focus
I found that the other reviewers of this camera were right. In good light, the AF is effective and relatively quick. It is not going to beat a current Micro Four Thirds camera, but on the whole it will not disappoint for most applications. Stick to S-AF and you are good to go. I'd ignore C-AF.
You have multiple focus modes:
Face - face detection
Continuous - AF tracking of subjects
Spot - the AF is locked to the middle of the frame
Auto - you select the size of the focus area, of which there are 3 sizes, and the camera determines what in this area to lock onto. It actually does a fairly good job at this.
Select - the AF box (small area) can be moved by first pressing the OK button and then using the direction buttons on the back to move it. The AF array does not cover the whole sensor, so you will see your boundaries as a thin black box on the back of the LCD.
Manual Focus
Your typical focus-by-wire affair. I'm not a big fan of this, and the very small focus rings don't help it much here. However, MF is there should you need it. Pentax also included a menu option to allow for full-time AF override just by turning the focus ring. I had to turn this off, as I found just the slightest of touches would throw you into MF mode.
Battery Life
With the body being so small, you have limited space for a big battery. CIPA ratings on this camera body are 250 shots. I doubt most people would get that; having to rely on the rear LCD for everything is going to churn through some battery pretty quickly. I recommend getting a few extras. During a day-long shooting session, I made it through 3/4 of a day and came home with 280 images. That was shooting RAW+JPG, with image review and changing camera settings. Technically one could say that it took double that number, one RAW and one JPG. I'm going to run the camera another day and shoot just RAW to see the number of shots I can get. All in all, for the size of the battery and the fact that it needs to use the 3" LCD for everything, not that bad.
Video
Nothing really special here. A standard 1080p offering. This would not be my first option, and to be honest a modern cell phone will probably do just as well if not better, since they do 4K. The benefit of this system is the ability to use lenses, which a cell phone lacks.
The Lenses
The Q7 has a "crop factor" of 4.7, so multiply the focal length by this number to get the approximate field of view (FOV) of these lenses. The lens numbers denote the order in which they were released; the field of view on a 135 equivalent is in parentheses below.
8.5mm f/1.9 - The 01 Standard Prime (40mm)
5-15mm f/2.8-4.5 - The 02 Standard Zoom (24-70mm)
15-45mm f/2.8 - The 06 Telephoto Zoom (70-212mm)
11.5mm f/9 - The 07 Shield Lens - body cap lens (55mm)
The real jewel here is the original kit lens, the 8.5/1.9. Sharp and fast. And fitting the theme...it's tiny. The 07 Shield lens is smaller, but not in the same league.
So let's get into the images from these lenses. Most of these shots were taken wide open as well. Diffraction is going to hit pretty quickly if you go much past f/5.6, so shooting from f/1.9 through f/4.5 is where I stayed most of the time.
We'll start with some images from the 01 Standard Prime, 8.5mm f/1.9. This lens is excellent optically. Very sharp even wide open.
8.5mm f/1.9 lens (01 Prime) 1/60, f/2.8, ISO 250
8.5mm f/1.9 lens (01 Prime) 1/80, f/1.9, ISO 200
8.5mm f/1.9 lens (01 Prime) 1/2000, f/1.9, ISO 100
8.5mm f/1.9 lens (01 Prime) Higher ISO Example (B&W conversion in On1 Effects 2018 1/60, f/1.9, ISO 2000
Then let's move to the 02 Standard Zoom, the 5-15mm f/2.8-4.5.   Given some of the online reports, I was expecting this lens to be quite a disappointment.  On the contrary, it is rather quite good and a bit better than I anticipated.  It is true that it is weakest at 15mm, but very good up through 12mm.
5-15mm f/2.8-4.5 (02 Standard Zoom) 1/500, f/2.8, ISO 200 @ 5.5mm B&W processed in Nik Silver Efex Pro 2 from RAW
5-15mm f/2.8-4.5 (02 Standard Zoom) 1/250, f/3.5, ISO 200 @ 9.5mm
5-15mm f/2.8-4.5 (02 Standard Zoom) 1/60, f/4, ISO 125 @ 8.2mm
5-15mm f/2.8-4.5 (02 Standard Zoom) 1/200, f/4, ISO 100 @ 9.8mm
5-15mm f/2.8-4.5 (02 Standard Zoom) 1/200, f/3.5, ISO 100 @ 9.8mm
The shield lens, 07 - 11.5mm f/9.  Not sure when I would ever use this lens outside of this testing.  I was not fond of this lens.   Unless you are super into lomo type photography I'd skip this lens.  Here are some samples.
11.5mm f/9 (07 Shield Lens) 1/60, f/9, ISO 125
11.5mm f/9 (07 Shield Lens) 1/60, f/9, ISO 320
11.5mm f/9 (07 Shield Lens) 1/60, f/9, ISO 125
11.5mm f/9 (07 Shield Lens) 1/60, f/9, ISO 250
11.5mm f/9 (07 Shield Lens) 1/125, f/9, ISO 100
The constant f/2.8 Telephoto Lens, the 06 15-45mm f/2.8.  Not the optical equivalent of the 01 Prime, but very good.   The constant f/2.8 is a great option to have for this 70-200-ish FOV lens.
15-45mm f/2.8 (06 Telephoto Zoom) 1/2500, f/2.8, ISO 100 @ 45mm
15-45mm f/2.8 (06 Telephoto Zoom) 1/200, f/4, ISO 160 @ 45mm
15-45mm f/2.8 (06 Telephoto Zoom) 1/2500, f/4, ISO 100 @ 15.1mm
15-45mm f/2.8 (06 Telephoto Zoom) 1/125, f/2.8, ISO 320 @ 22mm
15-45mm f/2.8 (06 Telephoto Zoom) 1/400, f/2.8, ISO 200 @ 15mm Through some dirty coffee shop glass
Other Misc. Items of Note
Shutter Sound: The shutter sound is quiet, mainly because for most of the Q lenses the shutter is of the leaf variety and found within the lens itself. The leaf shutter handles everything up through 1/2000 of a second, and any shutter speed higher than that is handled by an electronic shutter up to 1/8000 of a second. A second benefit of the leaf shutter in some of the lenses is that you get flash sync up to 1/2000. You can go full electronic if you wish, or the camera can determine when to use either.
Built-in Neutral Density (ND) Filter: Some lenses contain an ND filter built into them. Useful for when you run out of shutter speed or if you do not want to use the electronic shutter above 1/2000.
Quick Menu: If you are familiar with Fuji's Quick Menu, Olympus' Super Control Panel (SCP) or the Nikon MyMenu...this is the Pentax version. Just about any shooting option you want to get to quickly is here. Just press the INFO button on the back of the camera when in shooting mode to bring up the grid.
TOP ROW Option 1 is the custom image selection. Here you can pick the type of jpg you want like natural, monochrome, cross process and tweak them. Option 2 is Digital Filter. Options here are things like Toy Camera, high contrast, tone expansion, fish eye, etc. Option 3 is in camera HDR. Option 4 and 5 are highlight and shadow control respectively. MIDDLE ROW Option 1 is metering - matrix, center and spot weighted. Option 2 is toggling on/off the in built ND filter(if the lens has that). Option 3 is toggle between AF and MF. Option 4 handles the focusing methods.  Face, continuous, auto, select (movable single AF point) and spot (single center focused AF point - not movable). Option 5 is the focus peaking toggle. BOTTOM ROW Option 1 is toggle for lens distortion correction. Option 2 is aspect ratio. Options are 4:3, 3:2, 1:1 and 16:9. Option 3 is image save format.  JPG, RAW or RAW+JPG. Option 4 is the JPG quality. Option 5 toggles shake reduction.
Bottom Line
This is not a camera for everyone, and you may be thinking why on earth would I even get one. I saw some good things from it in the online forums, and the price is so low now that when you find a bargain it is a good time to experiment. I'm a big proponent of viewfinders and I'm not really falling in love with the rear LCD. Not because it is horrible...but I just prefer the view and stability of an optical or electronic viewfinder. I might find an alternative to it in some way. Not sure what that looks like at the moment. So why use this when there is so much on paper that is against it when looking at other small, interchangeable lens camera systems? Honestly, it is a bit of fun. It is something different, and I've not been exposed to a Pentax anything before. There are some really well laid out controls and menu functions as well. I always say that you can learn something from everyone...and gear is no different. All technologies have a contribution. I'm very happy with what I'm seeing up through ISO 1600, and in good light the files hold up well when shot in JPG. You can eke out a bit more quality if you shoot in RAW. With the 01 Prime, you can put this thing in your pants pocket, it is that small. It is something you can keep with you all the time with little hassle. You will be a bit more limited with it...but just remember that and shoot to the strengths of the system. It would not be my first or favorite choice for very dark, low-light shooting unless I had the ability to shoot on a tripod and at base ISO.
0 notes
fastmusclecar123 · 7 years ago
Photo
Tumblr media
New Post has been published on http://fastmusclecar.com/best-muscle-cars/the-best-11-tips-for-muscle-car-photogaphy/
The Best 11 Tips For Muscle Car Photography
Today’s lesson boys and girls is going to be on car photography. Sit up straight, arms folded and pay attention. This is important.
It’s all too common to see car photos that are out of focus, badly cropped and badly edited, even for vehicles over $100,000. So it is now our job to make sure everybody can take great photos, so they can enjoy them for years to come.
The following tips can be used to take the type of photos you see in commercials and ads, but the real aim here is to help you take better photos of cars in general and be familiar with the elements needed to get a great shot of a car. You can use these tips for getting better car photos for selling a car, for your own enjoyment or even if you have aspirations of becoming a car photographer.
Tools needed:
1 x camera
1 x tripod
1 x photo editing software
1 x computer (PC or Mac)
6 x beers for when you have finished
Cameras. There are many different cameras, from smartphone cameras to DSLRs (digital single-lens reflex – the big fancy ones), to even bigger, fancier medium format cameras. As with everything in life, you get what you pay for. You don’t have to spend tens of thousands of dollars on a camera; you just stick to the rules of having a good lens and a good camera body. Photography is all about light. The better the quality of light going into the camera, and the larger the area it hits in the camera (the sensor or film), the better the image. So it’s usually the rule that the larger lenses and larger camera bodies have better quality, but are the most expensive.
Medium format cameras These cameras are generally used for commercial photography. You don’t need one of these, unless you have the money to splash out. They do take the most beautiful and richest images, but you can expect to pay around $40,000, and that’s just for the camera body, not including a lens. They are also arguably not as good at being all-around cameras as the DSLRs below. They are made to get the very best out of optimally lit scenes, where you have the budget to spend all day lighting and then you take the picture.
DSLRs Image by digital camera world. (The little ‘green’ thing on top of the camera in the image above is a ‘spirit level.’ Straight images are important!)
DSLRs are the most common cameras to get the best quality and the best all-round functionality. They may seem complicated initially, but really they just have lots of options, as they are meant to work in any condition. You will come to love all these options to get real control of your images. Most people go for either the Canon or Nikon brands. For some of the best DSLR reviews online, go to www.dpreview.com. I always go there when I’m thinking about buying a new camera body or lens.
The advantage with a DSLR is that you can change lens whenever you want.
Lens Trying to sum up camera lenses in a paragraph or two, when encyclopaedias are written on the subject, is quite hard. I recommend you buy a DSLR, but you have to choose your brand (most likely Canon or Nikon) and start with a 50mm lens. A 50mm lens sees roughly the same as your own eye, but as it is not a zoom lens you have to move closer to or further away from your subject to zoom in and out. A fixed 50mm focal length lens or ‘prime lens’ (a single number like 50mm, instead of a zoom lens, e.g. 17mm-70mm) is always better quality than a zoom lens. If you buy a zoom lens, the better quality ones are the ones with the smallest zoom range (e.g. a 17-70mm zoom lens will generally be better quality than a 17-300mm lens). For cars, don’t go below a 10mm lens or you will get the fisheye look (unless you want that). Above around a 110mm lens is probably too ‘telephoto’ for your needs.
Try to stay away from cheap third-party lenses; stick to the brand of your camera body. A good lens will be expensive, but it will last you years. A cheap lens costs less, but you outgrow it in no time. Cheap lens = false economy.
Smart phones You can capture some great looking images on smartphones, especially with masses of post-editing software available. But no matter how good the post editing software, the biggest negative of the smart phone is its very small lens. Essentially, it is a pinhole camera and you are relying very heavily on the sensor and software to do the heavy lifting for you. So if you do have the choice, always go for a DSLR.
Tripod Humans can’t keep still, we are not designed that way. We are made to keep moving. Buy a full-size tripod and if you can, buy a ‘camera trigger.’ It will allow you to trigger the camera remotely with no vibrations. Once you own one of these, there is no going back.
Photo editing software Processing your photos is fun and you need software to do the job.
Lightroom – Highly recommended – http://www.adobe.com/uk/products/photoshop-lightroom.html
Photoshop is the de facto standard, which you can use online – http://www.photoshop.com/tools
You can also buy the full-blown version of Photoshop here – http://www.photoshop.com/
For free photo editing software, try ‘The Gimp’ (yes, it’s called that!) – http://www.gimp.org/
Adobe RAW file converters (edits and turns RAW files into JPEGs) – http://www.adobe.com/support/downloads/product.jsp?product=106&platform=Windows
Computer Most computers built in the last five years will have enough horsepower for your photo editing needs.
Preparation You should have a camera (DSLR or smartphone), a tripod, a car, a scene, and light. You now have to experiment with the above elements to get the best photo. Simple!
Prepare the car Just make it as beautiful as it can look. Wash it, wax it, thoroughly clean the interior. Think of it like dressing and putting make-up on a model.
Setting the scene Use your best weapons, your eyes. Scan a possible scene like you are a Security Droid. You can get as creative as you want with this step, but the goal here is to get used to getting basic, great images of a car. You can have driving shots, interesting backgrounds, motion blur, interacting with nature, night shots, panning shots; the list goes on. So start with a location. When choosing a location, remember the car is the star. The location background should not distract from the car. Ideas – top of a car park overlooking the city, an industrial part of town, derelict warehouses, next to modern architecture, on the crest of a hill overlooking a landscape, sunset in the background, the first few hours after sunrise. You could hire a fully professional studio or large indoor garage, but why pay for that when there are so many great locations? Once you are happy with the scene you can then start to get creative with images like different angles, vantage points, minute details, even parts of the car no one else would think of taking a photo of. Everything is interesting on a car if it is taken in the right way.
Look, look again, then look again. Watch out for common mistakes like fingers over the lens, accidentally cropping off part of the car, and blurred images.
Rule of thirds image – http://mag.desert-motors.com/articles/2009-0708-phototips.html This will help you set the scene and position your car for the best photo. The rule of thirds is simply a grid, like in the image above, which helps you position objects in your photo. Position your car around where two lines cross. Always put your horizon, or where land meets sky, around the top or bottom horizontal line; the example image has the horizon near the top line. If you always use the rule of thirds grid on your camera (most cameras have an option to overlay the grid on every image you take), it should also stop you cutting off the edges of the car.
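If you like to tinker, here is a small Python sketch (purely illustrative, not tied to any camera or editing tool) that works out where the rule-of-thirds lines and their intersections fall for a given image size, so you know roughly which pixel regions to place the car around:

```python
# Rough sketch: compute rule-of-thirds grid lines and intersection points
# for a given image size. The 6000 x 4000 example is just a typical 24 MP frame.

def rule_of_thirds(width, height):
    # Vertical and horizontal third lines, in pixels.
    v_lines = [round(width / 3), round(2 * width / 3)]
    h_lines = [round(height / 3), round(2 * height / 3)]
    # The four intersections are the strongest positions for the car.
    intersections = [(x, y) for x in v_lines for y in h_lines]
    return v_lines, h_lines, intersections

if __name__ == "__main__":
    v, h, points = rule_of_thirds(6000, 4000)
    print("Vertical lines at x =", v)
    print("Horizontal lines at y =", h)
    print("Place the car near one of:", points)
```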
Lighting It's now time to plan the lighting. You can light your scene with loads of carefully positioned flashguns and lights, or you can simply use the natural light available. For simplicity, we are going to use natural light. The hours after sunrise and before sunset usually give the best light and the fewest harsh shadows. Midday usually gives the harshest shadows, unless that is what you want. Knowing how to use a flash can make an average photo look incredible, but that is outside the scope of this article, so we recommend you read up on flash photography and experiment from there.
Shiny, bouncy light Very shiny cars suffer from the same problem as photographing someone wearing glasses: light bounces off the glasses so you don't see their eyes. The same happens with some car colors. In this case use a polarising filter, which can get rid of glaring light. Some car colors are matte-looking; these may actually look better in direct sunlight.
Night-time This is where you will need to use artificial light, but the results are worth it. One trick performed by professional car photographers is to take a series of images of the car with a light source in many different positions. The series of images is then edited together, so the final image looks like a single photograph of the car, but lit in a way you simply couldn't capture in one single exposure. Lots of time is needed for this process!
Reflections Some are good, some are bad; always be making visual notes of the reflections in the car. The general rule – no reflections, as they are simply distracting. The car should be the center of focus. But let's say a car has tons of chrome: a reflection of an old street sign on the bumper may look good, though you will have to evaluate this at the time. Above all, you should not be reflected in the car yourself.
Camera settings There are many different ways to skin a cat, rabbit, octopus… whatever the saying is, and it is the same with the many options on a camera; it depends on the situation. You can put any camera on fully 'automatic' and let the camera decide the settings for you, but you will soon outgrow this and eventually move to fully 'manual' mode, which allows you to change settings to exactly what you want. 'F' numbers The basic rule for lens 'F' numbers: the bigger the 'F' number, e.g. f/16, the smaller the hole in the lens (the aperture), which keeps everything in focus from front to back. The smallest 'F' number, e.g. f/2.5, has the biggest aperture, which usually keeps only things in the foreground in focus while everything in the background is blurred. But this depends: you might use a small 'F' number to get just the car in focus and everything behind it blurred, yet if it is a landscape shot and you are more than, say, 10 m from the car, the background may still be more in focus than you would like. Experiment.
Shutter speed The faster the shutter, the quicker the camera captures the image. Usually, the faster the subject is moving or the brighter the light, the faster the shutter speed, but again there are caveats: you can take a picture of a moving car with a slow shutter speed, panning with it, which blurs the background and keeps the car in focus. Again, experiment. Shutter speed and 'F' numbers go hand in hand; in the semi-automatic modes, alter one and the camera alters the other.
ISO This is like the old film speed. In the film days, if you were planning to shoot in low light, you bought a higher ISO number film; tons of light = a lower number, e.g. ISO 50. Now you can change it on the fly with your digital camera. Most situations are ISO 100-400, so stick to that range for starters. If you have next to no light, a higher ISO number is better, but the higher the number goes, the grainier the image becomes. It's a trade-off: low light = slow shutter speeds, so you need more light, or you raise the ISO and/or lower the 'F' number, but low 'F' numbers give a shallow depth of focus… argh!
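To see how the three settings juggle against each other, here is a small, simplified Python sketch of the standard exposure value (EV) formula. The specific settings below are only examples, and real cameras round to the nearest third of a stop, but the idea holds:

```python
import math

def exposure_value(f_number, shutter_seconds, iso):
    # Simplified model: EV = log2(N^2 / t) - log2(ISO / 100).
    ev_at_iso100 = math.log2(f_number ** 2 / shutter_seconds)
    return ev_at_iso100 - math.log2(iso / 100)

# Settings with (roughly) the same EV give roughly the same exposure:
print(exposure_value(8.0, 1 / 125, 100))   # f/8,  1/125s, ISO 100
print(exposure_value(16.0, 1 / 30, 100))   # f/16, 1/30s,  ISO 100 (2 stops smaller hole, 2 stops longer shutter)
print(exposure_value(16.0, 1 / 125, 400))  # f/16, 1/125s, ISO 400 (2 stops smaller hole, 2 stops more ISO)
```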
White balance This dictates how natural the colors will look. Professionals will take a test image of a 'grey' card, which is a neutral-colored piece of card. When they post-edit, they sample the grey card's color in the test image and use it to color correct the rest of the images. Unless I am getting creative or on a professional job, I will usually set this to either neutral or automatic, then color correct in software.
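If you want to see what the grey-card trick does under the hood, here is a rough Python/NumPy sketch of the idea. The patch coordinates and image handling are purely illustrative, not a recipe for any particular editing tool:

```python
import numpy as np

def gray_card_gains(test_image, patch):
    # patch = (y0, y1, x0, x1): where the grey card sits in the test frame.
    y0, y1, x0, x1 = patch
    card = test_image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Per-channel gains that pull the card's R, G, B back to a neutral grey.
    return card.mean() / card

def apply_white_balance(image, gains):
    balanced = image.astype(np.float64) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Usage (illustrative): sample the card once, re-use the gains for the whole shoot.
# gains = gray_card_gains(test_frame, patch=(100, 200, 300, 400))
# corrected = apply_white_balance(next_frame, gains)
```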
RAW or JPEGs? I have to mention this as it is a pet peeve of mine. RAW images are essentially digital negatives: they contain the basic information of the image, which allows you to post-process it without degrading it. JPEG is a compression format; essentially, you are letting the camera decide how the image should look, and each post-processing step will slightly degrade the image because it has already been processed. If you have a DSLR, or even a more compact camera, you should be able to shoot in RAW or JPEG. Only shoot in RAW format, unless you have little space left on your camera's memory card, in which case JPEGs are allowed. I think of it as the difference between having a film negative and a Polaroid: Polaroids are okay for instant images, but not the best images. … Calm down, breathe!
Start with 'auto' mode on your camera and note the shutter speed, ISO and 'F' number, then gradually work through the manual settings, one at a time, so you know how each setting change affects your images. It's a juggling act.
Post Editing You can use the photos straight off your camera as-is, but some post-editing will always help. As I always shoot in RAW format, this file type needs to be turned into a .JPEG or .TIFF file. My tool of choice is Adobe's Lightroom. This is basically a 'digital darkroom', allowing you to edit the image non-destructively. Most of the time I don't even use Photoshop; I just live in Lightroom. Most digital SLRs come with free software to convert RAW files to other file types, and most will let you do the same basic edits you can in Lightroom; Lightroom is just a more fully featured product.
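If you ever want to script the RAW-to-JPEG conversion step rather than doing it in Lightroom, libraries such as rawpy and imageio can do a basic job. This is only a minimal sketch under that assumption, and the filename is an example:

```python
# Minimal sketch using the rawpy and imageio libraries (pip install rawpy imageio)
# to turn a RAW file into a JPEG outside of Lightroom.
import rawpy
import imageio

with rawpy.imread("car_shoot_001.ARW") as raw:
    # Demosaic using the camera's recorded white balance; the options are worth experimenting with.
    rgb = raw.postprocess(use_camera_wb=True)

imageio.imwrite("car_shoot_001.jpg", rgb)
```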
White balance, levels, contrast There are many different edits you can make when post-processing an image, but tweaking the basics is a good starting point. The white balance is the overall color temperature of an image: too 'cold' and your image will look like it has a blue filter; too 'warm' and it looks like it has a yellow filter. Your aim is to get the most natural colors, which is usually somewhere in the middle. Another advantage of RAW files – white balance is easy to tweak with a RAW file and a pain with JPEGs. You never want a scenario where a potentially good photo is lost because you took it as a JPEG and can't fix the white balance. Levels control how light and dark the lightest and darkest parts of the image appear. If something looks too dark in an image, you can move the levels to make the darkest parts slightly lighter. If you want a more basic version of levels, use the familiar contrast setting.
Cropping A simple way to think about how to crop an image is to use the 'rule of thirds' above. If you make sure your car sits roughly where two lines intersect and no element of the car falls outside the frame, you should be in the ballpark. A well-cropped image can make the difference between an average image and a great one. Check out the image above as a comparison.
The more you learn about how your camera and your post-processing software work, the better your images become. Tip – play with one camera setting at a time on the same scene, e.g. 'F' numbers: use the full range of numbers on the same scene and compare the images.
Beers Available from most outlets. For the last step in the process after all work is completed….. and only then!
If you stick to the basics and the simple rules, and are very observant, continually checking all the elements of an image, you should end up with some fantastic car photos that you can be proud of.
I think of taking a photo as the opposite process to painting a picture. With photography you build and light the scene as you want it, then you take the photo. Clicking the shutter button is just one element in the process. This is why movies have a 'Director of Photography': they know how to build and light any scene to make it look its best. So the next time someone says photography is just clicking a button, tell them 'yeah… and Van Gogh just held a paintbrush!'
Resources Beginners to advanced tutorials – http://www.carphototutorials.com/tutorials.html
Photo software
Lightroom – Highly recommended – http://www.adobe.com/uk/products/photoshop-lightroom.html
Photoshop is the de facto standard, which you can use online – http://www.photoshop.com/tools
You can also buy the full-blown version of Photoshop here – http://www.photoshop.com/
For a free photo editing software, try 'The GIMP' (yes, it's called that!) – http://www.gimp.org/
Adobe RAW file converters (edits and turns RAW files into JPEGs) – http://www.adobe.com/support/downloads/product.jsp?product=106&platform=Windows
0 notes
Text
These Streets Test Shoot
As time has progressed I have become more aware of the amount of content required for the chapters in my book, especially for "These Streets" and "Saturated". I decided to book a trip to London on the weekend of London Fashion Week. I found a low-cost hostel and booked my coach tickets in advance to keep my expenses low. I was hoping that, it being the beginning of LFW, people (especially men) would flood the streets showing off their fashion and style. Though I managed to capture a few interesting individuals, my expectations were not met.

On the day of the trip I came down with a sickness bug, which made the trip difficult as I had low energy and wasn't at my best. However, I tried to make the most of being there. I started at Embankment and made my way round Waterloo, Camden, Shoreditch, Old Street, Victoria, Chelsea and Piccadilly. I managed to get a few nice shots of the landscapes and buildings as well, which I plan on using in the chapter. However, like I said, I wasn't fully satisfied with the trip, and it ended up costing me quite a bit of money once I was down there. In the future I will only go for the day, as on the second day I wasn't as productive as I would have wanted to be due to a bad night and further sickness. I will also try to seek out more local destinations to make travel easier and keep costs lower. I will follow this post with some raw versions of the images I collected in London once I have filtered through the shots I plan on using.

One positive I was very happy with was my ability with the camera and lenses I was using. I took out a Canon 700D with a 100mm and a 35mm lens and the quality of the images was very high. The clarity and background blur were very effective and allowed me to decide on the kind of photographic style I want this chapter to embody.
0 notes
slrlounge1 · 6 years ago
Text
Sony A7RIII Review: Officially The Best Pro Full-Frame Mirrorless Camera On The Market
When I test a new camera, I usually have an idea of how the review might go, but there are always some things that are a complete unknown, and a few things that totally surprise me.
I know better than to pass judgment on the day a camera is announced. The images, and the user experience are what matter. If a camera has all the best specs but lacks reliability or customizability, it’s a no-deal for me.
So, you can probably guess how this review of the Sony A7RIII is going to turn out. Released in October of 2017 at $3200, the camera has already had well over a year of real-world field use, by working professionals and hobbyists alike. Also, it now faces full-frame competition from both Canon and Nikon, in the form of Canon's new RF mount and Nikon's Z mount, although these each have only two bodies and 3-4 native lenses so far.
As a full-time wedding and portrait photographer, however, I can’t just jump on a new camera the moment it arrives. Indeed, the aspects of reliability and sheer durability are always important. And, considering the track record for reliability (buggy-ness) of the Sony A7-series as a whole since its birth in October of 2013, I waited patiently for there to be a general consensus about this third generation of cameras.
Indeed, the consensus has been loud: third time’s a charm.
Although, I wouldn’t exactly call the A7RIII a charming little camera. Little, sure, professional, absolutely! But, it has taken a very long time to become truly familiar and comfortable with it.
Before you comment, “oh no, not another person complaining about how hard it is to ‘figure out’ a Sony camera” …please give me a chance to thoroughly describe just how incredibly good of a camera the A7RIII is, and tell you why you should (probably) get one, in spite of (or even because of) its complexity.
Sony A7RIII (mk3) Specs
42-megapixel full-frame BSI CMOS sensor (7952 x 5304 pixels)
ISO 100-32000 (expandable to ISO 50 and ISO 102400)
5-axis sensor-based image stabilization (IBIS)
Hybrid autofocus, 399 phase-detect, 425 contrast-detect AF points
3.6M dot Electronic Viewfinder, 1.4M dot 3″ rear LCD
10 FPS (frames per second) continuous shooting, with autofocus
Dual SD card slots (one UHS-II)
4K @ 30, 24p video, 1080 @ 120, 60, 30, 24p
Metal frame, weather-sealed body design
Sony A7R3, Sony 16-35mm f/2.8 GM | 1/80 sec, f/11, ISO 100
100% crop from the above image, fine-radius sharpening applied
Sony A7R3 Pros
The pros are going to far outweigh the cons for this camera, and that should come as no surprise to anyone who has paid attention to Sony’s mirrorless development over the last few years. Still, let’s break things down so that we can discuss each one as it actually relates to the types of photography you do.
Because, even though I’ve already called the A7RIII the “best pro full-frame mirrorless camera”, there may still be at least a couple other great choices (spoiler: they’re also Sony) for certain types of working pro photographers.
Sony A7R3, Sony 16-35mm f/2.8 GM, 3-stop Polar Pro NDPL Polarizer filter 2 sec, f/10, ISO 100
Image Quality
The A7RIII’s image quality is definitely a major accomplishment. The  42-megapixel sensor was already a milestone in overall image quality when it first appeared in the A7RII, with its incredible resolution and impressive high ISO image quality. This next-generation sensor is yet another (incremental) step forward.
Compared to its predecessor, by the way, at the lowest ISO’s (mostly at 100) you can expect slightly lower noise and higher dynamic range. At the higher ISOs, you can expect roughly the same (awesome) image quality.
Also, when scaled back down to the resolution of its 12-24 megapixel Sony siblings, let alone the competition, it’s truly impressive to see what the A7R3 can output.
Speaking of its competitors’ sensors, the Sony either leaves them in the dust, (for example, versus a Canon sensor’s low-ISO shadow recovery) …or roughly matches their performance. (For example, versus Nikon’s 36 and 45-megapixel sensor resolution and dynamic range.)
Sony A7R3, 16-35mm f/2.8 GM | 31mm, 1/6 sec, f/10, ISO 100, handheld
100% crop from the above image – IBIS works extremely well!
I’ll be honest, though: I reviewed this camera with the perspective of a serious landscape photographer. If you’re a very casual photographer, whether you do nature, portraits, or travel or action sports, casually, then literally any camera made in the last 5+ years will deliver more image quality than you’ll ever need.
The Sony A7R3’s advantages come into play when you start to push the envelope of what is possible in extremely difficult conditions. Printing huge. Shooting single exposures in highly dynamic lighting conditions, especially when you have no control over the light/conditions. Shooting in near-pitch-dark conditions, by moonlight or starlight… You name it; the A7RIII will match, or begin to pull ahead of, the competition.
Personally, as a landscape, nightscape, and time-lapse photographer, I couldn’t ask for a better all-around sensor and image quality than this. Sure, I would love to have a native ISO 50, and I do appreciate the three Nikon sensors which offer a base ISO of 64, when I’m shooting in conditions that benefit from it.
Still, as a sensor that lets me go from ISO 100 to ISO 6400 without thinking twice about whether or not I can make a big print, the A7RIII’s images have everything I require.
Sony Color Science
Before we move on, we must address the common stereotype about dissatisfaction with “Sony colors”. Simply put, it takes two…Actually, it takes three! The camera manufacturer, the raw processing software, and you, the artist who wields those two complex, advanced tools.
Sony A7R3, Sony 16-35mm f/2.8 GM | 6 sec, f/3.5, ISO 100 | Lightroom CC
The truth is that, in my opinion, Adobe is the most guilty party when it comes to getting colors “right”, or having them look “off”, or having muddy tones in general.
Why do I place blame on what is by far the most ubiquitous, and in popular opinion the absolute best, raw processing software? My feelings are indeed based on facts, and not the ambiguous “je ne sais quoi” that some photographers try to complain about:
1.) If you shoot any camera in JPG, whether Sony, Nikon, or Canon, they are all capable of beautiful skin tones and other colors. Yes, I know, serious photographers all shoot RAW. However, looking at a JPG is the only way to fairly judge the manufacturer's intended color science. And in that regard, Sony's colors are not bad at all.
2.) If you use another raw processing engine, such as Capture One Pro, you get a whole different experience with Sony .ARW raw files, both with regard to tonal response and color science. The contrast and colors both look great. Different, yes, but still great.
I use Adobe’s camera profiles when looking for punchy colors from raw files
Again, I’ll leave it up to you to decide which is truly better in your eyes as an artist. In some lighting conditions, I absolutely love Canon, Nikon, and Fuji colors too. However, in my experience, it is mostly the raw engine, and the skill of the person using it, that is to blame when someone vaguely claims, “I just don’t like the colors”…
Disclaimer: I say this as someone who worked full-time in post-production for many years, and who has post-produced over 2M Canon CR2 files, 2M Nikon NEF files, and over 100,000 Sony ARW files.
Features
There is no question that the A7R3 shook up the market with its feature set, regardless of the price point. This is a high-megapixel full-frame mirrorless camera with enough important features that any full-time working pro could easily rely on the camera to get any job done.
Flagship Autofocus
The first problem that most professional photographers had with mirrorless technology was that it just couldn’t keep up with the low-light autofocus reliability of a DSLR’s phase-detect AF system.
This line has been blurred quite a bit from the debut of the Sony A7R2 onward, however, and with this current-generation, hybrid on-sensor phase+contrast detection AF system, I am happy to report that I’m simply done worrying about autofocus. Period. I’m done counting the number of in-focus frames from side-by-side comparisons between a mirrorless camera and a DSLR competitor.
In other words, yes, there could be a small percent difference in tack-sharp keepers between the A7R3’s autofocus system and that of, say, a Canon 5D4 or a Nikon D850. In certain light, with certain AF point configurations, the Canon/Nikon do deliver a few more in-focus shots, on a good day. But, I don’t care.
Sony A7R3, 16-35mm f/2.8 GM | 1/160 sec, f/2.8, ISO 100
Why? Because the A7R3 is giving me a less frustrating experience overall due to the fact that I’ve completely stopped worrying about AF microadjustment, and having to check for front-focus or back-focus issues on this-or-that prime lens. If anything, the faster the aperture, the better the lens is at nailing focus in low light. That wasn’t the case with DSLRs; usually, it was the 24-70 and 70-200 2.8’s that were truly reliable at focusing in terrible light, and most f/1.4 or f/1.8 DSLR primes were hit-or-miss. I am so glad those days are over.
Now, the A7R3 either nails everything perfectly or, when the light gets truly terrible, it still manages to deliver about the same number of in-focus shots as I’d be getting out of my DSLRs anyways.
Eye Autofocus and AF customization
Furthermore, the A7R3 offers a diverse variety of focus point control and operation. And, with new technologies such as face-detection and Eye AF, the controls really do need to be flexible! Thanks to the level of customizability offered in the A7R3, I can do all kinds of things, such as:
Quickly change from a designated, static AF point to a dynamic, adaptable AF point. (C1 or C2 button, you pick which based on your own dexterity and muscle memory)
Easily switch face-detection on and off. (I put this in the Fn menu)
Designate the AF-ON button to perform traditional autofocus, while the AEL button performs Eye-AF autofocus. (Or vice-versa, again depending on your own muscle memory and dexterity)
Switch between AF-S and AF-C using any number of physical customizations. (I do wish there was a physical switch for this, though, like the Sony A9 has.)
Oh, and it goes without saying that a ~$3,000 camera gets a dedicated AF point joystick, although I must say I'm partial to touchscreen AF point control now that there are literally hundreds of AF points to choose from.
In short, this is one area where Sony did almost everything right. They faced a daunting challenge of offering ways to implement all these useful technologies, and they largely succeeded.
This is not just professional-class autofocus, it’s a whole new generation of autofocus, a new way of thinking about how we ensure that each shot, whether portrait or not, is perfectly focused exactly how we want it to be, even with ultra-shallow apertures or in extremely low light.
Dual Card Slots
Like professional autofocus, dual card slots are nothing new in a ~$3,000 camera. Both the Nikon D850 (and D810, etc.) and the Canon 5D4 (and 5D3) have had them for years. (Although, notably, the $3400 Nikon Z7 does not; it opted for a single XQD slot instead. Read our full Nikon Z7 review here.)
Unlike those DSLRs, however, the Sony A7R3 combines the professional one-two punch of pro AF and dual card slots with other things such as the portability and other general benefits of mirrorless, as well as great 4K video specs and IBIS. (By the way, no, IBIS and 4K video aren't exclusive to mirrorless; many DSLRs have 4K video now, and Pentax has had IBIS in its traditional DSLRs for many years too.)
One of my favorite features: Not only can the camera be charged via USB, it can operate directly from USB power!
Sony A7R3 Mirrorless Battery Life
One of the last major drawbacks of mirrorless systems, and the nemesis of Sony’s earlier A7-series in particular was battery life. The operative word being, WAS. Now, the Sony NP-FZ100 battery allows the A7R3 to last just as long as, or in some cases even longer than, a DSLR with comparable specs. (Such as lens-based stabilization, or 4K video)
Oh, and Sony’s is the only full-frame mirrorless platform that allows you to directly run a camera off USB power without a “dummy” battery, as of March 2019. This allows you to shoot video without ever interrupting your clip/take to swap batteries, and capture time-lapses for innumerable hours, or, just get through a long wedding day without having to worry about carrying more than one or two fully charged batteries.
By the way, for all you marathon-length event photographers and videographers out there: a spare Sony NP-FZ100 battery will set you back $78, while an Anker 20,100 mAh USB battery goes for just $49. So, no matter your budget, your battery life woes are officially over.
Durability
This is one thing I don’t like to speak about until the gear I’m reviewing has been out in the real world for a long time. I’ve been burned before, by cameras that I rushed to review as soon as they were released, and I gave some of them high praise even, only to discover a few weeks/months later that there’s a major issue with durability or functionality, sometimes even on the scale of a mass recall. (*cough*D600*cough*)
Thankfully, we don’t have that problem here, since the A7RIII has been out in the real world for well over a year now. I can confidently report, based on both my own experience and the general consensus from all those who I’ve talked to directly, that this camera is a rock-solid beast. It is designed and built tough, with good overall strength and extensive weather sealing.
It does lack one awesome feature that the Canon EOS R offers, which is the simple but effective use of the mechanical shutter to cover the sensor whenever the camera is off, or when changing lenses. Because, if I'm honest, the Sony A7R3 sensor is a dust magnet, and the sensor cleaner doesn't usually do more than shake off one or two of the three or five specks of dust that land on it after just a half-day of periodic lens swapping, especially in drier, static-prone environments.
Value
Currently, at just under $3200, and sometimes on sale for less than $2800, there's no dispute: this is the best value around, if you actually need the specific things that the A7RIII offers compared to your other options.
But, could there be an even better camera out there, for you and your specific needs?
If you don’t plan to make giant prints, and you rarely ever crop your images very much, then you just don’t need 42 megapixels. In fact, it’s actually going to be quite a burden on your memory cards, hard drives, and computer CPU/RAM, especially if you decide to shoot uncompressed raw and rack up a few thousand images each time you take the camera out.
Indeed, the 24 megapixels of the A7III is currently (and will likely remain) the goldilocks resolution for almost all amateurs and many types of working pros. Personally, as a wedding and portrait photographer, I would much rather have “just” 24 megapixels for the long 12-14+ hour weddings that I shoot. It adds up to many terabytes at the end of the year. Especially if you shoot the camera any faster than its slowest continuous drive mode. (You better buy some 128GB SD cards!)
As a landscape photographer, of course, I truly appreciate the A7RIII’s extra resolution. I would too if I were a fashion, commercial, or any other type of photographer whose priority involved delivering high-res imagery.
We’ll get deeper into which cameras are direct competition or an attractive alternative to the A7RIII later. Let’s wrap up this discussion of value with a quick overview of the closest sibling to the A7RIII, which is of course the A7III.
The differences between them go beyond just a sensor. The A7III has a slightly newer AF system, with just a little bit more borrowed technology from the Sony A9. But, it also has a slightly lower resolution EVF and rear LCD, making the viewfinder shooting experience just a little bit more digital looking. Lastly, partly thanks to its lower megapixel count, and lower resolution screens, the A7III gets even better battery life than the A7RIII. (It goes without saying that you’ll save space on your memory cards and hard drives, too.)
So, it’s not cut-and-dry at all. You might even decide that the A7III is actually a better camera for you and what you shoot. Personally, I certainly might prefer the $1998 A7III  if I shot action sports, wildlife, journalism, weddings, and certainly nightscapes, especially if I wasn’t going to be making huge prints of any of those photography genres.
Or, if you’re a serious pro, you need a backup camera anyway, and since they’re physically identical, buy both! The  A7III and A7RIII are the best two-camera kit ever conceived. Throw one of your 2.8 zooms on the A7III, and your favorite prime on the A7RIII. As a bonus, you can program “Super-35 Mode” onto one of your remaining customization options, (I like C4 for this) and you’ve got two primes in one!
Sony A7R3, Sony FE 16-35mm f/2.8GM | 1/4000 sec, f/10, ISO 100 (Extreme dynamic range processing applied to this single file)
Cons
This is going to be a short list. In fact, I’ll spoil it for you right now: If you’re already a (happy) Sony shooter, or if you have tried a Sony camera and found it easy to operate, there are essentially zero cons about this camera, aside from the few aforementioned reasons which might incline certain photographers to get an A7III instead.
A very not-so-helpful notification that is often seen on Sony cameras. I really do wish they could have taken the time to write a few details for all function incompatibilities, not just some of them!
Ergonomics & Menus
I’ll get right to the point: as someone who has tested and/or reviewed almost every DSLR camera (and lots of mirrorless cameras) from the last 15 years, from some of the earliest Canon Rebels to the latest 1D and D5 flagships, I have never encountered a more complex camera than the A7R3.
Sony, I suspect in their effort to make the camera attractive to both photographers and videographers alike, has made the camera monumentally customizable.
We’ll get to the sheer learning curve and customizations of the camera in a bit, but first, a word on the physical ergonomics: Basically, Sony has made it clear that they are going to stay focused on compactness and portability, even if it’s just not a comfortable grip for anyone with slightly larger hands.
The argument seems to be clearly divided among those who prefer the compact design, and those who dislike it.
The dedicated AF-ON button is very close to three other main controls, the REC button for video, the rear command dial, and the AF point joystick. With large thumbs, AF operation just isn’t as effortless and intuitive as it could be. Which is a shame, because I definitely love the customizations that have given me instant access to multiple AF modes. I just wish the AF-ON button, and that whole thumb area, was designed better. My already minor fumbling will wane even further with familiarity, but that doesn’t mean it is an optimal design.
By the way, I’m not expecting Sony to make a huge camera that totally defeats one of the main purposes of the mirrorless format. In fact, in my Nikon D850 review, I realized that the camera was in fact too big and that I’m already accustomed to a smaller camera, something along the size of a Nikon D750, or a Canon EOS R.
Speaking of the Canon EOS R, I think all full-frame cameras ought to have a grip like that one. It is a perfect balance between portability and grip comfort. After you hold the EOS R, or even the EOS RP, you’ll realize that there’s no reason for a full-frame mirrorless camera not to have a perfect, deep grip.
As another example, while I applaud Sony for putting the power switch in the right spot, (come on, Canon!) …I strongly dislike their placement of the lens release button. If the lens release button were on the other side, where it normally is on  Canon and Nikon, then maybe we could have custom function buttons similar to Nikon’s. These buttons are perfectly positioned for my grip fingers while holding the camera naturally, so I find them effortless to reach compared to Sony’s C1 and C2 buttons.
As I hinted earlier, I strongly suspect that a lot of this ergonomic design is meant to be useful to both photographers and videographers alike. And videographers, more often than not, simply aren't shooting with the camera hand-held up to their eye; instead, the camera is on a tripod, monopod, slider, or gimbal. In that shooting scenario, buttons are accessed in a totally different way, and in fact the controls of the latest Sony bodies make more sense.
It’s a shame, because, for this reason I feel compelled to disclaim that if you absolutely don’t shoot video, you may find that Nikon and/or Canon ergonomics are significantly more user-friendly, whether you’re working with their DSLRs or their mirrorless bodies. (And yes, I actually like the Canon “touch dial”. Read my full Canon EOS R review here.)
Before we move on, though, I need to make one thing clear: if a camera is complicated, but it’s the best professional option on the market, then the responsible thing for a pro to do is to suck it up and master the camera. I actually love such a challenge, because it’s my job  and because I’m a camera geek, but I absolutely don’t hold it against even a professional landscape photographer for going with a Nikon Z7, or a professional portrait photographer for going with a Canon EOS R. (Single SD card slot aside.)
Yet another quick-access menu. However, this one cannot be customized.
Customizability
This is definitely the biggest catch-22 of the whole story. The Sony A7R3 is very complex to operate, and even more complex to customize. Of course, it has little choice in the matter, as a pioneer of so many new features and functions. For example, I cannot fault a camera for offering different bitrates for video compression, just because it adds one more item to a menu page. In fact, this is a huge bonus, just like the ability to shoot both uncompressed and compressed .ARW files.
By the way, the “Beep” settings are called “Audio signals”
There are 35 pages of menu items with nearly 200 items total, plus five pages available for custom menu creation, a 2×6 grid of live/viewfinder screen functions, and approximately a dozen physical buttons that can be re-programmed or totally customized.
I actually love customizing cameras, and it’s the very first thing I do whenever I pick up a new camera. I go over every single button, and every single menu item, to see how I can set up the camera so that it is perfect for me. This is a process I’ve always thoroughly enjoyed, that is until the Sony A7-series came along. When I first saw how customizable the camera was, I was grinning. However, it took literally two whole weeks to figure out which button ought to perform which function, and which arrangement was best for the Fn menu, and then last but not least, how to categorize the remaining five pages of menu items I needed to access while shooting. Because even if I memorized all 35 pages, it still wouldn’t be practical to go digging through them to access the various things I need to access in an active scenario.
Then, I started to notice that not every function or setting can be programmed to just any button or Fn menu. Despite offering extensive customization options (some customization menus run to 22 pages of options), there are still a few things that just can't be done.
"Shoot Mode" is how you change the exposure mode when the camera's exposure mode dial is set to video mode. Which is useful if you shoot a lot of video…
For example, it's not easy to change the exposure mode when the camera's mode dial itself is set to video mode. You can't just program "Shooting Mode" to one of the C1/C2 buttons; it can only go in the Fn or custom menu.
As another example, for some reason, you can’t program both E-shutter and Silent Shooting to the Fn menu, even though these functions are so similar that they belong next to each other in any shooter’s memory.
Lastly, because the camera relies so heavily on customization, you may find that you run out of buttons when trying to figure out where to put common things that used to have their own buttons, such as Metering, White Balance, AF Points. Not to mention the handful of new bells and whistles that you might want to program to a physical button, such as switching IBIS on/off, or activating Eye AF.
All in all, the camera is already extremely complex, and yet I feel like it could also use an extra 2-3 buttons, and even more customization for the existing buttons. Which, again, leads me to the conclusion that if you’re looking for an intuitive camera that is effortless to pick up and shoot with, you may have nightmares about the user manual for this thing. And if you don’t even shoot video at all, then like I said, you’re almost better off going with something simpler.
But again, just to make sure we’re still on the same page here: If you’re a working professional, or a serious hobbyist even, you make it work. It’s your job to know your tools! (The Apollo astronauts didn’t say, “ehh, no thanks” just because their capsule was complicated to operate!)
Every camera has quirks. But, not every camera offers the images and a feature set like the A7R3 does. As a camera geek, and as someone who does shoot a decent amount of both photo and video, I’d opt for the A7R3 in a heartbeat.
Sony A7R3, Sony FE 16-35mm f/2.8 GM, PolarPro ND1000 filter 15 sec, f/14, ISO 100
The A7RIII’s Competition & Alternatives
Now that it's early 2019, we finally have Canon and Nikon competition in the market of full-frame mirrorless camera platforms. (Not to mention Panasonic, Sigma, Leica…)
So, where does that put this mk3 version of the Sony A7R, a third-generation camera which is part of a system that is now over 5 years old?
Until more competition enters the market, this section of our review can be very simple, thankfully. I’ll be blunt and to the point…
First things first: the Sony A7R3 has them all beat in terms of overall features and value. You just can't get a full-frame mirrorless body with this many features, for this price, anywhere else. Not only does Sony have the market cornered, it has three options with roughly the same level of professional features, once you count the A7III and the A9.
Having said that, here’s the second thing you should know: Canon and Nikon’s new full-frame mirrorless mounts are going to try as hard as they can to out-shine Sony’s FE lens lineup, as soon as possible. Literally the first thing Canon did for its RF mount was a jaw-droppingly good 50mm f/1.2, and of course the massive beast that is the 28-70mm f/2. Oh, and Nikon announced that they’d be resurrecting their legendary “Noct” lens nomenclature, for an absurdly fast 58mm f/0.95.
If you're at all interested in this type of exotic, high-end glass, the larger mount diameters and shorter flange distances of Canon and Nikon's new full-frame mounts may prove to have a slight advantage over Sony's relatively modest E mount.
However, as Sony has already proven, its mount is nothing to scoff at, and is entirely capable of amazing glass with professional results. Two of their newest fast-aperture prime lenses, the 135mm f/1.8 G-Master and the 24mm f/1.4 G-Master, prove this. Both lenses are almost optically flawless, and ready to easily resolve the 42 megapixels of this generation A7R-series camera, and likely the next generation too even if it has a 75-megapixel sensor.
This indicates that although Canon's and Nikon's mounts may have an advantage when it comes to the upper limits of what is possible with new optics, Sony's FE lens lineup will be more than enough for most pros.
Sony A7R3, Sony FE 70-300mm G OSS |  128mm, 1/100 sec, f/14, ISO 100
Sony A7RIII Review Verdict & Conclusion
There is no denying that Sony has achieved a huge milestone with the A7R mk3, in every single way. From its price point and feature set to its image quality and durable body, it is quite possibly the biggest threat that its main competitors, Canon and Nikon, face.
So, the final verdict for this review is very simple: if you want the most feature-rich full-frame camera (and system) that $3,200 can buy you (well, get you started in), the best investment you can make is the Sony A7RIII.
(By the way, it is currently just $2798, as of March 2019, and if you missed this particular sale price, just know that the camera might go on sale for $400 off, sooner or later.)
Sony A7R3, Sony FE 70-300mm G OSS | 1/400 sec, f/10, ISO 100
Really, the only major drawback for the "average" photographer is the learning curve, which even after three generations still feels like a sheer cliff when you first pick up the camera and look through its massive menu interface and customizations. The A7R3 body (like the A9 and A7III, for that matter) is not for the "casual" shooter who wants to just leave the camera in "green box mode" and expect it to be simple to operate. I've been testing and reviewing digital cameras for over 15 years now, and the A7RIII is by far the most complex camera I've ever picked up.
That shouldn’t be a deterrent for the serious pro, because these cameras are literally the tools of our trade. We don’t have to get a degree in electrical engineering or mechanical engineering in order to be photographers, we just have to master our camera gear, and of course the creativity that happens after we’ve mastered that gear.
However, a serious pro who is considering switching from Nikon or Canon should still be aware that not everything you’re used to with those camera bodies is possible, let alone effortless feeling, on this Sony. The sheer volume of functionality related to focusing alone will require you to spend many hours learning how the camera works, and then customizing its different options to the custom buttons and custom menus so that you can achieve something that mimics simplicity, and effortless operation.
Sony A7R3, Sony 16-35mm f/2.8 GM | 1/4 sec, f/14, ISO 100
Personally, I'm always up for a challenge. It took me a month of learning, customizing, and re-customizing this mk3 generation of Sony camera bodies, but I got it the way I want it, and now I get the benefits of having both the witchcraft/magic that is Eye-AF and the traditional "old-school" AF methods at my fingertips. As a working pro who shoots in active conditions, from portraits and weddings to action sports and stage performances, it has been absolutely worth it to tackle the steepest learning curve of my entire career. I have confidence that you're up to the task, too.
from SLR Lounge https://ift.tt/2U38gUJ via IFTTT
0 notes
photographerguide-blog · 6 years ago
Text
The future of photography is code
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
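Here is a toy Python sketch of that rolling buffer idea; the buffer length and function names are illustrative only, not any vendor's actual API:

```python
from collections import deque

# Rolling frame buffer: keep only the most recent N frames,
# discarding the oldest as new ones arrive from the sensor.
BUFFER_FRAMES = 60
frame_buffer = deque(maxlen=BUFFER_FRAMES)

def on_new_frame(frame):
    # Called for every sensor readout while the camera app is open.
    frame_buffer.append(frame)

def capture():
    # At shutter press, the recent history is available for stacking,
    # HDR merging, zero-shutter-lag frame selection, and so on.
    return list(frame_buffer)
```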
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
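To make the bracketing idea concrete, here is a minimal Python sketch (not Google's or Apple's actual pipeline) using OpenCV's Mertens exposure fusion, which blends a bracketed series without needing exposure metadata; the filenames are placeholders:

```python
import cv2
import numpy as np

# Three bracketed exposures of the same scene (filenames are examples only).
paths = ["bracket_under.jpg", "bracket_normal.jpg", "bracket_over.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens exposure fusion keeps the well-exposed parts of each frame;
# unlike "true" HDR it needs no exposure times and no tone-mapping step.
fusion = cv2.createMergeMertens()
fused = fusion.process(images)  # float result, roughly in the 0..1 range

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```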
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
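As a rough illustration of the compositing step only, not the depth estimation and certainly not Apple's optical simulation, here is a hedged Python sketch that assumes a subject mask has already been produced by some segmentation model:

```python
import cv2
import numpy as np

def fake_bokeh(image, subject_mask, blur_kernel=51):
    # image: uint8 BGR array; subject_mask: float array in [0, 1], 1 where the subject is.
    # Producing that mask (from stereo depth or a learned segmentation model)
    # is the hard part and is assumed to have happened elsewhere.
    background = cv2.GaussianBlur(image, (blur_kernel, blur_kernel), 0)
    mask3 = np.dstack([subject_mask] * 3)
    composite = image * mask3 + background * (1.0 - mask3)
    return composite.astype(np.uint8)
```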
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
Read more: https://techcrunch.com
0 notes