What is feature detection?




















Generally, such tests are done via a few common patterns, such as checking whether a property exists on an object, with the result often coerced to a true boolean using a double NOT (!!). Bear in mind, though, that some features are known to be undetectable (see Modernizr's list of Undetectables). Also worth mentioning is Window.matchMedia, a property that allows you to run media query tests inside JavaScript. As an example, our Snapshot demo makes use of it to selectively apply the Brick JavaScript library, using Brick to handle the UI layout only for the small screen layout (below a certain width).
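The patterns above can be sketched as follows. This is a minimal illustration, not code from the demo: the function names and the stubbed window object are assumptions made for the example.

```javascript
// Hedged sketch of two common feature-detection patterns; the function
// names and the fake `window` object are illustrative, not from the source.

// Pattern 1: test whether a property exists on a host object.
function supportsGeolocation(scope) {
  return "geolocation" in scope;
}

// Pattern 2: a typeof test, coerced to a true boolean with a double NOT (!!).
function supportsMatchMedia(scope) {
  return !!(typeof scope.matchMedia === "function");
}

// In a browser you would pass `window`; here we stub one for illustration.
const fakeWindow = {
  matchMedia: function (query) {
    return { media: query, matches: false };
  },
};

console.log(supportsGeolocation(fakeWindow)); // false
console.log(supportsMatchMedia(fakeWindow)); // true
```

The double NOT matters because expressions like `typeof x === "function"` are already boolean, but raw property lookups (pattern 1 written as `scope.geolocation`) can yield objects or undefined; `!!` normalizes either style to true or false.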

We first use the media attribute to apply the Brick CSS to the page only if the page width is at or below the small-screen breakpoint. We then use matchMedia in the JavaScript several times, to run Brick's navigation functions only if we are on the small screen layout (in wider screen layouts, everything can be seen at once, so we don't need to navigate between different views).
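That matchMedia gating can be sketched like this. The breakpoint value (600px), the function name, and the stubbed window are illustrative assumptions, since the exact breakpoint is not given here:

```javascript
// Hedged sketch: gate behavior on a media query result. The 600px
// breakpoint and the function name are illustrative placeholders.
function setUpLayout(win) {
  // matchMedia runs a CSS media query from JavaScript and returns an
  // object whose `matches` property reflects the current result.
  if (win.matchMedia("(max-width: 600px)").matches) {
    return "small screen: set up view-to-view navigation";
  }
  return "wide screen: show everything at once";
}

// Stubbed `window` standing in for a narrow browser viewport:
const narrowWindow = {
  matchMedia: (query) => ({ media: query, matches: true }),
};

console.log(setUpLayout(narrowWindow)); // small screen: set up view-to-view navigation
```

In a real page you would pass the global window; the stub exists only so the sketch runs outside a browser.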

It is possible to implement your own feature detection tests using techniques like the ones detailed above. However, you might as well use a dedicated feature detection library, as it makes things much easier. The mother of all feature detection libraries is Modernizr, and it can detect just about everything you'll ever need.

Let's look at how to use it now. When you are experimenting with Modernizr, you might as well use the development build, which includes every possible feature detection test. Download it and save it somewhere sensible, like the directory you've been creating your other examples in for this article. When you are using Modernizr in production, you can go to the Download page you've already visited and click the plus buttons for only the features you need feature detects for.

Then, when you click the Build button, you'll download a custom build containing only those feature detects, making for a much smaller file size. At this point, try loading your page, and you'll get an idea of how Modernizr works for CSS features: the page's html element now contains a large number of classes that indicate the support status of different technology features.

If the browser does support modern flexbox, the html element gets a class name of flexbox; if not, it gets no-flexbox. If you search through the class list, you'll also see others relating to flexbox. Note: you can find a list of what all the class names mean in Features detected by Modernizr. Moving on, let's update our CSS to use Modernizr rather than @supports. Go into the modernizr-css example file. So how does this work? Here we're applying the top set of rules only to browsers that do support flexbox, and the bottom set of rules only to browsers that don't (no-flexbox).

Note: If you have trouble getting this to work, check your code against our finished modernizr-css example. Modernizr is equally well-prepared for implementing JavaScript feature detects.
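A JavaScript detect with Modernizr typically means branching on a boolean property of the Modernizr object. This is a minimal sketch, assuming Modernizr is loaded on the page; here it is stubbed so the example is self-contained, and the function name is illustrative:

```javascript
// Hedged sketch: branching on a Modernizr feature detect from JavaScript.
// In a real page, `Modernizr` is the global object the library creates;
// here we stub it so the sketch can run anywhere.
function chooseLayoutStrategy(modernizr) {
  // Each detect is exposed as a boolean property on the Modernizr object.
  if (modernizr.flexbox) {
    return "use flexbox layout";
  }
  return "fall back to an older layout technique";
}

// Stub standing in for the real global Modernizr object:
const modernizrStub = { flexbox: true };

console.log(chooseLayoutStrategy(modernizrStub)); // use flexbox layout
```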

For example, load up our modernizr-css example again.

And this action potential will reach your brain, and your brain will recognize that you're looking at something that is red. So this basically happens regardless of what we're looking at, and this has come to be known as the trichromatic theory of color vision.

So what other features do we need to take into consideration when we're looking at this rose? So aside from breaking the rose down into the different colors, we also need to figure out, OK, what are the boundaries of the rose, so the boundaries of the stem, the boundaries of the leaf, the boundaries of the petals, from the background.

And this is also really important, because not only do we need to distinguish the boundaries, but we also need to figure out, OK, what shape are the leaves, what shape are the petals. And these are all very important things that your brain ultimately is able to break down.

So in order for us to figure out what the form of an object is, we use a very specialized pathway that exists in our brain, which is known as the parvo pathway. So the parvo pathway is responsible for figuring out what the shape of an object is. So another way to say this is that the parvo pathway is really good at spatial resolution.

Let me write that down. So spatial resolution. And what spatial resolution means is that it's really good at figuring out what the boundaries of an object are, what the little details that make up the object are. So if something isn't moving, such as when you're looking at a picture or when you're looking at a rose, you're able to break down and look at the little tiny details.

So you're able to look at the little veins that make up the rose leaf. You're able to see all the little nuances of the rose. And that's because you're using the parvo pathway, which has a really high level of spatial resolution. One negative aspect of the parvo pathway is that it has really poor temporal resolution. And what I mean by this is that temporal resolution has to do with motion. So if a rose is in motion, if I throw it across the room, I can't really use the parvo pathway to track the rose.

The parvo pathway is used for stationary objects to acquire high levels of detail of the object. But as soon as that object starts moving, you start losing your ability to identify tiny little details in the object.

And you've probably noticed this. So when you're in a car, you're driving along, and there's a little Volkswagen Beetle driving right by you, going pretty slow. You can acquire a good number of details about the car, about the driver. But if you look down at the wheels, the wheels are spinning really, really quickly. And so if you try to look at the rims and figure out what design the rims have, it's really hard to do.

That's because the wheels are spinning so fast that it's really difficult to acquire any type of detail about the shape of the rims, about what they look like, and things like that.

And finally, the parvo pathway also allows us to see things in color.

Automatically create a panorama using feature-based image registration techniques.

Stabilize a video that was captured from a jittery platform. One way to stabilize a video is to track a salient feature in the image and use this as an anchor point to cancel out all perturbations relative to it.

This procedure, however, must be bootstrapped with knowledge of where such a salient feature lies in the first video frame. In this example, we explore a method of video stabilization that works without any such a priori knowledge.

It instead automatically searches for the "background plane" in a video sequence, and uses its observed distortion to correct for camera motion. Use a combination of basic morphological operators and blob analysis to extract information from a video stream.

In this case, the example counts the number of E. coli bacteria in each video frame. Note that the cells are of varying brightness, which makes the task of segmentation more challenging.
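The blob-counting idea can be sketched on a toy frame. The fixed threshold, the 4-connected flood fill, and the tiny hand-made frame are illustrative assumptions, not the actual MATLAB pipeline, which uses morphological operators and blob analysis on real video:

```javascript
// Hedged sketch: count "blobs" (connected bright regions) in a small
// grayscale frame. Threshold and frame values are illustrative.
function countBlobs(frame, threshold) {
  const rows = frame.length;
  const cols = frame[0].length;
  const seen = frame.map((row) => row.map(() => false));
  let blobs = 0;

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (seen[r][c] || frame[r][c] < threshold) continue;
      blobs++; // found an unvisited bright pixel: a new blob
      const stack = [[r, c]];
      // 4-connected flood fill marks every pixel of this blob as seen.
      while (stack.length) {
        const [y, x] = stack.pop();
        if (y < 0 || y >= rows || x < 0 || x >= cols) continue;
        if (seen[y][x] || frame[y][x] < threshold) continue;
        seen[y][x] = true;
        stack.push([y - 1, x], [y + 1, x], [y, x - 1], [y, x + 1]);
      }
    }
  }
  return blobs;
}

// Two separate bright regions on a dark background:
const frame = [
  [0, 200, 0, 0, 0],
  [0, 220, 0, 0, 180],
  [0, 0, 0, 0, 190],
];

console.log(countBlobs(frame, 128)); // 2
```

Varying cell brightness is exactly why a single fixed threshold like this one struggles in practice; the real example's morphological preprocessing helps normalize the regions before counting.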


Feature Detection and Extraction: image registration, interest point detection, feature descriptor extraction, point feature matching, and image retrieval.


