To monitor the S.M.A.R.T. status of your disk, we suggest using the smartctl tool, which is part of the smartmontools package (at least on Debian/Ubuntu).

smartctl is a command-line tool, which makes it especially useful when you need to automate data collection, for example from your servers.
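Because smartctl writes plain text to standard output, a collection job can be a small shell script. Here is a minimal sketch, assuming GNU date and the /dev/sd* naming scheme; the log path is an invented example:

```shell
#!/bin/sh
# Hypothetical nightly collection script: append each disk's health
# status and attribute table to a dated log file.
LOGDIR=/var/log/smart          # assumed log location
mkdir -p "$LOGDIR"
for dev in /dev/sd[a-z]; do
    [ -b "$dev" ] || continue  # skip names that matched no real device
    {
        echo "== $dev $(date -Is) =="
        smartctl -H -A "$dev"  # -H: health status, -A: vendor attributes
    } >> "$LOGDIR/$(date +%F).log"
done
```

A script like this could then be run from cron or a systemd timer on each server.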

The first step in using smartctl is to check whether S.M.A.R.T. is enabled on your drive and whether the tool supports it:
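For example (the device name /dev/sda is an assumption; substitute your own drive):

```shell
# -i prints the identity section, including the lines
# "SMART support is: Available" and "SMART support is: Enabled".
sudo smartctl -i /dev/sda

# If SMART is available but currently disabled, it can be switched on:
sudo smartctl -s on /dev/sda
```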

As you can see, our laptop’s internal hard drive does support S.M.A.R.T., and it is enabled. So, how do we get the S.M.A.R.T. status now? Have any errors been recorded?

A report with all of the S.M.A.R.T. information about the disk is obtained with the -a option:
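A hedged sketch, again assuming the drive is /dev/sda:

```shell
# -a prints everything: identity, overall health, capabilities,
# vendor attributes, the error log and the self-test log.
sudo smartctl -a /dev/sda

# -H alone prints just the overall health self-assessment.
sudo smartctl -H /dev/sda
```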

Understanding the smartctl command output

The output usually contains a lot of information that is not always easy to interpret. The most interesting part is probably the section labeled “Vendor Specific SMART Attributes with Thresholds”. It reports various statistics collected by the S.M.A.R.T. device and lets you compare these values (current, or worst of all time) against vendor-defined thresholds.

For example, here is what our disk reports about reallocated sectors:
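To extract just these rows from the report, you can filter the attribute table; -A prints only the vendor attributes (the device name is an assumption):

```shell
# Keep the table header and the reallocated-sector counters.
sudo smartctl -A /dev/sda | grep -i -e '^ID#' -e reallocated
```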

Note the “Pre-fail” attribute type. It does not mean the value is abnormal in itself; rather, it marks a critical attribute: if the value crosses the threshold, the probability of imminent failure is high. The other type, “Old_age”, is used for attributes that correspond to “normal wear” values.

The last field (here with a value of “3”) is the raw attribute value reported by the drive. This number usually has a physical meaning; here, it is the actual number of reallocated sectors. For other attributes it may be a temperature in degrees Celsius, a time in hours or minutes, or the number of times a certain condition has occurred on the disk.

In addition to the raw value, the S.M.A.R.T. device reports “normalized values” (the value, worst and threshold fields). These are normalized into the range 1-254 (0-255 for thresholds). The disk firmware performs this normalization using some internal algorithm, and different manufacturers may normalize the same attribute in different ways. Most values are expressed as a percentage, where higher is better, but this is not always the case. When a value is lower than or equal to the threshold specified by the manufacturer, the disk is considered failed with respect to that attribute. Keeping in mind all the caveats from the first part of the article: if an attribute of the “pre-fail” type has failed, the disk will most likely fail soon.

As a second example, let’s take the “seek error rate”:

In fact (and this is the main problem with S.M.A.R.T. reporting), only the vendor knows the exact meaning of each attribute’s fields. In our case, Seagate uses a logarithmic scale to normalize the value, so “71” means approximately one error per 10 million seeks. Amusingly, the all-time worst value was one error per 1 million seeks.
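As an illustration of such a logarithmic model (this formula is our assumption about the scale, not vendor documentation), a normalized value V would correspond to roughly one error per 10^(V/10) seeks:

```shell
# Convert the normalized values from the text (71 current, 60 worst)
# into an approximate error rate under the assumed log10 scale.
for v in 71 60; do
    awk -v v="$v" \
        'BEGIN { printf "normalized %d ~ 1 error per %.0f seeks\n", v, 10^(v/10) }'
done
# → normalized 71 ~ 1 error per 12589254 seeks
# → normalized 60 ~ 1 error per 1000000 seeks
```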

If we understand this correctly, it means the heads of our disk are now positioned more accurately than before. We have not monitored this disk closely, so treat our interpretation as rather subjective. Perhaps the drive simply needed some break-in time after it was put into operation? Or maybe it is a consequence of mechanical wear, leaving less friction now? In any case, whatever the reason, this value is more a measure of performance than an early warning of failure, so it does not worry us much.

Apart from the above, and three highly suspicious errors recorded about six months ago, this disk is in surprisingly good condition (according to S.M.A.R.T.) for a stock laptop disk that has been powered on for more than 1100 days (26423 hours).

Out of curiosity, we ran the same test on a much newer laptop equipped with an SSD:

The first thing that catches the eye is that, although the device supports S.M.A.R.T., it is not in the smartctl database. This does not prevent the tool from collecting data from the SSD, but it will not be able to report the exact meaning of the various vendor-specific attributes:

Above is the output for a brand-new SSD. The data is understandable even without normalization or vendor-specific meta-information, as in our case with “Unknown_SSD_Attribute”. We can only hope that future versions of smartctl will include this disk model in the database, so that potential problems can be identified more accurately.

Test your SSD on Linux with smartctl

So far, we have looked at the data collected during normal drive operation. However, the S.M.A.R.T. protocol also supports several commands for offline testing, which run diagnostics on demand.

Offline testing can be performed during normal disk operation unless otherwise specified. Since the test and the host’s I/O requests compete for the drive, disk performance drops for the duration of the test. The S.M.A.R.T. specification defines several types of offline testing:

Short self-test (-t short)

This test checks the electrical and mechanical performance of the drive, as well as its read performance. A short self-test usually takes only a few minutes (typically 2 to 10).

Extended self-test (-t long)

This test takes considerably longer. It is essentially a more thorough version of the short self-test that additionally scans the entire disk surface for data errors, with no time limit. Its duration is therefore proportional to the size of the disk.

Conveyance self-test (-t conveyance)

This test is intended as a relatively quick way to check for possible damage incurred during transportation of the device.

Here are examples taken from the same drives as above. We suggest you guess which is which:
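A sketch of how such tests are launched (device names are assumptions; each command returns immediately while the test continues in the background, and smartctl prints the estimated completion time):

```shell
# Start a short self-test on one drive...
sudo smartctl -t short /dev/sda
# ...and a conveyance self-test on another drive that supports it.
sudo smartctl -t conveyance /dev/sdb
```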

A test is in progress. Let’s wait for it to complete and look at the result:
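The results accumulate in the drive’s self-test log, which can be read back at any time (assuming /dev/sda):

```shell
# -l selftest lists past self-tests: type, status, lifetime hours at
# execution time, and the first failing LBA if any.
sudo smartctl -l selftest /dev/sda
```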

Let’s run the same test on another drive:

And again, we wait for two minutes and look at the result:

Interestingly, in this case we can see that the disk and computer manufacturers appear to have already tested the disk themselves (at a lifetime of 0 hours and 12 hours). We were clearly far less concerned about the drive’s condition than they were. So, having shown the quick tests, let’s run the extended one as well to see how it goes.

Well, this time the wait will be much longer than for a short test. Let’s see:

In this last test, note the difference between the results of the short and extended tests, even though they were run one after the other. Well, maybe this drive is not in such good shape after all! Note that the test stopped after the first read error. So, if you want comprehensive information about all read errors, you will have to resume the test after each error. We urge you to take a look at the very well written smartctl(8) manual page, in particular the -t select,N-max and -t select,cont options, to learn how to do this:
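For example, a selective self-test over a chosen LBA range might look like this (the device name and range are assumptions):

```shell
# Test LBAs 0 through 999999, then extend the test past the last
# tested block with -t select,next; -t select,cont resumes an
# interrupted selective self-test where it left off.
sudo smartctl -t select,0-999999 /dev/sda
sudo smartctl -t select,next /dev/sda
```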

Summary

S.M.A.R.T. is definitely a technology worth adding to your toolkit for monitoring the health of your server drives. You should also take a look at the S.M.A.R.T. Disk Monitoring Daemon, smartd(8), which can help you automate monitoring with syslog reporting.
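As a starting point, here is a hedged sketch of an /etc/smartd.conf entry (the device name, schedule and mail address are assumptions; see smartd.conf(5) for the full directive syntax):

```
# Monitor /dev/sda: track all attributes (-a), run a short self-test
# every day at 02:00 and a long one every Saturday at 03:00 (-s),
# and mail warnings to root (-m).
/dev/sda -a -s (S/../.././02|L/../../6/03) -m root
```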

Given the statistical nature of failure prediction, we are not sure aggressive S.M.A.R.T. monitoring is of much use on personal computers. Remember that, whatever the drive, one day it will fail anyway; and, as we saw earlier, in one third of cases it will do so without any warning. Therefore, nothing will ensure the integrity of your data better than RAID and backups!