Even though practically anyone can afford an SSD today, few people outside the professional environment have a sober grasp of the selection process beyond sorting by ascending price in an online store. And the most interesting part is that they are, by and large, almost right. Please hold the rotten vegetables for this thesis and read to the end.
A bit of old school
A typical older PC user today may well have travelled the path from the cassettes that loaded a Commodore 64, Atari or Sinclair, through floppy drives straight to hard drives, with a brief detour into optical discs, and then on to essentially modern solid-state drives.

We deliberately leave aside exotics such as magneto-optics, tape streamers, Zip drives and punched cards, although in terms of reliability only carving information into stone can compete with a streamer; the speed, however, is nothing to write home about, and access is strictly sequential. The Vesna tape recorder with an MK-60 cassette, which held the unforgettable Exolon and SoF, as well as, surprisingly, Transformers (yes, people played them before they became mainstream), was technically also a streamer, but for obvious reasons reliability was not its strong suit. Booting an OS and the like from any of these media was also out of the question.
The photo shows a typical lot from the countless online flea markets and auctions. It is shown with a dual purpose: if you read the insert, you will find an example of protecting information through redundancy, in effect a tape analogue of a mirror RAID of two media!


Source – Skylots.org
Why is the rectangle a disk?
HDDs remain in demand today, although they have been significantly displaced by solid-state drives, which are still called disks out of inertia. Meanwhile, few of those who studied geometry at school stop to ask why a rectangle is so confidently called a disk. The answer is simple: the hard drive, rectangular on the outside and round on the inside, inherited a well-established figure of speech from English. It was preceded by flexible magnetic disks, which right up to the 3.5-inch format (those are the ones we called floppies) really were floppy. In time the floppy disk, round on the inside (there were even 8-inch ones), acquired a sturdier case.

Bulky HDD media, by contrast, went straight into a rigid protective case with literally hard, mirror-polished magnetic platters inside, of which there could be several, along with precision mechanics, dust protection, heads, an electric motor and other electronics. In short, the round soul of disk drives, originally dictated by the axial principle of operation, has not been visible in the flesh for a long time, but, as noted, everything continued to be called a disk out of inertia.

That includes solid-state drives, which have nothing round about them at all and on the inside may look something like the picture below. The board size varies from model to model, but this is roughly how they look, and it is already the modern mainstream. The black item is a 2.5″ case.

HDD
Hard disk drives were the main mass-market storage medium for quite a long time. Their speeds were low by today's standards, but they were fairly reliable products, made at one time by almost two hundred companies, of which only three remain today: Seagate, WD and Toshiba.
The scheme for upgrading the storage subsystem was clear for years: the old medium was swapped for a new, larger one with a suitable interface, or another one was added alongside. Of course, as capacities, recording density, spindle speeds and the number of platters in the stack grew, so did the speeds of such disks, especially linear ones, though not at what you would call an explosive pace. Eventually things ran up against physical size and, literally, against the air: less dense helium began to be pumped into sealed enclosures with the rotating platters. Growth was, on the whole, extensive, and truly revolutionary recording technologies have not reached mass introduction even now. How they will fare in terms of reliability is not yet clear, but for most users it will probably no longer matter; those questions will mostly concern data centers.
A New Hope
The thing is that about 20 years ago scientific and technological progress made it possible to start pushing solid-state drives to the masses. At first everything was small, incomprehensible and expensive. But over time, denial and anger at the prices gave way to bargaining and acceptance: after all, solid-state media offered speeds unattainable for mechanics even at the early stages of market penetration!
SLC – Beginning of mainstream SSDs circa 2007
The first such drives to hit the market used cells that each stored one bit of information by distinguishing only two charge levels, charged and discharged, roughly speaking. The technology received the market name SLC, single-level cell (hereinafter we use elements of Micron's infographics).

These cells were assembled into an array, which outwardly is what ordinary people call a chip. In the illustration below, the black outer shell of the chip is a plastic package that physically protects the array from external factors. It usually carries identification markings; in this particular example, SEC means the product came out of a Samsung factory. The contact pads are on the reverse side.
The solid-state "disk" itself, inside a familiar case (the first series were made in the 2.5 and 3.5-inch HDD form factors), is a printed circuit board carrying cell arrays in the form of surface-mounted chips, a dedicated controller that manages the array, and other components. Depending on the specific design, a DRAM buffer may or may not be present. That is the picture in very broad strokes.
With no mechanics in a solid-state drive, the concept of "access time" changed its physical meaning. For an HDD it was about head positioning time, the physical location of the data on the platters, the rotation speed and so on. An SSD has none of this, so access time came to mean the time the controller needs to process a request and gather data across the array, which could physically sit anywhere; to the user it made no difference. The controller alone decided where to place data physically and how to return it logically, how to correct errors, what to do with cells that had exhausted their resource, how to interact with the interfaces and so on. In effect, how much of the cell array's potential was realized depended on the controller driving it, the quality of its implementation, its speed and its firmware.
In effect, the solid-state drive was rid of the old HDD's defining feature: precision mechanics that can wear out. Not every user lived to see a hard drive's mechanical resource run out, but the bearings of the spindle motor are not eternal, head positioning cannot stay accurate forever, and, most importantly, external factors could seriously damage both the medium and the data. Shock, vibration and the like invariably shortened the life of a mechanical drive, and temperature, judging by the statistics, plays its own violin in the orchestra of HDD failure factors.

To be fair, the question of how many times a sector on a hard disk platter can be overwritten, unlike with solid-state media, was never really raised: thanks to magnetic recording, the mechanics will certainly wear out before the magnetic resource of the medium is exhausted, so for the mass market it is neither considered nor specified. The average life of a hard drive can be estimated at 30,000 to 50,000 hours of operation; power surges and frequent switching on and off also take their toll on this figure.

In fairness, though, some digging around Seagate.com leads to a curious find. It turns out that Seagate tracks a Workload Rate Limit (WRL) for its hard drives, something like an annual mileage measured in terabytes. Seagate conventionally allots a non-enterprise mechanical drive 180 terabytes per year, or roughly 340 megabytes per minute of spindle operation in read or write mode (the load in both modes is summed for the calculation). The company notes that this figure generally does not affect warranty obligations; it exists to fix the load threshold beyond which the probability of drive failure increases. A home user is unlikely ever to face such loads. By the same Seagate logic, an enterprise-segment disk is entitled to 550 terabytes per year.
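As a rough sanity check of that arithmetic, here is a minimal Python sketch; the 180 TB and 550 TB annual figures are the ones quoted above, while the assumption that the spindle runs around the clock all year is mine:

```python
# Rough sketch: convert an annual Workload Rate Limit (WRL) into an
# average per-minute rate, assuming the spindle runs 24/7 all year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def wrl_per_minute(tb_per_year: float) -> float:
    """Average read+write workload in MB per minute of spindle operation."""
    mb_per_year = tb_per_year * 1_000_000  # decimal terabytes -> megabytes
    return mb_per_year / MINUTES_PER_YEAR

for label, tb in (("desktop (180 TB/year)", 180), ("enterprise (550 TB/year)", 550)):
    print(f"{label}: ~{wrl_per_minute(tb):.0f} MB per minute")
# desktop: ~342 MB per minute, close to the ~340 MB figure quoted above
# enterprise: ~1046 MB per minute
```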
For a workstation or a rack in a data center, vibration is of course a secondary concern and temperature is controllable, but in portable equipment all of this could cause serious damage at once. Manufacturers tried to deal with it in many ways. HP, for example, shipped a number of its business laptops with software that, upon detecting free fall, parked the drive heads, which, however, did not always help. For the same reason, mechanical media never took root in automotive electronics: vibration, constant temperature swings and the like prevented such solutions from going mass-market. The Japanese, of course, did come up with something, but niche products are either expensive or simply do not pay off. Hence offerings like Pioneer's AVIC-ZH900MD at almost 3,500 dollars, a solid alternative to a small apartment in a regional town in those same years. On the other hand, Pioneer was offering a whole 30 GB of HDD in a car back in 2004!
In general, mechanics came to be regarded as an obsolete solution. And that would be the end of it, if not for one peculiarity. A failed HDD, provided it is not physically damaged inside and the surfaces of the working platters are intact, can in most cases be temporarily brought back to life for relatively reasonable money. If you are lucky, it is enough to replace the failed parts with identical ones from the same hardware revision, and that may suffice to read what was recorded. Such a restored disk will not run for long: the dust that has got inside will sooner or later kill it for good, but there is certainly enough time to read it in full. That is why in the 90s you could well see an HDD being repaired with a cigarette in the repairman's mouth; even then nobody counted on long-term operation once the cover had come off.
If you are less lucky, then after the mechanics are repaired the recovery specialist has to work with the contents at the logical level. This is more expensive, but there are plenty of techniques for raising even damaged data. As long as the platters have not been physically scored by the heads, the chances of recovering data from other kinds of failures of mechanical media are very high. That is why corporate drives being decommissioned are first carefully wiped and then physically destroyed: competitors could carry out industrial espionage even by picking through what looks like garbage.
Doing away with the mechanics looks like an outright blessing. No vibration, no motor whine, no head-seek rattle! Silence and order.
True, if a solid-state drive physically fails, the probability of data recovery tends to zero. Remember that even after transplanting the memory chips onto a donor board, nobody except the old controller, roughly speaking, knows how to work with them, or what was recorded there and how. In some cases a narrowly specialized recovery engineer can remove the protective plastic of a memory chip and try to connect directly to the array inside. Under certain circumstances this can yield a result, if you are very lucky and the data you need happens to sit entirely in one physical array, of which there can be many on an SSD board. Even then, the recovery has to continue at the logical level, which very few people can do and which is therefore quite expensive. This does happen with single-chip "flash drives", but a given failed SSD is not a flash drive, and the operating logic imposed by its controller may not allow such a procedure at all.

And, as practice shows, solid-state drives tend to fail almost suddenly; afterwards they are not even visible to the BIOS or UEFI. Of course, failure is preceded by the use of spare blocks, but who checks SMART every day? And why does the manufacturer not ship the product with a small utility of its own that would report in the system tray, when appropriate, that the resource situation is deteriorating? After all, the warning signs of such problems have been established empirically, and such a bonus would cost the manufacturer next to nothing.
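For those who do want to glance at SMART occasionally, here is a minimal sketch of such a check on Linux, assuming the smartmontools package (smartctl) is installed and run with sufficient privileges; the wear-related attribute names below are only examples, since they differ from vendor to vendor:

```python
# Minimal sketch: query SSD wear indicators via smartmontools (smartctl).
import json
import subprocess

def read_smart(device: str) -> dict:
    """Return parsed SMART data for a device using smartctl's JSON output."""
    out = subprocess.run(
        ["smartctl", "-a", "--json", device],
        capture_output=True, text=True, check=False,
    )
    return json.loads(out.stdout)

data = read_smart("/dev/sda")  # adjust the device node for your system

# NVMe drives report a standard "percentage_used" health field;
# SATA drives expose vendor-specific attributes instead (names vary).
nvme_health = data.get("nvme_smart_health_information_log", {})
if "percentage_used" in nvme_health:
    print("NVMe wear:", nvme_health["percentage_used"], "% of rated life used")
else:
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        # Typical wear-related names; the exact set depends on the vendor.
        if attr["name"] in ("Wear_Leveling_Count", "Media_Wearout_Indicator",
                            "Percent_Lifetime_Remain"):
            print(attr["name"], "normalized value:", attr["value"])
```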
In general, solid-state media have peculiarities that must be taken into account, and essential ones at that. More on them below.
But back to SLC. These arrays, thanks to the technology itself and to the coarse manufacturing processes of the time, turned out to be very tenacious. It is generally believed that an SLC-type cell can survive up to 100,000 rewrites.
A question for the cuckoo
Taking the broad view, the life of a solid-state drive can be estimated as its capacity multiplied by the expected effective number of rewrites. Estimated, because in the course of operation some cells are consumed for housekeeping, data of various sizes is erased and written, and the erase block is usually larger than the write block. As a result, when the OS asks to write a certain amount of data to the medium, a larger total amount may physically get written. If, for example, we slightly edit the text of this article and save it, then at the level of the solid-state medium, especially in early series, the data will be read, modified, the old version erased and the new one written, and all of this goes through transit cells that must not be forgotten and must be erased in their turn. And everything happens in whole blocks, with more erased than written: four NTFS clusters fit into one SSD memory page, and when data arrives in portions smaller than a cluster, not all at once but at short intervals, whole clusters end up being rewritten at the physical level. The controller will of course sort out what goes where, but physically the disk will receive more data to write than we actually changed in the text when we fixed a typo. This is called write amplification: because of the way a solid-state drive operates, the cells of the memory array are physically recharged somewhat more than the system, unaware of the drive's internals, actually requested.
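As a back-of-the-envelope illustration of that estimate, here is a minimal Python sketch; the 100,000-rewrite figure for SLC comes from the text above, while the 64 GB capacity and the write amplification factor of 2 are arbitrary example numbers:

```python
# Back-of-the-envelope SSD endurance estimate:
# total host writes ~ capacity * rewrites per cell / write amplification factor.

def estimated_host_writes_tb(capacity_gb: float,
                             pe_cycles: int,
                             write_amplification: float) -> float:
    """Rough total amount of host data (in TB) the drive can absorb."""
    raw_writes_gb = capacity_gb * pe_cycles               # what the cells can take
    host_writes_gb = raw_writes_gb / write_amplification  # what the host gets to write
    return host_writes_gb / 1000

# Example: a hypothetical 64 GB SLC drive, 100,000 rewrites per cell,
# with a write amplification factor of 2 (the array absorbs twice
# the data the OS actually sends).
print(estimated_host_writes_tb(64, 100_000, 2.0), "TB of host writes")  # 3200.0
```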
This phenomenon can be fought at the logical level, for example by teaching the controller to take the block being changed, write its new version to a new location, and mark the old one as reusable after processing, that is, after physical erasure, which also costs a resource, namely time. In that case the logical address of the data is assigned a new physical address within the medium, and the stale blocks are physically erased later. The price is that the write speed of such a medium drops, because a number of logical procedures have to be performed before a write request can be completed.
The fuller the disk, the harder the controller has to work searching for placement options, reading, erasing, writing, moving data around and so on. Outwardly this shows up as a drop in speed. None of these internals are visible to the OS at the logical level: all communication with the drive's hardware goes through the controller, and only the controller knows how things really stand.
In other words, for the reasons above, a working disk will not deliver its rated speed, and this had to be solved somehow.
It was solved by setting aside unused areas and by so-called garbage collection, whereby the controller, in its idle time, physically erases obsolete blocks and turns nominally free space into genuinely free space, that is, space that does not cost time on preliminary erasure and other housekeeping while under a working load.
If garbage collection takes place outside the load peak, then when the next peak comes, the write speed will be close to that of a physically clean, empty drive.
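To make the remapping and garbage-collection idea concrete, here is a deliberately toy Python sketch of such logic; the page counts, data structures and names are invented for illustration, and real controllers are vastly more involved:

```python
# Toy model of an SSD flash translation layer (FTL):
# logical addresses map to physical pages; updates go to fresh pages,
# old pages become "stale" and are reclaimed later by garbage collection.

class ToyFTL:
    def __init__(self, total_pages: int):
        self.mapping = {}                      # logical page -> physical page
        self.free = list(range(total_pages))   # erased, ready-to-write pages
        self.stale = set()                     # old copies awaiting erasure

    def write(self, logical_page: int, data: str) -> None:
        if not self.free:
            self.garbage_collect()             # forced GC under load hurts latency
        new_phys = self.free.pop(0)
        if logical_page in self.mapping:       # an update, not a fresh write
            self.stale.add(self.mapping[logical_page])
        self.mapping[logical_page] = new_phys
        print(f"logical {logical_page} -> physical {new_phys}: {data!r}")

    def garbage_collect(self) -> None:
        # In idle time the controller erases stale pages and returns them
        # to the free pool, so later bursts of writes see "clean" flash.
        self.free.extend(sorted(self.stale))
        self.stale.clear()

ftl = ToyFTL(total_pages=4)
ftl.write(0, "draft")
ftl.write(0, "draft, typo fixed")   # same logical page, new physical page
ftl.garbage_collect()               # the old physical page becomes free again
```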
Garbage collection is helped along by a command known as TRIM, which the OS uses to tell the drive which blocks are no longer in use. Even so, the first solid-state drives did not all handle it correctly, which is why their speed inevitably fell over time and could only be restored by a complete physical erase.
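On a modern Linux system you can check whether a drive accepts TRIM and run a pass by hand; a minimal sketch, assuming the util-linux tools lsblk and fstrim are available and the commands are run with root privileges:

```python
# Minimal sketch: check TRIM (discard) support and run a manual TRIM pass on Linux.
import subprocess

# Non-zero DISC-GRAN / DISC-MAX values mean the device accepts discard/TRIM.
subprocess.run(["lsblk", "--discard", "/dev/sda"], check=True)

# Ask the filesystem to report unused blocks to the drive right now;
# distributions usually schedule this periodically (e.g. via fstrim.timer).
subprocess.run(["fstrim", "--verbose", "/"], check=True)
```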
All of the above naturally happens through operations on cells that are not eternal. Yes, the cells of a solid-state drive physically wear out during operation. And although the working logic does not involve checking the state of every individual cell, average indicators of their viability are tracked by logic of some kind, and when reading becomes impossible, the controller removes the affected array from circulation. Keep that point in mind.
Thus, early-generation consumer SSDs were significantly faster than any HDD, especially when working with small data. To read a physically fragmented file, a hard drive had to "knock" its heads across different spots on the platters, while an SSD had no heads, and its response time was largely determined by the controller's ability to gather the data electronically within the drive's internal logic. The SSD was also free of mechanical problems, for lack of mechanics, although its high speeds did degrade over time because of immature control software, i.e. firmware, though never down to HDD levels. A solid-state drive was expensive, but thanks to how it was organized and manufactured it had a resource whose exhaustion was hard to wait for.
Machines with a system SSD booted quickly and delivered highly responsive user interfaces, as well as performance on arrays of small files that no HDD could match, which was extremely important for read-oriented data centers, although the mass arrival of SSDs there was still a long way off.
So, SLC is considered very hardy solid-state memory, but... it is practically out of reach for the mass user today. It is expensive, and the copies from the first series that have miraculously survived to this day shine neither with capacity nor with speed. At a flea market, for example, you can still find 2 GB SLC media that probably sat in something like an ATM, and even the occasional Mtron turns up, but today all of this is interesting mainly as museum exhibits.