Post-Mortem: Apple 60W MagSafe Power Adapter (A1344)

Unlike most of my other posts which are relatively straightforward, this post has a little story behind it. In March, Matson of Chicago, Illinois, USA contacted me about his “dead” Apple 60W MagSafe Power Adapter and wondered if I’d be happy to take a look at it.

At the time, I felt it would be silly to post a dead item internationally, because the postal charges from the US to Australia were not cheap. Besides that, some failures are fairly obvious, so I encouraged him to explore a little on his own to see if he could work it out, with a few pointers to common issues (e.g. open primary fuse, bad capacitors, open windings on the transformer, PCB trace problems). Anyone who knows me closely will know that I am not a fan of Apple equipment in general, and that the only Apple gear I own is the stuff that I’ve won. Even if I had the full brick, I wouldn’t have had anything to test it with – the MagSafe adapters are likely to be a little more sophisticated than a straight DC supply.

Five months later, in August, I got another e-mail from Matson, this time with an offer of a very useful piece of equipment. Of course, I was interested, but at the same time slightly hesitant. After a few back and forths, it was decided that I would accept the piece of equipment on one condition – that I also accept the broken power adapter and do a post-mortem on it. From my point of view, it was a win-win situation – I would have a new toy, and a subject for another blog post which might be somewhat informative.

Earlier this week, the goodies arrived, so the first thing I did was get down to business. The other bits and pieces … well, I’ll get to those later when I have some more time, I suppose.

The Deceased Power Adapter

As international postage is expensive, the adapter arrived devoid of its plug portion. It also had the MagSafe lead removed, which is a good thing, because that lead has a very special plug at the end which is worth its weight in gold if you want to hack another power supply for your Mac. Both help to shave down the shipping weight and costs.

As its owner had already decided to try his hand at some exploration, the case was “nicely” opened, which saved me the trouble of prying it apart (and maybe hurting myself in the process). The unit is model number A1344.

Despite all of the Apple vs Samsung rivalry, it’s nice to see that this power adapter comes from Dongguan Samsung Electro-Mechanics. Yep. Your MacBook was being powered by a Samsung power adapter … fancy that. It is rated at 16.5V and 3.65A, for a power rating of 60.225W (rounded down to 60W for convenience).

Separating the halves shows just how well built genuine Apple power supplies are. In this case, both halves of the outer shell have a metal plate on their interior to help spread any generated heat and prevent hot-spots, which can be discomforting to the user.

The interior board is covered by a wrap-around, multi-layer aluminium shield, which acts both as an RF shield to stop interference leakage and as a heat spreader, being thermally bonded to the heatsinks on the board. Copious amounts of tape were used as insulation around sensitive components. The shield was even screwed to the heatsinks to improve heat transfer.

The only sign of distress I saw was some melted plastic near the power plug and slight blackening of the PCB. As it turns out, this was a consequence of a first attempt by Matson to desolder the shield grounding connection (a tab soldered to the board and spot-welded to the shield). As this supply will never be used again, I just decided to tear the trace off the board, hoping to see something underneath that could explain its lack of function without needing to get out the soldering iron.

With the cover removed, we can see just how crammed these power bricks are. They are chock full of components to the point that the board has many plastic shims inserted to maintain isolation, and rubberized self-adhesive pads to keep components from vibrating against the enclosure and damaging their insulation. There really isn’t much space for anything extra in here.

Even the reverse side is well covered with components. Some helpful silkscreen markings are provided on the rear – in the centre, we can see a row of three optoisolators for feedback control and, just to the left and slightly below, the output current shunt resistor. There are rubber feet on the underside as well. The slots the shims fit into are visible, and they actually weaken the PCB slightly.

If we look near the output, we can see where the original cable had been cut.

The Autopsy Begins

All this was pretty much just the equivalent of “taking off someone’s clothes” and doing a visual inspection. Sometimes, at this point, the failure becomes obvious because you can see it, or you can smell it. Unfortunately, in this case, the power adapter actually looks quite fine – assuming that the melted corner near the fuse and input is the result of the attempted disassembly. To find the cause of death, we must go deeper.

Because of its compact size, it becomes difficult to access components to probe them directly. Probing in-circuit can also give false readings due to interconnected circuitry. As a result, the strategy changed to a more time-consuming and in-depth teardown. With a soldering iron. And some desoldering braid.

The first step was to remove the heatsinks from the PCB, as that would free the semiconductor components.

On the primary side, I removed the heatsink that was shared between the primary switching MOSFET (right) and the bridge rectifier (middle). One thing I noticed was that the primary MOSFET wasn’t screwed to the heatsink very tightly, so it probably had poor thermal transfer. The suspicion seems to be backed up by what appears to be a “thick” layer of thermal grease on its rear – this normally spreads out to a thin film when securely mounted to the heatsink. Even the bridge rectifier next to it, which has no fastening hardware at all, has a thinner-looking layer. This made it a prime suspect.

The next heatsink to come off was the one on the secondary side synchronous rectifier MOSFET. This was insulated from the heatsink with a silicone pad, which is appropriate. Because the heatsinks are secured to the PCB with solder, desoldering them was a bit of a chore.

That gets us here. Not much of an improvement, but the tear-apart was making progress. At this point, I decided to get serious about probing the primary side …

… only to find that the fuse (encapsulated in brown, a 4A time-delay type) was intact and not blown at all. Whatever the failure is, it might actually be on the secondary side (e.g. the controller going into shutdown for self-protection, perhaps). The board has B22E_R16 written on it, dated 11th April 2011, with a thermistor just next to it (normally for soft-start/inrush-limiting purposes, though this one may be a thermal overheat protection). Since I wasn’t going to power up the unit, I decided on a full strip-down of all through-hole components …

… there’s a few taken away … and if we keep going …

… voila. We’ve reached the bare PCB. I like how there’s a spelling error on the silkscreen – the ‘neutral’ connection is marked ‘NEUTURAL’. Interestingly, buried underneath all of that is F102 – a Littelfuse 125V 5.0A time-delay fuse protecting the output in case everything else fails, as a last line of defence. I checked this with a meter – it was also intact, so that made things a little curious.

The underside was still littered with SMD components, which I wasn’t going to desolder, as many of the markings were hard to read and their functions were not easily tested for. There’s a lot of desoldering-braid flux residue (black/brown) left everywhere, but I think I did a good job of taking everything off.

Above are all of the components which ended up being removed. Let’s do some component-level testing to see if anything here is the problem.

Component Functional Testing

Just like in air-crash investigations, once you recover components from a system, it’s a good idea to see if they still work. After all, if they do, I could add them to the junk box, and we would know they were not the cause of failure. I first started with most of the primary side components, hand-drawing on a sheet of paper as I went along.

Nothing was amiss here, but in case you were interested, here are some datasheets for the identified components:

It was somewhat interesting to see the various components used – I would have assumed they’d opt for Japanese electrolytic capacitors for all of their filtering needs because of their better lifetime reputation, but they went with South Korean products from Samyoung instead. That being said, the Samyoung capacitors are rated for 6000h+ at 105°C, so they are long-life capacitors (at least, by rating).

Now we move on to the semiconductors and transformers to see if there’s anything wrong with these …

The transformer turned out to be a Li Shin Enterprise (LSE) product, which is no surprise, as they’re involved in many SMPS power products and even power cables for PCs. The transformer had three windings, as many SMPSes do – a primary, a feedback and an output winding. All windings were insulated from each other and continuous – so the transformer was just fine. The bridge rectifier was a Lite-On Semiconductor glass-passivated bridge rectifier, the GBL406, rated for 600V peak reverse voltage and 4A forward current (matching the primary fuse). I measured the voltage drops across the legs, and all of the diodes were still functioning well as diodes. It seems it hadn’t failed.

Now it was down to the semiconductors – the first was the primary side MOSFET, an Infineon SPP11N65C3 CoolMOS power transistor with a 650V/11A rating and an Rds(on) of 0.38 ohms. The gate wasn’t conductive to any other pin, which was good, but the source and drain were shorted through in both directions. This indicates the primary MOSFET had failed as a dead short – this would stop the unit from functioning, but would normally result in a blown primary fuse and a potentially smelly result from the overheating MOSFET.

This didn’t happen in this case, which seemed a little strange. But I have a theory – this transistor wasn’t well mounted to its heatsink, so it might have overheated over a long period and failed short due to internal melting of the silicon. But maybe (just maybe) the bond wires which connect the legs to the die also separate with thermal expansion, so the unit measures as a short when cold, but as soon as current flows and it heats up, the connection “breaks”, preventing an explosion or a blown fuse. This is similar to the flickering which can happen when LEDs overheat and their bond wires start making intermittent contact with the LED dies.

Another possibility might be that the MOSFET was static-damaged in the process of removal – but this is unusual and relatively unlikely. It’s never happened to me before.

The secondary side MOSFET was an Infineon IPP12CN10N OptiMOS2 with a 100V/67A rating and an Rds(on) of 0.0124 ohms. Testing of this MOSFET showed the gate appropriately isolated, the body diode conducting between source and drain in one direction, and an open circuit in the other direction, as expected.

As mentioned earlier, checking the PCB’s fuse didn’t detect any anomalies. To round out the checks, I decided to check some other components on the PCB:

As it turns out, no anomalies were detected in the diodes on the board, nor in the LEDs in the optocouplers. Such controllers would be expected to stop producing output if the feedback from the optocouplers is not received, and LED failure is a major cause of this. Aside from that, the other form of feedback is the current-sensing shunt, and that resistor came up okay as well.

Bonus: Transformer Teardown

A point of contention when it comes to cheap Chinese transformers is their poor insulation – often held together with just tape, with barely one wrap of tape insulating the primary from the secondary windings. Seeing as I wouldn’t have much use for the transformer, I thought taking it apart would be educational, and useful for comparison purposes.

Step one was to remove the tape on the outside. The transformer was then found to be encased in a plastic holder, and was carefully prised out. Already, we can tell it’s something better than most cheap Chinese efforts, which fall apart at the first layer of tape.

The outside of the transformer is wrapped in two rings of copper tape at right angles, soldered to one end of a winding. This is probably used to sink any stray leakage flux.

The transformer is wound on a bobbin, with a core that wraps around the bobbin and encases the windings almost completely, resulting in tight coupling – a higher-efficiency transformer with less magnetic flux leakage. The two core halves aren’t quite in perfect alignment, and are varnished together. Unfortunately, I couldn’t easily separate the halves with a few taps of the screwdriver or the pliers … so I went down to the garage, got a hammer and smashed the ferrite to bits.

In the process, one side of the former cracked away as well – but this does go to show how the unit is constructed winding-over-winding for improved efficiency, but with copious layers of insulation.

Here, we see a multi-stranded copper wire winding end soldered to an insulated leader wire that connects to the pins that go to the PCB. The other end will be connected directly to another set of pins. Unwrapping each and every layer brings a few surprises, such as the secondary winding.

The secondary winding is actually a tri-filar winding with all strands in parallel. The difference is that the winding itself is made of insulated wire – like hook-up wire, with some sort of plastic insulation. This is in addition to the wraps of tape between each layer, which add to the insulation. It’s clear from this how much they care about primary-to-secondary isolation.

There is also an internal screening layer, likely to reduce interference between primary and secondary coils.

In all, that was what was recovered from the transformer – a total of four windings: a secondary, what appears to be two primaries, and a thin winding which is probably a screening winding. The scrap of bobbin and former is shown, minus all the ferrite fragments which now litter the garage, along with the remains of the layers of tape within the former.

In case you were wondering – the windings from the transformer weigh about 7.95 grams, though the copper content is probably a little less due to the enamel and plastic insulation.

Conclusion

Over a long and dedicated afternoon, I basically desoldered all the through-hole components off the board and checked each of them for functionality. The only anomaly spotted was the primary side MOSFET being a dead short – this would normally trigger a cascade failure which would be smelly and blow the primary side fuse. Interestingly, this didn’t happen – and I theorize it may be due to intermittent internal wire bonds, as I’ve seen in some LEDs, which break the connection as the die heats up. But who knows – if it had been plugged in and turned on another 10-20 times, maybe it would have blown a fuse.

Another possibility is that the MOSFET was damaged during extraction, but I find that unlikely. Supporting evidence includes the extraction of the other MOSFET without damage, and the poor heatsink contact on the failed MOSFET, which may have been the primary cause.

Why this particular MOSFET failed is a bit of a mystery. While I list heat as a major contributor, it may not have been the only factor. The supply doesn’t seem to have any surge protective devices on its input (e.g. MOVs). Without this protection, transients from lightning or switching events on the mains network are likely to get through to some degree, and over time they could overstress the MOSFET and cause it to fail prematurely. Some other larger power supplies, and even LED downlights, have a MOV as a low-cost insurance policy against surges – basically a surge protector within the PSU itself. Not seeing one inside the Apple supply is a little bit of a disappointment.

Of course, no other anomalies were detected in the components I probed, but a failure in a controller IC, or in the surface-mount transistors which were not tested, is equally possible.

While it is unfortunate that the power supply failed, its design seems fairly decent. Isolation from primary to secondary was ensured, even at this compact size, through lots of anti-tracking slots and insulating plastic sheets. The capacitors, while not my preferred quality Japanese makes throughout, are at least long-life units with a good temperature rating. The number of inductive filters was also good, and the shielding was excellent, doubling as a heatsink. The insulation in the transformer was excellent as well, as expected.

I hope this satisfies your curiosity Matson, and thanks for your contributions.

Video Compression Testing: x264 vs x265 CRF in Handbrake 0.10.5

Having played around with video since the days when I had a few multimedia CD-ROMs and a BT878-based TV tuner card, video compression is one area that has always amazed me. I watched as early “simple” compression efforts such as Cinepak and Indeo brought multimedia to CD-ROMs running at 1x to 2x – good enough for interactive encyclopedias and music video clips. The quality wasn’t as good as TV, but it was constrained by the computing power available then.

Because of the continual increase in computing power, I watched as MPEG-1 brought VCDs with VHS-like quality in the same amount of storage as normally taken by uncompressed CD-quality audio. Then MPEG-2 heralded the era of the DVD, SVCD and most DVB-T/DVB-S transmissions, with a claimed doubling of compression efficiency. Before long, MPEG-4 ASP (a descendant of H.263) was upon us, with another doubling, enabling a lot of “internet” video (e.g. DivX/Xvid). Another bump was achieved with MPEG-4 Part 10 (H.264/AVC), which improved efficiency to the point where standard definition “near-DVD-quality” video could fit into the same sort of space as CD-quality audio.

Throughout the whole journey, I have been doing my own video comparisons, though mostly empirically, by testing out several settings and seeing how I liked them. In the “early” days of each of these standards, it was a painful but almost necessary procedure to optimize the encoding workflow and achieve the required quality. I had to endure encode rates of about an hour for each minute of video when I first started with MPEG-1, then again with MPEG-2, MPEG-4 ASP, and MPEG-4 AVC. Luckily, the decode rates were often “sufficiently fast” to render the output in real time.

Developments in compression don’t stop. Increased computing power allows more sophisticated algorithms to be implemented. Increasing use of internet distribution and continual pressure on storage and bandwidth provide motivation to transition to an even more efficient form of compression, trading off computational time for better efficiency. Higher resolutions, such as UHD 4K and 8K, are likely to demand such improvements to become mainstream and to avoid overtaxing the limited bandwidth available in distribution channels.

The successor, at least in the MPEG suite of codecs, is MPEG-H Part 2, otherwise known as High Efficiency Video Coding (HEVC) or H.265. The standard was first completed in 2013 and promises another near-halving of bitrate for the same perceptual quality; it is slowly seeing adoption owing to the rise of 4K cameras and smartphone SoCs with built-in hardware-accelerated decoding/encoding. Unfortunately, licensing appears to be one of the areas holding HEVC back.

Of course, it’s not the only “next generation” codec available. VP9 (from Google) directly competes with HEVC, and has been shown by some to have superior encoding speed and similar video performance, although support is more limited. Its successor has been rolled into AOMedia Video 1 (AV1), which is somewhat obscure at this time. From the Xiph.Org team there is Daala, and from Cisco there is Thor. However, in my opinion, none of these codecs has quite reached the “critical mass” of adoption needed to become as hardware-embraced and universally accessible as the MPEG suite of codecs.

I did some initial informal testing on H.265 using x265 late last year, but it was not particularly extensive because of time limitations and needing to complete my PhD. As a result, I didn’t end up writing anything about it. This time around, I’ve decided to be a little more scientific to see what would turn up.

Before I go any further, I’ll point out that video compression testing is an area where there are many differing opinions and objections to certain types of testing and certain sorts of metrics. As a science, it’s quite imprecise because the human physiological perception of video isn’t fully understood, thus there are many dissenting views. There are also many settings which can be altered in the encoding software which can impact on the output quality, and some people have very strong opinions about how some things should be done. The purpose of this article isn’t to debate such issues, although where there are foreseeable objections, I will enclose some details in blockquotes, such as this paragraph.

Motivation

The main motivation of the experiment was to understand more about how x265 compares to x264 in encoding efficiency. Specifically, I was motivated by this tooltip in Handbrake that basically says “you’re on your own”.

rf-window

As a result, I had quite a few questions I wanted to answer in as short a time as possible:

  • What is the approximate bitrate scale for the CRF values and how does it differ for x264 vs. x265?
  • How does this differ for content that’s moderately easy to encode, and others which are more difficult?
  • How do x264 CRF values and x265 CRF values compare in subjective and synthetic video quality benchmarks?
  • What are the encoding speed differences for different CRF values (and consequently bitrates), and how does x264 speed compare to x265 speed?
  • How do my different CPUs compare in terms of encoding speed?
  • Does x265 handle interlaced content properly?

Consequently, I had to develop a test methodology to address these issues.

Methodology

Two computers running Windows 7 (updated with the latest patches as of publication) were used throughout the experiment – an AMD Phenom II X6 1090T BE @ 3.9GHz was used to encode the “difficult case” set of clips, and an Intel i7-4770k @ 3.9GHz was used to encode the “average case” set of clips. The encoding software was Handbrake 0.10.5 64-bit edition. The x264 encoding was performed by x264 core 142 r2479 dd79a61, and the x265 encoding was performed by x265 1.9.

The test clips were encoded with Handbrake in H.264 and H.265 for comparison at 11 different CRF values, evenly spaced from 8 to 48 inclusive (i.e. spaced by 4). For both formats, the preset was set to Very Slow, and no encoder tuning was used. The H.264 profile selected was High/L4.1, whereas for H.265 the profile selected was Main. It was later determined that the H.265 level was L5, so there is some disparity in the featuresets; however, High/L4.1 is the most common for Blu-ray-quality 1080p content, and a matching setting was not available in Handbrake for x265. In the additional options, interlace=tff was used for the difficult case to correspond with the interlaced status of the content. No picture processing (cropping, deinterlacing, detelecining, etc.) within Handbrake was enabled.
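For those who prefer the command line, a roughly equivalent HandBrakeCLI invocation might look like the following (these encodes were actually done through the GUI, so this is only an approximation, and the filenames are illustrative):

HandBrakeCLI -i source.mkv -o x264_crf20.mp4 -e x264 -q 20 --encoder-preset veryslow --encoder-profile high --encoder-level 4.1
HandBrakeCLI -i source.mkv -o x265_crf20.mp4 -e x265 -q 20 --encoder-preset veryslow --encoder-profile main -x interlace=tff

Here -q sets the constant quality (CRF) value, and -x passes the extra encoder options string (interlace=tff was only applied to the interlaced difficult-case clip).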

Final bitrates were determined using Media Player Classic – Home Cinema’s information dialog and confirmed with MediaInfo. Encoding rate was determined from the encode logs. As the AMD system was my “day to day” system, it was in use during several encodes resulting in outlying reduced encode rate numbers. These have been marked as outliers.

The encoded files and the source file were then transcoded into lossless FFV1 AVI files using FFmpeg (version N-80066-g566be4f built by Zeranoe) for comparison (noting that no colourspace conversion occurred; the files remained YUV 4:2:0). This was because unusual behaviour, resulting in implausible SSIM/PSNR figures, was witnessed if this was not done. Frame alignment of the files was verified using VirtualDub by checking for scene-change frames – in the case of the “difficult case” video, the first frame of the source file was discarded, as Handbrake did not encode that frame, to maintain video length and frame alignment. The “average case” video did not need any adjustments.
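The exact command used isn’t recorded, but an FFmpeg invocation along these lines performs the lossless transcode (filename illustrative; FFV1 retains the YUV 4:2:0 pixel format without conversion):

ffmpeg -i x265_crf20.mp4 -c:v ffv1 -an x265_crf20_ffv1.avi

The -an flag drops the audio, as only the video streams are being compared.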

Pairs of files were compared for SSIM and PSNR using the following command:

ffmpeg -i [test] -i [ref] -lavfi "ssim;[0:v][1:v]psnr" -f null -
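Since this had to be repeated for every encode against the reference, a small shell loop along these lines (filenames illustrative) saves some retyping – FFmpeg prints the SSIM/PSNR summaries to stderr, which is appended to a log here:

for f in *_crf*_ffv1.avi; do
  ffmpeg -i "$f" -i reference_ffv1.avi -lavfi "ssim;[0:v][1:v]psnr" -f null - 2>> results.log
done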

Results were recorded and reported. The data produced is available in the Appendix at the end of this post.

Two frames from each video were extracted, and a 320×200 crop from a detailed section was assembled into a collage for still image comparison. The frames were chosen to be at least two frames away from a scene cut, to avoid picking a keyframe. This was performed by extracting .bmp files with FFmpeg (a conversion from YUV 4:2:0 to RGB24), then assembling them in Photoshop and exporting to lossless PNG to avoid corrupting the output.
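As a sketch, a single frame (numbered from 0) can be pulled out to a .bmp with FFmpeg’s select filter – e.g. for frame #215 (filenames illustrative):

ffmpeg -i x265_crf20_ffv1.avi -vf "select=eq(n\,215)" -frames:v 1 frame0215.bmp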

Subjective video quality was assessed using my Lenovo E431 laptop connected to a Kogan 50″ LED TV, calibrated beforehand by eye to ensure highlights and shadows did not clip. Testing was done viewing at 2.5×H distance from the screen in a darkened room. Overscan correction was applied; however, all other driver-related enhancements were disabled. Frame-rate mode switching in MPC-HC was used to avoid software frame-rate conversion. TV motion smoothing was not available, ensuring the viewed result was consistent with the encoded data. Subjective opinions at each rate were recorded.

The clips used were:

  • Gfriend – Navillera (Average Case) – H.264 [email protected] (34943kbit/s) 3m32s [email protected] 8-bit 4:2:0.
  • Girls’ Generation – Galaxy Supernova (Difficult Case) – H.264 [email protected] (27163kbit/s) 3m29s [email protected] 8-bit 4:2:0.

Approximations of the clips used are linked above (YouTube); however, the actual video files differ slightly (especially the difficult case, where the online video is missing a few tens of seconds). The encoding by YouTube is also relatively poor by comparison to the source. Unfortunately, as the source clips are copyrighted, I can’t distribute them.

The choice of the clips was for several reasons – I had good quality sources of both samples, which meant a better chance of seeing encoding issues; I was familiar with both clips; and both feature segments with high-sharpness detail. The difficult case clip is especially tricky to encode, as the background has high spatial-frequency detail, whereas the “focal point” – the dancing girl-group members – has relatively “low” frequency detail, so encoders often get it wrong and devote a lot of attention to the background. It also has a lot of flashing patterns which are quite “random” and require high bitrates to avoid turning into “mush”. (I did consider using T-ARA – Bo Peep as the difficult case clip, but its difficulty was mostly “fast cuts” rather than any tricky imagery, plus my source quality was slightly lower.)

At this point, some people will object to the use of compressed material as the source. The usual objections include the potential to favour H.264, as the material was H.264-coded before, and the potential for loss of detail rendering high CRF encodes “meaningless”.

However, I think it’s important to keep in mind that if you expect the output to resemble the potentially imperfect result of the compressed input, this is less of an issue. The reference is the once-encoded video.

The second thing to note is that I’ve chosen the sample clips with the highest bitrate and cleanest quality I have available – this maximises the potential for noticing encoding problems.

Thirdly, it’s also important to note that transcoding is a legitimate use of the codec – most people do not have the equipment to acquire raw footage and most consumer grade cameras already have compressed the footage. Other users are likely to be format-shifting and transcoding compressed to compressed. Thus testing in a compressed to compressed scenario is not invalid.

Results: Bitrate vs CRF

It’s an often-touted piece of advice that a change of CRF by ±6 will halve/double the bitrate. Suggested rate factors are normally around 19 to 23. Because I had no idea what bitrate a given CRF value would produce, nor whether x265 adheres to the same convention, I found out by plotting the resulting bitrates on a semi-log plot and curve fitting.

bitrate-vs-crf-value

In the case of the difficult case for x264, the upper-end CRF 8 bitrate fell off because it had reached the limits of the High@L4.1 profile. Aside from that, the lines are somewhat wavy but still close to an exponential function, with exponents ranging from -0.108 to -0.136.

As a result, from the curve fits, it seems that for x265 it takes a CRF movement of 5.09667 to 5.5899 to see a halving/doubling in size. For x264, it took 5.68153 to 6.41801. It seems that x265 is slightly more sensitive to the CRF value in setting its bitrate (average ~5.34 as opposed to ~6.05).
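To connect the fitted exponents to the halving/doubling figures: if bitrate ≈ B0 × e^(−k × CRF), then the CRF movement required to halve the bitrate is ln(2)/k. Using the extremes of the fits above:

ln(2) / 0.136 ≈ 5.10 (the steepest x265 fit)
ln(2) / 0.108 ≈ 6.42 (the shallowest x264 fit)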

Readers may be concerned that my x264 examples involve using a different profile and level (High@L4.1) versus the x265 (Main@L5). It is acknowledged that this will cap the output quality – in future, I’ll try to match the encode levels, but that is not directly configurable for x265 in Handbrake at present.

Results: Bitrate Savings at CRF Value

On the assumption that the CRF values correspond to the same quality of output, how much bitrate do we save? I tried to find out by comparing the bitrate values at given CRFs.

bitrate-ratio-graph

The answer is less straightforward than expected. For the difficult case, the x265 output averaged 92% of the x264 output, but varied quite a bit – in some cases at higher CRFs, it was larger than the x264 output. The average case displayed an average size of 59%, which is more in line with expectations, and is mostly stable around the commonly-used CRF range.

Then, naturally, comes the actual question of whether the CRF values provide the same perceived quality.

Results: SSIM and PSNR

There are two main metrics used to evaluate video quality – namely Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR). These metrics are widely used, and are easily accessible thanks to FFmpeg filters. Their characteristics differ somewhat, with SSIM attempting to be more perceptual, so it’s helpful to look at both.

At this point, many encodists may point out the existence of many other, potentially better, video quality judgement schemes. Unfortunately, they’re less easily accessible, they’re less widely used, and there will almost certainly be debates as to whether they correlate with perception or not.

This area is continually being contested, so I’d rather stick to something which is widely used and whose caveats are known to some extent. In the case of SSIM and PSNR, one of the biggest disadvantages, to my knowledge, is that they make no temporal assessment of quality. They are also source-material sensitive, and not very valid when comparing across different codecs. Of course, we can’t rely solely on synthetic benchmarks.

ssimnorm-vs-crf-value

We first take a look at the SSIM versus CRF graph. In this graph, using the normalized (to 1) scale of SSIM, we can see the quality “fall-off” as CRF values are increased. The slope is steeper for the difficult case clips compared to the average case. For the average case, the SSIM is almost tit-for-tat between x265 and x264 at each CRF value, with the exception of CRF 48. Between the difficult case clips, there is a ~0.015 quality difference favouring x264.

ssimnorm-vs-bitrate

For fun, we can also plot this against bitrate to see what happens. In the average case, the lines are very close together, and the quality takes an abrupt turn for the worse at about 4Mbit/s. In all but the highest bitrates, x265 has an advantage. The difficult case shows a less pronounced knee, and has x264 leading. A potential explanation for this can be seen in the subjective viewing section.

ssimdb-vs-crf-value

To see differences at the high end more clearly, we can plot the dB value of SSIM. We can see that at lower CRFs (<20) for the average case, x264 actually pulls ahead with a higher SSIM. Whether this is visible, or even a positive impact, will need to be checked, as cross-codec comparisons are not as straightforward.

ssimdb-vs-bitrate

Repeating for bitrate, we see the same sort of story as we saw with the normalized values.

psnraverage-vs-crf-value

Looking at the PSNR behaviour shows only minor differences throughout, with an exception at the lowest CRF. The minimum PSNR also seems to “level out” at high CRF values, so the “difference” in quality between the best and worst frames is lower. In all, there’s really no big difference in PSNR between x264 and x265 for the average case on a CRF-value basis.

psnrdifficult-vs-crf-value

The difficult case shows a fairly similar result, without major differences, with the exception of the low-CRF end, where H.264 profile restrictions prevented the bitrate from going any higher, limiting the achievable PSNR. Interestingly, the PSNR variance for x264 increased as the CRF was lowered into this bitrate-limited region – so while the PSNR average is better, the worst frames were more poorly encoded to make that happen.

psnr-vs-bitrate

Plotting the same plots versus bitrate doesn’t reveal much more.

It seems that, on the whole, both PSNR and SSIM metrics achieved similar values for corresponding x264 and x265 CRF values. As a result, at least from a synthetic quality standpoint, the quality of x264 and x265 encodes at the same CRF is nearly identical, implying that an average bitrate saving of 41% can be achieved in the average case (and just 8% for the difficult case).

Results: Encode Rate

Of course, with every bitrate saving comes a compute penalty, so it’s time to work that out.

encrate-vs-crf-value

Plotting against CRF values first, we can see that the Intel machine that encoded the “average case” files was much faster than the older AMD machine that encoded the “difficult case” files. Interestingly, encode speed increased as the CRF increased (i.e. as bitrates fell) on the Intel machine, but didn’t show as strong a relationship on the AMD machine. The fall-off in encode rate as CRF reached 48 may have to do with hitting “other” resource limitations within the CPU.

encrate-vs-bitrate

The same thing is plotted versus the resulting bitrate. Overall, the encode rates (excluding the purple outlier data points) show that x265 achieves on average just 15.7% of the speed of x264 on the Intel machine, and 4.8% on the AMD machine. Older machines are probably best sticking to x264 because of the significant speed difference. The difference in encode rates at lower bitrates/higher CRFs may be due to different performance optimizations and cache sizes between the CPUs.

This also highlights a potential pitfall for buyers deciding whether to upgrade based on a single metric such as CPUBenchmark scores. In our case:

AMD Phenom II X6 1090T BE
5676 @ 3.2GHz
6918 @ 3.9GHz (scaled for clock rate)

Intel Core i7-4770k
10131 @ 3.5GHz
11289 @ 3.9GHz (scaled for clock rate)
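The clock-rate scaling above is a simple linear extrapolation from the stock-clock scores, and the expected ratio follows from it:

5676 × (3.9 / 3.2) ≈ 6918
10131 × (3.9 / 3.5) ≈ 11289
11289 / 6918 ≈ 1.63 (i.e. 163%)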

This would lead us to expect the i7-4770k to perform at 163% of the AMD Phenom II X6 1090T BE. In reality, it performed at 213% on x264 and 637% on x265. Quite a big margin of difference.

Results: Still Image Samples

Let’s take a look at some selected still image samples to see how the different CRFs compare. I suppose publishing small image samples for the purpose of illustrating encoding quality is fair use … and while I could theoretically use artificially generated clips or self-shot clips, I don’t think they would represent the quality and characteristics of a professionally produced presentation, which would skew the encoding results.

Yes, I know, you’re going to scream at me because the human eye doesn’t perceive video as “each frame being a still picture” and some of the quality degradation might not be noticeable. But hey, this is the next best thing …

Average Case #1

This is frame #215 from the source, where SinB stares inquisitively into a sideways camera. The frame was chosen for its sheer detail, especially in the shadows.

comparison-nav0

For x265, starting at CRF 20, I can notice some alterations in the hair structure, where some of the finer hairs have been “spread” slightly. Even CRF 16 isn’t immune to this, but its image quality is good, and CRF 12 is indistinguishable from the source. CRF 24 continues the quality slide and makes things a bit blotchy, whereas CRF 28 is obviously corrupting the eyebrows, which are now just a smear, with subtle details in the eyebrows and lower eyelid edge missing.

The character of x264 is different: the initial impairments are not primarily detail loss – instead, edges seem to gain noise. CRF 20 has some odd coloured blocks in the hair, and the skin edge seems tainted with edge-colour issues. The hair is slightly smoother than at CRF 16, which appears much sharper and “straighter”. CRF 24 makes a royal mess of the hair, turning it into blotches, and CRF 28 turns it into an almost solid block while losing details in the eyebrows and eyelid.

Average Case #2

This is frame #4484 from the source, a bridge scene where the members of Gfriend are seen running across. The scene is particularly sharp, and the bars of the bridge form a difficult encoding challenge, with high detail in the planks and the water running below.

comparison-nav1

The x265 encode at CRF 16 seems indistinguishable for the most part. However, at CRF 20, Yuju’s finger has a “halo” to the left of it, and Sowon’s red pants are starting to “merge” into the bars of the bridge somewhat. CRF 24 seems to worsen the halos around the fingers; noise around the heads passing the concrete can now be seen, and the pants merging with the bridge bars is getting worse. CRF 28 is obviously starting to smooth a lot, and blockiness is obvious in the pants.

For x264, the impairments at CRF 28 were more sparkles and blocky posterization/quilting. CRF 24 showed a “pre-echo” of Yuju’s finger as well, which disappeared at CRF 20. CRF 20 appears to have lost some detail in the concrete beam behind, but isn’t bad at all.

Difficult Case #1

This is frame #1092, where Jessica (now ex-member of Girls’ Generation) had a solo shot. The frame was chosen because of the high detail in the eyes and hair.

comparison-gg0

Unfortunately, in the case of this clip, some of the detail was already lost in the encoding at the “source”, so we need to compare with an obviously degraded original.

For x265, the most obvious quality losses begin at about CRF 24, where the hair to the side goes slightly flatter in definition and some of the original blockiness (a desirable quality here) is lost. By CRF 28, the hair looks like it’s pasted on, with the loose strands a little ill-defined, and CRF 32 causes her to lose her eyebrows entirely.

For x264, CRF 20 maintains some of the original blockiness, but CRF 24 is visibly less defined in the hair in terms of that blockiness. The difference is very minor; by CRF 28, a similar loss of hair fidelity is seen, though it looks a little sharper and much noisier.

Difficult Case #2

This was frame #5827 where Yoona (left) and Tiffany (right) are dancing in front of the LED display board.

comparison-gg1

In the x265 case, in light of the messiness of the source, even CRF 24 looks acceptable. By CRF 28, Yoona has almost completely lost her eyebrows and most of her facial definition, whereas Tiffany’s nose has a secondary “echo” outline. By comparison, the x264 encode looks a bit sharper, with more visual noise around the facial features, as if they have been sharpened, resulting in some bright noise spots at CRF 24 and CRF 28. This clip is particularly tough to judge.

Summary

The still image samples seem to show that the CRF necessary to attain visually acceptable performance varies as a function of the input material. This is not unexpected. In the case of the cleaner, simpler material, CRF 12 was indistinguishable, CRF 16 was extremely good and CRF 20 was considered acceptable. For the more complex material, CRF 20 was considered good, and CRF 24 somewhat acceptable.

Results: Subjective Viewing

I spent quite a few hours in front of my large TV checking out the quality of the video. In this way, the temporal quality and perception-based quality of the videos can be assessed.

average-case-summary-table

On the whole, I would have to agree that matching CRF values produce very similar acceptance levels between x264 and x265. I would probably accept CRF 12 as visually lossless for the average case material, CRF 16 as hard-to-discern near-lossless, and CRF 20 as “watchable”. This is because I’m especially picky when it comes to quality and minor flaws when I watch material I’m familiar with (and I always wonder how people put up with YouTube and other streaming services, which so obviously haven’t got enough bitrate).

The key difference is the type of impairments that occur with x264 vs x265. Under bitrate starvation, x264 appears sharper and goes into a blocky mode of degradation, preferring to retain sharp details even if they look noisy. In contrast, x265 starts smoothing areas of lower detail while “popping” sharpness into the areas that have finer details, which sometimes looks a bit unnatural. It also starts dropping subtle motion, resulting in motion artifacts and jumpiness, but on the whole this might be slightly less objectionable, depending on your personal opinion.

difficult-case-summary-table

With the difficult case data, I came to a slightly different opinion: CRF 16 is visually indistinguishable, and CRF 20 is almost indistinguishable. I would have to agree that x264 is better for this case, and it appeared visually cleaner even at higher CRFs. This seems to be because the noise in x264 is “disguised” better by the patterning of the LED lights, whereas the smoothing in x265 becomes more obvious.

But a second, and more important issue, is the presence of a field oddity post-deinterlacing for the x265 clips, especially at CRF > 20.

decode-field-oddity

The oddity results in “stripes” appearing every n pixels vertically, as if there is something wrong with the fields there.

block-boundaries

Examining FFmpeg’s FFV1-decoded lossless file shows that the encoded result actually does have the oddity in the fields. The reason for it isn’t clear at this stage, but it may be related to an encoding-unit block boundary condition of sorts, or a poor implementation of interlaced encoding. Whatever the case, it makes interlaced files at CRF > 20 difficult to watch, especially during panning sequences.

This may go some way towards explaining why the SSIM/PSNR values for the difficult case were smoother than the “average” case, yet lower – these errors were not critical to the comparison, but are very temporally evident patterns.

Speaking of interlaced video, it’s a sad fact of life that we still have to deal with it, due to archives of old videos and some cameras still recording true interlaced content, despite the majority of the world using progressive displays. Apparently H.265 supports interlaced encoding, although there was some confusion. One naive solution some users may think of is simply to deinterlace the video first and then encode it. The problem is that you lose information through deinterlacing – if you go from 50 fields per second to 25 frames per second, you’ve lost half the temporal information. If you frame-double, you keep the temporal resolution but have to generate the missing field for each frame – computationally intensive, and potentially artifact-introducing. It can also result in a file that is incompatible with many players, and if the motion compensation/prediction algorithm is poor, you might lose sharpness in some areas. I personally prefer to keep each format (progressive/interlaced) in its respective form through to the final display stage, where the “best” deinterlacing for the situation can be applied.
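As a sketch of the frame-doubling approach in FFmpeg terms (not something done in this test; filenames illustrative), yadif in field mode outputs one frame per field, i.e. 50p from 50i:

ffmpeg -i interlaced_source.ts -vf yadif=1 -c:v libx265 -crf 20 deinterlaced_50p.mp4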

However, as it turns out, while the difficult case video is a Blu-ray standard video, it isn’t native interlaced material at all despite being 29.97fps. It’s 23.976fps material that has gone through a telecine process to make it 29.97fps. Why they would do such a thing, I don’t know, as Blu-ray supports 23.976p natively.
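For reference, FFmpeg’s documented inverse-telecine filter chain looks like this (a sketch, not part of this round of testing; filenames illustrative):

ffmpeg -i telecined_2997.ts -vf fieldmatch,yadif=deint=interlaced,decimate -c:v libx264 -crf 20 progressive_2397.mp4

fieldmatch reassembles the original progressive frames, yadif deinterlaces only the frames that couldn’t be matched, and decimate drops the duplicated frames to return to 23.976fps.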

Conclusion

After a week and a bit of encoding and playing around with things, I think there are some rather interesting results.

On the whole, for the average case, x265 produced a bitrate of about 59% of that of x264 at the same CRF. The CRF sensitivity of x265 was slightly higher than that of x264, being about ±5.34 per doubling/halving rather than ±6.05. Synthetically, the corresponding CRF values produced very similar SSIM and PSNR values for both x264 and x265, so the same “rules of thumb” might be applied, although the bitrate saving will vary depending on the specific CRF selected.

Encode rates for x265 were significantly slower than x264, as is to be expected due to the increased computational complexity. However, it seemed that higher CRF values (i.e. lower bitrates) were much faster to encode on modern hardware (possibly due to better cache use). This wasn’t reflected on my older AMD Phenom II based system (possibly due to differences in instruction sets and optimization).

Subjectively speaking, I’d have to say CRF 12 is indistinguishable and CRF 16 is good enough for virtually all cases. For the less discerning, CRF 20 is probably fine for watching, CRF 24 is beginning to become annoying, and CRF 28 is about the minimum that could be considered acceptable. The result seems to be consistent across x264 and x265, although (unexpectedly) the difficult case seemed to tolerate higher CRF values, probably because the harsh patterns were not easily resolved by the eye and noise was less easily seen. As a result, even having a “rule of thumb” CRF can be hard, as it depends on the viewer, the viewing equipment, the source characteristics and the sensitivity to artifacts.

Unfortunately, it seems that the “difficult case” data is really hard to interpret. This appears to be because x265 isn’t very good at handling interlaced content, and in using this “experimental” feature, the output wasn’t quite correct, as seen in the subjective viewing. As a result, the synthetic benchmarks may have been reflecting the strange field blending on the edges of blocks, a loss of fidelity that only resolved at fairly high quality values (CRF <= 20). The mature x264 encoder was much more adept at handling interlaced content correctly, and I suppose we should take the difficult case data as “atypical” and not representative of what properly encoded interlaced H.265 video would be like.

It looks like I’ve got another round of encoding ahead for testing the difficult case – as I discovered that the material was actually 23.976fps pulled up to 29.97fps, I’ll perform an inverse telecine on it and encode the progressive output to see what happens. This time, I’ll match the H.264 profile and level to the x265 encode for consistency as well. With any luck, the results might be more consistent with the average case.

Tech Flashback: Canon DM-2500 Intelligent Organizer

Seeing as I’m not having too much luck getting to sleep, I might as well do something productive, so here comes another post. Apologies in advance if I make any mistakes …

It’s hard to imagine now, in an era of smartphones, that it wasn’t that long ago that even PDAs were non-existent or unaffordable. In the early-to-mid 1990s, in an effort to appear somewhat technologically advanced, some people replaced their pocket notebook or diary with a “digital diary”, or “electronic organizer”.

In fact, I had two, but sadly both suffered damage in one way or another and were disposed of long ago. But the other day, in a thrift shop, I came across a Canon DM-2500 for a few dollars, and I thought it was worth buying just to blog about it.

The “Intelligent Organizer”

The organizer is from Canon – specifically, their business machines division, which is responsible for desktop calculators and the like. This unit came with its box, but was otherwise missing all of its documentation. At least we get to see the box, which boasts a list of features that doesn’t seem so remarkable today, and a picture of the unit itself.

The rear of the box gives a very honest depiction of the features and how they look on screen. That’s far from what can be said about many advertising materials nowadays … The unit is Made in China.

To appeal to international markets, the same text is written in a variety of languages on the other sides of the box.

The unit has seen some work, so it’s a little scuffed. It boasts 10kB of RAM, and has a slot cut in its cover so that the function buttons and search keys are accessible through the cover. This means that for simple “reference” purposes, the cover doesn’t need to be opened, whereas when programming is desired, opening the cover exposes the QWERTY keyboard and programming function keys.

The unit itself is almost the size of my 5.5″ smartphone, both in footprint and in thickness. It weighs slightly less, at 84.65 grams – 20 years of progress in one picture.

Opening up the cover, we see the quick-reference label on the inside panel. This is necessary, as some of these organizers have fairly complicated features, and thick manuals to go along with them. Having the basic instructions available at a glance helps you work the device when you’re “away from home”.

A QWERTY-style keyboard is available, made of the rubbery buttons you find on older calculators. It’s not particularly tactile, but serves for more convenient text entry. The symbols are hidden behind the SYM button, and everything is in upper case. Rather annoyingly, the PROG button is where you might expect backspace to be, so when correcting errors you might instead exit the programming mode and destroy any progress you’ve made in programming a record.

Because of the close spacing of the buttons, and the slightly awkward layout, entering text is not an easy job. It doesn’t help that the unit seems to lag, and the LCD refreshes slowly, so it’s really a thumb-board for one-by-one character entry. Not so good for long addresses.

Some later units had additional features, such as free-form notes, redefinable fields for the address book, expenses lists, data exchange, etc.

The rear is somewhat scuffed as well, but features a screw-down battery hatch requiring three CR2032 cells – two for main power and one for back-up. There is a reset hole to reset the memory of the unit and erase all data, and a piezo buzzer hole to let the sound out of the case.

As such organizers pre-date the availability of Flash memory, they almost universally use SRAM, which requires power to retain data. It is a volatile type of RAM, and hence battery replacement can be such a daunting task. Get the polarity wrong, remove the wrong combination of cells, or take too long while replacing them, and all your data is lost.

To combat this, some units had their own data ports for transfer to a PC (with cable and software at an additional cost) for back-up purposes. This unit doesn’t have any of these features. Because of the risk of sudden data loss without an easy back-up option, I can’t imagine too many people would have favoured such units over “physical” diaries, which don’t have this potential for catastrophic data loss.

Features

It seems the unit may have seen better days, as the LCD has some scratches on it and doesn’t have particularly good contrast unless viewed from an oblique angle. I will continue anyway, but apologies for the slanted LCD images.

When powered on, the first thing you are welcomed with is the clock.

It’s good to see that even though it’s probably an early-90s product, it still appears to be Y2K compliant. Pressing the TEL button allows us to look at the phone directory.

We are prompted to search – enter a few characters followed by enter and you can search by name, or just press the up/down search keys to scroll through the whole database, “rolodex” style.

The display consists of one dot-matrix line and two 7-segment lines – no doubt a cost-saving measure to reduce the complexity, and hence the cost, of the device. Field information is displayed in fixed segments underneath. More expensive (and later) units have full matrix displays which can render bolder text, lower-case text and more natural-looking numbers.

The whole character set can be seen in the above images, and there really aren’t that many characters (45 in total). Everything is only available in upper case as well. To program a new entry, we can press the PROG button, which briefly flashes up the capacity, where U stands for bytes used and E stands for bytes free.

Then it prompts you field by field and you enter the data followed by enter until the record is complete (Name, Company, Address, TEL1, TEL2).

The schedule feature is not particularly interesting. Each schedule entry is a line of text, a start time/date pair and an end time/date pair. Optionally, the alarm “flag” can be set to have the unit warn you of the event. I guess this is one big advantage of having an “electronic” diary – the possibility of alerts.

The calculator is a 10-digit “regular” calculator with no special functions. A regular desktop calculator is more functional owing to the traditional keypad layout which is faster and easier to use.

The calendar feature is a bit “lame” as it’s basically a week-by-week view of the dates, using both rows of numeric segments to display the days owing to space limitations. It’s hardly practical by any stretch of the imagination.

There is also a world-clock feature, which is useful for travellers and those doing business in different countries, but it’s probably got a few out-of-date timezones by now, due to the changes which happen occasionally.

The alarm feature is not anything special either. You get one alarm, that’s it.

The “secret” area is basically a partitioning scheme where any data stored in the modes within the secret area is only visible once logged in. It’s probably handy to protect your data from occasional prying eyes, but there’s no way to change the password once set … so let’s just hope nobody watches you typing it in, because it’s not covered by asterisks either!

Once logged in, the “key” icon appears in the corner to let you know that any actions are being performed in the secure area. Pressing on the secret button logs you out back into the openly accessible area.

Capacity

One thing that’s not very commonly discussed is the issue of capacity. In the period when these units were sold, aside from the “features” on the box, the next most common parameter to compare was the capacity, stated in kB. I’ve seen units from 2kB through to 256kB, and formerly owned a 2kB and a 64kB unit. But how much you can actually get out of that is not clear – depending on how the data is stored, you could make more or less of the available RAM. As a result, I conducted a few calculations and experiments to flesh it out.

Available Capacity

A capacity of 10kB should equal 10240 bytes of storage. According to the screen post-reset, the unit has 10048 bytes available, so it’s likely 192 bytes are reserved for the system’s internal usage (e.g. storing the password, the fixed alarm, the calculator memory, and the last timezone displayed).

Record Sizes

A telephone record has the following fixed maximum field lengths:

  • Name – 24 characters
  • Company – 24 characters
  • Address – 48 characters
  • Telephone 1 – 24 characters
  • Telephone 2 – 24 characters

This totals a maximum record length of 144 characters. Upon storing a maximum-length record, I saw a total of 153 bytes consumed, thus there is an overhead of 9 bytes per record, which is probably used to separate the five fields (5 bytes) and perform other administrative tasks.

Records are not of fixed length. I tried storing a record with a single-character name and null values for the remaining fields, and ended up with 10 bytes used. This indicates a reasonably efficient use of memory.

A schedule record has a 48 character field, with two time values recorded. Storage decreases by 60 bytes for a maximum-size schedule record.

As a result, with 10240 bytes available for storage, you could store 66 full-size phone records and two full-size schedule records, with 22 bytes to spare.
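Checking the arithmetic against the nominal 10kB figure:

66 × 153 + 2 × 60 = 10098 + 120 = 10218 bytes
10240 − 10218 = 22 bytes to spare

(Against the 10048 bytes the unit actually reports as free, a couple fewer full-size phone records would fit.)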

Bytes? Really?

Because of the limited character set of just 45 characters, I’m not sure that the 10kB they talk about means 8-bit bytes at all. After all, 64 possibilities can be expressed with 6 bits, so each character could be stored in 6 bits. In the case of a numeric digit (TEL1/TEL2 fields), there are just 10 possibilities, so each can be stored in 4 bits (e.g. BCD).

If 8-bit bytes were used, then the top two bits are practically free to be used as flags – perhaps for indicating secret/non-secret data, helping to “defragment” records after deletions, or indicating active alarms. If they were not used, it would be a bit of a waste.

If BCD was used to store the numeric phone numbers, then the TEL1 and TEL2 fields together would essentially take the same space as the Name field (assuming 8-bit bytes for text), and that might explain the 10kB “claimed” capacity, which isn’t really a power-of-two value – although this would quickly come undone if I stored all text and no numbers in the phone book. Alternatively, the SRAM might be made by combining 8kB + 2kB dies.

Teardown

Here, we get to the possibly fun part – the taking apart “part” of the post.

Under the battery hatch, the warnings about battery replacement are repeated. They’re even repeated on a piece of transparent plastic on top of one of the batteries. The serial number and date code are on the inside of the cover as well.

While all seemed to be well, the unit wasn’t in the best condition, as the previous owner had replaced the cells with Energizer cells and then forgotten about the unit. While I had never seen a lithium coin cell leak before, I definitely have now. I actually spent a bit of time cleaning the mess and scraping off some of the corrosion to get it to work again. Note the central contact.

Two screws on the edge hold the cover in place, along with some internal clips. The internal PCB shows some SMD components, a few diodes to prevent mishaps in case the batteries run down or are inserted incorrectly, a tantalum capacitor, and glob-top “chip on board” construction.

The LCD is connected by a many-conductor flexible cable that’s probably fairly brittle, so I didn’t touch it. I didn’t take the unit apart any further, as the other side would predictably have been the keyboard’s printed trace pattern.

The rear cover houses the piezo buzzer under a bit of tape. That’s basically it.

Conclusion

I suppose that in the early 90s, when anything digital and computer-related was considered advanced, these units may have been considered “cool” and, in some ways, the “poor man’s PDA”. Unfortunately, while they helped some people “go paperless”, they needed a battery change roughly every year, which came with a risk of complete and total data loss if not performed correctly. They were also relatively cumbersome to use, as data entry was slow, and the forms of data that could be stored were limited compared to pen and paper. There were advantages in security, reusability and schedule alarms, but some units were also very pricey and difficult to use without the manual. They were also vulnerable to everything that electronics are vulnerable to – external EMI could cause some units to lock up and freeze, requiring a reset that could kill off all the data as well. In all, I suppose their “lack” of universal popularity probably reflects just how impractical these might have been compared to a good “pen and paper” diary or notepad.
