18 events
Nov 11, 2021 at 19:41 comment added supercat The popular Atari 2600 Video Computer System, for example, uses a "1MHz" 6502 at 1.19MHz, and machines that clock that processor at 1.00MHz or less are probably less common than those that clock it at 1.022MHz.
Nov 11, 2021 at 19:40 comment added supercat @cjs: Which approach is cheaper depends upon whether a system has other timing constraints, and by how much one would need to exceed the specified timings. The idea of testing chips prior to assembly at 70C was probably an exaggeration, but if a device can be easily made to disable automatic refreshes, and a device maker has a heated room where assembled machines are burn-in tested at e.g. 50C and machines can reliably run a test program at that temperature, that would imply that the machines would likely be reliable in normal use. Pushing devices at least a little beyond spec was common.
Nov 10, 2021 at 22:14 comment added cjs Looking again, I think the 4816 datasheet I linked above does specify a 2 ms refresh period. It's given as "2 ns" near the bottom of p.3 (marked p.91), which is obviously absurd for a refresh interval, so I assumed it referred to something else. But the Fairchild F4116 datasheet gives "2 ms" for the same figure, so I think that "ns" in the 4816 datasheet is a typo.
Nov 10, 2021 at 21:57 comment added cjs @supercat Interesting speculation, but it seems a lot cheaper, not to mention less risky, just to build a design that refreshes within the specified interval.
Nov 10, 2021 at 19:39 comment added supercat ...could pass the more stressful test. I don't know if the BBC Micro ever did that, but it wouldn't have been difficult (at least if one didn't care how precise the temperature was for testing).
Nov 10, 2021 at 19:38 comment added supercat @cjs: If a DRAM has a specified 4ms interval, then it would be likely that at a temperature of 25C most chips would work correctly even if the interval were extended to 8ms or probably even 16ms. As the interval gets longer and temperatures get higher, however, the fraction of manufactured chips that would continue to work reliably would decrease. If some particular design would make it easy to support a refresh interval that was somewhat longer than specified, a vendor could test out DRAM chips at 70C using an interval that was half again as long as that, and only use chips that...
Nov 10, 2021 at 13:58 comment added cjs Are you sure the BBC Micro's 4816s did not need to be refreshed as often as other RAM chips of the day? I know that the 4816 had a faster access time to support the BBC's faster clock, but I don't see any mention of longer refresh intervals in the datasheet I found.
Nov 10, 2021 at 12:19 comment added Chromatix @cjs I think that should suffice.
Nov 10, 2021 at 12:19 history edited Chromatix CC BY-SA 4.0 (added 341 characters in body)
Nov 9, 2021 at 6:16 comment added cjs Now that we have other answers clarifying that the DRAM refresh interval is 2-4 ms, not tens of ms, it would be nice to update this answer to match that.
Jun 18, 2021 at 0:42 comment added Chromatix @SingleMalt DRAM arrays always need a row buffer, and it is built into the DRAM chip. Relying on DMA to perform DRAM refresh is a common technique, and generally just requires reading each row address in turn, which in early micros was a natural result of video scanout; the PC has a separate display module with its own memory, so the main memory has to be refreshed explicitly. More modern memory controllers have dedicated logic which activates a self-refresh function in FPM and later DRAM chips.
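A rough sanity check of the PC-style DMA refresh described in the comment above. The 8253 timer divisor of 18 and the ~1.19 MHz timer clock are my recollection of the commonly described stock IBM PC BIOS setup, not figures stated in this thread, so treat this as a sketch under those assumptions:

```python
# Back-of-the-envelope check of IBM PC DMA-driven DRAM refresh timing.
# Assumed figures (not from the comments above): timer channel 1 divisor
# of 18, each tick triggering a dummy read on DMA channel 0.
PIT_CLOCK_HZ = 1_193_180       # 8253 input clock, ~1.19318 MHz
TIMER1_DIVISOR = 18            # assumed stock BIOS value
ROWS_TO_REFRESH = 128          # 16Kx1 DRAM: 128 row addresses
MAX_REFRESH_PERIOD_MS = 2.0    # per the 4116-class datasheets cited later

trigger_period_us = TIMER1_DIVISOR / PIT_CLOCK_HZ * 1e6
full_pass_ms = trigger_period_us * ROWS_TO_REFRESH / 1000

print(f"refresh trigger every {trigger_period_us:.2f} us")             # ~15.09 us
print(f"all {ROWS_TO_REFRESH} rows covered in {full_pass_ms:.2f} ms")  # ~1.93 ms
print("within spec:", full_pass_ms <= MAX_REFRESH_PERIOD_MS)           # True
```

Under those assumed numbers, a full sweep of 128 rows takes about 1.93 ms, just inside the 2 ms interval discussed elsewhere in this thread.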
Jun 17, 2021 at 19:57 comment added Single Malt Do you know if early x86 processors had the refresh circuitry on chip, and if so, up until what generation? One thing that I have not understood related to this is why the original IBM PC uses channel zero of the 8237 / 8257 DMA chips for refresh. A guess was that the refresh was done by reading and writing the same memory cells, that is, that there is no need for a row buffer.
Mar 5, 2020 at 2:50 comment added cjs No, it definitively did not. The DRAM refresh as designed is made no easier by the video circuitry; it would be exactly the same were the frame buffers linear, or even were the video circuitry completely replaced by just a 7-bit counter generating addresses used to read any part of RAM. See my answer for details. Raffzahn's answer provides a bit more detail than mine on the reasons for the funny frame buffer layout, also emphasising that it's nothing to do with DRAM refresh.
Mar 5, 2020 at 2:33 comment added Chromatix @cjs That just means the display refresh has to iterate over the DRAM rows multiple times per frame, at least 9 times. Woz was always keen to reduce component count, so the bizarre screen memory layout still plausibly resulted from a confluence of those two pressures.
Mar 5, 2020 at 0:30 comment added cjs Also another point here: the screen refresh rate on the Apple II (once per 16.66 ms) is far too slow to refresh the DRAM (2 ms. max interval, per the 4116 data sheet). This answer is not correct, at least as far as the Apple II goes. It's unfortunate that it's still upvoted far past any of the more correct answers.
Mar 4, 2020 at 3:19 comment added cjs Because the Apple II uses 4116 chips, the requirement is quite small: 128 accesses will refresh the entire DRAM. (See my answer for details.) So no, none of the "contortions" regarding the mapping of video memory are related to refresh, beyond not having any video modes using a frame buffer of less than 128 bytes. (That's pretty trivial; 40x24 text mode needs a minimum of 960 bytes.) Nor does any of the video design affect the DRAM address multiplexer, which is quite straightforward anyway.
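For reference, a small sketch of the arithmetic behind the figures quoted in the last few comments, assuming the standard 4116 organisation of 16,384 bits as 128 rows by 128 columns, the 2 ms maximum refresh interval from the datasheet, and the 16.66 ms frame period mentioned above:

```python
import math

# Arithmetic behind the 4116 refresh figures quoted in the comments above
# (geometry and intervals are the commonly cited datasheet values, assumed here).
TOTAL_BITS = 16 * 1024        # 4116 is a 16,384 x 1 DRAM
ROWS = 128                    # 128 row addresses must each be strobed
MAX_REFRESH_MS = 2.0          # maximum refresh interval from the datasheet
FRAME_MS = 16.66              # video frame period mentioned in the comments

columns_per_row = TOTAL_BITS // ROWS                      # 128
per_row_budget_us = MAX_REFRESH_MS * 1000 / ROWS          # ~15.6 us average
passes_per_frame = math.ceil(FRAME_MS / MAX_REFRESH_MS)   # 9

print(f"{ROWS} rows x {columns_per_row} columns")
print(f"one row refresh every {per_row_budget_us:.1f} us on average")
print(f"row sweeps needed per {FRAME_MS} ms frame: at least {passes_per_frame}")
```

These are the same numbers as in the comments: 128 row accesses per 2 ms, and at least 9 full sweeps of the row addresses within each 16.66 ms frame.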
Mar 3, 2020 at 14:59 history edited Chromatix CC BY-SA 4.0 (added 707 characters in body)
Mar 3, 2020 at 14:19 history answered Chromatix CC BY-SA 4.0