Perhaps I should make that a more high-level question: how on earth do 'silicon chips' store things? What happens when something is 'remembered' on one?
The lies-to-children version of the main flavours of computer memory:
- A circuit that's permanently wired in a certain way. This one always outputs '1', that one always outputs '0': it's a Read Only Memory (ROM). More useful than it sounds, for storing things like which combination of segments to light up to display which number on your digital watch, or the basic program that a computer needs to check it's not broken, find a disk drive and start loading an operating system. Can usefully be constructed out of fuses: it starts out as all '1's, and if you want a particular bit to be a '0', you apply a high current and blow that fuse, making it Programmable (but only once) Read Only Memory - PROM. Doesn't need power, and keeps its contents effectively forever.
- Then there's the 'bistable' or 'flip-flop' circuit: An electrical circuit that, as the name suggests, will happily settle into one of two stable states. There will be an output that is on when it's in one state but not the other. There will also be inputs that allow you to nudge it from one state to the other by applying different combinations of volts or no-volts. When you turn the power off, it loses its state, and when you turn it back on it will settle into one state or the other, at the whim of physics (unless circuitry is present to ensure it starts up in a particular condition).
This is more or less how SRAM[1] and DRAM[2] work - they're simple to build (even out of things like relays and vacuum tubes if you've got a big enough room and lots of electricity) and nice and fast, but require power to maintain their contents. This is what your computer uses for random access working memory.
- Capacitors: A thing that stores a charge (like a battery, but using physics rather than chemistry - you'll be familiar with them providing standlight power in dynamo lights). Have a circuit to charge or discharge it in response to an input and then disconnect the power source to leave it in the desired state. Some more circuitry senses the voltage at the capacitor and provides an output. This is how EEPROM (Electrically Erasable PROM) works, and - perhaps unintuitively - most modern camera sensors[3]. It's slow and relatively expensive, but keeps its contents for a long time without external power.
- Flash memory: Same basic principle as EEPROM, but with much smaller capacitors that are effectively part of a transistor. We're into fiendishly clever semiconductor physics, and the word 'quantum' is bound to crop up at some point. Suffice to say, it lets you build circuits that behave a lot like EEPROM, but more cheaply and at much higher storage densities. It keeps its contents without power and is faster than EEPROM, but a bit more sensitive to long-term wear and ionising radiation. This is what [any computer built this decade - Ed] uses for file storage, and it's frequently used as an alternative to *ROM for bootstrapping computers large and small (hence "re-flashing the BIOS" or "re-flashing the firmware").
- Spinning rust, magnetic tape, optical and magneto-optical discs, punched cards, etc. - all that other stuff that isn't based on silicon, becoming increasingly obsolete as flash memory becomes cheaper and more dense. You've probably got a decent idea how these work already. Notable in a computing context for being used as 'storage' rather than 'memory'[4] - it would be a very slow computer that used any of these for its RAM.
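The ROM-as-fixed-wiring idea from the first bullet is easy to sketch in Python: the dict below plays the role of the permanently wired circuit, mapping a digit to which of the seven segments (labelled a-g, as is conventional) to light. The names and layout here are mine, for illustration only:

```python
# A ROM is just a fixed lookup table: address in, bits out.
# Here: digit -> 7-bit pattern, one bit per segment (a,b,c,d,e,f,g).
SEVEN_SEG_ROM = {
    0: 0b1111110, 1: 0b0110000, 2: 0b1101101, 3: 0b1111001,
    4: 0b0110011, 5: 0b1011011, 6: 0b1011111, 7: 0b1110000,
    8: 0b1111111, 9: 0b1111011,
}

def segments_lit(digit):
    """Return the set of segment names to light for a given digit."""
    pattern = SEVEN_SEG_ROM[digit]
    return {name for i, name in enumerate("abcdefg") if pattern & (1 << (6 - i))}

print(segments_lit(8))  # all seven segments lit
print(segments_lit(1))  # just the two right-hand segments, b and c
```

Blowing a fuse in a PROM corresponds to permanently flipping one of those bits from 1 to 0 - and there's no un-blowing it.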
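The bistable behaviour from the flip-flop bullet can be simulated in a few lines - a minimal sketch of a cross-coupled NOR-gate SR latch, where each output feeds back into the other gate's input (class and method names are mine, not any standard API):

```python
class SRLatch:
    """Two cross-coupled NOR gates: Q and not-Q each feed the other's input."""
    def __init__(self):
        self.q, self.nq = 0, 1  # settles into one state at 'power on'

    def step(self, s, r):
        """Apply set/reset inputs and iterate the feedback loop to a stable state."""
        for _ in range(4):  # a real circuit settles in nanoseconds; we loop
            q = int(not (r or self.nq))
            nq = int(not (s or self.q))
            if (q, nq) == (self.q, self.nq):
                break  # stable: the outputs have stopped changing
            self.q, self.nq = q, nq
        return self.q

latch = SRLatch()
print(latch.step(1, 0))  # 1 - 'set' nudges it into the Q=1 state
print(latch.step(0, 0))  # 1 - inputs released: it remembers
print(latch.step(0, 1))  # 0 - 'reset' nudges it back
```

The middle line is the whole point: with both inputs off, the feedback holds the last state - that's one bit of SRAM, as long as the power stays on.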
What gets rearranged and how does it then get interpreted into something humans can see, hear, etc?
As any good hacker movie will tell you, "it's all just ones and zeros". What I've described above covers a single one-or-zero 'bit'. To do anything useful, you need a metric fuckload of them. 8 bits gives you a byte, which can hold one of 256 possible values, for example a number between 0 and 255, or between -128 and 127. That's starting to become useful, as you have enough values to hold one character from most alphabets, in upper and lower case, with room for some punctuation and things like 'backspace', 'carriage return' and 'end of file'. Given enough kilobytes, you can store and manipulate useful amounts of text.
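You can check that arithmetic in any Python prompt - 8 bits give 2**8 values, and (in ASCII, at least) each value stands for one character:

```python
# 8 bits -> 2**8 = 256 distinct values; interpreted as a signed
# number, a byte runs from -128 to 127.
values = 2 ** 8
print(values)  # 256

# In ASCII, each value maps to a character, including control
# codes like backspace (8) and carriage return (13).
print(ord("A"), ord("a"), ord("\r"))  # 65 97 13

# So text is just a run of bytes, one per character:
print("Given enough kilobytes".encode("ascii"))
```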
Images, well, we could use that text to store a list of instructions: Draw a horizontal line this long; draw a vertical one twice as long at the end; join them up, fill it in. That's vector graphics, and while it's extremely useful, it went out of fashion for most purposes as computer memories got big enough for:
Bitmap graphics: Start at the top left corner; fill the pixels with black, white, black, black, black, white, black; next line: black, black, white, white, white, black... etc. One bit for each pixel, and with a few hundred bits you've got a grainy black & white image. Want colour? No problem. Let's use three bytes for each pixel, giving the relative amounts of red, green and blue. Yay, colour. Cripes, what a lot of bytes. Maybe a few million.
Want video? Well, it's just 25 of those every second. Gigabytes.
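Worth doing those sums. A back-of-envelope sketch, assuming (my numbers, not gospel) a 1920x1080 image, three bytes per pixel and 25 frames per second:

```python
width, height = 1920, 1080   # a few million pixels, as promised
bytes_per_pixel = 3          # one byte each for red, green, blue

frame = width * height * bytes_per_pixel
print(frame)                 # ~6.2 million bytes for one uncompressed frame

fps, seconds = 25, 60
minute = frame * fps * seconds
print(minute / 1e9)          # ~9.3 GB for one uncompressed minute of video
```

Hence the "Gigabytes" - and hence, as we'll see in a moment, compression.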
Audio? Well, as found on the back of your record player, it's just a voltage somewhere between -1 and +1 volts. We can map that to a value between -128 and 127 and store it in a byte. Do that a few thousand times a second, and feed it through the right circuitry to convert it back to analogue and drive a speaker, and you've got sound. Megabytes.
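Same idea in Python - the voltage-to-byte mapping, plus the sums, assuming 44,100 samples per second (the CD rate; the text's "few thousand" would come out smaller). The function name is mine:

```python
def sample(voltage):
    """Map a voltage in -1.0..+1.0 to a signed byte value (here -127..+127)."""
    return max(-127, min(127, round(voltage * 127)))

print(sample(1.0), sample(0.0), sample(-1.0))  # 127 0 -127

sample_rate = 44_100     # samples per second (CD-quality rate, assumed)
bytes_per_sample = 1     # one byte per sample, as in the text
seconds = 3 * 60         # a three-minute song
size = sample_rate * bytes_per_sample * seconds
print(size / 1e6)        # ~7.9 MB - megabytes, as promised
```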
The thing that's made modern digital audio and video possible is clever programs that can take an image or audio file and work out a series of instructions to re-create it that's smaller than the original file. Instead of saying "black; black; black, white, black, black, white, white" you might say "three black, one white, two black, two white" (this is run-length encoding - roughly how fax works; GIF uses a cleverer cousin called LZW). For video you can compare the difference between the frames and just copy the stuff that hasn't changed from the last one (MPEG). For audio you can look at what frequencies are present in the signal at any given time, and throw away the insignificant ones (MP3, AAC, etc) and hope that nobody hears the difference.
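The "three black, one white" trick is simple enough to write out in full - a minimal run-length encoder and decoder (function names mine):

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1       # same as the last pixel: extend the run
        else:
            runs.append([1, p])    # different: start a new run
    return [(n, v) for n, v in runs]

def rle_decode(runs):
    """Replay each (count, value) pair to rebuild the original pixels."""
    return [v for n, v in runs for _ in range(n)]

row = ["black", "black", "black", "white", "black", "black", "white", "white"]
print(rle_encode(row))  # [(3, 'black'), (1, 'white'), (2, 'black'), (2, 'white')]
assert rle_decode(rle_encode(row)) == row  # lossless: we get the row back
```

Note that it only wins when the image has long runs of one colour - which faxes of text mostly do, and photographs mostly don't; hence the cleverer schemes.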
It's algorithms all the way down.
[1] Static and Dynamic Random Access Memory, respectively. 'RAM' comes from the early days of computing, when the distinguishing feature was that you could read or write to any part of it at any time, rather than spooling through a paper or magnetic tape.
[2] This is a lie: DRAM relies on the charge in a capacitor being constantly refreshed, but that's not important right now.
[3] It turns out you can also discharge the capacitor by shining light on it, due to the photoelectric effect - arrange your memory bits in a line or grid, and you've got some pixels. There was also a predecessor to EEPROM, EPROM, where the erase process required you to shine ultraviolet light through a little window in the chip package - same principle.
[4] "A place to keep files" and "what the computer's currently thinking about". Non-trivial computers, like your phone or laptop, will use SRAM or DRAM for RAM, and flash memory (or a magnetic disk) for storage.