Wikipedia:Reference desk/Archives/Computing/2017 November 3

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 3

Portable device with SSH and Wi-Fi

What's the simplest device which is portable, has Wi-Fi, and can connect to a server using SSH? B8-tome (talk) 13:41, 3 November 2017 (UTC)[reply]

You say "Simplest", are you looking for a device with a keyboard and a screen? Or just a chip?
I won't say it's the simplest, but a literal answer to your question is an ESP8266. Here's a dev board.
If you're looking for a device with more features, I think we'll need to know your requirements a bit more. ApLundell (talk) 15:45, 3 November 2017 (UTC)[reply]
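As an aside, here is roughly what the bare-minimum firmware for the ESP8266 route might look like: a short Arduino-style sketch (assuming the standard ESP8266 Arduino core; the network name, password and server address are placeholders). It only joins a Wi-Fi network and opens a raw TCP connection to port 22; a real SSH client library would still be needed on top of this.

    // Minimal sketch for the ESP8266 Arduino core (assumed). Joins Wi-Fi and opens
    // a plain TCP connection to the SSH port. It does NOT speak the SSH protocol.
    #include <ESP8266WiFi.h>

    const char* ssid     = "your-network";   // placeholder
    const char* password = "your-password";  // placeholder
    const char* host     = "192.168.1.10";   // placeholder server address

    void setup() {
      Serial.begin(115200);
      WiFi.begin(ssid, password);
      while (WiFi.status() != WL_CONNECTED) {  // wait until the access point accepts us
        delay(500);
        Serial.print(".");
      }
      Serial.println("\nWi-Fi connected");

      WiFiClient client;
      if (client.connect(host, 22)) {          // raw TCP to port 22
        Serial.println("TCP connection to port 22 established");
        client.stop();
      } else {
        Serial.println("connection failed");
      }
    }

    void loop() {}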
Cool device, ApLundell. But I wanted one with a keyboard and screen. No need for fancy colors; just typing text on it would do. --B8-tome (talk) 18:37, 3 November 2017 (UTC)[reply]
I'm not aware of a portable device that is purely a terminal. I wish I was; I'd pick one up if it was cheap.
One alternative is "PocketCHIP" [1]. It's a pocket Linux machine that's designed to be hackable. Their marketing focus is education and games, but it would make a handy terminal. It even has pins for a UART port. "Turn Your PocketCHIP Into a Badass On-The-Go Hardware Hacker’s Terminal"
ApLundell (talk) 20:11, 3 November 2017 (UTC)[reply]
An Android tablet would certainly work. So would a Chromebook. (Chromium used to have a built-in SSH shell, but for some reason that was removed in recent versions. However, you can still install an SSH shell as an extension.)
If you're not concerned with having a high-powered machine, you can get perfectly usable Chromebooks for under $100.[2].
I'm not sure if any of these general-purpose devices we're recommending are in the spirit of the question, though. ApLundell (talk) 17:47, 4 November 2017 (UTC)[reply]
All these devices are fine. But does no one produce a purely dumb terminal with a battery? I ask because, no matter how cheap the Android or Chromebook might be, its fancy color screen would be drawing battery power. I assume PocketCHIP is the closest so far to my wishes. I suppose a device could be built on top of the ESP8266 too. B8-tome (talk) 20:32, 4 November 2017 (UTC)[reply]
Devices are produced according to what demand there is for them. Who wants a monochrome text-only screen? There's no saving in battery power unless it's a tech that doesn't need backlighting anyway, and if it's a really thin client then the radio(s) will be running full-time anyway. You could build such a thing (I have the hardware here - I've at least one rig with a few lines of LCD text display and an ESP8266, if you ignore the various other servos and things attached to it), but why would you bother when last year's Android is available for scrap price? Andy Dingley (talk) 21:32, 4 November 2017 (UTC)[reply]
If you don't mind looking really deeply into its eyes (and their tiny screens), there are some Furby models that might suit you... Andy Dingley (talk) 21:33, 4 November 2017 (UTC)[reply]
You say that, but there are low-energy screens that are either B/W or have lousy color, like the screens on the Pebble watches, or the screen on the [getfreewrite.com Freewrite] digital typewriter. (If you could put a custom firmware on the Freewrite, it would make a hilariously awesome terminal.)
I'll bet if you Kickstarted a pocket dumb-terminal with a high-contrast, low-power display (like a Pebble watch, only larger) you could get enough interest for a limited production run, if you could get appropriate parts; the screen would be the trickiest.
I'd buy one. ApLundell (talk) 01:16, 5 November 2017 (UTC)[reply]
Old qwerty phones like the Droid3 are very cheap. They have WiFi and a keyboard and a screen. Technically, you could activate one to do ssh using data on a phone plan. I used to use a qwerty slider to ssh all the time. — Preceding unsigned comment added by 71.85.51.150 (talk) 01:02, 5 November 2017 (UTC)[reply]


Kindle Keyboard
Wait! I just thought of a radical solution!
Older model Kindles have keyboards. They can be jailbroken. They have very low power consumption.
This may be the closest you'll get to an answer. ApLundell (talk) 01:20, 5 November 2017 (UTC)[reply]
Here we go: Liberate your Kindle 3 and get full-screen terminal!
A quick glance at the kindle hacking forums here makes me think this is doable. I dunno if it'll be easy, though. So don't blame me if you buy an old Kindle, then brick it.
You still wouldn't have a single-purpose terminal machine. You'd basically have a unix machine in your pocket with a laggy B/W display. It might be the closest you can get without building your own. ApLundell (talk) 01:29, 5 November 2017 (UTC)[reply]
The screens are pretty hateful for anything at this sort of speed. Andy Dingley (talk) 01:28, 5 November 2017 (UTC)[reply]
There's a video of someone using one with an external keyboard. The lag is something, alright. I guess you could pretend you've got a 300 baud modem and a phosphor display. ApLundell (talk) 01:35, 5 November 2017 (UTC)[reply]

How does the motherboard translate Assembly language?

12.130.157.65 (talk) 14:45, 3 November 2017 (UTC).[reply]

(I assume you mean Machine code, which is built (or 'assembled') from the Assembly code.)
It happens on the CPU chip.
Machine code is stored in memory and the CPU can interpret it. In the simplest form, the CPU can load an instruction from memory into an instruction buffer. Remember that data takes the form of tiny elements that are either electrically charged (1) or not electrically charged (0), and then there's circuitry in the chip that's connected to that buffer and reacts differently depending on which bits of the instruction are turned on. Then every so many clock-ticks, that buffer is flushed and a new instruction is loaded.
That's a hugely simplified explanation, though. (So simplified it's almost wrong.) If you want to learn how the CPU interprets machine code, the subject you want to read about is Microarchitecture. This is a vast topic. People get their PhDs studying this subject.
You may find it's easier to first try to understand Digital logic. That's sort of the building blocks that go towards making a CPU.
As usual, I know I'm not being too informative here. But if you read those articles you'll find they're very in-depth. If you have more specific questions on this topic we can try to answer them.
ApLundell (talk) 15:39, 3 November 2017 (UTC)[reply]
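As an aside, the fetch-decode-execute idea described above can be sketched in a few lines of C++. The "instruction set" here is invented purely for illustration, and a real CPU does all of this in logic gates rather than in software.

    // Toy fetch-decode-execute loop for an invented 8-bit machine (illustration only).
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        // Invented encoding: 0x01 = load next byte into A,
        //                    0x02 = add next byte to A,
        //                    0xFF = halt.
        std::vector<uint8_t> memory = {0x01, 0x05, 0x02, 0x03, 0xFF};

        uint8_t a  = 0;   // accumulator register
        size_t  pc = 0;   // program counter

        for (;;) {
            uint8_t instruction = memory[pc++];    // fetch
            switch (instruction) {                 // decode and execute
                case 0x01: a  = memory[pc++]; break;               // LOAD immediate
                case 0x02: a += memory[pc++]; break;               // ADD immediate
                case 0xFF: std::printf("A = %d\n", a); return 0;   // HALT
            }
        }
    }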
Re: "Remember that data takes the form of tiny elements that are either electrically charged (1) or not electrically charged (0)," Ah, the 1's and 0's. But I didn't know them as being electrically charged or uncharged. I thought they were about different directions. This is the best explanation I have found on 1's an 0's "They're not really 1's and 0's. A computer's hard drive is made up of metal platters divided into zones whose molecules have their magnetic fields somewhat aligned in 1 of 2 possible directions." Maybe someone could expand on that? Thanks. 12.130.157.65 (talk) 11:29, 4 November 2017 (UTC).[reply]
If you do mean assembly language and not machine code, then a program called an assembler is required to translate from the assembly language to machine code, which can then be executed by the CPU. The assembler must be installed like other programs. The translation is one-to-one, not like higher-level languages where the translator (called a compiler) must figure out how to put machine code instructions together to get the wanted result. PrimeHunter (talk) 15:54, 3 November 2017 (UTC)[reply]
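A small illustration of that difference: one mnemonic always assembles to the same single machine instruction, whereas a compiler may turn one line of a high-level language into any number of instructions. (The bytes quoted in the comment are the real x86 encoding of that MOV; the C++ function is just an arbitrary example of a high-level line.)

    // One mnemonic <-> one machine instruction:
    //     mov al, 61h        ; always assembles to the same two bytes, B0 61
    //
    // One high-level statement -> however many instructions the compiler chooses:
    int sum(int a, int b) {
        return a + b;   // might become a single ADD plus register moves, or be inlined away entirely
    }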
Okay, so a compiler translates high-level language to assembly, and then an assembler translates assembly to machine code. What's an example of a typical assembler for Windows? And is the machine code the binary 1's and 0's? Or what's the last step that translates everything to 1's and 0's? Thanks. 12.130.157.65 (talk) 11:35, 4 November 2017 (UTC).[reply]


Assembly language is little more than a human-understandable representation of machine code. As such, most compilers create the machine code without going through assembly language (although they can be told to also generate the assembly language). Microsoft Visual Studio is normally used on MS-Windows to compile programs from the source code.
Apart from a few specialised areas, no-one needs to write assembly for modern PCs. LongHairedFop (talk) 12:32, 4 November 2017 (UTC)[reply]
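For example, most toolchains can be told to stop at the assembly stage: with GCC or Clang, "g++ -S add.cpp" writes an assembly listing (add.s) instead of an object file, and Microsoft's cl compiler (the one behind Visual Studio) does much the same with "cl /FA add.cpp", producing add.asm. A trivial file to try it on:

    // add.cpp - a one-line function whose generated assembly is easy to read.
    int add(int a, int b) {
        return a + b;
    }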


Unfortunately, we don't seem to have a good article on how simple (8 or 16 bit) CPUs work. LongHairedFop (talk) 12:32, 4 November 2017 (UTC)[reply]

There are several assemblers that will run under Windows. Here's a Stack Overflow discussion on that: https://stackoverflow.com/questions/19349246/what-do-i-need-in-order-to-write-assembly-on-windows-7

As for the chain of events: every processor has a list of instructions, which are hard-coded into the chip and represent everything it can ever do. These are typically very simple operations, such as adding two numbers or jumping from one part of the program to another. Each instruction has an opcode, which is a binary sequence identifying the instruction. For example, in the 80x86 architecture (which is what Windows runs on), the instruction ADC (add with carry) has the opcode 00010000 (decimal 16, hexadecimal 0x10). If you look at a file that has been assembled, you will see these opcodes, along with whatever data the program needs to run, and whatever header information the operating system needs to execute the program. Technically, you could type in the opcode bytes by hand (with a hex editor, since Notepad would save them as text): if you did that correctly, and you added the data and header information, your program would run. The assembler lets you type in ADC instead of 00010000, because it's much easier to work with these mnemonics. Most assemblers also let you structure your code into modules, and define macros for commonly-used tasks. At the end of the day, you don't really *need* any of this, but it's impractical to write any non-trivial program without it. OldTimeNESter (talk) 15:15, 4 November 2017 (UTC)[reply]
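As a hedged sketch of the mnemonic-to-opcode lookup described above: the two encodings below are real x86 ones (B0 61 for MOV AL, 61h and 10 D8 for ADC AL, BL), but everything else a practical assembler does (operand parsing, labels, macros, object files) is left out.

    // Toy "assembler": a fixed table mapping a couple of mnemonics to their machine-code bytes.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, std::vector<uint8_t>> table = {
            {"mov al, 61h", {0xB0, 0x61}},   // B0 = MOV AL, imm8; 61 = the immediate value
            {"adc al, bl",  {0x10, 0xD8}},   // 10 = ADC r/m8, r8; D8 = ModRM byte selecting AL, BL
        };

        std::vector<std::string> program = {"mov al, 61h", "adc al, bl"};
        for (const std::string& line : program) {
            std::printf("%-12s ->", line.c_str());
            for (uint8_t byte : table[line])
                std::printf(" %02X", static_cast<unsigned>(byte));
            std::printf("\n");
        }
    }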

Here is a really good description of programming a computer on the bare hardware (this is the Z80 architecture, but the idea is the same): http://lateblt.tripod.com/z80proj1.htm The programmer is actually entering raw machine code (i.e. the opcodes) into the EEPROM, rather than using an assembler. However, if he instead wrote the program using the mnemonics (e.g. ADC), then used a Z80 assembler to convert them into opcodes, he would be doing what I think you are trying to do. OldTimeNESter (talk) 15:15, 4 November 2017 (UTC)[reply]
(ec) Quote: "And, is the machine code the binary 1's and 0's?" Yes. If you had read the assembly language article, you would have seen that the assembler creates machine code; it gives the example 10110000 01100001, which moves the data value 01100001 into register AL. 01100001 is the hex value 61h, decimal value 97, and the letter "a". In old-time assembler writing, you could write this in any of these ways:
  • mov al, 01100001xb
  • mov al, 61h
  • mov al, 97
  • mov al, "a"
The assembler would do the rest to produce the 16-bit instruction (the 8-bit opcode plus the 8-bit operand). You might like to check out A86 (software), an assembler by Eric Isaacson. I have used A86 extensively and still run it on a 1998 PC-DOS machine as a hobby. Akld guy (talk) 15:18, 4 November 2017 (UTC)[reply]
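As a quick aside, the reason all four spellings assemble to the same bytes is that they are four notations for the same number, 97. A compile-time check in C++ (binary literals need C++14) makes that explicit:

    // Four notations, one value:
    static_assert(0b01100001 == 0x61, "binary and hex forms agree");
    static_assert(0x61 == 97,         "hex and decimal forms agree");
    static_assert('a' == 97,          "ASCII 'a' is 97");

    int main() {}   // nothing to run; the checks happen at compile time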