How to Make a Sandwich

Eli Huebner
4 min read · Mar 26, 2021

There is a team-building game that everyone who works with children has done. In this game, one person is designated the “sandwich maker” and is given a loaf of bread (sliced if the leaders of the game are nice, unsliced if not), a knife or two, and the fixings for some sort of sandwich, traditionally peanut butter and jelly or an allergy-friendly version. The sandwich maker pretends they have no idea how to make a sandwich, and it is the job of the rest of the group to instruct them. There are two wrinkles: the “instructors” cannot see the sandwich being made, and the sandwich maker must follow all instructions 100% literally, without any assumptions. For example, if the instructors start by saying to put peanut butter on the bread, the sandwich maker might take the jar of peanut butter and place it on top of the loaf. Shenanigans ensue, and everyone learns a lesson about being mindful and giving clear instructions.

You might be asking yourself why this game has a place in a blog about tech. As a programmer, you are telling a computer how to do something, and the computer is “making a sandwich” based on your instructions. A computer cannot think for itself (yet); it can only do what the programmer tells it, so we as programmers need to be mindful about what instructions we give and how we give them. Code is, of course, the method by which we communicate our needs to the computer, and many languages exist to do just that.

Programming languages are described as having a “level”. A low-level language has little abstraction; it is a simple interface through which the programmer gives instructions to the hardware of the machine. At their core, these languages tell the machine to access points in memory, store data there, and flip switches on and off (i.e. work with data in binary). High-level languages contain more abstraction: hidden code behind the scenes automates certain tasks and gives the computer room to interpret what you mean.

Using a low-level language is like telling someone how to make a sandwich in the game, while a high-level language is like asking a friend. We can see the difference in how two languages, C++ and Ruby, print to the console.

In C++, the line of code that prints to the console looks like:

std::cout << "Hello, World!\n";

Let’s unpack this line. std::cout refers to the standard output stream, and the << operator sends our value to it. The catch is that cout is not a built-in keyword: it lives in the standard namespace (std), which is a library of pre-written functionality, and the program has to be explicitly told where to find any function you use but don’t define yourself. We then wrap the desired output in double quotation marks to tell the computer it is a string. The striking part of the line, and the one that may look the most foreign, is \n at the end of the string. Remember how the computer takes everything literally? Without being told otherwise, it will keep outputting on the same line in the terminal. \n tells the computer explicitly to move to the next line. Finally, the semicolon tells the computer we are done writing this statement, and it can execute the next instruction.

Compare this with the same function in Ruby:

puts "Hello, World!"

And that’s it. puts is a command built in to Ruby that tells the computer to output the following value to the terminal and add a new line after it. If you don’t want a new line, use print instead of puts. The language has these commands fully built in, with no need for an additional library, and it can intuit from the command you use whether or not to add a new line. Easy. Semicolons are easy to miss, so Ruby makes them optional; it just knows, or can reasonably guess, when you are done and want it to execute something new.

So, when would you want to use a high- or low-level language? For a beginner, high-level languages are much more forgiving. We don’t want to carefully consider every little thing we write, we want it to just work (see the popularity of declarative systems like React). The issue comes in when we want more control over what happens under the hood. What if we ask our friend to make a PB&J sandwich, and they do, but the peanut butter and jelly are on the wrong slices? Or in the wrong proportion? Or we wanted the crust removed first, or the bread cut into triangles, not squares? Sure, our sandwich/code works and it took little effort on our part, but it isn’t right.

With a low-level language, we have more granular control over what happens. In the case of C++, we can save time by telling the computer exactly where to find the definition of cout. We can be very clear about where the line ends, and remove any ambiguity on the computer’s part about what we want. If our goal is an incredibly fast and optimized program, that is ideal. In a more complicated program, we can explicitly manage memory to save space and shave off runtime, or take explicit control over how we iterate through an array with a for loop. High-level languages might allow this kind of control, but low-level ones require it.

There is no “best” level of language to use. Different tasks require different solutions, and it is important to choose the right language for the job. For a beginner, it can be good to get exposure to both levels, so they are ready for whatever the job may throw at them.


Eli Huebner

I taught high school history for 4 years, before pivoting to software development in search of something more creative.