Posts

Building a Better Bitmap (font system)

Intro

There is a common need in the computing world to display text on dot-addressable displays. As with everything else, there are many ways to accomplish the same task. For computers with sufficient memory and CPU power, a popular solution has been to use vector graphics fonts such as the popular system called TrueType. This system uses a bit of math (Bezier curves) to define scalable characters. It can also include kerning information. Another challenge is how to represent the text - e.g. ASCII or UTF-8. TrueType supports the Unicode character set, which is composed of thousands of possible symbols. TrueType font files are usually in the range of 30-200K bytes in size. The data size and math don't represent a big challenge for mobile or desktop-class systems, but how can you display nice-looking fonts on a microcontroller with only 1K of RAM?

A Clever Workaround

Anyone who works with computers long enough will be familiar with the workaround - a way of getting something done i...

How much power does gzip save on IoT web access?

Intro

I've recently been working on an optimized library for deflate/zip/gzip decompression. I released it to GitHub and the Arduino Library Manager as zlib_turbo. With zlib_turbo, I've been exploring HTTP GET requests on the ESP32 and enabling gzip compression to reduce the payload size. I assumed that asking the server to compress the content and send it as gzip would reduce the connection time, so I decided to test how much time and energy this could save. When using the ESP32, or other network-capable MCUs, how much power can we save by requesting compressed content?

What is gzip compression?

Before getting into the project details, a quick overview of gzip is a useful prerequisite. If you're already familiar with it, you can skip to the next section. gzip is an ancient (relative to the age of the internet) standard based on the deflate compression algorithm. gzip is a file container used to store a single 'deflated' file. There are simple GUI and command line tool...

Displaying Unicode with MCUs

Intro

I grew up in the era of 8-bit computers (the late 1970s). At that time, basically anything done on a computer was written in English, and text was encoded as 7-bit ASCII characters stored in 8-bit bytes. It wasn't until a few years later that IBM and Microsoft added support for accented characters to be more inclusive of European languages. The initial solution was to shoehorn a bunch of characters and symbols into the "unused space" of values 128 to 255. This became known as Extended ASCII. There were multiple de facto standards, but Microsoft's seemed to win; they called it codepage 1252. This stuck for a while until the various countries got together to create "one character set to rule them all" and called it Unicode. In this new world, the old 7-bit ASCII set was slotted into the first 128 spots, followed by the accented characters and symbols, with Asian and Middle Eastern characters after that. There are literally thousands of symbol...

Use your ESP32 as a remote web cam viewer

Intro

The ESP32 (there are now quite a few varieties) is an incredibly versatile microcontroller. Its main value proposition has always been that it's a low-cost MCU with WiFi capability - something that was relatively unique when it was first released. Its processing speed and internal RAM size prevent it from doing tasks that Linux machines like the Raspberry Pi series can do, but there is some overlap where the ESP32 can still surprise you with its abilities. I've been working on projects which use a variety of ESP32s as video and animation playback devices. I recently released an optimized MPEG-1 player ( https://github.com/bitbank2/pl_mpeg ) and was looking at other applications for video playback. Until now, I've seen multiple projects which use my JPEGDEC jpeg decoder library to play motion-JPEG videos. I was curious to see if anyone had published a project to play motion-JPEG streams from public IP camera URLs. I didn't find any, so I thought ...

Code Optimization Lesson - Simplification

I've been wanting to create a series of lessons on code optimization, and something I encountered yesterday seemed like a good starting point. There are many ways to get the same results in software; not all are efficient. Optimizing code is a skill that any programmer can learn; it just takes a little extra experience and patience. In this lesson I'll walk you through the thought process of how I analyze code, followed by the steps to improve it. I will attempt to show you how code looks "through my eyes". We're going to look at part of the font rendering code within LVGL. By optimizing this code, I'm not intending this lesson to be a criticism of that project; it's just a good example of code to analyze. Here's the code in question: ( LVGL GitHub link ) The purpose of this type of optimization is to improve the C code for all target systems, not to slice 1 clock cycle off of register reads. A 'normal' C programmer will look at this code...

A simulator to boost embedded software productivity

Intro

This is an idea I've had for quite a while, and I have created limited-scope versions of it for a few customers. Developing software (any software) is challenging and often frustrating. Developing software for embedded devices is usually even more challenging and frustrating. The extra time spent waiting for edit/compile/run cycles, and the more limited tooling, hurt productivity. The simulators I've created allow you to work on Arduino projects as native macOS/iOS projects. (Photo of an old iPhone running my simulation of a Guition ESP32-2432S028R)

What's the benefit?

Using the Apple tools, specifically Xcode and Instruments, I'm able to run/debug/profile my code much more quickly and with a much friendlier set of tools. For my own work, the productivity boost has been tremendous. My Arduino imaging and video codecs ( JPEGDEC, JPEGENC, PNGDEC, PNGENC, AnimatedGIF, pl_mpeg, TIFF_G4, G4_ENC) were all written, debugged and profiled on the Mac as a native macOS app. It...

How to speed up your project with DMA

Intro

DMA (direct memory access) is a topic that's similar to pointers in C - it's not easy for everyone to visualize how it works. My goal for this blog post is to explain, in the simplest way possible, how it works and why your project can benefit from proper use of it.

What's it all about?

DMA is a useful feature for a CPU/MCU to have because it means that data can move around without the CPU (your code) having to do the work. In other words, DMA can move a block of data memory-to-memory, peripheral-to-memory or memory-to-peripheral independently of the CPU. For people used to programming multi-core CPUs with a multi-threaded operating system, that may not sound very special. For those of us familiar with programming slow, low-power, single-threaded embedded processors, it can make quite a difference. Here's a practical example - sending data to an SPI device (e.g. a small LCD display):

Without DMA
<prepare data1 - 10ms > <send data1 to SPI - 10ms...