
RTL-SDR Radio Adventures

The RTL-SDR is a USB TV dongle designed to pick up digital TV. Using a feature on its chip called I/Q sampling, it is possible to tune to pretty much any frequency between 22 MHz and 1200 MHz. With additional hardware such as an upconverter, tuning to frequencies below 22 MHz is also possible.
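For anyone curious what the dongle actually hands you, here is a minimal sketch of pulling raw I/Q samples with the pyrtlsdr Python bindings; the frequency and sample rate are just example values:

```python
# A minimal sketch of reading raw I/Q samples, assuming the pyrtlsdr
# package is installed and an RTL-SDR dongle is plugged in.
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6   # samples per second
sdr.center_freq = 100e6     # tune to 100 MHz (FM broadcast band)
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)  # complex I/Q samples as a NumPy array
print(samples[:5])
sdr.close()
```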


During my attempts, I have used everything from a paperclip, to a long wire, to dipole antennas. I dealt with weird interference patterns that made tuning into weaker signals hard.

Interference around 1090 MHz while looking for aircraft transponders

For aircraft transponders, I used a dipole cut to the correct length for the frequency. I didn't do any calculations, as the majority of people have found that two 7.5 cm elements in a dipole configuration give good results.
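For reference, the math being skipped here is simple: each element of a half-wave dipole is roughly a quarter wavelength long. A quick sketch:

```python
# Quarter-wave element length for a dipole. At 1090 MHz (ADS-B) this gives
# about 6.9 cm; the popular 7.5 cm figure is slightly longer but still
# performs well in practice.
C = 299_792_458  # speed of light, m/s

def quarter_wave_cm(freq_hz: float) -> float:
    """Length of one dipole element (a quarter wavelength) in centimeters."""
    return C / freq_hz / 4 * 100

print(f"{quarter_wave_cm(1090e6):.1f} cm")  # ~6.9 cm
```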

This write-up is in progress. Topics still to come:

Passively picking up aircraft ADS-B transponder signals

Listening to ATC/air traffic

Android support for the SDR driver
Data collected from an 11.5 cm vertical dipole outside my house in San Diego.

Created using modified code from https://github.com/JorgeGT/PlotRTL1090.

Updated: August 20, 2019 — 7:58 pm

Neural Network Zoom Experiments

A visual representation of the input to a trained TensorFlow neural network.


Neural Networks as a tool to both zoom and enhance images

Here I will document my experiences using convolutional neural networks to supersample and enhance images. A properly trained neural network should be able to approximate a zoomed picture. It won't be able to show information that was not visible before, but it can avoid certain artifacts that other transformations create.

Neural networks use an input layer consisting of the image information, a series of hidden layers of various dimensions, and an output layer that is the corrected image. Each layer consists of matrices of weights and biases; the layer multiplies its input by the weights, adds the biases, and passes the result through an activation function such as ReLU or sigmoid to simulate a neuron firing.

Training this network of weights is done via backpropagation: the error between the actual output and the expected output is calculated and used to adjust the weights to minimize that error.
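As a rough illustration (and not the code behind any of the tools tested below), here is a minimal NumPy sketch of a single dense layer and a few gradient-descent updates; the dimensions and data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(32, 100))  # weights: 100 inputs -> 32 neurons
b = np.zeros(32)                           # biases

x = rng.random(100)      # input layer: a flattened image patch
target = rng.random(32)  # the expected output

lr = 0.01
for _ in range(100):
    h = W @ x + b                # weighted sum of inputs plus bias
    y = np.maximum(0, h)         # ReLU activation: the "neuron firing"
    err = y - target             # error between actual and expected output
    grad = err * (h > 0)         # backpropagate the error through ReLU
    W -= lr * np.outer(grad, x)  # adjust weights to minimize the error
    b -= lr * grad
```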

For our first example, I will take a very small circle with a picture of a Linux penguin inside. Our starting picture will be 100×100, and we are going to upscale it to 1000×1000 via both neural networks and traditional transformation mechanisms.

This is the original file that will be resized/supersampled (100×100).

Different methods of resizing

I will attempt multiple methods of both neural and traditional resizing and compare their results as objectively as possible.

Traditional Resizing (linear filtering)

“Traditionally” resized using Paint


As a control, I will resize using paint.net with its default settings. This uses interpolation to fill the new pixels with a blended gradient of their neighbors, allowing for a somewhat smooth resize.
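For anyone who wants to reproduce the control without paint.net, here is a minimal Pillow sketch of the same kind of traditional upscale; the filename is hypothetical, and paint.net's exact filter may differ:

```python
# Traditional resizing with Pillow (9.1+ for Image.Resampling).
from PIL import Image

img = Image.open("tux_100.png")  # the 100x100 source image

# Nearest-neighbor simply repeats pixels: blocky, but no invented colors.
img.resize((1000, 1000), Image.Resampling.NEAREST).save("tux_nearest.png")

# Bicubic fills the new pixels with a blended gradient of their neighbors,
# comparable to what a default GUI resize does: smoother, but blurrier.
img.resize((1000, 1000), Image.Resampling.BICUBIC).save("tux_bicubic.png")
```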


Attempts using Neural Networks

Attempt 1 – alexjc/Neural-Enhance

My first attempt at resizing this photograph was to use the Python tool Neural-Enhance on Windows 10. It should have been simple, but it turned into a disaster. I was never able to figure out how to mount the filesystem correctly. All the documentation I could find on this issue was for other operating systems and specifically not applicable to Windows, due to the way the Unix filesystem is structured. I tried forward slashes, backslashes, full paths, relative paths, and mounting the directory first, all to no avail. The tool is able to load the model, but I was never able to load the image file I wanted to enhance.

One of the many similar file-not-found errors I was receiving.

I have had this working before on Linux, and I plan on looking into it again in the future. When I do, I will update this article.

Attempt 1 Results: Failed to enhance images. Will look into it in the future.


Attempt 2 – Waifu2x Tool (×2, applied four times)

I had to run it four times in a row to achieve the desired result, as it only supports supersampling by a factor of 2 per pass. I initially wanted to steer clear of tools like this, but it's always nice to have all the available information.

Waifu2x bills itself as “Single-Image Super-Resolution for Anime-Style Art using Deep Convolutional Neural Networks,” and it supports photos as well.

It seems to work on a similar neural network principle as Neural-Enhance. It would not surprise me if they share the same (TensorFlow?) roots.

Original Size

Resized using Neural Networks via Waifu2x (4x)


Attempt 2 Results: Promising

Attempt 2 gave promising results. There are noticeable artifacts around the lower fins of the penguin, in the form of yellow color bleed, an issue absent from normal resizing tools. Besides that, the image is clear, the lines are concise, and there is no noise, much unlike its traditional counterpart.


Attempt 3 – A paid service (it’s not good): https://letsenhance.io/

I signed up for their trial, uploaded my 100×100 image, and clicked the “magic” button. Here was the result.

Attempt 3 Results: Very Poor

Very disappointing, to say the least; I see no reason to use this over the free alternatives. Note the loss of the pattern between the checkers and the general noise in the image. I got much better results from the Waifu2x tool above, for free and with less effort. Perhaps this is some sort of sophisticated proof of concept that flies right over my head, or maybe people really do buy into the marketing hype.

Attempt 4 – TensorZoom (Android)

This one was certainly the most frustrating to test. There are builds that can run on Windows, but I was unable to get them working; I didn't document the process, as nothing useful was learned. Instead, I used my first-generation Google Pixel phone to magnify the photo. Unlike the other examples, there is no limit on TensorZoom's zoom setting.

TensorZoom with Android


Attempt 4 Results: Outstanding


Neural networks seem to do an alright job of upscaling small images with minimal artifacts. Depending on the method used, the resized version may be much clearer than its traditionally scaled counterpart. The areas the networks struggled with were the transitions between the penguin and the background, which suggests that neural network image scaling may be more prone to artifacts when an image is busier (such as a crowd of people).


This is an article in progress. Neural networks stand as a promising way to resize images, and they are already capable of outperforming traditional resizing in some cases.

Updated: September 22, 2018 — 11:15 am

3D Printing II – Adding an external SoC and modeling a basic part

The issue of print reliability

One of the issues that becomes apparent early on with large prints is the 3D printer's reliance on the computer operating it. When prints can take 10+ hours, the odds of a computer crash wasting both time and physical resources become non-negligible. The solution to this problem is an external SoC such as a Raspberry Pi: it can control the print remotely through a web console, stays always on, and can start the printer remotely.

When choosing an SoC, it is necessary to understand the functions it will be performing. Ours needs to be able to serve a webpage and stream movement commands to the printer's stepper motors quickly. This is not computationally expensive, so the logical choice is the Raspberry Pi Zero. Although it lacks the specifications of more powerful SoCs, its small power requirements and form factor make it perfect for this relatively simple task.

OctoPrint

Raspberry Pi Zero running OctoPi Linux, controlling the printer.

OctoPi is a prebuilt Pi image that contains Raspbian and OctoPrint (a web control framework for 3D printers). It can be found for download here: https://octoprint.org/download/

The installation process is fairly self-explanatory and I won’t be covering it here. Go take a look at the site for yourself.
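One detail worth knowing: almost everything the web console does is also exposed over OctoPrint's REST API. Here is a minimal sketch of uploading a sliced file and starting the print from another machine; the hostname, API key, and filename are placeholders for your own setup:

```python
# Upload a GCODE file to OctoPrint and start printing it immediately.
import requests

HOST = "http://octopi.local"        # assumed hostname of the Pi
API_KEY = "YOUR_OCTOPRINT_API_KEY"  # generated in OctoPrint's settings

with open("benchy.gcode", "rb") as f:
    r = requests.post(
        f"{HOST}/api/files/local",
        headers={"X-Api-Key": API_KEY},
        files={"file": f},
        data={"print": "true"},  # select and start the print after upload
    )
r.raise_for_status()
```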

This article is in progress, please check back for more.

Replacement grill handle


OctoPi printing a pig model.

Updated: September 10, 2018 — 11:45 pm

3D Printing Part I – The Basics

My Printer: Monoprice Maker Select V2

I am currently using a Monoprice Maker Select V2. I have removed the default build surface and added borosilicate glass with silicone thermal pads on the heated aluminum gantry. I plan on replacing the Y-axis gantry as well; thicker blocks of milled aluminum are available online.

Learning about 3D Printing

Note: This is not meant to be a full guide; it is just my process of learning about 3D printing. You will need additional resources and should do ample research before attempting 3D printing. The technology is unfortunately still in the early-adopter phase.

3D printing is an awesome way to prototype and make some pretty cool things quickly. It has the ability to change the way we think about creating objects, much as the home printer did in modern households. It won't replace the home printer; rather, it will work alongside it.

Basics of 3D Printing

3D printing is accomplished by depositing layers of material in increments. Each layer is drawn in the X/Y plane, and the sum of all the layers stacked along Z is the final 3D object. Below are the mechanics of a basic 3D printer based on the first model of the Prusa i3.

Several important steps are required to 3D print a model. First, the model must be loaded into a slicer. The slicer takes the model and computes the fastest way for the nozzle to deposit the material, along with any support structure if needed. It stores the result as a GCODE file, which contains the basic movement instructions for the printer's microcontroller.

XYZ Axis on a Monoprice Maker Select V2 with a borosilicate glass plate

Printing Mechanisms
XYZ Mechanisms

Usually there is a moving base that controls the Y axis, the X axis is controlled by a sliding carriage on the gantry, and the Z axis moves the whole X-axis gantry upwards. This means that X and Z form a plane, while Y (the baseplate) moves the part being printed onto it.

The X axis uses a belt-and-slider mechanism to move along its gantry. It must be manually leveled and calibrated against the Z-axis threaded rods.

The Y axis uses a belt and two rods on a slider; it carries the baseplate and its heating components, if present.

The Z axis uses twin stepper motors connected to two threaded rods to give it precise lift. It carries both the X axis and the nozzle.

Extrusion nozzle

The nozzle, mounted on the Z-axis gantry, is responsible for depositing the plastic material. It does this using a small gear assembly connected to a stepper motor, which allows a precise amount of material to be deposited, known as the extrusion rate.

Extrusion rate depends on the speed of the nozzle relative to the baseplate, and the filament used often makes a significant difference to the required nozzle temperature. Some filaments use custom GCODE (low-level movement code) to change the temperature in small calibrated amounts; chemicals inside the filament react to the change by turning a desired, usually pre-calibrated color. This is most often utilized by wood filaments to emulate the natural rings that form in wood as it weathers the seasons.
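As a hypothetical sketch of how those calibrated temperature changes could be injected into an already-sliced file: M104 sets a new hotend temperature, and ";LAYER:" is the layer-change comment Cura writes into its output. None of this comes from a real filament profile; it is just an illustration of the idea:

```python
# Inject small hotend-temperature steps at each layer change to vary the
# color of a reactive (e.g. wood) filament. Temperatures are made up.
import itertools

def inject_temp_steps(lines, temps=(190, 200, 210, 220)):
    """Cycle the hotend temperature every time a new layer starts."""
    cycle = itertools.cycle(temps)
    for line in lines:
        if line.startswith(";LAYER:"):      # Cura's layer-change marker
            yield f"M104 S{next(cycle)}\n"  # set a new target temperature
        yield line

with open("model.gcode") as src, open("model_wood.gcode", "w") as dst:
    dst.writelines(inject_temp_steps(src))
```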

Motor Control

The main control unit and Power Supply

Motor control is attained through a central box combining both a microcontroller and a DC power supply. This combination allows for a compact, easy-to-manufacture component, but it is also the one flaw of this specific printer (the MP Maker Select V2), because inside lies a large fire hazard. The connector for the heated baseplate is not rated to draw the amount of current required, and the overload has caused connectors to smolder and melt while numerous people watched smoke pour out of their units. See This for more and see This for the MOSFET correction, which resolves the issue.


The motor control box generally supports two methods of connection: SD card and direct control. When interfaced directly, the printer shows up to the computer as an Arduino; the computer then has control over the microcontroller and is able to run whatever it likes. This is what allows a Raspberry Pi to directly control the 3D printer's movements. All of this still requires the model to be sliced into a GCODE file by a slicer first.
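To make the "shows up as an Arduino" point concrete, here is a minimal sketch of that direct-control path using the pyserial package. The port name, baud rate, and GCODE lines are assumptions for a Marlin-style printer like this one:

```python
# Stream a few GCODE commands straight to the printer over USB serial.
import time
import serial

ser = serial.Serial("COM3", 115200, timeout=10)
time.sleep(2)  # the board resets when the port opens; give it a moment

def send(gcode: str) -> None:
    """Send one GCODE command and wait for the firmware's 'ok' reply."""
    ser.write((gcode + "\n").encode())
    while True:
        reply = ser.readline()
        if reply.startswith(b"ok") or not reply:  # 'ok' received, or timeout
            break

send("G28")               # home all axes
send("M104 S200")         # start heating the hotend to 200 C
send("G1 X50 Y50 F3000")  # move the nozzle toward the middle of the bed
```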


Bed Leveling, Calibration, Settings

This is by far the most tedious part of dealing with a cheap 3D printer. Many newer, more expensive printers auto-level and have calibration options that negate this issue. It is normal for this process to take multiple hours if you have a bent baseplate or are just unlucky: it demands precision from hardware that offers very little of it.

In this post, I don't want to delve into those specifics, as they are just busywork and practice. Here is a VERY helpful video on leveling your baseplate and ensuring the X axis is ready to go. He also shows some nice starter print mods, such as the thumbscrews.

Slicing a model into GCODE

Ultimaker Cura with the Prusa i3 configuration

In order to 3D print a model, it must first be sliced into GCODE. This GCODE contains the direct motor movement instructions for the microcontroller to interpret. That is why it is critical you only use GCODE files configured for your printer; otherwise, your printer may attempt to operate outside of its parameters.
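As a hypothetical illustration of that risk, a few lines of Python can sanity-check a GCODE file against your bed size before you print it; the 200 x 200 mm limits below assume a Maker Select V2-sized bed:

```python
# Flag any G0/G1 move in a GCODE file that falls outside the bed.
import re

BED_X, BED_Y = 200, 200  # bed size in mm; adjust for your printer

def out_of_bounds(path):
    coord = re.compile(r"([XY])(-?\d+\.?\d*)")
    with open(path) as f:
        for n, line in enumerate(f, 1):
            if not re.match(r"G[01]\s", line):
                continue  # only rapid/linear moves carry coordinates
            for axis, value in coord.findall(line.split(";")[0]):
                limit = BED_X if axis == "X" else BED_Y
                if not 0 <= float(value) <= limit:
                    yield n, line.strip()

for lineno, cmd in out_of_bounds("model.gcode"):
    print(f"line {lineno}: {cmd}")
```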

The slicer I use is called Ultimaker Cura. It has most of the features of the paid (and expensive) competitors. There is a default configuration in Cura called “Prusa i3”, which works perfectly for the MP Maker Select V2 I own, as the printer is a clone of the i3. Most clones will work fine with the same settings, though you may need to run them a bit slower.

Ultimaker Cura and most other slicer programs use an STL file as input. This is a standard triangle-mesh format, and most modeling software can export directly to it. Conversion solutions are readily available for everything else.
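To see that an STL really is nothing more than a bag of triangles, here is a minimal sketch using the numpy-stl package; the filename is hypothetical:

```python
# Inspect an STL mesh: it is just a list of triangles in 3D space.
from stl import mesh

m = mesh.Mesh.from_file("part.stl")
print(len(m.vectors), "triangles")            # each one is a 3x3 vertex array
print("bounding box:", m.min_, "to", m.max_)  # the part's overall dimensions
```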


Uploading a GCODE to the printer

Once a GCODE file is generated, it must be uploaded to the microcontroller on the 3D printer.

Using an SD card – Most printers will let you select the file from the onboard interface for printing.

Using a USB interface – Most printers will show up as an Arduino. Open the slicer, select the correct interface, and press ‘Print’; the 3D printer will begin printing. During this process, the computer is directly controlling the movements of the motors and must remain on with the slicer open.

My first print. Whoops. I was having issues leveling my X axis.


That should do it! That is all a basic print needs. In a future article, I will outline the process of adding a small SoC such as a Raspberry Pi Zero to control the printer with a web interface. This will remove the need to shuttle GCODE to the printer over SD card or a tethered USB connection.