The whole Truttle1 channel
is full of videos about esoteric programming languages—but I also find
its homemade computer animations to be terribly charming. There seem to
be just a few productive kids behind these videos.
I ran across this site while out link hunting. Since I’m not planning to include
software-related links in my directory (business and software already have
plenty of directories), I will post it here. There is a discussion of this site on a
blog called esoteric.codes,
which has been a second fascinating discovery!
Amid all this recent discussion about link directories, one of the biggest
innovations was sitting right under my nose: the awesome-style directory, which
I was reminded of by the Dat Project’s Awesome list.
An “awesome” list is—well, it isn’t described very well on the about page,
which simply says: only awesome is awesome. I think the description here
is a bit better:
Curated lists of awesome links around a specific topic.
The “awesome” part to me: these independently-managed directories are then brought together
into a single, larger directory—both at the master repo
and at stylized versions of it, such as AwesomeSearch.
In a way, there’s nothing more to say. You create a list of links. Make sure they
are all awesome. Organize them under subtopics. And, for extra credit, write a
sentence about each one.
Generally, awesome lists are hosted on GitHub. They are plain Markdown READMEs.
They use h2 and h3 headers for topics; ul tags for the link lists. They are
unstyled, reminiscent of a wiki.
This plain presentation is possibly to its benefit—you don’t stare at the
directory, you move through it. It’s a conduit, designed to take you out to
the links themselves.
Hierarchical But Flat in Display
Awesome lists do not use tags; they are hierarchical. But they never nest too deeply.
(Take the Testing Frameworks
topic, with subtopics like Frameworks and Coverage.)
Sometimes the actual ul list of links will go down three or four levels.
But they’ve solved one of the major problems with hierarchical directories: needing
to click too much to get down through the levels. The entire list is displayed on
a single page. This is great.
Curation Not Collection
The emphasis on “awesome” implies that this is not a complete directory of
the world’s links—just a list of those the editor finds value in. It also
means that, in defense of each link, there’s usually a bit of explanatory text
for that link. I think this is great too!!
Wiki-Style But Moderated
Most awesome lists use GitHub because it allows people to submit
links to the directory without having direct access to modify it. To submit, you
make a copy of the directory, make your changes, then send back a pull request.
(One list’s history showed 224 pull requests approved for inclusion this way.)
So this is starting to seem like a rebirth of the old “expert” pages (on sites
like About.com). Except that there is no photo or bio of the expert.
As I’ve been browsing these lists, I’m starting to see that there is a wide
range of quality. In fact, one of the worst lists is the master list!! (It’s
also the most difficult list to curate.)
I also think the lack of styling can be a detriment to these lists. Compare
the Static Web Site awesome list
with staticgen.com. The awesome list is definitely
easier to scan. But the rich metadata gathered by the StaticGen site can be very
helpful! Not the Twitter follower count—that is pointless. But it is interesting
to see the popularity, because that can be a very helpful sign of the
community’s robustness around that software.
Anyway, I’m interested to see how these sites survive linkrot. I have a feeling
we’re going to be left with a whole lot of broken awesome lists. But they’ve
been very successful in bringing back small, niche directories. So perhaps we
can expect some further innovations.
Mostly I’m posting this to test emoji in posts to Indieweb.xyz. But I’m also
interested in using hyperdb (or perhaps just discovery-swarm) to further
decentralize the previously mentioned Indieweb.xyz.
It seems that Indieweb is made of these
loosely connected pieces that follow as much of the protocol as they individually
want to. While this is supposed to make it approachable—I mean you don’t
need to adopt any of it to participate—it can be tough to know how much of
it you’re obeying. (The whole thing actually reminds me a lot of HTML itself:
elaborate, idealistic, but hellbent on leaving all that behind in order to be
approachable.)
First time I’ve seen something like this – a simplified Arduino project to simulate the 302 neurons of a certain worm’s brain. Also led me to the Open Connectome Project – can you imagine loading a ROM of a mouse brain onto a device?
Writing shaders has clicked for me today. By shaders, I mean: 3-D shaders, GLSL shaders, OpenGL shaders. They are little programs that draw things. With math. Right on the graphics card.
Being a newb, I can see what tripped me up. The tough thing about shaders is that they aren’t like other programs. They don’t just run once through and there’s your drawing.
Take this circle:
Well, here’s the shader that did that:
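A sketch of it (the varying names v_position and v_color are my stand-ins for whatever the real code calls them):

```glsl
// Fragment shader: paint a blue circle of radius 1.0.
// Assumes the vertex shader hands over the corner position (v_position)
// and a color (v_color) as varyings.
precision mediump float;

varying vec4 v_color;
varying vec4 v_position;

void main() {
  float r = length(v_position.xy);           // distance from the center
  float s = step(1.0, r);                    // 0.0 inside the circle, 1.0 outside
  gl_FragColor = vec4(v_color.rgb, 1.0 - s); // opaque inside, transparent outside
}
```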
How could THAT POSSIBLY draw a cirCLE?!
The Little Blue Fragments
Let’s start with something easy: why is it blue?
This line sets the color of a single pixel. Or—as the shader calls it—a single fragment.
In shaders, colors are represented by a series of four numbers—0.0 to 1.0 for red, green, blue and alpha.
The vector of (0.3, 0.4, 0.7) is this blue color. In hex colors, we’re looking at something like #4D66B3.
Our circle starts its life as a square. I’ve already set up a 200 pixel by 200 pixel canvas. The shader will be painting this entire area.
Oh hey, I know: let’s just set the alpha to 1.0. This means the blue will not be transparent at all—it will be solid.
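The whole fragment shader at this point could be as short as this (a sketch):

```glsl
// Fragment shader: paint every fragment the same solid blue.
precision mediump float;

void main() {
  gl_FragColor = vec4(0.3, 0.4, 0.7, 1.0); // red, green, blue, alpha
}
```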
That paints our whole square:
So I am specifically talking about fragment shaders here.
In this fragment shader, we’re now saying, “Paint this pixel solid blue.” And it’s just doing that for every pixel in the square.
200 x 200 = 40,000.
This shader runs—not once—but 40,000 times!
Now Let’s Zoom Out
I’ve set up a canvas that goes from (-1, -1) to (1, 1)—with (0, 0) in the middle—and I’ve told the fragment shader to paint this area.
Here we’ve set up four vertices—those are the points on the corners—going from -1 to 1 on each side. We’ll set up OpenGL to pass in these four coordinates—and the vertex shader will pass these on to our fragment shader. And we’re drawing the circle inside that box.
This all seems pretty easy, right? 🤷
Explain This One
Ok, this time, we’re going to go with a dead simple fragment shader:
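Something like this (a sketch; v_color is my stand-in name for the varying):

```glsl
// Fragment shader: just paint whatever color the vertex shader handed us.
precision mediump float;

varying vec4 v_color;

void main() {
  gl_FragColor = v_color;
}
```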
We’re going to pass in the color through a variable.
In order to do this, we need to write a separate vertex shader:
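Here is a sketch of it (the attribute name a_position is my own; the real code may differ):

```glsl
// Vertex shader: pass each corner through, along with the color blue.
attribute vec4 a_position; // one of the four corners, e.g. (-1, -1, 0, 1)

varying vec4 v_color;

void main() {
  v_color = vec4(0.3, 0.4, 0.7, 1.0); // blue, for every vertex
  gl_Position = a_position;
}
```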
Not too bad, yeah? We have a variable v_color we’re using to pass the color. And we’re passing the color blue.
The gl_Position variable is passing a corner over—such as (-1, -1) or (1, -1). These were set up in my OpenGL code.
Our vertex shader runs once for every corner; our fragment shader runs for every pixel. Together, they draw the square:
Okie—now let’s make a subtle change. Let’s make it so one corner is blue. The (1, 1) upper right corner will be blue.
And the rest will be green.
Our vertex shader could be written like this:
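A sketch (a_position is a stand-in attribute name, and the exact green value here is my own):

```glsl
// Vertex shader: blue for the (1, 1) corner, green for everything else.
attribute vec4 a_position;

varying vec4 v_color;

void main() {
  if (a_position.xy == vec2(1.0, 1.0)) {
    v_color = vec4(0.3, 0.4, 0.7, 1.0); // blue
  } else {
    v_color = vec4(0.3, 0.7, 0.4, 1.0); // green
  }
  gl_Position = a_position;
}
```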
Ok, good—if the coordinate is (1, 1) then the color will be blue.
And—with that change—we get this:
That makes sense—the upper corner is certainly blue. But how in the world did we get a gradient??
Look at that vertex shader again. Shouldn’t just the (1, 1) coordinate be blue? Why is the (0.99, 0.99) coordinate also blue??
Like I Said: It Varies
How do we pass these coordinates to the fragment shader? We use varying.
The key to all of this is varying.
🙌 PLEASE NOTE: More modern OpenGL fragment shaders (such as those in OpenGL 3) use in and out rather than varying. They STILL vary, though!
Now, normally when you pass stuff around in a program, you use a variable.
You might say a = 1. Ok, so now a is one. And it’ll always be one. Until you change it. So a = 911. Now a is nine-eleven. Nice, you turned an innocent variable into a crisis. And it will stay that way.
But WOAH WOAH WOAH—that is NOT what varying is. When you pass something through varying it is not just a variable. It is now a range.
😳 A range??
A crazy, slidy, slippery range.
I’m sorry, it’s true.
🙄 Don’t apologize, I’m sure there’s a reasonable explanation.
Oh, you’re right—I forgot—it’s awesome!! I apologize for apologizing.
Remember how the vertex shader ran once for each corner of the box? Here it is again, so you can look at it.
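A sketch of it (attribute names and the green value are my stand-ins):

```glsl
// Vertex shader: runs once per corner, coloring (1, 1) blue
// and the other three corners green.
attribute vec4 a_position;

varying vec4 v_color;

void main() {
  if (a_position.xy == vec2(1.0, 1.0)) {
    v_color = vec4(0.3, 0.4, 0.7, 1.0); // blue
  } else {
    v_color = vec4(0.3, 0.7, 0.4, 1.0); // green
  }
  gl_Position = a_position;
}
```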
The vertex shader DOES color just the corners. Like this:
And then when those colors are passed through the varying variable—mind you, an ACTUAL varying variable!—the fragment shader sees a range from green to blue along the top x-axis and the right y-axis.
And because of varying, our little v_color is going to be different for every pixel. It’s going to be a color—a vec4, that is—somewhere between green and blue.
It’s going to go smoothly from green to blue, without your interference.
Why did it paint a solid blue earlier? Because all the corners were the same. So—even though we used varying—all the values in between were the same, too.
Back to our circle. The trick here is: we are going to use—not the varying color—but the varying position to determine if we are inside or outside the circle.
You already understand the color. The vertex shader has given it to us.
What we turn our focus to now are these two built-in functions: step and length.
🙌 SPECIAL NOTE: You can browse a complete list of built-in functions at Shaderific.
The length function gives us the distance of our position from the origin. The origin is the center of our circle. So, in this case, that’s our radius.
Now, remember, we’re dealing with a varying variable here, so r is the distance of the current pixel we’re painting from the center.
The length function is also going to get rid of negative values here. The point (-1, -1) has a distance of 1.414, just like the point (1, 1) does.
The step function is going to take everything above 1.0 and make it exactly 1.0. And everything below 1.0 will be 0.0.
So, after step, the value will be 0.0 if we’re inside the circle. And 1.0 if we’re outside it.
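In code, the two calls might combine like this (a sketch, using v_position and v_color as stand-in varying names):

```glsl
float r = length(v_position.xy);           // distance of this fragment from the center
float s = step(1.0, r);                    // 0.0 inside the circle, 1.0 outside
gl_FragColor = vec4(v_color.rgb, 1.0 - s); // opaque inside, transparent outside
```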
See how the shader looks without the step function.
Now—if you want to play with these shaders (in a WebGL-compatible browser), you can do so here.
Just go to the upper-right corner and switch to Cube, which will give you a 2-D plane.
(You can also View Source on this page—this page contains all of its shaders and renders them inline with the text.)
A Few Things You May Still Wonder About
Like take this line. How does it make the comparison here?
Vector math is built-in with shaders. So you can ask if two vectors are equal using the double-equals.
You can also add, multiply, subtract vectors and so on—just by using the operators.
However, I am comparing a vec4 position with a vec2. To do it, I am using “swizzling”.
This special syntax lets me quickly grab the x, y, z or w parts of a vector (or r, g, b, a, for colors).
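So that comparison might read like this (a sketch; a_position is a hypothetical name for the incoming vec4 corner):

```glsl
// a_position is a vec4; the .xy swizzle grabs just its first two
// components, giving a vec2 that can be compared against vec2(1.0, 1.0).
if (a_position.xy == vec2(1.0, 1.0)) {
  v_color = vec4(0.3, 0.4, 0.7, 1.0); // blue
}
```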
Composing vectors also has a simplified syntax. Do you remember these lines from our final circle shader?
This could be simplified. You can use the vec4 function to compose a vector from multiple vectors.
Here I am passing in a vec3 and a float—which will be assembled, in order, to form a new vec4.
Similarly, I could pass in two vec2 vectors.
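Both forms, sketched out (the values here are just the blue from earlier):

```glsl
vec3 rgb = vec3(0.3, 0.4, 0.7);
gl_FragColor = vec4(rgb, 1.0); // a vec3 plus a float: four parts total

vec2 a = vec2(0.3, 0.4);
vec2 b = vec2(0.7, 1.0);
gl_FragColor = vec4(a, b);     // two vec2s also add up to four parts
```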
As long as your arguments have four parts in total, you can build a vec4 from them. Of course, this is
also true of vec2, vec3 and any other type in the GL shader language.
A neural net/A.I. kind of thing writes a Christmas song. I think I like the pop song even better. “I’d seen the men in his life, who guided me at the beach once more.” The vibe reminds me of a sterile, eyes-don’t-close version of The Shaggs.
The brick phone brought into this century! While the hardware design is definitely impressive, the software looks good, too. To come up with a nice UI for a 96x64 pixel screen is something of an accomplishment.
An incredibly thorough review of JSON specifications and parsers. Fantastic criticism of the RFC, but beyond that: the benchmarking and concise bug hunting here is something every parser project should count themselves lucky to have.
Complete code and build instructions for a glowing pear-shaped night light. The discovery of this cheap, translucent enclosure is a boon. I am definitely going to build this project with the fourth- and fifth-graders.
It seems this information hasn’t been disclosed as widely as it should be: Makey Makey’s version 1.2, produced by JoyLabz, cannot be reprogrammed with the Arduino software. In previous versions, you could customize the firmware – remap the keys, access the AVR chip directly – using an Arduino sketch.
Now, this isn’t necessarily bad: version 1.2 has a very nice way to remap the keys, via this page here. You use alligator clips to connect the up and down arrows of the Makey Makey, as well as the left and right arrows, then plug it into the USB port. The remapping page then communicates with the Makey Makey through keyboard events. (See Communication.js.)
This is all very neat, but it might be nice to see warnings on firmware projects like this one that they only support pre-1.2 versions of the Makey Makey. (I realize the page refers to “Sparkfun’s version” but it might not be clear that there are two Makey Makeys floating about–it wasn’t to me.)
⛺ UPDATE: The text on the chip of the version 1.2 appears to read: PIC18F25K50. That would be this.
Some Notes About Connecting to iPads
Now, how I came upon this problem was while experimenting with connecting the Makey Makey to an iPad. Instructions for doing this with the pre-1.2 Makey Makey are here in the forums–by one of the creators of the MM.
With the 1.2 version, it appears that the power draw is too great. I received an error message to that effect with both an iPad Air and an original iPad Mini.
Obviously a Makey Makey isn’t quite as interesting with an iPad – but I was messing with potentially communicating through a custom app.
Anyway, without being able to recompile the firmware, the iPad seems no longer an option. (The forum post should note this as well, no?)
Interfacing the Sparkfun Makey Makey with Arduino 1.6.7
If you do end up trying to get a pre-1.2 Makey Makey working with the latest Arduino: I ran into many problems just getting the settings right. The GitHub repos for the various Makey Makey firmwares are quite dated.
One of the first problems was getting boards.txt to find my avr compiler. I had this problem both on Linux and Windows. Here’s the boards.txt that finally clicked for me:
I also ended up copying the main Arduino platform.txt straight over.
Debugging this was difficult: arduino-builder was crashing (“panic: invalid memory address”) in create_build_options_map.go. This turned out to be a misspelled “arudino” in boards.txt. I later got null pointer exceptions coming from SerialUploader.java:78 – this was also due to using “arduino:avrdude” instead of just “avrdude” in platform.txt.
I really need to start taking a look at using Ino to work with sketches instead of the Arduino software.
Right now the spotlight is stolen by lovely chips like the ESP8266 and the BCM2835 (the chip powering the new Raspberry Pi Zero). However, personally, I still find myself spending a lot of time with the ATtiny44a. With 14 pins, it’s not as restrictive as the ATtiny85. Yet it’s still just a sliver of a chip. (And I confess to being a sucker for its numbering.)
My current project involves an RF circuit (the nRF24l01+) and an RGB LED. But the LED needed some of the same pins that the RF module needs. Can I use this chip?
The Rise and Fall of PWM
The LED is controlled using PWM – pulse-width modulation – a technique for creating an analog signal from code. PWM creates a wave – a rise and a fall.
This involves a hardware timer – you toggle a few settings in the chip and it begins counting. When the timer crosses a certain threshold, it can cut the voltage. Change the threshold (the OCR) and you change the width of the pulse. So, basically, if I set the OCR higher, I get a higher average voltage. If I set it lower, I get a lower voltage.
I can have the PWM send voltage to the green pin on my RGB LED. And that pin can be either up at 3V (from the two AA batteries powering the ATtiny44a) or it can be down at zero – or PWM can do about anything in between.
My problem, though, was that the SPI pins – which I use to communicate with the RF chip – overlap my second set of PWM pins.
You see – pin 7 has multiple roles. It can be OC1A and it can also be DI. I’m already using its DI mode to communicate with the RF module. The OC1B pin is similarly tied up acting as DO.
I’m already using OC0A and OC0B for my green and blue pins. These pins correspond to TIMER0 – the 8-bit timer used to control those two PWM channels on OC0A and OC0B. To get this timer working, I followed a few steps:
Okay, here are the three pins I want to use. PB2 and PA7 are the TIMER0 pins I was just talking about. I’m going to use another one of the free pins (PA0) for the red pin if I can.
Obviously I need these pins to be outputs – they are going to be sending out this PWM wave. This code informs the Data Direction Register (DDR) that these pins are outputs. DDRA for PA0 and PA7. DDRB for PB2.
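In code, that might look like this (a sketch, following the pin assignments above):

```c
/* Mark the three LED pins as outputs in the Data Direction Registers. */
DDRA |= (1 << PA0) | (1 << PA7); /* PA0 (red) and PA7 on port A */
DDRB |= (1 << PB2);              /* PB2 on port B */
```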
Alright. Yeah, so these are TIMER0’s PWM settings. We’re turning on mode 3 (fast PWM) and setting the frequency (the line about the prescaler.) I’m not going to go into any detail here. Suffice to say: it’s on.
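A sketch of those settings (the exact prescaler choice is mine; yours may differ):

```c
/* TIMER0: fast PWM (mode 3) on both channels, non-inverting. */
TCCR0A = (1 << COM0A1) | (1 << COM0B1)  /* clear OC0A/OC0B on compare match */
       | (1 << WGM01)  | (1 << WGM00);  /* mode 3: fast PWM */
TCCR0B = (1 << CS00);                   /* no prescaling -- this line sets the frequency */
```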
And now I can just use OCR0A and OCR0B to set the analog levels I need.
TIMER1, 16-bit is Better, Right?
Most of these AVR chips have multiple timers and the ATtiny44a is no different – TIMER1 is a 16-bit timer with hardware PWM. Somehow I need to use this second timer to power the PWM on my red pin.
I could use software to kind of emulate what the hardware PWM does. Like using delays or something like that. The Make: AVR Programming book mentions using a timer’s interrupt to handcraft a hardware-based PWM.
This is problematic with a 16-bit timer, though. An 8-bit timer maxes out at 255. But a 16-bit timer maxes out at 65535. So it’ll take too long for the timer to overflow. I could lower the prescaler, but – I tried that, it’s still too slow.
Then I stumbled on mode 5: an 8-bit PWM for the 16-bit timer. What I can do is run the 8-bit PWM on TIMER1 and not hook it up to the actual pin.
Okay, now we have a second PWM that runs at the same speed as our first PWM.
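The setup might read like this (a sketch; note there are no COM1x bits, so no pin is attached):

```c
/* TIMER1: fast PWM, 8-bit (mode 5) -- rolls over at 255, keeping
 * pace with TIMER0. Nothing is connected to the output pins. */
TCCR1A = (1 << WGM10);
TCCR1B = (1 << WGM12) | (1 << CS10); /* no prescaling, matching TIMER0 */
```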
What we’re going to do now is to hijack the interrupts from TIMER1.
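Enabling them might look like this (a sketch, using the avr/io.h names for the ATtiny44):

```c
/* Turn on the two TIMER1 interrupts we want to hijack. */
TIMSK1 = (1 << OCIE1A)  /* fires when the counter hits OCR1A */
       | (1 << TOIE1);  /* fires when the counter overflows past 255 */
```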
Good, good. OCIE1A gives us an interrupt that will go off when we hit our threshold – same as OCR0A and OCR0B from earlier.
And TOIE1 supplies an interrupt for when the thing overflows – when it hits 255.
Now we manually change the voltage on the red pin.
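The handlers might be sketched like so (assuming red on PA0, as above):

```c
#include <avr/io.h>
#include <avr/interrupt.h>

/* Compare match: the "fall" of our handmade PWM -- red pin goes low. */
ISR(TIM1_COMPA_vect) {
  PORTA &= ~(1 << PA0);
}

/* Overflow: the "rise" -- red pin goes high again. */
ISR(TIM1_OVF_vect) {
  PORTA |= (1 << PA0);
}
```

You’d also need a single sei() call at startup to enable interrupts globally.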
And we control red. It’s not going to be as fast as pure PWM, but it’s not a software PWM either.
Why Not Use Another Chip?
I probably would have been better off using the ATtiny2313 (which has PWM channels on separate pins from the SPI used by the RF) but I needed to lower cost as much as possible – 60¢ for the ATtiny44a was just right. This is a project funded by a small afterschool club stipend. I am trying to come up with some alternatives to the Makey Makey – which the kids enjoyed at first, but which alienated at least half of them by the end. So we’re going to play with radio frequencies instead.
I imagine there are other, better solutions – probably even for this same chip – but I’m happy with the discovery that the PWM’s interrupts can be messed with. Moving away from Arduino’s analogWrite and toward manipulating registers directly is very freeing – in that I can exploit the chip’s full potential. It does come with the trade-off that my code won’t run on another chip without a bunch of renaming – and perhaps rethinking everything.
Whatever the case, understanding the chip’s internals can only help out in the long run.
If you’d like to see the code in its full context, take a look through the Blippydot project.