A note on Hungarian notation


Back in the old days of Visual Basic, I got into the habit of using Hungarian notation. That habit stuck with me through my Java days, but I’ve kicked it almost completely now. My conclusion is that variable names should be clear, free of obscure prefixes, and should explain their purpose. If I want to know the scope or type of a variable, I’ll just look it up using an IDE or grep.

Please read the post linked above to understand that there are at least two kinds of Hungarian notation: one comes from Charles Simonyi, the other was popularized by Charles Petzold.
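
As a rough illustration of the difference, with prefixes that are commonly cited examples rather than anything from the post:

// Systems Hungarian: the prefix encodes the compiler's type.
unsigned long dwTimeout;      // dw: a DWORD on Windows
char          szUserName[32]; // sz: zero-terminated string

// Apps Hungarian (Simonyi's original): the prefix encodes purpose.
int rowFirst; // a row index, not to be mixed with...
int colLast;  // ...a column index, even though both are ints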


Color depth


While working with the RFB protocol, I came upon a situation where I receive 16-bit pixels, with red, green, and blue each at 5-bit color depth, i.e. each color value ranges from 0 to 31. I need to convert each pixel to 24-bit color depth for display as a bitmap, i.e. 8 bits each for red, green, and blue. What works is left shifting each of the 5-bit colors by 3 bits so that each color is 8-bit.
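
Here is a minimal sketch of that conversion, assuming the common 0RRRRRGGGGGBBBBB pixel layout; in RFB the actual shifts and maxima come from the negotiated pixel format, so treat these masks as an example:

#include <stdint.h>

void PixelTo24Bit(uint16_t pixel, uint8_t *red, uint8_t *green, uint8_t *blue)
{
    *red   = ((pixel >> 10) & 0x1F) << 3; // 5-bit red   -> 8-bit
    *green = ((pixel >> 5)  & 0x1F) << 3; // 5-bit green -> 8-bit
    *blue  = (pixel & 0x1F) << 3;         // 5-bit blue  -> 8-bit
}

A plain shift maps 31 to 248 rather than 255; replicating the top bits into the low ones, as in (value << 3) | (value >> 2), is a common refinement when full-range white matters.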

Integer division and timer resolution


I have this very specific need to send a list of pre-timed messages from an embedded system. Each message has a specific time when it needs to be sent out. The time is specified in milliseconds (ms) relative to the previous message in the list, with the first message sent at time zero.

The algorithm to send messages out is very simple. I send out the first message and arm a timer with the time of the second message. As soon as the timer expires, I send out the second message and rearm the timer for the third message, and so on. Timer resolution is thus very important. Since the smallest time interval I require is 1 ms, a timer resolution of 1 ms or less would be ideal.
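
The loop could look something like the sketch below; sendMessage and armTimer are hypothetical placeholders for the embedded system’s real calls, and the conversion of milliseconds to timer ticks is developed next:

#define MESSAGE_COUNT 3

static const unsigned int delayMillis[MESSAGE_COUNT] = {0, 6, 2}; // ms after the previous message
static unsigned int next = 0;

static void sendMessage(unsigned int i) { (void)i; /* transmit message i */ }
static void armTimer(unsigned int millis) { (void)millis; /* start the hardware timer */ }

void onTimerExpired(void)
{
    sendMessage(next);
    next++;
    if (next < MESSAGE_COUNT)
        armTimer(delayMillis[next]);
}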

The embedded system I am dealing with has a timer resolution of 2.5 ms, i.e. one tick of the timer is 2.5 ms long. The timer routine does not accept a time of zero; one tick is the smallest value it expects. I therefore need a routine that converts time in milliseconds to ticks. Put simply, the routine would receive a time in milliseconds, divide it by 2.5, and round the result to an integer. I don’t have access to floating point math though, so the routine I’ve developed looks like this:

#include <stdint.h> /* uint32_t */
#include <stdio.h>  /* printf, used by the test code below */

const unsigned short MULTIPLY_BY = 2;
const unsigned short DIVIDE_BY = 5; // never ever set to zero

uint32_t MillisToTicks(uint32_t timeInMillis, int *remainder)
{
    uint32_t timesTwo;
    uint32_t result;

    // Dividing by 2.5 is the same as multiplying by 2 and dividing by 5.
    timesTwo = timeInMillis * MULTIPLY_BY;
    result = timesTwo / DIVIDE_BY;
    *remainder += timesTwo % DIVIDE_BY;
    if (*remainder >= DIVIDE_BY)
    {
        // A whole tick's worth of remainder has accumulated; use it up.
        result += 1;
        *remainder -= DIVIDE_BY;
    }
    else if (*remainder <= -1 * DIVIDE_BY && result > 1)
    {
        // Earlier calls overshot by at least a tick; pay it back now.
        result -= 1;
        *remainder += DIVIDE_BY;
    }
    else if (result == 0)
    {
        // Never return zero: force one tick and record the overshoot.
        result = 1;
        *remainder -= DIVIDE_BY - result;
    }
    return result;
}

MillisToTicks never returns zero. Dividing by 2.5 is the same as multiplying by 2 and dividing by 5; the 2 is read from a constant called MULTIPLY_BY and the 5 from another constant called DIVIDE_BY. Since the integer division produces a remainder, MillisToTicks requires a pointer to an integer where it can accumulate the remainder of the division. That remainder is updated and checked on each call. If it reaches or exceeds DIVIDE_BY, the result is incremented by one and the remainder decremented by DIVIDE_BY.

If the result of the integer division is zero, I force it to 1 and decrement the remainder accordingly. The next time MillisToTicks gets called, if the remainder is less than or equal to -DIVIDE_BY (and the result of the division is greater than one), I decrement the result by one and increment the remainder by DIVIDE_BY. Put simply, there will be moments when I stray from timing the messages perfectly, but given enough messages, I stray back on track. To test that, I have implemented the following code:

int remainderMillisToTicks = 0;

uint32_t times1[] = {6, 6, 6, 2, 6, 6, 2, 2, 6, 6, 7, 8, 1, 1, 2, 3, 6, 20, 30, 9, 30, 100, 3000, 1, 1, 8000, 10000, 23, 1, 1, 19, 6, 5, 26, 201, 503, 901};

uint32_t times2[] = {6, 6, 6, 2, 6, 6, 2, 2, 6, 6, 7, 8, 503, 901, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 6, 20, 30, 9, 30, 100, 3000, 1, 1, 8000, 10000, 23, 1, 1, 19, 6, 5, 26, 201, 503, 901};

uint32_t times3[] = {6, 6, 6, 2, 6, 6, 2, 2, 6, 6, 7, 8, 1, 1, 2, 3, 6, 20, 30, 9, 30, 100, 3000, 8000, 10000, 23, 19, 6, 5, 26, 201, 503, 901, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};

void test(uint32_t *times, unsigned short size)
{
    int i;
    uint32_t totalMillis = 0;
    uint32_t ticks = 0;
    double ticksToMillis = 0;
    double totalTicksToMillis = 0;

    for (i = 0; i < size; i++)
    {
        totalMillis += times[i];
        ticks = MillisToTicks(times[i], &remainderMillisToTicks);
        // Convert ticks back to milliseconds to measure the timing error.
        ticksToMillis = ((double)ticks * DIVIDE_BY) / MULTIPLY_BY;
        totalTicksToMillis += ticksToMillis;
        //printf("%u\t%f\t%d\t%u\t%f\n", times[i], ticksToMillis, remainderMillisToTicks, totalMillis, totalTicksToMillis);
    }
    printf("Ideal total time required: %u, time achieved: %f\n", totalMillis, totalTicksToMillis);
}

int main()
{
    test(times1, sizeof(times1) / sizeof(uint32_t));
    remainderMillisToTicks = 0; // reset the carried remainder between tests
    test(times2, sizeof(times2) / sizeof(uint32_t));
    remainderMillisToTicks = 0;
    test(times3, sizeof(times3) / sizeof(uint32_t));
    return 0;
}

Here’s what the output of that test code looks like:

Ideal total time required: 22953, time achieved: 22952.500000
Ideal total time required: 24363, time achieved: 24362.500000
Ideal total time required: 22965, time achieved: 22985.000000

Enable the commented printf statement in test and you’ll see a call-by-call log. When the requested time value is too small, you’ll see the messages stray away from perfect timing, then stray back to a normal cadence later. The overall time is thus unaffected, or affected only slightly. The last call to test represents a failure condition that can only be avoided by using a higher resolution timer.

Common calls in a communications API


Inspired by several generic interfaces in the Unix/Linux world, such as the socket API, and by event-driven programming, I have been using the following calls (functions or methods, and event notifications) in my communications APIs.

  • init – May be replaced by the constructor of a class. Creates resources that are held for the lifetime of the object. A corresponding destroy call may also be implemented for deterministic finalization of resources.
  • connect(configuration, callback) – Where configuration provides whatever configuration information is required for connection. It may also be passed to init or obtained by some other means. callback is invoked when connection is established. If implementing a server API, use onConnect for notification of incoming connection.
  • onConnect(connection) – Event notification when an incoming connection is established.
  • disconnect(callback) – Finalizes communication, and calls callback when done. In a server API disconnect may receive an additional parameter indicating which connection to disconnect from.
  • onDisconnect() – Event notification when a disconnection happens and disconnect has not been invoked.
  • onReceive(data) – Event notification when incoming data is received.
  • send(data) – Called when data needs to be sent. If data is an array of bytes, it may receive additional parameters such as a start index and length. It may also receive a callback that is invoked once the data has been sent out (if the API buffers data).
  • onError(error) – Event notification when an error needs to be communicated.

Event notifications can be implemented differently in various languages. I have used callback functions in C, delegates in .NET, event listeners (observers) in Java, blocks in Objective-C, and EventEmitter in Node.js.
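
Rendered in C, the interface might look something like this. The names and signatures are illustrative only, not taken from any particular library:

typedef void (*on_connect_cb)(void *connection);
typedef void (*on_disconnect_cb)(void);
typedef void (*on_receive_cb)(const unsigned char *data, unsigned int length);
typedef void (*on_error_cb)(int error);

typedef struct comms_api
{
    void (*init)(void);    // acquire resources held for the object's lifetime
    void (*destroy)(void); // deterministic finalization
    void (*connect)(const void *configuration, on_connect_cb callback);
    void (*disconnect)(on_disconnect_cb callback);
    void (*send)(const unsigned char *data, unsigned int start, unsigned int length);
    on_connect_cb    on_connect;    // incoming connection (server APIs)
    on_disconnect_cb on_disconnect; // peer disconnected without us asking
    on_receive_cb    on_receive;    // incoming data
    on_error_cb      on_error;      // an error needs to be communicated
} comms_api;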

Thinking in shades of gray


There is black and there is white. That is how monochrome displays represented color; maybe the white was sometimes green, but it was white nevertheless. Then shades of gray began to appear, and the grayscale display was born. It wasn’t long before color displays sprang up. That is how technology evolves: from the bottom, doing whatever is possible, up to the point where it becomes like clay. You make whatever you want out of it; just get your hands dirty and keep an open mind.

For a computer programmer, a bit is the smallest unit of information, and a sequence of 8 bits is a byte. The beauty of the bit is that any information that long is either there or not there: black or white, true or false. The moment you go beyond a bit, you need to be prepared to think in shades of gray. There cease to be absolutes; several other possibilities emerge between the two extremes.

We know that even the physical world is not what our eyes would have us believe. It is a quantum world that appears to us as a cohesive whole. Whatever gives rise to the quantum world is yet, and probably meant to be, beyond our comprehension. As an aside, my take is that we are a simulation, and we cannot step out of the simulation, just as a computer program cannot step out of the computer. We would need to step out of the simulation to observe what keeps us going. Easier thought than done. I know a very popular movie that was based on the idea.

As software and product designers we need to think in shades of gray; we need to let our absolutes be tempered by a range of possibilities. It is easy to think of a simple solution to a problem; it is harder to step back, see all the solutions to the problem, and pick the best. It takes hard iterative work, because we are often led astray. We need to step back, learn, and iterate.

Exposing the right level of complexity to the user, who very often should be ourselves, is a challenge software designers face daily. The lay user interacting with the product will never know about the layer upon layer of complex interactions happening under the hood. As developers we regularly embrace complexity to expose simplicity.

If I sound like I have a practice to preach, I’d like to clarify that I don’t. I just go along with whatever works, or refuses to work otherwise.

User-Centered Design by Travis Lowdermilk; O’Reilly Media



Written by a developer for developers, the book is short and succinct, and can be read in a few short sittings. Experienced developers will find that several user-centered design (UCD) practices have clearly been borrowed from, or absorbed into, classic software engineering practices.

The remainder of this review discusses the contents of each chapter.

Chapter 1 starts with “On January 9, 2007, a man quietly walked onto a stage and changed the course of technological history.” That had me hooked.

Chapter 2 explains usability, human-computer interaction (human factors), UCD, and user experience, and how they relate to each other. It then goes on to explain UCD by describing what it isn’t.

Chapter 3 is about working with users, knowing who your users are, understanding the different kinds of users, knowing when to listen to them, and when not to listen too literally.

Chapter 4 describes how to define a plan for your project. It starts with crafting a team mission statement. Project details are defined next, with a title, description, stakeholders, and impact assessment. Then comes the importance of collecting user requirements without thinking about solutions, followed by a discussion of gathering functional requirements and how the application will satisfy the user requirements. The chapter proceeds with capturing data and workflow modeling, and ends with prototyping, which is explained in greater detail in Chapter 8. Appendix A has a nice sample project template.

Chapter 5 starts with the importance of defining a manifesto or vision statement for an application. That is followed by a discussion on exercising restraint when adding features. The chapter delves into narrative, personas and scenarios, and their usage in the UCD process to refine the vision.

Chapter 6 delves into the need for creativity, and why it takes courage and hard work to exercise it. It discusses several ways for enhancing creativity, including studying how others do their work.

Chapter 7 delves into the study of design principles and their importance to UCD. It discusses the Principle of Proximity (a Gestalt principle), Visibility, Hierarchy, Mental Models and Metaphors, Progressive Disclosure, Consistency, Affordance and Constraints, Confirmation, Reaction Time (Hick’s law), and Movement Time (Fitts’s law).

Chapter 8 delves into gathering feedback for a new application using surveys, informal interviews, formal interviews, contextual inquiry, task analysis, heuristic evaluation, and A/B testing. Storyboarding and prototyping as means of visualizing the new application are also discussed in detail.

Chapter 9 discusses usability studies of existing applications as a means of gaining feedback. It details procedures and tools that may be used.

Every developer should read this book and absorb the practices described in it, especially older developers who have been hearing about UCD but are not aware of its practices.

Will threads become a relic?


Newer concurrency APIs hide the concept of threads. They instead expose task parallelism by applying the thread pool pattern, which makes migrating from thread pools to other means of task parallelism easier. Some implementations expose asynchronous calls, doing away with the need to even create tasks. If the process has a single thread of execution, as in the case of Node.js, you don’t have to bother with locks; scaling is achieved using multiple processes and shared data.

Evidence of this change can be seen in several popular programming APIs (a minimal sketch follows the list):

  • Concurrency Utilities in Java
  • Parallel Extensions for the .NET Framework, and C# async and await
  • Grand Central Dispatch in OS X and iOS 4
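
As an example, here is a minimal Grand Central Dispatch sketch in C; it needs clang with blocks support, as on OS X. Tasks are queued and joined without ever creating or managing a thread:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    int i;

    for (i = 0; i < 4; i++)
    {
        int task = i; // blocks capture variables by value at creation time
        dispatch_group_async(group, queue, ^{
            printf("task %d ran on some pooled thread\n", task);
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER); // join without touching threads
    dispatch_release(group);
    return 0;
}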

Hopefully in the future we’ll see tasks magically dispatched to dedicated co-processors such as Epiphany and the GPU.

All that leads me to conclude that threads will become a relic in most programming languages.