Interesting Articles for Perusal
I have written, debugged, and fixed software for most of my life. My conversations with other software developers often end up wondering how certain lines of code come to be worth more than gold or platinum, while others, which many people have spent countless hours poring over, become worth less than the dirt you are standing upon. It is much the same with people's careers. Some end up richer than anyone can even imagine, while many others toil and grind their entire lives just to squeak by with the demands of life.
In the world of technology, there is a disconnect between the value of a product and the difficulty of building it. As human beings, we tend to equate what is hard with what is worthwhile: it is hard to make, so it must be worth a lot of money. This is not necessarily true. Engineers, scientists, and developers tend to choose the path that they think will lead to the money. This inevitably leads to complicated designs and ideas, and when a business opportunity comes along, they end up discussing it to death. It comes, it wilts, and it dies without anyone being the wiser. Business people have a sort of street smarts in that they understand an idea has to be tried out before anyone can jump on the bandwagon. This is a thought that was previously explored: ideas are cheap, words are even cheaper.
Like Nike says: Just Do It. That's where the rubber meets the road, and the results of doing it may surprise you. The one step to getting to reality is to do it rather than just talk about it. It may work out or it may not, but you are never going to know while it is still on the drawing board.
Software developers understand this as the mantra: release early, release often.
Addendum: A good idea is like any sharp sword: it is very easy to cut yourself if you are not careful with it. This idea -- release early, release often -- is a double-edged one.
I have run into a problem similar to the default-allocation waste encountered on hard disks. However, since I am currently working in the data networking arena, that is where it showed up.
First, let me review the disk-allocation waste situation for those who aren't familiar with it. When you format your hard disk, you must choose a cluster size: the smallest unit of disk space your computer will allocate. A large cluster size is beneficial for large files because the computer doesn't have to index as many blocks when accessing such a file; with a small cluster size, there are many more index entries to keep track of for the same file. But the biggest drawback is that even a very small file is still allocated a minimum chunk of space. The difference between the size of that small file and your cluster size is waste that cannot be used.
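To make the waste concrete, here is a quick sketch (plain Python, with made-up file sizes and a 32 KB cluster) of how slack space adds up:

```python
import math

def allocated(file_size, cluster_size):
    # Disk space is handed out in whole clusters, so even a 1-byte
    # file consumes a full cluster.
    return max(1, math.ceil(file_size / cluster_size)) * cluster_size

cluster = 32 * 1024               # 32 KB clusters (hypothetical)
files = [100, 2_000, 40_000, 500]  # file sizes in bytes (made up)

used = sum(files)
taken = sum(allocated(f, cluster) for f in files)
print(f"data: {used} bytes, allocated: {taken} bytes, "
      f"slack: {taken - used} bytes")
```

Four small files totaling under 42 KB end up claiming 160 KB of disk; the rest is slack that no other file can use.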
A prime example is when that very big disk you just bought for your Windows 98 machine suddenly runs out of space. You didn't actually use up all the space; your disk defaulted to a large cluster size, and much of the capacity is tied up in the unused portions of your disk clusters. I have written about Hard Disk Sizes before.
In networking, it is similar. A small "cluster" optimizes usage of pipe bandwidth, but large packets are simply not accepted by your switch or router; they are dropped because they don't fit in the cluster. With a large "cluster", jumbo packets are routed just fine; however, there is a large amount of waste whenever you are sending very small packets. The throughput is sub-optimal and can degrade by as much as 35%. Again, the reason is that the difference between the space occupied by the frame and the cluster size is essentially wasted.
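The arithmetic is the same as the disk case. Here is a rough sketch (the 96-byte "cluster" size is a made-up number for illustration) of how fixed-size slots on the wire penalize small packets:

```python
import math

def link_efficiency(packet_size, cell_size):
    # A packet occupies a whole number of fixed-size slots on the wire;
    # the unfilled remainder of the last slot is wasted bandwidth.
    cells = math.ceil(packet_size / cell_size)
    return packet_size / (cells * cell_size)

# Hypothetical numbers: a 64-byte packet in a 96-byte "cluster" fills
# only two-thirds of the slot -- in the ballpark of the 35% hit above.
print(f"{link_efficiency(64, 96):.1%}")    # small packet, big cluster
print(f"{link_efficiency(9000, 96):.1%}")  # jumbo packet, same cluster
```

The jumbo packet spreads its last-slot remainder over many full slots, so it barely notices; the small packet pays the full rounding cost on every send.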
The two situations above are exactly the same problem in entirely different domains. In truth, you can't win them all. You choose a cluster size somewhere in the middle and hope that your usage falls somewhere near it. There will be benefits and waste, but like many things in life, they are out of your control.
I have been fixing a bug that involves writing to memories which take a long time to accept the charge. This used to be very common when FLASH memories were first introduced. Now we see it in specialized ASICs.
Writing a value to an area of buffer memory that behaves differently from the rest of DRAM or SRAM involves tricking, or faking out, the processor. A CPU tends to set a timer before it does a write to memory. If the timer expires and the CPU hasn't received a DTACK from the memory, a processor exception occurs. In our case, DTACK would not arrive for hundreds of milliseconds.
To prevent the exception, the CPU must turn off the DTACK timer just before it performs the WRITEs. In the same manner, it has to remember to re-enable the timer after such accesses are done.
From a software perspective, each of the long WRITEs looks like jumping into a black hole. Microsoft solved this problem in Win32 with a mechanism called Asynchronous I/O: the software hooks up a worker function and a callback function and goes on its merry way. When the worker finishes, the callback is invoked and the software knows the operation has completed. In firmware, we don't have any of those luxuries; printf is about the best luxury we can afford. So how do we wait for I/O completion? What you (or I) tend to find in a lot of firmware is fixed "for xxx to yyy" loops. Nothing fancy. The processor does nothing but spin until it thinks the memory WRITEs are done.
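None of this firmware is in Python, of course, but the two patterns are easy to sketch there (all names and numbers below are made up for illustration):

```python
import time

def write_with_fixed_spin(do_write, spin_count=100_000):
    # The classic firmware pattern: kick off the write, then burn a
    # fixed number of loop iterations and hope the part is done by then.
    do_write()
    for _ in range(spin_count):
        pass  # pure spin; the processor does no useful work here

def write_with_polling(do_write, is_done, timeout_s=0.5):
    # A slightly safer variant: poll a completion flag with a deadline,
    # so a stuck device raises an error instead of hanging forever.
    do_write()
    deadline = time.monotonic() + timeout_s
    while not is_done():
        if time.monotonic() > deadline:
            raise TimeoutError("memory WRITE never completed")

# Fake device standing in for the slow buffer memory.
state = {"polls_left": 3}
def fake_write():
    state["polls_left"] = 3
def fake_done():
    state["polls_left"] -= 1
    return state["polls_left"] <= 0

write_with_polling(fake_write, fake_done)
print("write completed")
```

The fixed-spin version is the "silly code" you actually find in the field; the polling version at least fails loudly when the hardware never answers.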
We see a lot of silly code in firmware. However, if it gets the job done... what more can you want?
Is anyone still experimenting with IronPython? The last time I really mucked with it, it was barely usable. Version 0.72 has just come out, and I would dare say not much has changed in the last 8 months. Here is the simplest working snippet of invoking a .NET ListView from IronPython.
import sys
sys.LoadAssemblyByName("System.Drawing")
sys.LoadAssemblyByName("System.Windows.Forms")
from System.Windows.Forms import *
from System.Drawing import *

f = Form(Text=" Forms ListView ")
f.FormBorderStyle = FormBorderStyle.FixedDialog
f.StartPosition = FormStartPosition.CenterScreen

mainList = ListView(Location=Point(30, 30), Size=Size(300, 300))
mainList.View = View.Details
mainList.GridLines = True
mainList.Columns.Add("Column A", 100, HorizontalAlignment.Left)
mainList.Columns.Add("Column B", 200, HorizontalAlignment.Left)
mainList.Columns.Add("Column C", 300, HorizontalAlignment.Left)

for i in range(20):
    mainList.Items.Add("item " + str(i), i)

f.Controls.Add(mainList)
f.ShowDialog()
I tried making the sample dump a list of the running processes in the system, a very simplistic mimic of the Task Manager. Unfortunately, import os bombs in IronPython 0.7. In 0.72 it succeeds, but the method and attribute lists are empty. Quite a useless import if the module is empty.
Oh well, I suggest people wait a couple of years before using IronPython for serious work. For now though, experiment away.