It is a habit that you pick up after working with Python for a while. I hate to say it, but it is a good habit. As a longtime C/C++/C# programmer, I was brought up with the "curly braces" mentality. We learn it in school, we do it at work, we read it in other people's code. If you look at C/C++/C# code, you will see that programs tend to stretch to pages and pages. But they don't do much, since most of those pages are dedicated to whitespace. Sometimes it takes reading 2 or 3 pages (scrolling if you use an editor) to finally learn that a block of code does something you could state in one sentence. Urghh.....
I am currently porting a C# .NET application to Linux C++ for Intel. There are reams and reams of legacy C# code to wade through. Correspondingly, there is so much whitespace to slog through before you can work out in your mind what the code is doing. It is a process. It is work. But it is so unnecessary. I keep coming back to one line from the Zen of Python: "Simple is better than complex." That little ditty should apply to all languages.
It's the "curly braces" that add so much whitespace. It is these necessary habits which we learned in school and in practice. Maybe it is time to unlearn them. I don't know if other C/C++/C# developers out there feel the same. So much of our work is wading through complexity. It keeps us busy but sometimes I wonder if we should stop and ask: "is all this complexity really necessary?"
I just attended a session at SD2008 on parallelism put on by an Intel engineer this week. The topic was increasing executable performance by utilizing threading and Intel's Threading Building Blocks (TBB) library. TBB offers template-style constructs that make it easy to utilize a CPU's multiple cores. Unfortunately, Intel's focus is on C++ and Fortran (statically typed languages). Pondering execution speed, I got to thinking that Python programs could sure use a bump. Does anyone know whether Python 2.5 or IronPython makes use of the parallel nature of the newer multi-core processors when threading is used? What about utilizing multiple cores even in single-threaded programs?
After jotting this down, I did a search to see if anyone else had posed the same question. This is what I found:
I had done some embedding of the Python interpreter into a C++ executable before and had to learn about the GIL. Python's threads are real operating-system threads, but only the thread holding the Global Interpreter Lock may execute Python bytecode. Each thread grabs the lock, runs for its allotted time, and in doing so blocks every other thread from running Python code. So for CPU-bound Python code, the effect is the same as a single thread being time-shared. Adding CPUs won't help performance, because only one will be utilized while all the others sit idle.
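So to partially answer my own question: no, CPython 2.5 will not spread CPU-bound threads across cores. (IronPython, running on the .NET CLR, has no GIL, so its threads can genuinely run in parallel.) The usual CPython workaround is to use processes instead of threads, each with its own interpreter and its own GIL; the multiprocessing module, added in Python 2.6, packages this up behind a Thread-like API. Here is a rough timing sketch of my own (the work() function is just made-up busywork) showing the difference on a multi-core box:

    import time
    from threading import Thread
    from multiprocessing import Process

    def work(n=5000000):
        # Pure-Python CPU-bound loop; whichever thread runs it
        # holds the GIL for the duration.
        total = 0
        for i in range(n):
            total += i
        return total

    def timed(worker_class, label):
        start = time.time()
        workers = [worker_class(target=work) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print("%-12s %.2f s" % (label, time.time() - start))

    if __name__ == "__main__":
        timed(Thread, "threads:")     # serialized by the GIL
        timed(Process, "processes:")  # one GIL per process, all cores busy

On a quad-core machine the process version finishes several times faster, while the threaded version is no quicker than running the loop four times in a row.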
Anyone have further thoughts or links to share on this topic?
Linux sockets have this peculiarity that I have run across while developing some protocols. Say you have set up a server which sends replies to client requests. The client connects and would like to read N bytes from the server. Those N bytes may arrive broken up across several recv() calls. Effectively, recv() is called in a loop until the number of bytes you expect has been read.
The same occurs when you try to send() something: you have to loop until all the bytes you meant to send have actually been written. I don't know about other people, but I find this very counter-intuitive. Having to read the same sort of code again and again bugs my brain.
If the only language a person uses is C or C++, then reusing a helper library won't be a problem. But try developing sockets in another language... say Python. You have to deal with the same weirdness in behavior once again. The pity is that the weirdness is not encapsulated. It leaks into your code each and every time you reimplement your protocol.
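For the record, here is the kind of helper I end up rewriting, as a minimal Python sketch (recv_all and send_all are just my names for it). Interestingly, Python's socket objects do ship with a sendall() method that encapsulates the send loop, but there is no built-in counterpart on the receive side:

    def recv_all(sock, n):
        # recv() may return fewer bytes than requested, so one call
        # is never enough; loop until exactly n bytes have arrived.
        chunks = []
        remaining = n
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:
                # recv() returning nothing means the peer closed the
                # connection before sending everything we expected.
                raise EOFError("connection closed with %d bytes outstanding"
                               % remaining)
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

    def send_all(sock, data):
        # The mirror-image loop; send() may write only part of the
        # buffer. Python's own sock.sendall(data) already does this.
        sent = 0
        while sent < len(data):
            sent += sock.send(data[sent:])

Wrap those two once, and the short-read/short-write weirdness stays out of the protocol code.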
I have been using CVS in quite a few places now, and it has proved quite useful for day-to-day work. Not bad for something that is free. However, like all things confronted with the passage of time, it is dated. Mind you, it is not as old as the ole SCCS, but it has been a standard in the unix/linux realm for over a decade. At larger companies, I have seen migration to ClearCase. Now, ClearCase is mighty powerful, but it is also expensive. How much? I don't know... but I have only seen it used at companies in excess of 5000 people.
For the past 3 years, there has been much talk of Subversion. It is supposed to trump CVS. Many people have even claimed that it should replace CVS. The opinion in 2004 may have been just that... an opinion. However, it is now 3 years on, and if Subversion adoption is more prevalent, then this may be a trend that should not be disregarded.
Below are some links comparing and differentiating the two source code control systems:
Just Useful Tools
There was a time when the entire realm of computers fit in a 64K memory footprint. The operating system, the main running program, and any terminate-and-stay-resident programs all had to fit in this limited space. People writing new programs had to be wary of the amount of code space and run-time space their program would incur on a computer. Over-allocate and your program wouldn't run; worse yet, you might crash the machine. Those were the older days of DOS and 80286 machines. Those were the days when expansion wasn't an option and you couldn't get more by just swapping out a SIMM, changing CPUs, or swapping out a board.
You might think those days are gone, but today's real embedded developers face the same constraints. Hardware is sold to a customer with a fixed set of resources. Once it is out in the market, there is no painless way to upgrade unless you are willing to take the hit of the customer doing an RMA (Return Merchandise Authorization) to swap out parts. Software for these devices is all over the place; it is just that you don't easily recognize it. Things such as MP3 players, routers, TVs, DVD players, cell phones, etc. abound with realtime software.
With the advent of virtual memory and process/thread spawning, new designers don't even think about the problem of limited resources anymore. Today's programs that run on our spanking new Intel or AMD dual-core chips have a tendency to:
All these things are what turn a user's computer into a literal trashcan. This is what slows a person's computer down no matter how much memory or how many gigahertz a person might throw at it. For computing, this is a major step in the wrong direction. A gentle plea to other program writers: a user's computer is not your playground. You are a guest on their computer: play nice and clean up after yourselves.