Armed to the gills – Dell C6100 build out

The fine folks over at ServeTheHome have done a fantastic job documenting the Dell C6100. This is a rather interesting system because:

  1. It packs quite a bit of compute power into 2U for colocation
  2. They are surprisingly cheap on the secondary markets

I’ll point you to the ServeTheHome blog posts, which deliver a nice overview.

My ideal config is a redundant set of servers running border services including routing, NAT, firewall, VPN and a redundant set of servers running a variety of applications, VMs, etc.

The C6100 comes in either a 12-disk 3.5″ chassis or a 24-disk 2.5″ chassis. The disks are factory-split evenly between the four nodes: either three 3.5″ or six 2.5″ per node.

My border networking nodes don’t need a lot of disk space. In fact, they don’t even really need RAID, thanks to the magic of PF and CARP, so I’d prefer not to waste hot swap bays on them. There are several options here: PXE or iSCSI boot, USB thumb drive boot, or figuring out how to cram some storage inside the unit. It turns out the last option isn’t that hard if you need traditional local storage.

Using info from the ServeTheHome forums, I built a 5V power tap: I bound the two 5V rails and two grounds from the internal USB header to a donor 4-pin Molex to SATA power converter. I recommend sticking with a low-power SSD like the Crucial M500 or certain Samsung units, as this stretches the USB-standard power envelope of 0.5 A per port a bit.

Dell c6100 5v USB to SATA power tap

Here’s a parts list per node:

  • Qty: 4 – Molex 12″ PicoBlade pre-crimped wires – Mouser link
  • Qty: 1 – Molex 8-position PicoBlade connector – Mouser link
  • Qty: 1 – SATA power connector salvaged from a 4-pin Molex converter
  • Qty: 1 – 6″ SATA cable
Dell c6100 internal SSD storage

I mounted the internal SSDs with a stack of automotive trim tape on top of the Southbridge heatsink. A block of foam usually sits here to support mezzanine cards so I’m not too concerned. If need be, I can cold swap these disks without too much trouble while the other nodes continue to run.

This in turn frees up the hot swap bays to be split between the two app nodes. This is great because six 3.5″ disks give me the right balance of flexibility, capacity, performance, and cost. Rewiring is a bit of a chore: you need to order two SFF-8087 to 4x SATA 7-pin breakout cables (check Monoprice). You can reuse the two original cables with some creative wiring (which leaves the front bays numbered right-to-left), or purchase four of the aforementioned cables for correct numbering.

Full c6100 loadout

If you’re running SSDs in an array, I recommend hunting down the LSI SAS2008-based “XX2X2” mezzanine card, which runs at 6 Gbps and supports >2 TB disks. You need various bits to install it, including new SATA cables, a PCIe riser, and metal brackets. At the time of my purchase, the easiest way to get all of this was buying the older “Y8Y69” mezzanine card with all those bits and a bare “XX2X2” card.

These hacks probably don’t make sense in every setting, as they’re fairly time-consuming. If you have the budget, the 24-disk 2.5″ chassis is the way to go. But for personal use, this is a great build for me!


Reusable Pagination in Play! 2

On the Play! Framework mailing lists I’ve seen references to the sample Computer Database application as the canonical example of paginating data.  It’s a good start, but it’s pretty specific to one data type.  Follow along and we’ll make a more general utility.

If we think abstractly about paginating a web application using Model-View-Controller:

  • M – A way to filter the dataset
  • V – A UI element for displaying the pagination and linking to others
  • C – A way to get Request parameters that define the page, the length of the page, and other filters

A screen shot of what we will be creating

Model

The specifics here depend upon how you are retrieving the data to display.

Squeryl provides a page(offset, pageLength) method that uses the DB’s LIMIT and OFFSET.  I use this to create a subset collection that I pass to the view for iterating over.  I also have a helper method to get the total row count.
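Here is a minimal sketch of what those Model methods can look like with Squeryl. The schema object, table, and column names are illustrative assumptions, and the calls belong inside a transaction:

import org.squeryl.PrimitiveTypeMode._

object Model {
  def getNotificationsByUser(user: User, offset: Int, pageLength: Int): List[Notification] =
    from(AppDb.notifications)(n =>
      where(n.userId === user.id)
      select(n)
      orderBy(n.id desc)
    ).page(offset, pageLength).toList  // Squeryl emits LIMIT/OFFSET here

  def getCountByUser(user: User): Long =
    from(AppDb.notifications)(n =>
      where(n.userId === user.id)
      compute(count)
    )  // the single-measure query converts to its value implicitly
}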

Controller

Play’s routes and reverse routing take care of passing the page number around:

GET  /list       controllers.Notifications.list(page:Int=1)
GET  /list/help  controllers.Notifications.help
# NOTE: /list/:page MUST COME AFTER /list/[string] due to route priorities
GET  /list/:page controllers.Notifications.list(page:Int)

We get pretty URLs like /list, /list/2, /list/3, etc. Note that @routes.Notifications.list(1) will resolve to /list, which is a nice touch. If /list/1 is your preference, you may consolidate to one route. Also note that route priority (order) may affect things depending on the URL parameters you are using.

In the application controller, you pass an offset and pageLength to the Model, which returns a collection holding one page’s worth of items.

Then, pass it all through to the view:

  def list(page:Int) = withUser { user => implicit request =>
    val pageLength = 10
    val notifications = Model.getNotificationsByUser(user, (page-1)*pageLength, pageLength)
    val count = Model.getCountByUser(user)
    Ok(views.html.notifications.index(notifications, count, page, pageLength))
  }

View

In our list view, we pass some variables through and iterate over the filtered collection of data to display.  Then, we call a helper which creates the pagination UI element.  The biggest thing to note here is the use of Scala’s first class functions, specifically partial application, to delegate the page parameter to the paginate helper:

@(notifications:List[models.Notification], count:Int, page:Int, pageLength:Int)
  <ul>
  @for(n <- notifications) {
    <li>@n</li>
  }
  </ul>
  @includes.paginate(page, pageLength, count, routes.Notifications.list(_))

View helper

“I am sorry I have had to write you such a long letter, but I did not have time to write you a short one” — Blaise Pascal

This template code is pretty awful.  I may refine it on the gist if time permits.  Please comment if you have suggestions!

We take the page we’re on, the items per page, the total query count, and the partially applied route, and build a UI widget.  The lowbound and highbound helper functions define how many pages to link.
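Since the gist itself isn’t reproduced here, below is a rough sketch of such a helper as a Play template. The markup is an illustrative choice, and the post’s lowbound/highbound helpers are inlined as the max/min window of ±3 pages around the current one:

@(page: Int, pageLength: Int, count: Int, link: Int => play.api.mvc.Call)

@defining(math.max(1, (count + pageLength - 1) / pageLength)) { pages =>
  <ul class="pagination">
    @if(page > 1) { <li><a href="@link(page - 1)">&laquo; Prev</a></li> }
    @for(p <- math.max(1, page - 3) to math.min(pages, page + 3)) {
      @if(p == page) {
        <li class="current">@p</li>
      } else {
        <li><a href="@link(p)">@p</a></li>
      }
    }
    @if(page < pages) { <li><a href="@link(page + 1)">Next &raquo;</a></li> }
  </ul>
}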

Conclusion

I’m very interested in your take, as well as ways to clean this up!  I’ll update the post with good suggestions.


Playing with Play Framework and Plivo

One of the interesting things about Scala is how well it composes. This is the primary “Scalable” in Scalable Language.  I’m writing this up to share my experience and track my progression in a partially contrived but interesting example.

So I’m playing around with Plivo, a telecom IaaS web service (similar to Twilio), and the Play! Framework.  Plivo exposes a RESTful API, and we want to make sure its API callbacks are authentic.  HMAC as usual.

This is your brain on Functional Programming

Scala makes it very easy to break up your problems into functions.  Rather than loops with counters, mutable variables, and such, Scala encourages (but doesn’t mandate) a functional style with immutable values, first class functions, nested functions, etc.  It generally leads to code that is more concise and contains fewer bugs.

Nested functions

Play! provides an HMAC implementation (play.api.libs.Crypto.sign), but it returns the signature as a hex ASCII string.  Plivo Base64-encodes the raw (non-ASCII) HMAC signature bytes.  We have a few options to get the signatures into equivalent formats.  I chose to write some nested helper functions and encode my signature in Plivo’s format.  Since the functions are unlikely to be used elsewhere, the nesting keeps the namespace clean and locally reinforces that we need a specific format.
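A sketch of that helper, assuming Plivo’s scheme is HMAC-SHA1 with a Base64-encoded digest (the method and parameter names here are mine, not Plivo’s):

import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import org.apache.commons.codec.binary.Base64

def isAuthentic(payload: String, expectedSignature: String, authToken: String): Boolean = {
  // Nested helper: Base64-encoded HMAC-SHA1 to match Plivo's signature format.
  // Nothing else needs it, so nesting keeps the namespace clean.
  def sign(data: String): String = {
    val mac = Mac.getInstance("HmacSHA1")
    mac.init(new SecretKeySpec(authToken.getBytes("UTF-8"), "HmacSHA1"))
    Base64.encodeBase64String(mac.doFinal(data.getBytes("UTF-8")))
  }
  sign(payload) == expectedSignature
}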

for comprehension (syntactic sugar around map and flatMap)

One of the interesting things here is the use of Scala’s for comprehension as a control structure, a technique I learned on the Play! mailing list.  In the body, we have two expressions that return an Option type.  If both are defined, the yield block fires with the values bound directly.  Otherwise, the getOrElse block fires and handles the error.
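In miniature, the pattern looks like this; the two defs stand in for the Option-returning calls in the real code:

def signature: Option[String] = Some("c2lnbmF0dXJl")
def formParams: Option[Map[String, Seq[String]]] = Some(Map("To" -> Seq("15551230000")))

val result = (for {
  sig    <- signature   // a None here short-circuits straight to getOrElse
  params <- formParams  // likewise for the POST body
} yield "verify " + sig + " over " + params.size + " params"  // both defined: success path
).getOrElse("BadRequest: missing signature or params")        // error path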

List fun[ctions]

Play! returns the url-encoded POST data as a Map[String, Seq[String]] (because form keys can have multiple values, e.g. check boxes).  Plivo’s signature is based on the full URL of the callback plus all the POST parameters, sorted by key and concatenated together.

I convert the parameter Map into a List[(String, Seq[String])] (a List of Tuples) because Maps don’t have a sort operation. We’ll soon be flattening it anyway, so the List is a pretty good structure. (Aside: another approach, superior in more general circumstances, would be to convert the Map to a TreeMap, which keeps its keys sorted.)

The _ is an interesting character in Scala and is used in a few different ways.  Programming in Scala calls it, in passing, “filling in the blank”, which generally holds true; here we fill in the blank in sortBy with the key of each tuple.  Then we flatten the list, combining each key with its Seq of values, and mkString works perfectly since it just concatenates everything.  The x => syntax names x as a placeholder for each List item; it is analogous to _, which won’t work here since we need to reference the item twice.
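Putting those pieces together, a sketch of the concatenation (the parameter values are made up):

val params: Map[String, Seq[String]] = Map(
  "To"   -> Seq("15551230000"),
  "From" -> Seq("15559870000"))

val concatenated = params.toList  // List[(String, Seq[String])], since Maps can't be sorted
  .sortBy(_._1)                   // "fill in the blank": sort the tuples by key
  .flatMap(x => x._1 +: x._2)     // x is referenced twice, so _ won't do
  .mkString                       // concatenate everything into one string

// concatenated == "From15559870000To15551230000"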

OO Design tradeoffs

So I’m using Action Composition to take a Request and validate the HMAC signature, and first class functions to pass on Response responsibility, or otherwise respond with BadRequest if there is a problem with the signature or params.
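A sketch of that composition, reusing the isAuthentic helper and list processing from above; the header name, callback URL, and auth token are illustrative stand-ins:

import play.api.mvc._
import play.api.mvc.Results._

val callbackUrl = "https://example.com/plivo/answer"  // stand-in
val authToken   = "secret"                            // stand-in

def PlivoAuthenticated(f: Request[AnyContent] => Result): Action[AnyContent] =
  Action { request =>
    (for {
      sig    <- request.headers.get("X-Plivo-Signature")  // assumed header name
      params <- request.body.asFormUrlEncoded
    } yield {
      val payload = callbackUrl +
        params.toList.sortBy(_._1).flatMap(x => x._1 +: x._2).mkString
      if (isAuthentic(payload, sig, authToken)) f(request)  // hand off the Response
      else BadRequest("invalid signature")
    }).getOrElse(BadRequest("missing signature or parameters"))
  }

// Usage in a controller:
// def answer = PlivoAuthenticated { request => Ok("<Response/>") }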

A more Java-influenced design might throw Exceptions on error in the delegated chain of command.  That has the benefit of being more abstract: if this function isn’t called in response to an HTTP request, the invoking exception handler could do something other than responding BadRequest as we do here.  The cool thing about Scala is that I could just as well do that, but I make a more opinionated choice since I know how my class will be used in all cases.

Scala keeps OO, but turns its use on its head and makes it always possible to ask:  “gee, does this REALLY need to be abstracted?”  More often than not, the more elegant and concise answer is to reach into the functional bag of tricks.


Zabbix 1.8.9 Debian Squeeze Backport

I was beginning to get hit by a number of bad bugs in the Debian Squeeze Zabbix 1.8.2 package.  If you aren’t aware, Zabbix is a nifty data center monitoring system that is only slightly annoying compared to most other systems, which are very annoying to set up and use.

Most notably, this package runs safely on PostgreSQL 9.1 from squeeze-backports and contains many performance improvements.  It should be a drop-in upgrade for the distro package.

Get it here:
http://kev009.com/files/zabbix-1.8.9-squeeze.tar.gz


Configuration Management Software Sucks

Yes.  Configuration Management Software Sucks.  Horribly.

The main problem is that n-th order tweakability is preferred over convention.  It’s just stupid.  There is a core set of things that just about everybody needs to do.  Those should be dead simple, ready to uncomment and run.  The set of operating systems used in the enterprise is fairly small:  RHEL 5, RHEL 6, Debian 6, Ubuntu LTS.  A configuration system should be opinionated and have complete out-of-the-box support for these platforms.  Simple rulesets for the basics that nearly everyone uses should be ready to go:  package management, process initialization, file management, ssh, sudo, DNS, Apache, PAM, PostgreSQL, MySQL, OpenLDAP, etc.  Keep it simple.  Keep it simple.  Keep it simple.  Resist all urges to add complexity.

That’s not the case.

You’d think after 30 years of Unix, BSD and Linux network deployments this would be pretty well trodden ground.  Wrong.  It’s a complete crapshoot and everybody does things differently.  Pick your poisons and reinvent the stack ad infinitum.

This is one of the few areas I’m green with envy of the Microsoft side of the fence.  Between Active Directory, Group Policy,  and maybe a third party tool or two for cloning and installs and such, Microsoft environments can easily be set up and managed well by complete morons (and often are).

Puppet

Puppet seems to have potential.  Of course, out of the box you’re pissing in the wind with a blank slate, and most books and sites will have you following tutorials to rewrite rulesets that thousands of people before you have similarly cobbled together, poorly.  As a Ruby project, it unsurprisingly has vocal hipster fanboys.  Unfortunately, they forgot to parrot their DRY principle to each other.

It centers on a domain-specific language, which isn’t so bad..  but in no time flat you’ll start seeing full-blown Ruby programs intermingled.  Ugh.  But it’s not so bad if you stick to the basics.

If you look around you can find reasonably complete module sets, e.g. http://www.example42.com/.  It’s not all gravy, as these are heavily interdependent and kludgy.  If you want a clean, simple solution you’re back to rolling your own with some healthy copy and paste.

Since it’s a Ruby project, aside from the annoying fanboys, you’re also going to run into scalability problems past a few hundred nodes.  There are mitigation strategies, but it’s a joke compared to something like Cfengine.

Due to the hype, you’ll find decent versions in the Debian and Ubuntu backports repos.  RHEL 5 and 6 are covered by a Puppet Labs repo.  Versions 2.6 and 2.7 are therefore readily available, and as long as your master runs the newer version you shouldn’t have interop problems.

All things considered, Puppet is probably the best choice at the moment.  It sucks, but it’s got a lot of momentum behind it.  There are mountains of docs, books, and tutorials to get you going and nothing is too foreign or hard to grasp.

Cfengine 3

I really want to like Cfengine.  It’s incredibly lightweight and hardcore ROFLscale.  It’s got serious theory behind it, and older versions have been used in massive deployments.  But it’s worse than a blank slate: it’s even lower level and less complete than the others.

You really need to add a promise library to get features that should be included by default.  These are all stagnant, though, and still leave much to be desired.

There’s a company behind it doing something or other, but the open source version is raw.  If you have more than one Linux distribution, I’ll pretty much guarantee the packages are incompatible.

The repo choices aren’t great either:  Uncle Bob’s PPA on Ubuntu, out of luck on Debian, and the RPMs in the EL repos look out of date.  You can of course get source and binaries from the Cfengine company, but that’s not my preferred way to install things and it makes bootstrapping harder than it needs to be.

I haven’t tried the latest release, but quickly gave this one up when I found severe incompatibilities between point releases.  Madness.  You’d think people inventing something like promise theory could handle something as simple as version stability.

Ping me when a corporation backs Cfengine with a good promise library, some standard tasks, and repos for the common operating systems.

Bcfg2

Bcfg2 made the most sense to me out of the box.  XML is yucky and out of fashion these days, but Bcfg2 manages to use it acceptably.  Consequently, most things are declarative, easily read, and overall easy to mimic.  Beyond that, you can tap into some Python templating and generator machinery.  But yes, these guys finally didn’t put n-th order tweakability above the common cases!  Installing packages and ensuring services are on is a snap, as the sketch below shows.
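For instance, a bundle along these lines is roughly all it takes. This is sketched from memory, so treat the element names and file layout as approximate rather than authoritative:

<!-- Bundler/ssh.xml: keep the package installed, the daemon enabled, and
     the config file managed; the concrete entries are bound elsewhere -->
<Bundle name="ssh">
  <Package name="openssh-server"/>
  <Service name="ssh"/>
  <Path name="/etc/ssh/sshd_config"/>
</Bundle>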

They’ve got their own repos for many distros so installation isn’t bad.

The client and server are Python so you’ll have similar scaling problems to Puppet in large environments.

My biggest grievance with Bcfg2 is that the server needs intimate knowledge of each operating system version’s package repos.  In a heterogeneous environment you’ll fumble around writing a good bit of XML to define them.

The main thing Bcfg2 is lacking right now is community momentum.  With repo definitions included by default and some more documentation work, I think this would be a great system for small to medium deployments.

Conclusions

The lot of this stuff is really terrible.  End-to-end system management under *nix is a major pain point.  On top of it, you’ll need a fairly free-form monitoring framework (these also all suck) and a directory service.  Mix and match an impossible array of projects and eventually you’ll find your own recipe that sort of works.  Except everyone does it differently, so you’ll constantly be learning and redoing the same things over and over anyway.

It’s not fun.  What we need is end-to-end integrated thinking.  This area is still ripe for the picking.  Oh Red Hat, where art thou?

GUNNAR Rocket Onyx

GUNNAR Optiks Computer Glasses Review

Intro

As one might expect of a computing professional, I spend a lot of time in front of LCD screens.  Couple this with the modern assault of compact fluorescent lights (CFLs), and both are heavy on the blue spectrum.  Depending on what you read and believe [1], this can have a number of health implications.

For the past few weeks, I’ve been swapping yellow lenses into my sport sunglasses while at the computer.  I’ve been pleased with the result: I’ve noticed an easier time going to bed at night, and subjectively my eyes feel less tired.

Swapping the lenses so often became a bit tedious, and I had to clean them on every swap, so I began looking for a dedicated pair of glasses.  GUNNAR Optiks has a decent marketing machine that wooed me into purchasing their purpose-built product.  I was eager to try them out and see if there were any benefits to specialty computer glasses.

Pros

  • Amber tint reduces blue shift as expected.
  • Look decent.  I would comfortably wear these at my desk in an office environment.  The GUNNAR lens coating gives off a bluish reflection that tones down the impact of the yellow lens to onlookers.
  • Comfortable frame.  No problems here, I could wear the frames all day.

Cons

  • Magnification.  Deal breaker.  I wasn’t expecting a product like this to alter my eyes’ focus, especially without ample warning.  I wasn’t sure about the magnification until reading another review [2].
  • Quality.  Chinese made, feel flimsy, and other reviews say the finish can flake off.
  • Cost.  For the relative quality, these are retailing for 2-3x the price they should be.

Conclusion and Verdict

I’m not afraid to spend money on a decent product, even if a good chunk of that is marketing and brand name.  However, the GUNNARs fall short in both quality and execution, and I would advise looking elsewhere.

The magnification is a deal breaker for me.  I don’t need it, I’m not used to it, and it gave me a headache.  I find it borderline negligent [3] to not mention this beyond fine print about “diopters” (note:  I’m not an Rx wearer; this wasn’t a familiar term) on the bottom of the packaging.  It made the tried-and-true ergonomic habit of taking my eyes off the monitor and looking around awkward.

I’m going to stop by an optometrist and see if they can make a neutral real-glass lens (superior optics and scratch resistance) with a slight yellow or orange tint and an anti-reflective coating for a reasonable price.  Alternatively, sport, safety, or shooting glasses from a reputable vendor can be cheaper than the GUNNARs and do a fine job.  Prescription wearers should consult their optometrist, but I saw passing discussion that the anti-reflective coatings are great, and of course an optometrist has the expertise to adjust the power and focal length directly to your needs.

The GUNNARs are getting returned.

Further Readings


Something good about every language I used in 2010

Inspired by Samuel Tardieu’s post, I want to do a year-in-review of all the languages I have used this year.  A lot of times we prima donna programmers complain about anything and everything.  I really enjoyed the positive outlook of Samuel’s post and want to take note of my experiences with a similar attitude.

Bread & Butter

  • Python – My language of choice for the year.  Whether prototyping, experimenting, developing a Facebook application, maintaining a test framework I wrote for my workplace, or implementing cryptographic algorithms for a security course, Python continued to serve me well.  Between a copy of “Python Essential Reference” and PyPI, I feel there are very few problems beyond my means thanks to the power of this beautiful language and its surrounding community.
  • Java – As a student, I pounded out many a line of Java throughout the wee hours of the morning in my capstone classes.  Java seems to be the New Age language of academia, and I can speak it universally to my classmates and professors as a lingua franca.  I’ve noticed that my Java programs tend to structure themselves well without much effort, thanks to the strong object influence forced by Java and its expansive standard library and Collections classes.  I also used it at two collegiate programming competitions, an entirely different experience from normal software development, where the large standard library again came in handy.

Good Progress

  • C – I launched a good-sized networking project in C as my first big project in the language and contributed a number of portability fixes to the libevent project.  Fast to compile, fast at runtime, and offering full low-level control, C is a great language for Unix systems programming.  I greatly expanded my knowledge of the POSIX interfaces this year and really enjoy programming at this level.  I’ve noticed that some principles from higher-level languages have rubbed off on my C style; namely, data hiding and well-formed/adaptable interfaces (see the post right before this one).
  • C++ – Been putting this one off because of all the FUD and intimidation at the sheer size of it.  C++ is pretty much the Latin of our field and is used in everything from safety-critical Jet aircraft systems, to GUIs, to games, to JITs, to cutting edge research.  As some of the pundits say, C++ is the language for “Demanding Applications”.  If you consider Java as the Flight Engineer of a large aircraft, C++ is definitely in the Pilot seat.  You have full control and high visibility of what is going on, but if you aren’t careful you can crash and burn.  I’ve probably progressed to the advanced beginner stage where I can use it as a better C but haven’t endured the trials and tribulations of an expert in the art of C++, nor read important references like Scott Meyers’ “Effective C++” series.  I really like the power and efficiency of the STL and plan on knowing enough C++ to use it when called upon.

Breaking New Ground

  • VHDL – A required Electrical Engineering course exposed me to the entirely different paradigm of programmable hardware (FPGAs).  This was an eye-opening experience: fundamentally, digital design is concurrent.  There may be valuable lessons here for both academic and professional Computer Science, and I need to explore more.  In 2011, I’d like to buy my own FPGA development board and work through the design of a simple CPU to gain further appreciation of hardware and VHDL or Verilog.

Back Burner

  • PHP – The first language I seriously learned and used some 12 years ago (I dabbled in Perl before that at the ripe age of 8, and probably Lego Logo a year before that :-P ).  I’ve been keeping an eye on it and it seems some of the Framework movement that stole a lot of developers away to other languages has sprouted mature analogues in PHP land.  No longer just C for the web, PHP 5.3 continues the lineage of the 5-series as a serious object-oriented language for web development that is basically universally available and dead simple to scale.  The extent of my PHP coding in 2010 was limited to maintaining some programs I’d written in years past (aside from merely installing/using PHP products like this blog).

On to 2011

  • D – D2 has me really excited.  In many respects, it seems like an evolution of C++ with a healthy removal of backward compatibility.  Embracing fast compile times, integrating concurrency and message passing, and allowing easy interfacing to C libraries make this a language capable of “Demanding Applications”.  Perhaps most intriguing is the use of the language proper for metaprogramming and compile-time programs.  I have Andrei Alexandrescu’s book on my shelf and have thumbed through it a few times.  The fact that he is involved speaks volumes about D’s potential, and his book looks superbly written.  2011 means working my way through the book and taking on at least one sizable project in D.
  • Erlang – Erlang has been on my radar for a couple of years now.  The fact that OTP has roots in the demanding and critical realm of telecom means this is a serious language, and it delivers an interesting take on concurrency.  Erlang has already proven itself effective for XMPP servers and message queues.  This may yet be one of the best languages around for scalable networking applications, and I’d like to get some hands-on experience with it in 2011.
  • Haskell – I don’t know much about Haskell other than playing around with TryHaskell.  What I do know is that Haskell has a fairly mature Software Transactional Memory implementation, and that alone interests me.  I’ve also heard the optimizing compiler is pretty good.  Thorough investigation is due in the second half of the year.

No Nonsense Logging in C (and C++)

A lot of times people do zany things and try to reinvent wheels when it comes to programming. Sometimes this is good: when learning, when trying to improve the state of the art, or when trying to simplify when only two-ton solutions are available.

For a current daemon project I need good, fast, thread-safe logging. syslog fits the bill to a tee and using anything else would be downright foolish — akin to implementing my own relational database. There’s one caveat. For development and debugging, I’d like to not fork/daemonize and instead output messages to stdout. Some implementations of syslog() define LOG_PERROR, but this is not in POSIX.1-2008 and it also logs to both stderr and wherever the syslog sink is set. That may not be desired.

So, the goals here are: continue to use syslog() for the normal case as it is awesome, but allow console output in a portable way. Non-goals were using something asinine like a reimplementation of Log4Bloat or other large attempt at thread-safe logging from scratch.

Using function pointers, we can get a close approximation of an Interface or Virtual Function of Object Oriented languages:

/* Function pointers with the same signatures as POSIX syslog() and setlogmask() */
void (*LOG)(int, const char *, ...);
int (*LOG_setmask)(int);

These are the same parameters that POSIX syslog() and setlogmask() take. Now, at runtime, if we desire to use the “real” syslog:

LOG = &syslog;
LOG_setmask = &setlogmask;

If we wish to instead log to console, a little more work is in order. Essentially, we need to define a console logging function “inheriting” the syslog() “method signature” (or arguments for non-OO types).

/* In a header somewhere */
void log_console(int priority, const char *format, ...);
int log_console_setlogmask(int mask);

And finally, a basic console output format:

/* Private storage for the current mask; start with everything enabled,
   matching syslog()'s initial behavior */
static int log_consolemask = LOG_UPTO(LOG_DEBUG);

int log_console_setlogmask(int mask)
{
  int oldmask = log_consolemask;
  if(mask == 0)
    return oldmask; /* POSIX definition for 0 mask */
  log_consolemask = mask;
  return oldmask;
}

void log_console(int priority, const char *format, ...)
{
  va_list arglist;
  const char *loglevel;

  /* Skip priorities absent from the mask, matching setlogmask() semantics */
  if (!(LOG_MASK(priority) & log_consolemask))
    return;

  va_start(arglist, format);

  switch(priority)
  {
  case LOG_ALERT:
    loglevel = "ALERT: ";
    break;
  case LOG_CRIT:
    loglevel = "CRIT: ";
    break;
  case LOG_DEBUG:
    loglevel = "DEBUG: ";
    break;
  case LOG_EMERG:
    loglevel = "EMERG: ";
    break;
  case LOG_ERR:
    loglevel = "ERR: ";
    break;
  case LOG_INFO:
    loglevel = "INFO: ";
    break;
  case LOG_NOTICE:
    loglevel = "NOTICE: ";
    break;
  case LOG_WARNING:
    loglevel = "WARNING: ";
    break;
  default:
    loglevel = "UNKNOWN: ";
    break;
  }

  printf("%s", loglevel);
  vprintf(format, arglist);
  printf("\n");
  va_end(arglist);
}

Now, if console output is what you desire at runtime you could use something like this:

LOG = &log_console;
LOG_setmask = &log_console_setlogmask;
LOG_setmask(LOG_UPTO(LOG_DEBUG)); /* enable everything up to and including DEBUG */

LOG(LOG_INFO, "Program Started!");

In about 60 lines of code we got the desired functionality by slightly extending things, rather than reinventing them or pulling in a large external dependency. If C++ is your cup of tea, it is a trivial reimplementation where you can store the console logmask as a private class variable; a sketch follows.
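For the curious, a minimal sketch of that C++ flavor; the class and member names are my own, not from any library:

#include <cstdarg>
#include <cstdio>
#include <syslog.h>

class ConsoleLogger {
public:
    /* Mirror setlogmask(): 0 queries the mask, anything else replaces it */
    int setmask(int mask) {
        int old = mask_;
        if (mask != 0)
            mask_ = mask;
        return old;
    }

    void log(int priority, const char *format, ...) {
        if (!(LOG_MASK(priority) & mask_))
            return;
        va_list arglist;
        va_start(arglist, format);
        std::vprintf(format, arglist);
        va_end(arglist);
        std::printf("\n");
    }

private:
    int mask_ = LOG_UPTO(LOG_DEBUG); /* the console logmask as private state */
};

Since C function pointers can’t point at member functions, you would dispatch through an abstract Logger interface (or std::function) instead of the raw LOG/LOG_setmask pointers.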

Some notes:

  1. You should still call openlog() at the beginning of your program in case syslog() is selected at runtime. Likewise, you should still call closelog() at exit.
  2. It’s left as a trivial exercise to the reader to define another function to do logging to both stdout and, using vsyslog(), the syslog. This implements LOG_PERROR in a portable way.
  3. I chose stdout because it is line buffered by default. If you use stderr, you should combine the loglevel, format, and newline into one string with snprintf before a single vfprintf call on the variable arglist to prevent jumbled messages (see the sketch after this list).
  4. Of course, be cognizant that the format string is passed through, and as usual do not allow any user-supplied format strings.
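For note 3, the shape of that stderr variant might look like this; loglevel_string() is a hypothetical helper, i.e. the switch from log_console() refactored to return the prefix:

const char *loglevel_string(int priority); /* hypothetical: the switch, refactored */

void log_stderr(int priority, const char *format, ...)
{
  char buf[1024];
  va_list arglist;

  if (!(LOG_MASK(priority) & log_consolemask))
    return;

  /* Build one format string up front so a single vfprintf() emits the
     whole line, avoiding interleaved output from concurrent threads */
  snprintf(buf, sizeof(buf), "%s%s\n", loglevel_string(priority), format);

  va_start(arglist, format);
  vfprintf(stderr, buf, arglist);
  va_end(arglist);
}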

Why VIM is not my favorite editor


UPDATE:
clang_complete is what the people want and what the doctor ordered:

let g:clang_snippets=1
let g:clang_conceal_snippets=1

Ctrl-X, Ctrl-U, profit. Another awesome development for LLVM!

VIM clang_complete


It sucks for C and C++ development.

Popup code completion (“IntelliSense”) is a godsend.  Instead of flipping back and forth between an API reference and your code, a non-invasive popup of available functions, method signatures, struct members, instance variables, etc. is right at your fingertips.  It’s especially useful when it contains the declaration’s comment/Doxygen/JavaDoc.

Building a ‘ctags’ file of my system libraries takes ten minutes and weighs in at 1.5 GB for VIM’s integrated omnicomplete.  Any time the headers are updated, it has to be manually rebuilt, and the project’s tags need to be rebuilt on every change.  Unbearable.

I also can’t get inline function/method signatures or automatic struct member completion without a three-year-old script, omnicppcomplete.  Yes, this is true for plain old C too.

The one editor I’ve found that provides the level of introspection I expect yet otherwise stays out of the way is KDevelop 4.1.  It basically takes the Kate text editor, with awesome syntax highlighting and standard editing features, and adds some of the best auto-completion I’ve seen for C or C++.  It’s fast too, and doesn’t require a ridiculous manual scan or gigabyte symbol database.  It just works – automatically.

Still, editing from the console is pretty convenient, especially on slow remote SSH connections.  Lazyweb, am I missing something that VIM gurus know and I don’t?  Does emacs provide the level of completion I’m looking for on the console, out of the box?

I know the LLVM devs have some code brewing that uses Clang for syntax completion.  Maybe there’s light at the end of the tunnel.

My .vimrc looks like this if anyone has any suggestions:


Stop Social Distributed Version Control Diaspora!

A ton of people use github, gitorious, and bitbucket these days.  Aside from the obvious benefits of dVCS, these sites have excellent features such as:

  • Code review
  • Merge requests
  • Repository “forking” (in a good way)/cloning and easy methods to let upstream know about your branch
  • Project following/notification

Basically, the buzzword they bill this as is “social coding”.  The idea isn’t entirely new, considering mailing lists with patches have been in use for ages, but the new interfaces are intuitive and visually appealing.

The problem?  Walled garden.  One system doesn’t work with the other.  Worse if you want to run your own project infrastructure.  You’d think as FOSS programmers we’d know better than this and not fall into the same pitfall that social networks are currently in.

I plead with you, social coding providers, come together and create a universal API in which we can uniformly share and exchange information such as followers, branches, notifications, and merge requests.

I believe this situation is important enough to warrant the attention of community leaders.  I have sent email to the Free Software Foundation and The Linux Foundation asking for guidance.   If you would kindly pass this along to them and members of Canonical, Debian, and other large FOSS groups we will stymie community fragmentation and develop a way to further the openness that has allowed the creation of such fantastic tools.

Sincerely,
The Undersigned Free Software Advocates: place comment on this entry

———-

Edit #1: Response to a couple of frequently asked questions:

  • The idea is to keep the information open, free, and interchangeable as information is what these sites add. We’ve had public VCS repos and web interfaces for ages.  The fact that gitorious is open source does not address this.
  • The value of these sites is the social factor, and for the most part they bottle it up.  If you prefer github, but a project you want to follow uses gitorious (like Qt), there is no means of exchanging the social factor.  This is nearly identical to facebook → Diaspora.  While some of these sites do have extended APIs, we need to come together and make sure that they inter-operate and also work with self-hosted systems like trac and redmine.

Edit #2: A good summary of what needs to be done on reddit.

Edit #3:  A response from Richard Stallman, Atlassian (bitbucket), and github!

An email from RMS:

> There is a chance to increase openness and productivity by making these
> systems as well as self-hosted solutions such as trac, redmine, and
> reviewboard inter-operate. Developing a standard protocol for the social
> features such as clone notification, merge requests, and commit comments
> would also make integration into the commandline and GUI tools easily
> possible.

It seems like a good idea to me, to the extent it is feasible. I
don’t know how these tools work, but I think that most of them use
communication protocols based on their data formats, which are unique
to each program. So we could only unify the aspects of the protocols
which don’t depend on those internal data formats. But that much
could be useful.

Both Atlassian and github saw this as a valid point and were not opposed to such a standard, but neither indicated they were working toward anything. Can anyone advise on how to start a working group or RFC to define such a protocol?

