Interesting new language named Fan? Seemed nifty. However, their stance on generics seems a bit odd. Generics suck, so let's build a couple of cases into the language... Hmmmmmm
Hmmmmmmmmmmm......
Discuss!
Where the bacon hits the non-deterministic fan.
ANTLRWorks is like crack. I am trying to build a bastard child of ML and Ruby, with maybe a little Scheme mixed back in for good measure. One of my favorite features from Haskell is pattern matching, which seems like a no-brainer for adding to a scripting language.
I have also been picking up Ruby lately, and it's surprising how good it is. Some people compare it to Python; this is not the case--it has a lot more delicious features. Succulent indeed.
Then there is Groovy, which is subtly disappointing: it sure as shit beats writing actual Java, but not by enough. At least Java makes sense in its own crippled and ridiculous way.
Try passing a function as a value in Groovy from inside itself. Sure, it has whatever hack is needed for recursion, but apparently it is only half-baked. Very disappointing. How am I supposed to use recursion in conjunction with higher-order functions? I may post something somewhere so it can somehow get fixed. It's kind of embarrassing to bring up, so it seems more sensible just to make my own language (yup, I am crazy).
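The combination I wanted is easy to show in Python terms (the names here are mine, not from any real project): a function that recurses by its own name, yet still works as an ordinary value handed to higher-order functions:

```python
# A recursive function as a first-class value: the inner function
# closes over its own name, so it can call itself even after being
# returned from its enclosing scope and passed around.
def make_factorial():
    def fact(n):
        return 1 if n <= 1 else n * fact(n - 1)
    return fact

fact = make_factorial()
# Higher-order use: the recursive function is just another argument.
print(list(map(fact, [1, 2, 3, 4, 5])))
```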
Won't someone please think of the functions!
Anyways, ANTLR is awesome and I am sleepy. I will probably post more of whatever language I cook up, if it ever amounts to anything.
Java is like a mysterious creature. The language itself has no grandeur or mystery with its decrepit, non-unsigned-integer body, but as a technology it is at the very least a nice way to get a garbage collector that doesn't suck too much. It's also amusing seeing just how hard people are willing to work to get the language into a state that is actually usable.
This stems from a long line of tradition, most likely (I am entirely making this up), of taking boring language X and tarting it up. Some languages have streamlined this process by adding macros, templates, DSLs and all manner of delicious treats to tantalize programmers with their deviously delicious syntactic and semantic confectionery, but at the core of most of them is a dull and powerless language.
What is interesting is the lengths we go to to pave over the obvious problems. Your agile OOP programming language not as agile as you wished? Integration getting you down? Have no fear: inversion of control--IoC--is here to randomly give you a chainsaw and a stirring spoon so you can recombine your classes like some kind of Frankenstein soup.
It's actually not an entirely terrible idea. I say this because, while programming in a language whose name shall be withheld to, well, protect things, I ended up pretty much coding one myself. A bad one, mind you, but I only realized it once I started learning Spring. Like most random-chainsaw-enhanced Frankenstein soups, though, they must be used with caution.
I also more or less discovered a really bad version of Lisp this way (thanks, C++). The problem with programming languages is perspective. A person wearing their Java goggles only sees insanity in the C and dynamically-typed/interpreted camps, and the same goes the other way: functional programmers can only think of how stupid C++ is, etc...
At the heart of the problem is that every programmer is secretly and silently searching for "the solution". It's hardwired into our brains, like some kind of brain-damaged salmon. We cannot escape getting lost up the metaphorical drainpipe of language zealotry. The fact that something so important could be mired in duplicity reads like some inefficiency: to be compressed, optimized and ultimately erased.
This doesn't mean Java is a steaming pile of poorly designed dog poo as a programming language; it's more profound than that. We may be mired in one collective dog poo of non-orthogonality. However, it does not help things when people are seemingly willfully ignorant.
Some of it is good old-fashioned, good-natured idiocy that we maintain at some level pretty much as soon as we are capable of having an opinion. Part of the problem is that computer science gets distilled as technology, and through this process of decanting becomes a form of canon. Relations go in a relational database. Lisp is for crazy guys who write AI. C is a good idea--along with the Von Neumann architecture. The practical reasons for these decisions are forever lost to most people, forests whose trees will never emerge again.
We use these familiar tools as crutches to form our ideas on how to solve problems, never stopping to wonder in greater detail, or perhaps fearful of the dizzying heights technology has built upon itself.
I am somewhat sleepy, but I think it's time I ill-advisedly hurled some thoughts into my blog.
I inherited a project recently that I finally got around to working on: synchronizing network device data in a database, using diffing/journaling of XML documents.
The problem with finding the difference between two XML documents (or anything, for that matter) is deciding on a set of operations that will be used to transform one into the other. The real problem in my case, though, is: given two XML documents, what is the optimal set of edits that transforms one into the other?
Traditionally, utilities like diff find the longest common subsequence (LCS) to minimize the number of edits. In this case I wanted the edits to be as granular and as simple as possible: with a one-to-one correspondence, updating something like a database with the change set becomes an easy task.
The method for actually determining what is different in an XML document is the interesting part, however. In the case of structured data like XML, you often have collections of entities, such as network interfaces, which all have some defining characteristic such as a name or ID code. This can be used as a key to compare the nodes in the XML tree, so that even if they are out of order they can still be properly matched.
The reason for wanting to handle unordered collections of things (sets, some might call them) stems from the initial problem itself, mainly a lack of information. If for some reason the interfaces are discovered in a different order, you really don't want adds and deletes generated simply because something moved.
The way this all ends up being computed is with XPath expressions: one expression to select a node set in each document that is potentially comparable, and another, applied to each individual node in each set, which returns the key used to decide whether two nodes have the same name/identity.
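A minimal sketch of that keyed comparison, using Python's standard `xml.etree` (which supports a limited XPath subset); the element names `if`/`name` and the mtu data are illustrative assumptions, not from the actual project:

```python
# Keyed XML diff: select comparable nodes with one path expression,
# extract each node's identity key with another, then compute
# adds/deletes/updates by key rather than by document order.
import xml.etree.ElementTree as ET

def keyed_diff(old_xml, new_xml, node_path, key_path):
    old = {n.findtext(key_path): n for n in ET.fromstring(old_xml).findall(node_path)}
    new = {n.findtext(key_path): n for n in ET.fromstring(new_xml).findall(node_path)}
    adds = sorted(new.keys() - old.keys())
    deletes = sorted(old.keys() - new.keys())
    updates = sorted(k for k in old.keys() & new.keys()
                     if ET.tostring(old[k]) != ET.tostring(new[k]))
    return adds, deletes, updates

a = "<dev><if><name>eth0</name><mtu>1500</mtu></if><if><name>eth1</name><mtu>1500</mtu></if></dev>"
b = "<dev><if><name>eth1</name><mtu>9000</mtu></if><if><name>eth0</name><mtu>1500</mtu></if></dev>"
# eth0 merely moved, so only eth1's mtu change shows up as an edit
print(keyed_diff(a, b, "if", "name"))
```

Note that reordering alone produces no edits at all, which is exactly the property wanted for the unordered-collection case.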
Sometimes problems are only as simple as you allow them to be. In this case, a fully generic diff or comparison on a pair of XML documents hardly makes sense. Without some information about the meaning of the documents, the resulting data has a high potential for being unwieldy and useless.
Of course, I have thought about abandoning the diff/journaling model altogether, but unfortunately having a list of exact changes that have taken place is too useful.