Programmers?

I forgot about the step between machine code and interpreted languages, so you are correct: it is a 3GL.

However, it is still incredibly bold to compare writing sort in C to using lambda functions in Haskell.

How about I throw in some Smalltalk?
Code:
x := SortedCollection sortBlock: [:a :b | a > b].
x add: 1; add: 2; add: 3
Note that I don't need to tell it to sort or resort; it does so as I add objects.
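For comparison, here's a rough Haskell equivalent of that keep-it-sorted-on-every-add behaviour; just a sketch using the standard `insertBy` from `Data.List`, with a descending comparator mirroring `[:a :b | a > b]`:

```haskell
import Data.List (insertBy)
import Data.Ord (Down (..), comparing)

-- Insert while keeping the list sorted, mirroring SortedCollection's
-- behaviour of staying sorted on every add. Descending order, like
-- the Smalltalk sort block [:a :b | a > b].
addSorted :: Ord a => a -> [a] -> [a]
addSorted = insertBy (comparing Down)

main :: IO ()
main = print (foldr addSorted [] [1, 2, 3])  -- stays sorted: [3,2,1]
```

The difference, of course, is that the Haskell version returns a new list each time rather than mutating a collection in place.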
 
I forgot about the step between machine code and interpreted languages, so you are correct: it is a 3GL.

However, it is still incredibly bold to compare writing sort in C to using lambda functions in Haskell.

How about I throw in some Smalltalk?
Code:
x := SortedCollection sortBlock: [:a :b | a > b].
x add: 1; add: 2; add: 3
Note that I don't need to tell it to sort or resort; it does so as I add objects.

I'm going to take a guess, as I don't know Smalltalk that well, and say that's a predefined sorted collection written beforehand. Haskell defines it using a filter function, which is essential for list handling.

Am I right about the already defined thing?

Haskell doesn't really have collections like that; there are just lists, and every operation on a list returns a new list, because functional languages are mostly immutable (the state thing). Pretty much everything in Haskell involves some sort of list and performing filter, map, or reduce on it. :P
Google's search engine indexing is based on MapReduce-style processing.
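As a toy illustration of that MapReduce idea (the data and names below are made up, and this is nothing like Google's actual code): the "map" step turns each document into (word, count) pairs, and the "reduce" step sums the counts per word.

```haskell
import qualified Data.Map.Strict as Map

-- Toy word count in the MapReduce style: the map step emits a
-- (word, 1) pair for every word in every document; the reduce step
-- sums the counts per word.
wordCount :: [String] -> Map.Map String Int
wordCount docs =
  let mapped = concatMap (\doc -> [(w, 1) | w <- words doc]) docs  -- map step
  in  Map.fromListWith (+) mapped                                  -- reduce step

main :: IO ()
main = print (Map.toList (wordCount ["to be", "or not to be"]))
-- [("be",2),("not",1),("or",1),("to",2)]
```

Because each document's map step is independent, the mapped pairs could in principle be produced on different machines and reduced together afterwards, which is the whole point of the model.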

It's the difference between writing an array sort function yourself and calling .sort() in Java.
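The same point holds in Haskell: the library sort is already written, so you call it rather than redefining it. A minimal sketch:

```haskell
import Data.List (sort, sortBy)
import Data.Ord (Down (..), comparing)

main :: IO ()
main = do
  print (sort [3, 1, 2])                     -- library sort, ascending: [1,2,3]
  print (sortBy (comparing Down) [1, 2, 3])  -- with a comparator, descending: [3,2,1]
```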

I'm not going to argue against Smalltalk though; it's pretty much one of the languages that implements OO well, from what I've learned of it so far. It has things in common with Objective-C, which I know.

Quite, but I don't need to know the somewhat complex algorithms needed to add my delivery to a container and have it assigned to a van which will be given a delivery address, do I? I just need to know that my Delivery object is contained in a Container object, which in turn is assigned to and contained within a Van object, which has an Address to deliver to.

Although graph theory can find the route for the van, sticking a box in a van object is a trivial thing, and it is actually a tree data structure (a special case of a graph): the van is a node and the stuff in it is child nodes.
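That containment tree could be sketched in Haskell like this; all the type and field names here are made up for illustration:

```haskell
-- A sketch of the containment tree: a Van node holding Container
-- child nodes, which in turn hold Delivery leaves. Names invented.
data Delivery  = Delivery String        deriving Show
data Container = Container [Delivery]   deriving Show
data Van       = Van String [Container] deriving Show  -- delivery address + load

-- Sticking a box in a van is just attaching a child node.
loadVan :: Container -> Van -> Van
loadVan c (Van addr cs) = Van addr (c : cs)

main :: IO ()
main = print (loadVan (Container [Delivery "parcel #1"])
                      (Van "10 High Street" []))
```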

Finding the route would simply be a matter of placing nodes to make a map and performing A* on the graph; the van node would then travel along the nodes on the graph. Graph theory easily lets you find routes, work out the most efficient loading of the vans (which might be NP-complete, I'm not sure), produce estimated times, etc.
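A* itself needs edge costs and a heuristic, but the core route-finding idea can be sketched with plain breadth-first search on an unweighted road graph (the graph below is invented for illustration; A* refines this by ordering the frontier by cost so far plus estimated remaining distance):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Graph = Map.Map String [String]

-- Breadth-first search: shortest route by number of hops.
route :: Graph -> String -> String -> Maybe [String]
route g from to = go Set.empty [[from]]
  where
    go _ [] = Nothing
    go seen (path@(node : _) : rest)
      | node == to             = Just (reverse path)
      | node `Set.member` seen = go seen rest
      | otherwise =
          let next = Map.findWithDefault [] node g
          in  go (Set.insert node seen) (rest ++ [n : path | n <- next])
    go seen ([] : rest) = go seen rest

-- A made-up road graph for illustration.
roads :: Graph
roads = Map.fromList
  [ ("Depot", ["A", "B"])
  , ("A", ["C"])
  , ("B", ["C", "D"])
  , ("C", ["D"])
  , ("D", [])
  ]

main :: IO ()
main = print (route roads "Depot" "D")  -- Just ["Depot","B","D"]
```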

For the most efficient van loading you might have to try every possible combination, or use an approximation algorithm, which is a bit naff really but possibly the only way, unless I think about it some more.

If you did not know graph theory those might be hard problems, yet in fact they are trivial. Simplifying complexity is what this is about, and it only took me 10 seconds to structure it. It's stuff like this, especially loading the vans most efficiently to reduce costs, travel, and time to delivery, that saves big companies a lot of money. It may save only £10 a van, but when you run many vans it's a saving which adds up quickly.
 
This thread is funny. All this talk of professionalism and algorithms and not a single mention so far of design patterns :eek:
 
This thread is funny. All this talk of professionalism and algorithms and not a single mention so far of design patterns :eek:

Why would we mention design patterns above any of the other important development concepts that haven't been mentioned thus far?
 
I'm going to take a guess, as I don't know Smalltalk that well, and say that's a predefined sorted collection written beforehand. Haskell defines it using a filter function, which is essential for list handling.

Am I right about the already defined thing?
Which is my point. Why should I have to redefine sorting when it is older than the wheel?
 
This thread is funny. All this talk of professionalism and algorithms and not a single mention so far of design patterns :eek:
This discussion is way below design patterns. Tbh, it looks like you've just tried to name-drop to make yourself look above everyone else in this thread.
 
While I admit that for most uses redefining a sort function is unnecessary, what everyone is neglecting to consider is that algorithms that do the same thing have different trade-offs. For instance, quicksort is preferable to merge sort if memory is a consideration, as it can sort in place, whereas merge sort has a better guaranteed time complexity but uses extra memory.
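Both algorithms can be sketched in Haskell, with the caveat that here both versions allocate new lists, so quicksort's in-place memory advantage only really applies in imperative languages like C:

```haskell
-- Quicksort: partition around a pivot, recurse on each side.
quicksort :: Ord a => [a] -> [a]
quicksort []       = []
quicksort (p : xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

-- Merge sort: split in half, sort each half, merge the results.
mergesort :: Ord a => [a] -> [a]
mergesort []  = []
mergesort [x] = [x]
mergesort xs  = merge (mergesort as) (mergesort bs)
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    merge [] ys = ys
    merge xs' [] = xs'
    merge (x : xs') (y : ys)
      | x <= y    = x : merge xs' (y : ys)
      | otherwise = y : merge (x : xs') ys

main :: IO ()
main = do
  print (quicksort [3, 1, 2])  -- [1,2,3]
  print (mergesort [3, 1, 2])  -- [1,2,3]
```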

But this is the real nitty-gritty and for most people not necessary, especially in languages like Java.

As for Haskell, I can't see it being useful in real-life applications, but it is a completely different style of programming and your train of thought has to change completely, which is a useful thing to realise when one program can vary so much between Haskell and Java.

All in all everyone should have fun :-P
 
Well, Tim Sweeney (the lead programmer of the Unreal Engine) did a presentation on next-generation languages for writing games. Basically, he showed a list of statements in Java or C and pointed out how many things can go wrong in just that section, then showed the equivalent Haskell 'statements', where there's hardly anything that can. He did a whole section on the 'genius of Haskell', but he also admits the syntax scares away 'mainstream' programmers, and he gave some slides on what he dislikes about Haskell.



Here are some of the slides

http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf
 
This thread is a little off topic guys, but I feel the need to argue with all of you :0


Programming, in itself, is not (of course) mathematics. What language you decide to use for a particular task should not be determined by how you view the syntax, it should be determined by how efficient it is at solving the problem. I know some excellent mathematicians who are more than happy to use Perl when they know the marginal increment in performance gained from moving to C isn't worth the effort. A smart programmer is much more useful than a stubborn one.

And FFS map reduce is the most pathetic thing to be hailed as a revolution in programming ever :0
 
And FFS map reduce is the most pathetic thing to be hailed as a revolution in programming ever :0

No one ever said it was, yet it is very good at parallel data processing, which is one of the big problems coming up in computing. Google uses a MapReduce-inspired library to index thousands of web pages. There's a white paper on it, and thousands of libraries that do it.

Programming languages just express algorithms
http://en.wikipedia.org/wiki/Programming_language

Algorithms are a subset of math.
http://en.wikipedia.org/wiki/Algorithm

I program in the easiest language I can get. I don't care about performance (within reason); I prefer maintainability and speed of development.
 
Well, Tim Sweeney (the lead programmer of the Unreal Engine) did a presentation on next-generation languages for writing games. Basically, he showed a list of statements in Java or C and pointed out how many things can go wrong in just that section, then showed the equivalent Haskell 'statements', where there's hardly anything that can. He did a whole section on the 'genius of Haskell', but he also admits the syntax scares away 'mainstream' programmers, and he gave some slides on what he dislikes about Haskell.



Here are some of the slides

http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf
It took you a day to google for some (semi)reputable evidence for Haskell above C.. that's all the info I need. The defence rests, your honor.
 
No one ever said it was, yet it is very good at parallel data processing, which is one of the big problems coming up in computing. Google indexes thousands of web pages with it. There's a white paper on it, and thousands of libraries that do it.

Programming languages just express algorithms
http://en.wikipedia.org/wiki/Programming_language

Algorithms are a subset of math.
http://en.wikipedia.org/wiki/Algorithm

No, it is not "very good at parallel data processing"; it's an idea that has existed since the 60s and is a massive step backwards in parallel processing! Jesus, Google introduces it and we're all suddenly brainwashed.

And no, algorithms are NOT a subset of maths. We may often produce them with maths, but saying algorithms are a subset of maths is akin to saying writing a prescription is a subset of medicine :rolleyes:
 
It took you a day to google for some (semi)reputable evidence for Haskell above C.. that's all the info I need. The defence rests, your honor.

What are you talking about? I didn't google anything, I came across some slides and thought someone might be interested.

I could have come up with tons of stuff about C's undefined behaviour: NULL pointers, for example.

C isn't bad; it just has lots of things that can go wrong. On the other hand, it gives you a lot of room for performance tuning: for example, quicksort in Haskell allocates new memory behind the scenes, but in C you can optimise it to sort in place.

It is very hard to write reliable software in straight C, which is why it's not used that much any more except for systems programming.

No, it is not "very good at parallel data processing"; it's an idea that has existed since the 60s and is a massive step backwards in parallel processing! Jesus, Google introduces it and we're all suddenly brainwashed.

Do you want to say why? Google implemented a modified map reduce.
 
Well, he was judging it as a database, and MapReduce isn't really a database. It's two functions that can process lists/arrays, and because you can split arrays up, the work can be spread across cores.

Same guy next week

http://www.databasecolumn.com/2008/01/mapreduce-continued.html

It's not about splitting over cores, it's about splitting over nodes in a cluster (as the number of cores is finite). The only reason MapReduce exists is that it's cheaper to throw more crap boxes at a problem than to buy one decent box running well-implemented code. And that's why I hate it.

I use MapReduce a fair bit, and it's evidently a false economy. You initially think "hey, great, this can answer my question easily in one job", but then you realise that it doesn't, and that you have more (or slightly different) things you need to know, at which point it dawns on you just how useful it would have been to properly index the data in the first place.
 
That's what I meant by cores, as in cores in a cluster. If you can spread across cores you can spread across clusters; it's just a matter of getting the state over to all the boxes, which many frameworks will do for you.

Getting it parallel in the first place is the hard part.
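The split-map-combine shape being discussed can be sketched sequentially in Haskell. This is only the structure: swapping the per-chunk map for a parallel one (e.g. parMap from the separate parallel package, not used here) is what would actually distribute the work across cores or cluster nodes.

```haskell
-- Split a list into fixed-size chunks (the part that could be handed
-- to different cores or boxes).
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = take n xs : chunksOf n (drop n xs)

-- Map each chunk independently and reduce locally, then reduce the
-- per-chunk results globally. Sequential here; the chunk-level work
-- is what a real framework would parallelise.
mapReduce :: (a -> b) -> ([b] -> b) -> [[a]] -> b
mapReduce f reduce chunks = reduce [reduce (map f chunk) | chunk <- chunks]

main :: IO ()
main = print (mapReduce (^ 2) sum (chunksOf 3 [1 .. 10]))  -- sum of squares: 385
```

For this to work, the reduce function has to be associative, which is exactly the restriction that lets the per-chunk results be combined in any grouping.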
 