
comp.lang.ruby

[ANN] Bayesian Classification for Ruby

Lucas Carlson

4/11/2005 6:53:00 AM

I would like to announce a new module called Classifier for Ruby. It is
available from:

http://rubyforge.org/projects/c...

or simply

gem install classifier

With it, you can do things like:

===
require 'classifier'
b = Classifier::Bayes.new 'Interesting', 'Uninteresting' # supports any number of categories of any name
b.train_interesting "here are some good words. I hope you love them"
b.train_uninteresting "here are some bad words, I hate you"
b.classify "I hate bad words and you" # returns 'Uninteresting'
===

Or if you would like persistence:

===
require 'classifier'
require 'madeleine'
m = SnapshotMadeleine.new("bayes_data") {
  Classifier::Bayes.new 'Interesting', 'Uninteresting'
}
m.system.train_interesting "here are some good words. I hope you love them"
m.system.train_uninteresting "here are some bad words, I hate you"
m.take_snapshot
m.system.classify "I love you" # returns 'Interesting'
===

Please send me any feedback about this library, including how you plan
to use it or extend it.

Thank you!
-Lucas Carlson
http:/...

22 Answers

Florian Groß

4/11/2005 1:53:00 PM


Lucas Carlson wrote:

> I would like to announce a new module called Classifier for Ruby.
>
> With it, you can do things like:
>
> ===
> require 'classifier'
> b = Classifier::Bayes.new 'Interesting', 'Uninteresting' # supports any number of categories of any name
> b.train_interesting "here are some good words. I hope you love them"
> b.train_uninteresting "here are some bad words, I hate you"
> b.classify "I hate bad words and you" # returns 'Uninteresting'
> ===

This is wonderful and might make a nice addition to Rails applications
that already offer manual tagging and/or categorization, which is quite
a common thing to have. Perhaps it would be a good idea to also announce
it over there.

I don't know if this is already possible, but b.train(:interesting, ...)
would make an interesting alternative API which would be more flexible.



Jamis Buck

4/11/2005 2:12:00 PM


On Apr 11, 2005, at 7:53 AM, Florian Groß wrote:

> Lucas Carlson wrote:
>
>> I would like to announce a new module called Classifier for Ruby.
>> With it, you can do things like:
>> ===
>> require 'classifier'
>> b = Classifier::Bayes.new 'Interesting', 'Uninteresting' # supports any number of categories of any name
>> b.train_interesting "here are some good words. I hope you love them"
>> b.train_uninteresting "here are some bad words, I hate you"
>> b.classify "I hate bad words and you" # returns 'Uninteresting'
>> ===
>
> This is wonderful and might make a nice addition to Rails software
> that already offers manual tagging and/or categorization which is
> quite a common thing to have. Perhaps it would be a good idea to also
> announce it over there.
>
> I don't know if this is already possible, but b.train(:interesting,
> ...) would make an interesting alternative API which would be more
> flexible.
>

+1. I'd like to see a more general #train API as well, but that's a
minor quibble. Thanks, Lucas, for this lib! I've been wanting something
like this for a while now. :)

- Jamis




Matt Mower

4/11/2005 6:06:00 PM


On Apr 11, 2005 7:54 AM, Lucas Carlson <lucas@rufy.com> wrote:
> I would like to announce a new module called Classifier for Ruby. It is
> available from:
>
> http://rubyforge.org/projects/c...
>

;-)

I ported the Reverend Bayesian classifier from Python to Ruby over the
weekend. If only I'd waited ;-)

M

--
Matt Mower :: http://matt...


Lucas Carlson

4/11/2005 7:37:00 PM


Due to popular demand, #train has been added. If you are using gem, try
gem update classifier. Now you can do anything from:

b.train "Interesting", "here are some good words. I hope you love them"


to

b.train :Interesting, "here are some good words. I hope you love them"

Also, lowercase categories and categories with spaces are now supported.

Tom Reilly

4/12/2005 1:47:00 AM


I happened to notice your posting about classifier.

My problem is this, and I wonder if your program would be useful.

I am an MD taking care of nursing home patients. I wrote a database
program to keep track of all of the phone calls we get. We have used
the program for 2 years. We have over 80,000 phone records which
contain the problem about which the nursing home called and the
recommended treatment.

Using Hash.new, I determined that there are about 22,000 words: some
abbreviations, some correct spellings, some incorrect. There are on
average about 20 words per message, though many of the words are
adjectives, prepositions, and verbs which don't help classification.

Using a Levenshtein distance algorithm for the larger words, it does a
pretty good job of eliminating misspellings, though it works quite
poorly on 3-, 4-, and 5-character words.

# Determine the Levenshtein distance of two strings
def Ld(s, t)
  n = s.size
  m = t.size

  # The distance to an empty string is the other string's length
  return m if n == 0
  return n if m == 0

  # Build an (n+1) x (m+1) matrix; row 0 and column 0 count
  # pure insertions/deletions
  a = Array.new(n + 1) { Array.new(m + 1, 0) }
  0.upto(n) { |i| a[i][0] = i }
  0.upto(m) { |j| a[0][j] = j }

  1.upto(n) do |i|
    1.upto(m) do |j|
      # Strings are 0-indexed, so compare characters i-1 and j-1
      cost = s[i - 1] == t[j - 1] ? 0 : 1
      a[i][j] = [
        a[i - 1][j] + 1,        # deletion
        a[i][j - 1] + 1,        # insertion
        a[i - 1][j - 1] + cost  # substitution
      ].min
    end
  end

  a[n][m]
end
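For cross-checking results, here is a compact, self-contained Levenshtein
distance in idiomatic Ruby (a sketch using two rolling rows instead of a
full matrix; the function name is mine):

```ruby
# Compact Levenshtein distance keeping only the previous and current
# rows of the DP matrix, so memory is O(min row) rather than O(n*m).
def levenshtein(s, t)
  return t.size if s.empty?
  return s.size if t.empty?

  prev = (0..t.size).to_a
  s.each_char.with_index(1) do |sc, i|
    curr = [i]
    t.each_char.with_index(1) do |tc, j|
      cost = sc == tc ? 0 : 1
      curr << [prev[j] + 1,           # deletion
               curr[j - 1] + 1,       # insertion
               prev[j - 1] + cost     # substitution
              ].min
    end
    prev = curr
  end
  prev[t.size]
end

levenshtein("kitten", "sitting")  # => 3
levenshtein("color", "colour")    # => 1
```

Running both implementations over a sample of the 22,000 words is a quick
way to verify they agree before trusting either for misspelling cleanup.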

I'd appreciate any comments you might have.

Thanks

Tom Reilly.


Bob Aman

4/12/2005 2:05:00 AM


> Please send me any feedback about this library, including how you plan
> to use it or extend it.

I think I'm in love. Not sure what I'll do with it yet, but I'm sure
I'll dream something up!
--
Bob Aman


David Garamond

4/12/2005 2:09:00 AM


Lucas Carlson wrote:
> Due to popular demand, #train has been added. If you are using gem, try
> gem update classifier. Now you can do anything from:
>
> b.train "Interesting", "here are some good words. I hope you love them"
>
> to
>
> b.train :Interesting, "here are some good words. I hope you love them"
>
> Also, lowercase categories and categories with spaces are now supported.

I'd even suggest removing the individual #train_... methods. It makes
the API simpler, and how many characters do they save, anyway? Plus,
consider these use cases: 1) category names are changed; 2) category
names contain whitespace; 3) there are 1000+ categories.

--
dave


Glenn Parker

4/12/2005 2:59:00 AM


David Garamond wrote:
>
> I'd even suggest removing the individual #train_... methods.

+1. BTW, I think this is a nifty tool.

--
Glenn Parker | glenn.parker-AT-comcast.net | <http://www.tetrafoi...


Lucas Carlson

4/12/2005 4:59:00 AM


> I'd even suggest removing the individual #train_... methods. It makes
> the API simpler, and how many characters do they save anyway. Plus
> consider these use cases: 1) category names are changed; 2) name of
> categories contain whitespaces, etc; 3) there are 1000+ categories.

1) Category names can't change, but even if they could, this is
implemented via method_missing.
2) I have elegantly handled whitespace in category names.
3) This is implemented via method_missing, not define_method, so
objects don't get bloated.
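A sketch (not the gem's actual source) of how method_missing can route
train_<category> calls to a general #train without defining one method per
category; all class and method names here are illustrative:

```ruby
# Illustrative: dispatch train_<category> through method_missing so no
# per-category method is ever defined and objects stay small.
class BayesSketch
  def initialize(*categories)
    # One word-count hash per normalized category name
    @data = {}
    categories.each { |c| @data[normalize(c)] = Hash.new(0) }
  end

  # General form: train('Interesting', text) or train(:interesting, text)
  def train(category, text)
    words = @data.fetch(normalize(category))
    text.downcase.scan(/\w+/).each { |w| words[w] += 1 }
  end

  def method_missing(name, *args)
    # train_interesting("...") becomes train("interesting", "...")
    if name.to_s =~ /\Atrain_(\w+)\z/ && @data.key?($1)
      train($1, *args)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    (name.to_s =~ /\Atrain_(\w+)\z/ && @data.key?($1)) || super
  end

  def word_count(category, word)
    @data.fetch(normalize(category))[word]
  end

  private

  # Lowercase and underscore spaces so 'Very Uninteresting' maps to
  # the method name train_very_uninteresting.
  def normalize(category)
    category.to_s.downcase.gsub(/\s+/, '_')
  end
end
```

Calls like b.train_interesting(...) fall through to method_missing, while
the general b.train(:interesting, ...) form stays available, and the
normalization step is one way categories with spaces could be handled.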

Florian Groß

4/12/2005 11:47:00 AM


Tom Reilly wrote:

> Using Hash.new, I determined that there are about 22,000 words: some
> abbreviations, some correct spellings, some incorrect. There are on
> average about 20 words per message, though many of the words are
> adjectives, prepositions, and verbs which don't help classification.

Regarding spelling mistakes: given enough overlap between the correct
and incorrect words, that will not be a problem. The Thunderbird spam
filter has, over time, learned to deal with spammers' deliberate
misspellings and obfuscations. I think it works like this:

Spam Message A: Deve|oped Commercia|ized Price
Spam Message B: Pr1ce Commercia|ized
Spam Message C: Developed Commercialized Pr1ce
Spam Message D: Developed Commercialized Price

It will see that there is considerable overlap between those messages,
and when it classifies one as spam it will also learn new data from
that message, which will make it adapt given enough data.

It is, however, a good idea to examine a good portion of its
classification results and to correct them manually if necessary.
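That adapt-as-you-classify loop can be sketched with a toy word-overlap
filter (purely illustrative; a real Bayesian filter weights tokens by
probability rather than using a raw overlap ratio):

```ruby
# Toy filter: a message is flagged as spam when enough of its tokens
# have been seen in earlier spam; flagged messages are then learned
# from, so obfuscated variants ("Pr1ce", "Commercia|ized") get absorbed.
class OverlapFilter
  def initialize
    @spam_tokens = Hash.new(0)
  end

  def learn(message)
    tokens(message).each { |t| @spam_tokens[t] += 1 }
  end

  # Fraction of tokens already known as spammy must reach the threshold
  def spam?(message, threshold = 0.5)
    ts = tokens(message)
    return false if ts.empty?
    known = ts.count { |t| @spam_tokens[t] > 0 }
    known.to_f / ts.size >= threshold
  end

  # Classify, and learn from the message when it is judged spam --
  # this is the step that makes the filter adapt over time.
  def classify_and_adapt(message)
    if spam?(message)
      learn(message)
      :spam
    else
      :ham
    end
  end

  private

  def tokens(message)
    message.downcase.split
  end
end

f = OverlapFilter.new
f.learn("Deve|oped Commercia|ized Price")      # message A is known spam
f.classify_and_adapt("Pr1ce Commercia|ized")   # => :spam
```

Once message B is flagged through its overlap with A, the obfuscated token
"pr1ce" is learned too, so later messages containing it score as partly
known; manual review of the results then corrects any drift.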