Bayesian Regression on segmented data using Kernel Density Estimation
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (DISA; DISTA; DSIQ)
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (DISA; DISTA; DSIQ). ORCID iD: 0000-0003-1173-5187
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (DISA; DISTA; DSIQ). ORCID iD: 0000-0002-7565-3714
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (DISA; DISTA; DSIQ). ORCID iD: 0000-0002-0835-823X
2019 (English). In: 5th annual Big Data Conference: Linnaeus University, Växjö, Sweden, 5-6 December 2019. Zenodo, 2019. Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

The challenge of dealing with dependent variables in classification and regression using techniques based on Bayes' theorem is often avoided by assuming a strong independence between them; hence, such techniques are said to be naive. While analytical solutions supporting classification on arbitrary numbers of discrete and continuous random variables exist, practical solutions are scarce. We evaluate a few Bayesian models empirically and consider their computational complexity. To overcome the often-assumed independence, these models attempt to resolve the dependencies using empirical joint conditional probabilities and joint conditional probability densities. These are obtained from the posterior probabilities of the dependent variable after segmenting the dataset by each random variable's value. We demonstrate the advantages of these models: they are deterministic (no randomization or weights required), require no training, allow each random variable to follow any kind of probability distribution, remain robust without having to impute missing data, and support online learning effortlessly. We compare these Bayesian models against well-established classifiers and regression models on several well-known datasets. We conclude that the evaluated models can outperform other models in certain classification settings. The regression models deliver respectable performance without leading the field.
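
No implementation accompanies this record, so the following Python sketch is purely illustrative of the general idea the abstract describes, not the authors' models: it replaces the naive independence assumption with one joint conditional density per class segment, estimated by kernel density estimation (here SciPy's gaussian_kde), and uses a Nadaraya-Watson kernel estimate as a stand-in for KDE-based regression. All names (KDEBayesClassifier, kde_regression) and parameter choices such as the bandwidth are invented for this example.

import numpy as np
from scipy.stats import gaussian_kde

class KDEBayesClassifier:
    """Bayes classifier with a joint conditional density p(x | y) per class.

    One multivariate KDE is fitted on each class segment, so feature
    dependencies are captured by the kernel estimate rather than assumed
    away. There is no iterative training: fitting only stores the
    segmented data, which also makes appending new samples (online
    learning) straightforward.
    """

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        # gaussian_kde expects data with shape (n_features, n_samples).
        self.kdes_ = {c: gaussian_kde(X[y == c].T) for c in self.classes_}
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # Bayes' theorem: P(y | x) is proportional to p(x | y) * P(y);
        # the evidence p(x) cancels in the argmax over classes.
        scores = np.array([self.kdes_[c](X.T) * self.priors_[c]
                           for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=0)]

def kde_regression(X_train, y_train, X_query, bandwidth=0.5):
    """Nadaraya-Watson estimate of E[y | x] with a Gaussian kernel.

    Inputs are assumed to be 2-D arrays of shape (n_samples, n_features);
    the bandwidth is a free choice here, not a value taken from the paper.
    """
    X_train = np.asarray(X_train, float)
    X_query = np.asarray(X_query, float)
    y_train = np.asarray(y_train, float)
    # Squared distance between each query point and each training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)      # kernel-weighted mean of y

# Toy usage on synthetic data, purely illustrative:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # labels with deliberate feature dependence
clf = KDEBayesClassifier().fit(X, y)
print("accuracy:", (clf.predict(X) == y).mean())
print("E[y|x]:", kde_regression(X, y.astype(float), X[:3]))

The XOR-like labels in the toy example are chosen because a naive (independence-assuming) Bayes classifier cannot separate them, whereas the joint KDE per class segment can, which mirrors the motivation given in the abstract.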

Place, publisher, year, edition, pages
Zenodo, 2019.
Keywords [en]
Bayes Theorem, Classification, Regression
National Category
Computer Sciences
Research subject
Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-90518
DOI: 10.5281/zenodo.3571980
OAI: oai:DiVA.org:lnu-90518
DiVA, id: diva2:1377603
Conference
5th annual Big Data Conference, Linnaeus University, Växjö, Sweden, 5-6 December 2019
Available from: 2019-12-12. Created: 2019-12-12. Last updated: 2023-04-14. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Hönel, Sebastian; Ericsson, Morgan; Löwe, Welf; Wingkvist, Anna
