Bayesian Regression on segmented data using Kernel Density Estimation
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). (DISA; DSIQ)
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). (DISA; DSIQ). ORCID iD: 0000-0003-1173-5187
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). (DISA; DSIQ). ORCID iD: 0000-0002-7565-3714
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). (DISA; DSIQ). ORCID iD: 0000-0002-0835-823X
2019 (English). In: 5th annual Big Data Conference: Linnaeus University, Växjö, Sweden, 5-6 December 2019, Zenodo, 2019. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

The challenge of dealing with dependent variables in classification and regression using techniques based on Bayes' theorem is often avoided by assuming strong independence between them; such techniques are therefore called naive. While analytical solutions supporting classification over arbitrary numbers of discrete and continuous random variables exist, practical solutions are scarce. We evaluate a few Bayesian models empirically and consider their computational complexity. To overcome the commonly assumed independence, these models attempt to resolve the dependencies using empirical joint conditional probabilities and joint conditional probability densities. These are obtained as posterior probabilities of the dependent variable after segmenting the dataset by each random variable's value. We demonstrate the advantages of these models: they are deterministic (no randomization or weights required), require no training, allow each random variable to follow any kind of probability distribution, remain robust without having to impute missing data, and support online learning effortlessly. We compare these Bayesian models against well-established classifiers and regression models on several well-known datasets. We conclude that the evaluated models can outperform other models in certain classification settings. The regression models deliver respectable performance without leading the field.
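The segmentation idea described above can be sketched, for the simplest case of a single discrete conditioning variable, roughly as follows. This is a minimal illustration using SciPy's `gaussian_kde`, not the authors' implementation; the function names and the mode-based point estimate are assumptions made for the example:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_segmented_kde(X, y):
    """Segment the data by each observed value of the discrete variable X
    and fit a KDE to the target within each segment, yielding an empirical
    conditional density p(y | X = value) plus the segment prior P(X = value)."""
    models = {}
    for value in np.unique(X):
        seg = y[X == value]
        models[value] = (len(seg) / len(y),   # prior P(X = value)
                         gaussian_kde(seg))   # density estimate p(y | X = value)
    return models

def predict(models, value, grid):
    """Point estimate for regression: the mode of the conditional density
    p(y | X = value), evaluated over a grid of candidate target values."""
    _, kde = models[value]
    density = kde(grid)
    return grid[np.argmax(density)]

# Hypothetical usage: two segments whose targets cluster around 2.0 and 5.0.
rng = np.random.default_rng(0)
X = np.array([0] * 50 + [1] * 50)
y = np.concatenate([rng.normal(2.0, 0.3, 50), rng.normal(5.0, 0.3, 50)])
models = fit_segmented_kde(X, y)
grid = np.linspace(0.0, 8.0, 801)
estimate = predict(models, 0, grid)  # should land near 2.0
```

No training in the usual sense takes place: fitting is just storing segment densities, which also makes online updates a matter of refitting the affected segment.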

Place, publisher, year, edition, pages
Zenodo, 2019.
Keywords [en]
Bayes Theorem, Classification, Regression
HSV category
Research subject
Computer and Information Science, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-90518
DOI: 10.5281/zenodo.3571980
OAI: oai:DiVA.org:lnu-90518
DiVA, id: diva2:1377603
Conference
5th annual Big Data Conference, Linnaeus University, Växjö, Sweden, 5-6 December 2019
Available from: 2019-12-12 Created: 2019-12-12 Last updated: 2019-12-19 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text

Person records BETA

Hönel, Sebastian; Ericsson, Morgan; Löwe, Welf; Wingkvist, Anna
