Learning Mixtures of Truncated Basis Functions from Data

In this paper we investigate methods for learning hybrid Bayesian networks from data. First, we use a kernel density estimate of the data to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When using a kernel density representation of the data, the estimation method relies on the specification of a kernel bandwidth. We show that in most cases the method is robust with respect to the choice of bandwidth, but for certain data sets the bandwidth has a strong impact on the result. Based on this observation, we propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, indicating that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular subclass of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.

Bibliographic Details
Main Authors: Langseth, Helge, Nielsen, Thomas D., Pérez-Bernabé, Inmaculada, Salmerón Cerdán, Antonio
Format: info:eu-repo/semantics/article
Language:English
Published: 2017
Subjects: Mixtures of truncated basis functions; Hybrid Bayesian networks; Learning
Online Access:http://hdl.handle.net/10835/4894
https://doi.org/10.1016/j.ijar.2013.09.012
collection DSpace
description In this paper we investigate methods for learning hybrid Bayesian networks from data. First, we use a kernel density estimate of the data to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When using a kernel density representation of the data, the estimation method relies on the specification of a kernel bandwidth. We show that in most cases the method is robust with respect to the choice of bandwidth, but for certain data sets the bandwidth has a strong impact on the result. Based on this observation, we propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, indicating that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular subclass of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.
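The CDF-based learning idea described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the function names (`fit_polynomial_cdf`, `density_from_cdf`) are hypothetical, and a real MoTBF learner would solve a constrained convex program so that the resulting density is nonnegative and integrates to one, whereas this sketch uses plain least squares on the empirical CDF.

```python
# Hypothetical illustration (not the paper's implementation): fit one
# truncated polynomial potential to 1-D data by least-squares matching
# of the empirical CDF, then differentiate to obtain a density estimate.
# A real MoTBF learner would instead solve a convex program with
# nonnegativity and normalization constraints on the density.
import numpy as np

def fit_polynomial_cdf(data, degree=6):
    """Fit a degree-`degree` polynomial to the empirical CDF of `data`
    on the truncated support [min(data), max(data)]."""
    x = np.sort(np.asarray(data, dtype=float))
    ecdf = np.arange(1, x.size + 1) / x.size  # empirical CDF at sample points
    coeffs = np.polyfit(x, ecdf, degree)      # ordinary least squares
    return coeffs, (x[0], x[-1])

def density_from_cdf(coeffs):
    """The density estimate is the derivative of the fitted CDF."""
    return np.polyder(coeffs)

rng = np.random.default_rng(0)
data = rng.normal(size=500)
coeffs, (a, b) = fit_polynomial_cdf(data)
pdf = density_from_cdf(coeffs)

# The fitted CDF should rise from roughly 0 at `a` to roughly 1 at `b`,
# so the implied density mass over the support is close to one.
mass = float(np.polyval(coeffs, b) - np.polyval(coeffs, a))
print(f"approximate mass on [{a:.2f}, {b:.2f}]: {mass:.2f}")
```

Note that no bandwidth appears anywhere in this sketch, which mirrors the motivation given for the CDF-based alternative: a kernel-based variant would first smooth the data with a kernel whose bandwidth must be chosen, and the abstract reports that this choice can strongly affect the result on some data sets.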
id oai:repositorio.ual.es:10835-4894
institution Universidad de Cuenca
record_format dspace
spelling oai:repositorio.ual.es:10835-4894 2023-04-12T19:39:26Z
Learning Mixtures of Truncated Basis Functions from Data
Langseth, Helge; Nielsen, Thomas D.; Pérez-Bernabé, Inmaculada; Salmerón Cerdán, Antonio
Subjects: Mixtures of truncated basis functions; Hybrid Bayesian networks; Learning
Accessioned: 2017-07-07T07:17:21Z; Issued: 2014
info:eu-repo/semantics/article
http://hdl.handle.net/10835/4894
https://doi.org/10.1016/j.ijar.2013.09.012
Language: en
License: Attribution-NonCommercial-NoDerivatives 4.0 Internacional (http://creativecommons.org/licenses/by-nc-nd/4.0/)
info:eu-repo/semantics/openAccess
topic Mixtures of truncated basis functions
Hybrid Bayesian networks
Learning