Profile
Kwang-Sung Jun
Assistant Professor, Computer Science | Member of the Graduate Faculty
Computer Science
Collaboration (16)
Hao Zhang (Mutual work: 3 Proposals)
Clayton Morrison (Mutual work: 2 Proposals)
Larry Head (Mutual work: 1 Proposal)
Michael Chertkov (Mutual work: 1 Proposal)
Mihai Surdeanu (Mutual work: 2 Proposals)
Showing 5 of 16 collaborators.
Grants (1)
Publications (45)
Recent
Better-than-KL PAC-Bayes bounds (2024)
Keywords: bayesian learning, algorithm analysis

Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion (2024)
Keywords: bandit algorithms, regret bounds, confidence sets

Adaptive Experimentation When You Can't Experiment (2024)
Keywords: observational studies, quantitative analysis

Efficient low-rank matrix estimation, experimental design, and arm-set-dependent low-rank bandits (2024)
Keywords: experimental design, bandit algorithms, machine learning, optimization

Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization (2024)
Keywords: linear bandits, bayesian

A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits (2024)
Keywords: statistical inference, bandit algorithms, generalized linear models, machine learning

Kullback-Leibler Maillard sampling for multi-armed bandits with bounded rewards (2024)
Keywords: reinforcement learning, stochastic processes, optimization, probability theory, data sampling

Tighter PAC-Bayes bounds through coin-betting (2023)
Keywords: stochastic processes

Revisiting Simple Regret: Fast Rates for Returning a Good Arm (2023)
Keywords: multi-armed bandits, regret minimization, machine learning, online learning, algorithm analysis

Norm-agnostic linear bandits (2022)
Keywords: bandit algorithms, linear models