Dataset Viewer
Auto-converted to Parquet

Columns:
- context: string, lengths 100–2.74k
- A: string, lengths 107–1.69k
- B: string, lengths 105–1.85k
- C: string, lengths 102–2.35k
- D: string, lengths 104–2.11k
- label: string, 4 classes
For simplicity, we limit ourselves to ordinary PSIS, although consistency of self-normalized PSIS follows from Slutsky’s theorem
not generally possible. Furthermore, even if the variance were finite, it is possible that the pre-asymptotic behavior is indistinguishable from the infinite variance case, as discussed in Section 3.
the real pre-asymptotic convergence behavior. The $\hat{k}$ diagnostic correctly
Section 4 proves asymptotic consistency and finite variance. In this section we use various large sample results to characterize the finite sample behavior of IS, TIS, and PSIS. There is no strict definition of large sample in each case, but the theory is able to explain many empirical results observed in our experiments.
Section 3 discusses pre-asymptotic behavior, and we demonstrated in Section 3.3 that reaching the asymptotic regime can require infeasible
D
$\beta \sim N(0, \lambda^{-1}(M_w^\top M_w)^{-})$.
The maximum a posteriori (MAP) estimate $\widehat{\beta}$ for $\beta$ is
$Y_v \mid \beta \overset{\text{ind}}{\sim} \text{Po}[\exp(\beta(v))]$.
$Y_v \overset{\text{ind}}{\sim} \text{Po}[\exp(x_v^{T}\beta)]$.
$r_v = (Y_v - \widehat{\mu}_v)/\sqrt{V(\widehat{\mu}_v)}$ is the $v$-th Pearson residual
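The Pearson residual above can be computed directly once fitted means are available; for the Poisson model the variance function is $V(\mu)=\mu$. A minimal numpy sketch (the function name is illustrative, not from the source):

```python
import numpy as np

def pearson_residuals(y, mu_hat):
    """Pearson residuals r_v = (Y_v - mu_hat_v) / sqrt(V(mu_hat_v)).

    For a Poisson model the variance function is V(mu) = mu,
    so the denominator is simply sqrt(mu_hat_v).
    """
    y = np.asarray(y, dtype=float)
    mu_hat = np.asarray(mu_hat, dtype=float)
    return (y - mu_hat) / np.sqrt(mu_hat)

# Example: observed counts and fitted Poisson means
y = np.array([3.0, 0.0, 7.0, 2.0])
mu = np.array([2.5, 1.0, 5.0, 2.0])
r = pearson_residuals(y, mu)
```

Residuals are zero exactly where the fitted mean matches the observation, which makes them a convenient per-vertex diagnostic.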
A
By using the proposed method, we can detect weak signals, reveal clear groupings in the patterns of association between explanatory variables and responses, and apply our method to many tasks, such as variable selection, effect size estimation, and response prediction.
In Figure 2(a), as $n$ increases, and in Figure 2(b), as $p$ decreases, the ratio $\frac{p}{n}$ shrinks and performance improves, as expected. Compared with Tree-Lasso and the other methods, our method is more robust on large data sets, which matches the real-world situation. As we increase the number of response variables in Figure 2(c), increase the number of distributions in Figure 2(d), or decrease the proportion of active variables in $\beta$ in Figure 2(e), the problem becomes more challenging. Figures 2(f) and 2(g) show that our method is more robust to different magnitudes of covariance among the explanatory and response variables. In Figure 2(e), we notice that when the proportion of active variables in $\beta$ is large, the performance of TgSLMM and LMM-Lasso is similar. However, this setting contradicts the premise of our research that the active variables should be sparse. Across our experiments, Tree-Lasso struggled to identify the active variables in high-dimensional heterogeneous data.
Having shown the capacity of TgSLMM to recover explanatory variables in synthetic data sets, we now demonstrate how TgSLMM can be used on real-world genome data to discover meaningful information. To evaluate the method, we focus on three practical data sets: Arabidopsis thaliana, Heterogeneous Stock Mice, and Human Alzheimer's Disease. Since Arabidopsis thaliana and Heterogeneous Stock Mice have been studied for over a decade, the scientific community has reached a general consensus regarding these species [49]. With such a validated gold standard, we can plot the ROC curve and assess the model's performance using the area under it. However, since Alzheimer's disease is a very active area of research with no ground truth available, we list the genetic variables identified by our proposed model and verify the top genetic variables by searching the relevant literature.
Since we have access to a validated gold standard for the Arabidopsis thaliana data set, we compare the alternative algorithms in terms of their ability to recover explanatory variables with a true association. Figure 5 illustrates the area under the ROC curve for each response variable for Arabidopsis thaliana. Analyzing the results, we conclude that TgSLMM equals or exceeds the other methods for all responses. TgSLMM allows for dissecting individual explanatory variable effects from global genetic effects driven by population structure.
For the Heterogeneous Stock Mice data set, ground truth is also available, so we evaluate the methods by the area under their ROC curves, shown in Figure 6. TgSLMM performs best on 22.2% of the traits and achieves the highest ROC area over the whole data set, 0.627. The second best model is MCP, with an area of 0.604. The areas under the ROC for Tree-Lasso, Lasso, and SCAD are 0.582, 0.591, and 0.590, respectively. The areas of the remaining models are all around 0.5, showing little ability to process such complex data sets. On the traits Glucose_75, Glucose_30, Glucose.DeadFromAnesthetic, Insulin.AUC, Insulin.Delta, and FN.postWeight, our method TgSLMM performs best. The results are interesting: the left side of the figure mostly consists of traits regarding glucose and insulin in the mice, while the right side consists of traits related to immunity. This raises the inspiring question of whether or not immune levels in stock mice are largely independent of family origin.
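Per-trait AUC comparisons like the ones above can be computed with a rank-based AUC (the Mann-Whitney U identity). A numpy-only sketch, not the authors' evaluation code; ties are not handled:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity.

    labels: 0/1 ground-truth association indicators.
    scores: model scores (higher = more likely associated).
    Note: ties in scores would need average ranks; omitted here.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 1.0 means every truly associated variable is ranked above every non-associated one; 0.5 corresponds to random ranking, matching the "around 0.5" baselines mentioned above.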
B
However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
Figure 2(e) is clear evidence of the SMC-based agents’ ability to recover from linear to no-regret regimes.
The regret loss associated with the uncertainty about $\sigma_a^2$ is minimal for SMC-based Bayesian agents,
—used by the SMC-based agents to propagate uncertainty to each bandit arm's expected reward estimates—
However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
A
Figures 1–5 show measurements of blood glucose, carbohydrates, and insulin per hour of day for each patient.
Overall, the distributions of all three kinds of values throughout the day roughly correspond to one another.
In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
Patient 10, on the other hand, has a surprisingly low median of zero active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times.
Figures 1–5 show measurements of blood glucose, carbohydrates, and insulin per hour of day for each patient.
A
Once the MIIVs for each equation are determined, they are used to compute intermediate estimates of the endogenous predictors (via OLS) within the equation (stage 1 of Two Stage Least Squares). Those intermediate estimates are then used to estimate the associations between the endogenous predictors and the dependent variable within each equation. If the MIIVs are valid instruments, then this MIIV-2SLS estimator is consistent and asymptotically unbiased on an equation-by-equation basis (Bollen, \APACyear1996). Because MIIV-2SLS operates equation by equation, this limited information approach to estimation has a marked advantage over full information estimators such as ML, in that model misspecification that impacts one equation is less likely to impact the estimation of other equations (Bollen \BOthers., \APACyear2007, \APACyear2018).
Recall the requirement that an instrument must not correlate with the equation error. We term variables that violate this requirement but are still inappropriately used as instruments invalid instruments. Importantly, invalid instruments in the context of MIIVs arise when the model is misspecified. Although the validity of the assumption cannot be evaluated directly, we can assess the appropriateness of an instrument set in the context of an overidentified equation (i.e., one where the number of instruments exceeds the number of endogenous predictors; note that the term identification refers to related considerations in the ML-estimated SEM and MIIV-SEM settings: in the traditional ML setting, a model is overidentified if there are multiple ways to estimate one or more parameters, whereas in the MIIV-SEM setting this consideration is applied on an equation-by-equation basis, an equation being overidentified if there are multiple ways of estimating the parameter values) using overidentification tests such as Sargan's $\chi^2$ test (Sargan, \APACyear1958). In the context of latent variable models, Kirby and Bollen (\APACyear2010) found Sargan's Test performed better than other overidentification tests, and we use it here.
However, as mentioned previously, Sargan's Test cannot pinpoint sources of model misspecification beyond the set of MIIVs of a specific equation. Sargan's Test assesses whether at least one instrument is invalid. Though this is a local (equation-level) test of overidentification, it does not reveal which of the MIIVs is the source of the problem. This issue is compounded in the MIIV setting because each equation normally has many MIIVs, and an omnibus test of instrument validity does not provide specific enough information about the source of model misspecification. A more useful test would be one that identifies a specific failed instrument. We will show how our MIIV-2SBMA approach can better isolate the problematic instruments.
While the MIIV-2SLS approach has several advantages over maximum likelihood estimation when model misspecification is present, there are a number of open questions in the MIIV literature regarding the relationship between model misspecification diagnostics and instrument quality. One consideration is that if the structural model is misspecified, then some MIIVs will be invalid (as they were derived under the assumption that the specified model is the true model). Analysts can apply an overidentification test to each overidentified equation to assess evidence of misspecification (Kirby \BBA Bollen, \APACyear2010). Here, rejection of the null hypothesis suggests model misspecification, as the model specification led to the equation-specific MIIVs. Despite the utility of these overidentification tests, it is not always clear where in the model the misspecification originated as the equation failing the overidentification test may not be the equation where the misspecification occurred. Furthermore, it is also unclear which MIIVs are responsible for the failed test as the null hypothesis states all the overidentifying constraints hold.
If all MIIVs in an overidentified equation are valid instruments, then each overidentified coefficient should lead to the same value in the population. Even if this is true, sampling fluctuations can lead to different values. Sargan's Test of overidentification determines whether these different solutions are within sampling error. Researchers can estimate the test statistic as $nR^2$, where $n$ is the sample size and $R^2$ is the squared multiple correlation from the regression of the equation's sample residuals on the instruments; the resulting statistic asymptotically follows a $\chi^2$ distribution. The degrees of freedom associated with this distribution are equal to the degree of overidentification for the equation (i.e., $p_{\mathbf{Z}_j} - p_j$, where $p_{\mathbf{Z}_j}$ denotes the number of MIIVs and $p_j$ is the number of regressors). Rejection of the null hypothesis suggests that one or more of the instruments for that equation correlates with the equation error, and as such is an invalid instrument. In the MIIV setting, these tests indicate whether the model is misspecified. After all, the instruments for any given equation are determined a priori by the model structure, so if an instrument is invalid, the model structure itself is invalid (Kirby \BBA Bollen, \APACyear2010).
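The $nR^2$ computation just described can be sketched in a few lines; `sargan_test` is an illustrative name, and the residual vector and instrument matrix are assumed to come from an earlier 2SLS fit:

```python
import numpy as np

def sargan_test(u, Z, p_regressors):
    """Sargan statistic nR^2 from regressing residuals u on instruments Z.

    u: equation residuals from the 2SLS fit, shape (n,).
    Z: instrument (MIIV) matrix, shape (n, p_Z).
    Returns (statistic, degrees_of_freedom) with df = p_Z - p_regressors;
    under the null the statistic is asymptotically chi-squared with df
    degrees of freedom.
    """
    n, p_Z = Z.shape
    X = np.column_stack([np.ones(n), Z])          # include an intercept
    beta, *_ = np.linalg.lstsq(X, u, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((u - fitted) ** 2)
    ss_tot = np.sum((u - u.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                    # squared multiple correlation
    return n * r2, p_Z - p_regressors
```

Comparing the statistic to the $\chi^2$ quantile with the returned degrees of freedom gives the overidentification test described above.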
C
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games in which the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e., in fewer steps than 100k) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning at which the maximum score for the run was achieved in the main training loop of Algorithm 1.
As for the length of rollouts from the simulated $env'$, we use $N=50$ by default. We experimentally showed that $N=25$ performs roughly on par, while $N=100$ is slightly worse, likely due to compounding model errors.
Random starts. Using short rollouts is crucial to mitigate the compounding errors in the model. To ensure exploration, SimPLe starts rollouts from randomly selected states taken from the real data buffer D. Figure 9 compares the baseline with an experiment without random starts using rollouts of length 1000 on Seaquest, which shows much worse results without random starts.
We will now describe the details of SimPLe, outlined in Algorithm 1. In step 6 we use the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) with $\gamma=0.95$. The algorithm generates rollouts in the simulated environment $env'$ and uses them to improve policy $\pi$. The fundamental difficulty lies in imperfections of the model compounding over time. To mitigate this problem we use short rollouts of $env'$. Typically every $N=50$ steps we uniformly sample the starting state from the ground-truth buffer $D$ and restart $env'$ (for experiments with the value
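The short-rollout scheme with random starts can be paraphrased in a few lines. This is a sketch, not the authors' code: `env_model.step`, the callable `policy`, and the flat buffer of states are all assumed interfaces.

```python
import random

def collect_simulated_rollouts(env_model, policy, real_buffer, num_steps, N=50):
    """Roll out the learned model env' in segments of length N.

    Every N steps the state is reset to one drawn uniformly from the
    real-data buffer D, which bounds the horizon over which model
    errors can compound.
    """
    trajectories = []
    state = random.choice(real_buffer)
    steps_in_segment = 0
    for _ in range(num_steps):
        action = policy(state)
        state, reward = env_model.step(state, action)  # simulated transition
        trajectories.append((state, action, reward))
        steps_in_segment += 1
        if steps_in_segment == N:                      # restart from real data
            state = random.choice(real_buffer)
            steps_in_segment = 0
    return trajectories
```

The key design point is that the model is never trusted for more than N consecutive steps, matching the ablation showing $N=100$ already degrades results.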
Figure 1: Main loop of SimPLe. 1) The agent starts interacting with the real environment following the latest policy (initialized to random). 2) The collected observations are used to train (update) the current world model. 3) The agent updates the policy by acting inside the world model. The new policy is evaluated to measure the performance of the agent and to collect more data (back to 1). Note that world model training is self-supervised for the observed states and supervised for the reward.
B
This paper fits the generalized form of the heterogeneous Lanchester equations to the Battle of Kursk data using maximum likelihood estimation (MLE) and compares the performance of MLE with previously studied techniques such as the sum of squared residuals (SSR), linear regression, and Newton-Raphson iteration. Different authors have applied different methodologies for fitting Lanchester equations to different battle data. The methodologies of Bracken, Fricker, and Clemen are applied to the tank data of the Battle of Kursk, and the results are shown in Table 3.
The basic idea of using the GRG algorithm is to quickly find the parameters that maximize the log-likelihood, in other words those that provide the best fit. Given the values in Table 1, we investigate which parameter values best fit the data. Although we derived the estimates for a and b using the MLE approach in equations (8) and (9), they are not applied directly. The log-likelihood is calculated using equation (7) with 0.5 as the initial value of the parameters. Then we optimized the likelihood function over the entire duration of the battle using the GRG algorithm. The model obtained after parameter estimation is:
In the next section we discuss in detail the mathematical formulations of the homogeneous and heterogeneous situations. We have seen in Bracken [4], Fricker [16], Clemens [7], Turkes [38], and Lucas [28] that the LSE method has been applied for estimating the parameters when fitting homogeneous Lanchester equations to historical battle data. The MLE method [32, 36] has not yet been explored for fitting historical battle data. Also, only one measure, the sum of squared residuals (SSR), has been used for measuring goodness-of-fit (GOF). The main objective of this study is to assess the performance of the MLE approach for fitting homogeneous as well as heterogeneous Lanchester equations to the Battle of Kursk. Various GOF measures [8], viz. Kolmogorov-Smirnov, chi-square, and $R^2$, have been computed for comparing the fits and testing how well the model fits the observed data. Applying the various GOF measures to the artillery strengths and casualties of the Soviet and German sides from the Kursk battle data of World War II validates the performance of the MLE technique. Section 2 presents a brief overview of the Battle of Kursk. Section 3 describes the mathematical formulation of likelihood estimation for both the homogeneous and heterogeneous situations. Section 4 describes the tank and artillery data of the Battle of Kursk and discusses the methodology for implementing the proposed as well as the other approaches; this section also contains a performance appraisal of the MLE using various GOF measures. Section 5 analyses the results from the various tables and figures and discusses how well the MLE fits the data. Section 6 summarizes the important aspects of the paper.
For implementing this expression, we take zero as the initial value for all the unknown parameters from Table 1. Then we run the GRG algorithm iteratively. The GRG algorithm is available in the Microsoft Office Excel (2007) Solver [15] and MATLAB [30]. The GRG solver uses an iterative numerical method in which derivatives (and gradients) play a crucial role. We ran the program for 1000 iterations to obtain stabilized values of the parameters. Once we have the parameters, we compute the estimated casualties, and from the difference between the estimated and observed casualties we compute the sum of squared residuals. Similarly, we applied the GRG algorithm to optimize the objective function given in equation (9). We checked the graphs of estimated and observed casualties for both the LS- and MLE-based approaches and found that dividing the data set into several subsets improves the fit: as we increase the number of divisions, the fit becomes better and the estimated casualties converge to the observed casualties. We consider tank and artillery data for mixing the forces; therefore a1 (or b1) represents the effectiveness of Soviet (or German) tanks against German (or Soviet) tanks, and a2 (or b2) represents the effectiveness of Soviet (or German) artillery against German (or Soviet) tanks. The variation of attrition rates throughout the battle tells us how the different players in the battle perform, i.e., whether they are acting defensively or offensively.
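For the simplified homogeneous square law, the least-squares fit described above actually has a closed form, so no iterative solver such as GRG is needed in that special case. A hedged numpy sketch under that simplification (daily casualties modeled as $\Delta R_t = b\,B_t$ and $\Delta B_t = a\,R_t$; function name illustrative):

```python
import numpy as np

def fit_lanchester_ls(R, B):
    """Least-squares attrition coefficients for the Lanchester square law.

    R, B: daily strength series of the two sides. Daily casualties are
    modeled as dR_t = b*B_t and dB_t = a*R_t; minimizing the SSR in a
    and b gives closed-form estimates.
    """
    R = np.asarray(R, dtype=float)
    B = np.asarray(B, dtype=float)
    dR = -np.diff(R)   # observed daily casualties of side R
    dB = -np.diff(B)
    b_hat = np.sum(dR * B[:-1]) / np.sum(B[:-1] ** 2)
    a_hat = np.sum(dB * R[:-1]) / np.sum(R[:-1] ** 2)
    ssr = (np.sum((dR - b_hat * B[:-1]) ** 2)
           + np.sum((dB - a_hat * R[:-1]) ** 2))
    return a_hat, b_hat, ssr
```

The heterogeneous model, with cross-terms between tanks and artillery, no longer decouples this way, which is why the paper resorts to a numerical optimizer.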
First, we applied the least squares technique to estimate the parameters of the heterogeneous Lanchester model. The GRG algorithm [15, 30] is applied for maximizing the likelihood and for minimizing the least squares criterion. For the least squares approach, the sum of squared residuals (SSR) is minimized. The expression of the SSR for equations (1) and (2) is given as:
D
To the best of our knowledge, this is the first work to introduce global momentum into sparse communication methods.
Since RBGS introduces a larger compression error compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge when using RBGS as the sparsification compressor.
Furthermore, to enhance the convergence performance when using more aggressive sparsification compressors (e.g., RBGS), we extend GMC to GMC+ by introducing global momentum to the detached error feedback technique.
Due to the larger compression error introduced by RBGS compared with top-$s$ when selecting the same number of components of the original vector to communicate, vanilla error feedback methods usually fail to converge. Xu and Huang (2022) propose DEF-A to solve the convergence problem by using the detached error feedback (DEF) technique. (Xu and Huang (2022) propose two algorithms, DEF and DEF-A; since DEF-A enhances the generalization performance of DEF, we only consider DEF-A in this paper.)
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using more aggressive sparsification compressors (e.g., RBGS), we extend GMC to GMC+. We prove the convergence of GMC and GMC+ theoretically. Empirical results verify the superiority of global momentum and show that GMC and GMC+ can outperform other baselines to achieve state-of-the-art performance.
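To make the moving parts concrete, here is a generic worker-side sketch of momentum SGD with a top-$s$ sparsifier and error feedback. This illustrates the building blocks GMC combines; it is not the GMC/GMC+ algorithm itself, and the function names are illustrative:

```python
import numpy as np

def top_s(v, s):
    """Keep the s largest-magnitude entries of v, zero elsewhere."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]
    out[idx] = v[idx]
    return out

def ef_momentum_step(grad, momentum, error, lr=0.1, beta=0.9, s=2):
    """One worker step of momentum SGD with top-s sparsification and
    error feedback: the compression residual is carried over to the
    next step instead of being discarded."""
    momentum = beta * momentum + grad   # momentum accumulation
    update = lr * momentum + error      # add the accumulated residual
    sent = top_s(update, s)             # sparse message sent to the server
    error = update - sent               # residual kept locally
    return sent, momentum, error
```

Error feedback guarantees that nothing is lost permanently: the sent message plus the stored residual always reconstructs the full intended update, which is the property that makes aggressive sparsifiers viable.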
B
It is interesting to note that in some cases SAN reconstructions, such as those for the Extrema-Pool indices, performed even better than the original data.
The cost of the description of the data could be seen as proportional to the number of weights and the number of non-zero activations, while the quality of the description is inversely related to the reconstruction loss.
This suggests the overwhelming presence of redundant information that resides in the raw pixels of the original data and further indicates that SANs extract the most representative features of the data.
What are the implications of trading off the reconstruction error of the representations with their compression ratio with respect to the original data?
As shown in Table II, although we use a significantly reduced representation size, the classification accuracy differences (A±%) are retained, which suggests that SANs choose the most important features to represent the data.
B
This assumption is generally mild and aims to preclude degenerate definitions of the test statistic. For example, the assumption holds true in regular settings where $t_n(G\varepsilon)$ converges in distribution, as in Hoeffding's condition (4). It should be noted that (A1) and (A2) do not imply (A3) because these two assumptions do not preclude cases where the probability density of $\Lambda_n^s$ concentrates in an ever-decreasing band around 0. Although such cases are pathological, an assumption like (A3) is needed to ensure that the multiplicity in the test statistic values is controlled.
Moreover, under the limit hypothesis, Condition (C1) guarantees the asymptotic validity of the approximate test provided that the studentized test statistic based on the true variables satisfies Hoeffding’s condition.
The key implication of this result is that the approximate randomization test ‘inherits’ the asymptotic properties of the original randomization test as long as
With these three assumptions in place, Condition (C1) is key for the asymptotic performance of the approximate randomization test.
Indeed, Theorem 2 of this paper shows that the rate of convergence of Condition (C1) determines a finite-sample bound between the Type I error rates of $\phi_n$ and $\phi_n^*$. This bound also depends on the smoothness constant of the cdf of the 'spacings variable' in the denominator of (C1). Intuitively, the approximate randomization test performs as well as the true test unless the multiplicity of values in the randomization distribution is too erratic. For example, if the test statistic is asymptotically normal, the bound is of order $O(n^{-1/3})$, suggesting a robustness-efficiency trade-off that we discuss throughout the paper. These results are especially valuable under the invariant hypothesis (1), as prior randomization literature has not studied the performance of approximate randomization tests under the invariant regime.
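As a concrete instance of the kind of randomization test discussed here, the following sketch implements a sign-change randomization test of $H_0: E[x]=0$ with a studentized statistic (the group $G$ being the sign flips). It is an illustration of the general construction, not the paper's procedure:

```python
import numpy as np

def randomization_pvalue(x, num_draws=999, seed=0):
    """Sign-change randomization test for H0: E[x] = 0.

    The transformation group G is the set of sign flips, and the
    studentized statistic is t_n(gx) = sqrt(n) * mean(gx) / sd(gx).
    Returns the randomization p-value with the +1 correction.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)

    def t_stat(v):
        return np.sqrt(n) * v.mean() / v.std(ddof=1)

    t_obs = t_stat(x)
    draws = [t_stat(x * rng.choice([-1.0, 1.0], size=n))
             for _ in range(num_draws)]
    exceed = sum(abs(t) >= abs(t_obs) for t in draws)
    return (1 + exceed) / (num_draws + 1)
```

An approximate version of such a test would replace $x$ with estimated residuals; the theory above bounds how much Type I error that substitution can cost.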
C
$$\hat{\tau}(c) = c^{\star T}\underbrace{(\widetilde{\mathbf{X}}_r^{T}\mathbf{M}_r\widetilde{\mathbf{X}}_r)^{-1}\widetilde{\mathbf{X}}_r^{T}\mathbf{M}_r Y_r}_{\hat{\beta}_r} - c^{\star T}\underbrace{(\widetilde{\mathbf{X}}_\ell^{T}\mathbf{M}_\ell\widetilde{\mathbf{X}}_\ell)^{-1}\widetilde{\mathbf{X}}_\ell^{T}\mathbf{M}_\ell Y_\ell}_{\hat{\beta}_\ell}.$$
The impact of the exclusions induced by our optimization problem can be seen most clearly in the right panel. The upper bound on the causal effect is obtained primarily by tagging as manipulators those women for whom the hemoglobin level is 12.5 and who did not attempt to donate again in one year. These women are then excluded from the estimation, resulting in the downward curvature of the green line as it approaches the cutoff from the right. Similarly, the lower bound is obtained by tagging as manipulators those women for whom the hemoglobin level is 12.5 and who did attempt to donate again in one year. Their exclusion yields the upward curvature of the red line as it approaches the cutoff from the right.
where $Z_r$ is the vector of treatment indicators to the right of the cutoff and $\hat{\alpha}_\ell$ is the fitted coefficient corresponding to the regression of $Z$ on $X$ to the left of the cutoff.
Our approach can be extended easily to the case of the fuzzy RDD. In this case, we suppose the estimate of the causal effect is obtained via an instrumental variable approach. The numerator is the difference of the mean treated outcomes just above and just below the cutoff, and the denominator is the difference of the treatment probabilities just above and just below the cutoff.
Per the under-bracketed quantities, these estimators separately calculate two coefficient vectors: one from a regression relating outcomes to the running variable below the cutoff, the other above the cutoff. The causal estimate is given by the difference in these two regression predictions at the cutoff c𝑐citalic_c.
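A minimal numpy sketch of the sharp-RDD estimator described above (uniform kernel within a bandwidth $h$; this is an illustration of the two-sided regression idea, not the paper's implementation):

```python
import numpy as np

def sharp_rdd_estimate(x, y, c=0.0, h=1.0):
    """Sharp RDD estimate: fit separate linear regressions within a
    bandwidth h on each side of the cutoff c, then take the difference
    of their predictions at c."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def side_fit(mask):
        # Center the running variable at c so the intercept is the
        # predicted outcome exactly at the cutoff.
        X = np.column_stack([np.ones(mask.sum()), x[mask] - c])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]

    left = (x < c) & (x >= c - h)
    right = (x >= c) & (x <= c + h)
    return side_fit(right) - side_fit(left)
```

The manipulation-robust bounds in the text are obtained by re-running such an estimator after excluding the units tagged as manipulators just above the cutoff.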
D
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions on the learning trajectory across episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance, Target Approximation Error, is the error arising from the inexact minimization of the DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE, by finding methods to optimize the learning trajectory, or from TAE, by using methods such as averaging of the DQN parameters. Dropout methods can combine these two solutions, which target different sources of variance: they can achieve a consistent learning trajectory and, through the averaging that comes inherently with Dropout, more exact DQN parameters.
In the experiments, we measured variance using the standard deviation of the average score collected from many independent learning trials.
We compared the variance of DQN and Dropout-DQN visually and numerically, as Figure 3 and Table I show.
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. CARTPOLE was selected due to its widespread use and the ease with which DQN can achieve a steady-state policy.
Figure 3: Dropout DQN with different Dropout methods in CARTPOLE environment. The bold lines represent the average scores obtained over 10 independent learning trials, while the shaded areas indicate the range of the standard deviation.
A
In this task, different graph signals $\mathbf{X}_i$, defined on the same adjacency matrix $\mathbf{A}$, must be classified with a label $\mathbf{y}_i$.
In particular, we used a Temporal Convolution Network [57] with 7 residual blocks with dilations $[1, 2, 4, 8, 16, 32, 64]$, kernel size 6, causal padding, and dropout probability 0.3.
In each experiment we adopt a fixed network architecture, MP(32)-P(2)-MP(32)-P(2)-MP(32)-AvgPool-Softmax, where MP(32) stands for a MP layer as described in (1) configured with 32 hidden units and ReLU activations, P(2) is a pooling operation with stride 2, AvgPool is a global average pooling operation on all the remaining graph nodes, and Softmax indicates a dense layer with Softmax activation.
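A rough NumPy sketch of one message-passing block and the AvgPool–Softmax readout (the mean-degree normalization and the weight shapes are assumptions, not the exact operator defined in (1)):

```python
import numpy as np

def mp_layer(X, A, W):
    # one message-passing block: average neighbor features, apply a
    # linear map, then a ReLU activation
    deg = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(deg, 1.0)   # mean aggregation (an assumption)
    return np.maximum(A_norm @ X @ W, 0.0)

def avg_pool_softmax(X, W_out):
    # global average pooling over the remaining nodes, then a dense
    # layer with Softmax activation
    logits = X.mean(axis=0) @ W_out
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

Stacking three `mp_layer` calls with pooling operations in between, followed by `avg_pool_softmax`, mirrors the MP(32)-P(2)-MP(32)-P(2)-MP(32)-AvgPool-Softmax pipeline.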
$= \mathrm{MP}(\mathbf{X}_j, \mathbf{A}; \boldsymbol{\Theta}_{\mathrm{MP}})$
We use the same architecture adopted for graph classification, with the only difference that each pooling operation is now implemented with stride 4: MP(32)-P(4)-MP(32)-P(4)-MP(32)-AvgPool-Softmax.
D
Another advantage is that the proposed method does not require a predefined architecture but instead supports arbitrary network architectures.
To study the sampling process, we analyze the variability of the generated data as well as different sampling modes in the next experiment.
Imitation learning performance (in accuracy [%]) of different data sampling modes on Soybean. NRFI achieves better results than random data generation. When optimizing the selection of the decision trees, the performance is improved due to more diverse sampling.
In the next step, the imitation learning performance of the sampling modes is evaluated. The results are shown in Table 3.
In the next experiment, we study the effects of training with original data, NRFI data, and combinations of both. For that, the
A
Considering the convergence rate for $\widehat{\sigma}_{\pm}^{V}$ and further studying the joint behavior go beyond the aims of this article.
To convey the ideas and steps of the proof, and for the reader's convenience, a proof of Proposition 1 for OBM is provided in Appendix B as an introduction to the proof of Proposition 4.
Appendix B is an introduction to Appendix C: some of the main ideas are already given in this section through a proof of the convergence (without rates) towards the local time of the statistics.
As already mentioned, in the case of SBM, Proposition 1 follows from [40, Proposition 2] (with $T=1$) and the scaling property (A.1) in Appendix A.1.
We first deal with the convergence in probability to the local time in Proposition 1, which was already known for SBM. Another proof of Proposition 1 is also given in Appendix B.
D
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understanding of policy optimization remains rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, until recently it remained unclear whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a $\sqrt{d^2H^3T}$-regret up to logarithmic factors.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting.
Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
step, which is commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), lacks such a notion of robustness.
A
MobileNetV2 (Sandler et al., 2018a) extends this concept by introducing additive skip connections and bottleneck layers.
MobileNetV3 (Howard et al., 2019) extends this even further by also incorporating the neural architecture search (NAS) proposed in MnasNet (Tan et al., 2018).
Wu et al. (2018a) performed mixed-precision quantization using similar NAS concepts to those used by Liu et al. (2019a) and Cai et al. (2019).
Tan and Le (2019) proposed EfficientNet which employs NAS for finding a resource-efficient architecture as a key component.
In MnasNet (Tan et al., 2018), a RNN controller is trained by also considering the latency of the sampled DNN architecture measured on a real mobile device.
A
One way to obtain an indication of a projection’s quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scatterplots based on cluster-related patterns.
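As an example of such a scalar score, here is a minimal sketch of Normalized Stress computed from precomputed pairwise-distance matrices (the exact normalization used in [7] may differ):

```python
import numpy as np

def normalized_stress(D_high, D_low):
    # scalar projection-quality score: squared mismatch between the
    # pairwise distances in the original space (D_high) and in the
    # projection (D_low), normalized by the original distances
    return ((D_high - D_low) ** 2).sum() / (D_high ** 2).sum()
```

A perfect distance-preserving projection scores 0; larger values indicate greater distortion.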
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and none of them appears to have a clear advantage over the others, we pick one with good values for all the rest of the quality metrics (i.e., greater than 40%). The overview in Figure 7(a) shows the selected projection with three clear clusters of varying sizes (marked with C1, C2, and C3). However, the labels seem to be mixed in all of them. That means either the projections are not very good, or the labels are simply very hard to separate. By analyzing the Shepard Heatmap (Figure 7(b)), it seems that there is a distortion in how the projection represents the original N-D distances: the darker cells of the heatmap are above the diagonal and concentrated near the origin, which means that the lowest N-D distances (up to 30% of the maximum) have been represented in the projection with a wide range of 2-D distances (up to 60% of the maximum). While it may be argued that the data is too spread in the projection, we must always consider that t-SNE’s goal is not to preserve all pairwise distances, but only close neighborhoods. The projection has used most of its available 2-D space to represent (as best as possible) the smallest N-D distances, which can be considered a good trade-off for this specific objective. In the following paragraphs, we concentrate on some of the goals described in Subsection 4.3 and Subsection 4.4 for each of the three clusters.
While this might be useful for quick overviews or automatic selection of projections, a single score fails to capture more intricate details, such as where and why a projection is good or bad [27]. In contrast, local measures such as the projection precision score (pps) [18] describe the quality for each individual point of the projection, which can then be visualized as an extra layer on top of the scatterplot itself. These measures usually focus on the preservation of neighborhoods [28, 29, 30] or distances [27, 31, 32].
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main view (Figure 1(e)), and the projection can be switched at any time if the user is not satisfied with the initial choice. We also provide the mechanism for a selection-based ranking of the representatives. During the exploration of the projection, if the user finds a certain pattern of interest (i.e., cluster, shape, etc.), one possible question might be whether this specific pattern is better visible or better represented in another projection. After selecting these points, the list of top representatives can be ranked again to contain the projections with the best quality regarding the selection (as opposed to the best global quality, which is the default). The way this “selection-based quality” is computed is by adapting the global quality measures we used, taking advantage of the fact that they all work by aggregating a measure-specific quality computation over all the points of the projection. In the case of the selection-based quality, we aggregate only over the selected points to reach the final value of the quality measure, which is then used to re-rank the representatives.
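The selection-based quality described above reduces to aggregating the per-point quality over the selected subset instead of over all points; a minimal sketch (mean aggregation over precomputed per-point scores is a simplifying assumption, since each measure aggregates in its own way):

```python
def aggregate_quality(point_scores, selected=None):
    # global quality: mean of the per-point quality over all points;
    # selection-based quality: mean over the selected subset only,
    # which is then used to re-rank the representative projections
    idx = range(len(point_scores)) if selected is None else selected
    vals = [point_scores[i] for i in idx]
    return sum(vals) / len(vals)
```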
t-viSNE is similar to these works in its use of measures to guide the user’s exploration, but we use measures and mappings that are either specific to t-SNE’s algorithm or customized to be more useful in this scenario.
B
From the comparison of 3 extra experiments, we confirm that the adaptive graph update plays a positive role. Besides, the novel architecture with a weighted graph improves the performance on most of the datasets.
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of the GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes more cohesive with each update.
Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)–2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph.
To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4.
Figure 1: Framework of AdaGAE. $k_0$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity $k$. With the new graph, we re-train the GAE. These steps are repeated until convergence.
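The graph-update step can be sketched as a kNN affinity construction on the learned embedding, with the sparsity $k$ increased between epochs; a minimal NumPy version (the binary, symmetrized affinities are a simplification of the generative model in Eq. (7)):

```python
import numpy as np

def knn_graph(Z, k):
    # sparse kNN affinity graph on the learned embedding Z (rows are
    # samples); binary symmetrized affinities are a simplification
    n = Z.shape[0]
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(D[i])[1:k + 1]  # skip the point itself
        A[i, nearest] = 1.0
    return np.maximum(A, A.T)  # symmetrize
```

Between epochs, the framework would re-train the GAE on `knn_graph(embedding, k)` with a larger `k`, which is the alternation depicted in Figure 1.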
C
Importantly, Gregory et al. (2021) do not explicitly focus on inference, and their analysis requires much stronger assumptions to obtain the oracle property. For example, these assumptions include normally distributed errors independent of $X$, as well as a bounded support of $X$. Similar to our framework, a large set of basis functions is chosen in their work, such as polynomials or splines, to approximate the components $f_1$ and $f_{-1}$. One distinctive feature of our work compared to Gregory et al. (2021) is that we allow the degree of approximating functions to grow to infinity with increasing sample size.
In general, the performance of the estimator and the confidence bands depends on the specification of the cubic B-splines used to approximate the target functions (in terms of the number of knots). In our simulations, we observe that the quality of estimation and the width of the confidence bands change only moderately when varying the number of knots. Overall, we conclude that our results are relatively robust to the specifications of the B-splines. We find that the performance of the point estimator for the target component $f_j(x_j)$ relies on variable selection through the lasso estimator. In line with our expectations, the lasso estimator is better at selecting the sparse components of the B-splines in larger samples and settings with weaker correlation structure, i.e., $\rho = 0$. In our simulations, we chose the specification of the B-splines estimator based on preliminary evidence. We also investigated the performance of a cross-validated choice (results available upon request). While the latter lacks theoretical justification, we found it to be relatively conservative, resulting in wide confidence bands. During our simulation experiments, we also investigated the performance of an alternative lasso learner that is based on a cross-validated choice of the penalty term. We found that the cross-validated penalty choice was computationally more expensive and inferior in terms of estimation performance, as indicated by its selecting very few of the sparse components.
The primary aim of our paper is to provide a method for constructing uniformly valid inference and confidence bands in sparse high-dimensional models in the sieve framework. In doing so, we contribute to the growing literature on high-dimensional inference in additive models, especially that on debiased/double machine learning. The double machine learning approach (Belloni et al., 2014b; Chernozhukov et al., 2018) offers a general framework for uniformly valid inference in high-dimensional settings. Similar methods, such as those proposed by van de Geer et al. (2014) and Zhang and Zhang (2014), have also produced valid confidence intervals for low-dimensional parameters in high-dimensional linear models. These studies are based on the so-called debiasing approach, which provides an alternative framework for valid inference. The framework entails a one-step correction of the lasso estimator, resulting in an asymptotically normally distributed estimator of the low-dimensional target parameter. For a survey on post-selection inference in high-dimensional settings and its generalizations, we refer to Chernozhukov et al. (2015b).
In a recent study based on the previously mentioned debiasing approach by Zhang and Zhang (2014), Gregory et al. (2021) propose an estimator for the first component $f_1$ in a high-dimensional additive model in which the number of additive components $p$ may increase with the sample size. The estimator is constructed in two steps. The first step entails constructing an undersmoothed estimator based on near-orthogonal projections with a group lasso bias correction. A debiased version of the first-step estimator is used to generate pseudo responses $\hat{Y}$. These pseudo responses are then used in the second step, which involves applying a smoothing method to a nonparametric regression problem with $\hat{Y}$ and covariates $X_1$. Under sparsity assumptions concerning the number of nonzero additive components, the so-called oracle property is shown. Accordingly, the proposed estimator in Gregory et al. (2021) is asymptotically equivalent to the oracle estimator, which is based on the true functions $f_2, \dots, f_p$. The asymptotic properties of the oracle estimator are well understood and carry over to the proposed debiasing estimate, including the methodology for constructing uniformly valid confidence intervals for $f_1$.
A procedure explicitly addressing the construction of uniformly valid confidence bands for the components in high-dimensional additive models has been developed by Lu et al. (2020). The authors emphasize that achieving uniformly valid inference in these models is challenging due to the difficulty of directly generalizing the ideas from the fixed-dimensional case. Whereas confidence bands in the low-dimensional case are mostly built using kernel methods, the estimators for high-dimensional sparse additive models typically rely on sieve estimators based on dictionaries. To derive their results, Lu et al. (2020) combine both kernel and sieve methods to draw upon the advantages of each, resulting in a kernel-sieve hybrid estimator. This is a two-step estimator with tuning parameters for kernel estimation and sieves estimation, such as the bandwidth and penalization levels, which must be chosen by cross-validation. Because of the local structure of the hybrid estimator, the framework
D
Figure 3(a) is a t-SNE projection [61] of the instances (MDS [22] and UMAP [31] are also available, in order to offer users various perspectives on the same problem, based on the DR guidelines from Schneider et al. [47]).
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model exploration allows us to reduce the size of the stacking ensemble, discard any unnecessary models, and observe the predictions of the models collectively (view (d));
Figure 3: The data space projection with the importance of each instance measured by the accuracy achieved by the stack models (a). The parallel coordinates plot view for the exploration of the values of the features (b); a problematic case is highlighted in red with null values (‘4’ has no meaning for Ca). (c.1) shows the brushed instance from the selection in (b), and (c.2) a problematic point that causes trouble for the stacking ensemble. (c.3) indicates the various functionalities that StackGenVis is able to perform for instances.
The point size is based on the predictive accuracy calculated using all the chosen models, with smaller size encoding higher accuracy value.
Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the current stored stack. Univariate-, permutation-, and accuracy-based feature selection are available, along with any combination of them (a). (b) displays the normalized importance color legend. The per-model feature accuracy is depicted in (c), and (d) presents the user’s interaction to disable specific features for all the models (only seven features are shown here). This could also happen on an individual basis for every model.
C
Based on Theorem 4.3 and Lemma 4.4, we establish the following corollary, which characterizes the global optimality and convergence of the TD dynamics $\theta^{(m)}(k)$ in (3.3).
Under the same conditions as in Theorem 4.3, it holds with probability at least $1-\delta$ that
where $C_* > 0$ is a constant depending on $D_{\chi^2}(\underline{\nu} \,\|\, \nu_0)$, $B_1$, $B_2$, and $B_r$. Moreover, it holds with probability at least $1-\delta$ that
where $C_* > 0$ is a constant depending on $D_{\chi^2}(\bar{\nu} \,\|\, \nu_0)$, $B_1$, $B_2$, and $B_r$. Moreover, it holds with probability at least $1-\delta$ that
Under Assumptions 4.1, 4.2, and 6.3, it holds for $\eta = \alpha^{-2}$ that
A
While not formalised by the original authors, one could interpret the area under the curves as an intuitive notion of effect ‘density’, as opposed to sparsity: an input with a sparse (dense) effect will have a relatively high (low) area under the p-value curve, because many parts of the p-value function will have relatively high (low) values.
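The area under a p-value curve can be computed with a simple trapezoid rule; a minimal sketch (the grid of evaluation points is an assumption):

```python
def pvalue_auc(xs, ys):
    # area under a p-value curve via the trapezoid rule; per the
    # interpretation above, a high area suggests a sparse effect and
    # a low area a dense one
    area = 0.0
    for i in range(len(xs) - 1):
        area += 0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
    return area
```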
Predicting a quantity for the long time scales which matter for the climate is a hard task, with a great degree of uncertainty involved. Many efforts have been undertaken to model and control this and other uncertainties, such as the development of standardized scenarios of future development, called Shared Socio-economic Pathways (SSPs) [22, 30] or the use of model ensembles to tackle the issue of model uncertainty. Given also the relative opaqueness and the complexity of IAMs, post-hoc diagnostic methods have been used, for instance with the purpose of performing Global Sensitivity Analysis. In fact, GSA methods can provide fundamental information to policymakers in terms of the relevance of specific factors over model outputs [17]. Moreover, the specific methodology employed in the paper [4] is able to detect both main and interaction effects with a very parsimonious experimental design, and to do so in the case of finite changes for the input variables.
A fundamental tool to understand and explore the complex dynamics that regulate this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards the use of coupled climate-energy-economy models, also known as Integrated Assessment Models (IAMs). These are pieces of software that integrate climate, energy, land and economic modules to generate predictions about decision variables for a given period (usually, the next century). They belong to two very different paradigms [see e.g. 38]: detailed process models, which have provided major input to climate policy making and assessment reviews such as those of the IPCC, and benefit-cost models such as the Dynamic Integrated Climate-Economy (DICE) model [20], for which the economics Nobel prize was awarded in 2018. A classic variable of interest in this kind of analysis is the level of future $CO_2$ emissions, since these directly affect climatic variables such as global average temperature.
Some fundamental pieces of knowledge are still missing: given a dynamic phenomenon such as the evolution of $CO_2$ emissions over time, a policymaker is interested in whether the impact of an input factor varies across time, and how. Moreover, given a model ensemble with different modelling choices, and thus different impacts of identical input factors across models, a key piece of information for policymakers is whether the evidence provided by the ensemble is significant, in the sense that it is ‘higher’ than the natural variability of the model ensemble. In this specific setting we do not want just a ‘global’ idea of significance; we also want to explore its temporal sparsity (e.g., whether the impact of a specific input variable is significant in a given timeframe but fails to be ‘detectable’ in the model ensemble after a given date). Our aim in the present work is thus threefold: we introduce a way to express sensitivity that accounts for time-varying impacts, we assess the significance of such sensitivities, and we explore the temporal sparsity of that significance.
For this paper we focus on $CO_2$ emissions as the main output of an ensemble of coupled climate-economy-energy models. Each model-scenario produces a vector of $CO_2$ emissions defined from the year 2010 to 2090 at 10-year time intervals. This discretization of the output space is in any case arbitrary, since $CO_2$ emissions exist at every time instant in the interval $T=[2010, 2090]$. A thorough description of the dataset used as a testbed for the application of the methods described above can be found in [17]. That was one of the first papers to apply global sensitivity techniques to an ensemble of climate-economy models, thus addressing both parametric and model uncertainty. We use the scenarios developed in [17], which involve five models (IMAGE, IMACLIM, MESSAGE-GLOBIOM, TIAM-UCL and WITCH-GLOBIOM) that provide output data until the end of the interval $T$.
D
Second, we mentioned briefly that, when there are multiple observations, one can apply IP-SVD on the sample covariance tensor.
The STEFA model is related to a list of tensor response regression models (Raskutti et al., 2019) with a low-rank coefficient tensor.
Last but not least, there is a great need to develop new methods that use STEFA in tensor regression or other tensor-data applications.
The STEFA model is to MMC tensor regression as projected PCA is to reduced-rank regression.
STEFA is a generalization of the semi-parametric vector factor model (Fan et al., 2016) to the tensor data.
B
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point.
Empirical results on deep learning show that with the same large batch size, SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different batch sizes.
C
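As a rough illustration of the idea behind normalized-gradient methods such as SNGM discussed above, here is a minimal toy sketch in which the gradient is normalized before entering the momentum buffer, so the step length does not depend on the gradient's scale. This is an assumption-laden sketch, not the exact SNGM update from the paper:

```python
import numpy as np

def sngm_step(w, grad, u, lr=0.01, beta=0.9, eps=1e-12):
    # Fold the *normalized* gradient into the momentum buffer, so the step
    # length is insensitive to the raw gradient scale (and hence batch size).
    u = beta * u + grad / (np.linalg.norm(grad) + eps)
    return w - lr * u, u

# Toy check on f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([10.0, -8.0])
u = np.zeros_like(w)
for _ in range(400):
    w, u = sngm_step(w, w, u)
print(np.linalg.norm(w))
```

Because the update direction has bounded norm, the iterate approaches the minimizer at a roughly constant rate and then oscillates in a small ball whose radius is controlled by the learning rate and momentum.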
The second is vector-on-tensor regression in which we have a vector response (Miranda et al., 2018).
To evaluate the estimation performance on the regression function, we define the integrated squared error (ISE) of the regression function as
In this section, we propose an interpretable nonparametric model for the regression function $m$.
In this work, we focus on the scalar-on-tensor regression model, and we denote the regression function by $m$.
We propose an estimator of the regression function $m$ and a corresponding estimation algorithm.
C
$\left\|\sum_{l=\tau}^{k-1}\boldsymbol{\phi}_h^l\left[V_{h+1}^k(s_{h+1}^l)-\mathbb{P}_h^l V_{h+1}^k(s_h^l,a_h^l)\right]\right\|_{(\Lambda_h^k)^{-1}}\le CdH\sqrt{\log[2(c_\beta+1)dW/p]},\quad\forall(k,h)\in\mathcal{E}\times[H].$
Next we proceed to derive the dynamic regret bounds for two cases: (1) local variations are known, and (2) local variations are unknown.
We develop the LSVI-UCB-Restart algorithm and analyze its dynamic regret bound for both cases where local variations are known or unknown, assuming the total variation is known. We define local variations (Eq. (2)) as the change in the environment between two consecutive epochs instead of the total change over the entire time horizon. When local variations are known, LSVI-UCB-Restart achieves $\tilde{O}(B^{1/3}d^{4/3}H^{4/3}T^{2/3})$ dynamic regret, which matches the lower bound in $B$ and $T$, up to polylogarithmic factors. When local variations are unknown, LSVI-UCB-Restart achieves $\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})$ dynamic regret.
By applying a similar proof technique as Theorem 3, we can derive the dynamic regret within one epoch when local variations are unknown.
Now we derive the dynamic regret bounds for LSVI-UCB-Restart, first introducing additional notation for local variations. We let
A
Existing works, including [31, 32], also discuss sample complexity bounds for the projected Wasserstein distance.
As suggested in [23], the power of the MMD test with the median heuristic decays quickly to zero as the dimension $d$ increases.
Except for the second-order moment term, the acceptance region does not depend on the dimension of the support of distributions, but only on the sample size and the dimension of projected spaces.
[31, 6] find the worst-case direction that maximizes the Wasserstein distance between projected sample points in one-dimension.
However, the bound presented in [31] depends on the input dimension $d$ and focuses on the case $k=1$ only.
D
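In one dimension, the Wasserstein-1 distance between two equal-size empirical measures reduces to comparing sorted samples, which makes the projected ($k=1$) distance discussed above cheap to compute. A minimal sketch (`projected_w1` and the chosen direction are illustrative names, not from the cited works):

```python
import numpy as np

def projected_w1(X, Y, theta):
    # Project both samples onto the unit direction theta; in one dimension the
    # Wasserstein-1 distance between equal-size empirical measures is the mean
    # absolute difference of the sorted projected samples.
    theta = theta / np.linalg.norm(theta)
    return np.mean(np.abs(np.sort(X @ theta) - np.sort(Y @ theta)))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 5))
Y = rng.normal(1.0, 1.0, size=(500, 5))   # mean shifted by 1 along every axis
theta = np.ones(5)                         # direction aligned with the shift
print(projected_w1(X, Y, theta))           # close to sqrt(5) ≈ 2.24
```

Finding the *worst-case* direction, as in the cited works, would require maximizing this quantity over theta rather than fixing it.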
I think I would make what these methods are doing clearer. They aren't really separating into nuisance and independent components only; they are also throwing away the nuisance.
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and correlated components $Z$, a.k.a. nuisance variables, which encode the detail information not stored in the independent components. A series of works starting from [beta] aims to achieve that by regularizing the models, up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, whether supervised, semi-supervised or unsupervised; in the Appendix we present such implementations), where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation; however, its reconstruction will be of low quality, as it will only be able to generate the information captured by the disentangled factors while averaging the details. For example, in Figure 1, the model uses $\beta$-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1, that means transforming Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal.
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$ while maintaining the semantic information captured in $C$ to obtain the final reconstruction (Image 1d in our example).
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high-quality reconstruction while ignoring the latent variables related to the disentangled factors. This phenomenon is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
D
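The second-stage detail injection described above relies on AdaIN-style normalization: features are normalized per channel and then shifted and scaled. In the actual model the scale and shift statistics are predicted by a network conditioned on the nuisance code $Z$; in this minimal sketch they are passed in directly as placeholders:

```python
import numpy as np

def adain(content, scale, shift, eps=1e-5):
    # Normalize each channel of a C x H x W feature map to zero mean and unit
    # std, then modulate with per-channel scale and shift (standing in for the
    # statistics a network would predict from the nuisance code Z).
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    return scale[:, None, None] * normalized + shift[:, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))              # C x H x W feature map
out = adain(feat, scale=np.full(8, 2.0), shift=np.full(8, 1.0))
print(out.mean(), out.std())                   # roughly 1.0 and 2.0
```

After modulation, each channel carries the injected statistics while its spatial structure (the semantic content) is preserved.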
Forward selection is a simple, greedy feature selection algorithm (Guyon \BBA Elisseeff, \APACyear2003). It is a so-called wrapper method, which means it can be used in combination with any learner (Guyon \BBA Elisseeff, \APACyear2003). The basic strategy is to start with a model with no features, and then add the single feature to the model which is “best” according to some criterion. One then proceeds to sequentially add the next “best” feature at every step until some stopping criterion is met. Here we consider forward selection based on the Akaike Information Criterion (AIC). In order to impose nonnegativity of the coefficients, we will use a slightly modified procedure which we will call nonnegative forward selection (NNFS). This procedure can be described as follows:
Excluding the interpolating predictor, nonnegative ridge regression produced the least sparse models. This is not surprising considering it performs view selection only through its nonnegativity constraints. Its high FPR in view selection appeared to negatively influence its test accuracy, as there was generally at least one sparser model with better accuracy in both our simulations and real data examples. Although nonnegative ridge regression shows that the nonnegativity constraints alone already cause many coefficients to be set to zero, if one assumes the true underlying model to be sparse, one should probably choose one of the meta-learners specifically aimed at view selection.
Consider the view corresponding to the largest reduction in AIC. If the coefficients (excluding the intercept) of the resulting model are all nonnegative, update the model and repeat starting at step 2.
If some of the coefficients (excluding the intercept) of the resulting model are negative, remove the view (from step 3) from the list of candidates and repeat starting at step 3.
In MVS, the meta-learner takes as input the matrix of cross-validated predictions $\bm{Z}$. To perform view selection, the meta-learner should be chosen such that it returns (potentially) sparse models. The matrix $\bm{Z}$ has a few special characteristics which can be exploited, and which distinguish it from standard settings. First, assuming that the $\hat{f}_v$, $v=1,\dots,V$, are probabilistic classifiers (such as logistic regression models), the features in $\bm{Z}$ are all in the same range $[0,1]$. Second, the dispersion of each feature contains information about the magnitude of the class probabilities predicted by the corresponding base classifier. To preserve this information it is reasonable to omit the usual step of standardizing all features to zero mean and unit variance before applying penalized regression. Third, since the features in $\bm{Z}$ correspond to predictions of models trained using the same outcomes $\bm{y}$, it is likely that at least some of them are highly correlated. Different penalization methods lead to different behavior in the presence of highly correlated features (Friedman \BOthers., \APACyear2009). Finally, it is sensible to constrain the parameters of the meta-learner to be nonnegative (Breiman, \APACyear1996; Ting \BBA Witten, \APACyear1999; Van Loon \BOthers., \APACyear2020). There are several arguments for inducing such constraints. One intuitive argument is that a negative coefficient leads to problems with interpretation, since it would suggest that if the corresponding base classifier predicts a higher probability of belonging to a certain class, then the meta-learner would translate this into a lower probability of belonging to that same class.
Additionally, from a view selection perspective, nonnegativity constraints are crucial in preventing unimportant views from entering the model (Van Loon \BOthers., \APACyear2020).
B
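The nonnegative forward selection (NNFS) procedure described above can be sketched for a Gaussian linear model as follows. This is a simplified sketch: `nnfs`, `ols`, and the particular AIC formula are illustrative choices, and the real meta-learner operates on cross-validated predictions rather than raw features:

```python
import numpy as np

def aic(y, yhat, n_features):
    # AIC for a Gaussian linear model with n_features slopes plus an intercept.
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * (n_features + 1)

def ols(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A @ beta

def nnfs(X, y):
    # Greedily add the feature giving the largest AIC reduction; if the best
    # candidate's fit turns any slope negative, permanently discard that
    # candidate and try the next best, stopping when AIC no longer improves.
    selected, candidates = [], list(range(X.shape[1]))
    best = aic(y, np.full(len(y), y.mean()), 0)
    improved = True
    while improved and candidates:
        improved = False
        scored = sorted(
            (aic(y, ols(X[:, selected + [j]], y)[1], len(selected) + 1), j)
            for j in candidates
        )
        for a, j in scored:
            if a >= best:
                break
            beta, _ = ols(X[:, selected + [j]], y)
            if np.all(beta[1:] >= 0):
                selected.append(j)
                candidates.remove(j)
                best, improved = a, True
                break
            candidates.remove(j)  # negative coefficient: drop this view
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + 0.1 * rng.normal(size=200)
print(nnfs(X, y))
```

On this toy data the two truly relevant features (with positive coefficients) should be selected, while features whose inclusion would require a negative slope are discarded.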
Comparison with Oh & Iyengar [2019]: The Thompson Sampling based approach is inherently different from our optimism in the face of uncertainty (OFU) style algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010], but has a multiplicative $\kappa$ factor in the bound.
Comparison with Faury et al. [2020]: Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality proposed by them, their results do not seem to extend directly to the MNL setting without significantly more work. Further, our algorithm CB-MNL performs an optimistic parameter search for making decisions instead of using a bonus term, which allows for a cleaner and shorter analysis.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as $\tilde{O}(d\sqrt{T}+\kappa)$, significantly improving the theoretical performance over existing algorithms where $\kappa$ appears as a multiplicative factor in the leading term. We also leverage a self-concordance [Bach, 2010] like relation for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of $\kappa$ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
In this work, we proposed an optimistic algorithm for learning under the MNL contextual bandit framework. Using techniques from Faury et al. [2020], we developed an improved technical analysis to deal with the non-linear nature of the MNL reward function. As a result, the leading term in our regret bound does not suffer from the problem-dependent parameter $\kappa$. This contribution is significant as $\kappa$ can be very large (refer to Section 1.2). For example, for $\kappa=O(\sqrt{T})$, the results of Oh & Iyengar [2021, 2019] suffer $\tilde{O}(T)$ regret, while our algorithm continues to enjoy $\tilde{O}(\sqrt{T})$ regret. Further, we also presented a tractable version of the decision-making step of the algorithm by constructing a convex relaxation of the confidence set.
CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward models, both approaches may not follow similar trajectory but may have overlapping analysis styles (see Filippi et al. [2010] for a short discussion).
A
The analytical requirements (R1–R5) originate from the analysis of the related work in Section 2, including the three analytical needs from Park et al. [PNKC21], the three key decisions from Wang et al. [WMJ∗19], and the five sub-steps from Li et al. [LCW∗18].
Also, our own experiences played a vital role, for instance with VA tools for ML such as t-viSNE [CMK20] and StackGenVis [CMKK21], and with recently conducted literature reviews [CMJK20, CMJ∗20].
The use of parallel coordinates plots [ID87] is rather prominent for the visualization of automatic hyperparameter tuners such as HyperOpt [BKE∗15]. Most of the time, less interactive visualizations have been developed for monitoring automatic frameworks [ASY∗19, GSM∗17, KKP∗18, LLN∗18, LTKS19, TBCT∗18]. Visualizations arranged into dashboard-styled interfaces are the preferred norm for managing ML experiments and their associated models [SKJ∗17, TMB∗18, WRW∗20, WWO∗20].
Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with the exception of more general visualization approaches such as EAVis [KE05, Ker06] and interactive evolutionary computation (IEC) [Tak01]. To the best of our knowledge, there is no literature describing the use of VA in hyperparameter tuning of evolutionary optimization (as defined in Section 1) with the improvement of performance based on majority-voting ensembles.
There are relevant works that involve the human in interpreting, debugging, refining, and comparing ensembles of models [DCCE19, LXL∗18, NP20, SJS∗18, XXM∗19, ZWLC19]. These papers use bagging [Bre01] and boosting [CG16, FSA99, KMF∗17] techniques for ranking and identifying the best combination of models in different application scenarios. StackGenVis [CMKK21] is a VA system for composing powerful and diverse stacking ensembles [Wol92] from a pool of pre-trained models. On the one hand, we also enable the user to assess the various models and build his/her own ensemble of models. On the other hand, we support the process of generating new models through genetic algorithms and highlight the necessity for the best and most diverse models in the simplest possible voting ensemble. Finally, our approach is model-agnostic and generalizable, since we use both bagging and boosting techniques along with both NNs and simpler models [LXL∗18, NP20, ZWLC19].
A
The stochastic blockmodel (SBM) (SBM, ) is one of the most used models for community detection in which all nodes in the same community are assumed to have equal expected degrees. Some recent developments of SBM can be found in (abbe2017community, ) and references therein. Since in empirical network data sets, the degree distributions are often highly inhomogeneous across nodes, a natural extension of SBM is proposed: the degree-corrected stochastic block model (DCSBM) (DCSBM, ) which allows the existence of degree heterogeneity within communities. DCSBM is widely used for community detection for non-mixed membership networks (zhao2012consistency, ; SCORE, ; cai2015robust, ; chen2018convexified, ; chen2018network, ; ma2021determining, ). MMSB constructed a mixed membership stochastic blockmodel (MMSB) which is an extension of SBM by letting each node have different weights of membership in all communities. However, in MMSB, nodes in the same communities still share the same degrees. To overcome this shortcoming, mixedSCORE proposed a degree-corrected mixed membership (DCMM) model. DCMM model allows that nodes for the same communities have different degrees and some nodes could belong to two or more communities, thus it is more realistic and flexible. In this paper, we design community detection algorithms based on the DCMM model.
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM, ) to mixed membership networks and call this proposed method Mixed-SLIM. As mentioned in (SLIM, ), the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time of a random walk. (SLIM, ) combined the SLIM with the spectral method based on DCSBM for community detection, and the SLIM method outperforms state-of-the-art methods on many real and simulated datasets. Therefore, it is worth extending this method to mixed membership networks. Numerical results on simulations and substantial empirical datasets in Section 5 show that our proposed Mixed-SLIM indeed enjoys satisfactory performance when compared to the benchmark methods for both the community detection problem and the mixed membership community detection problem.
In this section, first, we investigate the performance of Mixed-SLIM methods on the problem of mixed membership community detection via synthetic data. Then we apply some real-world networks with true label information to test the Mixed-SLIM methods' performance for community detection, and we apply the SNAP ego-networks (leskovec2012learning, ) to investigate their performance for mixed membership community detection. We omit the numerical results of the synthetic data experiments for Mixed-SLIM$_{\tau}$, Mixed-SLIM$_{appro}$ and Mixed-SLIM$_{\tau appro}$ since they behave similarly to Mixed-SLIM. Finally, we provide a discussion on the choice of $T$ for Mixed-SLIM$_{appro}$.
In this section, we first introduce the main algorithm mixed-SLIM which can be taken as a natural extension of the SLIM (SLIM, ) to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm.
This paper makes one major contribution: extending SLIM methods to mixed membership community detection under the DCMM model. When dealing with large networks in practice, we apply Mixed-SLIM$_{appro}$ and its regularized version Mixed-SLIM$_{\tau appro}$. We showed the estimation consistency of the regularized version Mixed-SLIM$_{\tau}$ under the DCMM model. Both simulation and empirical results for community detection and mixed membership community detection demonstrate that the Mixed-SLIM methods enjoy satisfactory performance and perform better than most of the benchmark methods.
A
Detommaso et al. (2018); Han and Liu (2018); Chen et al. (2018); Liu et al. (2019); Gong et al. (2019); Wang et al. (2019); Zhang et al. (2020); Ye et al. (2020)
we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the number of particles goes to infinity.
use the empirical distribution of the particles to approximate the probability measure, and the iterates are updated by pushing the particles in directions specified by the solution to a variational problem.
Departing from MCMC, where independent stochastic particles are used, it leverages interacting deterministic particles to approximate the probability measure of interest. In the mean-field limit, where the number of particles goes to infinity, it can be viewed as the gradient flow of the KL-divergence with respect to a modified Wasserstein metric (Liu, 2017).
In other words, as the number of particles and the number of iterations both go to infinity, variational transport finds the global minimum of $F$.
C
A unit-specific covariate process, $\mathbf{Z}(t)=Z_{1:U}(t)$, has a value, $Z_u(t)$, for each unit, $u$.
Slots in this object encode the components of the SpatPOMP model, and can be filled or changed using the constructor function spatPomp() and various other convenience functions.
If any of the variables in the covariates data.frame is common among all units the user must supply the variable names as class ‘character’ vectors to the shared_covarnames argument of the spatPomp() constructor function.
Optionally, simulate can be made to return a class ‘data.frame’ object by supplying the argument format=‘data.frame’ in the call to simulate.
In spatPomp, covariate processes can be supplied as a class ‘data.frame’ object to the covar argument of the spatPomp() constructor function.
D
(i) choose four suitable data space slices, which are then used for evaluating the impact of each feature on particular groups of instances (Fig. 1(a));
After the feature selection phase, we use the graph view to transform the most contributing features (F4 in Fig. 5(e) and F18 in Fig. 6(a)).
(ii) in the exploration phase, choose subsets of features using diverse automatic feature selection techniques (see Fig. 1(b));
Various visualization techniques have been proposed for the task of feature selection, including correlation matrices [42, 43], radial visualizations [44, 45, 46], scatterplots [47], scatterplot matrices [48], feature ranking [49, 50, 51, 52, 53, 54, 55, 56], feature clustering [57], and dimensionality reduction (DR) [53, 58, 59]. The category of techniques most related to our work is feature ranking, since we use automatic feature selection techniques to rank the importance of the different features. For example, a VA tool called INFUSE [50] was designed to aid users in understanding how features are being ranked by automated feature selection techniques. It presents an aggregated view of results produced by automatic techniques, assisting the user in learning how these work and comparing their results across multiple algorithms. Similarly, Klemm et al. [60] propose an approach that performs regression analysis exhaustively between independent features and the target class. These approaches take into account the user's ability to identify patterns from analyzing the data (e.g., with the colors in a heatmap representation) or choose the feature subset by some quantitative metric. A few other VA systems have leveraged a balanced blending between automatic and visual feature selection techniques. RegressionExplorer [61] is one example for examining logistic regression models. Additionally, the exploration of linear relationships among features was studied by Barlowe et al. [62]. FeatureEnVi offers rather similar characteristics to the tools analyzed above. However, we combine several automatic feature selection techniques and statistical heuristics in cohesive visualizations for evaluating feature selection and feature extraction concurrently.
There are several different techniques for computing feature importance that produce diverse outcomes per feature. The tool should facilitate the visual comparison of alternative feature selection techniques for each feature (T2). Another key point is that users should have the ability to include and exclude features during the entire exploration phase.
B
We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. An orthogonal avenue for attacking bias mitigation is to use alternative architectures. Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured concepts, which have shown promising generalization capabilities [68, 44, 34, 24, 60]. Causality is another relevant line of research, where the goal is to uncover the underlying causal mechanisms [49, 45, 9, 2]. Discovery and usage of causal concepts is a promising direction for building robust systems. These areas have not been explicitly studied for their ability to overcome dataset bias.
This work was supported in part by the DARPA/SRI Lifelong Learning Machines program [HR0011-18-C-0051], AFOSR grant [FA9550-18-1-0121], and NSF award #1909696. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor.
In this set of experiments, we compare the resistance to explicit and implicit biases. We primarily focus on the Biased MNISTv1 dataset, reserving each individual variable as the explicit bias in separate runs of the explicit methods, while treating the remaining variables as implicit biases. To ease analysis, we compute the accuracy gap between the majority and minority groups i.e., majority/minority difference (MMD). Majority/minority groups are defined per variable e.g., for foreground color, green 1’s, red 2’s etc are placed in the majority group and the rest in the minority group and MMD simply computes the accuracy difference between the two groups for each variable. High MMDs indicate that the methods rely heavily on spurious patterns favoring the majority groups and thus fail on the minority groups.
In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias) or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicitly labeled for the methods, indicating that the explicit methods in general fail to mitigate implicit biases. Fig. 3(b) breaks down exploitation of explicit and implicit biases for each method. UpWt, GDRO and RUBi have low MMD values for explicit biases, but high MMD values for implicit biases, showing that they mitigate the explicit biases to some extent, but are not robust to the implicit biases. LNL and IRMv1 seem to be affected comparably by both explicit and implicit biases, and thus fail to improve upon the baseline as previously shown in Table 1. LFF has a relatively low range of MMDs and, as shown by the improvements in Table 1, the method outperforms others on Biased MNIST.
It is unknown how well the methods scale up to multiple sources of bias and a large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and with individual variables that lead to hundreds and thousands of groups for GQA, and compare them with the implicit methods. For Biased MNISTv1, we first sort the seven total variables in descending order of MMD (obtained by StdM) and then conduct a series of experiments. In the first experiment, the most exploited variable, distractor shape, is used as the explicit bias. In the second experiment, the two most exploited variables, distractor shape and texture, are used as explicit biases. This is repeated until all seven variables are used (the exact order is given in the Appendix). Note that conducting the seventh experiment entails annotating each instance with every possible source of bias. While this may not be realistic in practice, such a controlled setup will reveal if the explicit methods can generalize when they have complete information about every bias source.
A
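The majority/minority difference (MMD) statistic described above is simply an accuracy gap between two groups of instances. A minimal sketch, assuming per-instance correctness indicators and a boolean majority-group mask are available for each bias variable (`majority_minority_gap` is an illustrative name, not from the paper):

```python
import numpy as np

def majority_minority_gap(correct, majority_mask):
    # Accuracy on majority-group instances minus accuracy on minority-group
    # instances, for a single bias variable.
    correct = np.asarray(correct, dtype=float)
    majority_mask = np.asarray(majority_mask, dtype=bool)
    return correct[majority_mask].mean() - correct[~majority_mask].mean()

# Toy check: 90% accuracy on the majority group vs 50% on the minority group.
correct = [1]*9 + [0]*1 + [1]*5 + [0]*5
majority = [True]*10 + [False]*10
print(majority_minority_gap(correct, majority))  # → 0.4
```

A high gap indicates the model relies on spurious patterns that favor the majority group; computing it once per bias variable yields the per-variable boxplots discussed above.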
The GP correlation function is the squared exponential kernel, as recommended in [13, 38]. The trend function is a first-order regression model: $\mu(\mathbf{x}_0)=q(\mathbf{x}_0)^\top\boldsymbol{\beta}$ with $q(\mathbf{x}_0)=[1,\mathbf{x}_0^\top]$. The regression coefficients ($\boldsymbol{\beta}$) together with the covariance function parameters, such as the length-scales ($\boldsymbol{\delta}$) and process variance ($\sigma^2$), are estimated using the maximum likelihood method, see Section 2. These parameters are estimated separately for each element of the state vector using data from the initial time step, as elaborated in the previous section. The number of random features ($M$) determines the quality of the Monte Carlo approximation of the kernel, see Equation 18. In [43, 33] it is shown that using a number of random features proportional to the size of the training set ($\Omega(n)$) is sufficient to achieve performance comparable to that of the original kernel. We set $M=250$, which is higher than the size of the training sets in all our experiments and is also used in [21]. The number of realisations drawn from the approximate flow map is $S=100$.
Notice that $S$ is the number of simulated time series and gives, at any time $t$, $S$ samples whose mean ($\hat{x}_i(t)$) serves as the model output prediction. In the specific scenario of sampling from a normal population with unknown population variance, the distribution of $\hat{x}_i(t)$ conforms to a Student's t-distribution with $S-1$ degrees of freedom. The simulation time step is fixed and equal to $\Delta t = 0.01$. The ODE is solved on $[t_0, t_1]$ by the default solver of the R package deSolve [52].
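As an illustration of the random Fourier feature approximation used above (the 1-d input and unit kernel hyperparameters are illustrative assumptions, not the paper's settings), the inner products of the features should approximate the squared exponential kernel, and a random weighting of the features gives an approximate GP sample path:

```python
import numpy as np

def rff_features(X, lengthscale, sigma2, M, rng):
    """Map inputs to M random Fourier features whose inner products
    approximate the squared exponential (SE) kernel."""
    d = X.shape[1]
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(M, d))  # spectral draws
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)                # random phases
    return np.sqrt(2.0 * sigma2 / M) * np.cos(X @ omega.T + b)

rng = np.random.default_rng(0)
M = 2000
X = rng.normal(size=(50, 1))
Phi = rff_features(X, lengthscale=1.0, sigma2=1.0, M=M, rng=rng)
K_approx = Phi @ Phi.T                        # approximate kernel matrix
K_exact = np.exp(-0.5 * (X - X.T) ** 2)       # exact SE kernel (1-d inputs)
err = np.max(np.abs(K_approx - K_exact))
f_draw = Phi @ rng.normal(size=M)             # one approximate GP sample path
```

The approximation error shrinks as $M$ grows, which is why the paper chooses $M$ larger than all training set sizes.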
The accuracy of the time series prediction is measured via the mean absolute error (MAE) and root mean square error (RMSE) criteria. They are defined as
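A direct implementation of the two criteria (in Python, for illustration):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted series."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted series."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
```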
We proposed a novel data-driven approach for emulating deterministic complex dynamical systems implemented as computer codes. The output of such models is a time series and represents the evolving state of a physical phenomenon over time. Our method is based on emulating the short-time numerical flow map of the system and using draws of the emulated flow map in an iterative manner to perform one-step ahead predictions. The flow map is a function that returns the solution of a dynamical system at a certain time point, given initial conditions. In this paper, the numerical flow map is emulated via a GP and its approximate sample paths are generated with random Fourier features. The approximate GP draws are employed in the one-step ahead prediction paradigm, which results in a distribution over the time series. The mean and variance of that distribution serve as the time series prediction and the associated uncertainty, respectively. The proposed method is tested on several nonlinear dynamic simulators, such as the Lorenz, van der Pol, and Hindmarsh-Rose models. The results suggest that our approach can emulate those systems accurately and that the prediction uncertainty can capture the true trajectory with good accuracy. A future direction is to conduct quantitative studies such as uncertainty quantification and sensitivity analysis on computationally expensive dynamical simulators emulated by the method suggested in this paper.
We note that the Lorenz attractor cannot be predicted perfectly due to its chaotic behaviour. The vertical dashed blue lines indicate the "predictability horizon", defined as the time at which a change point occurs in the SD of the prediction [38]. The predictability horizon is obtained by applying the cpt.mean function implemented in the R package changepoint [28, 29] to the SD of predictions. This is depicted in Figure 4, illustrating the SD of predictions for the state variables $x_1$ (black), $x_2$ (red), and $x_3$ (green) obtained by Method 1 in the Lorenz system. The vertical dotted lines are the corresponding change points. The predictability horizon looks slightly better for Method 2. However, Method 1 appears to capture the uncertainty better in that it encompasses the whole trajectory and follows the fluctuations in the dynamics more tightly. This can be investigated using the coverage probability, defined as the percentage of times that the true model is within the $95\%$ uncertainty bounds. The coverage probability obtained by the two methods is given below.
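The coverage probability can be computed directly from the predicted means and standard deviations; a minimal sketch, assuming the 95% bounds are taken as mean ± 1.96 SD:

```python
import numpy as np

def coverage_probability(y_true, y_mean, y_sd, z=1.96):
    """Percentage of time points at which the true trajectory lies inside
    the mean +/- z * SD uncertainty band."""
    y_true, y_mean, y_sd = map(np.asarray, (y_true, y_mean, y_sd))
    inside = np.abs(y_true - y_mean) <= z * y_sd
    return 100.0 * np.mean(inside)
```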
Following the above procedure renders only one prediction of the time series. However, we wish to have an estimate of the uncertainty associated with the prediction. This can be achieved by repeating the above steps with different draws from the emulated flow map to obtain a distribution over the time series. The mean and variance of that distribution at a given time point serve as the model output prediction and the associated uncertainty there, respectively. More rigorously, let $\hat{x}_i(t)$ be the model output prediction corresponding to the $i$-th component of $\mathbf{F}$, and $SD(\hat{x}_i(t))$ represent the corresponding standard deviation at time $t = t_1, \ldots, T$, with $T$ being the final time of the simulation. We can write
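The iterative one-step-ahead scheme can be sketched as follows, using a toy stand-in for the draws of the emulated flow map (the decay map and its perturbation are purely illustrative assumptions):

```python
import numpy as np

def predict_time_series(flow_draws, x0, n_steps):
    """Iterate S independent draws of the emulated flow map to produce S
    trajectories; their pointwise mean serves as the prediction and their
    pointwise SD as the uncertainty.  `flow_draws` is a list of S callables
    mapping the current state to the next state."""
    S = len(flow_draws)
    paths = np.empty((S, n_steps + 1))
    for s, g in enumerate(flow_draws):
        x = x0
        paths[s, 0] = x
        for t in range(1, n_steps + 1):
            x = g(x)            # one-step-ahead move under draw s
            paths[s, t] = x
    return paths.mean(axis=0), paths.std(axis=0, ddof=1)

# toy stand-in for S = 100 draws of an emulated flow map: exponential decay
# with a small draw-specific perturbation (illustrative, not a GP draw)
rng = np.random.default_rng(1)
draws = [(lambda x, e=rng.normal(0.0, 0.01): 0.9 * x + e) for _ in range(100)]
mean_path, sd_path = predict_time_series(draws, x0=1.0, n_steps=50)
```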
A
One of the classical and important problems in statistics is testing the independence between two or more components of a random vector. Testing for mutual independence, which characterizes the structural relationships between random variables and is strictly stronger than pairwise independence, is a fundamental task in inference. Independent component analysis of a random vector, which consists of searching for a linear transformation that minimizes the statistical dependence between its components, is one of the fields in which mutual independence plays a central role; for instance, see
implemented in the R package dHSIC [Pfister and Peters (2017)]. The test based on ranks of distances introduced in Heller, Heller and Gorfine (2013)
Zhang, Gao and Ng (2023) proposed a new class of independence measures based on the maximum mean discrepancy in Reproducing Kernel Hilbert Space. In the literature, additional methods for testing the independence of two multidimensional random variables have emerged, including those based on the $L_1$-norm between the distribution of the vector and the product of the distributions of its components (Gretton and Györfi (2010)), on ranks of distances (Heller, Heller and Gorfine, 2013), on nearest neighbor methods (Berrett and Samworth, 2019) and on applying distance covariance to center-outward ranks and signs (Shi, Drton and Han, 2022a). In Jin and Matteson (2018), three distinct measures of mutual dependency are proposed. One of the approaches extends the concept of distance covariance from pairwise dependency to mutual dependence, and the remaining two measures are derived by summing the squared distance covariances. Finally,
[Bach and Jordan (2003)], [Chen and Bickel (2006)], [Samworth and Yuan (2012)] and [Matteson and Tsay (2017)]. Testing independence also has many applications, including causal inference ([Pearl (2009)], [Peters et al. (2014)],
[Pfister et al. (2018)], [Chakraborty and Zhang (2019)]), graphical modeling ([Lauritzen (1996)], [Gan, Narisetty and Liang (2019)]), linguistics ([Nguyen and Eisenstein (2017)]), clustering (Székely and Rizzo, 2005), dimension reduction (Fukumizu, Bach and Jordan, 2004; Sheng and Yin, 2016). The traditional approach for testing independence is based on Pearson’s correlation coefficient; for instance, refer to Binet and Vaschide (1897), Pearson (1920), Spearman (1904), Kendall (1938). However, its lack of robustness to outliers and departures from normality eventually led researchers to consider alternative nonparametric procedures.
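As an illustration of such nonparametric procedures, a permutation test of independence based on the squared sample distance covariance for univariate variables can be sketched as follows (a minimal implementation of ours, not taken from any cited package):

```python
import numpy as np

def _dcenter(D):
    """Double-center a pairwise distance matrix."""
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dcov2(x, y):
    """Squared sample distance covariance for univariate x, y."""
    A = _dcenter(np.abs(x[:, None] - x[None, :]))
    B = _dcenter(np.abs(y[:, None] - y[None, :]))
    return (A * B).mean()

def dcov_perm_test(x, y, n_perm=200, rng=None):
    """Permutation p-value for H0: x and y are independent."""
    rng = rng or np.random.default_rng(0)
    stat = dcov2(x, y)
    null = [dcov2(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(2)
x = rng.normal(size=100)
p_dep = dcov_perm_test(x, x + 0.1 * rng.normal(size=100))  # strongly dependent
p_ind = dcov_perm_test(x, rng.normal(size=100))            # independent
```

The test rejects for the dependent pair and (typically) retains the null for the independent one.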
C
$\mathbf{x}_{t+1} \leftarrow \mathbf{x}_{t} + \gamma_{t}(\mathbf{v}_{t} - \mathbf{x}_{t})$
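The update above is the classical Frank-Wolfe step. A minimal sketch minimizing a quadratic over the probability simplex, where the linear minimization oracle returns a vertex (the objective, step-size rule, and feasible set are illustrative choices, not the paper's setting):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Frank-Wolfe over the probability simplex: the LMO returns the
    vertex (coordinate) with the smallest gradient entry, and the iterate
    moves as x <- x + gamma * (v - x)."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0          # LMO over simplex vertices
        gamma = 2.0 / (t + 2.0)        # standard open-loop step size
        x = x + gamma * (v - x)
    return x

c = np.array([0.1, 0.6, 0.3])          # minimizer of ||x - c||^2, inside simplex
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Every iterate stays feasible by construction, since it is a convex combination of simplex vertices.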
In Table 2 we provide a detailed complexity comparison between the Monotonic Frank-Wolfe (M-FW) algorithm (Algorithm 1), and other comparable algorithms in the literature.
In Table 3 we provide an oracle complexity breakdown for the Frank-Wolfe algorithm with Backtrack (B-FW), also referred to as LBTFW-GSC in Dvurechensky et al. [2022], when minimizing over a $(\kappa, q)$-uniformly convex set.
In Table 4 we provide a detailed complexity comparison between the Backtracking AFW (B-AFW) algorithm (Algorithm 5) and other comparable algorithms in the literature.
We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refer to as the Frank-Wolfe algorithm with Backtrack (B-FW) for simplicity.
A
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so that each query gets fresh data—when the input dataset is quite huge (Jung et al., 2020). A worst-case approach makes sense for privacy, but for statistical guarantees like generalization, we only need statements that hold with high probability with respect to the sampled dataset, and only on the actual queries issued.
One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In  Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individual privacy loss of the elements in the dataset. However, their results do not enjoy a dependence on the standard deviation in place of the range of the queries.
In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior distribution induced by those results. This notion first appeared in  Jung et al. (2020), under the name Posterior Sensitivity, as did the following theorem.
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on the intuition that average-case privacy can be viewed from a Bayesian perspective, by restricting some distance measure between a prior distribution and the posterior distribution induced by the mechanism's behavior (Dwork et al., 2006; Kasiviswanathan and Smith, 2014). This perspective was used by Shenfeld and Ligett (2019) to propose a stability notion which is both necessary and sufficient for adaptive generalization under several assumptions. Unfortunately, these definitions have at best extremely limited adaptive composition guarantees. Bassily and Freund (2016) connect this Bayesian intuition to statistical validity via typical stability, an approach that discards "unlikely" databases that do not obey a differential privacy guarantee, but their results require a sample size that grows linearly with the number of queries even for iid distributions. Triastcyn and Faltings (2020) propose the notion of Bayesian differential privacy, which leverages the underlying distribution to improve generalization guarantees, but their results still scale with the range in the general case.
Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a numeric-valued query, where the noise is calibrated to the global sensitivity of the function to be computed—the maximal change in its value between any two neighboring datasets. Dwork et al. (2015) and Bassily et al. (2021) showed that differential privacy is also useful as tool for ensuring generalization in settings where the queries are chosen adaptively.
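The noise-addition approach can be sketched as follows: the Laplace mechanism adds noise whose scale is the global sensitivity divided by the privacy parameter $\varepsilon$ (the example query and dataset below are illustrative):

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """Release a numeric query answer with Laplace noise calibrated to the
    query's global sensitivity, satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(0.0, scale)

# e.g. the mean of n values in [0, 1] has global sensitivity 1/n under
# replacement of a single element
rng = np.random.default_rng(0)
data = rng.uniform(size=1000)
noisy_mean = laplace_mechanism(data.mean(), sensitivity=1 / len(data),
                               epsilon=0.5, rng=rng)
```

With $n = 1000$ and $\varepsilon = 0.5$ the noise scale is only $0.002$, so the released answer is close to the true mean with overwhelming probability.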
A
$p(y^{*} \,|\, \mathbf{x}^{*}, \mathcal{D}) = \int p(y^{*} \,|\, \mathbf{x}^{*}, \theta)\, p(\theta \,|\, \mathcal{D})\, \mathrm{d}\theta\,.$
Instead of computing the posterior distribution through Eq. (4), the problem is reformulated as a variational problem, i.e. the posterior distribution $p(\theta\,|\,\mathcal{D})$ is replaced by a parametric family of distributions $q(\theta;\lambda)$ and a divergence measure is optimized within this family [blei2017variational]. For the Kullback-Leibler divergence $D_{\text{KL}}$, the loss function
To see the influence of the training-calibration split on the resulting prediction intervals, two smaller experiments were performed where the training-calibration ratio was modified. In the first experiment the split ratio was changed from 50/50 to 75/25, i.e. more data was reserved for the training step. The average coverage was not significantly changed. However, the average width of the intervals decreased on average by 7%. Using more data to train the underlying models, thereby obtaining better predictions, will lead to tighter prediction intervals as long as the calibration set is not too small. This conclusion is in line with the observations from Figs. 1 and 2. This experiment was then repeated in an extreme fashion, where the models were trained on the full data set. Due to the lack of an independent calibration set, the post-hoc calibration was also performed on the same training set. This way the influence of violating the assumptions of the ICP Algorithm 4 and the associated validity theorem can also be investigated. It was found that for some models the coverage decreased sharply. Both of these observations should not come as a surprise. The amount by which the intervals are scaled can be interpreted as a hyperparameter of the model. In general it is better to use more data to train than to validate, as long as the validation data set is representative of the true population. Moreover, optimizing hyperparameters on the training set is known to lead to overfitting, which in this case corresponds to overly optimistic prediction intervals. The CP algorithm looks at how large the errors are on the calibration set, so as to be able to correct for them in the future. However, by using the training set to calibrate, future errors are underestimated and, therefore, the CP algorithm cannot fully correct for them.
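The inductive conformal procedure discussed above can be sketched as follows: split conformal prediction with absolute-residual scores widens point predictions by the $(1-\alpha)$ calibration quantile of the residuals (the underlying "model" below is a hypothetical known regression function, purely for illustration):

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    """Inductive (split) conformal prediction: compute absolute residuals
    on a held-out calibration set and widen new predictions by their
    (1 - alpha) quantile."""
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = predict(X_new)
    return pred - q, pred + q

rng = np.random.default_rng(3)
f = lambda X: 2.0 * X                              # toy "trained" model
X_cal = rng.uniform(size=500)
y_cal = f(X_cal) + rng.normal(0.0, 0.1, size=500)
X_test = rng.uniform(size=500)
y_test = f(X_test) + rng.normal(0.0, 0.1, size=500)
lo, hi = split_conformal_interval(f, X_cal, y_cal, X_test, alpha=0.1)
coverage = np.mean((lo <= y_test) & (y_test <= hi))
```

Calibrating on an independent set is exactly what gives the empirical coverage close to the nominal 90%; calibrating on the training data, as discussed above, underestimates future errors.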
In Bayesian inference one tries to model the distribution of interest by updating a prior estimate using a collection of observed data. The conditional distribution $p(Y\,|\,X,\mathcal{D})$ is inferred from a given parametric model or likelihood function $p(Y\,|\,X,\theta)$, a prior distribution $p(\theta)$ over the model parameters and a data set $\mathcal{D} \equiv (\mathbf{X}, \mathbf{y})$. The first step is to update the prior belief based on the data set using Bayes' rule:
This process is summarized in Algorithm 1. Note that the algorithm can simply be repeated in an on-line fashion when more data becomes available. One simply takes the "old" posterior distribution $p(\theta\,|\,\mathcal{D})$ as the new prior distribution.
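The conjugate Normal-mean case makes the on-line remark concrete: updating batch by batch, with each posterior serving as the next prior, gives exactly the same result as a single full-data update. A minimal sketch (the Normal-Normal model with known noise variance is an illustrative assumption, not the paper's model):

```python
import numpy as np

def normal_posterior(mu0, tau0_sq, y, sigma_sq):
    """Posterior mean and variance of a Normal mean under a
    N(mu0, tau0_sq) prior and N(mean, sigma_sq) likelihood."""
    prec = 1.0 / tau0_sq + len(y) / sigma_sq
    mean = (mu0 / tau0_sq + np.sum(y) / sigma_sq) / prec
    return mean, 1.0 / prec

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.0, size=200)
# two sequential updates: the old posterior becomes the new prior
m1, v1 = normal_posterior(0.0, 10.0, y[:100], 1.0)
m2, v2 = normal_posterior(m1, v1, y[100:], 1.0)
# one batch update on all the data
m_all, v_all = normal_posterior(0.0, 10.0, y, 1.0)
```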
D
Elliott and Golub (2019) characterize outcomes in public goods games on exogenous networks by the spectrum of a matrix called the benefits matrix, in which each entry gives the marginal rate of substitution between decreasing own contribution and increased benefits from a neighbor in a fixed network. Their results tie the existence of Pareto-efficient outcomes to the spectral radius of the benefits matrix and characterize the Lindahl outcomes as those with effort proportional to each individual’s eigenvector centrality in the graph described by the benefits matrix. Many of their results rely on the connectedness of the benefit graph, which is not guaranteed in random graphs. In the case of endogenous formation this assumption may not be satisfied, rendering spectral methods difficult to implement outside of particular special cases involving links that are fully independent (e.g. Dasaratha, 2020; Parise and Ozdaglar, 2023).
Our model departs from the existing literature on public goods in endogenous networks in a number of ways. Primarily, we model a situation in which individuals choose others with whom they would like to share the externalities generated by their resources. This is the reverse of the situations studied in the previous literature on public goods and sharing on endogenous networks (e.g. Galeotti and Goyal, 2010; Kinateder and Merlino, 2017; Brown, 2024), in that individuals choose the outgoing direction of their externalities, rather than the incoming direction of others’ externalities. The cost of linking in this study’s environment is explicitly tied to the effort or contribution level and can be flexibly specified to represent pure or impure (congestive) externalities. Also in this voluntary sharing environment, there is a unique stage-game Nash equilibrium of no contributions and no linking. This is, however, in stark contrast to what we actually observe in the laboratory implementation, and provides a rich environment to identify and analyze the structure of social preferences.
Cross-sectional network formation estimators rely on assumptions about the meeting process and dynamics that guarantee convergence to a stochastically stable stationary distribution, also called a Quantal Response Equilibrium (QRE). While the QRE (McKelvey and Palfrey, 1995, 1998) is a fixed point stationary distribution of the logit-response (logit best-reply) dynamics, in the case of simultaneous revision opportunities this fixed point is potentially unstable. This means that play may instead exhibit a Hopf bifurcation and converge to a limit cycle or stable orbit, rather than to the fixed point QRE distribution (Alós-Ferrer and Netzer, 2010). Estimating the individual strategies, however, rather than imposing stability and estimating the QRE, allows us to comment on the convergence of the calibrated system, as well as to draw from the steady-state distribution under arbitrary specifications of the revision opportunity (that is, since we estimate the utility parameters directly). This means that we can simulate draws of the steady-state distribution for large networks under sequential-move individual revisions, using the methods of Badev (2021)—the first paper to examine identification in discrete-choice games taking place on endogenous networks—in which agents choose both a set of links and an action or investment level. The method used is closely related to the one used by Mele (2017) to estimate structural parameters in such a setting, but differs by leveraging panel structure to avoid the assumption that the network has already converged to its steady-state distribution after sufficient iterations of an individual revision process.
In another relevant study by Rand et al. (2011), the authors conducted an experiment to gauge the effects of endogenous networks on cooperation in a repeated prisoner’s dilemma. By varying the opportunity for network updates, they showed that subjects are able to take advantage of their ability to change social ties in order to refine their neighborhoods and increase efficiency. Our results show that, while the endogeneity of the network itself does allow for this fine-tuning of the social neighborhood, the dynamics of the network alone are not sufficient to support long-term efficient outcomes. Instead, a platform that aims to nudge players toward efficient social structure should take advantage of its ability to shape and distribute information to its users.
Finally, in the exogenous/fixed network case, Boosey (2017) uses data from a laboratory experiment to examine the mechanisms for cooperation in a repeated network public goods game. Experimental results showed a significant portion of subjects playing strategies of conditional cooperation, in which subjects play strategies which react strongly to the behavior of their neighbors in previous rounds. We incorporate this phenomenon into our structural model, by placing strategies on an evolutionary spectrum from reactionary to predictive. When playing a purely reactive strategy under bounded rationality, simultaneous play may not converge to a stage-game equilibrium (Alós-Ferrer and Netzer, 2010; Hommes and Ochea, 2012).
D
For illustration purposes, we employ in this work open source data from the “Telecom Italia Big Data Challenge”, which contains telecommunications activity aggregated over a fixed spatial grid of the city of Milan during the months of November and December 2013.
Table 1 shows the posterior mean and standard deviation of the satisfaction accuracy, satisfaction F1 score and robustness RMSE for all four properties. We observe that the CAR-AR-BNP model is the best-performing one in terms of the measures inspected, however, the difference in performance for some properties is not large.
for the evaluation of the city in terms of safety and quality of life, it is interesting to look at how the city is performing with respect to the reachability of some key points of interest. For example, in an emergency scenario, a traffic monitoring body would be interested in the following requirement (assuming that our crowdedness measure is indeed a proxy for population density in the city):
Figure 6 presents the average value of the measures in Table 1 for all testing periods, together with 80% credible intervals. This figure can be used for deciding which model performs best in terms of specific interest in the verified properties. For example, it can be seen that the autoregressive models perform similarly in terms of satisfaction measures for all properties, while the robustness of the model CAR-AR-BNP is better for properties P.2 and P.3. The same model also outperforms the others in terms of property P.4 during the rush hours 07:00-09:00, so it should be chosen if the performance in this specific time frame is of interest to the modeler.
Our results provide a deeper understanding of urban dynamics in Milan in terms of the best-performing model which identifies clusters of areas with similar temporal patterns and in terms of when and how well the formulated properties are satisfied.
D
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10%). The horizontal axis depicts the percentage of the dataset used for training, while the vertical axis represents metrics such as rMSE, MAPE, and mean lnQ. Notably, the Kriging/BLUP method consistently outperforms the other methods across all metrics (rMSE, MAPE, and mean lnQ), with its superiority evident in nearly all scenarios.
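For reference, the MAPE and mean lnQ criteria can be computed as below; we assume here that lnQ denotes the log accuracy ratio $\ln(\hat{y}/y)$, which may differ in detail from the paper's exact definition:

```python
import numpy as np

def mape(y, yhat):
    """Mean absolute percentage error (in percent)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def mean_lnq(y, yhat):
    """Mean log accuracy ratio ln(yhat / y); zero for perfect predictions
    (assumed definition of lnQ, for strictly positive targets)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean(np.log(yhat / y))
```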
other columns. For example, for the total charge variable we use the data available for total charge, length of stay, number of procedures, number of diagnoses and age.
For the experiments of predicting length of stay, totchg, npr and ndx are used as predictors due to the high
age. For predicting total charge, we employ los, npr, ndx and age as predictors for datasets of size $N = 2{,}000$ to $N = 100{,}000$ with a 90% training and 10% validation
The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of the predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the properties of the operators $\mathbf{L}$ and $\mathbf{W}$ are.
B
The first condition above is meaningful for small $\tau > 0$ and hence concerns the boundary of the set $S_X$, while the second means that the measure $P_X$ charges the set $S_X$ uniformly (no region without points).
The proof of the next theorem is given in Section 5.3 using two lemmas, namely Lemma 3 and Lemma 4, that are proved in the Appendix.
$\rho_{\delta}(g_{1}, g_{2}) = \sup_{|u| \leq \delta} \|g_{1} - g_{2}\|_{L_{2}(\mu_{x,u})}$. The proof of the next theorem is given in Section 5 up to two technical lemmas, namely Lemma 1 and Lemma 2, that are proved in the Appendix.
We now give the proofs of Theorems 3, 4 and 5 by relying on several technical lemmas, namely Lemmas 1, 2, 3 and 4, whose proofs are given in the Appendix.
Based on some technical lemmas, whose proofs are given in the Appendix, the proofs of the main results (Theorems 3, 4 and 5) are presented in Section 5.
A
In this paper, we develop a new estimation procedure, named High-Order Projection Estimators (HOPE), for TFM-cp in (1).
The procedure includes a warm-start initialization using a newly developed composite principal component analysis (cPCA), and an iterative simultaneous orthogonalization scheme to refine the estimator. The procedure is designed to take advantage of the special structure of TFM-cp, whose autocovariance tensor has a specific CP structure with components close to being orthogonal
The estimation procedure takes advantage of the special structure of the model, resulting in a faster convergence rate and more accurate estimates compared to the standard procedures designed for the more general TFM-tucker and the more general tensor CP decomposition. A numerical study illustrates the finite-sample properties of the proposed estimators. The results show that HOPE uniformly outperforms the other methods when the observations follow the specified TFM-cp.
Although these methods can be used directly to obtain the low-rank CP components of the autocovariance tensors, they have been designed for general tensors and do not utilize the special structure embedded in the TFM-cp.
In this section, we focus on the estimation of the factors and loading vectors of model (1). The proposed procedure includes two steps: an initialization step using a new composite PCA (cPCA) procedure, presented in Algorithm 1, and an iterative refinement step using a new iterative simultaneous orthogonalization (ISO) procedure, presented in Algorithm 2. We call this two-step procedure HOPE (High-Order Projection Estimators) as it repeatedly performs high-order projections on high-order moments of the tensor observations. It utilizes the special structure of the model and leads to higher statistical and computational efficiency, as will be demonstrated later.
A
$\mathbb{E}[\widehat{\mathrm{Cov}}_{i}^{*} \,|\, Y] = \mathrm{Cov}\big(Y_{i}^{*}, g_{i}(Y^{*}) \,|\, Y\big).$
Here, for the bias term, we used the fact that $\mathrm{CB}_{\alpha}(g)$ is unbiased for
by the law of total covariance, and where we used $\mathrm{Cov}(\mathbb{E}[Y_{i}^{*}\,|\,Y], \mathbb{E}[g_{i}(Y^{*})\,|\,Y]) = \mathrm{Cov}(Y_{i}, g_{i}(Y^{*}))$
The intuition here is that each pair $(Y^{*b}, Y^{\dagger b})$ comprises two
Here we simply used the fact that an empirical covariance computed from i.i.d. samples of a pair of random variables is unbiased for their covariance
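This fact is easy to check by simulation: with the $n-1$ denominator, the empirical covariance averaged over many i.i.d. replications recovers the true covariance. A small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
true_cov = 0.7
n, reps = 10, 20000
ests = np.empty(reps)
for r in range(reps):
    # i.i.d. pairs (U, V) with unit variances and Cov(U, V) = 0.7
    u = rng.normal(size=n)
    v = true_cov * u + rng.normal(0.0, np.sqrt(1 - true_cov**2), size=n)
    ests[r] = np.cov(u, v, ddof=1)[0, 1]   # n-1 denominator: unbiased
mean_est = ests.mean()
```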
D
In the following subsections, we explain VisRuler by describing a use case with the World Happiness Report 2019 [Helliwell2019World] data set obtained from the Kaggle repository [Kaggle2019]. This data set contains 156 countries (i.e., instances) ranked according to an index representing how happy the citizens of each country are. The six other variables that could be considered as features are: (1) GDP per capita, (2) social support, (3) healthy life expectancy, (4) freedom to make life choices, (5) generosity, and (6) corruption perception. Because this data set does not contain any categorical class labels, we follow the same approach as Neto and Paulovich [Neto2021Multivariate] to discretize the happiness score into three different bins. Hence, we convert this regression problem into a multi-class classification problem [Salman2012Regression]. Also in our case, the original variable Score becomes the target variable that our ML models should predict. In detail, the HS-Level-3 class contains 42 countries with happiness scores (HS) ranging from 6.13 to 7.76, the HS-Level-2 class groups 79 countries from 4.49 to 6.13, and the HS-Level-1 class encloses 35 countries from 2.85 to 4.49.
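The described discretization can be sketched as follows, using the reported class boundaries as bin edges (the helper name is our own, for illustration):

```python
import numpy as np

def to_classes(scores, edges, labels):
    """Discretize a continuous target into ordinal classes via bin edges."""
    idx = np.digitize(scores, edges)   # bin index 0 .. len(edges)
    return [labels[i] for i in idx]

# inner cut points between the three happiness-score bins
edges = [4.49, 6.13]
labels = ["HS-Level-1", "HS-Level-2", "HS-Level-3"]
classes = to_classes([3.0, 5.0, 7.0], edges, labels)
```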
The exploration starts with an overview of how 10 RF and 10 AB models performed based on three validation metrics: accuracy, precision, and recall. The models are initially sorted according to the overall score, which is the average sum of the three metrics. This choice guides users to focus mostly on the right-hand side of the line chart (as showcased in Section Use Case). Green is used for the RF algorithm, while blue is for AB. All visual representations share the same x-axis: the identification (ID) number of each model. The design decision to align views vertically enables us to avoid repetition and follows best practices. The line chart in Figure 1(a) always presents the worst to best models from left to right. The y-axis denotes the score for each metric as a percentage, with distinct symbols used for the different metrics.
From the analyses and the overall score of the RF and AB models, we observe that the most performant models for RF consider only 2 features when splitting the nodes (i.e., max_features hyperparameter). The PCPs in Figure 7(d) enable us to scan the internal regions of the hyperparameters’ solution space for RF. As for AB, the learning_rate should be as low as possible for this specific data set, as seen in Figure 7(d). Also, by searching for models with high values for min_samples_leaf, AB models are created with complex decision trees compared to simple decision stumps, which seems to be an appropriate limitation of the hyperparameter space that could lead to better models. After all these constraints, we move the Search for New Models slider from 0 to 10 in Figure 7(d) to request 10 additional models for each algorithm with the hope of discovering more powerful ones. In summary, VisRuler supported the exploration of diverse decision rules extracted from two different ML algorithms and boosted the trustworthiness of the decision making process (RQ1).
Exploration and Selection of Algorithms and Models. Following the workflow in Section System Overview and Use Case, Amy loads the data set and checks the score of each model based on the three validation metrics (Figure 1(a)). For the AB algorithm, in blue, all models have a relatively low value for the recall metric, except for AB8. Also, AB7 performs very well for the Accepted class (orange), since its false-negative (FN) line is lower than that of all other models. Therefore, she decides to keep only AB7 and AB8. By looking at the confusion plot in Figure 1(a), Amy infers that RF5 is the model with the lowest confusion regarding the Rejected class (purple). She decides to use RF5 because it produces only 104 distinct false-positive (FP) instances, compared to 114 for RF4. The top RF models on the right-hand side also caught her attention, with RF9 and RF10 being the best options. She thinks that either of them could do the job, as they appear redundant due to similar confusion and values in both the confusion plot and the line chart (cf. Figure 1(a)). The bar charts below, which highlight the difference in the architectures of these RF models, help her to choose: with only 7 decision trees and 589 decision paths (compared to 18 and 1,483), RF9 is simpler. She concludes that RF9's simplicity will make Joe's exploration of decisions more manageable later. Consequently, she deactivates RF10 and continues the feature contribution analysis with RF5, RF9, AB7, and AB8 models.
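The per-class FP counts Amy compares (e.g., 104 for RF5 vs. 114 for RF4) come from the off-diagonal columns of a multi-class confusion matrix. A toy sketch with an invented matrix (rows = true class, columns = predicted class):

```python
import numpy as np

# Invented 3-class confusion matrix for illustration only.
cm = np.array([[50,  4,  1],
               [ 6, 40,  9],
               [ 2,  8, 36]])

# False positives per class: column sums minus the diagonal,
# i.e., instances predicted as a class that truly belong elsewhere.
false_positives = cm.sum(axis=0) - np.diag(cm)
print(false_positives)
# -> [ 8 12 10]
```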
The green color in the center of a point indicates that a decision is from RF, while blue is for AB. The outline color reflects the training instances' class based on a decision's prediction. The size maps the number of training instances classified by a specific decision, and the opacity encodes the impurity of each decision: low impurity (only a few training instances from other classes) makes the points more opaque. The positioning of the points can be used to observe whether the RF and AB models produced similar rules, offering a comparison between algorithm decisions. The histogram in Figure 1(c) shows the number of decisions (y-axis) and the distribution of training instances in these paths (x-axis); it can also be used to filter the number of visible decisions in the projection-based view to avoid overfitted rules containing only a few instances (as shown in Figure 6(a)) or overly general rules that might not apply in problematic cases.
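Each decision in the views above corresponds to a root-to-leaf path of a tree, so a model's decision-path count (e.g., the 589 paths of RF9) equals the total number of leaves across its trees. A minimal sketch, assuming scikit-learn and using Iris as a placeholder data set:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Placeholder data and forest size; only the counting idea matters here.
X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=7, random_state=0).fit(X, y)

# One decision path per leaf, summed over all trees in the forest.
n_paths = sum(tree.tree_.n_leaves for tree in rf.estimators_)
print(len(rf.estimators_), n_paths)
```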