| text_with_holes (string, 106–5.69k chars) | text_candidates (string, 64–1.88k chars) | A (6 classes) | B (6 classes) | C (6 classes) | D (6 classes) | label (4 classes) |
|---|---|---|---|---|---|---|
<|MaskedSetence|> <|MaskedSetence|> These contextual cues are learned using our proposed self-supervised objectives. The segmentation models are based on an encoder-decoder network design. <|MaskedSetence|> Our contributions can be summarized as follows:
.
|
**A**: Therefore, to learn contextual cues from a given volumetric input, our approach encourages the encoder to predict the correct order of shuffled sub-volumes while training the decoder to reconstruct the masked organs or part of an organ (Section 3.2).
As a result, data-dependent priors about the input structure can be effectively captured within the model weights across different scans of the volumetric input, resulting in better segmentation performance.
**B**: To this end, we introduce a learnable weight initialization approach that strives to explicitly exploit the volumetric nature of the medical data to induce contextual cues within the model at an early stage of training.
**C**: In this work, we argue that self-supervised inductive biases that can capture the nature of volumetric data are likely to perform better than the conventional weight initialization schemes that are data-independent.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 3
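A minimal sketch of the sub-volume order-prediction pretext task described in the excerpt above; the number of sub-volumes, the depth-wise split, and the dummy scan size are illustrative assumptions, not the paper's actual configuration.

```python
import torch

def shuffled_subvolume_batch(volume, n=4):
    """Split a 3D scan into n sub-volumes along depth, shuffle them,
    and return the shuffled stack plus the permutation to predict."""
    # volume: (D, H, W); assumes D divisible by n (illustrative)
    chunks = torch.chunk(volume, n, dim=0)            # n depth-wise sub-volumes
    perm = torch.randperm(n)                          # ground-truth order label
    shuffled = torch.stack([chunks[i] for i in perm.tolist()])
    return shuffled, perm

# The encoder would be trained to predict `perm` (the shuffled order),
# while the decoder reconstructs masked regions of the original volume.
volume = torch.randn(64, 128, 128)                    # dummy CT/MRI scan
x, target = shuffled_subvolume_batch(volume)
print(x.shape, target)                                # torch.Size([4, 16, 128, 128]), e.g. tensor([2, 0, 3, 1])
```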
|
Another precaution we adopted was to ensure a uniform distribution of samples within each of the classes in the training set. <|MaskedSetence|> For example, dataset D1 contains 345 positive samples and 495 negative samples. <|MaskedSetence|> <|MaskedSetence|> In the development process, we use heatmaps based on Gradient-weighted Class Activation Mapping (Selvaraju et al., 2017). In this process, we extract the spectrograms of different positive and negative samples for Ae. aegypti mosquitoes and submit them for classification through our pre-trained neural network model.
.
|
**A**: To that end, we added an extra step in the cross-validation process, to establish different probabilities of selecting a sample of a given class based on the proportion of similar samples present in the set.
**B**: An imbalance can degrade the classifying process.
**C**: This is because the number of labels is not uniform in the segmented audio datasets (Table 2).
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 1
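A minimal sketch of the class-proportional selection step described in candidate A above, using the 345/495 split quoted in the excerpt; the use of PyTorch's WeightedRandomSampler is an illustrative assumption about the implementation.

```python
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.tensor([1] * 345 + [0] * 495)          # D1 counts quoted in the excerpt
class_counts = torch.bincount(labels)                  # tensor([495, 345])
sample_weights = 1.0 / class_counts[labels].float()    # inverse class frequency per sample
# Drawing with these probabilities yields a near-uniform class mix each epoch.
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
```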
|
<|MaskedSetence|> ShapeNet is an implicit prototype contrast method because it does not introduce explicit prototypes (cluster centers) during the training phase. TapNet [137] and DVSL [138] are explicit prototype contrast methods because explicit prototypes are introduced. <|MaskedSetence|> DVSL defines virtual sequences, which have the same function as prototypes, i.e., minimize the distance between samples and virtual sequences, but maximize the distance between virtual sequences. <|MaskedSetence|> In the upward mask strategy, MHCCL assumes that outliers greatly impact prototypes, so these outliers should be removed when updating prototypes. The downward masking strategy, in turn, uses the clustering results to select positive and negative samples, i.e., samples belonging to the same prototype are regarded as true positive samples, and samples belonging to different prototypes are regarded as true negative samples.
.
|
**A**: TapNet introduces a learnable prototype for each predefined class and classifies the input time series sample according to the distance between the sample and each class prototype.
**B**: MHCCL [139] proposes a hierarchical clustering based on the upward masking strategy and a contrastive pairs selection strategy based on the downward masking strategy.
**C**:
In time series modeling based on prototype contrast, ShapeNet [136] takes shapelets as input and constructs a cluster-level triplet loss, which considers the distance between the anchor and multiple positive (negative) samples as well as the distance between positive (negative) samples.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 4
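A minimal sketch of the prototype-distance classification attributed to TapNet in the excerpt above: each predefined class owns a learnable prototype, and a sample is assigned by distance to the prototypes. The dimensions and the softmax-over-negative-distance readout are illustrative assumptions.

```python
import torch

n_classes, d = 5, 64
prototypes = torch.nn.Parameter(torch.randn(n_classes, d))  # one learnable prototype per class

def classify(embedding):
    """Assign time-series embeddings to the class with the nearest prototype."""
    dists = torch.cdist(embedding.unsqueeze(0), prototypes.unsqueeze(0)).squeeze(0)  # (B, n_classes)
    return torch.softmax(-dists, dim=-1)        # smaller distance -> higher class probability

z = torch.randn(8, d)                           # batch of encoded time series
probs = classify(z)                             # (8, 5)
```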
|
The variational method is one of the most widely used strategies for image denoising problems. For models in the variational method, a key point is the regularization term. Among the existing regularization terms, total variation (TV) [14] is the most common one. TV can maintain sharp edges during noise removal, which overcomes the disadvantage of excessively smooth edges caused by Tikhonov regularization [15]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The central disadvantage of TV is that it may bring artifact boundaries in smooth regions [16].
**B**: Compared to TV, these modified models exhibit better restoration results and reduce staircase effects in some regions of the image.
.
**C**: Numerous models are proposed to deal with such kinds of staircase effects; for instance,
the high-order total variation (HOTV) [17, 18], the total generalized variation (TGV) [19, 20], and the nonlocal total variation (NLTV) [21].
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 4
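A minimal sketch of TV-regularized denoising as discussed in the excerpt, using scikit-image's Chambolle solver; the noise level and regularization weight are illustrative assumptions.

```python
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

img = data.camera().astype(float) / 255.0
noisy = img + 0.1 * np.random.randn(*img.shape)   # synthetic Gaussian noise
# Larger weight -> stronger smoothing (and stronger staircase effects in flat regions).
denoised = denoise_tv_chambolle(noisy, weight=0.1)
```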
|
Our problem setting differs from the existing literature in our choice of the class of switching signals. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We also provide a set of sufficient conditions on multiple Lyapunov-like functions and the given ranges of dwell times on the subsystems such that the inequality mentioned in (b) above, is satisfied. In the absence of outputs (resp., also inputs), our class of switching signals ensures ISS (resp., GAS) of a switched system.
Further, we construct a class of state-norm estimators for a switched system operating under our class of stabilizing switching signals. These estimators are switched systems themselves with two subsystems: one ISS and one unstable. The key apparatus for our construction of state-norm estimators for switched systems is suitable design of the ISS (resp., unstable) dynamics as functions of the rate of decay (resp., growth) of the Lyapunov-like functions corresponding to the IOSS and unstable subsystems, the norm of the exogenous inputs, and the norm of the outputs of the switched system for which estimation is required. Our design of a state-norm estimator is independent of the exact switching instants and destinations of the switched system that the estimator is constructed for. To the best of our knowledge, this is the first instance in the literature where IOSS and state-norm estimation of continuous-time switched nonlinear systems is studied under pre-specified restrictions on admissible switches between the subsystems and admissible dwell times on the subsystems.
|
**A**: We show that if (a) it is allowed to switch from every unstable subsystem to a stable subsystem, and (b) a set of scalars computed from Lyapunov-like functions corresponding to the individual subsystems together with an admissible choice of dwell times on the subsystems satisfy a certain inequality, then every switching signal that dwells on the subsystems for the above admissible durations of time and does not activate unstable subsystems in any two consecutive switching instants, preserves IOSS of a continuous-time switched nonlinear system.
**B**: We employ multiple Lyapunov-like functions [3] as the key apparatus for our analysis.
**C**: Indeed, instead of considering all switching signals (arbitrary switching) or a set of stabilizing elements from the set of all switching signals (average dwell time switching), we restrict our attention to a pre-specified subset of the set of all switching signals.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 3
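For intuition, here is a generic form that such multiple-Lyapunov-function dwell-time conditions often take in the literature; this is a textbook-style sketch under assumed decay rate $\lambda_s$ on stable (IOSS) subsystems, growth rate $\lambda_u$ on unstable subsystems, and jump factor $\mu \ge 1$ at switches, not the authors' exact inequality:

```latex
\dot{V}_i(x) \le -\lambda_s V_i(x) \ \ (i \text{ stable}), \qquad
\dot{V}_j(x) \le \lambda_u V_j(x) \ \ (j \text{ unstable}), \qquad
V_q(x) \le \mu V_p(x) \ \text{at switches},
```

and a switching signal with dwell times $\delta_k$ and $N$ switches preserves stability-type properties when the accumulated decay dominates the growth and the jumps:

```latex
\lambda_s \sum_{k:\ \text{stable}} \delta_k \;-\; \lambda_u \sum_{k:\ \text{unstable}} \delta_k \;>\; N \ln \mu .
```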
|
Magnetic Resonance Imaging (MRI) is widely utilized in brain tumor examination. The Brain MRI AD benchmark is reorganized using the flair modality of the latest large-scale brain lesion segmentation dataset, BraTS2021 [3]. BraTS2021 is proposed for multimodal brain tumor segmentation. The original data comprises a collection of the complete 3D volume of a patient’s brain structure and corresponding brain tumor segmentation annotation. To adapt the data to AD, we sliced both the brain scans and their annotations along the axial plane. Only slices containing substantial brain structures, usually with a depth of 60-100, were selected in this benchmark. Slices without brain tumor are labelled as normal. Then normal slices from a subset of patients form the training set, and the rest are divided into validation and test sets. To avoid data leakage in model evaluation, we leveraged the information of patient IDs for data partition and ensured that data from the same patient was contained by one set only.
Liver CT AD Benchmark. Computed Tomography (CT) is commonly used for abdominal examination. We structure this benchmark from two distinct datasets, BTCV [30] and the Liver Tumor Segmentation (LiTs) set [10]. The anomaly-free BTCV set is initially proposed for multi-organ segmentation on abdominal CTs and taken to constitute the train set in this benchmark. <|MaskedSetence|> For both datasets, Hounsfield Unit (HU) values of the 3D scans were transformed into grayscale with an abdominal window. The scans were then cropped into 2D axial slices, and the liver regions were extracted based on the provided segmentation annotations. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This allows researchers to access both versions and make informed decisions based on their specific needs.
.
**B**: CT scans in LiTs are exploited to form the evaluation and test data.
**C**: Following conversion in prior arts [20, 34], we further performed histogram equalization on each slice for image enhancement. (For completeness, we also provide a version of the liver benchmark without any data processing in BMAD.)
|
CBA
|
BCA
|
BCA
|
BCA
|
Selection 3
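A minimal sketch of the abdominal-window HU-to-grayscale conversion plus histogram equalization described in the liver benchmark above; the window center/width values (40/400) are common abdominal settings and an assumption here, not values stated in the excerpt.

```python
import numpy as np
from skimage import exposure

def hu_to_grayscale(scan_hu, center=40.0, width=400.0):
    """Clip HU values to an abdominal window and rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    windowed = np.clip(scan_hu, lo, hi)
    return (windowed - lo) / (hi - lo)

slice_hu = np.random.uniform(-1000, 1000, (512, 512))  # dummy CT slice in HU
gray = hu_to_grayscale(slice_hu)
enhanced = exposure.equalize_hist(gray)                 # histogram equalization
```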
|
We first compare Models 1, 2 and 3 in two different test systems: one is a modified 12-bus PDNet system with 12 SDNets [27]-[28], as shown in Fig. 3; the other is a real utility primary-secondary distribution network in the U.S., as shown in Fig. 4. Fig. 5 and Fig. 6 show voltage distributions calculated by Models 1, 2, and 3 in the above two test systems, respectively. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The safe operation of PDNet cannot guarantee the safe operation of SDNet.
.
|
**A**: As shown in Fig. 5 and Fig. 6, Model 1 only reflects voltage distributions across the PDNets of two test systems, depicted by the red dotted line, due to the lack of SDNets.
**B**: In addition, Models 2 and 3 can detect the under-voltage violations of SDNet 12 in the modified 12-bus PDNet system, shown in Fig. 5, and SDNets 2 and 19 in the real utility primary-secondary distribution network, shown in Fig. 6, but Model 1 fails to reflect the voltage violations in SDNets of test systems.
**C**: Instead, Models 2 and 3 exhibit the capability of depicting voltage distributions across each SDNet of two test systems, where the voltage distributions for +120V and -120V circuits of SDNet are represented by the blue and orange markers, and the SDNet voltage distributions of Model 3 closely track Model 2.
|
ABC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> [62]. <|MaskedSetence|> This equation requires magnitude, distance to the site, and local site effects in terms of shear wave velocity (Vs30). Similar to the CI and CW datasets, we have used epicentral distance instead of Joyner–Boore distance (RJB). For identifying the Vs30 value at the location of the stations, we have used the Global Vs30 grid file, which is made available by the United States Geological Survey (USGS) based on the works
of Heath et al. <|MaskedSetence|> The final location coordinates and magnitudes, as documented in the datasets, were employed for applying the GMPEs to all three datasets.
.
|
**A**:
For STEAD, PGA values at the station locations for each earthquake are predicted using GMPE given by Boore et al.
**B**: [65].
**C**: This GMPE is developed using the NGA-West2 ground motion database provided by the Pacific Earthquake Engineering Research Center (PEER).
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 1
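A minimal sketch of evaluating a GMPE of the general kind used in such studies, taking magnitude, epicentral distance, and Vs30 as inputs; the functional form and coefficients below are purely illustrative placeholders, not the Boore et al. model or its NGA-West2 coefficients.

```python
import numpy as np

def gmpe_pga(magnitude, r_epi_km, vs30):
    """Illustrative GMPE: ln(PGA) as a linear function of magnitude,
    log distance, and log site stiffness (placeholder coefficients)."""
    c0, c1, c2, c3 = -4.0, 1.0, -1.3, -0.5                    # NOT calibrated values
    ln_pga = (c0 + c1 * magnitude
              + c2 * np.log(np.sqrt(r_epi_km**2 + 10.0**2))   # distance saturation term
              + c3 * np.log(vs30 / 760.0))                    # Vs30 site term, 760 m/s reference
    return np.exp(ln_pga)                                     # PGA (illustrative units)

print(gmpe_pga(magnitude=6.0, r_epi_km=30.0, vs30=400.0))
```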
|
<|MaskedSetence|> This effect was statistically significant in three of the clips: MeridianFace (optimised and non-optimised) and NocturneRoom (optimised), as 95% confidence intervals based on standard errors of the estimates did not include zero. For the non-optimised version of NocturneRoom, the significance was borderline.
We believe that non-experts, who are used to consuming compressed content on VoD services, consider ISO noise as a quality degrading factor. <|MaskedSetence|> <|MaskedSetence|> A similar phenomenon was observed in a subjective study with SDR videos for UHD transmission for HEVC in 2015 [30] for sources with noise.
.
|
**A**: In optimised and non-optimised versions of MeridianFace and NocturneRoom (clips that exhibited ISO noise), the average DMOS difference for non-experts was negative, meaning that, on average, a higher bitrate (qp27) was rated as lower quality.
**B**: Since the video encoder acts as a denoiser [29] at a mid-range of bitrates, non-experts would tend to rate these lower bitrate clips as higher quality.
**C**: This was confirmed by discussions held with the participants after the end of the experiment.
|
ABC
|
ABC
|
BAC
|
ABC
|
Selection 4
|
There are three categories of features that are often used for radiomics tasks: first-order features (e.g. entropy, skewness), shape-based features (e.g. mesh surface), and texture-based features (e.g. <|MaskedSetence|> For the purposes of this study, we trained six FINs to emulate common radiomics features. Five FINs were each independently trained to emulate texture-based features: Autocorrelation [17], Gray Level Variance [18], Cluster Shade [19], Difference Entropy [20], and Size Zone Non-uniformity [21]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We intentionally excluded shape-based features because the extraction of shape features requires significantly more computational power (over 10x) than texture or first-order features.
.
**B**: The sixth FIN was trained to imitate a first-order feature: Skewness [22].
**C**: autocorrelation) [16].
|
CBA
|
ABC
|
CBA
|
CBA
|
Selection 4
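A minimal sketch of the kind of first-order radiomics feature the sixth FIN imitates, computing intensity skewness over a region of interest; scipy's skew is used here as a simple stand-in for the formal radiomics definition in [22].

```python
import numpy as np
from scipy.stats import skew

roi = np.random.rand(32, 32, 16)        # dummy intensity volume for a segmented region
intensities = roi.ravel()
skewness = skew(intensities)            # first-order feature: asymmetry of the histogram
print(f"Skewness: {skewness:.4f}")
```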
|
Reconfigurable intelligent surfaces (RISs) have been recently proposed to address this problem and find energy-efficient techniques with low-cost hardware (e.g., metasurface-based transmitters in [16], RIS-aided antennas in [15], and transmitter-type RIS in [17]). An RIS is a planar array consisting of tunable elements that reflect the incident electromagnetic signal with a desired phase/amplitude adjustment without using RF chains or introducing a processing delay. RISs can be either passive [18, 19, 20, 21, 22] or active [23, 24, 25] and can be controlled and reconfigured almost in real time [26]. They can be either reflecting (where the signal is reflected from the surface) or transmitting (where the signal is transmitted in the forward direction); the main difference between reflective and transmitting RIS is that the latter uses a pair of patches (one in each half-plane) for receiving and transmitting, whereas the former reuses the same patch for receiving and reflecting [15, 27]; nevertheless, both types can accurately be described by the same equivalent lumped circuit model [28, 29]. <|MaskedSetence|> but, more recently, simultaneously reflecting and transmitting RISs have also been considered [30]. <|MaskedSetence|> In the former case, the RIS is part of the wireless channel that can be dynamically controlled so that a smart radio environment can be realized [31, 16, 32, 33, 34, 35]. In the latter case, instead, the RIS is embedded into the transmit architecture; namely, the whole system acts as a feed antenna, and the channel between the source and the RIS is fixed and can be properly designed during manufacturing [36, 37, 38].
In this work, we tackle the problem of beampattern design for the RIS-based transmit architecture introduced in [15, 39], where a few active antennas called sources (possibly one, as in [39]), each equipped with a dedicated RF chain, illuminate a (passive) RIS, composed of a large number of low-cost and energy-efficient elements, that reflect/retransmit a phase-shifted version of the superposition of the signals emitted by the sources. These types of architectures have many advantages [15]. <|MaskedSetence|> Furthermore, contrary to what happens in hybrid analog-digital arrays, where the analog network which feeds the transmit antennas causes a severe power loss that prevents its implementation in massive MIMO and high-frequency systems [36], in this RIS-based transmit architecture the source-RIS channel is wireless and is referred to as a space feeding mechanism, which is inherently more energy-efficient [36, 20].
|
**A**: Indeed, the beam-steering capability is directly controlled by the RIS, so the number of active antennas does not have to scale with the number of passive elements in the RIS.
**B**: In general, reflective RISs are less complex (thanks to the easier placement of the control system for the phase shifters on the back side of the surface) and have a larger gain (due to the metal ground plane that reflects the entire incident wave).
**C**: RISs can be deployed either far away from or in close proximity to the transmitting source.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
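A minimal sketch of the basic passive-beamforming idea behind these architectures: each RIS element applies a phase shift that co-phases its cascaded source-element-user channel so all paths add coherently. The Rayleigh channel model and element count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                                          # number of RIS elements (assumption)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # source -> RIS channel
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> user channel

theta = -np.angle(h * g)                         # co-phase each cascaded path
effective = np.sum(h * np.exp(1j * theta) * g)   # all N paths add coherently

print(abs(effective))            # grows like N, versus ~sqrt(N) with random phases
```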
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> These signals can be opinions in social networks, nodal measurements in infrastructure networks, encephalography signals in brain connectivity, and gene network expressions due to genetic interactions. When only graph signals are available, a common heuristic for graph matching is first inferring the graph topology from the observed signals by topology inference (a.k.a. graph learning), and then matching nodes based on the estimated topology. However, this heuristic is prone to errors because topology inference usually requires strong assumptions about graph structures or signals [Topology_inference].
On the other hand, recent research has shown that graph analysis can be efficiently carried out using filtered graph signals generated from graph filters [LPGP_mag]. For example, [GSP1, 9757839] used filtered graph signals to detect communities and central nodes of unknown graphs.
.
|
**A**:
The current work on graph matching assumes prior knowledge of the graph topology.
**B**: Instead, the underlying graph is constructed from observations of interactions between nodes, known as graph signals.
**C**: However, in many applications, such as social networks, infrastructure networks, and functional brain connectivity, direct observations of network links are not available.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
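A minimal sketch of generating filtered graph signals with a polynomial graph filter, the construct referenced via [LPGP_mag] in the excerpt above; the random graph, filter order, and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(20, 20))
A = np.triu(A, 1); A = A + A.T                     # random undirected adjacency matrix
L = np.diag(A.sum(axis=1)) - A                     # combinatorial graph Laplacian

def graph_filter(L, x, coeffs):
    """Apply H(L) x = sum_k a_k L^k x, a degree-K polynomial graph filter."""
    y, Lx = np.zeros_like(x), x.copy()
    for a in coeffs:
        y += a * Lx
        Lx = L @ Lx
    return y

x = rng.normal(size=20)                            # white excitation signal
y = graph_filter(L, x, coeffs=[1.0, -0.3, 0.05])   # observed filtered graph signal
```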
|
A recently emergent framework for learning data distributions in computer vision is based on diffusion models [13, 22]. <|MaskedSetence|> <|MaskedSetence|> There is a more recent work that tried to mitigate fully-sampled data needs by Cui et al. [5]. <|MaskedSetence|> Our model differs from this approach since we trained it end-to-end without allowing error propagation from distinct training sessions.
.
|
**A**: Several recent studies have considered diffusion-based MRI reconstructions, where either an unconditional or a conditional diffusion model is trained to generate images and reconstruction is achieved by later injecting data-consistency projections in between diffusion steps during inference [29, 3, 4, 24, 6].
**B**: In this work authors proposed a two-staged training strategy where a Bayesian network is used to learn the fully-sampled data distribution to train a score model which is then used for conditional sampling.
**C**: While promising results have been reported, these diffusion methods can show limited reliability due to omission of physical constraints during training, and undesirable reliance on fully-sampled images.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
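A minimal sketch of the data-consistency projection that such diffusion-based reconstructions inject between sampling steps: replace the current estimate's k-space values with the acquired measurements wherever the sampling mask is set. Single-coil Cartesian sampling is an illustrative assumption.

```python
import numpy as np

def data_consistency(x_est, y_measured, mask):
    """Project the current image estimate onto the set consistent with
    the acquired k-space samples (single-coil, Cartesian assumption)."""
    k_est = np.fft.fft2(x_est)
    k_dc = np.where(mask, y_measured, k_est)    # keep measured lines, fill in the rest
    return np.fft.ifft2(k_dc).real

x = np.random.rand(128, 128)                      # current diffusion-step estimate
mask = np.random.rand(128, 128) < 0.25            # 4x undersampling pattern
y = np.fft.fft2(np.random.rand(128, 128)) * mask  # acquired (undersampled) k-space
x_next = data_consistency(x, y, mask)
```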
|
<|MaskedSetence|> <|MaskedSetence|> For example, optimizing towards one SDG target may have negative side-effects on another one Renaud et al. <|MaskedSetence|> (2022). The general challenge, therefore, is finding acceptable and responsible balancing trade-offs between different sustainability goals. Adopting the mindset of circular systems engineering allows for further opportunities to find trade-offs across different phases of engineering processes.
|
**A**:
Finding sustainability trade-offs.
Putting sustainability assessment methods in place enables optimizing systems engineering processes for sustainability goals.
**B**: Optimization, however, is not trivial given the stratified and multi-systemic nature of sustainability.
**C**: (2022); Tzachor et al.
|
ABC
|
BAC
|
ABC
|
ABC
|
Selection 3
|
Moreover, the phase shifting capability is not always sufficient. The experimental results in [6] show that the phase response of RIS element is sensitive to the angle of incident signal, which is due to the RIS being spatially dispersive [17, 18]. Besides, most existing RIS elements have limited phase shifting capability and cannot cover the range from 0 to 2π.
There have been some previous studies to investigate the practical system model of RIS-aided systems [19, 20, 21, 22, 23, 24, 25, 26, 27]. <|MaskedSetence|> <|MaskedSetence|> Furthermore, robust beamforming for the RIS-aided communication systems is investigated under imperfect channel state information (CSI) in [21]. Additionally, the authors of [22] explore rate-splitting multiple access for RIS-aided cell-edge users with discrete phase shifts. The work in [23] evaluates the performance of uplink RIS-aided communication systems, particularly focusing on the impact of limited phase shifts on data rates. On the other hand, studies such as [24, 25, 26, 27] explore scenarios involving the practical system model that takes into account the intertwinement between the amplitude and phase response, with varying approaches to channel estimation and beamforming optimization techniques. For instance, the authors of [24] propose a practical phase shift model and beamforming optimization for intelligent reflecting surfaces, highlighting the importance of realistic modeling in system design. Moreover, the authors of [25] provide a comprehensive performance analysis based on the model proposed in [24]. <|MaskedSetence|>
|
**A**: For instance, the works in [19, 20, 21, 22, 23] consider various aspects of RIS-aided systems with discrete phase shift models with uniform reflective amplitude at each element.
**B**: Furthermore, the authors of [26] focus on practical modeling and beamforming techniques for wideband systems aided by RIS, while the channel estimation for practical IRS-assisted OFDM systems is addressed in [27], accounting for discrete phase shifts.
In the studies mentioned above, however, the authors primarily focused on the discrete phase shift at RIS elements, or the phase-dependent amplitude of the reflective signal, without considering the limited phase shifting range, which is a realistic problem in practice.
**C**: Specifically, the authors of [19] introduce beamforming optimization techniques considering discrete phase shifts, while the work in [20] focuses on channel estimation and passive beamforming with discrete phase shifts.
|
ACB
|
ACB
|
ACB
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> In our problem setting, we consider outbound operations from a single terminal and have the flexibility to change the outbound trailer capacity to one or more destinations from the terminal if it leads to better consolidation while allowing volume splitting across multiple routes.
Specifically, this research bridges the gap between tactical flow and load planning [bakir2021motor] and operational execution [herszterg2022near, baubaid2023dynamic]. The goal here is to efficiently and effectively adjust the existing load plan as a more accurate package volume forecast becomes available; this problem is mentioned as an interesting and useful future research direction by [lindsey2016improved]. <|MaskedSetence|> In recent years, there has been a notable surge of interest among.
|
**A**: The flexibility to adjust the load plans enables terminal planners to better manage daily operations while maintaining service guarantees.
From a methodological perspective, we propose an optimization-based learning framework that can generate near-optimal and implementable solutions even for the largest terminal in a few seconds.
**B**: The authors formulate the problem as a MIP model and propose heuristic algorithms to solve it.
**C**: It is important to note that the problem described in both [baubaid2023dynamic] and [herszterg2022near] considers a network of terminals and fixed trailer capacity between any pair of terminals in the network.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> lightness, movement, ease of care, and ease of fabrication [31].
Textile construction techniques, such as weaving and knitting, are often chosen for the development of soft circuitry. The bottom-up construction of e-textiles (made directly from yarns or fibres) allows designs that benefit from multi-layered architecture, accommodating controlled interconnection and complex electronic structures. <|MaskedSetence|>
|
**A**: With pleating being a technique that is applied after fabric construction, new soft circuit designs that take advantage of structural properties on fibre, yarn, and fabric levels become possible..
**B**: It can be done with almost all fabrics and therefore offers extensive versatility regarding textures and volume design.
Mainly implemented in Haute Couture fashion, an example is Miyake’s seminal work on pleating technology where the aesthetic form is paired with textile functionality, e.g.
**C**: Different from other textile folding techniques, pleating is applied after the fabric is constructed.
|
CBA
|
BAC
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> The architecture and prompting mechanism of LDM are based on NaturalSpeech 2 (Shen et al., 2023b). All baselines use 10 seconds of prompts. <|MaskedSetence|> <|MaskedSetence|> This is because the timbre information is absorbed by the VQ codebook and puts great pressure on the P-LLM, which demonstrates the effectiveness of decomposing timbre and prosody information. For setting 3), substituting the VQ encoder and P-LLM with VAE and LDM results in similar performance compared to Ours-10s. However, the performance of w/ VAE+LDM is still much inferior to Ours-300s, indicating the superiority of the proposed multi-sentence prompting mechanism.
.
|
**A**: The results are shown in Table 4.
**B**: For setting 1), it can be observed that the removal of MRTE significantly affects both the audio quality and speaker similarity.
**C**:
We test the following three settings: 1) w/o MRTE, which removes the MRTE from our model and does not disentangle the prosody and timbre; 2) w/ VAE, which uses VAE to perform generative prosody modeling; 3) w/ VAE+LDM, which uses VAE and latent diffusion model (LDM) (Rombach et al., 2022) to perform generative prosody modeling.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 2
|
<|MaskedSetence|> Unless otherwise stated, the downstream model is implemented as a simple classifier consisting of two fully connected layers with an average pooling layer in between, which is consistent with the SUPERB benchmark [56]. The classifier is placed on top of the pretrained Vesper to predict the emotion state. The hidden dimension of the classifier is set to 256. <|MaskedSetence|> <|MaskedSetence|> To align with the SUPERB benchmark, the representations of each layer are weighted by trainable weights to generate the input of the downstream classifier, unless otherwise stated. The audio samples in LSSED are cropped or padded to 5s for pretraining. The audio samples in IEMOCAP, MELD, and CREMA-D are cropped or padded to 6.5s, 4.5s, and 3s for fine-tuning, respectively.
We introduce two versions, Vesper-4 and Vesper-12, with 4 and 12 Transformer layers, respectively. Note that the number of Transformer layers employed in WavLM Base is 12, and the number of Transformer layers used in WavLM Large is 24. Due to the shared model structure between Vesper and WavLM, the configurations of Vesper-4 and Vesper-12 remain consistent with WavLM Large, except for the variation in the number of layers employed.
|
**A**: In addition, we freeze the CNN encoder throughout the process, including during pretraining and fine-tuning.
**B**:
When fine-tuning on the downstream datasets, the cross-entropy loss is employed as the objective function.
**C**: Note that the pretrained Vesper remains fixed, and only the classifier is trained during the fine-tuning process.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
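A minimal sketch of the SUPERB-style downstream head described above: trainable scalar weights combine the frozen upstream's layer representations, then two fully connected layers with average pooling in between predict the emotion. The layer count (12, as in Vesper-12) and hidden dimension (256) follow the excerpt; the feature dimension and the 4-class output are assumptions.

```python
import torch
import torch.nn as nn

class DownstreamClassifier(nn.Module):
    def __init__(self, n_layers=12, feat_dim=1024, hidden=256, n_classes=4):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))  # trainable layer weighting
        self.fc1 = nn.Linear(feat_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)                   # n_classes=4 is an assumption

    def forward(self, layer_feats):            # (n_layers, B, T, feat_dim) from the frozen upstream
        w = torch.softmax(self.layer_weights, dim=0)
        x = (w.view(-1, 1, 1, 1) * layer_feats).sum(dim=0)        # weighted sum over layers
        x = self.fc1(x)
        x = x.mean(dim=1)                      # average pooling over time, between the two FC layers
        return self.fc2(x)

feats = torch.randn(12, 2, 150, 1024)          # dummy Vesper-12 layer outputs
logits = DownstreamClassifier()(feats)         # (2, n_classes)
```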
|
<|MaskedSetence|> However, these spiking neuron models have shown limitations in efficient information transmission, as highlighted in recent studies [li2022brain, wu2021liaf]. To improve this, Guo et al. <|MaskedSetence|> This technique enhances SNNs’ information representation capabilities and demonstrates superior performance on various datasets via the re-parameterization technique. In addition, Lang et al. <|MaskedSetence|> This method enhances gradient propagation and the expression ability of neurons in these networks. As part of our methodology, we propose the multi-threshold spiking neuron, which fires multiple spikes upon reaching different thresholds. This approach aligns the output of the spiking neuron with the activation of the artificial neural network during the conversion process, thereby streamlining the training after conversion.
.
|
**A**: [guo2022real] employed real-valued spikes (RS) neurons for training and binary spikes for inference.
**B**: [feng2022multi] proposed a multi-level spiking neuron (ML) for improving the efficiency of spiking neural networks.
**C**:
2.1 Spiking Neuron Model
The majority of spiking neural networks predominantly employ the Integrate-and-Fire (IF) model [abbott1999lapicque] or the Leaky-Integrate-and-Fire (LIF) model [doutsi2021dynamic].
|
BCA
|
CAB
|
CAB
|
CAB
|
Selection 3
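A minimal sketch of the discrete-time LIF dynamics referenced in the excerpt, extended with the multi-threshold idea of emitting a graded output when the membrane potential crosses higher thresholds; the decay constant, threshold values, and hard reset are illustrative assumptions, not the paper's exact neuron.

```python
import numpy as np

def multi_threshold_lif(inputs, tau=2.0, thresholds=(1.0, 2.0, 3.0)):
    """Leaky integrate-and-fire with graded spikes: the output equals the
    number of thresholds crossed, then the membrane potential resets."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v + (i - v) / tau                  # leaky integration
        s = sum(v >= th for th in thresholds)  # 0, 1, 2, or 3 "spikes"
        if s > 0:
            v = 0.0                            # hard reset after firing (assumption)
        spikes.append(s)
    return np.array(spikes)

print(multi_threshold_lif([0.5, 1.5, 3.0, 0.2, 4.0]))   # -> [0 0 1 0 2]
```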
|
3.1 Image segmentation
Pixel-by-pixel atomic number reconstruction is impractical due to the poor signal-to-noise ratio of individual pixels. Instead, the image is first segmented into different regions and subsequently the material properties of each cluster are predicted. <|MaskedSetence|> <|MaskedSetence|> The algorithm to be described in the next section was performed independently on every segment. <|MaskedSetence|>
|
**A**: This study uses Felzenszwalb’s image segmentation algorithm, chosen due to its speed, flexibility, and minimal hyperparameter tuning Felzenszwalb2004 .
**B**: For practical applications, a specialized image segmentation routine should be used, as the accuracy of this work is highly dependent on the ability to obtain a precise segmentation.
.
**C**: Past studies describe various segmentation methods for cargo applications, including a modified leader algorithm and a hybrid k-means region growing technique Ogorodnikov2002 ; Fu2010 .
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 4
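A minimal sketch of the segmentation step using Felzenszwalb's algorithm as named in the excerpt, via scikit-image; the input image and hyperparameter values are illustrative, and in the study the material-prediction algorithm would then run independently on each returned segment.

```python
from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()                     # stand-in for a radiographic image
# scale controls segment size; sigma pre-smooths; min_size merges tiny regions.
segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(segments.max() + 1, "segments")        # label map, one id per region
```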
|
Emerging Literature on HDR Tomography.
First attempts by Trpovski et al. [7] on HDR recovery were based on X-ray image pair fusion.
Follow-up works by Chen et al. [8] and by Haidekker et al. [9] were based on a multiple exposure approach for X-ray imaging. For an illustrative example of HDR X-ray, we refer the readers to Fig. 6 in [9]. A pixel-level design for an HDR X-ray imaging setup was considered by Weiss et al. [10]. Akin to computational photography, multi-exposure X-ray imaging requires acquisition with different gains. <|MaskedSetence|> The work of Li et al. [11] presents an automated approach to this problem. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This is achieved by careful calibration at each exposure.
**B**: Going beyond the case of single projection angle, exposure adaption can be performed across the scanning angles.
**C**: This approach was investigated by Chen et al. in [12].
.
|
ABC
|
ABC
|
ABC
|
CBA
|
Selection 1
|
<|MaskedSetence|> During sample selection, the visual distortion types and levels are largely determined by the video applications of interest. Early subjective VQA [2, 25, 26, 27] focuses on synthetic visual distortions arising from different video processing stages, including spatiotemporal downsampling, compression, and transmission. Recent subjective VQA [3, 4, 5, 6, 7, 8, 9, 10] shifts the attention to realistic visual distortions arising during video capture (and subsequent realistic video processing over the Internet). <|MaskedSetence|> <|MaskedSetence|>
|
**A**:
Subjective VQA involves two key steps: sample selection and subjective testing, which outputs a video dataset with perceptual quality annotations in the form of mean opinion scores (MOSs).
**B**: This dataset shaping technique has inspired later researchers to give a careful treatment of sample selection during dataset construction [5, 7, 9].
With respect to subjective testing methodologies, the International Telecommunication Union (ITU) has made several recommendations [29, 30, 31] regarding the experimental environment, the video clip length, the number of subjects, and the rating type.
.
**C**: To encourage wide content coverage, Vonikakis et al. [28] cast sample selection as a mixed integer linear program to enforce specific marginal distributions over different video attributes (e.g., spatial and temporal information).
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 4
|
<|MaskedSetence|> From the quantitative results, we further notice that the performance improvement of BigFWI compared to InversionNet on the original velocity maps is smaller than that on the smoothed version. <|MaskedSetence|> It points out a future direction where, instead of simply combining all the datasets, we may bias towards the SB dataset during training by generating more samples or training more steps on SB so that the model can yield better performance on realistic cases with high-wavenumber components. In addition, we observe that InversionNet trained on SA achieves smaller RTM differences for the smoothed version of Overthrust. The discrepancy in this case may be attributed to a velocity misfit, causing RTM image interfaces to be half-cycle shifted in depth, resulting in larger RMS and L2-norm values. However, the performance of the four models on the Overthrust-smooth is relatively comparable. <|MaskedSetence|>
|
**A**: Moreover, it is worth noting that although BigFWI achieves better results compared to InversionNet for both Marmousi and Overthrust, the performance is still insufficient for real-world applications, which indicates much space for improvement.
.
**B**: This is consistent with our previous observations where the improvement of BigFWI models on SB is always smaller than the one of SA.
**C**:
The quantitative results are provided in Supplementary Table S10, S11 and S12, which generally align with our observations in the visualization results.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 4
|
Takikawa et al. [25] propose NGLOD, which learns an octree data structure for real-time rendering of an SRN fitting a signed-distance function.
Martel et al. <|MaskedSetence|> <|MaskedSetence|> [20] introduce hash grid encoding for neural graphics primitives (NGP), which uses random hashing to map input coordinates to features in a hash table at multiple levels of detail.
Wu et al. <|MaskedSetence|>
|
**A**: [32] use the hash grid architecture [20] to model scientific data up to 1TB in size.
**B**: [3] use tensor (de)composition to reduce the space complexity of 3D feature grids with their VM decomposition of a tensor.
Müller et al.
**C**: [17] create an adaptive coordinate network that uses quadtrees/octrees during training, updated by solving an integer linear programming optimization problem every set number of iterations.
Chen et al.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
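A minimal sketch of the random-hash lookup at the core of multiresolution hash grid encoding [20]: integer grid coordinates are hashed into a fixed-size feature table via XORed primes. Only a single resolution level is shown; the primes follow the NGP paper's choice, while the table size and feature width are assumptions.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # per-dimension hashing primes
T = 2**14                                         # hash table size (assumption)
table = np.random.randn(T, 2).astype(np.float32)  # 2 features per table entry (assumption)

def hash_lookup(grid_coords):
    """Map integer 3D grid coordinates to feature vectors via spatial hashing."""
    c = grid_coords.astype(np.uint64)
    h = (c[:, 0] * PRIMES[0]) ^ (c[:, 1] * PRIMES[1]) ^ (c[:, 2] * PRIMES[2])
    return table[h % T]

coords = np.array([[12, 7, 3], [500, 81, 999]])
features = hash_lookup(coords)                    # (2, 2): one feature vector per coordinate
```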
|
<|MaskedSetence|> Carusi et al. [16] derived an analytical estimate based on Öpik’s Theory to calculate the deflection of an asteroid due to an impulse. <|MaskedSetence|> By using this estimate and comparing it to an n-body problem, Carusi et al. show that the estimate describes well the deflection. However, according to their work [16, 36, 37, 38], the divergence occurs when there are resonant encounters with the Earth between the time of interception and the predicted impact. <|MaskedSetence|>
|
**A**: This estimate is very similar to Izzo’s approach for impulsive deflection [21].
**B**:
6 Results and Discussion
As discussed in Section 1, the effect of other bodies on a deflection has been described in the literature, albeit only in limited aspects.
**C**: In such cases, it is shown that the optimal impulse tends to diverge significantly from the analytical estimate, sometimes requiring a much larger or smaller impulse than expected.
.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> Most algorithms optimized the efficiency, enabling them to finish the inference within 13s. It is essential to mention that this time metric also included the Docker starting time, hence the actual inference time is considerably shorter. For instance, the best-performing algorithm (T1-osilab) achieved an inference time of approximately 2 seconds for an image size of 1000×1000. <|MaskedSetence|> <|MaskedSetence|> Each team was compared with the other teams based on the one-sided Wilcoxon signed rank test. Yellow shading indicates that the F1 scores of the algorithm on the x-axis are significantly superior (p < 0.05) to those from the algorithm on the y-axis, while blue shading indicates no significant difference between the two algorithms. The winning algorithm is significantly better than all the others. The 2nd algorithm and the 3rd algorithm obtain comparable performances with no significant differences, but they are significantly superior to other teams.
|
**A**: Additionally, the median maximum GPU memory consumption was 3099MB (approximately $500), suggesting that these algorithms are affordable for practical deployment.
**B**: This favorable combination of accuracy and efficiency makes them well-suited for real-world applications in biological image analysis.
We also performed a statistical significance analysis for the 28 algorithms (Fig. 3c).
**C**:
The bubble plot (Fig. 3b) presents the median F1 score, running time, and the maximum GPU memory consumption of 28 algorithms, which can provide insights into the trade-off between algorithm accuracy and efficiency.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 2
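A minimal sketch of the one-sided Wilcoxon signed-rank comparison described above, testing whether one algorithm's per-image F1 scores are significantly superior to another's; the score arrays are dummies.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
f1_team_a = np.clip(rng.normal(0.80, 0.05, 100), 0, 1)              # per-image F1, team A
f1_team_b = np.clip(f1_team_a - rng.normal(0.02, 0.03, 100), 0, 1)  # slightly worse team B

# One-sided test: are team A's paired F1 scores significantly greater than team B's?
stat, p = wilcoxon(f1_team_a, f1_team_b, alternative="greater")
print(f"p = {p:.4g} -> " + ("significant" if p < 0.05 else "not significant"))
```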
|
<|MaskedSetence|> These experiments included a baseline model with no additional structures, LadleNet_+skip model with only skip connections, LadleNet_+concat model with only adjusted aggregation, and the original LadleNet model. Table 3 summarizes the performance of the models with different optimization methods compared to the unoptimized baseline model. Notably, the optimization method of adding only skip connections outperforms the method of adding feature aggregation alone compared to the baseline model. We believe this might be related to the structural characteristics of LadleNet. <|MaskedSetence|> However, these high-level features themselves are derived from aggregated low-level features. As a result, continuous feature aggregation occurs without sufficient feature extraction processes, which could explain the relatively less effective performance of this optimization method. The method of adding skip connections transfers the aggregated features from the Handle module to the feature aggregation part of the Bowl module through skip connections. <|MaskedSetence|> Figure 10 illustrates the convergence speed variations of the loss functions for different optimization methods.
Table 3: Comparative results of different optimization methods for model performance.
|
**A**: This ensures the effectiveness of each feature aggregation step through the skip connections.
**B**: The optimization method of feature aggregation involves aggregating high-level features from the Handle module with low-level features in the Bowl module.
**C**:
To gain deeper insights into the effects of various optimization methods on LadleNet’s performance, we conducted multiple sets of experiments for comparison.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 2
|
After obtaining the high-quality dialogue part, we can subtract it from the original track to obtain the non-dialogue part. To separate it into two remaining stems, we trained two versions of the HT demucs model (Rouard et al.,, 2023) on the DnR dataset. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Interestingly, blending both models is still beneficial as can also be seen from Table 3, where we blended four checkpoints from the 2-stem HT demucs training with seven checkpoints of the 3-stem HT demucs training, giving each the same weight. Please note that we also updated the vocal model in this submission and, hence, there is also a slight improvement for vocals if compared to the individual models. Consequently, for the final submission, we used several checkpoints of each of the 2-stem and 3-stem models to average predictions and obtain better generalization.
.
|
**A**: Table 3 shows the global SDR on CDXDB23 and we can observe that the 2-stem model yields better scores.
**B**: The first HT demucs model was trained using the standard protocol for all three stems, while the second HT demucs model was trained only on two stems: sound effects and music, excluding dialogue.
**C**: Especially music benefits from the simplified training mixtures as it improves by 2.5 dB.
|
BAC
|
BAC
|
CBA
|
BAC
|
Selection 2
|
Library selection or construction is a crucial step to the success of a semisupervised unmixing method. Blind selection of a library without extra attention and some processing steps will lead to poor results for semisupervised unmixing. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The latter seeks a sparse solution relying on the library, and therefore, the high correlation of the library endmembers avoids the sparse solution. In both paradigms, the library must well represent the materials in a sense, i.e., it must contain all the endmembers of the materials in the scene. Generally, a library can be obtained using either of the following ways [149, 3].
1.
.
|
**A**: Therefore, the spectral library is designed to represent the variability of the endmembers.
**B**: The former was designed to address the spectral variability using the endmember variability.
**C**: There are two major paradigms in semisupervised unmixing i) Multiple Endmember Spectral and Mixture Analysis and ii) Sparse unmixing.
|
CBA
|
CAB
|
CBA
|
CBA
|
Selection 3
|
V-C Comparison of Operations with Existing Infrastructure
The proposed co-design method can also be used to improve control operations for existing infrastructure, leveraging the optimal control parameters for a given tank size. <|MaskedSetence|> The historical operation was based on trigger-level control. <|MaskedSetence|> Nevertheless, the comparison is believed to serve as a reasonable representation of potential improvements to current practice. <|MaskedSetence|> Moreover, the maximum volume of the existing storage is 136 ML. An EPANET hydraulic model of the case study was used in the closed-loop simulation. Table V shows the results.
|
**A**: Using historical data on water demands and electricity prices in 2019, we compared the co-design solutions and historical operations in 2019.
**B**: It should also be noted that our optimization considers 2019 data only, while SA Water strategy may have considered different metrics and risk scenarios over a longer time.
**C**: For this case study, the tank size is 38.5 ML, of which 28.5 ML is an emergency buffer.
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
Predictive coding network learns spatial proximity not image similarity
In the previous section, we show that a neural network that performs predictive
coding learns an internal representation of its physical environment within its latent space. Here, we demonstrate that the prediction task itself is essential for spatial mapping. <|MaskedSetence|> Many frameworks including principal
components analysis, IsoMap [tenenbaumGlobalGeometricFramework2000], and autoencoder neural networks can collocate images by visual similarity. <|MaskedSetence|> <|MaskedSetence|> Thus, the two forest environments might generate similar images but are actually each closer to the lake region than to one another (Figure 1(a)).
|
**A**: Prediction forces a network to learn spatial proximity and not merely image similarity.
**B**: While similar scenes might be proximate in space, similar scenes can also be spatially divergent.
**C**: For example, the virtual environment we constructed has two different ‘forest’ regions that are separated by a lake.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 3
|
Acknowledging our overall positive outcomes, it is important to note that in sequences MH 01 and MH 02, our method exhibited slightly inferior results in comparison to the constant covariance approach. <|MaskedSetence|> <|MaskedSetence|> However, our focus has predominantly centered on adaptive inertial noise covariance refinement, with the visual noise covariance remaining constant. This could potentially lead to less precise outcomes. These findings serve as motivation to further enhance the noise covariance estimation system to also include an adaptive visual quality covariance estimation module.
The entire trajectories are shown fully in Fig. <|MaskedSetence|> Each trajectory image is combined from the ground truth trajectory, the estimated trajectory from the VINS-Mono baseline algorithm, and the trajectory estimated by our VIO-DualProNet. As also shown in Table II, the trajectories of the VIO-DualProNet aligned better with the ground truth trajectories in the majority of the sequences, compared to the original VINS-Mono.
.
|
**A**: Notably, accurate estimation of visual and inertial noise covariances significantly impacts visual-inertial fusion.
**B**: 5.
**C**: Despite the absence of discernible variations in IMU behavior in these sequences, we suspect that differences in visual quality might account for this outcome.
|
CAB
|
CBA
|
CAB
|
CAB
|
Selection 1
|
<|MaskedSetence|> Although CHiME-5 [19] resembles a cocktail-party dataset, the availability of clean utterances is not guaranteed. <|MaskedSetence|> <|MaskedSetence|> We use audio from the LibriSpeech dataset [20] (2338 speakers for training, 73 speakers for testing). The ambient noise dataset includes MUSAN and WHAM [21, 22] (a total of 189 hours including music, speech, and environmental noise, 169 hours for training, 20 hours for testing). The reverb dataset is from Room RIR and BUT Speech@FIT [23, 24] (2650 room impulse response signals, 2350 signals for training, 300 signals for testing).
.
|
**A**: This is reflected in the WERs for the development set using the binaural microphones (the clean utterance of a speaker), reported at around 47.9%.
**B**: Instead, we train and evaluate the system using our generated data.
**C**:
3.1 Data preparation
In our setup, we need a clean utterance of a specific speaker, noisy audio containing that utterance, and that speaker’s embedding.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> 6. While the embeddings from the first layer display scattered and overlapping clusters, indicating little separation between classes, those from the last layer showcase clear and distinct clusters, with data points from different labels forming separate and well-defined groups. This disparity underscores the effectiveness of the feature extraction process in the deeper layers of our fine-tuned model, particularly in capturing label-related features essential for classification performance.
.
|
**A**: The t-SNE plots reveal noteworthy differences between the two sets of embeddings, as shown in Fig.
**B**: 4.4 Visualization
The t-Distributed Stochastic Neighbor Embedding (t-SNE) [22] method is a powerful technique commonly used for visualizing high-dimensional data in lower dimensions while preserving local structure.
**C**: In our study, we utilize t-SNE to compare the visualizations obtained from the embeddings of the first and last layers of our fine-tuned models on the test sets of the datasets mentioned above.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
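A minimal sketch of the t-SNE comparison described above, projecting first- and last-layer embeddings to 2D with scikit-learn; the array shapes are dummies.

```python
import numpy as np
from sklearn.manifold import TSNE

first_layer = np.random.randn(500, 768)     # dummy embeddings from the first layer
last_layer = np.random.randn(500, 768)      # dummy embeddings from the last layer

proj_first = TSNE(n_components=2, perplexity=30).fit_transform(first_layer)
proj_last = TSNE(n_components=2, perplexity=30).fit_transform(last_layer)
# Plotting proj_* colored by class label would reproduce the scattered-vs-separated contrast.
```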
|
The work in [14] demonstrates that in comparison to the conventional constant-envelope sinusoidal signals, certain waveforms with high peak-to-average-power-ratio (PAPR) provide higher harvested DC from the EH circuit. Based on this observation, some works, e.g. [13, 15, 16], investigate the effect of transmitted waveforms and modulations on WPT and SWIPT. <|MaskedSetence|> The authors in [15] develop a new signal design for WPT, which relies on multiple dumb antennas at the transmitter to induce fast fluctuations of the channel. The work in [16] proposes an asymmetric modulation scheme specifically for SWIPT, that significantly enhances the rate-energy region as compared to its existing symmetric counterpart. <|MaskedSetence|> The authors in [18] propose an analytical framework for continuous-time chaotic signals, which justifies the above observation.
Chaotic signals have been extensively used in the past decades for the purpose of wireless privacy and information security [19]. However, in reality, digital operating platforms cannot support the continuous nature of the system variables, or signals, and so discretization of the continuous states or discrete approximations of the system have to be applied. Moreover, finding the solutions of differential functions costs computational capacity and hardware resources. In this context, as there is no heavy computational burden, the discrete-time chaotic system, namely the differential chaos shift keying (DCSK), is one of the most widely studied chaotic signal-based communication techniques [20]. DCSK is a prominent benchmark in the class of noncoherent transmitted reference modulation techniques, which comprises a reference and an identical/inverted replica of the reference depending on the data transmitted. <|MaskedSetence|> The authors in [21] propose an M-ary DCSK system, in which successive information bits are converted to a symbol and then transmitted by using the same modulation scheme. The work in [22] investigates the performance of a cooperative diversity-aided DCSK-based system. Finally, to enhance the system data rate, the authors in [23] propose a DCSK system with shorter symbol duration, namely, short reference DCSK (SR-DCSK).
|
**A**: The majority of the related works focus on the error performance of DCSK-based systems for various scenarios [21, 22, 23].
**B**: Due to the high PAPR of multisine waveforms, the work in [13] proposes a multisine-based novel SWIPT architecture based on the superposition of multi-carrier unmodulated and modulated waveforms at the transmitter.
**C**: Apart from the multisine waveforms, experimental studies demonstrate that due to their high PAPR, chaotic waveforms also are beneficial in terms of WPT efficiency [17].
|
BCA
|
CAB
|
BCA
|
BCA
|
Selection 3
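A minimal sketch of DCSK modulation as described above: each symbol consists of a chaotic reference followed by the same chips (bit 1) or their inverse (bit 0), detected noncoherently by correlating the two halves. The logistic-map chaos generator and spreading factor are illustrative assumptions.

```python
import numpy as np

def logistic_chaos(n, x0=0.3, r=3.99):
    """Generate n chaotic samples from the logistic map, centered to zero mean."""
    x = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        x[i] = x0
    return x - 0.5

def dcsk_modulate(bits, beta=64):
    """DCSK: per bit, transmit [reference, +/- reference] chip blocks."""
    frames = []
    for b in bits:
        ref = logistic_chaos(beta, x0=np.random.uniform(0.1, 0.9))
        frames.append(np.concatenate([ref, ref if b else -ref]))
    return np.concatenate(frames)

def dcsk_demodulate(signal, beta=64):
    """Noncoherent detection: correlate each data half with its reference half."""
    return [int(f[:beta] @ f[beta:] > 0) for f in signal.reshape(-1, 2 * beta)]

tx = dcsk_modulate([1, 0, 1])
print(dcsk_demodulate(tx))                 # [1, 0, 1]
```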
|
This study investigates the performance of value-oriented forecasting under different levels of wind power capacities: 20, 30, and 40 kW. <|MaskedSetence|> 5.
It shows with the increase in wind power capacity, the average operational cost decreases as wind power, which has zero marginal cost, gradually contributes a larger share toward balancing the load. Also, the trend of cost reduction is more evident in the proposed approach, compared to the Qua-E.
Furthermore, under different levels of wind power penetration, the proposed value-oriented forecasting has lower operation costs. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The results show that the benefit of value-oriented forecasting is more significant under the higher penetration of renewable energy resources.
.
**B**: The cost reduction is more evident under large wind power capacity.
**C**: Under these penetration levels, the primary results of the proposed value-oriented forecasting approach, the quality-oriented forecasts that issue the expectation (Qua-E), and the costs under the perfect forecasts exactly matching the wind realization (Per-F) are summarized in Fig.
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 1
|
VII-B Analysis of Two Example Coalescence Scenarios under Extremely Heavy Clutter
Figure 9: Trajectories and example measurements in the two considered coalescence cases in Section VII-B. The object number K in figures (a) and (b) is 20 and 8, respectively. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Red dots are the measurements generated by the 10 objects (whose positions are at the red triangles in figure (b)), and blue dots are the clutter received at time step 11. Figure (c)’s range [−750, 750] × [−750, 750] corresponds to the white dashed square depicted in figure (b).
|
**A**: Figure (c) shows the measurements received at time step 11 from the same dataset as figure (b).
**B**: The black circles, red triangles and blue squares mark the objects’ positions at time steps 1, 11 and 31.
**C**: The dense grey dots in (a) and (b) are all measurements received from all time steps.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
Image-domain data fidelity
We evaluate data fidelity delivered by the imaging algorithms by scrutinizing their residual dirty images (i.e., back-projected data residual) displayed in panels (a) of all figures, whose standard deviation values are reported in the corresponding captions. AIRI, uSARA, and CS-CLEAN obtain residual dirty images with the lowest standard deviation values, reflecting their high data fidelity. Both MS-CLEAN and R2D2 variants provide comparable values, slightly above the best performing algorithms. Among R2D2 variants, R3D3 performs better than both R2D2 and R2D2-Net, owing to its underpinning model-informed DNN modules on the one hand, and its iterative structure on the other hand. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> While similar behavior has been observed in high-dynamic acquisition regimes in simulation (Aghabiglou et al., 2024), these patterns are possibly amplified by the pointing errors at the hotspots..
|
**A**: This numerical analysis aligns with the overall visual examination.
**B**: A closer inspection of the R2D2 variants reveals discernible structure at the pixel positions of the Western and Eastern hotspots, where the highest emission is concentrated.
**C**: Hö-CLEAN delivers the lowest fidelity with its standard deviation value being more than one order of magnitude higher than the rest.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 4
|
In addition, RNN-based solutions face challenges in modeling long-term dependencies [4, 5], and transformers are limited by memory and computation efficiency constraints [6, 7]. <|MaskedSetence|> These models have achieved state-of-the-art performance in the Long Range Arena benchmark [10]. <|MaskedSetence|> They replicate neural activity patterns present in biological brains, utilizing spiking neurons that convey information through discrete pulses or spikes. The fusion of spike-based computing with neuromorphic hardware holds considerable promise for energy-efficient applications. Numerous studies have highlighted the efficacy of integrating SNNs with deep-learning methodologies [12, 13].
Efforts have been made to employ SNNs in the context of speech enhancement [14, 15, 16, 17]. <|MaskedSetence|> propose a shallow lateral inhibitory SNN with spectrogram-based rate.
|
**A**: They have been applied to speech enhancement in [11] combined with the U-Net structure.
In contrast to CNNs, Spiking Neural Networks (SNNs) serve as the computational foundation underlying the functionality of neurobiological systems.
**B**: Yannan et al.
**C**: Recently, the structured state space model (SSM) series [8, 9] has revitalized RNNs, addressing critical limitations that have hindered the effectiveness of traditional RNNs.
|
BCA
|
CAB
|
CAB
|
CAB
|
Selection 4
|
The stability constraint violation rates under different circumstances are also shown in Fig. 5. In Case I, as indicated by the blue curve, the stability constraint violation starts to appear at 3 GW wind capacity for about 5% of the total operation time. <|MaskedSetence|> <|MaskedSetence|> This impact becomes more remarkable at higher wind capacity, illustrating the importance of modeling and management of the uncertainty in system stability constraints. <|MaskedSetence|> Note that it is unlikely to quantify the real-time adjustment costs due to the stability violation as in the conventional UC and dispatch framework that deals with the uncertainty of the renewables and demand, as the uncertainty associated with the system dynamic parameters does not reveal itself as time evolves.
.
|
**A**: The stability constraint violation due to the uncertainty in Case II is eliminated in Case III with the incorporation of the distributionally robust stability chance constraint, therefore demonstrating the effectiveness of the proposed method.
**B**: As for Case II, although the stability constraint is considered, a small percentage (less than 10%) of the operating conditions do not satisfy the stability constraint due to the neglect of the parameter uncertainty related to system dynamics.
**C**: This violation rate increases dramatically to about 45% as the wind capacity in the system rises, which demonstrates the necessity of including the stability constraint in the system optimization.
|
CAB
|
CBA
|
CBA
|
CBA
|
Selection 2
|
2.3 TICV/PFV Estimation
In the pretraining stage, the final convolutional layer of the network transforms the feature map with 32 channels to 133 channels and uses softmax to obtain the segmentation mask. <|MaskedSetence|> These layers have the purpose of transforming the 32-channel feature map into two distinct feature maps, each having a single channel. <|MaskedSetence|> Throughout the fine-tuning process, optimization is performed on both the segmentation mask for the 133 brain classes and the TICV/PFV segmentation masks. <|MaskedSetence|>
|
**A**: Dice loss is used to optimize the 132 brain regions, while Dice and binary cross-entropy losses are used to optimize the TICV/PFV:
.
**B**: The segmentation masks for TICV and PFV are derived by applying the sigmoid function to these feature maps.
**C**: In the finetuning stage, two extra convolutional layers are introduced.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 4
|
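The head structure described in this item is compact enough to sketch in PyTorch. Only the channel counts (32 → 133 with softmax; twice 32 → 1 with sigmoid) come from the passage; the 1×1×1 kernels, 3-D convolutions, and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TICVPFVHead(nn.Module):
    """Pretrained 133-class segmentation head plus two extra convolutional
    layers added at fine-tuning to produce single-channel TICV/PFV masks."""
    def __init__(self, in_ch: int = 32, n_classes: int = 133):
        super().__init__()
        self.seg_head = nn.Conv3d(in_ch, n_classes, kernel_size=1)
        self.ticv_head = nn.Conv3d(in_ch, 1, kernel_size=1)  # extra layer 1
        self.pfv_head = nn.Conv3d(in_ch, 1, kernel_size=1)   # extra layer 2

    def forward(self, feat):
        seg = torch.softmax(self.seg_head(feat), dim=1)  # 133 brain classes
        ticv = torch.sigmoid(self.ticv_head(feat))       # TICV mask
        pfv = torch.sigmoid(self.pfv_head(feat))         # PFV mask
        return seg, ticv, pfv

head = TICVPFVHead()
seg, ticv, pfv = head(torch.randn(1, 32, 8, 8, 8))
print(seg.shape, ticv.shape, pfv.shape)
```

During fine-tuning, all three outputs would be optimized jointly — e.g., Dice loss on the class probabilities and Dice plus binary cross-entropy on the TICV/PFV masks, as the candidate sentences state.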
<|MaskedSetence|> Bright-field images used in this paper were obtained under the protocol described in [5]. <|MaskedSetence|> We obtained one whole slide image from each group.
SegmentAnything and post processing. In our research, we utilized the Python API for SegmentAnything and evaluated three pretrained models [4], namely ViT-B, ViT-H, and ViT-L, ultimately selecting the ViT-H model for inference due to its consistent performance across various microscopy analyses. <|MaskedSetence|> 2, which required post-processing to achieve accurate cell identification..
|
**A**: However, we encountered challenges with the SegmentAnything-generated masks, as shown in Fig.
**B**: Data acquisition.
**C**: The images were captured using Leica DMi8 microscope (Leica) equipped with 10×/0.32 objective lens.
|
CBA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
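A minimal sketch of the described inference with the ViT-H SegmentAnything model, plus a simple area-based post-processing step. It uses the public `segment_anything` Python API; the checkpoint path, image file, and area thresholds are illustrative assumptions rather than the authors' exact settings.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load the ViT-H SAM checkpoint (path is illustrative).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("brightfield_tile.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', ...

# Simple post-processing: keep only plausibly cell-sized masks.
min_area, max_area = 50, 5000  # pixel thresholds, tuned per magnification
cells = [m for m in masks if min_area <= m["area"] <= max_area]
print(f"{len(cells)} candidate cells out of {len(masks)} raw masks")
```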
To understand this, we systematically studied different combinations of the EEG Fpz-Cz, EEG Pz-Oz, EOG, EMG, and the respiration channels of the SleepEDF dataset (Kemp et al., 2000), which are simultaneously recorded. We followed the same train/val/test split as in Eldele et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> (B) When a modality is swapped with another available one, or (C) when modalities are dropped out at test time, our model gives lower performance degradation when comparing to a robust baseline..
|
**A**: (A) Two modality mismatch scenarios are considered: Modality substitution and modality dropout.
**B**: We also implemented two variants of multimodal latent expansion methods as in Appendix C.
Figure 3: Multimodal evaluation results.
**C**: (2021) while attaching the multimodal information instead of using only the unimodal information.
We utilized the same model setup as in Section 5.1, aside from that we follow Section 4.2 to expand the training and testing under multimodal designs with weight sharing and channel independence.
|
CBA
|
CBA
|
CAB
|
CBA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> RAKI and rRAKI were trained with identical datasets, scale factors and ACS sizes as the proposed method. <|MaskedSetence|> were not publicly available. However, this network was also trained with identical datasets and scale factors, and very close ACS sizes. Therefore, the results reported by Feng et al. are used for comparison in this paper. Table 1 shows the quantitative evaluation results. We observe that the proposed method with pre-combination of multi-channel images is significantly superior to all comparison methods, and the results of the proposed method without the pre-combination are further improved.
Fig. 2 visualizes the reconstruction effects of different methods on the three undersampling scales of the brain dataset. We can observe that these GRAPPA-reconstructed MR images are heavily noisy and suffer from obvious artifacts at higher reconstruction scales. In contrast, RAKI’s results have low noise levels and significantly improved SSIM and PSNR; however, the artifacts remain a serious unresolved problem. The performance of rRAKI, including the quality of the reconstructed images and the evaluation metrics, is between GRAPPA and RAKI. The reconstruction effect of the three methods decreases sharply with increasing undersampling scale. On the contrary, the proposed method is more robust, produces similar image quality at different undersampling scales without noise and obvious artifacts, and outperforms the compared methods.
.
|
**A**: In our experiments, we compared the proposed method with GRAPPA [3], RAKI [23], residual RAKI (rRAKI) [24], and Feng et al.
**B**: [17] on two datasets with reconstruction scales of 4, 5, and 6.
**C**: The source code and trained network of Feng et al.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
|
The PCinC dataset was collected from the PhysioNet/Computing in Cardiology Challenge 2021 (Reyna et al., 2021). <|MaskedSetence|> The diagnoses of these ECG datasets were encoded to 133 diagnoses using approximate Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) codes, with 30 diagnoses selected as being of clinical interest (Reyna et al., 2022).
Following the competition, we merged 8 diagnoses into 4 labels. <|MaskedSetence|> Furthermore, we refine the dataset by selecting only labels with an incidence rate of at least 0.5%, further reducing the number of labels to 25. <|MaskedSetence|>
|
**A**: We then consider subjects with complete 12-lead signals, sampled at a rate of 500 Hz over a duration of 10 seconds, resulting in a total of 79,574 ECG recordings.
**B**: To evaluate the algorithms, we randomly divided the entire dataset into training, validation, and testing sets in a ratio of 7:1:2, respectively..
**C**: The public portion of the challenge datasets contains 88,253 12-lead ECG recordings gathered
from 8 datasets,
including PTB (Bousseljot et al., 1995),
PTB-XL (Wagner et al., 2020), Chapman-Shaoxing (Zheng et al., 2020b), Ningbo (Zheng et al., 2020a), CPSC (Liu et al., 2018), CPSC-Extra (Liu et al., 2018), INCART (Tihonenko et al., 2007), and G12EC(Reyna et al., 2021).
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 4
|
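The label filtering and splitting described above reduce to a few NumPy operations. The multi-hot label matrix below is a synthetic stand-in; only the 0.5% incidence threshold, the recording count, and the 7:1:2 ratios come from the passage.

```python
import numpy as np

rng = np.random.default_rng(42)

n_rec, n_labels = 79574, 30
Y = rng.random((n_rec, n_labels)) < 0.05       # stand-in multi-hot diagnosis matrix

# Keep only labels with an incidence rate of at least 0.5%.
keep = Y.mean(axis=0) >= 0.005
Y = Y[:, keep]
print(f"{keep.sum()} labels retained")

# Random 7:1:2 train/val/test split over recordings.
idx = rng.permutation(n_rec)
n_tr, n_va = int(0.7 * n_rec), int(0.1 * n_rec)
train, val, test = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
print(len(train), len(val), len(test))
```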
CE schemes in OTFS systems can be categorized into three types: the full pilot scheme, the embedded pilot scheme, and the superimposed pilot (SP) scheme. <|MaskedSetence|> The full pilot scheme not only leads to low spectral efficiency but also has difficulties in dealing with fast time-varying channels. To overcome these problems, the embedded pilot scheme is used to estimate the DD domain channel, e.g., in [3], [11], [17] and [21], where CE and signal detection are performed at the same frame. Although the embedded pilot scheme can deal with fast time-varying channels, the problem of low spectral efficiency is still a concern due to the use of guard intervals between pilots and data, especially for long delay and/or high Doppler shift channels. In order to improve the spectral efficiency, the SP scheme is promising, where the pilots are superimposed with data and guard intervals are no longer needed [10], [12], [22, 23, 24, 25, 26, 27]. <|MaskedSetence|> There are many types of BEMs, such as complex exponential BEM (CE-BEM), generalized CE-BEM (GCE-BEM), discrete prolate spheroidal BEM (DPS-BEM), Karhunen-Loeve BEM (KL-BEM), etc. [28, 29, 30, 17, 11, 12]. The SP scheme is attractive in terms of spectral efficiency and dealing with fast time-varying channels, but joint CE and signal detection need to be performed to deal with the interference between pilots and data. The existing OTFS receivers with the SP scheme require high-dimensional matrix operations and inverse (or pseudo-inverse) operations [10], [12], which is a serious concern due to the high computational complexity involved.
In this paper, we aim to develop a low-complexity OTFS receiver with the SP scheme. To reduce the number of unknown channel parameters and improve the accuracy of CE, we adopt GCE-BEM-based CE. Leveraging message passing techniques, with SP in the DD domain and the GCE-BEM for the channels, we propose a message-passing-based iterative receiver called the SP-DD receiver. Compared to existing receivers, the SP-DD receiver drastically reduces the computational complexity with only marginal performance loss. To make the SP scheme more efficient, we design the pilot signals carefully to achieve pilot power concentration, facilitating the reduction of the pilot power without decreasing the performance of the receiver, leading to a receiver named SP-DD-D, which can also reduce the peak-to-average power ratio (PAPR) of OTFS signals. <|MaskedSetence|>
|
**A**: In the full pilot scheme, a frame comprising only one non-zero pilot symbol [20] is dedicated to CE, and the estimated channel is used for detection in subsequent frames.
**B**: Extensive simulation results are provided to demonstrate the superiority of the proposed receivers against existing receivers..
**C**: Moreover, to reduce the complexity of the SP scheme, the basis expansion modeling (BEM) has been applied in [11], [12] and [17], which can significantly reduce the number of unknown channel parameters.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> The model’s key innovation lies in the use of simple adapters, a prevalent technique in LLM-based multimodal models. Our approach builds upon a design where both the music encoder and LLM remain fixed, while a single adapter network is trained to project music embeddings into the text embedding space. As demonstrated in Fig. <|MaskedSetence|> <|MaskedSetence|> (2023) as the language model, with the adapter consisting of a simple linear layer followed by temporal compression. Our methodology involves pre-training and instruction tuning to grasp music concepts and generate coherent responses. This streamlined design substantially reduces the time and resources needed for music-language model training.
.
|
**A**: 1, we utilised MERT-330M Li et al.
**B**: 4 Method
In this section, we introduce MusiLingo, a potent music-language model that leverages LLM capabilities to enhance music comprehension.
**C**: (2023b) as the music encoder and Vicuna-7B Chiang et al.
|
CAB
|
BAC
|
BAC
|
BAC
|
Selection 2
|
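The adapter described above — a single linear projection from the frozen music encoder into the LLM embedding space, followed by temporal compression — can be sketched as below. The embedding dimensions (1024 for MERT-330M, 4096 for Vicuna-7B), the mean-pooling form of the compression, and the compression factor are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MusicAdapter(nn.Module):
    """Linear projection of frozen music-encoder embeddings into the LLM
    text-embedding space, followed by temporal compression (here assumed
    to be mean-pooling over non-overlapping windows)."""
    def __init__(self, music_dim=1024, llm_dim=4096, compress=4):
        super().__init__()
        self.proj = nn.Linear(music_dim, llm_dim)
        self.compress = compress

    def forward(self, h):                   # h: (batch, frames, music_dim)
        z = self.proj(h)                    # (batch, frames, llm_dim)
        b, t, d = z.shape
        t = (t // self.compress) * self.compress
        z = z[:, :t].reshape(b, t // self.compress, self.compress, d)
        return z.mean(dim=2)                # (batch, frames // compress, llm_dim)

adapter = MusicAdapter()
print(adapter(torch.randn(2, 128, 1024)).shape)  # torch.Size([2, 32, 4096])
```

Only the adapter's parameters would be updated during pre-training and instruction tuning; the encoder and LLM stay frozen, which is what makes the scheme cheap to train.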
Owing to its significant advantage in overcoming severe path-loss, active RIS intrinsically possesses immense potential in ISAC systems. The reason for this is that the receivers in practical ISAC systems are typically less sensitive than traditional radar receivers, primarily due to cost considerations related to hardware.
Hence, the reception of weak echo signals by the low-sensitivity ISAC receivers results in unsatisfactory target detection/parameter estimation performance. Active RIS has become a promising solution for ISAC systems to address the above issues and enhance both radar echo signal quality and communication performance by adaptively manipulating the wireless propagation environment and amplifying the signals. <|MaskedSetence|> The authors in [39] propose to utilize an active RIS to improve the achievable communication secrecy rate while taking into account the worst radar detection SNR. <|MaskedSetence|> <|MaskedSetence|> Both transmit/receive and reflection beamformings are jointly designed to maximize the radar SNR while guaranteeing pre-defined SINRs for communication users.
While existing works on active RIS-empowered ISAC systems focus on target detection function, target parameter estimation is also an important task in radar sensing and should be further explored.
.
|
**A**: Moreover, an active RIS-aided ISAC system in the scenario of cloud radio access network (C-RAN) is investigated in [40].
**B**: Our recent work [41] employs active RIS to overcome the blockage issue by introducing an additional virtual LoS link between the base station (BS) and the target.
**C**: There are several studies intended to explore the application of active RIS in ISAC systems.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
Currently, a multitude of researchers have devised a wide array of algorithms to address distributed consensus optimization problems. Among the notable contributions, Nedić & Ozdaglar[25, 26] introduced a method predicated on weighted averaging for subgradient approaches in scenarios devoid of constraints, subsequently extending their framework to incorporate convex set limitations[27]. Another innovative approach was presented by Zanella et al.[28], who unveiled a consensus-based method leveraging the Newton-Raphson algorithm. The inception of employing the push-sum consensus model for devising distributed optimization strategies was attributed to Tsianos et al.[29], a methodology that has garnered extensive exploration and refinement in various studies thereafter[30, 31, 32]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Characterized by its simplicity and clarity in parameter selection, this algorithm is readily adaptable to a diverse array of complex and dynamic distributed network environments. It is underpinned by a rigorous yet accessible theoretical framework for convergence, demonstrating rapid convergence rates. Empirical evaluations have showcased its exceptional performance, positioning it as a potent new tool for tackling distributed consensus optimization challenges.
.
|
**A**: This method distinguishes itself by a dual mechanism where gradient information is disseminated to adjacent nodes (push), whilst decision variable information is assimilated from them (pull), thereby coining the term “push–pull gradient methods.” Additionally, the consensus ADMM[34] strategy emerged, notable for its employment of an inexact step during each ADMM update.
**B**: This strategic choice permits the execution of computationally economical operations at every iteration, enhancing the algorithm’s efficiency and applicability in distributed settings.
Despite the plethora of algorithms available for addressing distributed consensus issues, each exhibiting a unique blend of strengths and weaknesses, this paper introduces a novel decentralized algorithm that primarily relies on gradients and projections.
**C**: Furthermore, Pu et al.[33] developed a novel push-pull gradient technique.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 2
|
In discrete-time, the number of studies that focus on the average-cost optimality criteria for mean-field problems is limited compared to the continuous-time studies. Furthermore, most of the existing work focuses on the mean-field game problems. [48, 2, 16, 57, 56] are among the papers which study ergodic mean-field game problems in discrete time. [57] considers mean-field game problems with discrete spaces. [16, 56] establish mean-field equilibria using the ergodicity properties of the agent state process under a mixing type condition on the agent dynamics. [16] also assumes that the dynamics do not depend on the mean-field of the agent states. [48, 2] provide value iteration algorithms for mean-field game problems where the convergence of these iterations are shown to the mean-field equilibria under certain minorization and mixing type conditions on the agent dynamics. We note that the papers that focus on the game setup aim to establish the mean field equilibria. Therefore, they are able to use the ergodicity properties of the dynamics by using stationary policies for the agents (independent of the mean-field term) under minorization conditions on the transition kernel. However, for mean-field control problems, the focus is mainly on establishing the existence of the global optimal performance for the team problem. <|MaskedSetence|> Hence, the minorization condition by itself may not be sufficient to establish the convergence to the stationary distributions of the collective agent states. Hence, in this paper, we make use of weak continuity assumptions on the agent dynamics without needing the ergodicity or minorization assumptions on the state process to establish the existence of optimal stationary and Markov policies.
Finally, to our knowledge, the only paper that deals with the discrete-time mean-field control problems under average cost (reward) criteria is [9]. In [9], existence of optimal stationary Markov policies has been established through the average cost optimality inequality. <|MaskedSetence|> This assumption is usually not easy to verify using conditions on the primitive system components. Furthermore, in [9], two special cases are considered, namely (i) when the cost function only depends on the mean-field term and the dynamics are independent of the mean-field term, and there is no common noise (ii) when the cost functions only depend on the mean-field term (transitions can depend on the mean field term) and the admissible policies are Lipschitz continuous in the mean-field term. <|MaskedSetence|>
|
**A**: It is shown that the dynamic optimization problem reduces to a static optimization problem under either of these special settings..
**B**: In particular, the resulting measure valued control problem requires the use of policies that depend on the mean-field term (empirical distribution of the states).
**C**: It is assumed that the difference between the optimal value of any two initial points is uniformly bounded over all initial points and discount factors.
|
CBA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
2 Related Work
Deep learning researchers have successfully utilized GANs for several applications such as image synthesis, image segmentation, image reconstruction, and object detection in the biomedical imaging domain [41] [42]. <|MaskedSetence|> <|MaskedSetence|> A self-attention mechanism in the Progressive Growing of GAN (PGGAN) was proposed to generate more diversified skin lesion images [14]. <|MaskedSetence|> Similarly, Huang et al. [27] used the SSIM loss in the PGGAN architecture to address the mode collapse problem while generating diversified Magnetic Resonance (MR) images. Modanwal et al. [28] utilized 34×34 small patches of images in the discriminator to alleviate the mode collapse in CycleGAN for MR image generation. Small patches help to maintain the structural information of dense tissues during image translation tasks. Contributions such as the improved autoencoder [29], spectral normalization [30], variational GAN [31], and improved style generator [32] have been made to alleviate the mode collapse problem in GANs.
.
|
**A**: Several approaches have been proposed to address the mode collapse problem for various GAN architectures.
**B**: The efficacy of the attention mechanism is discussed, stating that it helps in the coordination of salient features at every location to capture the long-range dependencies in biomedical images.
**C**: While performing these tasks, GANs face technical challenges such as mode collapse during training.
|
CAB
|
CAB
|
CAB
|
BAC
|
Selection 1
|
<|MaskedSetence|> 5. <|MaskedSetence|> Thus, we propose a systematic way where each stabilizer is considered separately in a sequential fashion to achieve this goal. <|MaskedSetence|> 7. First, the CZ gates are brought to the left, keeping in mind that they can commute with CX gates only in pairs to preserve the eigenvalue of +1. Next, control and target of the CZ gates are reversed using Rule 2. Subsequently, the CZ gates are converted to CX gates using Rule 3. Similar steps can be applied to all the stabilizers to obtain the circuit shown in Fig. 8.
.
|
**A**: However, this step will significantly increase the number of H gates.
**B**:
Since most practical quantum computing systems use one type of 2-qubit gate, we will convert all controlled-Z gates to controlled-X gates using Rule 3 in Fig.
**C**: The steps for the first stabilizer are illustrated in Fig.
|
BAC
|
BAC
|
BAC
|
ABC
|
Selection 1
|
The model configuration of PET-TSVAD was the same as the transformer-based TS-VAD [18] except for the pseudo-speaker profile extraction module. We used five 128-dim zero-vectors as input, followed by positional encoding and a linear layer, to derive five 128-dim pseudo-speaker profiles. These five pseudo-speaker profiles were concatenated with the speaker profiles generated by the first pass diarization,
and fed to the TS-VAD module of the PET-TSVAD.
The other stream of the input was a sequence of 80-dim log mel-filterbank feature vectors extracted every 10 ms. <|MaskedSetence|> The encoder module was a 17-layer ResNet that had the same architecture as the speaker profile extractor except that the final pooling layer was removed.
The independent speaker detection module consisted of a linear projection layer followed by BLSTM layers. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The number of parameters of the PET-TSVAD model was 12.47M..
**B**: The joint speaker detection module consisted of two blocks of sequence layers where a transformer layer is applied on the speaker-axis followed by a BLSTM layer across the time-axis [18].
**C**: The log mel-filterbank features then went through the encoder module to extract the speech embeddings.
|
CBA
|
CBA
|
BAC
|
CBA
|
Selection 4
|
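The pseudo-speaker profile module described above is small enough to sketch directly. The passage does not say whether the positional encoding is learned or fixed, so a learned encoding is assumed here; the three "real" profiles are placeholders for first-pass diarization output.

```python
import torch
import torch.nn as nn

class PseudoProfileExtractor(nn.Module):
    """Five 128-dim zero-vectors -> positional encoding -> linear layer,
    yielding five 128-dim pseudo-speaker profiles."""
    def __init__(self, n_pseudo=5, dim=128):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(n_pseudo, dim) * 0.02)  # assumed learnable
        self.linear = nn.Linear(dim, dim)
        self.n_pseudo, self.dim = n_pseudo, dim

    def forward(self, batch_size):
        zeros = torch.zeros(batch_size, self.n_pseudo, self.dim)
        return self.linear(zeros + self.pos)            # (batch, 5, 128)

extractor = PseudoProfileExtractor()
pseudo = extractor(batch_size=2)
real = torch.randn(2, 3, 128)                # profiles from first-pass diarization
profiles = torch.cat([real, pseudo], dim=1)  # concatenated input to TS-VAD
print(profiles.shape)                        # torch.Size([2, 8, 128])
```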
Recently, Whisper [23], a large pre-trained model based on weak supervision, has been proposed and shown to have good potential for generating more robust acoustic features. This is due to the availability of audio transcripts in different languages and tasks. Unlike the SSL model that predicts the masked audio, the weak supervision model uses the actual transcript during model training. Because of this, the audio features generated by Whisper are expected to contain more phonetic information. Hence, it is worth investigating whether Whisper can provide more informative features for the speech assessment task.
In this study, we aim to explore the potential advantages of speech representations from Whisper, and propose an improved version of the multi-objective speech assessment model, namely MOSA-Net+. <|MaskedSetence|> <|MaskedSetence|> MOSA-Net+ employs a multitasking learning approach to predict subjective quality and intelligibility scores. Its architecture comprises a convolutional neural network (CNN), followed by a bidirectional long short-term memory (BLSTM) and fully connected layers. <|MaskedSetence|>
|
**A**: Each task-specific layer includes an attention mechanism, a fully connected layer, and a global average pooling to obtain an estimated utterance score.
.
**B**: Within our framework, the pre-trained Whisper module is accompanied by an additional adapter layer, facilitating task-specific adaptation and dimension reduction.
**C**: MOSA-Net+ incorporates three distinct features: traditional spectral features, waveforms processed using adaptable filters from a convolutional network [24], and latent representations obtained from Whisper.
|
CAB
|
CBA
|
CBA
|
CBA
|
Selection 4
|
In this section, we investigate the impact of spatio-temporal adapter designs on the performance of our method. <|MaskedSetence|> Our experiments demonstrate that the inclusion of cross-frame attention leads to a 0.2% improvement in Dice and a 0.03 increase in temporal smoothness, showcasing its positive impact on our method’s performance. <|MaskedSetence|> Nevertheless, our model successfully captures temporal information along the time axis, resulting in segmented images with enhanced temporal smoothness. This observation highlights the interdependence between adjacent frames, where each frame significantly influences its neighboring frames.
To further investigate the impact of the order in which the spatio-temporal adapter is built, we conducted a comparison by reversing the order of spatial and temporal attention. <|MaskedSetence|> In comparison, the concurrent application of spatial and temporal attention, as in Fig. 1, results in a reduction of 0.4% in the Dice coefficient and a 0.02 increase in temporal smoothness. Based on these findings, we have chosen this ordering of temporal and spatial attention.
.
|
**A**: We conducted ablation experiments to examine the effects of cross-frame attention in our model.
**B**: The quantitative evaluation results, presented in Table 3, reveal that initiating with spatial attention, followed by temporal attention, leads to a 0.5% decrease in Dice and a 0.02 increase in the temporal consistency metric.
**C**: Figure 4 provides a visual comparison of MediViSTA across frames, emphasizing the presence of ambiguous boundaries on the lateral side.
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 4
|
Sequence discriminative training criteria including maximum mutual information (MMI) [15] and minimum Bayes risk training (MBR) [7, 16, 17] also show significant improvements for seq2seq models ([7, 17, 18]). [18] introduces MMI training with ELM as an early-stage LM integration in the training phase. In [16], MBR training with LM integration achieves better performance than not using LM during training. <|MaskedSetence|> Theoretically, we show a similar effect of ILM subtraction and MMI training by deriving the global optimum of MMI criterion. <|MaskedSetence|> Experimental results on Librispeech demonstrate that sequence discriminative training shares similar effects as ILM subtraction. Furthermore, we perform an extensive study of the impact of sequence discriminative training. <|MaskedSetence|>
|
**A**: In [11], MMI serves as an ILM suppression method for improving LM integration during decoding.
In this work, we further investigate the relation between ILM subtraction and sequence discriminative training.
**B**: Empirically, we perform a series of comparisons between ILM subtraction and sequence discriminative training across different settings.
**C**: Experimental results show a joint effect on both encoder and prediction + joint network to reshape posterior output including both label distribution and blank..
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> The main reasons for that may be the use of an improved speech inversion system [14] to extract the vocal tract variables and the adoption of overlapping speech segments in pre-processing. The use of overlapping speech segments also considerably increased the number of samples used to train both the audio and video modalities.
Another significant contribution of the current work is the incorporation of a text modality alongside the audio and video modalities, which to the best of our knowledge has not previously been explored in combination in schizophrenia research. <|MaskedSetence|> This may be because the data was collected in an interview setting and then diarized to extract the subject’s speech, producing text data with numerous short utterances. Previous text-based studies [28] have shown that semantic coherence is one of the key properties of text in differentiating schizophrenia subjects from healthy subjects. <|MaskedSetence|> This may be why the text model produced the lowest performance out of the three individual modalities.
|
**A**: But in our case, the subjects’ responses are frequently short, which may have made semantic coherence, or the lack of it, more difficult for the model to detect.
**B**: The text model produced the lowest performance out of the three individual modalities.
**C**:
A key observation with the current uni-modal systems is that both the video and audio models have improved, with the audio model improving drastically compared to the previous work in [6].
|
ABC
|
CBA
|
CBA
|
CBA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> The adjacent channel interference can also increase further due to the simultaneous transmissions from many nodes participating in OAC. In the literature, few OAC schemes are analyzed from the perspective of PMEPR. To reduce PMEPR, chirps and single-carrier (SC) waveforms are used in [25] and [34], respectively. <|MaskedSetence|> However, to the best of our knowledge, CSs have not been utilized for reliable OAC while reducing the dynamic range of the transmitted OFDM signals.
.
|
**A**: If a transmitted signal for OAC has a large peak-to-mean envelope power ratio (PMEPR), it can result in a reduced cell size due to the power back-off or a higher adjacent channel interference due to the PA saturation [25].
**B**: It is well-known that the PMEPR of the OFDM signals using CSs is less than or equal to 3 dB while achieving some coding gain [35].
**C**:
The second challenge for reliable computation arises because the received signal powers of the nodes need to be similar, if not identical.
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 1
|
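The PMEPR advantage of complementary sequences quoted above is easy to reproduce numerically: a random QPSK OFDM symbol typically exhibits a PMEPR of roughly 8–10 dB, while an OFDM symbol built from a Golay complementary sequence stays at or below 3 dB. A minimal NumPy sketch (the sequence length, oversampling factor, and the particular recursive Golay construction are illustrative choices):

```python
import numpy as np

def pmepr_db(x):
    """Peak-to-mean envelope power ratio of a complex baseband signal."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def golay_pair(m):
    """Binary Golay complementary pair of length 2**m (standard recursion)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def ofdm_time(carriers, oversample=4):
    """Oversampled OFDM symbol: zero-padded IFFT of the carrier values."""
    n = len(carriers)
    padded = np.concatenate([carriers, np.zeros(n * (oversample - 1))])
    return np.fft.ifft(padded) * np.sqrt(n * oversample)

rng = np.random.default_rng(0)
N = 64
qpsk = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))  # random QPSK carriers
golay, _ = golay_pair(6)                               # length-64 Golay sequence

print(f"QPSK  PMEPR: {pmepr_db(ofdm_time(qpsk)):.2f} dB")
print(f"Golay PMEPR: {pmepr_db(ofdm_time(golay)):.2f} dB")  # <= 3 dB
```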
The distinctive THz channel characteristics, large bandwidths, and massive antenna numbers present both opportunities and constraints for system design and baseband signal processing. <|MaskedSetence|> However, ideal UM-MIMO signal processing assumes uncorrelated channels, emphasizing the need for THz-specific solutions. Hybrid beamforming emerges as a promising solution in various THz transceiver architectures like sub-connected array-of-subarrays (AoSA) and dynamic subarray (SA) with fixed true-time delay. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The AoSA architectures divide UM-MIMO arrays into SAs, each fed with an exclusive radio frequency chain, reducing power consumption and hardware complexity, and simplifying channel estimation compared to a fully digital architecture.
**B**: Spectral efficiency may decrease compared to fully-connected structures, but with ample bandwidths, enabling multi-user and multiple-stream communications while respecting hardware constraints takes precedence [2].
.
**C**: In favorable far-field near-static UM-MIMO THz environments, channels show high correlation in space, time, and frequency, where the small delay spread under high antenna directivity increases the coherence bandwidth and the likelihood of flat fading.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 1
|
Bias Adjustment:
Despite the perturbation being small, it may still drive the model away from the ground truth to some extent. This is because the neural ODE is nonlinear, and small parametric changes may still lead to non-negligible output deviations that accumulate when the ODE is integrated to obtain system trajectories. <|MaskedSetence|> <|MaskedSetence|> We collect training data in a similar manner as the first step, but pick starting points for generating the three trajectories using a different random seed. <|MaskedSetence|> After bias adjustment, we demonstrate that our final model closely matches the ground truth and retains closeness to the unconstrained baseline model (Fig. 4), while guaranteeing incremental dissipativity.
.
|
**A**: Therefore, in the last step, we freeze the weights (which were designed to guarantee dissipativity) and adjust only the biases to compensate for any loss of fit.
**B**: Note that the biases can be trained independently while maintaining dissipativity guarantees (Remark 4).
**C**: We purposely do this to avoid overfitting.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
|
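The "freeze the weights, adjust only the biases" step described above is one line per parameter in PyTorch. A minimal sketch, with a small feedforward network standing in for the neural ODE's vector field:

```python
import torch

def freeze_weights_train_biases(model: torch.nn.Module):
    """Freeze all weight tensors (which carry the dissipativity guarantee)
    and leave only the bias terms trainable."""
    for name, p in model.named_parameters():
        p.requires_grad = name.endswith("bias")

# Placeholder for the neural ODE's vector-field network.
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 4))
freeze_weights_train_biases(model)

print([n for n, p in model.named_parameters() if p.requires_grad])  # ['0.bias', '2.bias']
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
# ...fit the biases on the freshly sampled trajectories as usual...
```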
Moreover, the majority of the proposed methods to date are available for discrete-time systems and, when extended to continuous time, require unquantified approximations. Our aim is to propose a methodology to identify the Fourier coefficients of the state and input matrices of LTP systems in continuous time for both stable and unstable systems. Importantly, we aim to remove restrictions related to systems achieving a steady state and to eliminate signal limitations.
To achieve this objective, we make use of an equivalence result established in [12], linking LTP systems to infinite-dimensional LTI systems characterized by a block Toeplitz structure formed by the Fourier coefficients of the state and input matrices. LTP system’s parameters are thus inferred from the Fourier series. <|MaskedSetence|> <|MaskedSetence|> By selecting a sufficiently high truncation order, the identification problem is translated into a finite-dimensional linear least-squares problem. <|MaskedSetence|>
|
**A**: However, it is essential to note that the harmonic system is inherently infinite-dimensional.
**B**: Consequently, truncation becomes necessary.
**C**: We prove that the solution to this finite-dimensional problem converges to the solution of the infinite-dimensional counterpart with an arbitrarily small error.
.
|
ABC
|
ABC
|
ABC
|
CBA
|
Selection 2
|
Illustrations must appear within the designated margins. They may span the two columns. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Figure 1 shows you an example of how to do this..
|
**A**: If possible, position illustrations at the top of columns, rather than in the middle or at the bottom.
**B**: All halftone illustrations must be clear black and white prints.
Since there are many ways, often incompatible, of including images (e.g., with experimental results) in a LaTeX document.
**C**: Caption and number every illustration.
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 1
|
Table II-(a) shows the prediction performance for these methods. <|MaskedSetence|> The GNN uses topological information to optimize the dispatch and satisfy loads. Second, the PhyR-based frameworks achieve lower topology errors by up to 10% by embedding the discrete decisions directly within the ML framework. However, the topology error remains high (>30%), demonstrating the challenge in learning to optimize this combinatorial task. Finally, SiPhyR and Global-GraPhyR achieve the best performance across feasibility metrics, with lower magnitude and number of inequality violations. Notably, the maximum inequality violation is an order of magnitude higher for InSi, which does not benefit from PhyR, and for GraPhyR, which makes local predictions. <|MaskedSetence|> <|MaskedSetence|> Figure 5 plots the mean inequality violation of the 10 trained GraPhyR models, for the sets of inequality constraints, namely (8), (9), and (11). The constraints are always respected for voltage (by design) and connectivity (by constraint penalty). Nodal generation constraints are frequently violated as the lowest cost (lowest line losses) solution is to supply all loads locally.
.
|
**A**: This is expected.
**B**: Second, GraPhyR sacrifices some prediction performance for the flexibility to train and predict on multiple graphs.
**C**: We first observe that the GNN frameworks achieve lower dispatch error, with Global-GraPhyR outperforming SiPhyR by two orders of magnitude.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 1
|
<|MaskedSetence|> In this data-driven approach, we exploit contextual information, so-called features, such as historical (deterministic) forecasts of wind power and directly map them to an optimal action. In the training stage, the hybrid power plant operator learns feature-driven policies based on historical features and uncertainty realizations. In the decision-making stage, the learned policies are applied to new available features, leading to trading decisions without the need to solve a complex optimization problem. A few papers in the literature, e.g., [10], [11] and [12], develop feature-driven trading models for renewable power producers. To the best of our knowledge, this is the first paper that develops such a model for hybrid power plants trading both wind power and hydrogen, which is a more complicated problem. <|MaskedSetence|> Finally, the third contribution of this paper is to develop a pragmatic rule-based adjustment strategy for real-time hydrogen production. <|MaskedSetence|>
|
**A**: Using an out-of-sample simulation, we show how the resulting profit from feature-driven trading in the day-ahead stage and the adjustment strategy in real time is very close to that in an ideal benchmark (oracle).
.
**B**:
The first contribution of this paper is to propose a novel application of a prescriptive analytics framework based on decision rules, inspired by [9], to the multi-market bidding problem of hybrid power plants.
**C**: The second contribution of this paper is to extend these previous works by investigating various model architectures, in particular price- and time-dependent policies, and feature vectors, including an improved forecast feature vector.
|
BCA
|
CAB
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> Blue ’X’s indicate the location of installed TCSCs.
Figures 8 and 9 depict the location and amount of mean unserved energy for TNEP and TNEP+FACTS, respectively. A predominant portion of the unserved energy is observed near urban load centers. Notably, the integration of FACTS results in a pronounced reduction of unserved energy in the northern corridors. <|MaskedSetence|>
|
**A**: This aligns with the locations where the TCSCs were installed as seen in Figure 7..
**B**: Red lines indicate the location of capacity upgrades, with the thickness/width of each line representing one of the three available upgrade levels.
**C**:
Figure 7: TNEP+FACTS Investments.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 2
|
<|MaskedSetence|> 13 and Fig. <|MaskedSetence|> As shown in Fig. 13(a) and Fig. 14(a), the accuracy of range estimation is improved by up to 54.66% and 84.36%, compared with the improved 2D FFT algorithm in [19] and the conventional 2D FFT algorithm, respectively. As shown in Fig. <|MaskedSetence|> 14(b), the accuracy of velocity estimation is improved by up to 41.54% and 97.09%, compared with the improved 2D FFT algorithm in [19] and the conventional 2D FFT algorithm, respectively. Furthermore, the accuracy of range and velocity estimation with the proposed algorithm satisfies the requirements of current scenarios [48]. Simulation results show that the JCMSA is superior at high SNR, which is consistent with Theorem 1.
.
|
**A**:
Fig.
**B**: 14 show the results of RMSEs of range and velocity estimation with SNR greater than -15 dB.
**C**: 13(b) and Fig.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 3
|
In the classical approach, the (infinite-horizon) LQR problem relies upon the solution of a Riccati equation. In 1970, Athans and Levine [LA70] introduced the idea of a direct gradient descent computation of optimal feedback gains, a procedure which can be interpreted as a form of RL. Thus, the LQR problem offers an ideal benchmark for better understanding policy optimization methods in the RL field, as one can compare solutions to the known optimal solution, and analysis of gradient methods can take advantage of theory developed for LQR. For policy optimization in the LQR problem, the objective function is a cumulative quadratic function of the state and control inputs, the control policy is parameterized as a linear function (feedback) of the state, and the admissible set, consisting of all the stabilizing control gains, is an open subset of an Euclidean space.
As investigated for example in [BMM20, MZSJ22, HZL+23], the gradient of the objective function can be computed by using a Lyapunov equation that depends on the system matrices. Nevertheless, if precise system knowledge is unavailable, as in the setting of model-free RL, the gradient has to be numerically approximated through sampling and experiments. <|MaskedSetence|> In [MZSJ22, FGKM18, LTZL22], the gradient is directly calculated by the finite differences method [NW06, Section 7.1], based on the change in function values in response to small perturbations near a given point. For these data-driven methods, a gradient estimation error is inevitable due to noisy data and insufficient samples. <|MaskedSetence|> We also show that two variants of gradient flows, natural gradient flows and Newton gradient flows, are small-disturbance ISS. The new contribution is to establish the CJS-PL property for the LQR problem. This considerably extends previous work [MZSJ22, BMM20] that only showed a semiglobal estimate (and thus would imply merely iISS). <|MaskedSetence|> This is incorrect..
|
**A**: In [Son22], it was mistakenly stated that the magnitude of the gradient is lower bounded by a $\mathcal{K}_{\infty}$-function, which is a stronger property.
**B**: For example, by utilizing the approximate dynamic programming technique [BT96, Pow07], the Lyapunov equation was solved by data-driven methods in [JJ17, LL13, JBG20].
**C**: Therefore, the robustness analysis of the policy optimization algorithm in the presence of perturbations is critical for efficient learning, and lays the foundations for better understanding RL algorithms.
Our main result will be that, for the LQR problem, the loss function is coercive and satisfies the CJS-PL property, and therefore, by the results in the first part of the paper, we conclude that the perturbed standard gradient flow is small-disturbance ISS.
|
BCA
|
BCA
|
BCA
|
CBA
|
Selection 3
|
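As a concrete instance of the finite-difference (model-free) gradient estimation mentioned above, here is a small NumPy sketch for a discrete-time double-integrator LQR problem. The system matrices, horizon, rollout count, and initial gain are illustrative; a rigorous treatment would use the infinite-horizon cost (e.g., via a Lyapunov equation) rather than this finite-horizon rollout surrogate.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])         # discrete-time double integrator
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
X0 = rng.standard_normal((20, 2))  # fixed initial states shared by all evaluations

def lqr_cost(K, horizon=200):
    """Finite-horizon surrogate of the LQR cost under the policy u = -K x."""
    total = 0.0
    for x0 in X0:
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
    return total / len(X0)

def fd_gradient(K, eps=1e-4):
    """Central finite-difference estimate of dJ/dK -- no model knowledge used,
    only (generally noisy) function evaluations."""
    G = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            E = np.zeros_like(K)
            E[i, j] = eps
            G[i, j] = (lqr_cost(K + E) - lqr_cost(K - E)) / (2 * eps)
    return G

K = np.array([[0.5, 1.0]])  # a stabilizing initial gain for (A, B)
print("estimated gradient:", fd_gradient(K))
```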
5.2 Baselines
We compare our approach with SOTA task-specific baselines from prior work Arora et al. (2022a); Vygon and Mikhaylovskiy (2021); Tian and Gorinski (2020); Yen et al. <|MaskedSetence|> <|MaskedSetence|> (2022); Ahamad et al. <|MaskedSetence|> (2019); Ray et al. (2022); Hechmi et al. (2021); Jia et al. (2021); Chen et al. (2022) by reporting the performance from the original papers (“SOTA” in Table 3). We also report the performance achieved by SpeechPrompt v2 Chang et al. (2023) (“SpeechPrompt v2” in Table 3) to quantify the impact of our proposed modifications to prior MTL approaches based on speech prompting, as discussed in Sec. 3..
|
**A**: (2022); Shor et al.
**B**: (2020); Castro et al.
**C**: (2021); Bermuth et al.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 3
|
Up to now, the dual objectives of fidelity and realism have been treated as distinct and even in tension [12, 29, 26, 30, 31]. <|MaskedSetence|> It is natural then to seek a simultaneous generalization of the two. <|MaskedSetence|> The main contribution of this paper is one such generalization, Wasserstein distortion, which is grounded in models of the Human Visual System (HVS).
Realism objectives take several forms depending on how one induces a probability distribution from images. First, one can consider the distribution induced by the ensemble of full resolution images [24, 32, 27, 26, 28]. Second, one can form a distribution over patches by selecting a patch at random from within a randomly selected image [18]. Finally, for a given image, one can consider the distribution over patches induced by selecting a location at random and extracting the resulting patch [33, 34]. Theoretical studies have tended to focus on the first approach while experimental studies have focused more on patches. We shall focus on the third approach because it lends itself more naturally to unification with fidelity: both depend only on the image under examination without reference to other images in the ensemble. <|MaskedSetence|> Under an ergodicity assumption, as occurs with textures, ensemble and per-image notions of realism coincide; see the discussion in [35, p. 51]..
|
**A**: That said, the proposed Wasserstein distortion can be extended naturally to videos and other sequences of images and in this way it generalizes the other notions of realism.
**B**: Such a generalization could be more aligned with human perception than either objective alone, or even a linear combination of the two.
**C**: Yet they represent two attempts to capture the same notion, namely the differences perceived by a human observer.
|
CBA
|
BAC
|
CBA
|
CBA
|
Selection 1
|
Table 2: Comparison of hidden states extracted from different Transformer blocks of the Whisper model and encoding the transcriptions obtained from the Whisper model by the BERT model on AD detection.
As discussed above, linguistic information is effective for AD detection. <|MaskedSetence|> Two ways of extracting linguistic features were compared: (i) extracting hidden states from different intermediate Transformer blocks of the Whisper model; (ii) obtaining transcriptions from the output of the Whisper model and encoding the transcriptions by a BERT model. The results are shown in Table 2. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The BERT embeddings extracted using the ASR transcriptions (denoted as BERTASRASR{}_{\text{ASR}}start_FLOATSUBSCRIPT ASR end_FLOATSUBSCRIPT) produced better performance than directly using hidden states of the ASR model.
**B**: This section investigates incorporating linguistic information in AD detection.
**C**:
.
|
CAB
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> The aircraft was tested at the Laboratory for Verification and Validation (LVV) in Sheffield, England. For this dataset, only the starboard wing was measured, along with supplementary measurements at the top of the aircraft and the exhaust; more details on the exact locations of the sensors will be given later in Section 3.2, Figure 3. <|MaskedSetence|> Therefore, between the main body and the ground are the wheels, supports and hydraulic shock absorbers. <|MaskedSetence|> Figure 1 shows both an image of the real aircraft and a schematic indicating the boundary conditions of the wing.
.
|
**A**: All these substructures are joined with a variety of fixing methods and types, which are not detailed here.
**B**: 2 Structure
The structure used in this dataset is a BAE Systems Hawk T1A aircraft [fraser2011hawk].
This is a real, decommissioned aircraft that was used for advanced training in the RAF.
**C**: The aircraft was positioned resting on its wheels on the ground.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 1
|
IV-A QCar Experimental Setup
The QCar self-driving robotic testbed incorporates a drive motor and a steering servo motor for its motion. <|MaskedSetence|> <|MaskedSetence|> The control algorithm is developed in Simulink, then compiled into C-code and executed on an embedded Linux-based system. The testbed is powered by an onboard NVIDIA Jetson TX2 processor. <|MaskedSetence|>
|
**A**: It is equipped with 4 vision cameras, a 2-dimensional LiDAR, and an Intel Realsense depth camera.
**B**: The processor receives control inputs from the ground station computer via Wi-Fi and transmits the collected IMU and sensor data to the ground station computer.
.
**C**: The pose measurements of the testbed are obtained using an onboard 9-axis Inertial Measurement Unit (IMU).
|
ACB
|
ACB
|
ACB
|
ABC
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> In contrast, the last four classes represent only 39 samples, or correspondingly 3.4% of the full dataset. Furthermore, it is important to mention that there are no male samples available for the age groups 75-79 years and 85-92 years. As past studies did not indicate a strong interaction effect between gender and age prediction, and dividing the dataset by gender would worsen the imbalance in the smaller age groups, we decided to ignore gender as a covariate in this study.
Figure 1: Age-group distribution.
Age-group distribution in terms of age groups provided in the Autonomic Aging dataset [11]. <|MaskedSetence|>
|
**A**: There is a clear imbalance in the age distribution, with the majority in age group 20-24 with 422 samples, followed by age group 25-29 with 105 samples.
**B**: Age-group distribution. Fig. 1 shows the age distribution across the dataset in terms of 15 age groups, where the first age group contains subjects aged 18 to 19, whereas all following age groups but the last cover age intervals of 5 years.
**C**: The age groups span a range from 18 to 92 years, where the majority of patients are between 20 to 50 years old..
|
BAC
|
BAC
|
BCA
|
BAC
|
Selection 4
|
Herein, PHC refers to parameterized hypercomplex convolutions, with the weight matrix defined as in Eq. (3), and $\mathbf{x}$ is the multidimensional input composed of breast cancer images and the corresponding attention map.
Attention maps are a visualization of what the attention layer has learned during training [14]. <|MaskedSetence|> In more detail, to compute the attention maps we deploy the recent PatchConvNet [14], since it allows obtaining high-resolution attention maps thanks to its non-hierarchical design. As the network is initially trained on ImageNet, direct utilization for medical images is not feasible. Thus, we first perform a fine-tuning step on a breast cancer dataset and thereafter apply the fine-tuned model to infer attention maps on other breast cancer databases. <|MaskedSetence|> In this way, the attention map conditions the training process by emphasizing the most critical portion of the image. <|MaskedSetence|>
|
**A**: Then, we construct the augmented dataset in which each sample is composed of the original mammograms or histopathological images and the corresponding attention maps, considering them as a single multi-dimensional input.
**B**: This allows the neural network to focus on these crucial areas, thereby enhancing its predictive capabilities and leading to improved performance..
**C**: We propose to exploit this information in image form, i.e., we augment the dataset with the attention maps, inspired by the similar utilization of heatmaps in recent works [6, 42].
|
CAB
|
CAB
|
CAB
|
BAC
|
Selection 1
|
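The augmentation described above amounts to treating the attention map as an extra input channel of a single multi-dimensional sample. A minimal PyTorch sketch (batch size, single-channel images, and 224×224 resolution are assumptions):

```python
import torch

image = torch.randn(8, 1, 224, 224)      # batch of grayscale mammograms
attn_map = torch.rand(8, 1, 224, 224)    # attention maps from the fine-tuned PatchConvNet

# Stack image and attention map along the channel axis; the result is the
# multi-dimensional input x fed to the PHC-based network.
x = torch.cat([image, attn_map], dim=1)  # shape (8, 2, 224, 224)
print(x.shape)
```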
In this work, to study the effectiveness of the proposed method, we choose SENet-34 [7] as the backbone SSD network for all baseline models. The speech enhancement module uses a convolutional recurrent network (CRN) model [58]. Details of the different baselines used for comparison in this work can be found in Table II. The first three models are traditional structures, consistent with Fig. 1(a). The structures of “Cascade” and “Joint” are consistent with Fig. 1(b). The “Cascade” system simply concatenates the speech enhancement model and the synthetic speech detection model. <|MaskedSetence|> Conversely, the “Joint” system jointly trains the speech enhancement model and the synthetic speech detection model. <|MaskedSetence|> <|MaskedSetence|> Compared to the “Joint” system, the structure of “DKDSSD” differs in that it adopts a dual-branch knowledge distillation structure and the interactive fusion module.
TABLE II: The configuration of different systems in this work. “††\dagger†” means proposed.
.
|
**A**: This approach affects the learning of shared parameters and is beneficial for optimizing the speech enhancement part towards the final objective.
**B**: During training, the loss of the speech enhancement part is backpropagated independently, resulting in less correlation between the frontend and backend.
**C**: During actual training, both tasks need to be accomplished, and backpropagation is performed simultaneously.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 1
|
For the PIPAL dataset, we follow their official split and training protocols mentioned in the paper. For the other datasets mentioned in Sec. IV-A, we randomly sample 80% of the data as the training dataset and the rest as the testing dataset. <|MaskedSetence|> The learning rate is set to 1e-4 using the CosineAnnealing scheduler with T_max and eta_min set to 50 and 0, respectively. We use a batch size of 8 and the Adam optimizer [24] with default parameters. <|MaskedSetence|> <|MaskedSetence|> We train the network 5 times using different seeds and report the mean value. Code will be released upon acceptance.
.
|
**A**: During testing, each image is randomly cropped into 224×224 patches 20 times, and their averaged prediction scores are calculated.
**B**: We choose ResNet50 and Swin Transformer as our encoder backbone to compare with other CNN and transformer models respectively.
During training, images are augmented by random cropping with a size of 224×224 and random horizontal flipping with 50% chance.
**C**: The network is trained using L2 loss for 100 epochs.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 3
|
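The optimizer and scheduler settings quoted above map directly onto standard PyTorch calls. A skeletal sketch — the linear layer is a placeholder for the ResNet50/Swin-backed quality network, and the loop body is elided:

```python
import torch
import torchvision.transforms as T

model = torch.nn.Linear(10, 1)  # placeholder for the ResNet50/Swin IQA model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # default Adam parameters
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=0)
criterion = torch.nn.MSELoss()  # L2 loss

train_aug = T.Compose([
    T.RandomCrop(224),              # random 224x224 crops
    T.RandomHorizontalFlip(p=0.5),  # 50% horizontal flips
])

for epoch in range(100):
    # ...iterate over batches of size 8, compute criterion, step optimizer...
    scheduler.step()
```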
<|MaskedSetence|> It develops a content-aware SR model called NAS-MDSR and utilizes a Pensieve-like RL agent to manage both download and enhancement processes. SRAVS (Zhang et al., 2020) adopts RL controllers and a lightweight SR model while proposing a double-buffer system model. <|MaskedSetence|> It formulates the problem as MPC and solves it heuristically. <|MaskedSetence|>
|
**A**:
Neural-Enhanced Streaming
NES incorporates neural enhancement methods into video streaming algorithms and enables high-quality content delivery by leveraging both bandwidth and computational resources.
NAS (Yeo et al., 2018) is the first method to integrate SR models into on-demand video streaming.
**B**: PreSR (Zhou et al., 2023) pre-fetches and enhances “complex” segments that bring the most quality improvement and bandwidth reduction.
**C**: Our method is most similar to these approaches.
.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 1
|
Furthermore, despite the adoption of spherical wave models in the existing studies of near-field ISAC, both the USW model and the more precise NUSW model still have shortcomings. In the USW model, the signal phases for different antennas are accurately modeled [21], while the channel gains are still uniform as in the UPW model. More accurate than the USW model, the NUSW model appropriately captures the variations of both the phases and channel gains for different links across array elements [22]. <|MaskedSetence|> <|MaskedSetence|> Consequently, the effective aperture and polarization loss vary across array elements. <|MaskedSetence|>
|
**A**: The effective aperture denotes the projected antenna aperture that is orthogonal to the local wave propagation direction corresponding to the current element, and the polarization mismatch represents the angular difference in polarization between the local wave and the antenna [9].
**B**: However, it is worth noting that all three conventional models (TCMs) mentioned above, i.e., UPW, USW and NUSW, ignore the loss in channel gain caused by the effective antenna aperture [23] and the polarization mismatch [24, 25].
**C**: If such losses are neglected, the receive power can unlimitedly increase with the number of antennas and even exceed the transmit power, leading to the violation of the energy-conservation law [9].
.
|
BAC
|
ACB
|
BAC
|
BAC
|
Selection 4
|
Based on the implementation methods, XL-MIMO can be divided into discrete antenna arrays and continuous-aperture surfaces. The state-of-the-art MIMO and massive MIMO systems are realized with discrete antenna arrays, where the array elements are connected to radio-frequency (RF) chains and analog-to-digital converters/digital-to-analog converters (ADCs/DACs) [40]. Typically, the adjacent array elements are separated by half a wavelength, so as to circumvent the impact of mutual coupling among elements and reap the spatial diversity gain. <|MaskedSetence|> <|MaskedSetence|> Different from the conventional discrete antenna array, the whole continuous-aperture surface is capable of transmitting/receiving signals, thus enabling a signal processing paradigm shift from the conventional hybrid digital-analog domain to the EM domain.
On the other hand, the sub-wavelength architecture renders the effect of mutual coupling and spatial correlation among elements non-negligible for practical modelling and communications [41, 45]. <|MaskedSetence|> Unless otherwise stated, this article focuses on XL-MIMO based on discrete array architecture.
.
|
**A**: However, thanks to the recent advances in metamaterials and metasurfaces, the element separation may be reduced to sub-wavelength, rendering it possible to pack more array elements in the same physical dimension [41].
**B**: It is also worth mentioning that metasurface based XL-MIMO differs from the extensively studied intelligent reflecting surfaces (IRSs) or reconfigurable intelligent surfaces (RISs) [46, 47, 48, 49, 50], where the active metasurface based XL-MIMO, such as dynamic metasurface antenna (DMA) [51] and reconfigurable holographic surface (RHS) [52, 53], possesses the capabilities of transmitting/receiving signals, while the semi-passive IRS/RIS without requiring RF chains is usually used for signal reflection.
**C**: In particular, when an uncountably infinite number of antennas are packed in a compact surface, the continuous-aperture or quasi continuous-aperture antenna array can be realized, also known as Holographic MIMO [42, 41] or large intelligent surface (LIS) [43, 44].
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 4
|
Figure 1 presents the overall architecture of the proposed Zipformer model.
Different from Conformer (Gulati et al., 2020), which processes the sequence at a fixed frame rate of 25Hz, Zipformer uses a U-Net-like structure that learns temporal representations at different resolutions more efficiently.
Specifically, given the acoustic features with a frame rate of 100Hz, the convolution-based module called Conv-Embed first reduces the length by a factor of 2, resulting in a 50Hz embedding sequence. The obtained sequence is then fed into 6 cascaded stacks to learn temporal representations at frame rates of 50Hz, 25Hz, 12.5Hz, 6.25Hz, 12.5Hz, and 25Hz, respectively. <|MaskedSetence|> The frame rate between stacks is consistently 50Hz. <|MaskedSetence|> <|MaskedSetence|> The final encoder output dimension is set to the maximum of all stacks’ dimensions. Specifically, if the last stack output has the largest dimension, it is taken as the encoder output; otherwise, it is concatenated from different pieces of stack outputs, taking each dimension from the most recent output that has it present. Finally, a Downsample module converts the sequence to 25Hz, resulting in the encoder output.
.
|
**A**: Different stacks have different embedding dimensions, and the middle stacks have larger dimensions.
**B**: Except for the first stack, the other stacks all adopt the downsampled structures, processing the sequence at lower frame rates.
**C**: The output of each stack is truncated or padded with zeros to match the dimension of the next stack.
|
CAB
|
BAC
|
BAC
|
BAC
|
Selection 4
|
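The truncate-or-pad rule between stacks described in the row above is easy to make concrete. Below is a minimal PyTorch sketch; the helper name `match_dim` and the stack widths are our own illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def match_dim(x: torch.Tensor, d_out: int) -> torch.Tensor:
    """Truncate or zero-pad the embedding (last) dimension so that
    stacks with different widths can be chained."""
    d_in = x.size(-1)
    return x[..., :d_out] if d_in >= d_out else F.pad(x, (0, d_out - d_in))

# Illustrative stack widths: middle stacks are wider, and the final
# encoder output width is the maximum over all stacks.
dims = [192, 256, 384, 512, 384, 256]
x = torch.randn(100, dims[0])   # 100 frames of the 50 Hz embedding sequence
for d in dims[1:]:
    x = match_dim(x, d)         # applied between consecutive stacks
print(x.shape)                  # torch.Size([100, 256])
```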
<|MaskedSetence|> Because the loudspeakers and microphones maintained consistent distances, the direct parts of the RIRs were omitted. <|MaskedSetence|> <|MaskedSetence|> Gaussian noise was added to emulate standard noise disturbances. The background noise was adjusted to ensure a signal-to-noise ratio (SNR) between [10, 20] dB relative to the overall energy of the RIR.
Wall absorption greatly affects the strength and dispersion of echoes. A set of typical absorption materials for floors, ceilings, and sidewalls defined in [32] was utilized and randomly assigned to each room. These materials include linoleum on concrete, carpet, and audience floor (wooden floor) for floors; gypsum boards, metal panels, and plasterboards for ceilings; and hard surfaces, rough concrete, rough lime washes, glass windows, and plasterboards for sidewalls..
|
**A**: RIRs were generated at an 8 kHz sampling rate and included N = 1024 samples in the time dimension.
**B**: With this configuration, a single sample represents approximately 4.3 cm of sound travel, and the total length of an RIR corresponds to 44 m.
**C**:
The raytracing engine of the Pyroomacoustics software [32] was employed to generate multichannel RIRs for general polyhedral rooms.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 1
|
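A minimal sketch of the simulation recipe described above, using the Pyroomacoustics raytracing engine [32]. The room geometry, source/microphone positions, and the single numeric absorption/scattering coefficients are placeholders; the paper instead assigns named floor/ceiling/sidewall materials from the package's materials database:

```python
import numpy as np
import pyroomacoustics as pra

fs, n_taps = 8000, 1024                    # 8 kHz RIRs with N = 1024 samples
room = pra.ShoeBox(
    [6.0, 5.0, 3.0], fs=fs,                # placeholder room dimensions (m)
    materials=pra.Material(energy_absorption=0.3, scattering=0.1),
    max_order=3,                           # image-source part of the hybrid model
)
room.set_ray_tracing()                     # enable the raytracing engine
room.add_source([2.0, 3.0, 1.5])
room.add_microphone_array(np.c_[[4.0, 2.0, 1.5]])
room.compute_rir()
rir = np.asarray(room.rir[0][0])[:n_taps]  # truncate to N time samples

# Add Gaussian noise at an SNR drawn from [10, 20] dB relative to RIR energy.
snr_db = np.random.uniform(10, 20)
noise = np.random.randn(len(rir))
noise *= np.sqrt(np.sum(rir**2) / (np.sum(noise**2) * 10**(snr_db / 10)))
rir_noisy = rir + noise
```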
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The text prompt is used to instruct SALMONN to answer open-ended questions about the general audio inputs and the answers are in the LLM text responses. The LLM and encoders are kept frozen while the rest can be updated in training.
.
|
**A**: The LoRA adaptor aligns the augmented LLM input space with its output space.
**B**:
Figure 1: The model architecture of SALMONN.
**C**: A window-level Q-Former is used as the connection module to fuse the outputs from a Whisper speech encoder and a BEATs audio encoder as augmented audio tokens, which are aligned with the LLM input space.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 2
|
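To illustrate what a window-level Q-Former-style connection module does, here is a heavily simplified PyTorch sketch: the fused Whisper/BEATs features are split into fixed-length windows, and each window is compressed onto a few learned queries by cross-attention. All dimensions, the window length, and the single-attention-layer design are our own simplifications of a full Q-Former:

```python
import torch
import torch.nn as nn

class WindowLevelFusion(nn.Module):
    """Simplified stand-in for a window-level Q-Former (hypothetical)."""
    def __init__(self, d_model=768, n_queries=1, window=17):
        super().__init__()
        self.window = window
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

    def forward(self, feats):                  # feats: (T, d_model) fused features
        T, d = feats.shape
        pad = (-T) % self.window
        feats = torch.cat([feats, feats.new_zeros(pad, d)])
        wins = feats.view(-1, self.window, d)  # (n_windows, window, d)
        q = self.queries.unsqueeze(0).expand(wins.size(0), -1, -1)
        out, _ = self.attn(q, wins, wins)      # cross-attention per window
        return out.reshape(-1, d)              # augmented audio tokens for the LLM

tokens = WindowLevelFusion()(torch.randn(100, 768))  # -> shape (6, 768)
```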
In recent years, speech pre-training technology has made significant advancements. The features extracted by speech pre-trained models can replace traditional FBank and other features, thereby enhancing the recognition accuracy of speech recognition models. Furthermore, due to the robust feature extraction and representation capabilities of speech pre-trained models, we also utilize their outputs for cross-modal extractor training. We train cross-modal extractors based on three distinct pre-trained models and compare their final recognition error rates. <|MaskedSetence|> The models consist of 12 layers of transformer blocks, each with 768 nodes. During all fine-tuning processes, we freeze the parameters of the pre-trained models. <|MaskedSetence|> <|MaskedSetence|> The input to the prenet in these configurations is cross-modal representation, while the postnet is fed with text embeddings..
|
**A**: We fine-tune the SpeechLM model on the corresponding supervised datasets to ensure a fair comparison.
TABLE II: CER (%) on RMAC test set of cross-modal representations with different speech pre-trained models.
**B**: All three models adopt the same configuration as the base model in fairseq, with consistent parameter values, and use a 10,000-hour WenetSpeech dataset [37] for pre-training.
**C**: Additionally, we compare the results of our method with the pre-trained model SpeechLM [33], which also incorporates textual information into the pre-trained model.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> RIS can also achieve passive information transmission by controlling the on/off states of its elements [12]. Specifically, [13] innovatively integrates RIS with SM, proposing the RIS-aided receive space shift keying (RIS-RSSK) and RIS-aided RSM (RIS-RSM) schemes. In [13], RIS operates in RIS-access point (RIS-AP) mode, serving as a transmitter module to reflect signals to the selected receive antenna based on spatial bits.
Numerical results demonstrate the superior bit error rate (BER) performance of the RIS-aided schemes compared to the conventional schemes.
[14] combines GSSK with RIS and proposes the RIS-aided receive generalized space shift keying (RIS-RGSSK) scheme to mitigate the limitations of SSK. <|MaskedSetence|>
|
**A**: To attain a higher transmission rate, GSM should be introduced into RIS-aided scenarios where the number of receive antennas is limited..
**B**: RIS can function as the modulator to realize a low-cost and energy-efficient RF chain-free transmitter [11].
**C**:
RIS consists of numerous passive elements that manipulate channel scattering and propagation characteristics by introducing pre-designed phases to incident waves [10].
|
CBA
|
ABC
|
CBA
|
CBA
|
Selection 4
|
The framework utilized, an overview of which is depicted in Fig. 1 for the case of a convolutional backend, is inspired by TUne+, which was recently introduced by Vasquez et al. [21] in the context of self-supervised waveform-domain music representation learning. <|MaskedSetence|> <|MaskedSetence|> It is also worth noting that feature maps in the expansive path are connected, through skip connections, to dimensionality-compatible feature maps in both the contractive path and the tail.
Contractive and Expansive Paths: Both the encoder and the decoder of our network are modeled after the baseline U-Net presented in [17], with the caveat of reducing the filter capacity to half of the original. The encoder receives STFT magnitude spectrograms and consists of 6 blocks, each of which contains 2 2D-convolutional sub-blocks (throughout the networks, all convolutional kernels are of dimensionality 3×3, while pooling and upsampling operations are performed at a factor of 2 in each dimension, unless stated otherwise), followed by a max pooling operation.
.
|
**A**: Conversely, the decoder is built symmetrically to the encoder and consists of 6 blocks, comprising transposed convolutional layers followed by convolutional sub-blocks, structured similarly to the encoder, and followed by a final convolutional sub-block.
**B**: Since TUne+ was trained within a contrastive learning framework, the output of the tail was used as a representation for downstream tasks.
**C**: In short, the architecture of TUne+ consists of: a) a contractive path (encoder), which gradually downsamples the input through a series of convolutional layers, producing thus multi-resolution features, b) an expansive path (decoder), which re-instates the feature map to its initial dimensionality by combining and upsampling the learned contractive features, and c) the tail, which produces the final learned embeddings.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 3
|
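For concreteness, a minimal PyTorch sketch of one encoder block as described above (two 3×3 convolutional sub-blocks followed by 2×2 max pooling); the BatchNorm/ReLU choices inside each sub-block are our assumptions, since the text only fixes the kernel and pooling sizes:

```python
import torch
import torch.nn as nn

class ConvSubBlock(nn.Module):
    """One 3x3 convolutional sub-block (normalization/activation assumed)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class EncoderBlock(nn.Module):
    """Two conv sub-blocks followed by 2x2 max pooling, as described."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.convs = nn.Sequential(ConvSubBlock(c_in, c_out),
                                   ConvSubBlock(c_out, c_out))
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.convs(x)   # routed to the decoder via a skip connection
        return self.pool(skip), skip

down, skip = EncoderBlock(1, 16)(torch.randn(1, 1, 128, 128))
# down: (1, 16, 64, 64), skip: (1, 16, 128, 128)
```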
Learning-based or classic tone-mapping methods can merge time-synchronized low dynamic range (LDR) frames captured at different exposures into a single HDR frame [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. However, the performance of these methods depends on additional alignment techniques to rectify differences between frames due to motion, overexposure, and occlusions. Failure to synchronize the LDR frames results in ghosting artifacts in the merged HDR frame.
Obtaining time-aligned frames can also be challenging and often requires using multiple cameras. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Additionally, camera arrays demand parallax correction to align cameras’ FOV, which leaves room for artifacts and errors.
.
|
**A**: Moreover, dividing the incoming light among multiple detectors degrades the overall SNR.
**B**: Multi-camera systems add significant complexity, bulkiness, and increased power consumption.
**C**: These setups either split incoming light using beam splitters among different cameras [30, 3, 31], or employ an array of cameras, each with its independent optical path [1, 32, 2].
|
CBA
|
BCA
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> Most existing RFF solutions collect data using a single receiver to identify a specific wireless transmitter within a pool of devices. <|MaskedSetence|> Additionally, the receiver hardware used during training is typically expected to remain unchanged during deployment. <|MaskedSetence|> Therefore, the resulting trained model is significantly biased towards the physical characteristics of the receiver used during training. With this limitation, any change at the receiver—where the model has been deployed—would require retraining the model.
This is paramount for any generalized deployment, with a trained model for one transmitting device, 𝒱, distributed to many (or any) devices that seek to identify/authenticate 𝒱.
.
|
**A**:
Receiver Hardware Bias.
**B**: In reality, the RF fingerprint is affected by the entire communication chain, including the transmitter, the channel, and finally, the receiver [11, 9].
**C**: Using this technique, it is commonly assumed that the receiver hardware does not introduce its own variability to the captured RF fingerprint.
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The style, emotion and speaker representations are extracted from three independent modules, which are probably entangled and lead to low similarity. These results show that our proposed approach obtains well-disentangled emotion, style, and speaker representations. Besides, the end-to-end VITS model avoids the error accumulation of intermediate representations, enabling flexible and natural expressive speech synthesis..
|
**A**: TSEW gets the lowest naturalness, which we attribute to the error accumulation of BN prediction.
**B**: SCIVTS achieves higher naturalness but the lowest emotion, speaker, and style similarity.
**C**:
4.1 Monolingual subjective evaluation
As shown in Table 1, the proposed approach outperforms the compared models in terms of naturalness, emotion similarity, speaker similarity, and style similarity.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 3
|
Considerable efforts have been devoted to algorithm development (see, e.g., [18, 12, 19] for recent surveys, and references therein). For example, convex optimization-based methods offer extensive theoretical recovery guarantees [20] by converting the quadratic problem into the recovery of a rank-1 matrix via semidefinite programming. While these methods achieve high performance, their computational cost becomes infeasible at large signal dimensions. <|MaskedSetence|> In these approaches, the selection of the fidelity function ℒ plays a crucial role. <|MaskedSetence|> In [22], the WF convergence is improved by replacing the quadratic cost function with a Poisson-based data fidelity function. Further improvements include using the least-squares function [23], or median-truncated functions to obtain an algorithm robust to outliers [24].
Another key component of these methods is the initialization algorithm, which is usually performed via spectral methods to compute the leading eigenvector of the measurement matrix. <|MaskedSetence|>
|
**A**: On the other hand, non-convex optimization alleviates the computational complexity by performing gradient descent-like iterations.
**B**: For instance, the seminal paper on the Wirtinger flow (WF) algorithm [21] employs a quadratic loss of the squared magnitude measurements.
**C**: Based on the vast PR literature, in the following section, we formulate HPR models and algorithms..
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 4
|
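The Wirtinger-flow recipe discussed in the row above (spectral initialization followed by gradient iterations on the quadratic loss) fits in a few lines of NumPy. This is a hedged sketch: the initialization scaling and step-size handling follow common practice rather than the exact constants of [21]:

```python
import numpy as np

def wirtinger_flow(A, y, iters=500, mu=0.1):
    """Sketch of WF: spectral initialization, then gradient steps on
    f(z) = (1/2m) * sum_k (|(A z)_k|^2 - y_k)^2."""
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) A^H diag(y) A.
    Y = (A.conj().T * y) @ A / m
    _, V = np.linalg.eigh(Y)
    z = V[:, -1] * np.sqrt(y.mean())    # rough scaling to the data energy
    step = mu / np.linalg.norm(z) ** 2  # WF-style normalized step size
    for _ in range(iters):
        Az = A @ z
        z = z - step * (A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m)
    return z

rng = np.random.default_rng(0)
n, m = 32, 256
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = np.abs(A @ x) ** 2
x_hat = wirtinger_flow(A, y)   # estimate of x (up to a global phase)
```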
The rest of this paper is organized as follows.
Section II introduces the system model. <|MaskedSetence|> <|MaskedSetence|> Section VI provides the theoretical BER analysis. Section VII presents the simulation results to evaluate our proposed scheme. <|MaskedSetence|>
|
**A**: Section III presents the proposed modulation scheme and the receiver design.
Section IV presents design criteria and problem formulation.
**B**: Section V utilizes the geometrical analysis to solve the formulated problem.
**C**: Finally, Section VIII concludes the paper.
.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 1
|
The remainder of this paper is organized as follows. Sec. II introduces the system model of the considered ISAC system and the corresponding performance metrics. Sec. <|MaskedSetence|> <|MaskedSetence|> IV generalizes the DDP and DIP framework to ISAC scenarios. <|MaskedSetence|> V provides simulation results to validate the performance of the proposed precoders. Finally, Sec. VI concludes the paper.
.
|
**A**: Sec.
**B**: III elaborates on the dedicated DDP and DIP precoding designs for sensing-only scenarios with random signaling.
**C**: Sec.
|
BAC
|
BAC
|
BAC
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> However, when the T stage information was removed, the model became confused and included some regional nodes in the target range, as if treating the case as an upstage. <|MaskedSetence|> Likewise, without information about laterality, the model contoured on the opposite breast. In contrast, the competing model showed inaccurate results, such as contouring on the opposite breast, even without omission. Moreover, regardless of the presence or absence of omission, there was little change in target contouring (e.g., laterality), or target contouring changed in patterns unrelated to the omitted information (e.g., contouring on the opposite side when omitting T stage or N stage information). These results indicate that the competing model, which receives clinical context in a simpler manner, fails to effectively incorporate such information and performs CTV delineation unrelated to the provided information. Similarly, in another case of T1cN1M0 breast cancer in the left breast where total mastectomy was performed, as shown in Fig. 4(c), when surgery information was not provided, our method confused the surgery type and produced contours sparing the skin and chest wall, as in breast-conserving surgery. However, the competing model instead contoured on the opposite breast, which was irrelevant to the surgical method.
Table 4: Ablation of input text information components for two different multimodal methods..
|
**A**: This trend was similarly observed in the omission of N stage information, where the model included regional nodes as in cases with nodal metastasis like N1 or N2.
**B**: Firstly, without omission as shown in Fig. 4(b), our method accurately segmented only the right breast area as the target volume for a person with T1aN0M0 cancer who underwent breast-conserving surgery.
**C**:
We further conducted an ablation study by omitting each piece of input textual information and compared the difference between a competing method (Numeric Category) and our method (LLMSeg) in Fig. 4(b)-(c).
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
|
One major challenge for ultra-broadband transmission over the THz region is that operation is constrained to a finite set of windows. Due to the severe absorption and scattering, these THz windows limit the effective use of such a broad bandwidth. This phenomenon underscores the importance of efficiently utilizing the THz spectrum. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Simply, an operating frequency region g can be defined with a linear function as follows:
.
**B**: However, this is not the only band limit on THz.
One method for reaching THz is extending the operating frequency via multiplication.
**C**: Frequency sparsity poses a practical challenge when operating in the THz range at a non-ideal center frequency.
|
ABC
|
BCA
|
BCA
|
BCA
|
Selection 2
|
<|MaskedSetence|> The channel model is presented in Section II. Some preliminaries are provided in Section III. <|MaskedSetence|> In Section V, a general framework for the optimally-shaped constellation construction is presented. An illustrative example and relevant numerical results are given in Section VI. <|MaskedSetence|>
|
**A**: In Section IV, we characterize the second-order asymptotical properties of the optimal shaping region for the VLC under dual intensity constraints.
**B**: Section VII concludes the whole paper.
.
**C**:
The remainder of this paper is organized as follows.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 1
|
By contrast, in most
applications, one and the same system is monitored by several sensors, which naturally calls for the joint (or multivariate) analysis of a collection of time series. Multivariate selfsimilarity, together with the practical and accurate estimation of the vector of selfsimilarity parameters, thus constitutes the heart of the present work.
Related work. Fractional Brownian motion (fBm) [16, 17], the only Gaussian, selfsimilar, stationary-increment process, has quasi-exclusively been used in practice as the reference model for scale-free dynamics.
The selfsimilarity parameter H is the quantity of interest in applications. <|MaskedSetence|> Notably, multiscale (wavelet) representations have proven to yield accurate and reliable estimation procedures for H [19]: They rely on the crucial facts that the statistics of the wavelet coefficients behave as power laws with respect to scales, and that the exponents of these power laws are controlled by H. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Nonetheless, these constructs were essentially based on component-wise univariate power law behavior [21, 19]..
**B**: To account for the multivariate nature of data in applications, multivariate extensions of fBm were recently proposed based on gathering correlated fBm [20].
Multivariate multiscale-based representations were accordingly devised for the estimation of the resulting vector of selfsimilarity parameters.
**C**: Its accurate estimation has thus received considerable attention (cf. [18, 17] for reviews).
|
BAC
|
CBA
|
CBA
|
CBA
|
Selection 4
|
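The wavelet-based estimation idea described above (wavelet-coefficient variances behave as power laws across scales, with exponents controlled by H) can be sketched as a log-scale linear regression. The scale range and the 2H+1 exponent convention below are common choices, not taken from a specific paper in this row:

```python
import numpy as np
import pywt

def estimate_H(x, wavelet="db3", j1=2, j2=7):
    """Estimate the selfsimilarity parameter H of a single time series by
    regressing log2 wavelet-coefficient variances on the octave j, using
    Var(d_j) ~ 2^{j(2H+1)} for fBm (orthonormal DWT convention)."""
    coeffs = pywt.wavedec(x, wavelet, level=j2)
    details = coeffs[1:][::-1]    # details[j-1] holds octave j (j=1 finest)
    js = np.arange(j1, j2 + 1)
    logvar = [np.log2(np.mean(details[j - 1] ** 2)) for j in js]
    slope = np.polyfit(js, logvar, 1)[0]
    return (slope - 1) / 2

# Sanity check on ordinary Brownian motion, for which H = 0.5.
x = np.cumsum(np.random.randn(2 ** 14))
print(estimate_H(x))              # close to 0.5
```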
For training data and testing Dataset I, bus-level load and wind power generation are randomly drawn from independent and identically distributed (i.i.d.) uniform distributions. <|MaskedSetence|> <|MaskedSetence|> Although these samples share the same grid topology, they are distinct graphs because the feature vector of each node is different. Supervised learning is used during the training process and the hyper-parameters are carefully tuned to improve model performance.
For the synthesis of Dataset II, the grids are partitioned into zones, and each zone contains a specified number of load buses and generators (both renewable and other dispatchable generators). Following the procedure outlined in Section II-D, the joint PDF approximation is established using marginal PDFs via Eq. 8 and 9. We consider demand following a truncated normal (TN) distribution, while wind power generation is converted from wind speed which is assumed to follow a Weibull (WB) distribution. <|MaskedSetence|> Details of zonal partitioning of the four grids, zonal correlations, the parameters used to define the marginal distributions, the training loss versus the number of training samples, as well as other important details of the numerical experiments can be found in the Supplementary Information (SI) provided with this article. Note that the training loss behavior suggests that it is not necessary to use more samples in the training process..
|
**A**: A sample of the stochastic variables is given as input to the numerical solver Pandapower to obtain the OPF solution (bus-level, branch-level and system-level QoIs).
**B**: A total of 1000 samples (70% for training and 30% for Dataset I) are generated using the OPF solver.
**C**: The conversion follows the power rating curve in [5].
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 2
|
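A hedged sketch of the sample-generation loop described above, using pandapower as the numerical OPF solver. The network (case30), the sampling ranges, and the recorded quantity are placeholders; the actual study uses its own grids, zonal distributions, and QoIs:

```python
import numpy as np
import pandapower as pp
import pandapower.networks as pn

net = pn.case30()          # placeholder grid (assumed to ship with cost data)
rng = np.random.default_rng(0)
base_p = net.load.p_mw.values.copy()

samples = []
for _ in range(10):        # the study generates 1000 such samples
    net.load.p_mw = base_p * rng.uniform(0.8, 1.2, size=len(base_p))
    pp.runopp(net)         # solve the AC optimal power flow
    samples.append(net.res_bus.vm_pu.values.copy())  # a bus-level QoI
```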
<|MaskedSetence|> Conventional memories like SRAM and DRAM are volatile, necessitating a continuous power supply. SRAM notably faces leakage issues, and DRAM requires periodic refresh cycles, rendering it energy-inefficient. <|MaskedSetence|> Among the emerging non-volatile technologies, RRAM encounters endurance problems, while PCM demands high voltage for switching. <|MaskedSetence|> However, this drawback is offset by SRAM’s leakage issues. Thanks to all these benefits, STT-RAM stands out as the only emerging non-volatile technology commercially available on the market, owing to its overall performance and reliability (Everspin Technologies, 2024; Nair and others, 2018).
Furthermore, since we do not need STT-RAM’s prolonged data-retention (of up to 10 years) for the targeted experiments, we take advantage of the tunability of its thermal-stability factor by which we shorten the retention time in return for further improving its energy efficiency and access latency (Smullen et al., 2011).
These features make STT-RAM very suitable for our experiments.
.
|
**A**: Although STT-RAM boasts numerous advantages, it has the drawback of requiring a constant current for writing, resulting in slightly higher energy consumption and latency compared to the traditional SRAM technology.
**B**: Flash memory, another conventional technology, operates at relatively high voltages.
**C**: Table 2 illustrates a comprehensive comparison between STT-RAM and various memory technologies, encompassing both conventional and emerging non-volatile memories.
|
CBA
|
CAB
|
CBA
|
CBA
|
Selection 4
|
Finally, we compute the least number of appliances that must be compromised to cause a system voltage safety violation. Equation (8) determines the critical device count for the system with CP loads, while (22) calculates it for the system with ZIP loads. The results are summarized in Table II, where we inject LAAs at different leaf buses in the system (one at a time). <|MaskedSetence|> <|MaskedSetence|> 1, where we similarly observe that launching an LAA at bus 18 is most detrimental to the system. <|MaskedSetence|> 2, the voltage profile (computed using the model in Subsection IV-B) of the system is depicted in the presence of an LAA at bus 18. According to this figure, based on the closed-form approximation, launching an LAA on 282 air conditioners, 163 resistive heaters, or 151 copiers at bus 18 is sufficient to cause a voltage constraint violation.
|
**A**: We observe that launching an LAA at bus 18 requires the least number of compromised devices.
**B**:
In Fig.
**C**: This also confirms the results observed in Fig.
|
ACB
|
BAC
|
ACB
|
ACB
|
Selection 3
|
Most denoising methods assume AWGN of a known noise level. Blind denoising, i.e., the removal of AWGN of unknown level, has been investigated in several recent works.
Some of these works employ a neural network trained with examples of randomly varying noise levels, e.g., denoising convolutional neural networks (DnCNNs) [16, 17]. <|MaskedSetence|> An alternative approach, related to our work here, is to employ boosting of image denoising [20, 21]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Other works proposed to first estimate the noise level from the data and then employ a denoiser that is designed to fit the predicted noise level [18, 19].
**B**: That is, iteratively strengthen the signal by adding the previous denoised image to the degraded input image, denoise the strengthened image, and subtract the previous denoised image from the restored signal-strengthened outcome.
A recent line of work, somewhat related to RED and PnP and inspired by annealed Langevin dynamics, employs a trained denoiser as an estimator for the posterior distribution, thus iteratively sampling from the posterior distribution to obtain improved perceptual quality.
**C**: This approach was recently employed for generative purposes (denoising diffusion probabilistic models [22, 23, 24]) as well as for score-based stochastic denoising [25] and image reconstruction [26, 27] in an attempt to avoid the so-called “regression to the mean” phenomenon.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 3
|
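The "strengthen, operate, subtract" boosting loop described in candidate B above is short enough to sketch directly. The Gaussian filter stands in for an arbitrary denoiser, and the relaxation weight rho is a common generalization rather than something the row specifies:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sos_boosting(y, denoise=lambda z: gaussian_filter(z, sigma=1.0),
                 iters=5, rho=1.0):
    """SOS boosting sketch: x_{k+1} = f(y + rho * x_k) - rho * x_k."""
    x = denoise(y)                        # plain first denoising pass
    for _ in range(iters):
        x = denoise(y + rho * x) - rho * x
    return x

noisy = np.random.randn(64, 64)           # stand-in noisy image
restored = sos_boosting(noisy)
```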