| text_with_holes | text_candidates | A | B | C | D | label |
|---|---|---|---|---|---|---|
<|MaskedSetence|> PuzzleTuning addresses this by varying puzzle patch sizes regulated by a patch-scheduler. Specifically, we currently employ a repetitive-looping strategy shown in Fig. <|MaskedSetence|> Additionally, inspired by curriculum learning to improve the convergence, a fix-position ratio scheduler is designed, varying the proportion of un-shuffled patches from 90% to 20%. It modulates shuffling complexity and thus controls the task difficulty. <|MaskedSetence|>
|
**A**: With these two scheduler designs, this standard curriculum is applied to all PuzzleTuning experiments.
.
**B**: 4, cycling patch sizes from 16 to 112 every three epochs.
**C**:
In pathological image analysis, multi-scale semantic knowledge is crucial.
|
BAC
|
CBA
|
CBA
|
CBA
|
Selection 2
|
Furthermore, it is apparent from Fig. 9(a) that gesture recognition for “beckoning” and “plugging” is not sufficiently accurate. <|MaskedSetence|> Moreover, a 22.5% confusion probability exists between the gestures of “beckoning” and “pushing and pulling”, indicating that some of the simulation samples of “beckoning” are similar to the experimental samples of “pushing and pulling”.
To qualitatively support the aforementioned observations, we applied dimensionality reduction techniques to the extracted features (network output before entering the fully-connected layer classifier) for the entire simulation and experimental datasets using the ResNet18 model trained by Scheme 1. Specifically, t-distributed Stochastic Neighbor Embedding (t-SNE)[32] and Principal Component Analysis (PCA)[33] were employed to visualize and analyze the high-dimensional features (512 dimensions for ResNet18) of the dataset, as illustrated in Fig. 10. <|MaskedSetence|> <|MaskedSetence|> These could be regarded as the limitation of the proposed simulator, since the real-world channel is more complex than the simulated one due to the impacts of multipath and the non-ideal hardware.
.
|
**A**: The recognition accuracy is 70% and 75%, respectively.
**B**: It could be observed that although 83.0% gesture recognition accuracy is achieved, the distributions of the features of different gestures are not sufficiently separated.
**C**: Moreover, the simulated and experimental features for the gesture “Scaling” are not well aligned, indicating the inherent feature distinctions between simulated and experimental datasets.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 3
|
Contribution
In this short communication, we present two equivalent ex post conditions that guarantee that a solution obtained via any conic relaxation of the AC-OPF resulting from a lifted, linear power flow constraint coincides with the solution to the original problem. <|MaskedSetence|> <|MaskedSetence|> If either condition is respected, the obtained solution will satisfy all AC-OPF constraints and will minimize its objective function. The operator can then directly implement this solution in the grid without having to check for feasibility or running the NP-hard AC-OPF. <|MaskedSetence|>
|
**A**: In particular, this result holds for the tight-and-cheap relaxation and the strong, tight-and-cheap relaxation presented in bingane2018tight .
**B**: Specifically, we show that obtaining an optimal voltage matrix that is either rank-1 or has self-coherent cycles from a conic relaxation implies that this solution is exact for the original AC-OPF problem.
**C**: We then illustrate the application of these conditions on MATPOWER cases where sufficient exactness conditions are not met.
.
|
BAC
|
BAC
|
BAC
|
CBA
|
Selection 1
|
For the ISEA system in Fig. 1, we mainly consider analog multi-access techniques for enabling efficient simultaneous access (i.e., view-and-channel aggregation). We also explore ISEA with noiseless feature aggregation in Section III, aligning with scenarios involving reliable digital transmission. In the class of analog transmission, we primarily adopt AirComp for feature aggregation. To investigate its optimality, we further consider analog orthogonal access, namely orthogonal access with fast analog transmission [26], as a benchmark scheme. It achieves the same multi-access latency as AirComp but requires the receive spatial DoF to be equal to or exceed the number of sensors. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Block fading is considered such that the channel remains unchanged over a coherence duration comprising T time slots. Symbol-level synchronization is assumed over all sensors.
.
|
**A**: The assumptions and operations of the schemes are described as follows.
**B**: Assuming a frequency non-selective channel, time is slotted and each slot is used for transmitting one symbol.
**C**: The server and sensors are equipped with an N-element array and a single antenna, respectively.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> VoxCommunis has almost twice as many hours as Fleurs-Ipa with roughly half of the languages. <|MaskedSetence|> But this finding also suggests that we can achieve good crosslinguistic generalizability with fewer languages but longer hours using phoneme modeling. Given the empirical data distributions in real-life settings, scaling up training hours in a dozen languages is much easier than scaling up the number of languages. <|MaskedSetence|>
|
**A**: The practical implication is that we might be able to build multilingual speech processing systems for many low-resource or zero-resource languages with large-scale data in a dozen relatively high-resource languages.
Is it feasible to scale up the creation of good-quality phonemic transcriptions in world languages?.
**B**: We compared Clap-Ipa only trained on VoxCommunis Ahn and Chodroff (2022) or Fleurs-Ipa.
**C**: In Tables 2 and 3, Clap-Ipa-Vc trained on more hours of speech generally has similar performance to Clap-Ipa-Fleurs trained on a subset of the IpaPack across metrics, which suggests that creating high-quality data is effective in achieving good performance.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 4
|
In this work, we extend the predictive safety filter [12] to address bounded perturbations as well as time-varying systems and constraints. <|MaskedSetence|> This event triggering only requires solving the safety filter optimization problem if the nominal control performs an unsafe action. <|MaskedSetence|> <|MaskedSetence|> The main contributions of this work are: (1) Robust predictive safety filter for time-varying systems and constraints, (2) 1-step robust safety filter for discrete-time systems, and (3) event-triggering methods for reducing online computations.
.
|
**A**: Furthermore, we address computation demand via an event-triggering scheme.
**B**: We also extend the existing results in the literature to address robust discrete-time barrier functions with a 1-step safety filter that reduces computation and simplifies the process of barrier function verification.
The resulting methodology is guaranteed to be safe and robust.
**C**: Figure 1 illustrates the proposed method.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
|
In the realm of ultra high-speed communications with extensive signal bandwidth, frequency-dependent precoding, realized by fully-digital architecture, is favourable to be adaptive to wideband frequency-selective channels. However, the practical implementation of massive antenna arrays can introduce significant beamsplit due to the analog front-end in hybrid fabrication, which may result in significant performance degradation. <|MaskedSetence|> To mitigate the effect of beamsplit, recent studies[5, 6, 20, 21] have employed true-time delayers (TTDs), which introduce specific time delays to signals, thereby creating frequency-dependent phase shifts. These hybrid beamforming architectures, incorporating a limited number of TTDs alongside a larger array of PSs, have demonstrated significant advantages in enhancing far-field wideband communications performance[5, 6, 20, 21]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: For instance, in far-field transmission, a beamformer designed for a specific direction at a given frequency might experience substantial deviation to an unintended direction due to frequency mismatches[5], thus compromising the beamforming gain over the entire bandwidth.
**B**: More recently, there have been several attempts extending the study of beamsplit to near-field communications[23, 24, 25, 26], exploring spatial waveform design[23], heuristic piece-wise approximation[25], convex optimization[24], and deep learning[26], respectively.
.
**C**: This improvement holds true even when employing cost-effective hardware options like fixed-time-delay TTDs[22].
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
We next illustrate the proposed control strategy, and apply the three-step control redesign strategy in Section V-A after each RBC. Here we set the linear velocities in the local controllers (24) as constants. <|MaskedSetence|> That is, the tasks of the two mobile robots are accomplished under the proposed control strategy. Comparing Fig. 4(a) and Fig. 4(b), we can see the advantages of the proposed control strategy: (i) the task accomplishment is guaranteed; (ii) the re-collision with the same rigid body is avoided; (iii) the chattering and deadlock phenomena are excluded.
To show the effects of the imposed impulses and the local controllers, the motion directions and control inputs in [0, 18] are presented in Fig. 5. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We can see that for each mobile robot, an RBC causes a jump of the motion direction and two jumps of the control input.
**B**: The jump of the motion direction and the first jump of the control input occur at the collision time..
**C**: The position trajectories of the two mobile robots are shown in Fig. 4(b), where the target positions are reached after two RBCs.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 1
|
<|MaskedSetence|> OPC is depicted in Fig. 8 for different EMF thresholds. As we can see, the performance only worsens when the limit is substantially lower than the standard established by ICNIRP and FCC (solid line), which indicates we can satisfy the health regulations without compromising the data rate. De novo, CF-mMIMO generates better QoS in all situations.
Finally, to compare the results of both multi-antenna technologies versus the user load, in Fig. 8, we show the data rate obtained under OPC for different values of K. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: As a result, one can infer that cell-free approaches are more suited for handling denser deployments.
.
**B**: To further investigate the role of radiation, the data rate w.r.t.
**C**: Unsurprisingly, the QoS deteriorates substantially with a large number of users (e.g., K = 40), yet the distributed design always surpasses the multi-cell scheme.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> The antenna-domain channel incorporates the channel responses of all antenna elements, as illustrated in (2).
The wavenumber channel is derived through a linear transformation of the antenna channel, as depicted in (6). <|MaskedSetence|> Due to the limited aperture size, the wavenumber-domain channel is typically sparse [8].
Similarly, the polar-domain channel is also a linear transformation of the antenna-domain channel. Unlike the wavenumber-domain representation, which solely considers the sampling in the angular domain, the polar-domain representation simultaneously accounts for the sampling of angular and distance information. Consequently, the dimension of the polar domain transformation matrix is much larger than that of the wavenumber domain.
.
|
**A**: In this paper, various domains concerning near-field channels are encompassed, namely the antenna domain, wavenumber domain, and polar domain.
**B**: It reflects the significance of different spatial frequency components in accurately representing the antenna-domain channels.
**C**: To provide a clear understanding of these domains, we elaborate on their differences and connections as follows.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 4
|
III-A2 Average Marginal Damage from Local Air Pollution
We use the Intervention Model for Air Pollution (InMAP) [38] to compute average marginal damages from local air pollution of existing power plants. InMAP uses air pollution source-receptor matrices to relate emissions at source location to concentration at receptor locations. These matrices then can be used to estimate locational damages from air pollution without simulations with computationally demanding air quality models. <|MaskedSetence|> To map the Air Emissions Modeling data to our power plant database, we first match the power plant data to Clean Air Markets Program Data (CAMD) using the EPA-EIA-Crosswalk [50]. <|MaskedSetence|> We first simulate the total PM2.5 concentration from emissions at the power plant level. The total PM2.5 concentration is the sum of primary PM2.5 concentration, particulate NH4 concentration, particulate SO4 concentration, particulate NO3 concentration, and secondary organic aerosol concentration, all measured in μg/m³. We then use the estimated total PM2.5 concentration to estimate the number of deaths using the Cox proportional hazards equation, along with information on population counts and baseline mortality rates. <|MaskedSetence|> Finally, to estimate the economic damage, we apply the value of a statistical life metric set to $9 million [39].
.
|
**A**: We assume that the overall mortality rate increases by 14% for every 10 μg/m³ increase in total PM2.5 concentration, as shown in [51].
**B**: We use 2016 annual emissions of volatile organic compounds (VOC), Nitrogen oxides (NOx), ammonia (NH3), Sulfur oxides (SOx), and fine particulate matter (PM2.5) measured in short tons as inputs to InMAP.
**C**: To calculate the damages from air pollution, we follow InMAP’s methodology [38].
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
In Table 1, we provide the quantitative results. <|MaskedSetence|> Note that the second best method Uformer-B has 50.42M parameters and 205.82G FLOPs. <|MaskedSetence|> Our MARformers also achieve faster inference speeds than the Uformers, though with inferior Dice scores, respectively.
The qualitative results of visual quality are presented in Fig. 3. We observe that our MARformer-L well recovers the teeth shapes and obtains higher PSNR and SSIM results than the other comparison methods. <|MaskedSetence|>
|
**A**: Besides, our MARformer-T achieves similar PSNR and SSIM results with Uformer-T, but needs only 0.40M parameters and 12.82G FLOPs compared to 5.24M and 25.39G for Uformer-T.
**B**: One can see that our MARformer-L outperforms the other methods in terms of PSNR and SSIM, but needs only 11.76M parameters and 60.25G FLOPs.
**C**: The light-weight MARformer-L achieves similar results to Uformer-T.
All these results validate that our MARformer is more efficient than the comparison methods on dental CBCT MAR.
.
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 3
|
Each benchmark is repeated ten times, each run being initialised with a different random seeding. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Secondly, we report the average, minimum and maximum computation times over all successful runs. These results are collected in Table 3.
As shown in Table 3, it is clear that the approaches are similar when synthesising the more straightforward Lyapunov function, with a slight improvement in terms of speed and robustness..
|
**A**: We consider two measures to determine the quality of each tool.
**B**: We report this as the success rate S.
**C**: Firstly, how often the tool correctly terminates successfully (having verified the property) out of 10 runs.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 3
|
3. Radarize Design
Figure 4. Radarize Overview. Left. <|MaskedSetence|> <|MaskedSetence|> The mapping module converts range-azimuth heatmaps into range scans. <|MaskedSetence|> The tracking module (a) regresses relative rotation from successive range-azimuth heatmaps and (b) regresses velocities from doppler-azimuth heatmaps and integrates them into odometry estimates. Right. An optimization-based SLAM backend outputs real-time map and trajectory estimates.
.
|
**A**: Radar frames are processed into heatmaps.
**B**: Bottom.
**C**: Top.
|
ACB
|
BAC
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> The components observation noise, reward recalculation, exploration noise decay, and disturbances are minor changes, while the asymmetric actor-critic structure, action history as well as rotor delay, and the curriculum are larger modifications.
From the results in Table II, we can see that overall (and particularly when executed on the real system) the baseline is the most reliable with no crashes in any of the tasks/seed combinations. In general, we can observe a trend that removing the smaller modifications still yields reliable policies (early during the training as well as after convergence). <|MaskedSetence|> Here, we can observe that the curriculum indeed strongly impacts the training speed. Without the curriculum, the policies are not able to fly early on while after convergence they reach a comparable performance to the baseline. <|MaskedSetence|>
|
**A**: Note that the configurations differ in the added/removed complexity.
**B**: In contrast, when removing the more complex components, we can see a more pronounced drop in reliability as well as tracking performance.
**C**: We can observe a similar behavior when removing the Asymmetric Actor-Critic (AAC) and hence also ablate removing both the asymmetric actor-critic and the curriculum, and find that the training takes considerably longer and even after 3,000,000 steps most of the policies are crashing.
As deduced from first principles, we can also confirm that training without simulating the rotor delay leads to no usable policy..
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Recently, the emerging reconfigurable intelligent surface (RIS) has been identified as a promising technology for 6G and beyond for its capability of reconstructing a programmable and intelligent wireless environment [10, 11, 12, 13, 14, 15, 16]. <|MaskedSetence|> Besides, RIS can be introduced to various conventional wireless communication systems in an environment-friendly and compatible way, and the performance gains brought by deploying RISs have been widely investigated in physical layer secret communication system [17], interference-nulling scenarios [18], non-orthogonal multiple access networks [19], multi-cell networks [20], and so on.
In addition, RIS has also been introduced to spectrum sensing systems, making it possible to achieve a reliable sensing performance with less required sensing time [21, 22, 23, 24, 25, 26]. In [21], the authors investigate a RIS-assisted spectrum sensing system where a multi-antenna SU aims to detect the primary signals with maximum eigenvalue detection method, and the number of required reflecting elements (REs) to achieve a near-1 detection probability is analyzed by utilizing random matrix theory. <|MaskedSetence|> In [23], the authors design a RIS-aided spectrum sensing system where the RIS reflection dynamically changes according to a given codebook, and a weighted energy combination method is proposed to exploit the time-variant reflected signal power for spectrum sensing. <|MaskedSetence|> In [25], the authors study an RIS-enhanced opportunistic CR system, where the RIS is configured to not only improve the sensing performance but also enhance the secondary transmission rate.
.
|
**A**: The authors of [22] study a RIS-enhanced energy detection method, and the closed-form expression of the average detection probability is derived.
**B**: In [24], the RIS’s effect on sensing performance is investigated by comparing two typical configurations, namely, enhancing the SNR of the primary receiver and increasing the sensing SNR of SU, and the challenges for RIS-assisted spectrum sensing systems are also summarized.
**C**: It actually acts as a tunable reflector of the wireless environment that can reflect different incident signal components in different manners.
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 4
|
To improve the listening experience for signals processed with FOA, researchers have proposed various HRTF preprocessing methods. <|MaskedSetence|> Examples of these methods include global equalization, time alignment, ear alignment, and magnitude least squares (MagLS) [5, 6, 7, 8].
While the MagLS method has been proven to be very beneficial in reducing spectral error in FOA, the resulting binaural signals still have significant spatial errors [2, 4]. The aim of this paper is to propose and investigate improvements over the current MagLS method. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This approach aims to improve FOA’s spatial attributes while preserving low spectral errors, ultimately providing a more immersive and realistic audio experience for listeners..
**B**: This is achieved by the use of the HRTF preprocessing with integrated MagLS and interaural level difference (ILD) optimized errors.
**C**: These methods aim to overcome some of the spatial resolution and timbre degradation issues of FOA by correcting the low-order HRTF spatial and spectral errors.
|
CBA
|
CAB
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> In contrast, the high-level features contain more semantic information but lack precise details (e.g., object boundaries) due to the significant resolution reduction. <|MaskedSetence|> This is a challenging issue, especially in the context of medical imaging, which is commonly constrained by limited data. Such information fusion, accomplished by concatenating low-level and high-level features across multiple levels through dense connections, may limit the contribution of information from different levels and potentially introduce noise. <|MaskedSetence|> This leads to an increase in both GPU memory usage and floating point operations (FLOPs).
In [8], reverse
attention was.
|
**A**: On the other hand, despite the fact that the additional convolutions introduced do not significantly increase the number of parameters, GPU memory consumption will rise because all intermediate feature maps and the corresponding gradients must be stored for forward passes and backward gradient computations.
**B**: Simply fusing features through concatenation will heavily rely on the network’s learning capacity, which is often proportional to the training dataset size.
**C**: Regarding the features extracted by the encoders, the low-level features usually preserve more details but lack sufficient semantic information and may contain undesired noise.
|
CAB
|
CBA
|
CBA
|
CBA
|
Selection 2
|
Recently, there have been significant attempts to deanonymize Bitcoin transactions. <|MaskedSetence|> Instead, researchers use synthetic and fake data: Ashfaq et al. [2] use a synthetic dataset; Rabieinejad et al. [49] generate fake labels; Dahiya et al. [12] use an unverified Kaggle dataset; Pham and Lee [47] use unverified labels for “30 thieves” of unknown provenance; Sankar Roy et al. [52] use the same Pham and Lee dataset. <|MaskedSetence|> This approach is a methodological dead end; putting aside the assumptions associated with fake, arbitrary, or false datasets, due to the imbalance of classes (very few labels of illegal or abnormal transactions), researchers must use resampling techniques like SMOTE to improve learning performance [63]. These are not good data science practices; effectively researchers are using low quality data and stretching it even thinner to boost learning performance (often using overly simple metrics like recall and accuracy). The results have no practical consequence. <|MaskedSetence|>
|
**A**: Most research uses supervised machine learning techniques to classify addresses, but with the exception of Akcora et al. [1] and Paquet-Clouston et al. [46], researchers do not have access to quality labelled datasets.
**B**: Or researchers use very simple heuristics (such as node degree patterns, see Weber et al. [60] and Lorenz et al. [36] who use the Weber et al. dataset) or slightly more complex heuristics (like motifs, see Wu et al. [63]) and assume that such patterns are evidence of complex criminal behaviour, like money laundering.
**C**: Indeed, surveying the field with all of these constraints and heuristics in mind, we can conclude that practical Bitcoin deanonymization remains elusive.
.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Figure 2 shows the visual performance of mGRE image translation to FLAIR and MPRAGE sequences. It is clear that U-Net introduces blurring artifacts and detail loss. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> By using the 2.5D refinement module, DiffGEPCI successfully mitigates these artifacts, attaining state-of-the-art performance across all three planes, as demonstrated in both Figure 2 and Figure 3. The quantitative results over the whole
testing set are presented in Table 1, highlighting the quality improvements achieved by DiffGEPCI.
.
|
**A**: Pix2Pix outperforms U-Net due to its adversarial nature, but still results in blurry images.
**B**: However, upon closer examination of the generated volume in the other two planes, cross-slice artifacts become evident, as depicted in Figure 3.
**C**: The traditional 2D diffusion model excels in recovering details compared with the previous two methods.
|
ACB
|
ACB
|
ACB
|
BAC
|
Selection 1
|
To address these requirements, we introduce controlgym, a lightweight and versatile Python library that offers a spectrum of environments spanning from linear systems to chaotic, large-scale systems governed by partial differential equations (PDEs). <|MaskedSetence|> Additionally, controlgym includes ten large-scale control environments governed by fundamental PDEs in fluid dynamics and physics. These PDEs are discretized in space by custom solvers, yielding user-tunable state-space dimensions without affecting the dynamics of the environment, a key aspect for assessing the scalability of RL algorithms. <|MaskedSetence|> These environments, detailed in Tables 1 and 2, enhance Gym’s collection and are highly customizable to support theoretical advancement in L4DC. <|MaskedSetence|> Moreover, our PDE environments uniquely allow the users to extend system dimensionality to infinity while preserving the intrinsic dynamics. The PDE solvers implemented to power controlgym are innovative, employing state-of-the-art schemes with exponential spatial convergence and high-order temporal accuracy masked behind a user-friendly discrete-time state-space formulation. Specifically for linear PDE environments, we have developed novel state-space models to evolve the PDE dynamics.
.
|
**A**: All environments comply with Gym and support standard RL algorithms (Sutton et al., 2000; Kakade, 2002; Schulman et al., 2015, 2017; Mnih et al., 2016; Sutton and Barto, 2018), e.g., as seen in stable-baselines3 (Raffin et al., 2021).
Our primary contribution is the introduction of a diverse array of control environments characterized by continuous and unbounded action-observation spaces, designed for large-scale systems.
**B**: For example, users can manipulate the open-loop dynamics of PDEs by adjusting physical parameters, with explicit formulas relating parameters and eigenvalues available in linear PDE environments (cf., Section 3.1).
**C**: Specifically, controlgym features thirty-six linear industrial control environments, encompassing sectors like aerospace, cyber-physical systems, ground and underwater vehicles, and power systems.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> Four of them are linked in a series, yielding a net voltage of 48V, which powers the TBWS motor. A step-down converter is utilized to convert the voltage from 48V to 12V, which in turn provides power to the SBWS and EBS motor. The remaining two batteries, also interconnected in series, produce a net voltage of 24V. <|MaskedSetence|> 2(a)) and control (Fig. 2(b)) systems.
.
|
**A**:
The autonomous go-kart is powered by six Nermak Lithium LiFePO4 deep cycle batteries, each possessing a voltage of 12V and a capacity of 50Ah.
**B**: These batteries are installed on both sides of the go-kart and interconnected via wiring across the chassis.
**C**: This voltage is then fed through several converters to obtain different desired voltages to power up the sensing (Fig.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Magnetic field odometry, on the other hand, is a lightweight solution to provide odometric information with a magnetometer array. It does not construct a magnetic field map but provides odometric information based on local magnetic field properties. The seminal work [8] derived an equation that relates the body velocity and the gradient of the magnetic field, which can be calculated from the measurements from a set of spatially distributed magnetometers. Later, in [14, 15], the authors proposed an observer to estimate body velocity, proved its convergence, and showcased its usability for indoor localization. <|MaskedSetence|> To address the issue of the noisy magnetic field gradient, the authors in [18, 19] derived a differential equation for high-order derivatives of the magnetic field. They then developed a filtering algorithm comprising a primary filter which is used to estimate the gradient of the field. <|MaskedSetence|> The pseudo-measurement is used to handle adversarial situations where the states’ observability is affected by the low gradient of the field and/or the target’s velocity close to zero. The methods developed have demonstrated promising prospects in magnetic field odometry. <|MaskedSetence|> To address this problem, [22] matches the waveforms from a pair of magnetometers in a sliding time window to reduce the influence of the temporary disappearance of the magnetic field gradient. Experiments show that the proposed method in [22] performs similarly to wheel odometry in magnetic-rich environments.
.
|
**A**: Later, the same authors [20, 21] proposed an AI-based solution where a Long Short-Term Memory (LSTM) model is used to create a pseudo-measurement of the inertial velocity of the target.
**B**: In subsequent works [16, 17], the authors incorporated inertial sensor biases and magnetic disturbance in the model and designed a filter based on the error-state Kalman filter (ESKF).
**C**: However, they are susceptible to noise when computing the gradient, and observability issues arise from a weak magnetic field gradient or low target speed.
|
BAC
|
BAC
|
ABC
|
BAC
|
Selection 4
|
The FPGAs are adopted as the data processing unit of each node. The information bits are randomly generated in the personal computer (PC). The information bits are divided into groups of 8 bits and sent from the PC to the FPGA board via the UART interface. <|MaskedSetence|> <|MaskedSetence|> The information bits and beacon signals are exported to modulate the UV LED via the pin of the FPGA board.
PMTs are employed as the photon detector, which can convert the received photoelectrons to analog pulses such that photon counting can be realized via pulse counting. A PMT (CR340) detector is sealed into a shielding box, which is integrated with a UV optical filter that passes the light signal of wavelength around 266 nm and blocks signals on other wavelengths. The digital voltage values are obtained via analog-to-digital converters (ADCs). The digital processing of beacon reception, time compensation, and time slot transition is performed in the FPGA boards of the slave nodes, and the digital processing of frame synchronization, channel estimation and symbol detection is performed in the FPGA boards of all nodes. The block diagram and experimental system of the NLOS UV scattering communication network are shown in Figs. <|MaskedSetence|>
|
**A**: In addition, the beacon signals are generated in the FPGA board of the master node.
**B**: LEDs are employed as the UV transmitter and we adopt OOK modulation.
**C**: 2 and 3, respectively..
|
ABC
|
ABC
|
CAB
|
ABC
|
Selection 1
|
Observation 6 – Different path options offer the flexibility to trade off between FLOPs and performance, with increased FLOPs generally resulting in improved results. <|MaskedSetence|> In addition, previous work demonstrates superiority by comparing FLOPs as a metric [50, 40, 24, 25, 41, 70, 76, 94, 95]. <|MaskedSetence|> However, rather than simply increasing FLOPs, experimental results show that an optimal stride configuration utilizes FLOPs more efficiently.
The testing results reported in the right sub-table of Table V demonstrate a consistent trend similar to that observed on the development set. <|MaskedSetence|>
|
**A**: This practice is based on the common understanding that bigger FLOPs often correlate with better performance.
**B**: This flexibility in model design allows for better adaptation to specific application scenarios.
**C**: This further verifies the two observations mentioned above.
In addition, all the stride configurations depicted in both trellis diagrams in Fig. 3 are visualized in Fig. 4, comparing their performance, model size, and FLOPs..
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 1
|
<|MaskedSetence|> However, it invokes a sequence of binary product methods with a left-to-right precedence rule. Prior to RT software expression, this rule may require modification of an RT algebraic expression. In particular, to express an N-ary inner product over an index, all but the rightmost operand must have the same variant, true or false, for that index, with only the rightmost operand having a complementary variant.
The NT software has two deviations from its NT algebra. To correctly compute some expressions, one or more sub-expressions have to be enclosed in parentheses and preceded by an extra operator, whose purpose is to decrement internal counters associated with underline-equivalent operators on enclosed indices. <|MaskedSetence|> <|MaskedSetence|> These deviations represent additional weaknesses, this time from a constructivism angle, of NT underline and multi-variant index formalisms..
|
**A**:
In theory, MATLAB could invoke a single overloaded method for an N-ary product expression involving a tensor.
**B**: For this purpose, the tilde operator is hijacked.
**C**: Thus, it cannot represent the entrywise NOT of a DenseNT, a tensor class of the NT software.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 2
|
<|MaskedSetence|> The key idea is to project structured light patterns of varied angle of linear polarization (AoLP).
Polarization is invisible to the naked eye and a regular camera. By using structured polarization patterns and an RGB-polarimetric camera, we can realize invisible depth and texture capture. Polarization also gives us two advantages that none of the past visible or IR structured light methods have. The first is that the surface normals can be directly recovered providing dense fine details of the target surface. <|MaskedSetence|> The second is that we can also estimate the reflectance properties of the target surface from the polarimetric object appearance. <|MaskedSetence|>
|
**A**:
In this paper, we introduce Structured Polarization, a first-of-its-kind novel depth and reflectance sensing method using structured light of polarized light, which we refer to as SPIDeRS.
**B**: This is in sharp contrast to regular depth sensing where the normals can only be obtained from the triangulated depth, which is inevitably noisy due to the differentiation.
**C**: This enables joint depth and reflectance sensing with a single set of invisible structured light patterns.
.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 4
|
In our quest to comprehend the bottom-up structure of music, we propose a novel hierarchical part-whole contrastive learning approach named the Music Audio Representation Transformer (MART). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Additionally, we craft a hierarchical contrastive learning objective to align part-whole music representations at adjacent levels, thereby progressively establishing a multi-hierarchy representation space. We validate the effectiveness of our MART approach for music representation learning on both music classification and cover song identification tasks.
2. Approach.
|
**A**: Our work draws large inspiration from Hinton’s conceptual GLOM framework (Hinton, 2023) on representing part-whole hierarchies in a neural network.
**B**: MART facilitates feature interactions among cropped music clips while taking into consideration their part-whole hierarchies.
**C**: Specifically, we propose a part-whole transformer to capture the structural relationships between music clips within a part-whole hierarchy.
|
BAC
|
ABC
|
BAC
|
BAC
|
Selection 4
|
To capture KPIs, we employ the TRACTOR xApp, which retrieves requested KPIs from the gNB every 250 ms over the E2 interface. <|MaskedSetence|> Simultaneously, we record all the available KPIs for offline training. These KPIs are stored in a .csv file and are part of our publicly accessible dataset. <|MaskedSetence|> <|MaskedSetence|> Additionally, we provide a tool for automated IPsec configuration over this E2 interface, as elaborated in [25], which is the first open-source solution for configuring O-RAN compliant IPsec over the E2 interface, facilitating swift O-RAN system deployment and systematic performance analysis.
.
|
**A**: The process is depicted in block C of Fig.
**B**: This xApp uses our ML model for online traffic slice classification.
**C**: 2.
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Algorithm 1 summarizes the details of the main ICD algorithm. The ICD step is a novelty over prior work [22]. The greedy and then ICD masks are obtained for each scan individually (patient and scan adaptive).
Algorithm 1 ICD Mask Optimization
.
|
**A**: The proposed iterative coordinate descent (ICD) scheme then further optimizes the mask iteratively by picking one line at a time in the current mask and moving it to the best new location in terms of the reconstruction error.
**B**: Starting with no sampled lines or only fixed low-frequency lines, at each step of greedy mask optimization, the k-space phase encode or line that gives the lowest reconstruction error is added to a particular mask.
**C**: The algorithm keeps adding lines until the sampling budget is reached.
|
BCA
|
BCA
|
BAC
|
BCA
|
Selection 2
|
To assess the effectiveness of the Attention-KD and SGM modules, quantitative experiments were conducted using the baseline Polyp-PVT with B0 backbone trained on our dataset setting. Initially, KL divergence was implemented, followed by the integration of Attention-KD into the baseline. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This approach also helps address inconsistencies between features of different scales in both the student and teacher models, leading to improved results overall.
|
**A**: Moreover, the incorporation of Attention-KD and SGM into the Knowledge Distillation pipeline allows the student model to focus on the important information from the teacher.
**B**: All student models were tested on the Etis Dataset, and the results are presented in Table 3.
The results demonstrate that training with guidance from the teacher model enhances the performance of the student model.
**C**: Finally, the SGM, representing the full method of the KDAS framework, was added.
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 4
|
<|MaskedSetence|> In particular, one can drop the gradient ∇V of its value function from the aforementioned linear program that is the crux of the framework proposed in Harrison (1996). On the other hand, the solution approach proposed in this paper broadens the class of singular control problems one can solve significantly. As such, it paves the way for establishing the asymptotic optimality of the discrete-review policies in full generality as proposed in Harrison (1996), providing a general resolution of the translation problem that we intend to revisit in future work.
All of the queueing network examples treated in this paper assume independent Poisson arrival streams and exponential service time distributions, but a major virtue of heavy traffic diffusion approximations is that they are distribution free. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: That is, the approximating diffusion model (in our setting, the approximating singular control problem) depends on the distributions of the original queueing network model only through their first two moments.
**B**: Our primary reason for using Poisson arrivals and exponential service time distributions in our examples is to allow exact MDP formulations of the associated queueing control problems, which can then be solved numerically for comparison against our diffusion approximations.
.
**C**: A common feature of the formulations in these papers is that their approximating EWF is one-dimensional, and it admits a pathwise optimal solution.
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 4
|
V-B4 Performance Validation with Measured Signals
The AKS and AUS of all methods with the measured signal dataset D3 are presented in Table III. <|MaskedSetence|> This can be attributed to the generally high SNR, which leads to very clear separation between known classes in the representation space. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The AKS of all methods are above 99% for the five sub-datasets.
**B**: For UCI task, the proposed method, SR2CNN and CGDL exhibit relatively better and more stable AUS performance against other methods.
**C**: Among them, the proposed method achieves the best UCI performance across all five subsets..
|
ABC
|
BCA
|
ABC
|
ABC
|
Selection 3
|
<|MaskedSetence|> Then, as depicted in Fig. 3, the output layer, FC layer, flattened layer and the last convolutional and maxpooling layers were removed. <|MaskedSetence|> The weights of the pre-trained part were made non-trainable, because these should not change during the new training runs. <|MaskedSetence|> A subset of the entire data set (1000 data instances with 80%-20% train-test split) was used for applying GA to make the training processes faster.
.
|
**A**:
At first, the saved pre-trained model was loaded.
**B**: After that, similar untrained layers were added newly to construct the extended model (Fig. 3).
**C**: Next, GA was applied to choose parameters for the untrained part to maximize the classification accuracy.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 3
|
<|MaskedSetence|> Therefore, we formulate the problem aiming at optimizing the overall system performance of the considered RIS-empowered network in terms of system sum rate as per Eq. (7). This would allow us to directly optimize the precoding strategy at the transmitter and the reflection coefficient at the RIS. <|MaskedSetence|>
Hence, we reformulate our problem by looking at SMSE as a means to reduce signal leakage and improve privacy and security in specific locations, while maintaining the connectivity in the other spaces. <|MaskedSetence|>
|
**A**: In our reference scenario, the RIS is used to limit the incoming signal for specific areas or receivers.
**B**: Considering all the receivers, u, placed within the room or area where we would like to guarantee privacy and EMF isolation, we aim at maximizing the overall SMSE for all those receivers.
The received MSE of receiver u in a specific location is given by.
**C**: However, given the complexity of such an expression, we aim at optimizing the sum-mean-square-error (SMSE) over all receivers, known to be related to Eq. (7), as reported in [11].
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> This compromise gives satisfactory results and sufficient heat dissipation by air cooling. If the litz wire winding is not operated near their maximum current rating, e.g. for split-core toroids in filter stages Thieben et al. (2024); Mattingly et al. <|MaskedSetence|> A general study on optimal shapes for air cores and non-air core multilayered toroidal inductors can be found in Ref. Murgatroyd and Eastaugh (2000).
.
|
**A**: However, insufficient heat dissipation from inner layers becomes an issue that in turn increases copper resistance and might damage the litz wire insulation layer.
**B**: (2022), multiple dense layers are an efficient option.
**C**: To this end, we propose a single nearly dense outer layer of turns as shown in Fig. 4 (b), which overlap on the inside near the axis of symmetry.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> Moreover, to close MP1 a great effort has recently been made [4, 10] to relax the PE condition to an exponential stability requirement, which is sufficient [3, p. 327] to ensure robustness (UUB) of the closed-loop adaptive control system in the presence of external perturbations. However, all approaches under consideration do not ensure convergence of the parameter estimation and/or tracking errors to zero even if an arbitrarily small time-dependent non-vanishing perturbation affects the system, but this is vitally necessary to achieve acceptable control quality and identification accuracy in practice (MP2). <|MaskedSetence|> However, such an approach has three main drawbacks: i) the overall closed-loop system or/and system parameterization are complicated due to overparametrization, ii) the internal models can be used for the description of a relatively small class of perturbations (constant, exponentially decaying signals and their combinations), and iii) a priori knowledge of the disturbance type is required to apply the internal model principle.
Generally, the origin of sensitivity of the adaptive control systems to unknown but bounded time-dependent perturbations is rooted in the properties of the adaptive and estimation laws, which are applied to solve the adaptive control and identification problems. However, the above-mentioned approach is hard to tune, provides only an asymptotic rate of convergence even in the perturbation-free case and ensures parameter convergence only if the restrictive PE condition is met and the system regressor is independent of perturbations. Recently proposed robust identifiers with finite/fixed convergence time and relaxed excitation requirement [13, 14] also provide only boundedness of the parametric error in the presence of an external disturbance. So, the main topic of this study is to close this gap and propose a continuous-time online estimation law for linear time-invariant plants, which in comparison with P-LS [12] and other existing estimation laws [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15] has the following novel properties:.
|
**A**:
To solve the first problem, the adaptive control community has proposed many robust modifications (σ, e, projection operator, dead zone, etc.), which ensure required UUB [1, 2, 6, 7, 8, 9] even if the PE condition is not met.
**B**: Considering the adaptive control framework, to cope with the external perturbations, a methodology [3, 11] has been developed that uses the internal model principle to parameterize a perturbation in the form of a linear regression equation with respect to some new unknown parameters with higher dimension.
**C**: Particularly, to the best of authors’ knowledge, considering the continuous time adaptive control literature [1, 2, 3], only pure least-squares (P-LS) estimation law ensures exact asymptotic identification of the unknown parameters in the presence of perturbations, which are independent with the system regressor [12].
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 1
|
<|MaskedSetence|> The joint tracking with the tempo branch generally benefits the performance on beat tracking, as it may serve as a regularization term that helps reach better convergence.
For training, we equally weigh the binary cross entropy loss over beat, downbeat, and tempo to make a combination of them. We use a batch size of 1 to train whole sequences with different lengths. For excessively long audios, we split them into 30-second (1.3k-frame) clips. <|MaskedSetence|> <|MaskedSetence|> Our model has 7.5M trainable parameters and is trained using an NVIDIA GeForce RTX 4090 GPU with fp32 precision. We trained all the models for 50 epochs to reach sufficient convergence and picked the checkpoint with the lowest validation loss as the best model..
|
**A**:
Beat and downbeat annotations are each represented as a 1D binary sequence that indicates beat (1) and non-beat (0) states at each input frame.
**B**: We use dropout with a rate of 0.5 for the tempo branch and 0.1 for other parts of the network.
**C**: We apply RAdam and Lookahead optimizer with an initial learning rate of 1E-3, which is reduced by a factor of 5 whenever the validation loss gets stuck for 6 epochs before being capped at a minimum value of 1E-7.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
Recent works have since pivoted towards directly solving the optimization problem by incorporating a model for the distributional map. <|MaskedSetence|> They show that location scale families can be learned from the system prior to optimization without requiring further interaction with the system during optimization. The work of [lin2023plug] extends this learning framework beyond location scale families and derives regret bounds that incorporate the associated approximation and statistical errors arising from the statistical learning problem. Outside of the distribution learning approach, derivative-free methods have been studied in [miller2021outside, narang2022learning, wood2023stochastic]. <|MaskedSetence|> <|MaskedSetence|> However, their exploration remains limited to this specific model category.
This work complements the technical findings in [lin2023plug] by offering a generalization of the so called “plug-in” approach presented in this work to the multi-agent setting. Relative to [narang2022learning], we focus on solving the Nash equilibrium problem for a general class of models by learning the distributional map prior to optimization—thereby reducing the required number of interaction with the system to receive a reasonable answer.
.
|
**A**: While these methods do not depend on a learned model of the distributional map, they generally exhibit slower convergence rates.
**B**: In [miller2021outside], the authors provide conditions that ensure convexity of the expected cost, alongside a learning-based algorithm for finding solutions to problems with location scale families.
**C**: On the other hand, adaptive learning strategies, particularly for location-scale models, have shown promise, as discussed in [narang2022learning] whereby the model is learned during optimization.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 1
|
The interaction between RSMA and SPC for URLLC has been extensively studied in [34, 35, 36, 37, 38, 39, 40]. However, further investigations are necessary to address existing deficiencies. For example, first, the time-varying channels pose challenges to CSI acquisition, especially in multi-antenna systems. Meanwhile, considering the limited pilot sequences for channel estimation, achieving perfect CSI is unrealistic in URLLC. <|MaskedSetence|> <|MaskedSetence|> Thus, residual interference after SIC operations will further degrade system performance. However, these conditions have not been studied in the aforementioned works. Second, the optimization methods employed in these studies are based on successive convex approximation (SCA) [34, 35, 36, 37, 38, 39, 40], or alternative optimization (AO) [37, 38, 39, 40]. <|MaskedSetence|> Third, in NOMA/RSMA, inter-user interference (IUI) increases significantly with the number of multiplexed users, which severely degrades the efficiency and reliability of systems. Therefore, the way of integrating orthogonal multiple access, i.e., MC systems, has great potential to enable massive access in NOMA/RSMA. To this end, user grouping is an additional problem that needs to be addressed. However, previous studies reported in [37] and [38] only considered MC-RSMA transmission but did not provide any insight into user grouping in MC-RSMA systems.
.
|
**A**: Therefore, imperfect CSI needs to be taken into account.
**B**: This means that the perfect SIC cannot be guaranteed due to channel errors.
**C**: These methods require at least one-layer iteration to obtain suboptimal solutions, resulting in high computational complexity that would affect the latency of URLLC.
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
Determining the optimal inputs in dynamic metabolic engineering is not trivial; for example, the optimal light input profiles in optogenetically controlled systems. <|MaskedSetence|> The application of open-loop model-based optimization often fails to consider system uncertainties, such as disturbances or model-plant mismatch. The concept of metabolic cybergenetics (see Fig. 1), which is the focus of this paper, has emerged as a way to address the limitations of open-loop control by integrating state feedback 17, 20. These systems adopt closed-loop control, such as model predictive control, by iteratively updating and solving the open-loop optimization problem with the current system state 20. Other control configurations are also possible, e.g., by merging external control via open-loop optimization and in-cell feedback encoded by genetic circuits 21.
Figure 1: Scheme of a metabolic cybergenetic control system. Optimal system inputs are obtained through computational methods such as dynamic optimization. These external input signals modulate the expression of metabolic enzymes and thereby metabolic fluxes. Closed-loop control is implemented by incorporating state feedback and re-optimizing the system. <|MaskedSetence|> <|MaskedSetence|> Process data can also be used to capture certain parts of the model using machine learning..
|
**A**: Model-based dynamic optimization has been shown to be a powerful tool in predicting dynamic inputs for optimally steering the expression of manipulatable enzymes in bioprocesses 20.
**B**: The dynamic model can contain prior knowledge such as physics (e.g., metabolic networks) and phenomenological relations (e.g., kinetic functions), which can either be modeled explicitly or indirectly via machine-learning surrogates.
**C**: Optionally, if not all the state information can be measured, soft sensors in the form of state estimators can be employed to reconstruct the system state.
|
ABC
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> However, ML-RET are not yet adopted for production to the best knowledge of the authors.
In this study, we analyze the reasons holding back ML-RET correction from utilization in RET production flows. Based on this analysis, we introduce a novel flow that mitigates those obstructions and offers an end-to-end production-ready platform for ML-RET with very high scalability. <|MaskedSetence|> <|MaskedSetence|> We then introduce TPM-RET, a novel end-to-end production friendly ML flow to model and correct lithography in section 4, and showcase some of its results in section 5. Next, we discuss how TPM-RET flow solves the production difficulties in section 6. Finally, we present our future plans and conclusions in sections 7 and 8 respectively..
|
**A**: The rest of this paper is organized as follows: in section 2, we discuss the traditional way of performing RET correction.
**B**: The computational lithography industry sought Machine Learning (ML) algorithms to accelerate the RET correction and reduce the computational load. (We use the term “machine learning” as a generalized term to refer to any advanced AI techniques, including deep learning, generative networks and similar technologies.)
**C**: Next, we discuss and analyze the state-of-the-art of ML-RET and the roadblocks holding it from being a viable production option in section 3.
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 1
|
In Fig. 6, we show an example with MST++ for the validation on SAM in two situations, one with fixed metamers, and the other with on-the-fly metamers. In each experiment, we compare the SAM difference with and without aberrations. <|MaskedSetence|> <|MaskedSetence|> With chromatic aberrations,
the network can already distinguish fixed metamer pairs, achieving similar accuracy as the standard case. In the more aggressive case of on-the-fly metamers, chromatic aberrations also improve the spectral accuracy, compared with its no-aberration counterpart. Again, this aberration advantage holds for all datasets (Table 8). <|MaskedSetence|>
|
**A**: As a reference, we also show the standard validation without metamers.
**B**: See Supplemental Material for details.
.
**C**: As we can see, the realistic optical aberrations of the lens actually improve the spectral estimation in the presence of metamers as long as the aberrations are modeled in the training.
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 2
|
The above features distinguish CF-MMIMO from other technologies such as small-cell and conventional user-centric network MIMO discussed in the prior section. Note that, when we use the term “cell-free”, we simply refer to the data plane, i.e., there are no fixed cells created by the data transmission protocol in active mode. Numerous specific mechanisms beyond the Physical Layer (PHY) data plane remain undefined or inadequately defined. Consider, for instance, the processes of user clustering or the initial connection and authentication of new users. These procedures may align with the principles of current cellular 5G-NR. Therefore, even in a “cell-free” network, the underlying cellular architecture may persist. In the future, the control plane may also adopt a cell-free approach. However, this presents considerable challenges as it depends on factors that are predominantly standard-dependent rather than purely theoretical/technical.
The authors of [14] demonstrated that CF-MMIMO could provide consistently high-quality service to all users with very simple maximum-ratio (MR) processing. <|MaskedSetence|> We can see that, compared to co-located massive MIMO, cell-free systems can offer a much more uniform connectivity for all users in any location. <|MaskedSetence|> <|MaskedSetence|> This enables multi-stream transmission even in line-of-sight, a unique advantage of CF-MMIMO as compared to co-located massive MIMO. As a result, CF-MMIMO has attracted significant attention and interest within the research community ever since its inception..
|
**A**: This can be verified via Fig. 4 where the rates displayed with scaled colors for cell-free and co-located massive MIMO are presented.
**B**: Moreover, it outperforms the conventional small-cell systems.
**C**: In addition, in LoS propagation environments, CF-MMIMO with multiple-antenna users can offer favorable propagation, but co-located massive MIMO may not.
|
ABC
|
ABC
|
CAB
|
ABC
|
Selection 4
|
The number of proposed DT concepts related to ASVs increases. For instance, [8] introduces a preliminary study for the development of a DT technology using an ASV. However, while data acquisition is performed for subsequent analysis and modeling, the proposal did not implement a real-time connection from the physical system to its DT. <|MaskedSetence|> <|MaskedSetence|> Furthermore, [11] introduces a predictive DT for ASVs by utilizing a nonlinear adaptive observer and exogenous Kalman filters for state and parameter estimations while predicting future states using a discretized state space model. Predictive capabilities might be essential for real operations of a highly advanced DT of an ASV and its environment. However, predictions cover only a small subset of the potential that a DT can enable. Recently, a DT framework for an ASV was developed, described in [12], incorporating aspects such as SITAW using data from cameras, LiDAR, the global navigation satellite system (GNSS), and an inertial measurement unit (IMU). Furthermore, COLAV is approached by applying deep reinforcement learning (DRL), showing overall that a DT is a dynamic and safe concept for the design and development phase of ASVs. <|MaskedSetence|> The study shows the development and concept of a versatile DT by setting the foundation of a three-dimensional (3D) digital environment with the Unity game engine using a real-world 3D elevation map and computer-aided design (CAD) models for the 3D representation of surface vessels. Furthermore, real-time automatic identification system (AIS) data and weather data are gathered to facilitate descriptive and diagnostic capabilities, while implemented nonlinear vessel dynamics define the kinetic behavior of the digital representatives. In addition, DRL is applied for autonomous path following and COLAV, where randomly generated scenarios, including highly dynamic obstacles, set the basis for training..
|
**A**: The proposal of [10] presents the combination of supervised learning and DTs concerning autonomous path planning.
**B**: In addition, in [9], an anti-grounding scheme is demonstrated for ASVs assisted by a DT using an electronic chart display and information system.
**C**: However, the proposal uses a highly simplified reward function to train the deep neural network (DNN), and the training phase considered only one static obstacle, making the proposed DRL control concept questionable in the case of more challenging scenarios.
In [13], the development of a DT framework for an ASV is illustrated, considering the capability scale mentioned earlier.
|
CBA
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> The benchmarks include 35 IMDPs taken from the literature, with the total number of transitions between states ranging from a few tens for the smaller models to tens of millions for the larger models. The empirical analysis shows that IntervalMDP.jl CPU implementation is on average 2–4× faster compared to the state of the art, while the GPU implementation can achieve speed-ups of various orders of magnitude on the larger systems. Furthermore, because of the use of sparse matrices and the Julia type system, in all cases, IntervalMDP.jl requires less memory compared to PRISM and bmdp-tool.
The paper is organized as follows. <|MaskedSetence|> Then, in Section 3, we give an overview of IntervalMDP.jl and describe how IMDPs can be created and stored in the tool and how to perform strategy synthesis and verification. In Section 4, we detail our algorithmic approach to robust value iteration on GPUs. <|MaskedSetence|>
|
**A**: We evaluate IntervalMDP.jl on various benchmarks and compare it with PRISM (Kwiatkowska et al., 2011) and bmdp-tool (Lahijanian, 2021), the only tools available to perform reachability analysis for IMDPs, to the best of our knowledge.
**B**: Finally, in Section 5, we illustrate the effectiveness of IntervalMDP.jl on various benchmarks.
.
**C**: First, in Section 2, we formally introduce IMDPs and robust value iteration for IMDPs.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
The number of input frames for both training and inference matters for recurrent-based networks, especially in turbulence mitigation. Since turbulence degradation is caused by zero-mean stochastic phase distortion, the more frames the network can perceive, the better it can estimate the undistorted state. <|MaskedSetence|> 4. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This is particularly valid for static scene sequences, where the pixel-level turbulence statistics are much easier to track and analyze through time.
We trained two models with 12-frame and 24-frame inputs and presented their respective performance during inference in Fig.
**B**: This figure shows in the temporal range of our experimental setting, a positive correlation between the performance and the number of input frames always exists, especially on the static scene modality where an over 1 dB boost can be obtained with more frames.
**C**: This phenomenon suggests one of the success factors for turbulence mitigation is the capability of fusing more frames, similar to the video super-resolution problem [8].
.
|
ABC
|
ACB
|
ABC
|
ABC
|
Selection 4
|
Impact of FOPPAS and Runtime Analysis. <|MaskedSetence|> With the additional resampling of Repaint [30], it would be several times slower still. However, thanks to our FOPPAS mechanism, we can synthesize arbitrarily long smooth motion sequences in real time. We test the runtime of our method on a one-minute-long audio clip corresponding to 900 frames of expressions and gestures (BEAT uses 15 FPS motion), with the number of overlapping frames in FOPPAS set to 4. <|MaskedSetence|> The time for audio encoding (Mel-Spectrogram and HuBERT) is also included. <|MaskedSetence|> Therefore, our proposed FOPPAS enables the diffusion model to achieve real-time performance in the inference stage, making many downstream products applicable.
.
|
**A**: The average runtime of our method is 28.6 seconds on a single NVIDIA GeForce RTX 3090 GPU, corresponding to around 31.5 FPS.
**B**: In contrast, if we directly use DDPM with Repaint [30], it costs 2068.1s to infer 900 frames using the default setting of repaint, which is 0.44 FPS.
**C**: Diffusion models suffer from the long sampling steps at the test time – the original DDPM model takes 1000 steps to sample a quality image.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 1
|
However, unlike widely studied black-box attacks for classification problems, attacking NR-IQA presents distinct challenges. Firstly, quantifying the success of attacks on regression-based IQA problems is not straightforward. Different from classification, which naturally defines a classification boundary for determining attack success, regression-based IQA lacks a direct measure of attack “success” due to its continuous output. Secondly, identifying the attack direction becomes particularly challenging when the gradient of an IQA model is unavailable. <|MaskedSetence|> In our preliminary experiment where we attacked images for the classification and NR-IQA tasks with Gaussian noise, the misclassification rate reached 92.6% but the quality score only changed by 2.09 on average, with the predicted image quality score within the range of [0, 100]. <|MaskedSetence|> This disparity emphasizes the need for a more thoughtful and delicate design of attack directions in the context of NR-IQA. Thirdly, NR-IQA tasks are more sensitive to image quality variation than classification tasks. <|MaskedSetence|> These intricate challenges underscore the significance of developing tailored black-box attack strategies for NR-IQA methods.
.
|
**A**: Unlike classification tasks, where a small perturbation like Gaussian noise may easily lead to successful attacks, the IQA problem demands a more deliberate design of the attack direction to generate substantial changes in the predicted quality scores.
**B**: So attacking NR-IQA models has a more strict constraint for the perceptual similarity between the adversarial example and its original image, which implies that the perturbation is expected to be imperceptible for humans but could cause misjudgments by NR-IQA models.
**C**: The result shows that the efficiency of this stochastic attack direction dropped dramatically in the context of NR-IQA.
|
ACB
|
ACB
|
ACB
|
CAB
|
Selection 2
|
2.2 Self-Supervised Audio-Visual Representation Learning
In general, there are three kinds of methods for learning generic audio-visual representations through self-supervision, namely contrastive learning, masked data modeling, and the hybrid method. Contrastive learning exploits the natural audio-visual correspondence in videos as a free signal for self-supervision [58, 59, 60]. In contrast, the goal of masked data modeling is to reconstruct the original data from its masked input. Motivated by its recent success in the image and video domain [17, 61, 62], Georgescu et al. <|MaskedSetence|> Recently, a few studies have made attempts to combine the former two kinds of methods, resulting in the hybrid method. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In particular, CAV-MAE [18] integrates MAE and cross-modal contrastive learning and shows that they are complementary.
**B**: [63] extend it to the audio-visual domain and achieve significant improvements over previous methods.
**C**: MAViL [19] further introduces intra-modal contrastive learning and masked self-training on contextualized features to improve audio-visual pre-training.
Although these methods have demonstrated great success in general audio-visual tasks, the learned representations are typically not suitable for AVER because they are trained on general scene or action videos instead of facial videos in AVER..
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> Also, in [16], a unitary AMP (UAMP)-based method was presented, which is more computationally efficient than the ALS- and VAMP-based methods. Noticeably, the AMP-based methods are only applicable in larger systems such that the number of users (denoted by K) is not less than the number of reflective elements (denoted by N) in the RIS. In the example of N = 128, the AMP-based methods can be used only when K ≥ 128. <|MaskedSetence|>
|
**A**:
Besides the CS-based channel estimation methods, approximate message passing (AMP)-based methods were developed in [15, 16], which are based on the parallel factor decomposition to unfold the so-called effective (or cascaded) channels to be estimated.
**B**: This can limit the applicability of the AMP-based channel estimation methods in various RIS-aided applications.
.
**C**: In [15], alternating least squares (ALS) and vector AMP (VAMP) were presented as efficient iterative algorithms.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 1
|
Manual speaker classification and transcription is often a limiting factor in understanding classroom speech. <|MaskedSetence|> Researchers have begun to tackle this problem through automated quantification of selected speech features from classroom audio [21, 22, 20]. We add to this literature with an automated framework for the large-scale analysis of speaker classification and transcription (who said what).
Speaker classification results indicated moderate levels of overall reliability for both teacher and child utterances. Word error rates–a rigorous metric–indicated relatively high levels of transcription accuracy for both teacher and child recordings. <|MaskedSetence|> These analyses used all available audio from both teacher and child recorders. The results suggest promising levels of correspondence on teacher and child MLU, rate of speech, use of questions, and responses to questions. <|MaskedSetence|> For example, our data suggest that Whisper over-estimated lexical alignment between teacher utterances followed by child utterances. This may be due to the tendency of this large language model to “hear” words in child utterances that had been identified in the previous teacher utterances..
|
**A**: However, each of these features must be examined individually.
**B**: For example, expert transcription of the 110 minutes of audio data reported here took approximately 55 hours (5 hours of expert transcription per 10 minutes of audio).
**C**: Comparison of expert and automated processing was not limited to standard reliability metrics such as word error rate, but was extended to the speech features that are the likely substantive areas of analysis for classroom interaction research.
|
BCA
|
BAC
|
BCA
|
BCA
|
Selection 4
|
5 Conclusion
In this study, we proposed the PVTFormer architecture by leveraging a pretrained Pyramid Vision Transformer (PVT v2) as an encoder and incorporating Up block, Decoder block and residual learning for accurate liver segmentation. <|MaskedSetence|> The evaluation against eight existing state-of-the-art methods showcased that PVTFormer achieves excellent results and outperforms its competitors with a high Dice coefficient of 86.78%, a mIoU of 78.46% and a low Hausdorff distance of 3.50. <|MaskedSetence|> In the future, we plan to annotate the test dataset of LiTS with the help of radiologists from our team and perform a more comprehensive study on the multi-center dataset. <|MaskedSetence|>
|
**A**: Moreover, we plan to extend the capabilities of PVTFormer for segmenting multiple organs in abdominal CT scans.
.
**B**: The performance of the PVTFormer demonstrates that it acts as an effective method for precise healthy liver segmentation and can also be translated to other medical applications.
**C**: The hierarchical decoding strategy in decoder blocks enhances semantic features, boosting the quality of output segmentation masks.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 2
|
<|MaskedSetence|> The key idea behind our approach is to treat degradation and recovery as requirement-driven adaptation tasks: Degradation can be thought of as temporarily weakening an original system requirement to be achieved by the system, and recovery as strengthening a previously weakened requirement when the environmental conditions improve. Furthermore, weakening and strengthening can be regarded as dual operations: The former relaxes the set of system behaviors that are deemed acceptable, and the latter restricts it. <|MaskedSetence|> Our proposed approach overcomes the above two challenges, by (i) removing the need to develop an application-specific logic for coordinating degradation and recovery, and (ii) allowing the designer to plug in a different requirement without modifying the underlying logic.
To concretize this approach, we propose a self-adaptation framework that takes system requirements specified in signal temporal logic (STL) (Maler and Nickovic, 2004), a type of temporal logic that is particularly well-suited for specifying time-varying behaviors of CPS (e.g., “The vehicle must never deviate outside the lane for more than 2 seconds”). We show how the weakening and strengthening operations can be formulated formally as the problem of relaxing and strengthening a given STL specification, respectively. <|MaskedSetence|> To support such optimal degradation and recovery, we also describe how the problem of generating minimal weakening and maximal strengthening can be encoded and solved as an instance of mixed-integer linear programming (MILP)..
|
**A**: In addition, it would be desirable to reduce the impact of degradation (i.e., apply minimal weakening necessary) and maximize the rate of recovery (i.e., strengthen the requirement as much as possible).
**B**: Based on this idea, we propose a single, unified requirement-driven adaptation framework that is capable of automatically switching between degradation and recovery, depending on the changes that arise in the environment.
**C**:
In this paper, we propose a self-adaptation approach for improving system resiliency through automated coordination of graceful degradation and recovery.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> Then, type over sections of trans_jour.docx or cut and paste from another document and use markup styles. <|MaskedSetence|> Highlight a section that you want to designate with a certain style, and then select the appropriate name on the style menu. The style will adjust your fonts and line spacing. Do not change the font sizes or line spacing to squeeze more text into a limited number of pages. Use italics for emphasis; do not underline.
To insert images in Word, position the cursor at the insertion point and either use Insert | Picture | From File or copy the image to the Windows clipboard and then Edit | Paste Special | Picture (with “float over text” unchecked).
.
|
**A**: When you open trans_jour.docx, select “Page Layout” from the “View” menu in the menu bar (View Page Layout), (these instructions assume MS 6.0.
**B**: The pull-down style menu is at the left of the Formatting Toolbar at the top of your Word window (for example, the style at this point in the document is “Text”).
**C**: Some versions may have alternate ways to access the same functionalities noted here).
|
ACB
|
ACB
|
ACB
|
ABC
|
Selection 2
|
5 Example and linear filter design
In this section, we demonstrate the proposed stability filter on a lane keeping example for automotive systems. <|MaskedSetence|> <|MaskedSetence|> (2009); Köhler et al. (2018, 2020) for similar design procedures for nonlinear systems. <|MaskedSetence|>
|
**A**: The example is based on a linearization of the vehicle’s dynamics and we therefore discuss a design procedure for linear systems affected by additive disturbances based on Chisci et al.
**B**: (2001).
As Theorem 12 provides a concept that supports more general problem settings, we refer to Limon et al.
**C**: We first present the general design procedure for linear systems with polytopic constraints and then specify the details of the numerical example..
|
BAC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> Specifically, the authors in [15] showcased that UAV-mounted RIS can improve outage performance in dense urban scenarios, even with the dynamic mobility of UAVs. Additionally, [16] explored strategies for the optimal deployment of UAV-mounted RIS in URLLC systems, focusing on scenarios where user fairness is of paramount importance, while [17] proposed a novel system design that leverages ambient backscatter communication in UAV-mounted RIS networks. <|MaskedSetence|> Transitioning to trajectory design, [19] emphasized on three-dimensional (3D) trajectory design for UAVs in urban environments, aiming to optimize the signal-to-noise ratio (SNR) for ground users, while [20] presented a trajectory optimization framework for UAV-mounted RIS focusing on maximizing the network’s secure energy efficiency. Finally, [21] examined the joint optimization of the UAV trajectory design and the RIS design to facilitate the offloading of computational tasks in IoT networks. <|MaskedSetence|>
|
**A**:
As researchers explore ways to optimize wireless communications, the integration of UAVs with RISs emerges as a promising solution to maintain LoS links, especially in propagation environments where the wireless links can often be obstructed [14].
**B**: To this end, it is imperative to consider both communication and trajectory design for realizing the full potential of UAV-mounted RIS in diverse wireless network scenarios.
.
**C**: Interestingly, considering the different electromagnetic functionalities of RIS [10], the authors in [18] provided a rigorous path loss model for the case where a UAV-mounted absorbing metasurface is utilized and validated the findings experimentally in an anechoic chamber.
|
ACB
|
CBA
|
ACB
|
ACB
|
Selection 3
|
4.2 Experiment Settings
We evaluate all experiments using the weighted average F1 score on two class-imbalanced datasets. <|MaskedSetence|> The output dimension of all encoders is unified to 768. <|MaskedSetence|> <|MaskedSetence|> All experiments are conducted on a single NVIDIA GeForce RTX 3090. More details are in Appendix A.2.
.
|
**A**: The optimizer is AdamW and the initial learning rate is 1e-5.
**B**: We use a linear schedule with warmup for the learning rate scheduler.
**C**: We use the initial weight of the pre-trained models from Huggingface’s Transformers (Wolf et al., 2019).
|
BCA
|
CAB
|
CAB
|
CAB
|
Selection 4
|
The first contribution of this paper is to establish a novel stochastic hybrid model for multiple decentralized networks
under the reasonable assumptions and constraints imposed on the stochastic network delays and Poisson pulsing DoS (Pp-DoS) attacks, in view of the stochastic hybrid formalism of [1]. <|MaskedSetence|> <|MaskedSetence|> To address this, we begin by characterizing Pp-DoS attacks, assuming that the cardinal number of attacks follows a Poisson distribution, and the inter-attack time follows an exponential distribution. <|MaskedSetence|> These variables respectively track whether the next event is a transmission or an update event, and whether the next update is successful or not. Additionally, we introduce a time variable and an integer variable, representing the time elapsed since the latest transmission and the total number of transmissions over time. Furthermore, we introduce a memory variable to store certain values related to the medium access protocol at the moment of a transmission, also used to model the update event at an update instant. This approach represents a significant advancement over existing methods, particularly in its ability to handle the intricacies of decentralized event-triggered control strategies and the nuanced dynamics of Pp-DoS attacks. By integrating these variables into a hybrid model, we are able to provide a more comprehensive understanding of the system’s behavior, enabling more effective control strategies and improved resilience against cyber-attacks.
.
|
**A**: The primary objective of this study is to develop a sophisticated model that can effectively accommodate decentralized event-triggered control strategies and stochastic stability analysis.
**B**: Within each network, we introduce a hybrid model featuring two boolean variables.
**C**: The primary challenge lies in accurately representing data transmissions by integrating network delays with Pp-DoS attacks, particularly in capturing the transitions between transmission and update segments due to delays.
|
ACB
|
ACB
|
BAC
|
ACB
|
Selection 4
|
Due to the radiation, various approaches have been explored, which can be primarily categorized into two main categories, either to modify the scanning protocol by reducing the tube voltage and/or tube current [44] or downsample the measured data for CT reconstruction, such as interior CT [36, 51, 60, 70], and sparse-view CT [68, 66, 34, 63, 72, 65].
These approaches aim to achieve dose reduction while maintaining satisfactory image quality. However, employing the first category of approaches for radiation dose reduction may result in the measured CT data being heavily contaminated with noise. <|MaskedSetence|> <|MaskedSetence|> It is worth noting that the regularization method is an effective technique for tackling ill-posed problems by incorporating prior information. <|MaskedSetence|> A noise suppression-guided image filtering reconstruction algorithm has recently been proposed as a solution to address the low signal-to-noise ratio (SNR) problem [31]..
|
**A**: On the other hand, the second category of approaches, aimed at reducing radiation dose, usually introduces additional ill-posedness into the inverse problem.
**B**: The total variation (TV) regularization, as a prominent example, is considered state-of-the-art in handling low-dose (LD) and few-view CT [38].
Moreover, alternative sparsity regularization techniques, e.g., the utilization of wavelet frames [69], have been investigated in CT reconstruction.
**C**: The standard filtered back-projection (FBP) method, when applied to reconstruct CT images using noise-contaminated CT data, suffers from a significant degradation in image quality.
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> The Mel-filterbank captures energy within specific frequency ranges based on the Mel scale. <|MaskedSetence|> The color’s intensity corresponds to the strength of the sound wave at that particular frequency and instant. The plot shows that the frequency content of the sound signal varies over time, which reveals details about the sound’s spectral properties..
|
**A**: It involves segmenting the input signal into short frames, applying window functions to minimize spectral leakage, performing the Fourier transform on each frame to obtain the power spectrum, and applying a Mel-filterbank to create the Mel-spectrogram.
**B**: The resulting Mel-spectrogram visually displays the energy distribution across frequencies and time, providing valuable insights into the spectral characteristics of the cough signal.
The x-axis shows time, and the y-axis shows the frequency spectrum of a Mel-spectrogram in Figure 5.
**C**: Mel Spectrogram:
The Mel-spectrogram represents the power spectrum of a cough signal in the Mel-frequency domain.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 1
|
Here, Conditions 1 and 2 refer to the likelihood, and Conditions 3 and 4 are linked to the iterate update procedure. Conditions 1, 2, and 4 are similar to (8), (6), and (7) in [6], respectively. <|MaskedSetence|> Further, our analysis generalizes the EM convergence result in [6]. Using a similar proof technique, we can extend the other results in [6] to our setting (for example, the results on generalized EM). Furthermore, if the stationary points of the likelihood cost are isolated, the algorithm converges to a single point. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Also, we assume that ℍ is finite, such as any bounded subset of integers or rational numbers.
**B**: Condition 3 is required for the global convergence theorem [18] to hold.
**C**: There are estimation problems where this assumption holds, and we give an example in the next section.
.
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 4
|
II-A (M)LLM-driven framework
The advancements in (M)LLMs have garnered significant attention for exploring new paradigms in autonomous driving across scenario understanding, decision-making, and end-to-end mechanisms. Decision-making methods driven by Large Language Models (LLMs) demonstrate capabilities in overcoming the interpretability and generalization limitations of learning-based systems. <|MaskedSetence|> LanguageMPC [21] utilized the LLM to identify crucial interacting vehicles for action guidance, generating observation matrix, weight matrix, and action bias to align with MPC-based vehicle control. <|MaskedSetence|> GPT-Driver [22] tokenized input texts and combining them with an end-to-end architecture to achieve open-loop decision-making. <|MaskedSetence|>
|
**A**: SurrealDriver [11] leveraged driving experiences from multiple drivers as chain-of-thought prompts, developing a coach agent module for human-like perception and feedback..
**B**: Drive as You Speak [19] integrated LLM’s language and reasoning abilities into autonomous vehicles, achieving interactive design based on human-machine dialogue, which was also validated through real-world experiments [20].
**C**: DiLu [8] designed a closed-loop decision system based on the LLM, introducing memory and reflection models for performance improvement.
|
BCA
|
BCA
|
BCA
|
ACB
|
Selection 1
|
<|MaskedSetence|> A qualitative comparison is given in Figure 3. We observe that Ann-SNORE outperforms other approaches with better distortion and perceptual scores (PSNR, SSIM, LPIPS). Visually, we observe that Ann-SNORE succeeds to restore textures. Other qualitative results are given in Appendix F. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Hence, contrary to the deblurring task, inpainting with Ann-SNORE does not involve any additional computational load with respect to RED methods.
.
**B**:
In Table 2, we compare Ann-SNORE to other methods for the inpainting task with p = 0.5 on CBSD68.
**C**: Note that each method (except DiffPIR) is run with the same number of iterations.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 1
|
Notably, this study evaluated the performance of SDEMG under the specific scenario in which biceps brachii sEMG [7, 10] (Channel 11 in DB2) is introduced as simulation data. <|MaskedSetence|> Consequently, we investigate the performance of SDEMG in this experimental setup along with other methods, as shown in Fig. 3. The analysis demonstrates that the use of SDEMG retains its superiority in most of the cases.
Moreover, compared to our previous study [14], we observed performance degradation of FCN, particularly in feature extraction error metrics. This discrepancy might stem from the reduced length of sEMG segments (from 60s to 5s), which is designed to make the denoising framework more efficient and practical. It is discovered that FCN trained with a smaller segment size tends to introduce more distortion to sEMG signals, increasing the errors in sEMG feature extraction. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Conversely, SDEMG demonstrates the capability to alleviate signal distortion, generating higher-quality sEMG signals.
**B**: These trunk sEMG signals are prone to contamination of ECG at around SNR -10dB according to a previous study [35].
**C**: This highlights the potential of SDEMG to provide improved signal quality for clinical assessment and evaluation.
.
|
BCA
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> This showcases its promising capabilities in modeling long-range dependencies. Techniques like shift windows have further tailored ViT, resulting in the Swin-Transformer [18], which enhances its applicability in dense prediction tasks in computer vision, such as image segmentation and detection [19, 31, 17]. In medical image segmentation, the integration of ViT with U-Net architectures, inspired by traditional CNN designs, has also led to various hybrid and pure ViT-based U-Nets. For instance, TransUNet is the first work to harness the feature learning power of ViT in the encoders of UNet [4]. <|MaskedSetence|> Recent developments in State Space Models (SSMs) [6, 22, 27], especially Structured SSMs (S4) [8], offer a promising solution with their efficient performance in processing long sequences. <|MaskedSetence|> The introduction of the Cross-Scan Module (CSM) in the Visual State Space Model (VMamba) further enhances Mamba’s applicability to computer vision tasks by enabling the traversal of the spatial domain and converting non-causal visual images into ordered patch sequences [16]. Inspired by these capabilities, we propose leveraging Visual Mamba blocks (VSS) within the U-Net architecture to improve long-range dependency modeling in medical image analysis, resulting in Mamba-UNet. The evolution of U-Net with various network blocks and the positioning of our proposed Mamba-UNet are briefly illustrated in Figure 1.
.
|
**A**: The Mamba model enhances S4 with a selective mechanism and hardware optimization, showing superior performance in dense data domains [7].
**B**: Motivated by the success of self-attention mechanisms from natural language processing [26], ViT was the first to utilize a pure multi-head self-attention mechanism for the image recognition task with the state-of-the-art performance [5].
**C**: UNETR combines ViT with UNet for 3D segmentation [9], while Swin-UNet and DCSUnet further explore purely Swin Vision Transformer network blocks with U-Net-based structure [3, 28].
While Transformers excel in capturing long-range dependencies, their high computational cost, due to the quadratic scaling of the self-attention mechanism with input size, poses a challenge, particularly for high-resolution biomedical images [32, 21].
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> The nanomagnet is deposited on a poled piezoelectric film and gate pads are delineated around it such that the line joining the pads passes through the major axis. The two pads are shorted together and a voltage is applied between the shorted pads and the grounded conducting substrate. The substrate is made ‘conducting’ so that the applied gate voltage drops mostly across the piezoelectric layer and not the substrate. <|MaskedSetence|> Reversing the polarity of the gate voltage will reverse the signs of the strains. The dimensions of the nanomagnet and the electrodes, the separation between the nanomagnet edge and the nearest electrode, and the piezoelectric film thickness have to satisfy certain conditions for the biaxial strain generation as described [7], but these conditions are relatively easy to fulfill.
We can build a magnetic tunnel junction (MTJ) on top of the nanomagnet, with the latter acting as the soft layer whose magnetization fluctuates. This will transform the MTJ into a fluctuating resistance that acts as either a BSN (no gate voltage applied to cause strain and lower the energy barrier) or an ASN (gate voltage of the right polarity applied to cause strain that lowers the energy barrier). This is the basis of a reconfigurable stochastic neuron (RSN).
.
|
**A**: One way to strain a nanomagnet electrically is to use the configuration shown is Fig.
**B**: 3.
**C**: If the resulting electric field is parallel to the direction of the poling, then tensile strain will appear along the major axis and compressive along the minor axis of the elliptical nanomagnet.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> Yoon et al., (2023) propose a method for analysing cement paste with a similar set up. Instead of a stereo camera, a depth camera is used to record a point cloud of the cement paste after the slump test. The point cloud is used to extract the diameter, spread height and curvature. These parameters are used as input for a deep learning algorithm to predict yield stress, plastic viscosity, adsorption ratio of superplasticizer and bleeding.
Schack et al., 2023a ; Schack et al., 2023b ; Schack et al., 2023c take this work one step further and use images taken from the spread flow of the concrete and are not only able to derive the spread flow diameter of the concrete but also information on the concrete composition. Therefore the surface roughness of the spread flow cake is analysed in order to derive e.g. the content of the coarse particle contained in the concrete. In Coenen et al., (2024), a method is presented, in which a camera observes the channel flow of the fresh concrete at the outlet of a mixing truck. <|MaskedSetence|> <|MaskedSetence|>
To overcome this drawback, the properties of the fresh concrete have to be predicted before or during the mixing process. There are two main procedures to achieve this goal. One approach is to perform the prediction on the basis of the mix design information of the concrete. The concrete mix design contains the exact content (in kilograms) of the individual materials used to produce the concrete. The type and concentration of the materials have a major impact on the properties of the concrete. Chidiac and Mahmoodzadeh, (2009) summarize the most common models to determine the plastic viscosity based on the mix design. It is shown that the results vary between different models. The most recent methods based on the mix design use machine learning, and in particular deep learning algorithms. Methods like least squares support vector machines (LSSVM) and particle swarm optimization (PSO) (Nguyen et al.,, 2020), extreme learning machines (Kina et al.,, 2021), random forests and XGBoost (Zhang et al.,, 2022; Hosseinzadeh et al.,, 2023) as well as multi layer perceptrons (MLP) (Navarrete et al.,, 2023) are used for this purpose. In Nguyen et al., (2020) and Navarrete et al., (2023) the input information from the mix design is extended with temporal information, representing the time difference between the mixing process and the time at which the properties are to be determined, to take into account the change of properties over time. The mix design contains valuable information for the time-dependent change of the properties (e.g. the additive content). Although these methods achieve promising results, they omit essential information as e.g. possible variations in the properties of the employed constituents, even though there has been progress in this field in recent years, e.g. Coenen et al., (2023); Lux et al., (2023)..
|
**A**: A CNN predicts the fresh concrete properties on the basis of the spatio-temporal flow fields.
The disadvantage of these methods is, that they are applied post-production, meaning that the concrete still has to be discharged and new concrete has to be produced, if significant deviations to the target properties are detected.
**B**: Spatio-temporal flow fields are generated from the recorded images, which contain information about the flow behaviour of the concrete.
**C**: The authors argue that replacing the manual measurement with an imaging system improves the accuracy of the result and reduces the workload.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> Diff-U-Net conditions DDPMs with MRIs at each denoising step to generate the corresponding tumor masks. <|MaskedSetence|> <|MaskedSetence|> Among the 3 variants, we chose the best performing UA-Diffusion approach (Table LABEL:tab:results1) and used it for the remaining experiments. The expression for the best performing variant of UA-Diffusion is shown in equation 2:
.
|
**A**: 2.2.2 U-Net augmented Diffusion (UA-Diffusion)
This model builds upon the Diff-U-Net model proposed by [xing2023diff].
**B**: In comparison, we condition our diffusion model with predictions (U(I)) from baseline U-Net along with MRIs (I) as shown in figure LABEL:fig:fig2.
**C**: We tested 3 variants of this approach (3 different inputs) as explained in section 3.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 4
|
In this paper, we tackle the challenge of advancing Optical Music Recognition beyond monophonic transcription, which has traditionally involved simplifications or ad-hoc adaptations of existing methods. We introduce the Sheet Music Transformer (SMT) model, an autoregressive Transformer-based model that utilizes attention computation and language modeling to transcribe input scores into digital music encodings. We evaluate this approach in two polyphonic music scenarios: pianoform and string quartet scores. <|MaskedSetence|> <|MaskedSetence|> Addressing this issue could involve exploring graphic-based output music encodings [7] or employing additional musicological metrics in order to assess transcription accuracy [25]. <|MaskedSetence|> Finally, we propose that the Universal OMR transcription challenge, a desired goal in OMR research, could be addressed by profiting from the language modeling capabilities of the SMT, thus allowing it to be trained to transcribe music engraved in various manners..
|
**A**: The development of segmentation-free full-page transcription methods is another promising direction, as it is not limited by image features or layout constraints.
**B**: Our results demonstrate that the SMT model not only effectively transcribes complex musical layouts, but also outperforms current state-of-the-art methods, thus implying a significant advance in OMR.
This research also opens avenues for further improvements to OMR.
**C**: Firstly, and as discussed in Section 6, standard OMR evaluation methods may overlook musicological interpretations, leading to pessimistic results.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 1
|
IV-A3
Result
First, the results of the ACC model are depicted in Fig. <|MaskedSetence|> 3, where the IPO method illustrates the optimal curve, whereas reinforcement learning often falls short of achieving such performance. The primary focus is on the final spacing error and acceleration change rate during the tracking process. Using the quadratic reward function leads to a larger steady-state spacing error, as shown in Fig. 2, and the absolute value reward function induces significant peaks in the acceleration change rate throughout the control process, as shown in Fig. <|MaskedSetence|> <|MaskedSetence|> The specific steady-error values and undiscounted episodic rewards (costs) are shown in Table II..
|
**A**: 2 and Fig.
**B**: However, employing PI-based Methods 1 and 2 can reduce the final steady-state spacing error and result in small fluctuations in the acceleration change rate.
**C**: 3.
|
ACB
|
ABC
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> Flamingo [18] is one of the early attempts that insert cross-attention layers into LLMs to introduce visual features into textual space. Meanwhile, to better align multi-modal features, BLIP2 [73] unifies the pre-trained visual encoder with LLM through an ingeniously designed Q-former. After that, InstructBLIP [41] extends BLIP-2 with instruction-following data and obtains better performance. <|MaskedSetence|> For example, LLaVA [77] constructs 158K instruction-following data to conduct the training process and achieves great performance. Building upon the success of LLaVA, several subsequent LVLMs [48, 70, 123] leverage the high-quality 158k multimodal data to facilitate the training process. Furthermore, MiniGPT-4 [128] aligns a frozen visual encoder with a frozen LLM via only one projection layer. To better fine-tune the model, MiniGPT-4 utilizes 3500 detailed image-description pairs, illustrating that even a relatively small amount of high-quality data can significantly enhance the training of LVLMs. <|MaskedSetence|>
|
**A**: Motivated by this success, most LVLMs are built through the instruction-tuning pipeline.
**B**: Additionally, VPGTrans [125] transfers the text encoder of BLIP2 model to Vicuna, which reduces the training costs and maintains the convincing performance..
**C**:
2.1 Large Vision-Language Models
Based on the recent emergence of Large Language Models (LLMs) such as LLaMA [111] and GPT [90], LVLMs utilize the knowledge from LLMs and align visual features to the textual space for various text output.
|
CAB
|
CAB
|
CAB
|
CBA
|
Selection 1
|
The concept of a distributed system is foundational in informatics, characterizing the architecture of both physical and abstract infrastructures that are an integral part of modern life. <|MaskedSetence|> One common approach is to give ‘resource semantics’ of logics; that is, interpretations of logical structures and relations in terms of system concepts. <|MaskedSetence|> <|MaskedSetence|> This underscores the paper’s thesis, as articulated in Section 3.3, asserting inferentialism as an intuitive and useful framework for logical systems modelling..
|
**A**: Furthermore, it offers a consistent interpretation of the inferentialist perspective across these logics, specifically through their base-extension semantics that uniformly recovers their established resource readings.
**B**: Notable examples include the number-of-uses interpretation for linear
logics and the sharing/separation interpretation for bunched logics.
Despite the distinct nature of these resource semantics, this paper presents a unified definition of ‘resource semantics’ that encompasses them all.
**C**: Logic serves as a vital tool for representing, understanding, and reasoning about such systems.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
|
Table 4 compares the complexity of different OTFS detectors in terms of the required number of floating-point operations (FLOPs). The required number of FLOPs is counted by the fvcore library available at https://github.com/facebookresearch/fvcore, while those of the unsupported operations, e.g. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Thanks to the aid of AMP, AMP-GNN-v2 achieves pronounced BER improvements over the GNN-based detector. With IDI approximation, the AMP-GNN-based detector shows much lower complexity than that of AMP-GNN-v2 at negligible loss of BER performance, which demonstrates its benefits of cost-effective OTFS data detection. It is noteworthy that when P = 4, the GNN-based detector incurs shorter runtime than the MP-based detector despite requiring slightly more FLOPs owing to the GPU acceleration for neural networks.
.
|
**A**: sparse matrix operations, are counted manually.
**B**: and runtime.
**C**: It is observed that the AMP-based detector has the lowest complexity but worst BER performance.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 2
|
There has been a significant shift from traditional, hand-crafted audio features such as MFCCs to the use of raw audio waveforms and spectrogram representations as inputs for neural networks. <|MaskedSetence|> <|MaskedSetence|> These often involve segmenting the audio input with some window length before converting it into an embedding compatible with the models, rather than producing spectrograms. <|MaskedSetence|> We do not focus on this class of models for our analysis..
|
**A**: Wyse (2017) showed the advantages of spectrogram representations for deep neural networks, particularly their ability to capture both time and frequency information, which is crucial for effectively modeling and generating complex audio signals.
**B**: However, this class of models has seemed to work well primarily for generative applications (Gardner et al., 2021).
**C**: Furthermore, multiple works have employed representations based on spectrograms, coupled with convolutional neural networks, and have shown that these work particularly well together (Hershey et al., 2017; Schmid et al., 2023; Gong et al., 2021b).
A class of models is also based on directly processing raw audio (Verma and Berger, 2021).
|
BCA
|
ACB
|
ACB
|
ACB
|
Selection 4
|
This temporary sine tone at the shifted frequency is used in GSM for initial frequency synchronization and coarse time slot synchronization.
For the all-one codeword, the short-term spectrum looks very similar. <|MaskedSetence|> This is because GSM uses differential precoding. <|MaskedSetence|> <|MaskedSetence|> However, the waveform is phase-shifted by π.
.
|
**A**: Thus, a sequence of 148 1-bits is first converted into a sequence of at least 147 0-symbols before it is fed into the modulator.
**B**: The frequency shift is not towards lower frequencies, but also upwards.
**C**: For a duration of at least 142 symbols (GMSK modulation contains memory of up to 5 symbols), the instantaneous frequency is identical.
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> We employed a combination of Librilight’s small and medium collections (Kahn et al., 2020), speech segments from DNS Challenge 4 (Dubey et al., 2022), the Common Voice dataset (version 16.0) (Ardila et al., 2019), the LibriTTS (Zen et al., 2019) training set, and 20,000 hours of internal Chinese data as the integrated training dataset. <|MaskedSetence|> Additionally, we performed tests on the LJSpeech dataset to simulate out-of-domain scenarios. <|MaskedSetence|> Inference testing was carried out on the LibriSpeech (Panayotov et al., 2015) Test-Clean sets; following VALL-E (Wang et al., 2023) and MobileSpeech (Ji et al., 2024), we filtered audio samples of 4-10 seconds from the LibriSpeech Test-Clean sets.
.
|
**A**: To ensure a fair comparison of codec models’ performance, we conducted inference testing on the LibriTTS (Zen et al., 2019) Test-Clean and Test-Other sets to evaluate codecs’ restoration effectiveness in common and noisy environments respectively.
**B**: Datasets
Language-Codec was trained on a comprehensive 50,000-hour speech dataset.
**C**: For downstream speech language models, we utilized the LibriTTS training set to train zero-shot text-to-speech models.
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 3
|
In contrast to the above-mentioned recent results, we focus on real-time event identification using PMU data and physics-based modal decomposition methods along with interpretable ML models. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In the white box setup, we assume that the attacker has full knowledge of the classification framework including the classification model (i.e., knows both (i) and (ii) detailed above), and can only tamper with a subset of PMUs. On the other hand, for the gray box setup, we assume that the attacker does not know the ML classifier used by the system operator or the data that was used for training; however, the attacker has knowledge of the aspect (i) of the framework, has access to historical data from the same network, and can tamper with a subset of PMUs. In either setting, the attack algorithm perturbs event features in the direction of the classifier’s gradient until the event is incorrectly classified. Using detailed event-inclusive PSS/E generated synthetic data for the 500-bus South Carolina system, we show that both types of attacks can significantly reduce the accuracy of the event classification framework presented in [7].
.
|
**A**: Our event identification framework leverages the approach in [7] and involves two steps: (i) extract features using physics-based modal decomposition methods; (ii) use such features to learn logistic regression (LR) and gradient boosting (GB) models for event classification.
**B**: Our primary goal is to design an algorithmic approach that generates adversarial examples to evaluate the robustness of this physics-based event classification framework.
**C**: We evaluate our attack algorithm in two distinct settings: white box and gray box.
|
BAC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> However, the high computational complexity of the ML approach makes it unfeasible. <|MaskedSetence|> The use of a two-dimensional Fourier transform across the transmitting and receiving antenna arrays of each radar pair is a common approximation for the ML solution. However, this approximation yields inaccurate results when the number of antennas is limited. <|MaskedSetence|> However, there is no framework to combine the outputs of the MUSIC algorithm from multiple radar pairs.
I-A Major Contributions.
|
**A**: In comparison, subspace-based methods such as the multiple signal classification (MUSIC) algorithm provide better accuracy than the Fourier transform and lower complexity than the ML approach as a two-dimensional search is possible [7].
**B**: In practice, the brute force estimation requires a 2K-dimensional search over the (x, y) position of the K targets.
**C**: The maximum likelihood (ML) framework presents an optimal solution for deriving a data fusion combination rule.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> (2018); Changbin et al. (2015); Zhao et al. (2014); Meuleners and Fraser (2015); Schlüter et al. (2021).
Research involving instrumented vehicles or simulators can make participants conscious of their involvement in an experiment, potentially introducing biases compared to real-world driving Carsten et al. (2013); Shrestha et al. <|MaskedSetence|>
|
**A**: (2017)..
**B**: Some drivers perceive situations within a simulator as less hazardous in contrast to real-world situations and drive more aggressively Corcoba et al.
**C**: (2021), reasoning that no one will get injured Helman and Reed (2015).
However, there is substantiated evidence that a driving simulator serves as a valid tool for analyzing driving behavior van Huysduynen et al.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 4
|
<|MaskedSetence|> The model comprises six convolutional layers with corresponding average pooling layers, and each convolutional layer is followed by a Rectified Linear Unit (ReLU) activation function. To prevent overfitting, dropout layers, which discard 50% of the connections, are introduced within the network structure. <|MaskedSetence|> <|MaskedSetence|> Its system architecture is illustrated in Fig. 4.
.
|
**A**: In particular, the structure of its last layer is dynamically adjusted according to the requirements of the blind room parameter estimation task to meet the performance needs of different tasks.
**B**: 4.1 Convolutional neural network
In this section, a model based on a CNN following the “+phase” model in [21] is introduced, for processing two-dimensional feature blocks extracted from audio data.
**C**: Taking the estimation of room volume parameters as an example, the final output layer is a fully connected layer, mapping the output dimension to downstream tasks.
|
BCA
|
BAC
|
BCA
|
BCA
|
Selection 4
|
Other works [20, 21, 22, 23] modeled the distribution of satellite locations using a PPP, i.e., the number of nodes is randomly determined according to the Poisson distribution. While it is conventionally applied in an infinite two-dimensional area, the recent finding has demonstrated that PPPs can effectively characterize node distribution even in finite space. In particular, the authors in [20] showed that a PPP could effectively capture the actual Starlink constellation in terms of the number of visible satellites, and derived the coverage probability. <|MaskedSetence|> The work [22] dealt the problem that determines the optimal satellite altitude to maximize the downlink coverage probability. <|MaskedSetence|> Furthermore, the other works [24, 25] analyzed satellite networks considering orbit geometry. In [24], a PPP was employed to model the distribution of satellites in orbits. <|MaskedSetence|> From these works, we conclude that point processes can successfully model actual satellite constellations, and analytical results based on stochastic geometry can effectively estimate the actual performance. However, these performance analyses have been conducted solely under the scenario where a single satellite serves a user.
The proliferation of satellites underscores the necessity for a theoretical understanding of system parameters to design networks effectively. In light of this, our stochastic geometry-based analyses shift the focus from scenarios where a single satellite serves to those involving multiple satellites, examining the impact of system parameters on network performance..
|
**A**: In [21], the coverage probability was derived based on the contact distance distribution.
**B**: In [23], a non-homogeneous PPP was used to model the varying satellite density according to latitudes, and the coverage probability and average achievable data rate were derived.
**C**: To capture the geometric characteristics of satellites with orbits that may vary in altitude, a framework using a Cox point process was suggested, and the outage probability was investigated in [25].
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
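To make the PPP modeling in the excerpt above concrete, the sketch below draws the number of satellites on an orbital shell from a Poisson distribution, scatters them uniformly over the shell, and counts those visible above a user's local horizon. The shell altitude and density are illustrative assumptions and do not come from [20]-[25].

```python
# Sketch: homogeneous PPP on an orbital shell and visible-satellite count (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
R_EARTH, ALT = 6371.0, 550.0                 # km; altitude roughly LEO-like (assumption)
R_ORBIT = R_EARTH + ALT
DENSITY = 7e-6                               # satellites per km^2 of shell area (assumption)

# The number of points of a homogeneous PPP on the shell is Poisson(density * area).
n_sats = rng.poisson(DENSITY * 4.0 * np.pi * R_ORBIT**2)

# Uniform points on the sphere via normalized Gaussian vectors.
xyz = rng.normal(size=(n_sats, 3))
xyz *= R_ORBIT / np.linalg.norm(xyz, axis=1, keepdims=True)

# Place the user at the north pole of the Earth sphere.
user = np.array([0.0, 0.0, R_EARTH])

# A satellite is visible if the user->satellite vector has a positive component
# along the local zenith direction, i.e. it sits above the local horizon.
to_sat = xyz - user
zenith = user / np.linalg.norm(user)
visible = (to_sat @ zenith) > 0.0

print(f"{n_sats} satellites on the shell, {int(visible.sum())} visible above the horizon")
```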
where J stands for the number of items. [Item name] is the index of the item and is not directly used for image reconstruction, so it is only represented by no more than three words. [Item detail] includes the shape, color, status, or other attributes. <|MaskedSetence|> <|MaskedSetence|> Past GIC [31] shows that as the number of description words increases, the compression performance gradually improves and reaches a maximum at about 50 words. Considering the data scale of the text format, for a 512×512 image, a word usually occupies 1∼2×10⁻⁴ bpp. <|MaskedSetence|>
|
**A**: [Detail all] is a description of the image as a whole.
**B**: To avoid unnecessary overhead, we set 50 as the benchmark.
Assuming in the semantic domain, a few items take up most of the information in the image (just as the low frequencies take up most of the energy),.
**C**: Considering that the description of an object in existing visual question answering [46] tasks usually does not exceed 10 words, we set it as an upper limit.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 1
|
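The per-word rate quoted above is easy to sanity-check: at roughly 6 bytes per word (an assumption covering the word plus a separating space), one word costs 48 bits, and 48/(512×512) ≈ 1.8×10⁻⁴ bpp, which falls inside the quoted 1∼2×10⁻⁴ range; a 50-word description then costs roughly 9×10⁻³ bpp. A small check, with the bytes-per-word figure as the only assumption:

```python
# Back-of-the-envelope check of the "1-2e-4 bpp per word" figure (assumed 6 bytes/word).
BYTES_PER_WORD = 6                  # average English word plus a separating space (assumption)
H = W = 512

bpp_per_word = BYTES_PER_WORD * 8 / (H * W)
bpp_50_words = 50 * bpp_per_word    # the 50-word benchmark mentioned above

print(f"per word : {bpp_per_word:.2e} bpp")   # ~1.8e-04
print(f"50 words : {bpp_50_words:.2e} bpp")   # ~9.2e-03
```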
2.2 Image-only process module
In the domain of medical image classification, the amalgamation of both global and local features within an image is instrumental to the successful categorization of 3D medical images. Conventional network architectures often have a restricted receptive field of the convolutional kernel. Consequently, we crafted our image-only model based on the Multi-View Coupled Self-Attention (MVCS) module proposed by Zhou et al.[10], the architecture of which is illustrated in Figure 1. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: These dual attention mechanisms can capture both global and local information of the CT image across three dimensions, systematically modeling the correlations among the space, channel, and dimension.
**B**: Figure 2 displays the specific structure of the MVCS block.
.
**C**: Specifically, the MVCS Block incorporates spatial and dimensional attention mechanisms into the 3D ResNet [11].
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 3
|
In the main paper, we evaluate the effectiveness of VoCo on 3D medical images. To further verify its performance on 2D medical images, we also conduct experiments on the NIH ChestX-ray [55] datasets. We follow the consistent settings of previous works [71, 72], i.e., pre-train on NIH ChestX-ray and fine-tune on NIH ChestX-ray. Specifically, for fair comparisons with [71, 72], 60% of data are used for pre-training and the remaining is used for finetuning. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: As shown in Table 11, VoCo can also achieve competitive results on the 2D medical dataset.
**B**: 3D-UNet [47] is used for experiments.
**C**: We conclude that although the 2D images contain less information than 3D scans, the position priors still exist, which benefits the training of VoCo.
.
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> In the agricultural robotics domain, another study proposes a virtual reality (VR) and Kinect-based immersive teleoperation system for navigating unstructured agricultural environments, utilizing real-time 3D reconstruction algorithms to create realistic virtual environments [13]. Additionally, digital twin technology has gained traction in engineering communities, with a study presenting an AI-powered framework for efficient communication and reconstruction of large-scale digital twins using 3D point cloud data [14]. <|MaskedSetence|> A more resource-heavyweight approach presented in another study leverages signal-to-noise ratio (SNR) measurements from low earth orbit (LEO) communication satellites for real-time 3D city map reconstruction, offering a novel solution to overcome the limitations of traditional passive sensors and enable global-scale mapping [16]. However, despite these advancements, challenges persist in achieving cost-effective, lightweight and scalable 3D map reconstruction for urban environments..
|
**A**:
II Related works
In recent years, a diverse range of technologies has been used for achieving environment reconstruction, addressing various challenges and opportunities in urban planning, agriculture, engineering, and robotics domains.
**B**: Furthermore, a decentralized framework is proposed for collaborative 3D mapping of outdoor areas using mobile ground robots, demonstrating the reliability and efficiency of real-time 3D LiDAR measurements and peer-to-peer communication strategies [15].
**C**: One notable study focuses on the digital transformation of urban planning processes, emphasizing the use of 3D spatial data and models to create a city digital twin, enabling more illustrative and comprehensible representations of urban environments [12].
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 1
|
<|MaskedSetence|> These experiments revealed that waves on the surface of a descending thin liquid film exhibit strong non-linearity, manifesting the emergence of saturated waves from small perturbations in amplitude, as well as the presence of solitary waves. For low Reynolds numbers (Re < 300), it was noted that the wavelength of the non-linear waves greatly exceeded the thickness of the film, allowing for potential simplifications in their physical modeling (referred to as the long-wave regime). <|MaskedSetence|> <|MaskedSetence|> The flow rate q and the fluid height h are simultaneously evolved using the following set of equations:
.
|
**A**: Various physical models were proposed, among them the Shkadov model, which was introduced in 1967 [19].
**B**: The initial investigation into vertically falling fluid films was conducted by Kapitza & Kapitza [18], sparking extensive experimental exploration in subsequent decades.
**C**: Despite being shown to be inconsistent [20], the model exhibits intriguing spatio-temporal dynamics while remaining computationally affordable.
|
BAC
|
BAC
|
ABC
|
BAC
|
Selection 1
|
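The equations referred to at the end of the excerpt above are not reproduced in the row. For context, the Shkadov model couples mass conservation with an averaged momentum balance; one commonly quoted nondimensional form (coefficients depend on the chosen scaling, so treat this as indicative rather than as the exact system used in the source) is:

```latex
% Indicative nondimensional form of the Shkadov model (scaling-dependent coefficients).
\begin{align}
  \partial_t h + \partial_x q &= 0, \\
  \delta\,\partial_t q + \tfrac{6}{5}\,\delta\,\partial_x\!\left(\frac{q^2}{h}\right)
    &= h\left(1 + \partial_{xxx} h\right) - \frac{q}{h^2},
\end{align}
% h: film thickness, q: local flow rate, delta: reduced Reynolds number.
```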
<|MaskedSetence|> We propose a two-stage method that sequentially solves the decision-making and vehicle control problems, both of which are addressed by combining MPC and LSTM with convolutional social pooling (CS-LSTM) for trajectory prediction. Through this holistic control strategy, an autonomous vehicle can dynamically adapt to the rapidly changing conditions of the road environment, ensuring both the safety and efficiency of lane-change maneuvers.
The paper is organized in the following manner. Section II describes the lane-change decision method based on MPC. <|MaskedSetence|> <|MaskedSetence|>
|
**A**:
A controlling method combining lane-change decision and control, inspired by the previous work [tajeddin2019ecological], is presented here.
**B**: Section III details the establishment of a dynamic bicycle model and the control of the vehicle based on MPC.
**C**: Section IV presents the simulation results and analysis..
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Canon Dual-pixel RAW data capture. We capture images using the Canon camera, under the DP-RAW setting with the lowest possible ISO setting of 100. We process the raw .CR2 files using Adobe DNG converter to convert them to the DNG format. <|MaskedSetence|> <|MaskedSetence|> The 14-bit RAW data is appropriately scaled and the black-level is subtracted to get the left, right DP images.
Scene capture details. For a given scene, we capture a naive (no code) DP measurement, a CADS measurement, and also an f/22 measurement to obtain the ground truth deblurred all-in-focus image of the scene. <|MaskedSetence|> We pre-calibrate the Intel RealSense sensor with our Canon DSLR sensor with the help of a 10x12 checkerboard pattern, so as to transform the depth map into the DSLR’s frame of reference. We crop the captured DP measurements and only reconstruct the central 1696x1522x3 region. We show a few more example results in Fig. 13..
|
**A**: For simplicity we do not perform demosaicking, thus we obtain RGB DP captures of size 3440x2272x3.
**B**: We extract the combined (L+R) and left (L) images from the DNG files using the Tiff() capabilities in MATLAB.
**C**: Furthermore, we also capture a coarse ground truth depth map using an Intel RealSense D415 stereo sensor (see Fig. 11).
|
BAC
|
BAC
|
ABC
|
BAC
|
Selection 2
|
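The capture pipeline above ends with black-level subtraction and scaling of the 14-bit dual-pixel data, with the right view obtained from the combined (L+R) and left (L) planes. Below is a minimal numpy sketch of that step; the black-level value, array sizes, and the `normalize`/`split_dp` helpers are hypothetical, and decoding of the DNG files (done with MATLAB's Tiff() in the source) is assumed to have happened already.

```python
# Sketch of DP post-processing: black-level subtraction, scaling, and L/R separation.
import numpy as np

BLACK_LEVEL = 2048        # placeholder; the actual value comes from the DNG metadata
WHITE_LEVEL = 2**14 - 1   # 14-bit RAW

def normalize(raw):
    """Subtract the black level and scale the 14-bit data to [0, 1]."""
    raw = raw.astype(np.float32)
    return np.clip((raw - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL), 0.0, 1.0)

def split_dp(combined, left):
    """Recover the right DP view as (L+R) - L, then normalize both views."""
    right = combined.astype(np.float32) - left.astype(np.float32) + BLACK_LEVEL
    return normalize(left), normalize(right)

# Example with dummy arrays standing in for the decoded DNG planes.
combined = np.random.randint(BLACK_LEVEL, WHITE_LEVEL, size=(2272, 3440), dtype=np.uint16)
left = (combined * 0.5).astype(np.uint16)
L, R = split_dp(combined, left)
```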
Talkative Power Communication (TPC) has emerged as an innovative solution [10, 11] for co-transfer of power and information, that transmits messages through power lines by encoding and superimposing on the respective bus voltages.
Various digital modulation techniques, such as amplitude-shift keying (ASK), frequency-shift keying (FSK), and phase-shift keying (PSK), are utilized for overlaying data onto a reference signal [12]. An alternative approach involves modifying the carrier during the modulation process [13]. TPC modulations can also be implemented entirely in the digital domain [14].
However, in AC systems, these signals cannot traverse transformers due to the absence of a zero-sequence path. Similarly, in DC systems, they are obstructed by solid-state medium-frequency transformers (SSTs), as an SST inherently functions as a DC-DC converter that decouples the input and output. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This sets up the foundation for innovations in new co-transfer technologies beyond the state-of-the-art..
|
**A**: This constraint restricts the application of TPC in microgrids with different voltage levels, which thereby encounters significant technical difficulties when scaling up to high voltage (HV) levels and number of converters [15].
**B**: Apart from the power inefficiency incurred by TPC, its scalability and evolution beyond the request-receive communication protocol still remain a big question in MGs.
**C**: The encoded signals can also be attenuated by the input capacitor of solid-state transformers (SSTs).
|
CAB
|
BAC
|
CAB
|
CAB
|
Selection 3
|
II-A The TDOA-Based Positioning Protocol
Consider 2-dimensional positioning; 3-dimensional positioning can be derived in a similar fashion. Assume that transmitters A, B and C are three separate anchor points, which send signals to receiver R, as shown in Fig. <|MaskedSetence|> The three transmitters are aligned via a unified clock in the form of pulse per second (PPS) from a satellite timing system. <|MaskedSetence|> <|MaskedSetence|> Based on the synchronization of pilots from the three transmitters, the receiver can obtain the time difference of arrival, from which the receiver location can be estimated..
|
**A**: 1.
**B**: The time-division transmission is controlled by a 10 MHz square wave generated by an atomic clock driven by the PPS signal.
**C**: The transmitters send synchronization signals to the receiver under a unified clock in a time-division manner at a fixed time interval T.
|
ACB
|
CAB
|
ACB
|
ACB
|
Selection 1
|
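To illustrate the final step described above, the snippet below recovers a 2-D receiver position from the two arrival-time differences measured relative to anchor A using nonlinear least squares; the anchor layout, true position, and noise-free measurements are made-up test values rather than anything from the source protocol.

```python
# Sketch of 2-D TDOA localization from three anchors (illustrative geometry).
import numpy as np
from scipy.optimize import least_squares

C = 3e8                                                          # propagation speed (m/s)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])     # A, B, C (made-up, metres)
true_pos = np.array([35.0, 60.0])

# TDOAs of B and C relative to A (noise-free here, for clarity).
dist = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = (dist[1:] - dist[0]) / C

def residuals(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - tdoa * C          # compare range differences, in metres

est = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated position:", est)             # should be close to (35, 60)
```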
<|MaskedSetence|> Korhonen [13] proposed a two-level approach, extracting low-complexity features from each spatially downsampled frame and high-complexity features from key frames at the actual spatial resolution.
An alternative approach is to work with spatiotemporal chips [6], which are spatial-time localized cuts of the full-size video in local motion flow directions. <|MaskedSetence|> <|MaskedSetence|> Moreover, due to the small scale of video quality datasets, many BVQA methods that utilize convolutional neural networks (CNNs) commonly depend on pretrained models from object recognition tasks, which expect small and fixed-size inputs. Consequently, videos need to be spatially resized [46, 43, 37] and/or cropped [47, 43, 44, 45], and temporally subsampled [46, 37, 43]..
|
**A**: Nevertheless, operating on the full-size video sequence is computationally daunting.
**B**: There are few attempts [17, 16] to assess full-size videos, which come with significant computational demands, especially for videos of high resolutions and frame rates.
**C**: In general, knowledge-driven BVQA models perform marginally due to the limited expressiveness of manually crafted features.
The computational issues faced by deep learning-based data-driven BVQA methods are more pronounced.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> They demonstrate the potential of combining resources across testbeds to overcome limitations such as waiting times and experiment size constraints. <|MaskedSetence|> However, due to limited resources within these testbeds, there are often extended waiting periods for experiment approval, and the size of experiments becomes restricted. To overcome these limitations, inter-testbed experiments can be conducted such as in the case of COSMOS [10] and POWDER-RENEW [11]. <|MaskedSetence|>
|
**A**: By leveraging resources from multiple testbeds, researchers can create a more extensive and collective resource pool.
.
**B**: COSMOS draws inspiration from networking and wireless testbeds such as GENI [6], Emulab [7], PlanetLab [14], OneLab [15], CloudLab [16], and notably, ORBIT [9].
The purpose behind the accessibility of the testbeds is to facilitate the testing of prototypes without requiring the researchers to invest in a dedicated testing infrastructure.
**C**: COSMOS [10], and POWDER-RENEW [11] focus on ultra-high bandwidth and low latency, and massive MIMO technologies, respectively.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 1
|
Since the existing listener head datasets do not contain the full data required for training our model, we conduct additional data annotations on two datasets: (1) ViCo [46], a popular dataset in listener head generation; (2) RealTalk, proposed by [12], which serves as a database for retrieving listener videos in [12]. Firstly, to learn the relationships between the text prior and listener expressions, we follow the text annotation pipeline in TalkCLIP [24], and conduct text annotation on ViCo [46] and RealTalk [12]. Specifically, we employ [6] to acquire emotion labels, activated AUs as well as their intensities. <|MaskedSetence|> Finally, we generate diverse text prior descriptions for each video segment according to the format: [A person <EMOTION> and listens with <AU> (and <HEAD MOTION>)], where <EMOTION> denotes the emotional labels, <AU> denotes fine-grained expression-related texts based on AU activation intensities and the Facial Action Coding System [10], and <HEAD MOTION> indicates head nodding/shaking labels. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Detailed examples can be seen in Appendix A.
.
**B**: Additionally, to learn realistic head movements such as nodding to convey agreement and shaking the head to signify disagreement, we use Hopenet [34] to detect head motions.
**C**: We use “and” to link when there is more than one activated AU.
|
BCA
|
BAC
|
BCA
|
BCA
|
Selection 4
|
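The annotation format in the row above is essentially a templated string assembled from the detected emotion, the AU-derived phrases (joined with "and"), and an optional head-motion label. A tiny illustrative helper, with a hypothetical function name and made-up AU wording, could look like:

```python
# Hypothetical helper assembling the text prior described above (names and wording are illustrative).
def build_text_prior(emotion, au_texts, head_motion=None):
    au_part = " and ".join(au_texts)              # "and" links multiple activated AUs
    motion_part = f" and {head_motion}" if head_motion else ""
    return f"A person {emotion} and listens with {au_part}{motion_part}"

print(build_text_prior("smiles happily",
                       ["slightly raised cheeks", "tightened eyelids"],
                       head_motion="nods his head"))
# -> A person smiles happily and listens with slightly raised cheeks and tightened eyelids and nods his head
```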
<|MaskedSetence|> <|MaskedSetence|> This is useful in the online setting for CAS, but also for interaction data generation to be used by learning-based CAS algorithms. Training of Reinforcement Learning (RL) agents with the Gymnasium framework typically requires a reset of the environment after each terminated or truncated episode. In this context, RRT algorithms can be used for rapidly generating target ship trajectory scenarios after a reset. As an example, PQ-RRT* will, in general, produce more path-optimal solutions than RRT and RRT*, and can thus be used for spawning target ship behavior scenarios with minimal maneuvering, whereas the RRT* or RRT variants can be used for spawning gradually unpredictable maneuvering target ship behaviors that can be considered as outliers. Thus, employing multiple RRT algorithm variants for ship scenario generation can improve the ability of learning-based CAS to generalize, and also enlarge the test coverage in the context of simulation-based CAS verification.
Note that, in the scenario-generation context the RRT cost function can be designed to e.g. <|MaskedSetence|> Furthermore, in the maritime context, COLREG can be misinterpreted and lead to ambiguous and therefore often dangerous situations (Chauvin and Lardjane, 2008). Thus, RRTs could be guided towards edge case situations in COLREG where the applicable situation rule(s) are easy to misinterpret. These considerations are a topic for future work.
.
|
**A**: minimize time to collision or converge towards near misses between the random vessel and the own-ship that runs the CAS to be tested (Tuncali and Fainekos, 2019).
**B**: After being built, the resulting RRT variant can be queried efficiently through R-tree spatial nearest neighbor search (Guttman, 1984), and used to rapidly sample random ship trajectories for intention-aware CAS or simulation-based testing of CAS.
**C**: The first case is a random vessel trajectory generation scenario where the goal is to generate multiple random trajectories from a given start position.
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 2
|
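One way to read the cost-function remark in the row above: when RRTs are used to spawn adversarial target-ship scenarios, the node cost can penalize a large closest point of approach (CPA) between the sampled target-ship trajectory and the own-ship running the CAS under test, pulling the tree toward near-miss encounters. The sketch below computes such a CPA-based cost for constant-course, constant-speed segments; it is an illustrative formulation, not the cost used by Tuncali and Fainekos (2019).

```python
# Illustrative near-miss cost for scenario generation: smaller CPA distance => lower cost.
import numpy as np

def cpa_cost(p_target, v_target, p_own, v_own, horizon=600.0):
    """Distance at the closest point of approach within `horizon` seconds,
    assuming both ships keep constant course and speed."""
    dp = np.asarray(p_target, float) - np.asarray(p_own, float)
    dv = np.asarray(v_target, float) - np.asarray(v_own, float)
    denom = float(dv @ dv)
    t_cpa = 0.0 if denom < 1e-9 else np.clip(-(dp @ dv) / denom, 0.0, horizon)
    return float(np.linalg.norm(dp + t_cpa * dv))   # use as RRT node cost to seek near misses

# Example: a crossing target ship versus an own-ship heading north at 5 m/s.
print(cpa_cost(p_target=[800.0, 0.0], v_target=[-4.0, 3.0],
               p_own=[0.0, 0.0], v_own=[0.0, 5.0]))
```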
<|MaskedSetence|> <|MaskedSetence|> For example, LLMs based on DTL have demonstrated significant potential for ASR tasks, particularly for both LM and AM components. The incorporation of DTL techniques into large language models (LLMs) facilitates the transfer of knowledge from extensive pre-training tasks to enhance ASR effectiveness. <|MaskedSetence|> This enables the AM component to grasp acoustic features like spectrograms or Mel-frequency cepstral coefficients (MFCCs) and utilize pre-trained knowledge to improve speech recognition accuracy. Through fine-tuning, the model can adjust and specialize in specific datasets or acoustic domains, leading to enhanced ASR performance.
.
|
**A**: In terms of AM, fine-tuning LLM can leverage insights gained from pre-trained models exposed to sample acoustic data.
**B**: LLMs and Transformers represent the forefront of AI, trained on vast datasets spanning various domains, including text, speech, images, and multi-modal inputs.
**C**: Despite extensive research on ASR, existing SOTA approaches often lack integration of advanced AI techniques like DRL and FL into both AM and LM domains.
|
BCA
|
BCA
|
BAC
|
BCA
|
Selection 4
|
<|MaskedSetence|> This is thanks to our improvement to the image degradation model where we provide a robust restoration capability on compression and resize functionality.
Meanwhile, due to our proposed balanced twin perceptual loss, images generated by our GAN network do not show unwanted color artifacts as in AnimeSR and VQD-SR, which can be seen in the fifth row. Further, thanks to the versatile scenes collected in our proposed dataset, we are capable of achieving effective restoration in dark scenes. <|MaskedSetence|> IQA stands for image quality assessment. <|MaskedSetence|>
|
**A**: More visual results can be found in the supplementary materials.
Table 2: Ablation study results of different training datasets.
**B**: In addressing various twisted lines and shadow artifacts, our model outperforms others in effective restoration, evidenced by the third and fourth rows.
**C**: ICA stands for image complexity assessment..
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 2
|
VII Multi-channel orthogonal sampling
In this section, we illustrate the theoretical power of our formalism by revisiting the sampling/reconstruction system designed in [32, 11] for multi-channel time encoding. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We show that this encoding system turns out to satisfy the abstract conditions of (1) and (3). While the reconstruction iterates of [32, 11] coincide with those of (84) under our formalism, our theory uncovers the full pseudo-inversion property of this method, which was only studied in a noise-free case of perfect reconstruction in these references. Moreover, while the reconstruction method was mostly presented at a conceptual level in [32, 11], our abstract reformulation simultaneously allows more explicit descriptions of practical implementations..
|
**A**: A basic POCS algorithm was used to reconstruct a multi-channel signal from the elaborate sampling system shown in Fig.
**B**: 2. The letters ‘x’ and ‘y’ from [32, 11] have been interchanged in Fig.
**C**: 2 to be compatible with the notation of the present article..
|
ABC
|
ACB
|
ABC
|
ABC
|
Selection 1
|
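For readers unfamiliar with the POCS machinery invoked above, the following generic sketch illustrates the basic alternating-projection idea on two simple convex sets (an affine subspace and a box); it is only meant to convey the iteration pattern, not the specific multi-channel time-encoding operators of [32, 11].

```python
# Generic POCS illustration: alternate projections onto an affine set {x : Ax = b} and a box.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 8))
b = 0.1 * rng.normal(size=3)          # small b keeps the intersection with the box nonempty
lo, hi = -1.0, 1.0

def project_affine(x):
    # Orthogonal projection onto {x : Ax = b} via the normal equations.
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def project_box(x):
    return np.clip(x, lo, hi)

x = np.zeros(8)
for _ in range(200):                  # alternate the two projections
    x = project_box(project_affine(x))

# With a nonempty intersection, the iterates approach a point satisfying both constraints.
print("residual ||Ax - b|| =", np.linalg.norm(A @ x - b))
```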
<|MaskedSetence|> <|MaskedSetence|> Furthermore, it is now established that SKC in bits per second from channel probing is not constrained by channel coherence time, which is unlike SKC based on reciprocal channel responses. These results are complementary to the prior works on DoF of SKC from MIMO channel probing. <|MaskedSetence|> Theorem 1 provides a strong motivation for further development of radio or non-quantum based schemes for SKG..
|
**A**:
IV Conclusion
For the first time, closed-form expressions of SKC based on data sets from a Gaussian MIMO channel probing are shown.
**B**: Compared to quantum key distribution [14], SKG from radio or any non-quantum channels is much more cost-effective.
**C**: The gap between Maurer’s upper and lower bounds is proven to be zero when the data sets used are from one-way probing.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> It contains 40 distinct speakers and 5.4-hour speech. Following [5], we randomly select one sentence for each speaker for the LibriSpeech test-clean benchmark. Specifically, we randomly select 3-second clips as prompts from the same speaker’s speech. 2) RAVDESS [58], an emotional TTS dataset featuring 24 professional actors (12 female, 12 male) across 8 emotions (neutral, calm, happy, sad, angry, fearful, surprise, and disgust) at 2 emotional intensities (normal and strong). We use strong-intensity samples for the RAVDESS benchmark. <|MaskedSetence|>
|
**A**: We employ two benchmark datasets: 1) LibriSpeech [57] test-clean, a widely-used testset for zero-shot TTS task.
**B**: We adopt this benchmark for prosody evaluation, considering 1) for the same speaker, speech with the same emotion shares similar prosody, while speech with different emotions displays varied prosodies; 2) the benchmark provides speech samples with the same text from the same speaker across eight different emotions.
.
**C**:
Evaluation Dataset.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 2
|