The lesions generally appeared in areas that were predicted to have high temperatures. While more work is needed to validate the temperature estimates in and around the skull, the ability to predict the locations and onset of lesions in the bone marrow could allow for better distribution of the acoustic energy over the skull. Understanding the skull absorption characteristics of TcMRgFUS could also be useful in optimizing transcranial focusing.

Cerenkov luminescence tomography (CLT) is a promising imaging tool for obtaining three-dimensional (3D), non-invasive visualization of the in vivo distribution of radiopharmaceuticals. However, reconstruction performance remains unsatisfactory for biomedical applications because the inverse problem of CLT is severely ill-conditioned and intractable. In this study, a novel non-negative iterative convex refinement (NNICR) approach was therefore utilized to improve the reconstruction accuracy, robustness, and shape-recovery capability of CLT. A spike-and-slab prior was employed to capture the sparsity of the Cerenkov source, which could be formalized as a non-convex optimization problem. The NNICR approach solved this non-convex problem by refining the solutions of convex sub-problems. To evaluate the performance of the NNICR approach, numerical simulations and experiments on in vivo tumor-bearing mouse models were conducted. A conjugate-gradient-based Tikhonov regularization approach (CG-Tikhonov), a fast iterative shrinkage-thresholding algorithm based Lasso approach (Fista-Lasso), and an Elastic-Net regularization approach were used for comparison of reconstruction performance. The results of these experiments demonstrated that the NNICR approach obtained superior reconstruction performance in terms of localization accuracy, shape-recovery capability, robustness, and in vivo practicability. This study is expected to facilitate the preclinical and clinical applications of CLT.

We introduce VA-Point-MVSNet, a novel visibility-aware point-based deep framework for multi-view stereo (MVS). Distinct from existing cost-volume approaches, our method directly processes the target scene as a point cloud. More specifically, our method predicts depth in a coarse-to-fine manner: we first generate a coarse depth map, convert it into a point cloud, and refine the point cloud iteratively by estimating the residual between the depth of the current iteration and that of the ground truth. Our network leverages 3D geometry priors and 2D texture information jointly and effectively by fusing them into a feature-augmented point cloud, and it processes the point cloud to estimate a 3D flow for each point. This point-based architecture allows higher accuracy, greater computational efficiency, and more flexibility than cost-volume-based counterparts. Furthermore, our visibility-aware multi-view feature aggregation allows the network to aggregate multi-view appearance cues while taking occlusions into account. Experimental results show that our approach achieves a significant improvement in reconstruction quality over state-of-the-art methods on the DTU and Tanks and Temples datasets. The code of VA-Point-MVSNet will be released at https://github.com/callmeray/PointMVSNet.
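To make the coarse-to-fine refinement above concrete, here is a minimal numpy sketch of the iterative depth-residual update on an upsampled, feature-augmented depth map. The `predict_residual` stub stands in for the learned 3D-flow network, and the intrinsics, upsampling factor, and feature dimensions are illustrative assumptions rather than values from VA-Point-MVSNet.

```python
import numpy as np

def unproject(depth, K):
    """Lift a depth map into a 3D point cloud using pinhole intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)            # (h, w, 3)

def predict_residual(points, features):
    """Hypothetical stand-in for the learned point-flow network: it would
    regress a per-point depth residual from the feature-augmented point
    cloud; here it simply returns zeros."""
    return np.zeros(points.shape[:2])

def refine_depth(coarse_depth, features, K, n_iters=3, upsample=2):
    """Coarse-to-fine loop: upsample, unproject, add the predicted residual.
    (A full implementation would also rescale K with the resolution.)"""
    depth = coarse_depth
    for _ in range(n_iters):
        depth = np.kron(depth, np.ones((upsample, upsample)))        # nearest-neighbour upsampling
        features = np.kron(features, np.ones((upsample, upsample, 1)))
        points = unproject(depth, K)
        depth = depth + predict_residual(points, features)           # residual along the viewing ray
    return depth

# Toy usage with illustrative intrinsics and random features.
K = np.array([[320.0, 0.0, 160.0], [0.0, 320.0, 120.0], [0.0, 0.0, 1.0]])
coarse = np.full((60, 80), 2.0)                    # coarse depth map (metres)
feats = np.random.rand(60, 80, 8)                  # fused 2D texture / 3D geometry features
refined = refine_depth(coarse, feats, K)
print(refined.shape)                               # (480, 640) after three 2x upsamplings
```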
Traditional video compression approaches build upon the hybrid coding framework with motion-compensated prediction and residual transform coding. In this paper, taking advantage of both the classical compression architecture and the powerful non-linear representation ability of neural networks, we propose the first end-to-end deep video compression framework. Our framework employs pixel-wise motion information, which is learned from an optical flow network and further compressed by an auto-encoder network to save bits. The other compression components are also implemented by well-designed networks for high efficiency. All modules are jointly optimized with a rate-distortion trade-off and collaborate with one another. More importantly, the proposed deep video compression framework is very flexible and can easily be extended with lightweight or advanced networks for higher speed or better efficiency. Experimental results show that the proposed approach outperforms the widely used video coding standard H.264 and is even on par with the latest standard, H.265.

In this paper, we propose a channel-wise interaction based binary convolutional neural network (CI-BCNN) learning method for efficient inference. Conventional methods apply xnor and bitcount operations in binary convolution with notable quantization error, which usually produces signs in the binary feature maps that are inconsistent with their full-precision counterparts and leads to significant information loss. In contrast, CI-BCNN mines channel-wise interactions, through which prior knowledge is provided to alleviate the sign inconsistency in binary feature maps and to preserve the information of input samples during inference. Specifically, we mine the channel-wise interactions with a reinforcement learning model and impose channel-wise priors on the intermediate feature maps through an interacted bitcount function. Because CI-BCNN mines channel-wise interactions in a large search space where each channel may correlate with others, the search deficiency caused by sparse interactions obstructs the agent from obtaining the optimal policy. We therefore present hierarchical channel-wise interaction based binary convolutional neural networks (HCI-BCNN) to shrink the search space via hierarchical reinforcement learning. Moreover, we propose a denoised interacted bitcount for binary convolution that smooths the channel-wise interactions, so that noise in the channel-wise priors can be alleviated. Experimental results on the CIFAR-10 and ImageNet datasets demonstrate the effectiveness of the proposed approach.

We demonstrate a versatile thin lensless camera with a designed phase mask placed less than 2 mm from an imaging CMOS sensor. Using wave optics and phase-retrieval methods, we present a general-purpose framework to create phase masks that achieve desired sharp point-spread functions (PSFs) for desired camera thicknesses. From a single 2D encoded measurement, we show the reconstruction of high-resolution 2D images, computational refocusing, and 3D imaging. This ability is made possible by our proposed high-performance contour-based PSF. The heuristic contour-based PSF is designed using concepts from signal processing to achieve maximal information transfer to a bit-depth-limited sensor. Owing to the efficient coding, we can use fast linear methods for high-quality image reconstructions and switch to iterative nonlinear methods for higher-fidelity reconstructions and 3D imaging.
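As an illustration of the "fast linear methods" mentioned for the single encoded measurement, the following is a minimal numpy sketch of a regularized (Wiener-style) frequency-domain deconvolution with a known PSF. The circular-convolution forward model, the random sparse placeholder PSF, and the regularization weight are simplifying assumptions; the paper's actual contour-based PSF and reconstruction pipeline may differ.

```python
import numpy as np

def wiener_deconvolve(measurement, psf, reg=1e-3):
    """Regularized frequency-domain deconvolution of one encoded measurement.
    Assumes the simplified imaging model y = psf (*) x (circular convolution)."""
    H = np.fft.fft2(psf, s=measurement.shape)
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)      # Wiener-style regularized inverse filter
    return np.real(np.fft.ifft2(X))

# Toy usage: simulate an encoded measurement from a sparse placeholder PSF.
rng = np.random.default_rng(0)
scene = np.zeros((256, 256))
scene[100:140, 120:160] = 1.0                        # simple test scene
psf = (rng.random((256, 256)) < 0.001).astype(float) # sparse placeholder for a multiplexing PSF
psf /= psf.sum()
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recon = wiener_deconvolve(measurement, psf)
print(float(np.abs(recon - scene).mean()))           # mean absolute reconstruction error
```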
OBJECTIVE: Estimation of the discharge pattern of motor units by electromyography (EMG) decomposition has been applied for neurophysiologic investigations, clinical diagnosis, and human-machine interfacing. However, most methods for EMG decomposition are currently applied offline. Here, we propose an approach for high-density surface EMG decomposition in real time. METHODS: A real-time decomposition scheme comprising two sessions, offline training and online decomposition, is proposed based on the convolutional kernel compensation algorithm. The estimation parameters, namely the separation vectors and the thresholds for spike extraction, are first computed during offline training and then applied directly to estimate motor unit spike trains (MUSTs) during online decomposition. The estimation parameters are updated as new discharges are identified, to adapt to non-stationary conditions. Decomposition accuracy was validated on simulated EMG signals generated by convolving synthetic MUSTs with motor unit action potentials (MUAPs). Moreover, the accuracy of the online decomposition was assessed on experimental signals recorded from forearm muscles using a signal-based performance metric, the pulse-to-noise ratio (PNR). MAIN RESULTS: The proposed algorithm yielded high decomposition accuracy and robustness to non-stationary conditions. The accuracy of MUSTs identified from simulated EMG signals was > 80% for most conditions. From experimental EMG signals, on average 12 ± 2 MUSTs were identified from each electrode grid with a PNR of 25.0 ± 1.8 dB, corresponding to an estimated decomposition accuracy > 75%. CONCLUSION AND SIGNIFICANCE: These results indicate the feasibility of non-invasive, real-time identification of motor unit activity during variable-force contractions, extending the potential applications of high-density EMG as a neural interface.
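To illustrate the online stage of such a decomposition scheme, here is a minimal numpy sketch that projects extended high-density EMG onto precomputed separation vectors and thresholds the output to extract spike instants. The extension factor, thresholds, random "trained" separation vectors, and the simple threshold detector are illustrative assumptions; the convolutional kernel compensation training and the online parameter-update steps are not shown.

```python
import numpy as np

def extend(emg, ext=8):
    """Build an extended observation matrix: stack ext delayed (circularly
    shifted) copies of the multichannel EMG block."""
    rows = [np.roll(emg, d, axis=1) for d in range(ext)]
    return np.vstack(rows)                               # (n_ch * ext, n_samples)

def online_decompose(emg_block, sep_vectors, thresholds, ext=8):
    """Online stage: project the extended EMG onto each precomputed separation
    vector and threshold the squared output to extract spike instants."""
    x = extend(emg_block, ext)
    spike_trains = []
    for w, thr in zip(sep_vectors, thresholds):
        activity = (w @ x) ** 2                          # innervation-pulse-train estimate
        spike_trains.append(np.flatnonzero(activity > thr))
    return spike_trains

# Toy usage with synthetic data (shapes only; not a physiological simulation).
rng = np.random.default_rng(1)
n_ch, ext, fs = 64, 8, 2048
emg_block = rng.standard_normal((n_ch, fs // 10))        # 100 ms block of HD-sEMG
sep_vectors = rng.standard_normal((12, n_ch * ext))      # 12 motor units, "trained" offline
thresholds = np.full(12, 50.0)                           # per-unit spike-extraction thresholds
musts = online_decompose(emg_block, sep_vectors, thresholds, ext)
print([len(s) for s in musts])                           # spikes detected per motor unit
```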
Parkinson's disease (PD) is a neurodegenerative disorder that affects multiple neurological systems. Traditional PD assessment is conducted by a physician during infrequent clinic visits. Using smartphones, remote patient monitoring has the potential to obtain objective behavioral data semi-continuously, track disease fluctuations, and avoid rater dependency. Smartphones collect sensor data during various active tests and passive monitoring, including balance (postural instability), dexterity (skill in performing tasks using the hands), gait (the pattern of walking), tremor (involuntary muscle contraction and relaxation), and voice. Some of the features extracted from smartphone data are potentially associated with specific PD symptoms identified by physicians. To leverage large-scale cross-modality smartphone features, we propose a machine-learning framework for automated disease assessment. The framework consists of a two-step feature selection procedure and a generic model based on elastic-net regularization. Using this framework, we map the PD-specific architecture of behaviors using data obtained from both PD participants and healthy controls (HCs). Utilizing these atlases of features, the framework shows promise in (a) discriminating PD participants from HCs and (b) estimating the disease severity of individuals with PD. Data analysis results from 437 behavioral features obtained from 72 subjects (37 PD and 35 HC), sampled on 17 separate days over a period of up to six months, suggest that this framework is potentially useful for the analysis of remotely collected smartphone sensor data in individuals with PD (see the elastic-net sketch at the end of this section).

OBJECTIVE: Accurate monitoring of joint kinematics in individuals with neuromuscular and musculoskeletal disorders within ambulatory settings could provide important information about changes in disease status and the effectiveness of rehabilitation programs and/or pharmacological treatments. This paper introduces a reliable, power-efficient, and low-cost wearable system designed for the long-term monitoring of joint kinematics in ambulatory settings. METHODS: Seventeen healthy subjects wore a retractable string sensor, fixed to two anchor points on opposing segments of the knee joint, while walking at three different self-selected speeds. Joint angles were estimated from calibrated sensor values and their derivatives in a leave-one-subject-out cross-validation manner using a random forest algorithm. RESULTS: The proposed system estimated knee flexion/extension angles with a root mean square error (RMSE) of 5.0° ± 1.0° across the study subjects upon removal of a single outlier subject. The outlier was likely the result of sensor miscalibration.
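A minimal reproduction of this evaluation protocol on synthetic data might look like the following scikit-learn sketch: knee angles are regressed from a calibrated sensor value and its derivative with a random forest, scored by leave-one-subject-out cross-validation. The synthetic gait signals and the forest hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(2)

# Synthetic stand-in for the study data: per-subject gait cycles where the
# string-sensor value is a noisy, roughly linear function of the knee angle.
subjects, samples = 17, 500
t = np.linspace(0, 10 * np.pi, samples)
X, y, groups = [], [], []
for s in range(subjects):
    angle = 30 + 30 * np.sin(t + rng.uniform(0, np.pi))                     # knee flexion/extension (deg)
    sensor = 0.9 * angle + rng.normal(0, 2, samples) + rng.uniform(-5, 5)   # calibrated sensor value
    d_sensor = np.gradient(sensor)                                          # sensor derivative feature
    X.append(np.column_stack([sensor, d_sensor]))
    y.append(angle)
    groups.append(np.full(samples, s))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

# Leave-one-subject-out cross-validation of a random forest angle estimator.
rmses = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
print(f"RMSE: {np.mean(rmses):.1f} deg +/- {np.std(rmses):.1f} deg")
```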

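Returning to the smartphone-based PD assessment framework above, here is a hedged scikit-learn sketch of a two-step pipeline in that spirit: univariate feature screening followed by elastic-net regression of a severity score. The synthetic features, the screening choice (SelectKBest), and all hyperparameters are assumptions for illustration; they are not the study's actual feature-selection procedure.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic stand-in: 72 subjects x 437 behavioral features, with a severity
# score driven by a small subset of the features (sparse ground truth).
n_subjects, n_features = 72, 437
X = rng.standard_normal((n_subjects, n_features))
true_w = np.zeros(n_features)
true_w[:15] = rng.normal(0, 1, 15)
severity = X @ true_w + rng.normal(0, 0.5, n_subjects)

# Two-step pipeline: univariate screening, then elastic-net with CV-chosen penalty.
model = Pipeline([
    ("scale", StandardScaler()),
    ("screen", SelectKBest(f_regression, k=100)),            # step 1: feature screening
    ("enet", ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5)),  # step 2: elastic-net regression
])
X_tr, X_te, y_tr, y_te = train_test_split(X, severity, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```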
