Abstract |
---|
Locating microseismic sources in a timely manner is a challenging problem in microseismic monitoring. To improve the accuracy and efficiency of source location, this article presents a method based on deep reinforcement learning (RL). We first construct and train a convolutional autoencoder to preprocess the seismic records in the microseismic waveform database. The source-location problem is then formulated as a Markov decision process so that deep RL can be applied. We decompose the location task into three subtasks and design the critical elements of deep RL for each. Three agents independently learn optimal policies for their respective subtasks within a deep Q-network (DQN) framework and jointly determine the precise location of the microseismic source. Finally, we evaluate the proposed method on synthetic data generated from the Marmousi model and a 3-D velocity model. The experimental results indicate that the proposed method locates microseismic sources efficiently and accurately. |

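The abstract frames source location as a Markov decision process solved with Q-learning agents. As a minimal illustration of that framing only, the sketch below casts a one-dimensional search for a source position as an MDP and solves it with tabular Q-learning; the grid size, source cell, rewards, and hyperparameters are all assumptions for illustration, and the paper itself uses a deep Q-network over waveform features rather than this dependency-free tabular stand-in.

```python
import numpy as np

# Hypothetical 1-D search: the "source" sits at cell SOURCE on a line of N cells.
# State  = the agent's current location estimate.
# Action = move the estimate left or right by one cell.
# Reward = +1 on landing on the source, a small step penalty otherwise.
N, SOURCE = 21, 13          # assumed grid size and true source cell
ACTIONS = (-1, +1)          # move left, move right

def step(state, action):
    nxt = min(max(state + ACTIONS[action], 0), N - 1)
    reward = 1.0 if nxt == SOURCE else -0.01
    return nxt, reward, nxt == SOURCE

rng = np.random.default_rng(0)
Q = np.zeros((N, len(ACTIONS)))         # tabular stand-in for the DQN
alpha, gamma, eps = 0.5, 0.95, 0.2      # assumed hyperparameters

for episode in range(500):
    s = int(rng.integers(N))
    for _ in range(100):
        # epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2
        if done:
            break

def locate(start):
    """Follow the learned greedy policy until the estimate reaches the source."""
    s = start
    for _ in range(N):
        s2, _, done = step(s, int(Q[s].argmax()))
        if done:
            return s2
        s = s2
    return s

print(locate(0))   # the greedy policy walks the estimate onto the source cell
```

The paper's method extends this idea by splitting the location task into three subtasks, each handled by its own DQN agent over autoencoder-compressed waveform features, with the agents' outputs combined into a single source estimate.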
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/TGRS.2022.3182991 | IEEE Transactions on Geoscience and Remote Sensing |

Keywords | DocType | Volume |
---|---|---|
Reinforcement learning, Feature extraction, Position measurement, Monitoring, Deep learning, Markov processes, Data models, Deep Q-network (DQN), deep reinforcement learning (RL), microseismic monitoring, source location | Journal | 60 |

ISSN | Citations | PageRank |
---|---|---|
0196-2892 | 0 | 0.34 |

References | Authors |
---|---|
0 | 4 |

Name | Order | Citations | PageRank |
---|---|---|---|
Qiang Feng | 1 | 0 | 0.34 |
Liguo Han | 2 | 0 | 1.69 |
Baozhi Pan | 3 | 0 | 0.34 |
Binghui Zhao | 4 | 0 | 0.34 |