
JAN. 2021 1

Artificial Intelligence for Satellite Communication: A Review

Fares Fourati, Mohamed-Slim Alouini, Fellow, IEEE

Abstract—Satellite communication offers the prospect of service continuity over uncovered and under-covered areas, service ubiquity, and service scalability. However, several challenges must first be addressed to realize these benefits, as the resource management, network control, network security, spectrum management, and energy usage of satellite networks are more challenging than those of terrestrial networks. Meanwhile, artificial intelligence (AI), including machine learning, deep learning, and reinforcement learning, has been steadily growing as a research field and has shown successful results in diverse applications, including wireless communication. In particular, the application of AI to a wide variety of satellite communication aspects has demonstrated excellent potential, including beam-hopping, anti-jamming, network traffic forecasting, channel modeling, telemetry mining, ionospheric scintillation detecting, interference managing, remote sensing, behavior modeling, space-air-ground integrating, and energy managing. This work thus provides a general overview of AI, its diverse sub-fields, and its state-of-the-art algorithms. Several challenges facing diverse aspects of satellite communication systems are then discussed, and their proposed and potential AI-based solutions are presented. Finally, an outlook on the field is drawn, and future steps are suggested.

Index Terms—Satellite Communication, Artificial Intelligence, Machine Learning, Deep Learning, Reinforcement Learning

I. INTRODUCTION

THE remarkable advancement of wireless communication systems, quickly increasing demand for new services in various fields, and rapid development of intelligent devices have led to a growing demand for satellite communication systems to complement conventional terrestrial networks to give access over uncovered and under-covered urban, rural, and mountainous areas, as well as the seas.

There are three major types of satellites: geostationary Earth orbit, also referred to as geosynchronous equatorial orbit (GEO), medium Earth orbit (MEO), and low Earth orbit (LEO) satellites. This classification depends on three main features, i.e., the altitude, beam footprint size, and orbit. GEO, MEO, and LEO satellites orbit the Earth at an altitude of 35786 km, 7000–25000 km, and 300–1500 km, respectively. The beam footprint of a GEO satellite ranges from 200 to 3500 km; that of an MEO or LEO satellite ranges from 100 to 1000 km. The orbital period of a GEO satellite is equal to the Earth's rotational period, which makes it appear fixed to ground observers, whereas LEO and MEO satellites have shorter periods; hence, many LEO and MEO satellites are required to offer continuous global coverage. For example, Iridium NEXT has 66 LEO satellites and 6 spares, Starlink by SpaceX plans to have 4425 LEO satellites plus some spares, and O3b has 20 MEO satellites including 3 on-orbit spares [1].

Fares Fourati and Mohamed Slim Alouini are with King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal, 23955-6900 KSA (e-mail: [email protected], [email protected])

Satellite communication use cases can also be split into three categories: i) service continuity, to provide network access over uncovered and under-covered areas; ii) service ubiquity, to improve network availability in cases of temporary outage or destruction of a ground network due to disasters; and iii) service scalability, to offload traffic from the ground networks. In addition, satellite communication systems could provide coverage to various fields, such as the transportation, energy, agriculture, business, and public safety fields [2].

Although satellite communication offers improved global coverage and increased communication quality, it faces several challenges. Satellites, especially LEO satellites, have limited on-board resources and move quickly, bringing high dynamics to network access. The high mobility of the space segments and the inherent heterogeneity between the satellite layers (GEO, MEO, LEO), the aerial layers (unmanned aerial vehicles (UAVs), balloons, airships), and the ground layer make network control, network security, and spectrum management challenging. In addition, achieving high energy efficiency for satellite communication is more challenging than for terrestrial networks.

Several surveys have discussed different aspects of satellite communication systems, such as handoff schemes [3], mobile satellite systems [4], MIMO over satellite [5], satellites for the Internet of Remote Things [6], inter-satellite communication systems [7], Quality of Service (QoS) provisioning [8], space optical communication [9], space-air-ground integrated networks [10], small satellite communication [11], physical space security [12], CubeSat communications [13], and non-terrestrial networks [2]. Meanwhile, interest in artificial intelligence (AI) has increased in recent years. AI, including machine learning (ML), deep learning (DL) and reinforcement learning (RL), has shown successful results in diverse applications in science and engineering fields, such as electrical engineering, software engineering, bioengineering, financial engineering, and medicine. Several researchers have thus turned to AI techniques to solve various challenges in their respective fields and have designed diverse successful AI-based applications, including to overcome several challenges in the wireless communication field.

Many researchers have discussed AI and its applications to wireless communication in general [14]–[17]. Others have focused on the application of AI to one aspect of wireless communication, such as wireless communications in the Internet of Things (IoT) [18], network management [19], wireless security [20], emerging robotics communication [21], antenna design [22] and UAV networks [23], [24]. Vazquez et al. [25] briefly discussed some promising use cases of AI for satellite communication, whereas Kato et al. [26] discussed the use of AI for space-air-ground integrated networks. The use of DL in space applications has also been addressed [27].

arXiv:2101.10899v1 [eess.SP] 25 Jan 2021

Fig. 1. Applications of artificial intelligence (AI) for different satellite communication aspects

Overall, several researchers have discussed wireless and satellite communication systems, and some of these have discussed the use of AI for one or a few aspects of satellite communication; however, an extensive survey of AI applications in diverse aspects of satellite communication has yet to be performed.

This work therefore aims to provide an introduction to AI, a discussion of various challenges being faced by satellite communication, and an extensive survey of potential AI-based applications to overcome these challenges. A general overview of AI, its diverse sub-fields and its state-of-the-art algorithms is presented in Section II. Several challenges being faced by diverse aspects of satellite communication systems and potential AI-based solutions are then discussed in Section III; these applications are summarized in Fig. 1. For ease of reference, the acronyms and abbreviations used in this paper are presented in Table I.

II. ARTIFICIAL INTELLIGENCE (AI)

The demonstration of successful applications of AI in healthcare, finance, business, industry, robotics, autonomous cars and wireless communication, including satellites, has led it to become a subject of high interest in the research community, industry, and media.

This section therefore aims to provide a brief overview of the world of AI, ML, DL and RL. Sub-fields, commonly used algorithms, challenges, achievements, and outlooks are also addressed.

TABLE I: ACRONYMS AND ABBREVIATIONS

AE: Autoencoder
AI: Artificial intelligence
AJ: Anti-jamming
ARIMA: Auto regressive integrated moving average
ARMA: Auto regressive moving average
BH: Beam hopping
CNN: Convolutional neural network
DL: Deep learning
DNN: Deep neural network
DRL: Deep reinforcement learning
ELM: Extreme learning machine
EMD: Empirical mode decomposition
FARIMA: Fractional auto regressive integrated moving average
FCN: Fully convolutional network
FDMA: Frequency division multiple access
FH: Frequency hopping
GA: Genetic algorithms
GANs: Generative adversarial networks
GNSS: Global navigation satellite system
IoS: Internet of satellites
kNN: k-nearest neighbor
LRD: Long-range dependence
LSTM: Long short-term memory
MDP: Markov decision process
ML: Machine learning
MO-DRL: Multi-objective deep reinforcement learning
NNs: Neural networks
PCA: Principal component analysis
QoS: Quality of service
RFs: Random forests
RL: Reinforcement learning
RNNs: Recurrent neural networks
RS: Remote sensing
RSRP: Reference signal received power
SAGIN: Space-air-ground integrated network
SRD: Short-range dependence
SVM: Support vector machine
SVR: Support vector regression
SatIoT: Satellite Internet of Things
UE: User equipment
VAEs: Variational autoencoders

A. Artificial Intelligence

Although AI sounds like a novel approach, it can be traced to the 1950s and encompasses several approaches and paradigms. ML, DL, RL and their intersections are all parts of AI, as summarized in Fig. 2 [28]. Thus, a major part of AI follows the learning approach, although approaches without any learning aspects are also included. Overall, research into AI aims to make the machine smarter, either by following some rules or by facilitating guided learning. The former refers to symbolic AI; the latter refers to ML. Here, smarter indicates the ability to accomplish complex intellectual tasks normally necessitating a human, such as classification, regression, clustering, detection, recognition, segmentation, planning, scheduling, or decision making. In the early days of AI, many believed that these tasks could be achieved by transferring human knowledge to computers by providing an extensive set of rules that encompasses the humans' expertise. Much focus was thus placed on feature engineering and implementing sophisticated handcrafted commands to be explicitly used by the computers. Although this symbolic AI has been suitable for many applications, it has shown various limitations in terms of both precision and accuracy for more advanced problems that show more complexity, less structure, and more hidden features, such as computer-vision and language-processing tasks. To address these limitations, researchers turned to a learning approach known as ML.

Fig. 2. Artificial Intelligence, Machine Learning, Deep Learning and Reinforcement Learning

Fig. 3. Machine Learning Approach

B. Machine Learning (ML)

ML, which encompasses DL and RL, is a subset of AI. In contrast to symbolic AI, where the machine is provided with all the rules to solve a certain problem, ML requires a learning approach. Thus, rather than being given the rules to solve a problem, the machine is provided with the context to learn the rules by itself, as shown in Fig. 3 and best summarized by the AI pioneer Alan Turing [29]: "An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil's behavior." An ML system is trained rather than programmed with explicit rules. The learning process requires data to extract patterns and hidden structures; the focus is on finding optimal representations of the data to get closer to the expected result by searching within a predefined space of possibilities using guidance from a feedback signal, where representations of the data refer to different ways to look at or encode the data. To achieve this, three things are mandatory: input data, samples of the expected output, and a way to measure the performance of the algorithm [28]. This simple idea of learning a useful representation of data has been useful in multiple applications, from image classification to satellite communication.

ML algorithms are commonly classified as either deep or non-deep learning. Although DL has gained higher popularity and attention, some classical non-deep ML algorithms are more useful in certain applications, especially when data is lacking. ML algorithms can also be classified into supervised, semi-supervised, unsupervised, and RL classes, as shown in Fig. 4. In this subsection, only non-RL, non-deep ML approaches are addressed; DL and RL are addressed in Sections II-C and II-D, respectively.

1) Supervised, Unsupervised and Semi-supervised Learning: Supervised, unsupervised and semi-supervised learning are all ML approaches that can be employed to solve a broad variety of problems.

During supervised learning, all of the training data is labeled, i.e., tagged with the correct answer. The algorithm is thus fully supervised, as it can check whether its predictions are right or wrong at any point in the training process. During image classification, for example, the algorithm is provided with images of different classes, and each image is tagged with the corresponding class. The supervised model learns the patterns from the training data to then be able to predict labels for non-labeled data during inference. Supervised learning has been applied for classification and regression tasks.
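As an illustrative sketch (not from the paper), the supervised setting can be made concrete with a minimal one-nearest-neighbor classifier: every training sample carries a label, and prediction simply copies the label of the closest training point. All data and names here are hypothetical toy examples.

```python
# Illustrative sketch (not from the paper): a minimal 1-nearest-neighbor
# classifier, one of the simplest supervised learners. Every training
# sample is labeled; prediction copies the label of the closest sample.

def euclidean(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_1nn(train_x, train_y, query):
    """Return the label of the training point nearest to `query`."""
    best = min(range(len(train_x)), key=lambda i: euclidean(train_x[i], query))
    return train_y[best]

# Toy labeled dataset: two clusters in a 2-D feature space.
X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
y = ["A", "A", "B", "B"]

print(predict_1nn(X, y, (0.2, 0.1)))  # near the first cluster -> "A"
print(predict_1nn(X, y, (4.8, 5.1)))  # near the second cluster -> "B"
```

The labels in the training set play the role of the "supervision": the model can always compare its prediction against a known correct answer.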

As labeling can be impossible due to a lack of information or infeasible due to high costs, unsupervised learning employs an unlabeled data set during training. Using unlabeled data, the model can extract hidden patterns or structures in the data that may be useful to understand a certain phenomenon, or its output could be used as an input for other models. Unsupervised learning has been commonly used for clustering, anomaly detection, association, and autoencoders (AEs).
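To make the unsupervised case concrete, the following hedged sketch (not from the paper) runs k-means clustering on unlabeled 1-D data: no labels are provided, yet the algorithm recovers the group structure by alternating assignment and centroid-update steps.

```python
# Illustrative sketch (not from the paper): k-means clustering on
# unlabeled 1-D data. No labels are given; structure is discovered by
# alternating an assignment step and a centroid-update step.

def kmeans_1d(data, centroids, iterations=10):
    """Cluster `data` around len(centroids) centers; return final centroids."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        groups = [[] for _ in centroids]
        for x in data:
            nearest = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        # Update step: move each centroid to the mean of its group.
        centroids = [sum(g) / len(g) if g else c for g, c in zip(groups, centroids)]
    return centroids

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]         # two obvious clusters
print(kmeans_1d(data, centroids=[0.0, 5.0]))  # converges near [1.0, 9.0]
```

The discovered centroids could then serve as input features for a downstream supervised model, as the paragraph above notes.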

As a middle ground between supervised and unsupervised learning, semi-supervised learning allows a mixture of labeled and unlabeled portions of training data. Semi-supervised learning is thus an excellent option when only a small part of the data is labeled and/or the labeling process is either difficult or expensive. An example of this technique is pseudo-labeling, which has been used to improve supervised models [33].
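The pseudo-labeling idea can be sketched as follows (an illustrative toy, not from the paper): a model trained on the small labeled set labels the unlabeled set, and only confident predictions are added back to the training data. Here the "model" is a nearest-class-centroid rule on 1-D data, and the confidence threshold is an assumed hyperparameter.

```python
# Illustrative sketch (not from the paper): pseudo-labeling, a simple
# semi-supervised technique. A model fit on the small labeled set labels
# the unlabeled pool; confident predictions join the training data.
# The "model" here is a nearest-class-centroid rule (toy choice).

def centroid(points):
    return sum(points) / len(points)

labeled = {"low": [1.0, 1.4], "high": [9.0, 8.6]}   # tiny labeled set (1-D)
unlabeled = [1.2, 0.9, 9.1, 5.0]                    # one ambiguous point (5.0)

centroids = {c: centroid(p) for c, p in labeled.items()}

# Pseudo-label only points close enough to a centroid (confidence proxy).
for x in unlabeled:
    cls = min(centroids, key=lambda c: abs(x - centroids[c]))
    if abs(x - centroids[cls]) < 2.0:               # assumed threshold
        labeled[cls].append(x)

# Retrain on the enlarged training set; the ambiguous point was skipped.
centroids = {c: centroid(p) for c, p in labeled.items()}
print({c: len(p) for c, p in labeled.items()})
```

The ambiguous point (5.0) sits far from both centroids, so it is never pseudo-labeled; this filtering is what keeps pseudo-labeling from amplifying its own mistakes.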

2) Probabilistic Modeling: Probabilistic modeling, as its name indicates, involves models using statistical techniques to analyze data and was one of the earliest forms of ML [30]. A popular example is the naive Bayes classifier, which uses Bayes' theorem while assuming that all of the input features are independent; as they generally are not, this is a naive assumption [28]. Another popular example is logistic regression; as the algorithm for this classifier is simple, it is commonly used in the data science community.
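A tiny Bernoulli naive Bayes classifier makes the independence assumption explicit (an illustrative sketch, not from the paper; the "spam"/"ham" data is invented): the class score is the log prior plus one independent log-likelihood term per binary feature, with Laplace smoothing to avoid zero probabilities.

```python
# Illustrative sketch (not from the paper): a Bernoulli naive Bayes
# classifier with Laplace smoothing. The "naive" assumption is that
# every binary feature is independent given the class.
from collections import defaultdict
import math

def train_nb(samples):
    """samples: list of (binary feature tuple, label)."""
    counts = defaultdict(int)                       # label -> sample count
    ones = defaultdict(lambda: defaultdict(int))    # label -> feature -> 1-count
    for x, y in samples:
        counts[y] += 1
        for i, v in enumerate(x):
            ones[y][i] += v
    return counts, ones, len(samples)

def predict_nb(model, x):
    counts, ones, n = model
    best, best_lp = None, -math.inf
    for y, c in counts.items():
        lp = math.log(c / n)                        # log prior
        for i, v in enumerate(x):                   # independent log likelihoods
            p1 = (ones[y][i] + 1) / (c + 2)         # Laplace-smoothed P(feature=1)
            lp += math.log(p1 if v else 1 - p1)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

data = [((1, 1, 0), "spam"), ((1, 0, 0), "spam"),
        ((0, 0, 1), "ham"), ((0, 1, 1), "ham")]
model = train_nb(data)
print(predict_nb(model, (1, 1, 0)))  # features typical of "spam"
```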

3) Support Vector Machine (SVM): Kernel methods are a popular class of algorithms [28], [31], the best known of which is the SVM, which aims to find a decision boundary to classify data inputs. The algorithm maps the data into a high-dimensional representation where the decision boundary is expressed as a hyperplane. The hyperplane is then searched by trying to maximize the distance between the hyperplane and the nearest data points from each class, in a process called maximizing the margin. Although mapping the data into a high-dimensional space is theoretically straightforward, it requires high computational resources. The 'kernel trick', which is based on kernel functions [32], is thus used to compute the distance between points without explicit computation of coordinates, thereby avoiding the computation of the coordinates of points in a high-dimensional space. SVMs were the state-of-the-art for classification for a fairly long time and have shown many successful applications in several scientific and engineering areas [34]. However, SVMs have shown limitations when applied to large datasets. Furthermore, when an SVM is applied to perceptual problems, a feature engineering step is required to enhance the performance because it is a shallow model; this requires human expertise. Although the SVM has been surpassed by DL algorithms, it is still useful because of its simplicity and interpretability.

Fig. 4. Machine Learning Sub-fields

Fig. 5. Decision Tree
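The kernel trick can be verified numerically in one concrete case (an illustrative sketch, not from the paper). For 2-D inputs, the polynomial kernel k(x, z) = (x · z)² equals the ordinary dot product of an explicit 3-D feature map phi(x) = (x1², √2·x1·x2, x2²), yet the kernel never has to construct phi(x); this is exactly what makes high-dimensional margins tractable.

```python
# Illustrative sketch (not from the paper): the kernel trick in one
# concrete case. The polynomial kernel (x . z)^2 equals the dot product
# of explicit 3-D features phi(x), computed without ever building phi.
import math

def poly_kernel(x, z):
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map whose inner product the kernel reproduces."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

x, z = (1.0, 2.0), (3.0, 0.5)
print(poly_kernel(x, z), dot(phi(x), phi(z)))  # identical values (16.0)
```

An SVM only ever needs such inner products between data points, so it can work with the cheap kernel on the left rather than the explicit coordinates on the right.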

4) Decision Trees: A decision tree is a supervised learning algorithm that represents features of the data as a tree by defining conditional control statements, as summarized in Fig. 5 [35], [36]. Given its intelligibility and simplicity, it is one of the most popular algorithms in ML. Further, decision trees can be used for both regression and classification, as decisions can be either continuous values or categories. A more robust version of decision trees, random forests (RFs), combines various decision trees to produce optimized results. This involves building many different weak decision trees and then assembling their outputs using bootstrap aggregating (bagging) [37], [38]. Another popular version of decision trees that is often more effective than RFs is the gradient boosting machine; it also combines various decision tree models but differs from RFs by using gradient boosting [39], which is a way to improve ML models by iteratively training new models that focus on the mistakes of the previous models. The XGBoost library [40], [41] is an excellent implementation of the gradient boosting algorithm that supports C++, Java, Python, R, Julia, Perl, and Scala. RFs and gradient boosting machines are the most popular and robust non-deep algorithms and have been widely used to win various data science competitions on the Kaggle website [42].

Fig. 6. Neural Networks
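The bagging idea behind random forests can be sketched with the weakest possible tree, a depth-1 "stump" (an illustrative toy, not from the paper, and far simpler than a real RF): each stump is trained on a bootstrap resample of the data, and the ensemble predicts by majority vote.

```python
# Illustrative sketch (not from the paper): bootstrap aggregating
# (bagging) with depth-1 decision trees ("stumps") as weak learners,
# the core idea behind random forests: resample, fit, majority-vote.
import random

def fit_stump(xs, ys):
    """Pick the threshold on a 1-D feature minimizing training errors."""
    best = None
    for t in xs:
        errs = sum((x > t) != y for x, y in zip(xs, ys))
        if best is None or errs < best[1]:
            best = (t, errs)
    return best[0]                                    # predict y = (x > threshold)

def bagged_predict(xs, ys, query, n_trees=25, seed=0):
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]    # bootstrap resample
        t = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        votes += (query > t)
    return votes > n_trees / 2                        # majority vote

xs = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
ys = [False, False, False, True, True, True]          # True when x is "large"
print(bagged_predict(xs, ys, 8.5))
print(bagged_predict(xs, ys, 1.5))
```

Gradient boosting differs in that the trees are trained sequentially, each one fit to the residual mistakes of the ensemble so far, rather than independently on resamples.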

5) Neural Networks (NNs): NNs contain different layers of interconnected nodes, as shown in Fig. 6, where each node is a perceptron that feeds the signal produced by a multiple linear regression to an activation function that may be nonlinear [43], [44]. A nonlinear activation function is generally chosen to add more complexity to the model by eliminating linearity. NNs can be used for regression by predicting continuous values or for classification by predicting probabilities for each class. In a NN, the features of one input (e.g., one image) are assigned as the input layer. Then, according to a matrix of weights, the next hidden layers are computed using matrix multiplications (linear manipulations) followed by nonlinear activation functions. The training of NNs is all about finding the best weights. To do so, a loss function is designed to compare the output of the model and the ground truth for each output, in order to find the weights that minimize that loss function. Backpropagation algorithms have been designed to train chains of weights using optimization techniques such as gradient descent [45]. NNs have been successfully used for both regression and classification, although they are most efficient when dealing with a high number of features (input parameters) and hidden layers, which has led to the development of DL.
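The training loop described above can be shown end-to-end on the smallest possible "network": a single neuron with a sigmoid activation, trained by gradient descent on a toy binary task (an illustrative sketch, not from the paper; the task, learning rate, and epoch count are arbitrary choices). The loss compares the model output with the ground truth, and the weights move along the negative gradient.

```python
# Illustrative sketch (not from the paper): gradient descent on a single
# sigmoid neuron (the smallest "network") for the logical-AND task.
# The loss compares output with ground truth; weights follow -gradient.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 0.0, 0.0, 1.0]                       # logical AND targets
w, b, lr = [0.0, 0.0], 0.0, 1.0                # weights, bias, learning rate

def mse():
    return sum((sigmoid(w[0]*x1 + w[1]*x2 + b) - y) ** 2
               for (x1, x2), y in zip(X, Y)) / len(X)

loss_before = mse()
for _ in range(2000):                          # gradient-descent epochs
    for (x1, x2), y in zip(X, Y):
        p = sigmoid(w[0]*x1 + w[1]*x2 + b)
        grad = 2 * (p - y) * p * (1 - p)       # d(loss)/d(pre-activation)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b    -= lr * grad
loss_after = mse()
print(loss_before, "->", loss_after)           # loss shrinks with training
```

Backpropagation generalizes exactly this chain-rule gradient to many stacked layers of such neurons.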


Fig. 7. Simplified Architecture of a Recurrent Neural Network

C. Deep Learning (DL)

In contrast to shallow models, this sub-field of ML requires high computational resources [28], [46]. Recent computational advancements and the automation of feature engineering have paved the way for DL algorithms to surpass classical ML algorithms in solving complex tasks, especially perceptual ones such as computer vision and natural language processing. Due to their relative simplicity, shallow ML algorithms require human expertise and intervention to extract valuable features or to transform the data to make it easier for the model to learn. DL models minimize or eliminate these steps, as these transformations are implicitly done within the deep networks.

1) Convolutional Neural Networks (CNNs): CNNs [47], [48] are a common type of deep NNs (DNNs) that are composed of an input layer, hidden convolution layers, and an output layer, and have been commonly used in computer vision applications such as image classification [50], object detection [51], and object tracking [52]. They have also shown success in other fields, including speech and natural language processing [53]. As their name indicates, CNNs are based on convolutions. The hidden layers of a CNN consist of a series of convolutional layers that convolve the input with filters; an activation function is then applied, followed by additional convolutions. CNN architectures are defined by choosing the sizes, numbers, and positions of filters (kernels) and the activation functions. Learning then involves finding the best set of filters that can be applied to the input to extract useful information and predict the correct output.
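The convolution operation at the heart of a CNN hidden layer can be written in a few lines (an illustrative sketch, not from the paper): a small filter slides over the input and produces one weighted sum per position. As in most DL frameworks, the sliding sum below is technically a cross-correlation, without flipping the kernel.

```python
# Illustrative sketch (not from the paper): the sliding weighted sum a
# CNN layer computes. A small filter (kernel) moves over the input and
# yields one sum per position ("valid" padding, no kernel flip).

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

image = [[1, 2, 0],
         [0, 1, 3],
         [4, 0, 1]]
edge = [[1, -1]]              # horizontal difference filter
print(conv2d(image, edge))    # responds to left-right intensity changes
```

Training a CNN amounts to learning the numbers inside such kernels instead of hand-designing them, which is exactly the "finding the best set of filters" the paragraph describes.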

2) Recurrent Neural Networks (RNNs): RNNs [54] are another family of neural networks in which nodes form a directed graph along a temporal sequence, where previous outputs are used as inputs. RNNs are specialized for processing a sequence of values x(0), x(1), x(2), ..., x(T), and use their internal memory to process variable-length sequences of inputs. Different architectures are designed based on the problem and the data. In general, RNNs are designed as in Fig. 7, where for each time step t, x(t) represents the input at that time, a(t) is the activation, and y(t) is the output; Wa, Wx, Wy, ba and by are coefficients that are shared temporally, and g1 and g2 are activation functions:

a(t) = g1(Wa · a(t-1) + Wx · x(t) + ba)   (1)

y(t) = g2(Wy · a(t) + by)   (2)
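Equations (1) and (2) can be read off directly in code for scalar states (an illustrative sketch, not from the paper; g1 = tanh and g2 = identity are assumed choices of activation, and the weight values are arbitrary). The same weights are reused at every time step, and a(t-1) carries the network's memory.

```python
# Illustrative sketch (not from the paper): a direct scalar reading of
# Eqs. (1) and (2), with g1 = tanh and g2 = identity (assumed choices).
# The same weights serve every time step; a(t-1) is the memory.
import math

Wa, Wx, Wy, ba, by = 0.5, 1.0, 2.0, 0.0, 0.1   # arbitrary shared weights

def rnn_step(a_prev, x_t):
    a_t = math.tanh(Wa * a_prev + Wx * x_t + ba)   # Eq. (1)
    y_t = Wy * a_t + by                            # Eq. (2)
    return a_t, y_t

a = 0.0                           # initial activation a(-1)
outputs = []
for x in [1.0, 0.0, -1.0]:        # a short input sequence x(0), x(1), x(2)
    a, y = rnn_step(a, x)
    outputs.append(y)
print(outputs)
```

Because a is threaded through the loop, the output at each step depends on the whole prefix of the sequence, not just the current input.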

Fig. 8. Autoencoder

Fig. 9. Generative Adversarial Networks (GANs)

RNN models are most commonly used in the fields of natural language processing, speech recognition, and music composition.

3) Autoencoders (AEs): AEs are another type of NNs, used to learn efficient data representations in an unsupervised way [55]. AEs encode the data using the bottleneck technique, which comprises dimensionality reduction to ignore the noise of the input data and an initial data regeneration from the encoded data, as summarized in Fig. 8. The initial input and generated output are then compared to assess the quality of the coding. AEs have been widely applied for dimensionality reduction [56] and anomaly detection [57].

4) Deep Generative Models: Deep generative models [58] are DL models that automatically discover and learn regularities in the input data in such a way that new samples can be generated. These models have shown various applications, especially in the field of computer vision. The most popular generative models are variational AEs (VAEs) and generative adversarial networks (GANs).

Of these, VAEs learn complicated data distributions using unsupervised NNs [59]. Although VAEs are a type of AE, their encoding distribution is regularized during training to ensure that their latent space (i.e., the representation of compressed data) has good properties for generating new data.

GANs are composed of two NNs in competition, where a generator network G learns to capture the data distribution and generate new data, and a discriminator model D estimates the probability that a given sample came from the generator rather than the initial training data, as summarized in Fig. 9 [60], [61]. The generator is thus used to produce misleading samples, and the discriminator determines whether a given sample is fake or real. The generator fools the discriminator by generating almost-real samples, and the discriminator counters the generator by improving its discriminative capability.

Fig. 10. Reinforcement Learning

D. Reinforcement Learning (RL)

This subset of ML involves a different learning method than supervised, semi-supervised, or unsupervised learning [64]. RL is about learning what actions to take in the hope of maximizing a reward signal. The agent must find which actions bring the most reward by trying each action, as shown in Fig. 10. These actions can affect immediate rewards as well as subsequent rewards. Some RL approaches require the introduction of DL; such approaches are part of deep RL (DRL).

One of the challenges encountered during RL is balancing the trade-off between exploration and exploitation. To get a maximum immediate reward, an RL agent must perform exploitation, i.e., choose actions that it has explored previously and found to be the best. To find such actions, it must explore the solution space, i.e., try new actions.
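The exploration-exploitation trade-off can be demonstrated with an epsilon-greedy agent on a toy multi-armed bandit (an illustrative sketch, not from the paper; the arm means, noise level, and epsilon are invented): with probability epsilon the agent explores a random action, and otherwise it exploits the action with the best estimated reward so far.

```python
# Illustrative sketch (not from the paper): the exploration-exploitation
# trade-off via an epsilon-greedy agent on a 3-armed bandit. With
# probability epsilon the agent explores a random arm; otherwise it
# exploits the arm with the best reward estimate so far.
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)        # pulls per arm
    estimates = [0.0] * len(true_means)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:                        # explore
            arm = rng.randrange(len(true_means))
        else:                                             # exploit
            arm = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = true_means[arm] + rng.gauss(0, 0.1)      # noisy reward signal
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = run_bandit([0.2, 0.5, 0.9])
print(counts)   # the best arm (mean 0.9) dominates after enough steps
```

Occasional exploration is what lets the agent discover that the third arm is best; pure exploitation would have locked onto whichever arm happened to look good first.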

All RL agents have explicit goals, are aware of some aspects of their environment, can take actions that impact their environment, and act despite significant uncertainty about their environment. Other than the agent and the environment, an RL system has four sub-elements: a policy, a reward signal, a value function, and, sometimes, a model of the environment.

Here, learning involves the agent determining the best method to map states of the environment to actions to be taken when in those states. After each action, the environment sends the RL agent a reward signal, whose maximization is the goal of the RL problem. Unlike a reward, which brings immediate evaluation of an action, a value function estimates the total amount of reward an agent can anticipate collecting in the longer term. Finally, a model of the environment mimics the behavior of the environment. Such models can be used for planning by allowing the agent to consider possible future situations before they occur. Methods for solving RL problems that utilize models are called model-based methods, whereas those without models are referred to as model-free methods.

E. Discussion

1) Model Selection: AI is a broad field that encompasses various approaches, each of which encompasses several algorithms. AI can be based on predefined rules or on ML. This learning can be supervised, semi-supervised, unsupervised, or reinforcement learning; in each of these categories, learning can be deep or shallow. As each approach offers something different to the world of AI, interest in each should depend on the given problem; a more complex approach or algorithm does not necessarily lead to better results. For example, a common assumption is that DL is better than shallow learning. Although this holds in several cases, especially for perceptual problems such as computer vision problems, it is not always applicable, as DL algorithms require greater computational resources and large datasets, which are not always available. Supervised learning is an effective approach when a fully labeled dataset is available. However, this is not always the case, as labeling data can be expensive, difficult, or even impossible. Under these circumstances, semi-supervised or unsupervised learning or RL is more applicable. Whereas unsupervised learning can find hidden patterns in non-labeled data, RL learns the best policy to achieve a certain task. Thus, unsupervised learning is a good tool to extract information from data, whereas RL is better suited for decision-making tasks. Therefore, the choice of an approach or an algorithm should not be based on its perceived elegance, but on matching the method to the characteristics of the problem at hand, including the goal, the quality of the data, the computational resources, the time constraints, and the prospective future updates. Solving a problem may require a combination of more than one approach.

Fig. 11. Training and test errors over the training time. Early stopping is a common technique to reduce overfitting by stopping the training process at an early stage, i.e., when the test error starts to increase remarkably.

After assessing the problem and choosing an approach, an algorithm must be chosen. Although ML has mathematical foundations, it remains an empirical research field. To choose the best algorithm, data science and ML researchers and engineers empirically compare different algorithms on a given problem. Algorithms are compared by splitting the data into a training set and a test set; the training set is used to train each model, whereas the test set is used to compare the models' outputs.
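As a minimal illustration of this train/test protocol (the data and the two candidate models are synthetic and chosen for brevity), each model is fitted on the training split and compared on the held-out split:

```python
import numpy as np

# Hypothetical sketch: compare two candidate models on a held-out test set.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 1.0, 200)           # ground truth: y ≈ 2x + noise

split = 150                                     # 75% train / 25% test
x_tr, y_tr, x_te, y_te = x[:split], y[:split], x[split:], y[split:]

# Model 1: predict the training mean (a trivial baseline).
pred_mean = np.full_like(y_te, y_tr.mean())
# Model 2: least-squares line fitted on the training set only.
slope, intercept = np.polyfit(x_tr, y_tr, 1)
pred_line = slope * x_te + intercept

mse = lambda p: float(np.mean((y_te - p) ** 2))
print(f"baseline MSE: {mse(pred_mean):.2f}, linear MSE: {mse(pred_line):.2f}")
```

The model with the lower test-set error is preferred, since the test split was never seen during fitting.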

In competitive data science, such as in Kaggle [42] competitions, where every increment matters, models are often combined to improve their overall results, and various ensemble techniques such as bagging [38], boosting [39], and adaptive boosting [62] are used.

2) Model Regularization: After the approach and algorithm have been selected, hyperparameter tuning is generally done to improve the output of the algorithm. In most cases, ML algorithms depend on many hyperparameters; choosing the best hyperparameters for a given problem thus allows for higher accuracy. This step can be done manually, by intuitively choosing better hyperparameters, or automatically, using various methods such as grid search and stochastic methods [63].
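A minimal grid search can be sketched as follows; the data, the closed-form ridge model, and the candidate penalty values are all hypothetical stand-ins:

```python
import numpy as np

# Illustrative grid search: pick the ridge penalty (a hyperparameter) that
# minimizes error on a held-out validation set.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(0, 0.5, 120)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]            # candidate hyperparameter values
val_err = {lam: float(np.mean((y_val - X_val @ ridge_fit(X_tr, y_tr, lam)) ** 2))
           for lam in grid}
best = min(val_err, key=val_err.get)            # hyperparameter with lowest val error
print("best lambda:", best)
```

Stochastic methods replace the exhaustive `grid` with randomly sampled candidate values, which scales better when many hyperparameters interact.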

A common trap in ML is overfitting, in which the machine stops learning (generalizing) and instead begins to memorize the data. When this occurs, the model achieves good results on seen data but fails when confronted with new data, i.e., the training error decreases while the test error increases, as shown in Fig. 11. Overfitting can be discovered by splitting the data into training, validation, and testing sets, where neither the validation set nor the testing set is used to train the model. The training set is used to train the model, the validation set is used to verify the model's predictions on unseen data and for hyperparameter tuning, and the testing set is used for the final evaluation of the model.

A variety of methods can be employed to reduce overfitting. It can be reduced by augmenting the size of the dataset, which is commonly done in computer vision. For example, image data can be augmented by applying transformations to the images, such as rotating, flipping, adding noise, or cutting out parts of the images. Although useful, this technique is not always applicable. Another method involves using cross-validation rather than splitting the data into a training set and a validation set. Early stopping, as shown in Fig. 11, consists of stopping the learning process before the algorithm begins to memorize the data. Ensemble learning is also commonly used.
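The early-stopping logic can be sketched in a few lines (illustrative data and model; the patience value is arbitrary): keep the weights that minimize validation error and stop once it has not improved for several consecutive epochs.

```python
import numpy as np

# Early-stopping sketch on an overfit-prone problem: few samples, many features.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 30))
y = X[:, 0] + rng.normal(0, 0.5, 60)             # only one informative feature
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

w = np.zeros(30)
lr, patience = 0.01, 10
best_val, best_w, bad_epochs = np.inf, w.copy(), 0
for epoch in range(1000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)    # gradient of MSE/2
    w -= lr * grad
    val = float(np.mean((X_val @ w - y_val) ** 2))   # monitor unseen data
    if val < best_val:
        best_val, best_w, bad_epochs = val, w.copy(), 0  # snapshot best weights
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                   # stop before memorization
            break
print(f"stopped at epoch {epoch}, best validation MSE {best_val:.3f}")
```

The weights that would be deployed are `best_w`, taken from the epoch with the lowest validation error rather than the last epoch.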

3) The hype and the hope: Rapid progress has been made in AI research, including its various subfields, over the last ten years as a result of exponentially increasing investments. However, few substantial developments have been made to address real-world problems; as such, many doubt that AI will have much influence on the state of technology and the world. Chollet [28] compared the progress of AI with that of the internet in 1995: the majority of people could not foresee the true potential, consequences, and pertinence of the internet, as it had yet to come to pass. Just as the internet experienced overhyping and a subsequent funding crash in the early 2000s before its widespread implementation and application, AI may likewise become an integral part of global technologies. The authors thus believe that the inevitable progress of AI is likely to have long-term impacts and that AI will likely become a major part of diverse applications across all scientific fields, from mathematics to satellite communication.

III. ARTIFICIAL INTELLIGENCE FOR SATELLITE COMMUNICATION

A. Beam hopping

1) Definition & limitations: Satellite resources are expensive and thus require efficient systems involving optimization and time-sharing. In conventional satellite systems, the resources are fixed and uniformly distributed across beams [65]. As a result, conventional large multi-beam satellite systems have shown a mismatch between the offered and requested resources; some spot beams face a demand higher than the offered capacity, leaving demand pending (i.e., hot-spots), while others face a demand lower than the installed capacity, leaving offered capacity unused (i.e., cold-spots). Thus, to improve multi-beam satellite communication, on-board flexible allocation of satellite resources over the service coverage area is necessary.

Fig. 12. The demand–capacity mismatch among beams demonstrates the limitation of using fixed and uniformly distributed resources across all beams in a multi-beam satellite system.

Fig. 13. Simplified architecture of beam hopping (BH).

Beam hopping (BH) has emerged as a promising technique for flexibly managing non-uniform, time-variant traffic requests over the coverage area throughout the day, the year, and the lifetime of the satellite [65], [66]. BH involves dynamically illuminating cells with a small number of active beams, as summarized in Fig. 13, thus using all available on-board satellite resources to serve only a subset of beams at a time. The selection of this subset is time-variant and depends on the traffic demand, following a time-space-dependent BH illumination pattern. The illuminated beams are active only long enough to fill the request for each beam. The challenging task in BH systems is thus to decide which beams should be activated and for how long, i.e., the BH illumination pattern; this responsibility is left to the resource manager, who forwards the selected pattern to the satellite via telemetry, tracking, and command [67].
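The core scheduling idea can be illustrated with a toy heuristic (all numbers are hypothetical, and this greedy rule is far simpler than the optimization and learning methods discussed below): in each time slot, illuminate the K beams with the largest outstanding demand, so hot-spots are served first.

```python
# Toy beam-hopping sketch: only K of the N beams are illuminated per slot.
demand = [30.0, 5.0, 12.0, 45.0, 8.0, 20.0]   # per-beam demand (hypothetical units)
K, capacity_per_beam = 2, 10.0                # active beams per slot, capacity each

pattern_history = []                          # the time-variant illumination pattern
slot = 0
while any(d > 0 for d in demand) and slot < 100:
    # illuminate the K beams with the highest remaining demand (hot-spots first)
    active = sorted(range(len(demand)), key=lambda b: demand[b], reverse=True)[:K]
    for b in active:
        demand[b] = max(0.0, demand[b] - capacity_per_beam)
    pattern_history.append(active)
    slot += 1

print(f"all demand served after {slot} slots")
```

The sequence stored in `pattern_history` is exactly an illumination pattern: which beams are active in each slot and, implicitly, for how long each beam stays served.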

Of the various methods that researchers have proposed to realize BH, most have been based on classical optimization algorithms. For example, Angeletti et al. [68] demonstrated several performance advantages of BH and proposed the use of a genetic algorithm (GA) to design the BH illumination pattern; Anzalchi et al. [69] also illustrated the merits of BH and compared the performance of BH and non-hopped systems. Alberti et al. [70] proposed a heuristic iterative algorithm to obtain a solution to the BH illumination design problem. BH has also been used to decrease the number of transponder amplifiers for Terabit/s satellites [71]. An iterative algorithm has also been proposed to maximize the overall offered capacity under given beam demand and power constraints in a joint BH design and spectrum assignment [72]. Alegre et al. [73] designed two heuristics to allocate capacity resources based on the per-beam traffic request, then further discussed long- and short-term traffic variations and suggested techniques to deal with both [74]. Liu et al. [75] studied techniques for controlling the rate of arriving traffic in BH systems. The QoS-delay fairness equilibrium has also been addressed in BH satellites [76]. Joint BH schemes were proposed by Shi et al. [77] and Ginesi et al. [78] to further improve the efficiency of on-board resource allocation. To find the optimal BH illumination design, Cocco et al. [79] used a simulated annealing algorithm.

Although optimization algorithms have achieved satisfactory results in terms of the flexibility and delay reduction of BH systems, some difficulties remain. As the search space grows dramatically with the number of beams, an inherent difficulty in designing the BH illumination pattern is finding the optimal design rather than one of many local optima [72]. For satellites with hundreds or thousands of beams, classical optimization algorithms may require long computation times, which is impractical in many scenarios.

Additionally, classical optimization algorithms, including GAs and other heuristics, require revision whenever the scenario changes even moderately; this leads to higher computational complexity, which is impractical for on-board resource management.

2) AI-based solutions: Seeking to overcome these limitations and enhance the performance of BH, some researchers have proposed AI-based solutions. Some of these solutions are fully based on a learning approach, i.e., end-to-end learning, in which the BH algorithm itself is a learning algorithm. Others improve optimization algorithms by adding a learning layer, thus combining learning and optimization.

To optimize the transmission delay and system throughput in multi-beam satellite systems, Hu et al. [80] formulated an optimization problem and modeled it as a Markov decision process (MDP). DRL was then used to solve the BH illumination design and optimize the long-term accumulated reward of the modeled MDP. The proposed DRL-based BH algorithm reduced the transmission delay by up to 52.2% and increased the system throughput by up to 11.4% compared with previous algorithms.

To combine the advantages of end-to-end learning and optimization approaches for a more efficient BH illumination pattern design, Lei et al. [67] suggested a learning-and-optimization algorithm for BH illumination pattern selection, in which a learning approach based on fully connected NNs was used to predict non-optimal BH patterns and thus address the difficulties faced when applying an optimization algorithm to a large search space. The learning-based prediction reduces the search space, so the optimization can be restricted to a smaller set of promising BH patterns.
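This search-space-reduction idea can be sketched as follows, with simple hand-written functions standing in for the trained NN predictor and for the expensive exact system evaluation (both hypothetical):

```python
from itertools import combinations

# Sketch of "learn to prune, then optimize": a cheap predictor ranks all
# candidate illumination patterns, and the expensive exact objective is
# evaluated only on a shortlist of promising candidates.
demand = [30.0, 5.0, 12.0, 45.0, 8.0, 20.0]
K = 2

def cheap_score(pattern):          # stand-in for a learned (NN) predictor
    return sum(demand[b] for b in pattern)

def exact_objective(pattern):      # stand-in for a costly full-system evaluation
    served = sum(min(demand[b], 10.0) for b in pattern)
    a, b = pattern
    if abs(a - b) == 1:            # crude penalty for adjacent-beam interference
        served -= 5.0
    return served

all_patterns = list(combinations(range(len(demand)), K))        # full search space
shortlist = sorted(all_patterns, key=cheap_score, reverse=True)[:3]
best = max(shortlist, key=exact_objective)                      # optimize on the shortlist
print("evaluated exactly:", len(shortlist), "of", len(all_patterns), "->", best)
```

With hundreds of beams the full space is astronomically large, which is precisely why the expensive objective can only be afforded on the learned shortlist.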

Researchers have also employed multi-objective DRL (MO-DRL) for the DVB-S2X satellite. Under realistic conditions, Zhang et al. [81] demonstrated that their low-complexity MO-DRL algorithm could ensure the fairness of each cell and improve the throughput over previous techniques, including DRL [79], by 0.172%. In contrast, the complexity of a GA producing a similar result is about 110 times that of the MO-DRL model. Hu et al. [82] proposed a multi-action selection technique based on double-loop learning and obtained a multi-dimensional state using a DNN. Their results showed that the proposed technique can achieve different objectives simultaneously and can allocate resources intelligently by adapting to user requirements and channel conditions.

B. Anti-jamming

1) Definition & limitations: Satellite communication systems are required to cover a wide area and provide high-speed communication and high-capacity transmission. In tactical satellite communication systems, however, reliability and security are the prime concerns; an anti-jamming (AJ) capability is therefore essential. Jamming attacks can be launched at key locations and crucial devices in a satellite network to reduce or even paralyze its throughput. Several AJ methods have thus been designed to mitigate possible attacks and guarantee secure satellite communication.

The frequency-hopping (FH) spread spectrum method has been preferred in many prior tactical satellite communication systems [83], [84]. For the dehop–rehop transponder method employing FH frequency-division multiple access (FH-FDMA), Bae et al. [85] developed an efficient synchronization method with an AJ capability.

Most prior AJ techniques are not based on learning and thus cannot cope with clever jamming techniques capable of continuously adjusting the jamming methodology through interaction and learning. Advances in AI algorithms offer tools for diverse and intelligent learning-based jamming attacks, and thus present a serious threat to satellite communication reliability. In two such examples, a smart jamming formulation automatically adjusted the jamming channel [86], whereas a smart jammer maximized the jamming effect by adjusting both the jamming power and channel [87]. In addition, attacks could be mounted by multiple jammers simultaneously implementing intelligent, learning-based jamming attacks. Although this may be an unlikely scenario, it has not yet been seriously considered. Further, most researchers have focused on defending against jamming in the frequency domain rather than on space-based AJ techniques, such as AJ routing.

2) AI-based solutions: By using a long short-term memory (LSTM) network, a DL RNN, to learn the temporal trend of a signal, Lee et al. [88] demonstrated a reduction of the overall synchronization time in the previously discussed FH-FDMA scenario [85]. Han et al. [89] proposed the use of a learning approach to block smart jamming in the Internet of Satellites (IoS) using a space-based AJ method, AJ routing, summarized in Fig. 14. By combining game-theoretic modeling with RL and modeling the interactions between smart jammers and satellite users as a Stackelberg AJ routing game, they demonstrated how to use DL to deal with the large decision space caused by the high dynamics of the IoS, and RL to deal with the interplay between the satellites and the smart jamming environment. DRL thus made it possible to solve the routing selection issue for the heterogeneous IoS while preserving an available routing subset to simplify the decision space of the Stackelberg AJ routing game. Based on this routing subset, a popular RL algorithm, Q-learning, was then used to respond rapidly to intelligent jamming and adapt AJ strategies.

Fig. 14. Space-based anti-jamming (AJ) routing. The red line represents the detected jammed path, and the green line represents the suggested path [89].

Han et al. [90] later combined game-theoretic modeling and RL to obtain AJ policies in the dynamic, unknown jamming environment of the Satellite-Enabled Army IoT (SatIoT). Here, a distributed dynamic AJ coalition-formation game was examined to decrease energy use in the jamming environment, and a hierarchical AJ Stackelberg game was proposed to express the confrontational interaction between jammers and SatIoT devices. Finally, RL-based algorithms were used to obtain sub-optimal AJ policies suited to the jamming environment.

C. Network Traffic Forecasting

1) Definition & limitations: Network traffic forecasting is a proactive approach that aims to guarantee reliable and high-quality communication, as the predictability of traffic is important in many satellite applications, such as congestion control, dynamic routing, dynamic channel allocation, network planning, and network security. Satellite network traffic is self-similar and demonstrates long-range dependence (LRD) [91]. To achieve accurate forecasting, it is therefore necessary to consider this self-similarity. However, forecasting models for terrestrial networks based on self-similarity have a high computational complexity; as on-board satellite computational resources are limited, terrestrial models are not suitable for satellites. An efficient traffic forecasting design for satellite networks is thus required.

Several researchers have performed traffic forecasting for both terrestrial and satellite networks; the techniques used have included the Markov [92], autoregressive moving average (ARMA) [93], autoregressive integrated moving average (ARIMA) [94], and fractional ARIMA (FARIMA) [95] models. By using empirical mode decomposition (EMD) to decompose the network traffic and then applying the ARMA forecasting model, Gao et al. [96] demonstrated remarkable improvement.

The two major difficulties facing satellite traffic forecasting are the LRD of satellite networks and the limited on-board computational resources. Due to the LRD property of satellite networks, short-range-dependence (SRD) models have failed to achieve accurate forecasting. Although previous LRD models have achieved better results than SRD models, they suffer from high complexity. To address these issues, researchers have turned to AI techniques.

2) AI-based solutions: Katris and Daskalaki [95] combined FARIMA with NNs for internet traffic forecasting, whereas Pan et al. [97] combined differential evolution with NNs for network traffic prediction. Due to the high complexity of classical NNs, a least-squares SVM, an optimized version of the SVM, has also been used for forecasting [98]. By applying principal component analysis (PCA) to reduce the input dimensions and then a generalized regression NN, Ziluan and Xin [99] achieved higher-accuracy forecasting with less training time. Zhenyu et al. [100] used traffic forecasting as part of their distributed routing strategy for LEO satellite networks. An extreme learning machine (ELM) has also been employed to forecast the traffic load of satellite nodes before routing [101]. Bie et al. [91] used EMD to decompose satellite traffic with LRD into a series with SRD and a tone frequency to decrease the forecasting complexity and increase the speed. Their combined EMD, fruit-fly optimization, and ELM methodology achieved more accurate forecasting at a higher speed than prior approaches.
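As a much simpler baseline than the models above, a short autoregressive forecaster fitted by least squares illustrates the basic one-step-ahead prediction setup (the synthetic traffic trace with a daily-like cycle and the AR order are illustrative choices):

```python
import numpy as np

# Minimal AR(p) forecasting sketch: fit AR coefficients by least squares on
# the history, then predict the next sample.
rng = np.random.default_rng(3)
t = np.arange(300)
traffic = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, 300)

p = 3                                              # AR order
# Lagged design matrix: row i = [x_i, x_{i+1}, x_{i+2}, 1], target x_{i+3}
X = np.column_stack([traffic[i:len(traffic) - p + i] for i in range(p)] +
                    [np.ones(len(traffic) - p)])
y = traffic[p:]
coef, *_ = np.linalg.lstsq(X[:-1], y[:-1], rcond=None)   # hold out the last step
pred = X[-1] @ coef                                       # one-step-ahead forecast
print(f"forecast {pred:.2f} vs actual {y[-1]:.2f}")
```

SRD models of this kind capture short cycles well; the LRD behavior of real satellite traffic is exactly what pushes researchers toward the EMD- and learning-based decompositions described above.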

D. Channel Modeling

1) Definition & limitations: A channel model is a mathematical representation of the effect of a communication channel through which wireless signals propagate; it is modeled as the impulse response of the channel in the frequency or time domain.

A wireless channel presents a variety of challenges for reliable high-speed communication, as it is vulnerable to noise, interference, and other channel impediments, including path loss and shadowing. Path loss is caused by the dissipation of the power emitted by the transmitter and by propagation-channel effects, whereas shadowing is caused by obstacles between the transmitter and receiver that absorb power [102].

Precise channel models are required to assess the performance of mobile communication systems and hence to enhance coverage for existing deployments. Channel models are also useful for forecasting propagation in planned deployment outlines, which allows for assessment before deployment and for optimizing the coverage and capacity of operational systems. For a small number of possible transmitter positions, extensive outdoor environment evaluation can be done to estimate the parameters of the channel [103], [104]. As more advanced technologies have been adopted in wireless communication, more advanced channel modeling has been required; hence the use of stochastic models that are computationally efficient while providing satisfactory results [105].

Fig. 15. Channel parameter prediction. 2D aerial/satellite images are used as input to a deep convolutional neural network (CNN) to predict channel parameters. The model is trained separately for each parameter.

Ray tracing is also used for channel modeling; it requires 3D images, which are generally generated using computer vision methods, including stereo-vision-based depth estimation [106], [107], [108], [109].

A model proposed for urban environments requires features including road widths, street orientation angles, and building heights [110]. A simplified model was then proposed by Fernandes and Soares [111] that requires only the proportion of building occupation between the receiver and transmitter, which can be computed from segmented images manually or automatically [112].

Despite the satisfactory performance of some of the listed techniques, they still have many limitations. For example, the 3D images required by ray tracing are not generally available, and their generation is not computationally efficient. Even when the images are available, ray tracing is computationally costly and data-exhaustive, and is therefore not appropriate for real-time coverage-area optimization. Further, the detailed data required for the model presented by Cichon and Kurner [110] are often unavailable.

2) AI-based solutions: Some early applications of AI for path loss forecasting were based on classical ML algorithms such as SVMs [113], [114], NNs [115]–[120], and decision trees [121]. Interested readers are referred to a survey of ML-based path loss prediction approaches for further details [122].

However, although previous ML efforts have shown great results, many require 3D images. Researchers have thus recently shifted their attention to using DL algorithms with 2D satellite/aerial images for path loss forecasting. For example, Ates et al. [123] approximated channel parameters, including the standard deviation of shadowing and the path loss exponent, from satellite images using a deep CNN without any added input parameters, as shown in Fig. 15.

By using a DL model on satellite images and other input parameters to predict the reference signal received power (RSRP) at specific receiver locations in a specific scenario/area, Thrane et al. [124] demonstrated gain improvements of ≈ 1 and ≈ 4.7 at 811 MHz and 2630 MHz, respectively, over previous techniques, including ray tracing. Similarly, Ahmadien et al. [125] applied DL to satellite images for path loss prediction, although they focused only on satellite images without any supplemental features and worked on more generalized data. Despite the practicality of this method, as it needs only satellite images to forecast the path loss distribution, 2D images will not always be sufficient to characterize the 3D structure. In such cases, more features (e.g., building heights) must be input into the model.

E. Telemetry Mining

1) Definition & limitations: Telemetry is the process of recording and transferring measurements for control and monitoring. In satellite systems, on-board telemetry helps mission control centers track the platform's status, detect abnormal events, and control various situations.

Satellite failure can be caused by a variety of factors; most commonly, failure is due to the harsh environment of space, i.e., heat, vacuum, and radiation. The radiation environment can affect critical components of a satellite, including the communication system and the power supply.

Telemetry processing enables tracking of the satellite's behavior to detect and minimize failure risks. Finding correlations, recognizing patterns, detecting anomalies, classification, forecasting, and clustering are applied to the acquired data for fault diagnosis and reliable satellite monitoring.

One of the earliest and simplest techniques used in telemetry analysis is limit checking. The method is based on setting a precise range for each feature (e.g., temperature, voltage, and current) and then monitoring the variation of each feature to detect out-of-range events. The main advantage of this algorithm is its simplicity, as limits can be chosen and updated easily to control spacecraft operation.
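Limit checking is simple enough to sketch directly; the features, ranges, and readings below are hypothetical:

```python
# Limit checking: each telemetry feature has a fixed valid range, and any
# sample outside its range raises a flag.
LIMITS = {                      # feature -> (low, high), hypothetical values
    "temperature_C": (-20.0, 60.0),
    "bus_voltage_V": (26.0, 34.0),
    "load_current_A": (0.0, 5.0),
}

def check_frame(frame):
    """Return the list of (feature, value) pairs that violate their limits."""
    return [(k, v) for k, v in frame.items()
            if not (LIMITS[k][0] <= v <= LIMITS[k][1])]

frame = {"temperature_C": 71.2, "bus_voltage_V": 28.1, "load_current_A": 1.4}
print(check_frame(frame))   # → [('temperature_C', 71.2)]
```

Updating spacecraft monitoring then amounts to editing the `LIMITS` table, which is exactly the simplicity, and the crudeness, of the method.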

Complicated spacecraft with complex and advanced applications challenge current space telemetry systems. Narrow wireless bandwidth and fixed-length-frame telemetry make transmitting the rapidly growing telemetry volumes difficult. In addition, the discontinuous, short-term contacts between spacecraft and ground stations limit the data transmission capability. Analyzing, monitoring, and interpreting huge numbers of telemetry parameters can be impossible due to the high complexity of the data.

2) AI-based solutions: In recent years, AI techniques have been widely considered for space missions with telemetry. Satellite health monitoring has been performed using probabilistic clustering [126], dimensionality reduction and hidden Markov models [127], and regression trees [128], whereas others have developed anomaly detection methods using the k-nearest neighbors (kNN), SVM, and LSTM algorithms, testing them on the telemetry of Centre National d'Etudes Spatiales spacecraft [129]–[131].

Further, space operations assistance has been developed for diverse space applications using data-driven [132] and model-based [133] monitoring methods. In their study of the use of AI for fault diagnosis in general and for space utilization in particular, Sun et al. [134] argued that the most promising direction is the use of DL and suggested its usage for fault diagnosis for space utilization in China.

By comparing different ML algorithms on telemetry data from the Egyptsat-1 satellite, Ibrahim et al. [135] demonstrated the high prediction accuracy of the LSTM, ARIMA, and RNN models. They suggested simple linear regression for forecasting critical satellite features for short-lifetime satellites (i.e., 3–5 years) and NNs for long-lifetime satellites (15–20 years).

Fig. 16. Representation of ionospheric scintillation, where distortion occurs during signal propagation. The blue, green, and red lines show the line-of-sight signal paths from the satellite to the earth antennas, the signal fluctuation, and the signal delay, respectively.

Unlike algorithms designed to operate on the ground in the mission control center, the self-learning classification algorithm proposed by Wan et al. [136] achieves on-board telemetry data classification with low computational complexity and low time latency.

F. Ionospheric Scintillation Detecting

1) Definition & limitations: Signals transmitted by satellites toward the earth can be notably affected by their propagation through the atmosphere, especially the ionosphere, the ionized part of the upper atmosphere, which is distinguished by an elevated density of free electrons (Fig. 16). Potential irregularities and gradients of ionization can distort the signal phase and amplitude, in a process known as ionospheric scintillation.

In particular, propagation through the ionosphere can distort global navigation satellite system (GNSS) signals, leading to significant errors in GNSS-based applications. GNSSs are radio-communication satellite systems that allow a user to compute the local time, velocity, and position anywhere on Earth by processing signals transmitted by the satellites and conducting trilateration [137]. GNSSs are also used in a wide variety of applications, such as scientific observations.

Because of the low received power of GNSS waves, any errors significantly threaten the accuracy and credibility of positioning systems. GNSS signals propagating through the ionosphere face the possibility of both a temporal delay and scintillation. Although delay compensation methods are applied in all GNSS receivers [137], scintillation remains a considerable issue, as its quasi-random nature makes it difficult to model [138]. Ionospheric scintillation thus remains a major limitation to high-accuracy GNSS applications, and the accurate detection of scintillation is required to improve the credibility and quality of GNSSs [139]. To observe the signals, which are a source of knowledge for interpreting and modeling the upper layers of the atmosphere, and to raise alerts and take countermeasures for GNSS-based applications, networks of GNSS receivers have been installed at both high and low latitudes, where scintillation is expected to occur [140], [141]. Robust receivers and proper scintillation-detection algorithms are thus both required [142].

To evaluate the magnitude of scintillation affecting a signal, many researchers have employed simple event triggers based on the comparison of the amplitude and phase of two signals over a defined interval [143]. Other proposed alternatives have included using wavelet techniques [144], decomposing the carrier-to-noise power density ratio via adaptive frequency-time techniques [145], and assessing the statistical properties of histograms of collected samples [146].
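A common amplitude trigger of this kind uses the S4 index, the normalized standard deviation of signal intensity over a window, S4 = sqrt((⟨I²⟩ − ⟨I⟩²)/⟨I⟩²). The sketch below applies it to synthetic data; the 0.4 threshold and the signal statistics are illustrative:

```python
import numpy as np

# Simple amplitude-scintillation trigger: compute the S4 index over a window
# of intensity samples and compare it against a fixed threshold.
def s4_index(intensity):
    i = np.asarray(intensity, dtype=float)
    return float(np.sqrt((np.mean(i ** 2) - np.mean(i) ** 2) / np.mean(i) ** 2))

rng = np.random.default_rng(4)
quiet = np.exp(0.05 * rng.standard_normal(1000))   # calm channel (synthetic)
scint = np.exp(0.7 * rng.standard_normal(1000))    # strongly fluctuating channel

THRESHOLD = 0.4                                    # illustrative trigger level
for name, sig in [("quiet", quiet), ("scintillated", scint)]:
    s4 = s4_index(sig)
    print(f"{name}: S4 = {s4:.2f}, flagged = {s4 > THRESHOLD}")
```

A fixed threshold like this is exactly the kind of trigger whose limitations are discussed next: weak or transient events hover near the threshold, and multi-path can inflate S4 without any ionospheric cause.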

Using simple predefined thresholds to evaluate the magnitude of scintillation can be misleading due to its complexity. Missing the transient phases of events can delay the raising of caution flags, and weak events with high variance can be missed entirely. Further, it can be difficult to distinguish signal distortions caused by scintillation from those caused by other phenomena, such as multi-path. The other proposed alternatives, meanwhile, depend on complex and computationally costly operations or on customized receiver architectures.

2) AI-based solutions: Recent studies have shown that AI can be utilized for the detection of scintillation. For example, Rezende et al. [147] surveyed data mining methods that rely on observing and integrating GNSS receivers.

A technique based on the SVM algorithm has been suggested for amplitude scintillation detection [148], [149] and later expanded to phase scintillation detection [150], [151].

By using decision trees and RFs to systematically detect ionospheric scintillation events affecting the amplitude of GNSS signals, Linty et al.'s [152] methodology outperformed state-of-the-art methodologies in terms of accuracy (99.7%) and F-score (99.4%), reaching the level of manual human-driven annotation.

More recently, Imam and Dovis [153] proposed the use of decision trees to differentiate between ionospheric scintillation and multi-path in GNSS scintillation data. Their model, which annotates the data as scintillated, multi-path affected, or clean GNSS signal, demonstrated an accuracy of 96%.

G. Managing Interference

1) Definition & limitations: Interference management is mandatory for satellite communication operators, as interference negatively affects the communication channel, resulting in reduced QoS, lower operational efficiency, and loss of revenue [154]. Moreover, interference is a common event that is becoming more frequent with the increasing congestion of the satellite frequency bands, as more countries launch satellites and more applications are expected. With the growing number of users sharing the same frequency band, the possibility of interference increases, as does the risk of intentional interference, as discussed in Section III-B.

Interference management is thus essential to preserve high-quality and reliable communication systems; management includes the detection, classification, and suppression of interference, as well as the application of techniques to minimize its occurrence.


Fig. 17. Satellite selection and antenna adjustment

Interference detection is a well-studied subject that has been addressed over the past few decades [155], [156], especially for satellite communication [154], [157].

However, researchers have commonly relied on the decision theory of hypothesis testing, in which specific knowledge of the signal characteristics and the channel model is needed. Given the diversity of contemporary wireless standards, designing a specific detector for each signal category is a fruitless approach.

2) AI-based solutions: To minimize interference, Liu et al. [158] suggested the use of AI for moving terminals and stations in satellite-terrestrial networks by proposing a framework combining different AI approaches, including SVM, unsupervised learning, and DRL, for satellite selection, antenna pointing, and tracking, as summarized in Fig. 17.

Another AI-based approach performs automatic real-time interference detection by forecasting the signal spectrum that would be received in the absence of anomalies, using an LSTM trained on historical anomaly-free spectra [159]. The predicted spectrum is then compared with the received signal using a designed metric to detect anomalies.
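The forecast-and-compare scheme can be sketched as follows. For brevity, an exponential moving average stands in for the LSTM predictor, and the spectra, the max-deviation metric, and the 5x-median threshold are all illustrative assumptions rather than the choices of [159]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": each time step is a 64-bin power spectrum; an
# anomaly injects a narrowband interference spike at one time step.
T, B = 300, 64
base = 1.0 + 0.2 * np.sin(np.linspace(0, 4 * np.pi, B))
spectra = base + 0.05 * rng.standard_normal((T, B))

alpha = 0.3  # smoothing factor of the stand-in one-step predictor

# Calibrate the alarm threshold on anomaly-free history
pred = spectra[0].copy()
cal_errors = []
for t in range(1, 200):
    cal_errors.append(np.max(np.abs(spectra[t] - pred)))
    pred = alpha * spectra[t] + (1 - alpha) * pred
threshold = 5 * np.median(cal_errors)

# Online detection: skip the model update when an anomaly is flagged
alarms = []
for t in range(200, T):
    received = spectra[t].copy()
    if t == 250:
        received[30:33] += 3.0        # injected interference
    err = np.max(np.abs(received - pred))
    if err > threshold:
        alarms.append(t)
    else:
        pred = alpha * received + (1 - alpha) * pred
print("alarm time steps:", alarms)
```

Replacing the moving-average predictor with a trained LSTM changes only the forecasting step; the compare-against-threshold logic is unchanged.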

Henarejos et al. [160] proposed two AI-based approaches, DNN AEs and LSTMs, for detecting and classifying interference, respectively. In the former, the AE is trained with interference-free signals and tested against other signals without interference to obtain practical thresholds. The difference in error between signals with and without interference is then exploited to detect the presence of interference.
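A minimal sketch of AE-based detection via reconstruction error, using scikit-learn's MLPRegressor as an undercomplete autoencoder; the signal family, bottleneck size, and percentile threshold are assumptions for illustration, not the architecture of [160]:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic interference-free feature vectors (hypothetical): sinusoids
# of random phase plus light noise.
def clean_batch(n, d=32):
    t = np.linspace(0, 1, d)
    phase = rng.uniform(0, 2 * np.pi, size=(n, 1))
    return np.sin(2 * np.pi * 3 * t + phase) + 0.05 * rng.standard_normal((n, d))

X_train = clean_batch(800)

# Undercomplete AE: an MLP trained to reproduce its own input
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def recon_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Practical threshold from clean validation signals
threshold = np.percentile(recon_error(clean_batch(200)), 99)

X_clean = clean_batch(100)
X_interf = clean_batch(100)
X_interf[:, 10:14] += 3.0            # narrowband interference burst
detected_clean = np.mean(recon_error(X_clean) > threshold)
detected_interf = np.mean(recon_error(X_interf) > threshold)
print(f"false alarms: {detected_clean:.2f}, detections: {detected_interf:.2f}")
```

Because the AE is trained only on clean signals, interference it cannot reconstruct inflates the error above the clean-data threshold.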

H. Remote Sensing (RS)

1) Definition & limitations: RS is the process of extracting information about an area, object, or phenomenon by processing its reflected and emitted radiation at a distance, generally from a satellite or aircraft.

RS has a wide range of applications in multiple fields, including land surveying, geography, geology, ecology, meteorology, oceanography, the military, and communication. As RS offers the possibility of monitoring areas that are dangerous, difficult, or impossible to access, including mountains, forests, oceans, and glaciers, it is a popular and active research area.

2) AI-based solutions: The revolution in computer vision capabilities caused by DL has driven the development of RS through the adoption of state-of-the-art DL algorithms on satellite images; image classification for RS has become one of the most popular tasks in computer vision. For example, Kussul et al. [161] used DL to classify land coverage and crop types using RS images from Landsat-8 and Sentinel-1A over a test site in Ukraine. Zhang et al. [162] combined DNNs by using a gradient-boosting random CNN for scene classification. More recently, Chirayath et al. [163] proposed the combination of kNN and CNN to map coral reef marine habitats worldwide with RS imaging. RS and AI have also been used in communication theory applications, such as those discussed in Section III.D [123], [124], [125].
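The core CNN idea behind such scene classifiers, convolutional filtering followed by pooling and a classifier, can be sketched with a single hand-crafted filter. The 16x16 patches below are synthetic stand-ins (smooth gradients for cropland, high-frequency texture for built-up areas), not data from [161]:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def conv2d(img, kernel):
    """Valid-mode 2D convolution with one CNN-style filter."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "satellite patches" (illustrative only)
def smooth_patch():       # cropland-like: slowly varying intensity
    return np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16)) \
        + 0.05 * rng.standard_normal((16, 16))

def textured_patch():     # urban-like: high-frequency texture
    return rng.standard_normal((16, 16))

edge = np.array([[1.0, -1.0]])         # horizontal-gradient filter
def features(patch):
    act = np.abs(conv2d(patch, edge))  # filter response map
    return [act.mean(), act.max()]     # global pooling

X = np.array([features(smooth_patch()) for _ in range(200)]
             + [features(textured_patch()) for _ in range(200)])
y = np.r_[np.zeros(200), np.ones(200)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
print(f"scene-classification accuracy: {acc:.2f}")
```

A real CNN learns many such filters end-to-end instead of fixing them by hand, but the convolve-pool-classify pipeline is the same.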

Many object detection and recognition applications have been developed using AI on RS images [164]. Recently, Zhou et al. [165] proposed the use of YOLOv3 [166], [167], a CNN-based object detection algorithm, for vehicle detection in RS images. Others have proposed the use of DL for other object detection tasks, such as building [168], airplane [169], cloud [170], [171], [172], ship [173], [174], and military target [175] detection. AI has also been applied to segment and restore RS images, e.g., in cloud restoration, during which ground regions shadowed by clouds are restored.

Recently, Zheng et al. [176] proposed a two-stage cloud removal method in which U-Net [177] and GANs are used to perform cloud segmentation and image restoration, respectively.

AI has also been proposed for the on-board scheduling of agile Earth-observing satellites, as autonomy improves their performance and allows them to acquire more images by relying on on-board scheduling for quick decision-making. By comparing RF, NNs, and SVM to prior learning- and non-learning-based approaches, Lu et al. [178] demonstrated that RF improved both the solution quality and the response time.

I. Behavior Modeling

1) Definition & limitations: Owing to the increasing numbers of active and inactive (debris) satellites of diverse orbits, shapes, sizes, orientations, and functions, it is becoming infeasible for analysts to simultaneously monitor all satellites. Therefore, AI, especially ML, could play a major role in helping to automate this process.

2) AI-based solutions: Mital et al. [179] discussed the potential of ML algorithms to model satellite behavior. Supervised models have been used to determine satellite stability [180], whereas unsupervised models have been used to detect anomalous behavior and a satellite's location [181], and an RNN has been used to predict satellite maneuvers over time [182].

Accurate satellite pose estimation, i.e., identifying a satellite's relative position and attitude, is critical in several space operations, such as debris removal, inter-spacecraft communication, and docking. A recent proposal for satellite pose estimation from a single image via combined ML and geometric optimization by Chen et al. [183] won first place in the Kelvins pose estimation challenge organized by the European Space Agency [184].


Fig. 18. Space-air-ground integrated networks (SAGINs) [26]

The amount of space debris has grown immensely over the last few years, posing a serious threat to space missions due to the high velocity of the debris. It is thus essential to classify space objects and apply collision avoidance techniques to protect active satellites. As such, Jahirabadkar et al. [185] presented a survey of diverse AI methodologies for the classification of space objects using light curves as a differentiating property.

Yadava et al. [186] employed NNs and RL for on-board attitude determination and control; their method effectively provided the torque needed to stabilize a nanosatellite along three axes.

To avoid catastrophic events caused by battery failure, Ahmed et al. [187] developed an on-board remaining-battery-life estimation system using ML and logical analysis of data approaches.

J. Space-Air-Ground Integrating

1) Definition & limitations: Recently, notable advances have been made in ground communication systems to provide users with higher-quality internet access. Nevertheless, due to the restricted capacity and coverage area of networks, such services are not available everywhere at all times, especially for users in rural or disaster areas.

Although terrestrial networks have the most resources and the highest throughput, non-terrestrial communication systems have a much broader coverage area. However, non-terrestrial networks have their own limitations; e.g., satellite communication systems have a long propagation latency, and air networks have a narrow capacity and unstable links.

To supply users with better and more flexible end-to-end services by taking advantage of the way these networks complement each other, researchers have suggested the use of space-air-ground integrated networks (SAGINs) [10], which include satellites in space; balloons, airships, and UAVs in the air; and the ground segment, as shown in Fig. 18.

The multi-layered satellite communication system, which consists of GEO, MEO, and LEO satellites, can use multicast and broadcast methods to improve network capacity, crucially easing the growing traffic burden [10], [26]. As SAGINs allow packet transmission to destinations via multiple paths of diverse qualities, they can offer different packet transmission methods to meet diverse service demands [26].

However, the design and optimization of SAGINs are more challenging than those of conventional ground communication systems owing to their inherent self-organization, time-variability, and heterogeneity [10]. A variety of factors that must be considered when designing optimization techniques have thus been identified [10], [26]. For example, the diverse propagation media, the sharing of frequency bands by different communication types, the high mobility of the space and air segments, and the inherent heterogeneity among the three segments make the network control and spectrum management of SAGINs arduous. The high mobility results in frequent handoffs, which makes safe routing more difficult to realize and leaves SAGINs more exposed to jamming. Further, as optimizing energy efficiency is also more challenging than in standard terrestrial networks, energy management algorithms are required as well.

2) AI-based solutions: In their discussion of challenges facing SAGINs, Kato et al. [26] proposed the use of a CNN for the routing problem to optimize the SAGIN's overall performance using traffic patterns and the remaining buffer size of GEO and MEO satellites.

Optimizing the satellite selection and the UAV location to maximize the end-to-end data rate of the source-satellite-UAV-destination communication is challenging due to the vast number of orbiting satellites and the resulting time-varying network architecture. To address this problem, Lee et al. [188] jointly optimized the source-satellite-UAV association and the location of the UAV via DRL. Their suggested technique achieved up to a 5.74x higher average data rate than a direct communication baseline in the absence of UAVs and satellites.

For offloading computation-intensive applications, a SAGIN edge/cloud computing architecture has been developed in which satellites provide access to the cloud and UAVs enable near-user edge computing [189]. Here, a joint resource allocation and task scheduling approach is used to allocate computing resources to virtual machines and schedule the offloaded tasks for UAV edge servers, whereas an RL-based computation offloading approach handles the multidimensional SAGIN resources and learns the dynamic network conditions. Simulation results confirmed the efficiency and convergence of the suggested technique.

As the heterogeneous multi-layer network requires advanced capacity-management techniques, Jiang and Zhu [190] suggested a low-complexity technique for computing the capacity among satellites and a long-term optimal capacity assignment RL-based model to maximize the long-term utility of the system.

By formulating the joint resource assignment problem as a joint optimization problem and using a DRL approach, Qiu et al. [191] proposed a software-defined satellite-terrestrial network to jointly manage caching, networking, and computing resources.

K. Energy Managing

1) Definition & limitations: Recent advances in the connection between ground, aerial, and satellite networks, such as SAGINs, have increased the demands imposed on satellite communication networks. This growing attention toward satellites has led to increased energy consumption requirements. Satellite energy management thus represents a hot research topic for the further development of satellite communication.

Compared with a GEO satellite, an LEO satellite has restricted on-board resources and moves quickly. Further, an LEO satellite has a limited energy capacity owing to its small size [192]; as billions of devices need to be served around the world [193], current satellite resource capabilities can no longer satisfy demand. To address this shortage of satellite communication resources, an efficient resource scheduling scheme that makes full use of the limited resources must be designed. However, as current resource allocation schemes have mostly been designed for GEO satellites, these schemes do not consider many LEO-specific concerns, such as constrained energy, movement attributes, or connection and transmission dynamics.

2) AI-based solutions: Some researchers have thus turned to AI-based solutions for power saving. For example, Kothari et al. [27] suggested the use of DNN compression before data transmission to improve latency and save power. In the absence of solar light, satellites depend on battery energy, which places a heavy load on the satellite battery and can shorten its lifetime, leading to increased costs for satellite communication networks. To optimize the power allocation in satellite-to-ground communication using LEO satellites and thus extend their battery life, Tsuchida et al. [194] employed RL to share the workload of overworked satellites with nearby satellites with lower loads. Similarly, implementing DRL for energy-efficient channel allocation in SatIoT allowed a 67.86% reduction in energy consumption compared with previous models [195]. Mobile-edge-computing-enhanced SatIoT networks contain diverse satellites and several satellite gateways that can be jointly optimized, with coupled user association, computation offloading decisions, and communication resource allocation, to minimize latency and energy costs. In a recent example, a joint user-association and offloading decision with optimal resource allocation methodology based on DRL, proposed by Cui et al. [196], improved the long-term latency and energy costs.
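The RL-based load-sharing idea can be sketched with tabular Q-learning on a toy model. The three-satellite environment, battery quantization, and reward values below are illustrative assumptions, not the system model of [194] or [195]:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy environment: 3 LEO satellites with quantized battery levels. Serving
# a request drains the chosen satellite; satellites that do not serve recharge.
n_sats, n_levels = 3, 5

def step(batt, action):
    batt = batt.copy()
    served = batt[action] > 0
    reward = 1.0 if served else -5.0   # request dropped on a drained satellite
    if served:
        batt[action] -= 1
    else:
        batt[action] = min(batt[action] + 1, n_levels - 1)
    for s in range(n_sats):
        if s != action:
            batt[s] = min(batt[s] + 1, n_levels - 1)
    return batt, reward

def state_id(batt):                    # enumerate the 5^3 battery states
    return batt[0] * n_levels**2 + batt[1] * n_levels + batt[2]

# Tabular Q-learning with epsilon-greedy exploration
Q = np.zeros((n_levels**3, n_sats))
alpha, gamma, eps = 0.2, 0.9, 0.2
batt = np.array([2, 2, 2])
for _ in range(30000):
    s = state_id(batt)
    a = rng.integers(n_sats) if rng.random() < eps else int(np.argmax(Q[s]))
    batt, r = step(batt, a)
    Q[s, a] += alpha * (r + gamma * Q[state_id(batt)].max() - Q[s, a])

# Evaluate the greedy policy (rotating the load should avoid drained satellites)
batt, total = np.array([2, 2, 2]), 0.0
for _ in range(1000):
    batt, r = step(batt, int(np.argmax(Q[state_id(batt)])))
    total += r
print(f"average reward of learned policy: {total / 1000:.2f}")
```

DRL replaces the Q table with a neural network so the same update rule scales to the much larger state spaces of real constellations.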

L. Other Applications

1) Handoff Optimization: Link-layer handoff occurs when a change of one or more links is needed between the communication endpoints due to the dynamic connectivity patterns of LEO satellites. The management of handoff in LEO satellites differs remarkably from that in terrestrial networks, since handoffs happen more frequently due to the movement of satellites [3]. Many researchers have thus focused on handoff management in LEO satellite networks.

In general, user equipment (UE) periodically measures the reference signal received power (RSRP) of different cells to ensure access to a strong cell, as the handoff decision depends on the signal strength or other parameters. Moreover, the historical RSRP contains information that can help avoid unnecessary handoffs.

Thus, Zhang [197] converted the handoff decision into a classification problem. Although the historical RSRP is a time series, a CNN was employed rather than an RNN because the feature map of historical RSRP has a strong local spatial correlation and the use of an RNN could lead to a series of wrong decisions, as one decision largely impacts future decisions. In the proposed AI-based method, the number of handoffs was decreased by more than 25% for more than 70% of the UE, whereas the commonly used "strongest beam" method only reduced the average RSRP by 3%.
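Casting the handoff decision as classification over historical RSRP can be sketched as follows. Logistic regression stands in for the CNN of [197], and the sinusoidal RSRP traces, history length, and prediction horizon are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic RSRP traces (dB) for 3 candidate beams: each sample holds the
# last 8 measurements per beam; label = beam that is strongest 4 steps ahead.
def make_sample():
    t0 = rng.uniform(0, 2 * np.pi)
    t = t0 + 0.1 * np.arange(12)          # 8 history + 4 future steps
    traces = np.stack([
        -90 + 10 * np.sin(t),
        -90 + 10 * np.sin(t + 2 * np.pi / 3),
        -90 + 10 * np.sin(t + 4 * np.pi / 3),
    ])
    traces += rng.standard_normal(traces.shape)   # measurement noise
    x = traces[:, :8].ravel()             # historical-RSRP feature map
    y = int(np.argmax(traces[:, -1]))     # best beam after the horizon
    return x, y

data = [make_sample() for _ in range(3000)]
X = np.array([d[0] for d in data])
y = np.array([d[1] for d in data])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"beam-selection accuracy: {acc:.3f}")
```

Predicting the future best beam from history, rather than always switching to the instantaneously strongest beam, is what allows the learned policy to avoid unnecessary handoffs.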

2) Heat Source Layout Design: The effective layout of heat sources can enhance the thermal performance of the overall system and has thus become a crucial aspect of several engineering areas, including integrated circuit design and satellite layout design. With the increasingly small size of components and higher power intensities, designing the heat-source layout has become a critical problem [198]. Conventionally, the optimal design is acquired by exploring the design space and repeatedly running thermal simulations to compare the performance of each scheme [199]–[201]. To avoid the extremely large computational burden of traditional techniques, Sun et al. [202] employed an inverse design method in which the layout of heat sources is directly generated from a given expected thermal performance based on a DL model called Show, Attend, and Read [203]. Their model was capable of learning the underlying physics of the design problem and could thus efficiently forecast the design of heat sources under a given condition without performing any simulations. Other DL algorithms have been used in diverse design areas, such as mechanics [204], optics [205], fluids [206], and materials [207].

3) Reflectarray Analysis and Design: ML algorithms have been employed in the analysis and design of antennas [22], including the analysis [208], [209] and design [210], [211] of reflectarrays. For example, NNs were used by Shan et al. [212] to forecast the phase shift, whereas kriging was suggested to forecast the electromagnetic response of reflectarray components [213]. Support vector regression (SVR) has been used to accelerate the analysis [214] and to directly optimize narrowband reflectarrays [215]. To hasten calculations without reducing their precision, Prado et al. [216] proposed a wideband SVR-based reflectarray design method and demonstrated its ability to obtain wideband, dual-linear-polarized, shaped-beam reflectarrays for direct broadcast satellite applications.
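An SVR surrogate of this kind can be sketched with scikit-learn. The sigmoid-shaped mapping from patch length to reflection phase below is a hypothetical stand-in for full-wave simulation data, not the model of [214]–[216]:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)

# Hypothetical training data: a reflectarray element's patch length (mm)
# versus its reflection phase (deg), an S-shaped curve typical of
# single-layer elements, plus simulated measurement noise.
L = rng.uniform(6.0, 14.0, size=400)
phase = -330.0 / (1.0 + np.exp(-(L - 10.0))) + 160.0
phase += 2.0 * rng.standard_normal(L.shape)

# Fit the surrogate: predictions replace costly electromagnetic simulations
svr = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(L.reshape(-1, 1), phase)

L_test = np.array([[8.0], [10.0], [12.0]])
pred = svr.predict(L_test)
print("predicted phases (deg):", np.round(pred, 1))
```

Once trained, evaluating the surrogate is orders of magnitude cheaper than a full-wave solve, which is what accelerates the design loop.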

4) Carrier Signal Detection: As each signal must be separated before classification, modulation, demodulation, decoding, and other signal processing, the localization and detection of carrier signals in the frequency domain is a crucial problem in wireless communication.

Algorithms for carrier signal detection have commonly been based on threshold values and required human intervention [217]–[222], although several improvements have been made, including the use of a double threshold [223], [224]. Kim et al. [225] proposed a slope-tracing-based algorithm to separate the intervals of signal elements based on signal properties such as amplitude, slope, deflection width, or distance between neighboring deflections.

More recently, DL has been applied to carrier signal detection; for example, Morozov and Ovchinnikov [226] applied a fully connected NN for detection in FSK signals, whereas Yuan et al. [227] used DL for the blind detection of Morse signals in wideband spectrum data. Huang et al. [228] employed a fully convolutional network (FCN) model to detect carrier signals in the broadband power spectrum. An FCN is a DL method for semantic image segmentation; here, the broadband power spectrum is regarded as a 1D image and each subcarrier as a target object, transforming the carrier detection problem on the broadband into a semantic 1D image segmentation problem [229]–[231]. A 1D deep-CNN-based FCN was designed to categorize each point of a broadband power spectrum array into two categories (i.e., subcarrier or noise) and then locate the subcarrier signals on the broadband power spectrum. After being trained and validated using simulated and real satellite broadband power spectrum datasets, respectively, the proposed deep CNN successfully detected the subcarrier signals in the broadband power spectrum and achieved a higher accuracy than the slope-tracing method.
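The per-bin segmentation formulation can be sketched as follows. A small MLP applied to sliding windows stands in for the 1D FCN of [228] (a window plays the role of the FCN's receptive field), and the synthetic spectrum with rectangular subcarrier bumps is an illustrative assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

# Synthetic broadband power spectrum: noise floor plus rectangular
# subcarrier bumps at random positions; per-bin labels (1 = subcarrier).
def make_spectrum(n_bins=512):
    psd = 0.5 * rng.standard_normal(n_bins)
    mask = np.zeros(n_bins, dtype=int)
    for _ in range(5):
        c, w = rng.integers(30, n_bins - 30), rng.integers(8, 20)
        psd[c - w // 2: c + w // 2] += 4.0
        mask[c - w // 2: c + w // 2] = 1
    return psd, mask

def windows(psd, k=9):        # local context per bin (receptive-field stand-in)
    padded = np.pad(psd, k // 2, mode="edge")
    return np.stack([padded[i:i + k] for i in range(len(psd))])

X_tr, y_tr = [], []
for _ in range(20):
    psd, mask = make_spectrum()
    X_tr.append(windows(psd))
    y_tr.append(mask)
X_tr, y_tr = np.vstack(X_tr), np.concatenate(y_tr)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)

psd, mask = make_spectrum()            # unseen spectrum
pred = clf.predict(windows(psd))       # subcarrier/noise label per bin
acc = np.mean(pred == mask)
print(f"per-bin segmentation accuracy: {acc:.3f}")
```

Contiguous runs of predicted subcarrier bins then give the location and width of each carrier, with no hand-tuned power threshold.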

CONCLUSION

This review provided an overview of AI and its different sub-fields, including ML, DL, and RL. Some limitations of satellite communication were then presented, and their proposed and potential AI-based solutions were discussed. The application of AI has shown great results in a wide variety of satellite communication aspects, including beam-hopping, AJ, network traffic forecasting, channel modeling, telemetry mining, ionospheric scintillation detecting, interference managing, remote sensing, behavior modeling, space-air-ground integrating, and energy managing. Future work should aim to apply AI to achieve more efficient, secure, reliable, and high-quality communication systems.

REFERENCES

[1] G. Maral, M. Bousquet, and Z. Sun, "Introduction," in Satellite Communications Systems: Systems, Techniques and Technology, 6th ed. Hoboken, NJ, USA: Wiley, 2020, ch. 1, sec. 3, pp. 3–11.

[2] F. Rinaldi, H. L. Maattanen, J. Torsner, S. Pizzi, S. Andreev, A. Iera, Y. Koucheryavy, and G. Araniti, "Non-Terrestrial Networks in 5G & Beyond: A Survey," in IEEE Access, vol. 8, pp. 165178-165200, 2020, doi: 10.1109/ACCESS.2020.3022981.

[3] P. Chowdhury, M. Atiquzzaman, and W. Ivancic, "Handover schemes in satellite networks: State-of-the-art and future research directions," IEEE Commun. Surveys Tuts., vol. 8, no. 4, pp. 2-14, Aug. 2006.

[4] P. Chini, G. Giambene, and S. Kota, "A survey on mobile satellite systems," Int. J. Satell. Commun. Netw., vol. 28, no. 1, pp. 29-57, Aug. 2009.

[5] P.-D. Arapoglou, K. Liolis, M. Bertinelli, A. Panagopoulos, P. Cottis, and R. De Gaudenzi, "MIMO over satellite: A review," IEEE Commun. Surveys Tuts., vol. 13, no. 1, pp. 27-51, 1st Quart. 2011.

[6] M. De Sanctis, E. Cianca, G. Araniti, I. Bisio, and R. Prasad, "Satellite communications supporting Internet of remote things," IEEE Internet Things J., vol. 3, no. 1, pp. 113-123, Feb. 2016.

[7] R. Radhakrishnan, W. W. Edmonson, F. Afghah, R. M. Rodriguez-Osorio, F. Pinto, and S. C. Burleigh, "Survey of inter-satellite communication for small satellite systems: Physical layer to network layer view," IEEE Commun. Surveys Tuts., vol. 18, no. 4, pp. 2442-2473, May 2016.

[8] C. Niephaus, M. Kretschmer, and G. Ghinea, "QoS provisioning in converged satellite and terrestrial networks: A survey of the state-of-the-art," IEEE Commun. Surveys Tuts., vol. 18, no. 4, pp. 2415-2441, Apr. 2016.

[9] H. Kaushal and G. Kaddoum, "Optical communication in space: Challenges and mitigation techniques," IEEE Commun. Surveys Tuts., vol. 19, no. 1, pp. 57-96, 1st Quart. 2017.

[10] J. Liu, Y. Shi, Z. M. Fadlullah, and N. Kato, "Space-Air-Ground Integrated Network: A Survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2714-2741, Fourthquarter 2018, doi: 10.1109/COMST.2018.2841996.

[11] S. C. Burleigh, T. De Cola, S. Morosi, S. Jayousi, E. Cianca, and C. Fuchs, "From connectivity to advanced Internet services: A comprehensive review of small satellites communications and networks," Wireless Commun. Mobile Comput., vol. 2019, pp. 1-17, May 2019.

[12] B. Li, Z. Fei, C. Zhou, and Y. Zhang, "Physical-layer security in space information networks: A survey," IEEE Internet Things J., vol. 7, no. 1, pp. 33-52, Jan. 2020.

[13] N. Saeed, A. Elzanaty, H. Almorad, H. Dahrouj, T. Y. Al-Naffouri, and M.-S. Alouini, "CubeSat Communications: Recent Advances and Future Challenges," IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 1839-1862, thirdquarter 2020, doi: 10.1109/COMST.2020.2990499.

[14] O. Simeone, "A Very Brief Introduction to Machine Learning With Applications to Communication Systems," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 648-664, Dec. 2018, doi: 10.1109/TCCN.2018.2881442.

[15] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, "Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial," IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3039-3071, Fourthquarter 2019, doi: 10.1109/COMST.2019.2926625.

[16] Y. Qian, J. Wu, R. Wang, F. Zhu, and W. Zhang, "Survey on Reinforcement Learning Applications in Communication Networks," Journal of Communications and Information Networks, vol. 4, no. 2, pp. 30-39, June 2020, doi: 10.23919/JCIN.2019.8917870.

[17] E. C. Strinati, S. Barbarossa, J. L. Gonzalez, D. Ktenas, N. Cassiau, L. Maret, and C. Dehos, "6G: The Next Frontier: From Holographic Messaging to Artificial Intelligence Using Subterahertz and Visible Light Communication," IEEE Vehicular Technology Magazine, vol. 14, no. 3, pp. 42-50, Sept. 2019, doi: 10.1109/MVT.2019.2921162.

[18] J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, and T. Melodia, "Machine learning for wireless communications in the Internet of Things: A comprehensive survey," Ad Hoc Networks, vol. 93, Jan. 2019, 101913, ISSN 1570-8705, [online] Available: https://doi.org/10.1016/j.adhoc.2019.101913.

[19] G. P. Kumar and P. Venkataram, "Artificial intelligence approaches to network management: Recent advances and a survey," Computer Communications, vol. 20, issue 15, Dec. 1997, pp. 1313-1322, ISSN 0140-3664, [online] Available: https://doi.org/10.1016/S0140-3664(97)00094-7.

[20] Y. Zou, J. Zhu, X. Wang, and L. Hanzo, "A Survey on Wireless Security: Technical Challenges, Recent Advances, and Future Trends," Proceedings of the IEEE, vol. 104, no. 9, pp. 1727-1765, Sept. 2016, doi: 10.1109/JPROC.2016.2558521.

[21] S. H. Alsamhi, O. Ma, and M. S. Ansari, "Survey on artificial intelligence based techniques for emerging robotic communication," Telecommunication Systems: Modelling, Analysis, Design and Management, vol. 72, issue 3, no. 12, pp. 483-503, Mar. 2019, doi: 10.1007/s11235-019-00561-z.

[22] H. M. E. Misilmani and T. Naous, "Machine Learning in Antenna Design: An Overview on Machine Learning Concept and Algorithms," 2019 International Conference on High Performance Computing & Simulation (HPCS), Dublin, Ireland, 2019, pp. 600-607, doi: 10.1109/HPCS48598.2019.9188224.

[23] P. S. Bithas, E. T. Michailidis, N. Nomikos, D. Vouyioukas, and A. Kanatas, "A survey on machine-learning techniques for UAV-based communications," Sensors, vol. 19, no. 23, Nov. 2019, [online] Available: https://doi.org/10.3390/s19235170.

[24] M. A. Lahmeri, M. A. Kishk, and M.-S. Alouini, "Machine learning for UAV-based networks," arXiv preprint, 2020, arXiv:2009.11522.

[25] M. A. Vazquez, P. Henarejos, A. I. Perez-Neira, E. Grechi, A. Voight, J. C. Gil, I. Pappalardo, F. D. Credico, and R. M. Lancellotti, "On the Use of AI for Satellite Communications," arXiv preprint, 2020, arXiv:2007.10110.

[26] N. Kato, Z. M. Fadlullah, F. Tang, B. Mao, S. Tani, A. Okamura, and J. Liu, "Optimizing Space-Air-Ground Integrated Networks by Artificial Intelligence," IEEE Wireless Communications, vol. 26, no. 4, pp. 140-147, August 2019, doi: 10.1109/MWC.2018.1800365.

[27] V. Kothari, E. Liberis, and N. D. Lane, "The Final Frontier: Deep Learning in Space," Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications, pp. 45-49, 2020.

[28] F. Chollet, "What is Deep Learning?" in Deep Learning with Python, 1st ed. New York, NY, USA: Manning, 2017, ch. 1, pp. 3–24.

[29] A. M. Turing, "Computing Machinery and Intelligence," in Mind, vol. 59, 1950, pp. 433–460.

[30] C. M. Bishop, "Linear Models for Classification," in Pattern Recognition and Machine Learning, 1st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. 4, pp. 179–224.

[31] C. M. Bishop, "Kernel Methods," in Pattern Recognition and Machine Learning, 1st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. 6, pp. 291–325.

[32] B. E. Boser, I. M. Guyon, and V. N. Vapnik, "A training algorithm for optimal margin classifiers," in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT '92), New York, NY, USA: Association for Computing Machinery, pp. 144–152, 1992, [online] Available: https://doi.org/10.1145/130385.130401.

[33] F. Fourati, W. Souidene, and R. Attia, "An original framework for Wheat Head Detection using Deep, Semi-supervised and Ensemble Learning within Global Wheat Head Detection (GWHD) Dataset," arXiv preprint, 2020, arXiv:2009.11977.

[34] J. Cervantes, F. Garcia-Lamont, L. Rodríguez-Mazahua, and A. Lopez, "A comprehensive survey on support vector machine classification: Applications, challenges and trends," Neurocomputing, vol. 408, 2020, pp. 189-215.

[35] J. R. Quinlan, "Induction of decision trees," Machine Learning, vol. 1, no. 1, 1986, pp. 81–106.

[36] C. M. Bishop, "Graphical Models," in Pattern Recognition and Machine Learning, 1st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. 8, pp. 359–423.

[37] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, 2001, pp. 5-32.

[38] L. Breiman, "Bagging predictors," Machine Learning, vol. 24, no. 2, 1996, pp. 123-140.

[39] J. H. Friedman, "Greedy function approximation: A gradient boosting machine," Annals of Statistics, 2001, pp. 1189-1232.

[40] [online] Available: https://xgboost.readthedocs.io/en/latest/

[41] T. Chen and T. He, "Xgboost: extreme gradient boosting," Package Version: 1.3.2.1, Jan. 2021, [online] Available: https://cran.r-project.org/web/packages/xgboost/vignettes/xgboost.pdf

[42] [online] Available: https://www.kaggle.com/

[43] P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, no. 1, 1989, pp. 53–58.

[44] C. M. Bishop, "Neural Networks," in Pattern Recognition and Machine Learning, 1st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. 5, pp. 225–290.

[45] R. Hecht-Nielsen, "Theory of the backpropagation neural network," in Neural Networks for Perception, Academic Press, 1992, pp. 65–93.

[46] I. Goodfellow, Y. Bengio, and A. Courville, "Introduction," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, ch. 1, pp. 1–26. [online] Available: https://www.deeplearningbook.org/

[47] I. Goodfellow, Y. Bengio, and A. Courville, "Convolutional Networks," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, ch. 9, pp. 326–366. [online] Available: https://www.deeplearningbook.org/

[48] S. Albawi, T. A. Mohammed, and S. Al-Zawi, "Understanding of a convolutional neural network," International Conference on Engineering and Technology (ICET), Antalya, 2017, pp. 1–6, doi: 10.1109/ICEngTechnol.2017.8308186.

[49] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[50] T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, and M. Li, "Bag of tricks for image classification with convolutional neural networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.

[51] Z. Zou, Z. Shi, Y. Guo, and J. Ye, "Object detection in 20 years: A survey," arXiv preprint arXiv:1905.05055, 2019.

[52] Q. Chu, W. Ouyang, H. Li, X. Wang, B. Liu, and N. Yu, "Online multi-object tracking using CNN-based single object tracker with spatial-temporal attention mechanism," Proceedings of the IEEE International Conference on Computer Vision, 2017.

[53] K. R. Chowdhary, "Natural language processing," in Fundamentals of Artificial Intelligence, Springer, New Delhi, 2020, pp. 603–649.

[54] I. Goodfellow, Y. Bengio, and A. Courville, "Sequence Modeling: Recurrent and Recursive Nets," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, ch. 10, pp. 367–415. [online] Available: https://www.deeplearningbook.org/

[55] I. Goodfellow, Y. Bengio, and A. Courville, "Autoencoders," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, ch. 14, pp. 499–523. [online] Available: https://www.deeplearningbook.org/

[56] Y. Wang, H. Yao, and S. Zhao, "Auto-encoder based dimensionality reduction," Neurocomputing, vol. 184, 2016, pp. 232–242.

[57] C. Zhou and R. C. Paffenroth, "Anomaly detection with robust deep autoencoders," Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.

[58] I. Goodfellow, Y. Bengio, and A. Courville, "Deep generative models," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, ch. 20, pp. 651–716. [online] Available: https://www.deeplearningbook.org/

[59] C. Doersch, "Tutorial on variational autoencoders," arXiv preprint arXiv:1606.05908, 2016.

[60] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, 2014.

[61] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath, "Generative adversarial networks: An overview," IEEE Signal Processing Magazine, vol. 35, no. 1, 2018, pp. 53-65.

[62] D. D. Margineantu and T. G. Dietterich, "Pruning adaptive boosting," ICML, vol. 97, 1997.

[63] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," Advances in Neural Information Processing Systems 25, 2012, pp. 2951–2959.

[64] R. S. Sutton and A. G. Barto, "Reinforcement Learning: An Introduction," A Bradford Book, Cambridge, MA, USA, 2018.

[65] J. Anzalchi, A. Couchman, P. Gabellini, G. Gallinaro, L. D'Agristina, N. Alagha, and P. Angeletti, "Beam hopping in multi-beam broadband satellite systems: System simulation and performance comparison with non-hopped systems," in Proc. 5th Adv. Satell. Multimedia Syst. Conf. / 11th Signal Process. Space Commun. Workshop, Sep. 2010, pp. 248–255.

[66] A. Freedman, D. Rainish, and Y. Gat, "Beam hopping: How to make it possible," in Proc. Broadband Commun. Conf., Oct. 2015, pp. 1–6.

[67] L. Lei, E. Lagunas, Y. Yuan, M. G. Kibria, S. Chatzinotas, and B. Ottersten, "Beam Illumination Pattern Design in Satellite Networks: Learning and Optimization for Efficient Beam Hopping," in IEEE Access, vol. 8, pp. 136655-136667, 2020, doi: 10.1109/ACCESS.2020.3011746.

[68] P. Angeletti, D. Fernandez Prim, and R. Rinaldo, "Beam hopping in multi-beam broadband satellite systems: System performance and payload architecture analysis," The 24th AIAA Int. Communications Satellite Systems Conf., San Diego, June 2006.

[69] J. Anzalchi, A. Couchman, P. Gabellini, G. Gallinaro, L. D'Agristina, N. Alagha, and P. Angeletti, "Beam hopping in multibeam broadband satellite systems: System simulation and performance comparison with non-hopped systems," The 2010 5th Advanced Satellite Multimedia Systems Conf. and the 11th Signal Processing for Space Communications Workshop, Cagliari, Italy, September 2010, pp. 248–255.

[70] X. Alberti, J. M. Cebrian, A. Del Bianco, Z. Katona, J. Lei, M. A. Vazquez-Castro, A. Zanus, L. Gilbert, and N. Alagha, "System capacity optimization in time and frequency for multibeam multi-media satellite systems," in Proc. 11th Signal Process. Space Commun. Workshop, Sep. 2010, pp. 226–233.

[71] B. Evans and P. Thompson, "Key issues and technologies for a Terabit/s satellite," The 28th AIAA Int. Communications Satellite Systems Conf. (ICSSC 2010), Anaheim, California, USA, June 2010, p. 8713.

[72] J. Lei and M. Vazquez-Castro, "Multibeam satellite frequency/time duality study and capacity optimization," in Proc. IEEE Int. Conf. Commun., Oct. 2011, vol. 13, no. 5, pp. 471–480.

[73] R. Alegre, N. Alagha, and M. A. Vazquez, "Heuristic algorithms for flexible resource allocation in beam hopping multi-beam satellite systems," The 29th AIAA Int. Communications Satellite Systems Conf. (ICSSC 2011), Nara, Japan, July 2011, p. 8001.


[74] R. Alegre, N. Alagha, and M. A. Vazquez, “Offered capacity optimization mechanisms for multi-beam satellite systems,” The 2012 IEEE Int. Conf. on Communications (ICC), Ottawa, ON, Canada, June 2012, pp. 3180–3184.

[75] H. Liu, Z. Yang, and Z. Cao, “Max-min rate control on traffic in broadband multibeam satellite communications systems,” IEEE Commun. Lett., vol. 17, no. 7, pp. 1396–1399, 2013.

[76] H. Han, X. Zheng, Q. Huang, and Y. Lin, “QoS-equilibrium slot allocation for beam hopping in broadband satellite communication systems,” Wirel. Netw., vol. 21, no. 8, pp. 2617–2630, 2015.

[77] S. Shi, G. Li, Z. Li, H. Zhu, and B. Gao, “Joint power and bandwidth allocation for beam-hopping user downlinks in smart gateway multibeam satellite systems,” Int. J. Distrib. Sensor Netw., vol. 13, no. 5, 2017, Art. no. 1550147717709461.

[78] A. Ginesi, E. Re, and P. D. Arapoglou, “Joint beam hopping and precoding in HTS systems,” Int. Conf. on Wireless and Satellite Systems, Oxford, U.K., 2017, pp. 43–51.

[79] G. Cocco, T. de Cola, M. Angelone, Z. Katona, and S. Erl, “Radio resource management optimization of flexible satellite payloads for DVB-S2 systems,” IEEE Trans. Broadcast., vol. 64, no. 2, pp. 266–280, Jun. 2018.

[80] X. Hu, S. Liu, X. Hu, Y. Wang, L. Xu, Y. Zhang, C. Wang, and W. Wang, “Deep reinforcement learning-based beam hopping algorithm in multibeam satellite systems,” IET Communications, pp. 2485–2491, Jan. 2019.

[81] Y. Zhang, X. Hu, R. Chen, Z. Zhang, L. Wang, and W. Wang, “Dynamic Beam Hopping for DVB-S2X Satellite: A Multi-Objective Deep Reinforcement Learning Approach,” 2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS), Shenyang, China, 2019, pp. 164–169, doi: 10.1109/IUCC/DSCI/SmartCNS.2019.00056.

[82] X. Hu, Y. Zhang, X. Liao, Z. Liu, W. Wang, and F. M. Ghannouchi, “Dynamic Beam Hopping Method Based on Multi-Objective Deep Reinforcement Learning for Next Generation Satellite Broadband Systems,” IEEE Transactions on Broadcasting, vol. 66, no. 3, pp. 630–646, Sept. 2020, doi: 10.1109/TBC.2019.2960940.

[83] M. K. Simon, J. K. Omura, and R. A. Scholtz, “Spread spectrum communications,” vols. 1–3, Computer Science Press, Inc., 1985.

[84] D. Torrieri, “Principles of spread-spectrum communication systems,” vol. 1, Heidelberg: Springer, 2005.

[85] S. Bae, S. Kim, and J. Kim, “Efficient frequency-hopping synchronization for satellite communications using dehop-rehop transponders,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 1, pp. 261–274, Feb. 2016, doi: 10.1109/TAES.2015.150062.

[86] F. Yao, L. Jia, Y. Sun, Y. Xu, S. Feng, and Y. Zhu, “A hierarchical learning approach to anti-jamming channel selection strategies,” Wirel. Netw., vol. 25, no. 1, pp. 201–213, Jan. 2019.

[87] C. Han and Y. Niu, “Cross-Layer Anti-Jamming Scheme: A Hierarchical Learning Approach,” IEEE Access, vol. 6, pp. 34874–34883, Jun. 2018.

[88] S. Lee, S. Kim, M. Seo, and D. Har, “Synchronization of Frequency Hopping by LSTM Network for Satellite Communication System,” IEEE Communications Letters, vol. 23, no. 11, pp. 2054–2058, Nov. 2019, doi: 10.1109/LCOMM.2019.2936019.

[89] C. Han, L. Huo, X. Tong, H. Wang, and X. Liu, “Spatial Anti-Jamming Scheme for Internet of Satellites Based on the Deep Reinforcement Learning and Stackelberg Game,” IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5331–5342, May 2020, doi: 10.1109/TVT.2020.2982672.

[90] C. Han, A. Liu, H. Wang, L. Huo, and X. Liang, “Dynamic Anti-Jamming Coalition for Satellite-Enabled Army IoT: A Distributed Game Approach,” IEEE Internet of Things Journal, vol. 7, no. 11, pp. 10932–10944, Nov. 2020, doi: 10.1109/JIOT.2020.2991585.

[91] Y. Bie, L. Wang, Y. Tian, and Z. Hu, “A Combined Forecasting Model for Satellite Network Self-Similar Traffic,” IEEE Access, vol. 7, pp. 152004–152013, 2019, doi: 10.1109/ACCESS.2019.2944895.

[92] L. Rossi, J. Chakareski, P. Frossard, and S. Colonnese, “A Poisson Hidden Markov Model for Multiview Video Traffic,” IEEE/ACM Transactions on Networking, vol. 23, no. 2, pp. 547–558, April 2015, doi: 10.1109/TNET.2014.2303162.

[93] D. Yan and L. Wang, “TPDR: Traffic prediction based dynamic routing for LEO&GEO satellite networks,” 2015 IEEE 5th International Conference on Electronics Information and Emergency Communication, Beijing, 2015, pp. 104–107, doi: 10.1109/ICEIEC.2015.7284498.

[94] F. Xu, Y. Lin, J. Huang, D. Wu, H. Shi, J. Song, and Y. Li, “Big Data Driven Mobile Traffic Understanding and Forecasting: A Time Series Approach,” IEEE Transactions on Services Computing, vol. 9, no. 5, pp. 796–805, Sept.–Oct. 2016, doi: 10.1109/TSC.2016.2599878.

[95] C. Katris and S. Daskalaki, “Comparing forecasting approaches for Internet traffic,” Expert Systems with Applications, vol. 42, no. 21, pp. 8172–8183, 2015, ISSN 0957-4174. [online] Available: https://doi.org/10.1016/j.eswa.2015.06.029

[96] B. Gao, Q. Zhang, Y.-S. Liang, N.-N. Liu, C.-B. Huang, and N.-T. Zhang, “Predicting self-similar networking traffic based on EMD and ARMA,” vol. 32, pp. 47–56, 2011.

[97] X. Pan, W. Zhou, Y. Lu, and N. Sun, “Prediction of Network Traffic of Smart Cities Based on DE-BP Neural Network,” IEEE Access, vol. 7, pp. 55807–55816, 2019, doi: 10.1109/ACCESS.2019.2913017.

[98] J. Liu and Z. Jia, “Telecommunication Traffic Prediction Based on Improved LSSVM,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, 2017, doi: 10.1142/S0218001418500076.

[99] L. Ziluan and L. Xin, “Short-term traffic forecasting based on principal component analysis and a generalized regression neural network for satellite networks,” Journal of China Universities of Posts and Telecommunications, vol. 25, pp. 15–28+36, 2018, doi: 10.19682/j.cnki.1005-8885.2018.0002.

[100] Z. Na, Z. Pan, X. Liu, Z. Deng, Z. Gao, and Q. Guo, “Distributed Routing Strategy Based on Machine Learning for LEO Satellite Network,” Wireless Communications and Mobile Computing, vol. 2018, Article ID 3026405, 10 pages, 2018. [online] Available: https://doi.org/10.1155/2018/3026405

[101] G.-B. Huang, Q.-Y. Zhu, and C. Siew, “Extreme learning machine: A new learning scheme of feedforward neural networks,” IEEE International Conference on Neural Networks - Conference Proceedings, vol. 2, pp. 985–990, 2004, doi: 10.1109/IJCNN.2004.1380068.

[102] A. Goldsmith, “Path Loss and Shadowing,” in Wireless Communications, Cambridge University Press, 2005, ch. 2, pp. 25–48.

[103] T. S. Rappaport, G. R. MacCartney, M. K. Samimi, and S. Sun, “Wideband Millimeter-Wave Propagation Measurements and Channel Models for Future Wireless Communication System Design,” IEEE Transactions on Communications, vol. 63, no. 9, pp. 3029–3056, Sept. 2015, doi: 10.1109/TCOMM.2015.2434384.

[104] S. Sangodoyin, S. Niranjayan, and A. F. Molisch, “A Measurement-Based Model for Outdoor Near-Ground Ultrawideband Channels,” IEEE Transactions on Antennas and Propagation, vol. 64, no. 2, pp. 740–751, Feb. 2016, doi: 10.1109/TAP.2015.2505004.

[105] C. Wang, J. Bian, J. Sun, W. Zhang, and M. Zhang, “A Survey of 5G Channel Measurements and Models,” IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3142–3168, Fourth Quarter 2018, doi: 10.1109/COMST.2018.2862141.

[106] B. Ai, K. Guan, R. He, J. Li, G. Li, D. He, Z. Zhong, and K. M. S. Huq, “On Indoor Millimeter Wave Massive MIMO Channels: Measurement and Simulation,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 7, pp. 1678–1690, July 2017, doi: 10.1109/JSAC.2017.2698780.

[107] G. Liang and H. L. Bertoni, “A new approach to 3-D ray tracing for propagation prediction in cities,” IEEE Trans. Antennas Propag., vol. 46, no. 6, pp. 853–863, Jun. 1998.

[108] M. Zhu, A. Singh, and F. Tufvesson, “Measurement based ray launching for analysis of outdoor propagation,” 2012 6th European Conference on Antennas and Propagation (EUCAP), Prague, 2012, pp. 3332–3336, doi: 10.1109/EuCAP.2012.6206329.

[109] Z. Yun and M. F. Iskander, “Ray Tracing for Radio Propagation Modeling: Principles and Applications,” IEEE Access, vol. 3, pp. 1089–1100, 2015, doi: 10.1109/ACCESS.2015.2453991.

[110] D. J. Cichon and T. Kurner, “Propagation prediction models,” Florence, Italy, Tech. Rep. COST-231 TD (95) 66, Apr. 1995, pp. 115–207.

[111] L. C. Fernandes and A. J. M. Soares, “Simplified characterization of the urban propagation environment for path loss calculation,” IEEE Antennas Wireless Propag. Lett., vol. 9, pp. 24–27, 2010.

[112] L. C. Fernandes and A. J. M. Soares, “On the use of image segmentation for propagation path loss prediction,” in IEEE MTT-S Int. Microw. Symp. Dig., Oct. 2011, pp. 129–133.

[113] M. Piacentini and F. Rinaldi, “Path loss prediction in urban environment using learning machines and dimensionality reduction techniques,” Comput. Manage. Sci., vol. 8, no. 4, pp. 371–385, Nov. 2011.

[114] M. Uccellari, F. Facchini, M. Sola, E. Sirignano, G. M. Vitetta, A. Barbieri, and S. Tondelli, “On the use of support vector machines for the prediction of propagation losses in smart metering systems,” in Proc. IE

[115] S. P. Sotiroudis, S. K. Goudos, K. A. Gotsis, K. Siakavara, and J. N. Sahalos, “Application of a composite differential evolution algorithm in optimal neural network design for propagation path-loss prediction in mobile communication systems,” IEEE Antennas Wireless Propag. Lett., vol. 12, pp. 364–367, 2013.


[116] S. P. Sotiroudis and K. Siakavara, “Mobile radio propagation path loss prediction using Artificial Neural Networks with optimal input information for urban environments,” AEU-Int. J. Electron. Commun., vol. 69, no. 10, pp. 1453–1463, Oct. 2015.

[117] I. Popescu, I. Nafornita, and P. Constantinou, “Comparison of neural network models for path loss prediction,” in Proc. IEEE Int. Conf. Wireless Mobile Comput., Netw. Commun., Aug. 2005, pp. 44–49.

[118] E. Ostlin, H.-J. Zepernick, and H. Suzuki, “Macrocell path-loss prediction using artificial neural networks,” IEEE Trans. Veh. Technol., vol. 59, no. 6, pp. 2735–2747, Jul. 2010.

[119] B. J. Cavalcanti, G. A. Cavalcante, L. M. D. Mendonca, G. M. Cantanhede, M. M. de Oliveira, and A. G. D’Assuncao, “A hybrid path loss prediction model based on artificial neural networks using empirical models for LTE and LTE-A at 800 MHz and 2600 MHz,” J. Microw., Optoelectron. Electromagn. Appl., vol. 16, pp. 708–722, Sep. 2017.

[120] Y. Zhang, J. Wen, G. Yang, Z. He, and X. Luo, “Air-to-air path loss prediction based on machine learning methods in urban environments,” Wireless Commun. Mobile Comput., vol. 6, May 2018, Art. no. 8489326.

[121] C. A. Oroza, Z. Zhang, T. Watteyne, and S. D. Glaser, “A machine learning-based connectivity model for complex terrain large-scale low-power wireless deployments,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 576–584, Dec. 2017.

[122] Y. Zhang, J. Wen, G. Yang, Z. He, and J. Wang, “Path loss prediction based on machine learning: Principle, method, and data expansion,” Appl. Sci., vol. 9, p. 1908, May 2019.

[123] H. F. Ates, S. M. Hashir, T. Baykas, and B. K. Gunturk, “Path Loss Exponent and Shadowing Factor Prediction From Satellite Images Using Deep Learning,” IEEE Access, vol. 7, pp. 101366–101375, 2019, doi: 10.1109/ACCESS.2019.2931072.

[124] J. Thrane, D. Zibar, and H. L. Christiansen, “Model-Aided Deep Learning Method for Path Loss Prediction in Mobile Communication Systems at 2.6 GHz,” IEEE Access, vol. 8, pp. 7925–7936, 2020, doi: 10.1109/ACCESS.2020.2964103.

[125] O. Ahmadien, H. F. Ates, T. Baykas, and B. K. Gunturk, “Predicting Path Loss Distribution of an Area From Satellite Images Using Deep Learning,” IEEE Access, vol. 8, pp. 64982–64991, 2020, doi: 10.1109/ACCESS.2020.2985929.

[126] T. Yairi, N. Takeishi, T. Oda, Y. Nakajima, N. Nishimura, and N. Takata, “A Data-Driven Health Monitoring Method for Satellite Housekeeping Data Based on Probabilistic Clustering and Dimensionality Reduction,” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 3, pp. 1384–1401, June 2017, doi: 10.1109/TAES.2017.2671247.

[127] T. Yairi, T. Tagawa, and N. Takata, “Telemetry monitoring by dimensionality reduction and learning hidden Markov model,” in Proceedings of International Symposium on Artificial Intelligence, Robotics and Automation in Space, 2012.

[128] T. Yairi, M. Nakatsugawa, K. Hori, S. Nakasuka, K. Machida, and N. Ishihama, “Adaptive limit checking for spacecraft telemetry data using regression tree learning,” 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), The Hague, 2004, pp. 5130–5135 vol. 6, doi: 10.1109/ICSMC.2004.1401008.

[129] S. Tariq, S. Lee, Y. Shin, M. S. Lee, O. Jung, D. Chung, and S. S. Woo, “Detecting Anomalies in Space using Multivariate Convolutional LSTM with Mixtures of Probabilistic PCA,” 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Alaska, USA, 2019.

[130] K. Hundman, V. Constantinou, C. Laporte, I. Colwell, and T. Soderstrom, “Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding,” 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 2018.

[131] S. Fuertes, G. Picart, J.-Y. Tourneret, L. Chaari, A. Ferrari, and C. Richard, “Improving Spacecraft Health Monitoring with Automatic Anomaly Detection Techniques,” 14th International Conference on Space Operations, Daejeon, Korea, 2016.

[132] D. L. Iverson, R. Martin, M. Schwabacher, L. Spirkovska, W. Taylor, R. Mackey, J. P. Castle, and V. Baskaran, “General Purpose Data-Driven Monitoring for Space Operations,” Journal of Aerospace Computing, Information & Communication, vol. 9, no. 2, pp. 26–44, 2012.

[133] P. I. Robinson, M. H. Shirley, D. Fletcher, R. Alena, D. Duncavage, and C. Lee, “Applying model-based reasoning to the FDIR of the command and data handling subsystem of the International Space Station,” in Proc. of International Symposium on Artificial Intelligence, Robotics and Automation in Space, 2003.

[134] Y. Sun, L. Guo, Y. Wang, Z. Ma, and Y. Niu, “Fault diagnosis for space utilisation,” The Journal of Engineering, vol. 2019, no. 23, pp. 8770–8775, Dec. 2019, doi: 10.1049/joe.2018.9102.

[135] S. K. Ibrahim, A. Ahmed, M. A. E. Zeidan, and I. E. Ziedan, “Machine Learning Methods for Spacecraft Telemetry Mining,” IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 4, pp. 1816–1827, Aug. 2019, doi: 10.1109/TAES.2018.2876586.

[136] P. Wan, Y. Zhan, and W. Jiang, “Study on the Satellite Telemetry Data Classification Based on Self-Learning,” IEEE Access, vol. 8, pp. 2656–2669, 2020, doi: 10.1109/ACCESS.2019.2962235.

[137] P. W. Ward, J. W. Betz, and C. J. Hegarty, “Satellite signal acquisition, tracking, and data demodulation,” in Understanding GPS: Principles and Applications, Norwood, MA, USA: Artech House, pp. 153–241, 2006.

[138] A. V. Dierendonck, J. Klobuchar, and Q. Hua, “Ionospheric scintillation monitoring using commercial single frequency C/A code receivers,” in Proc. 6th Int. Tech. Meet. Satellite Div. Inst. Navig., Salt Lake City, UT, USA, vol. 93, pp. 1333–1342, Sep. 1993.

[139] J. Lee, Y. T. J. Morton, J. Lee, H.-S. Moon, and J. Seo, “Monitoring and mitigation of ionospheric anomalies for GNSS-based safety critical systems: A review of up-to-date signal processing techniques,” IEEE Signal Process. Mag., vol. 34, no. 5, pp. 96–110, Sep. 2017.

[140] C. Cesaroni, L. Alfonsi, R. Romero, N. Linty, F. Dovis, S. V. Veettil, J. Park, D. Barroca, M. C. Ortega, and R. O. Perez, “Monitoring Ionosphere Over South America: The MImOSA and MImOSA2 projects,” 2015 International Association of Institutes of Navigation World Congress (IAIN), Prague, pp. 1–7, 2015, doi: 10.1109/IAIN.2015.7352226.

[141] L. Nicola, R. Rodrigo, C. Calogero, D. Fabio, B. Michele, C. J. Thomas, F. G. Joaquim, W. Jonathan, L. Gert, R. Padraig, C. Pierre, C. Emilia, and A. Lucilla, “Ionospheric scintillation threats to GNSS in polar regions: the DemoGRAPE case study in Antarctica,” in Proc. Eur. Navig. Conf., pp. 1–7, 2016.

[142] J. Vila-Valls, P. Closas, C. Fernandez-Prades, and J. T. Curran, “On the ionospheric scintillation mitigation in advanced GNSS receivers,” IEEE Trans. Aerosp. Electron. Syst., to be published.

[143] S. Taylor, Y. Morton, Y. Jiao, J. Triplett, and W. Pelgrum, “An improved ionosphere scintillation event detection and automatic trigger for GNSS data collection systems,” in Proc. Int. Tech. Meet. Inst. Navig., pp. 1563–1569, 2012.

[144] W. Fu, S. Han, C. Rizos, M. Knight, and A. Finn, “Real-time ionospheric scintillation monitoring,” in Proc. 12th Int. Tech. Meet. Satellite Div. Inst. Navig., vol. 99, pp. 14–17, 1999.

[145] S. Miriyala, P. R. Koppireddi, and S. R. Chanamallu, “Robust detection of ionospheric scintillations using MF-DFA technique,” Earth, Planets Sp., vol. 67, no. 98, pp. 1–5, 2015.

[146] R. Romero, N. Linty, F. Dovis, and R. V. Field, “A novel approach to ionospheric scintillation detection based on an open loop architecture,” in Proc. 8th ESA Workshop Satellite Navig. Technol. Eur. Workshop GNSS Signals Signal Process., pp. 1–9, Dec. 2016.

[147] L. F. C. Rezende, E. R. de Paula, S. Stephany, I. J. Kantor, M. T. A. H. Muella, P. M. de Siqueira, and K. S. Correa, “Survey and prediction of the ionospheric scintillation using data mining techniques,” Sp. Weather, vol. 8, no. 6, pp. 1–10, 2010.

[148] Y. Jiao, J. J. Hall, and Y. T. Morton, “Performance evaluations of an equatorial GPS amplitude scintillation detector using a machine learning algorithm,” in Proc. 29th Int. Tech. Meet. Satellite Div. Inst. Navig., pp. 195–199, Sep. 2016.

[149] Y. Jiao, J. J. Hall, and Y. T. Morton, “Automatic equatorial GPS amplitude scintillation detection using a machine learning algorithm,” IEEE Trans. Aerosp. Electron. Syst., vol. 53, no. 1, pp. 405–418, Feb. 2017.

[150] Y. Jiao, J. J. Hall, and Y. T. Morton, “Automatic GPS phase scintillation detector using a machine learning algorithm,” in Proc. Int. Tech. Meet. Inst. Navig., Monterey, CA, USA, pp. 1160–1172, Jan. 2017.

[151] Y. Jiao, J. J. Hall, and Y. T. Morton, “Performance evaluation of an automatic GPS ionospheric phase scintillation detector using a machine-learning algorithm,” Navigation, vol. 64, no. 3, pp. 391–402, 2017.

[152] N. Linty, A. Farasin, A. Favenza, and F. Dovis, “Detection of GNSS Ionospheric Scintillations Based on Machine Learning Decision Tree,” IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 1, pp. 303–317, Feb. 2019, doi: 10.1109/TAES.2018.2850385.

[153] R. Imam and F. Dovis, “Distinguishing Ionospheric Scintillation from Multipath in GNSS Signals Using Bagged Decision Trees Algorithm,” 2020 IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE), Vicenza, Italy, 2020, pp. 83–88, doi: 10.1109/WiSEE44079.2020.9262699.

[154] C. Politis, S. Maleki, C. G. Tsinos, et al., “On-board the Satellite Interference Detection with Imperfect Signal Cancellation,”


[155] A. V. Dandawate and G. B. Giannakis, “Statistical tests for presence of cyclostationarity,” IEEE Transactions on Signal Processing, vol. 42, no. 9, pp. 2355–2369, Sept. 1994, doi: 10.1109/78.317857.

[156] O. A. Dobre, A. Abdi, Y. Bar-Ness, and W. Su, “Survey of automatic modulation classification techniques: classical approaches and new trends,” IET Communications, vol. 1, no. 2, pp. 137–156, April 2007, doi: 10.1049/iet-com:20050176.

[157] J. Hu, D. Bian, Z. Xie, Y. Li, and L. Fan, “An approach for narrowband interference detection in satellite communication using morphological filter,” International Conference on Information Technology and Management Innovation, Shenzhen, China, Sept.,

[158] Q. Liu, J. Yang, C. Zhuang, A. Barnawi, and B. A. Alzahrani, “Artificial Intelligence Based Mobile Tracking and Antenna Pointing in Satellite-Terrestrial Network,” IEEE Access, vol. 7, pp. 177497–177503, 2019, doi: 10.1109/ACCESS.2019.2956544.

[159] L. Pellaco, N. Singh, and J. Jalden, “Spectrum Prediction and Interference Detection for Satellite Communications,” arXiv preprint arXiv:1912.04716, 2019.

[160] P. Henarejos, M. A. Vazquez, and A. I. Perez-Neira, “Deep Learning For Experimental Hybrid Terrestrial and Satellite Interference Management,” 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2019, pp. 1–5, doi: 10.1109/SPAWC.2019.8815532.

[161] N. Kussul, M. Lavreniuk, S. Skakun, and A. Shelestov, “Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 778–782, May 2017, doi: 10.1109/LGRS.2017.2681128.

[162] F. Zhang, B. Du, and L. Zhang, “Scene Classification via a Gradient Boosting Random Convolutional Network Framework,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1793–1802, March 2016, doi: 10.1109/TGRS.2015.2488681.

[163] A. S. Li, V. Chirayath, M. Segal-Rozenhaimer, J. L. Torres-Perez, and J. van den Bergh, “NASA NeMO-Net’s Convolutional Neural Network: Mapping Marine Habitats with Spectrally Heterogeneous Remote Sensing Imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 5115–5133, 2020, doi: 10.1109/JSTARS.2020.3018719.

[164] S. A. Fatima, A. Kumar, A. Pratap, and S. S. Raoof, “Object Recognition and Detection in Remote Sensing Images: A Comparative Study,” 2020 International Conference on Artificial Intelligence and Signal Processing (AISP), Amaravati, India, pp. 1–5, 2020, doi: 10.1109/AISP48273.2020.9073614.

[165] L. Zhou, J. Liu, and L. Chen, “Vehicle detection based on remote sensing image of Yolov3,” 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, pp. 468–472, 2020, doi: 10.1109/ITNEC48623.2020.9084975.

[166] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[167] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.

[168] A. Femin and K. S. Biju, “Accurate Detection of Buildings from Satellite Images using CNN,” 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Istanbul, Turkey, pp. 1–5, 2020, doi: 10.1109/ICECCE49384.2020.9179232.

[169] A. Hassan, W. M. Hussein, E. Said, and M. E. Hanafy, “A Deep Learning Framework for Automatic Airplane Detection in Remote Sensing Satellite Images,” 2019 IEEE Aerospace Conference, Big Sky, MT, USA, pp. 1–10, 2019, doi: 10.1109/AERO.2019.8741938.

[170] G. Mateo-Garcia, V. Laparra, D. Lopez-Puigdollers, and L. Gomez-Chova, “Cross-Sensor Adversarial Domain Adaptation of Landsat-8 and Proba-V images for Cloud Detection,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, doi: 10.1109/JSTARS.2020.3031741.

[171] Z. Shao, Y. Pan, C. Diao, and J. Cai, “Cloud Detection in Remote Sensing Images Based on Multiscale Features-Convolutional Neural Network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 4062–4076, June 2019, doi: 10.1109/TGRS.2018.2889677.

[172] M. Tian, H. Chen, and G. Liu, “Cloud Detection and Classification for S-NPP FSR CRIS Data Using Supervised Machine Learning,” IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, pp. 9827–9830, 2019, doi: 10.1109/IGARSS.2019.8898876.

[173] F. Wang, F. Liao, and H. Zhu, “FPA-DNN: A Forward Propagation Acceleration based Deep Neural Network for Ship Detection,” 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, pp. 1–8, 2020, doi: 10.1109/IJCNN48605.2020.9207603.

[174] L. Zong-ling et al., “Remote Sensing Ship Target Detection and Recognition System Based on Machine Learning,” IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, pp. 1272–1275, 2019, doi: 10.1109/IGARSS.2019.8898599.

[175] H. Bandarupally, H. R. Talusani, and T. Sridevi, “Detection of Military Targets from Satellite Images using Deep Convolutional Neural Networks,” 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, pp. 531–535, 2020, doi: 10.1109/ICCCA49541.2020.9250864.

[176] J. Zheng, X.-Y. Liu, and X. Wang, “Single Image Cloud Removal Using U-Net and Generative Adversarial Networks,” IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2020.3027819.

[177] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 2015.

[178] J. Lu, Y. Chen, and R. He, “A Learning-Based Approach for Agile Satellite Onboard Scheduling,” IEEE Access, vol. 8, pp. 16941–16952, 2020, doi: 10.1109/ACCESS.2020.2968051.

[179] R. Mital, K. Cates, J. Coughlin, and G. Ganji, “A Machine Learning Approach to Modeling Satellite Behavior,” 2019 IEEE International Conference on Space Mission Challenges for Information Technology (SMC-IT), Pasadena, CA, USA, pp. 62–69, 2019, doi: 10.1109/SMC-IT.2019.00013.

[180] K. Weasenforth, J. Hollon, T. Payne, K. Kinateder, and A. Kruchten, “Machine Learning-based Stability Assessment and Change Detection for Geosynchronous Satellites,” Advanced Maui Optical and Space Surveillance Technologies Conference, 2018.

[181] B. Jia, K. D. Pham, E. Blasch, Z. Wang, D. Shen, and G. Chen, “Space object classification using deep neural networks,” 2018 IEEE Aerospace Conference, Big Sky, MT, pp. 1–8, 2018.

[182] K. Hundman, V. Constantinou, C. Laporte, I. Colwell, and T. Soderstrom, “Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining - KDD ’18, London, United Kingdom, pp. 387–395, 2018.

[183] B. Chen, J. Cao, A. Parra, and T. Chin, “Satellite Pose Estimation with Deep Landmark Regression and Nonlinear Pose Refinement,” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea (South), pp. 2816–2824, 2019, doi: 10.1109/ICCVW.2019.00343.

[184] M. Kisantal, S. Sharma, T. H. Park, D. Izzo, M. Martens, and S. D’Amico, “Satellite Pose Estimation Challenge: Dataset, Competition Design, and Results,” IEEE Transactions on Aerospace and Electronic Systems, vol. 56, no. 5, pp. 4083–4098, Oct. 2020, doi: 10.1109/TAES.2020.2989063.

[185] S. Jahirabadkar, P. Narsay, S. Pharande, G. Deshpande, and A. Kitture, “Space Objects Classification Techniques: A Survey,” 2020 International Conference on Computational Performance Evaluation (ComPE), Shillong, India, pp. 786–791, 2020, doi: 10.1109/ComPE49325.2020.9199996.

[186] D. Yadava, R. Hosangadi, S. Krishna, P. Paliwal, and A. Jain, “Attitude control of a nanosatellite system using reinforcement learning and neural networks,” 2018 IEEE Aerospace Conference, Big Sky, MT, pp. 1–8, 2018, doi: 10.1109/AERO.2018.8396409.

[187] A. M. Ahmed, A. Salama, H. A. Ibrahim, M. A. E. Sayed, and S. Yacout, “Prediction of Battery Remaining Useful Life on Board Satellites Using Logical Analysis of Data,” 2019 IEEE Aerospace Conference, Big Sky, MT, USA, pp. 1–8, 2019, doi: 10.1109/AERO.2019.8741717.

[188] J.-H. Lee, J. Park, M. Bennis, and Y.-C. Ko, “Integrating LEO Satellite and UAV Relaying via Reinforcement Learning for Non-Terrestrial Networks,” arXiv preprint arXiv:2005.12521, 2020.

[189] N. Cheng, F. Lyu, W. Quan, C. Zhou, H. He, W. Shi, and X. Shen, “Space/Aerial-Assisted Computing Offloading for IoT Applications: A Learning-Based Approach,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 5, pp. 1117–1129, May 2019, doi: 10.1109/JSAC.2019.2906789.

[190] C. Jiang and X. Zhu, “Reinforcement Learning Based Capacity Management in Multi-Layer Satellite Networks,” IEEE Transactions on Wireless Communications, vol. 19, no. 7, pp. 4685–4699, July 2020, doi: 10.1109/TWC.2020.2986114.

[191] C. Qiu, H. Yao, F. R. Yu, F. Xu, and C. Zhao, “Deep Q-Learning Aided Networking, Caching, and Computing Resources Allocation in Software-Defined Satellite-Terrestrial Networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 6, pp. 5871–5883, June 2019, doi: 10.1109/TVT.2019.2907682.


[192] W. Liu, F. Tian, and Z. Jiang, “Beam-hopping based resource allocationalgorithm in LEO satellite network,” in Proc. Int. Conf. Space Inf. Netw.Singapore: Springer, pp. 113—123, 2018.

[193] Z. Qu, G. Zhang, H. Cao, and J. Xie, “LEO satellite constellation forInternet of Things,” IEEE Access, vol. 5, pp. 18391—18401, 2017.

[194] H. Tsuchida, Y. Kawamoto, N. Kato, K. Kaneko, S. Tani, S. Uchida,and H. Aruga, “Efficient Power Control for Satellite-Borne BatteriesUsing Q-Learning in Low-Earth-Orbit Satellite Constellations,” in IEEEWireless Communications Letters, vol. 9, no. 6, pp. 809-812, June 2020,doi: 10.1109/LWC.2020.2970711.

[195] B. Zhao, J. Liu, Z. Wei, and I. You, “A Deep Reinforcement LearningBased Approach for Energy-Efficient Channel Allocation in SatelliteInternet of Things,” in IEEE Access, vol. 8, pp. 62197-62206, 2020, doi:10.1109/ACCESS.2020.2983437.

[196] G. Cui, X. Li, L. Xu, and W. Wang, “Latency and Energy Optimizationfor MEC Enhanced SAT-IoT Networks,” in IEEE Access, vol. 8, pp.55915-55926, 2020, doi: 10.1109/ACCESS.2020.2982356.

[197] C. Zhang, “An AI-based optimization of handover strategy in non-terrestrial networks,” presented at the 12th ITU Academic ConferenceKaleidoscope Industry-driven digital transformation, Online, Dec. 7-11,2020.

[198] X. Chen, W. Yao, Y. Zhao, X. Chen, and X. Zheng, “A practical satellite layout optimization design approach based on enhanced finite-circle method,” Struct. Multidisciplinary Optim., vol. 58, no. 6, pp. 2635–2653, Dec. 2018.

[199] K. Chen, J. Xing, S. Wang, and M. Song, “Heat source layout optimization in two-dimensional heat conduction using simulated annealing method,” Int. J. Heat Mass Transf., vol. 108, pp. 210–219, May 2017.

[200] Y. Aslan, J. Puskely, and A. Yarovoy, “Heat source layout optimization for two-dimensional heat conduction using iterative reweighted L1-norm convex minimization,” Int. J. Heat Mass Transf., vol. 122, pp. 432–441, Jul. 2018.

[201] K. Chen, S. Wang, and M. Song, “Temperature-gradient-aware bionic optimization method for heat source distribution in heat conduction,” Int. J. Heat Mass Transf., vol. 100, pp. 737–746, Sep. 2016.

[202] J. Sun, J. Zhang, X. Zhang, and W. Zhou, “A Deep Learning-Based Method for Heat Source Layout Inverse Design,” IEEE Access, vol. 8, pp. 140038–140053, 2020, doi: 10.1109/ACCESS.2020.3013394.

[203] H. Li, P. Wang, C. Shen, and G. Zhang, “Show, attend and read: A simple and strong baseline for irregular text recognition,” in Proc. AAAI Conf. Artif. Intell., vol. 33, 2019.

[204] Y. Zhang and W. Ye, “Deep learning-based inverse method for layout design,” Struct. Multidisciplinary Optim., vol. 16, no. 3, pp. 774–788, 2019.

[205] J. Peurifoy, Y. Shen, L. Jing, Y. Yang, F. Cano-Renteria, B. G. DeLacy, J. D. Joannopoulos, M. Tegmark, and M. Soljacic, “Nanophotonic particle simulation and inverse design using artificial neural networks,” Sci. Adv., vol. 4, no. 6, Jun. 2018.

[206] J. Tompson, K. Schlachter, P. Sprechmann, and K. Perlin, “Accelerating Eulerian fluid simulation with convolutional networks,” in Proc. 5th Int. Conf. Learn. Represent. (ICLR), pp. 3424–3433, Apr. 2017, [online] Available: http://OpenReview.net.

[207] A. Agrawal, P. D. Deshpande, A. Cecen, G. P. Basavarsu, A. N. Choudhary, and S. R. Kalidindi, “Exploration of data science techniques to predict fatigue strength of steel from composition and processing parameters,” Integrating Mater. Manuf. Innov., vol. 3, no. 1, pp. 90–108, Dec. 2014.

[208] P. Robustillo, J. Zapata, J. A. Encinar, and J. Rubio, “ANN characterization of multi-layer reflectarray elements for contoured-beam space antennas in the Ku-band,” IEEE Trans. Antennas Propag., vol. 60, no. 7, pp. 3205–3214, Jul. 2012.

[209] A. Freni, M. Mussetta, and P. Pirinoli, “Neural network characterization of reflectarray antennas,” Int. J. Antennas Propag., vol. 2012, pp. 1–10, May 2012.

[210] F. Gunes, S. Nesil, and S. Demirel, “Design and analysis of Minkowski reflectarray antenna using 3-D CST Microwave Studio-based neural network model with particle swarm optimization,” Int. J. RF Microw. Comput.-Aided Eng., vol. 23, no. 2, pp. 272–284, Mar. 2013.

[211] P. Robustillo, J. Zapata, J. A. Encinar, and M. Arrebola, “Design of a contoured-beam reflectarray for a Eutelsat European coverage using a stacked-patch element characterized by an artificial neural network,” IEEE Antennas Wireless Propag. Lett., vol. 11, pp. 977–980, 2012.

[212] T. Shan, M. Li, S. Xu, and F. Yang, “Synthesis of reflectarray based on deep learning technique,” in Proc. Cross Strait Quad-Regional Radio Sci. Wireless Technol. Conf., pp. 1–2, Jul. 2018.

[213] M. Salucci, L. Tenuti, G. Oliveri, and A. Massa, “Efficient prediction of the EM response of reflectarray antenna elements by an advanced statistical learning method,” IEEE Trans. Antennas Propag., vol. 66, no. 8, pp. 3995–4007, Aug. 2018.

[214] D. R. Prado, J. A. Lopez-Fernandez, G. Barquero, M. Arrebola, and F. Las-Heras, “Fast and accurate modeling of dual-polarized reflectarray unit cells using support vector machines,” IEEE Trans. Antennas Propag., vol. 66, no. 3, pp. 1258–1270, Mar. 2018.

[215] D. R. Prado, J. A. Lopez-Fernandez, M. Arrebola, and G. Goussetis, “Support vector regression to accelerate design and crosspolar optimization of shaped-beam reflectarray antennas for space applications,” IEEE Trans. Antennas Propag., vol. 67, no. 3, pp. 1659–1668, Mar. 2019.

[216] D. R. Prado, J. A. Lopez-Fernandez, M. Arrebola, M. R. Pino, and G. Goussetis, “Wideband Shaped-Beam Reflectarray Design Using Support Vector Regression Analysis,” IEEE Antennas and Wireless Propagation Letters, vol. 18, no. 11, pp. 2287–2291, Nov. 2019, doi: 10.1109/LAWP.2019.2932902.

[217] P. Henttu and S. Aromaa, “Consecutive mean excision algorithm,” in Proc. IEEE 7th Int. Symp. Spread Spectr. Techn. Appl., vol. 2, pp. 450–454, Sep. 2002.

[218] H. Saarnisaari, “Consecutive mean excision algorithms in narrowband or short time interference mitigation,” in Proc. PLANS, pp. 447–454, Apr. 2004.

[219] H. Saarnisaari and P. Henttu, “Impulse detection and rejection methods for radio systems,” in Proc. MILCOM, vol. 2, pp. 1126–1131, Oct. 2003.

[220] H. G. Keane, “A new approach to frequency line tracking,” in Proc. ACSSC, vol. 2, pp. 808–812, Nov. 1991.

[221] R. Eschbach, Z. Fan, K. T. Knox, and G. Marcu, “Threshold modulation and stability in error diffusion,” IEEE Signal Process. Mag., vol. 20, pp. 39–50, Jul. 2003.

[222] H. Mustafa, M. Doroslovacki, and H. Deng, “Algorithms for emitter detection based on the shape of power spectrum,” in Proc. CISS, pp. 808–812, Mar. 2003.

[223] J. Vartiainen, J. Lehtomaki, S. Aromaa, and H. Saarnisaari, “Localization of multiple narrowband signals based on the FCME algorithm,” in Proc. NRS, vol. 1, p. 5, Aug. 2004.

[224] J. Vartiainen, J. J. Lehtomaki, and H. Saarnisaari, “Double-threshold based narrowband signal extraction,” in Proc. VTC, vol. 2, pp. 1288–1292, May 2005.

[225] J. Kim, M. Kim, I. Won, S. Yang, K. Lee, and W. Huh, “A biomedical signal segmentation algorithm for event detection based on slope tracing,” in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., pp. 1889–1892, Sep. 2009.

[226] O. A. Morozov and P. E. Ovchinnikov, “Neural network detection of MSK signals,” in Proc. IEEE 13th Digit. Signal Process. Workshop 5th IEEE Signal Process. Educ. Workshop, pp. 594–596, Jan. 2009.

[227] Y. Yuan, Z. Sun, Z. Wei, and K. Jia, “DeepMorse: A deep convolutional learning method for blind Morse signal detection in wideband wireless spectrum,” IEEE Access, vol. 7, pp. 80577–80587, 2019.

[228] H. Huang, J. Li, J. Wang, and H. Wang, “FCN-Based Carrier Signal Detection in Broadband Power Spectrum,” IEEE Access, vol. 8, pp. 113042–113051, 2020, doi: 10.1109/ACCESS.2020.3003683.

[229] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 4, pp. 640–651, Apr. 2017.

[230] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2961–2969, Oct. 2017.

[231] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention, Munich, Germany: Springer, vol. 9351, pp. 234–241, 2015.
