11th International Conference on Wireless, Mobile Network & Applications (WiMoA 2019)

December 21~22, 2019, Dubai, UAE

Accepted Papers


Prediction and Causality analysis of Churn using Deep Learning
Darshan Adiga, Shabir Bhat, Muzaffar Shah and Viveka Vyeth, Datoin, Bengaluru, India
ABSTRACT
In almost every type of business, the retention stage of the customer life cycle is critical because, according to market theory, it is more expensive to attract new customers than to retain existing ones. Thus, a churn prediction system that can accurately predict ahead of time whether a customer will churn in the foreseeable future, and that also provides enterprises with the possible reasons for that churn, is an extremely powerful tool for any marketing team. In this paper, we propose an approach to predict customer churn for non-subscription-based business settings. We suggest a set of generic features that can be extracted from the sales and payment data of almost all non-subscription-based businesses and used to predict customer churn. We use a neural-network-based multilayer perceptron for prediction. The proposed method achieves an F1-score of 80% and a recall of 85%, comparable to the accuracy of churn prediction for subscription-based business settings. We also propose a system for causality analysis of churn, which predicts a set of causes that may have led to the customer churn and helps derive customer retention strategies.
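
As a rough illustration of the kind of classifier described above (not the authors' actual feature set or network), a minimal scikit-learn multilayer perceptron on synthetic churn data might look like this:

```python
# Minimal sketch of an MLP churn classifier (illustrative only; the paper's
# feature set, architecture and data are not reproduced here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
# Hypothetical per-customer features derived from sales/payment history,
# e.g. recency, frequency, monetary value, days since last payment.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)  # 1 = churned

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("F1:", f1_score(y_test, pred), "recall:", recall_score(y_test, pred))
```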
KEYWORDS

Churn Prediction, Causality Analysis, Machine Learning, Business Analytics, Deep Neural Networks


Fuzzy-controlled Genetic algorithm for Fault Detection in Distributed Systems
Krishna Prasad, Department of Computer Science & Engineering, SRM University, Andhra Pradesh, India
ABSTRACT
This work investigates a fuzzy-genetic approach for fault detection in distributed systems with at most (N-1)/2 faulty systems out of N systems tested. The study also reviews the different types of faults occurring in distributed systems and existing fault detection techniques. A fault can occur for many reasons, such as link or resource failures, and must be detected and handled so that the system keeps working smoothly and accurately. These faults need to be detected and recovered from by suitable techniques, according to the requirements. An efficient fault detector can avoid losses due to system crashes by triggering the required fault tolerance mechanism. This work shows how a fuzzy-controlled genetic algorithm can be applied to detect faults in distributed systems.
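
For readers unfamiliar with the combination, the following is a minimal, hypothetical sketch of a genetic algorithm whose mutation rate is steered by a simple fuzzy-style rule on population diversity; the chromosome encoding and fitness function are stand-ins, not the paper's fault-detection formulation:

```python
# Fuzzy-controlled GA sketch: mutation rate adapted by a crude fuzzy-style rule
# on population diversity.  The fitness below (agreement with a hidden bit
# pattern) only stands in for a real fault-detection objective.
import random

random.seed(0)
N = 20                                                       # number of systems (bits)
hidden_faults = [random.randint(0, 1) for _ in range(N)]     # stand-in fault pattern

def fitness(chromosome):
    return sum(c == h for c, h in zip(chromosome, hidden_faults))

def fuzzy_mutation_rate(diversity):
    # Rule base: LOW diversity -> HIGH mutation, MEDIUM -> MEDIUM, HIGH -> LOW.
    if diversity < 0.1:
        return 0.20
    if diversity < 0.3:
        return 0.05
    return 0.01

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    diversity = len({tuple(p) for p in pop}) / len(pop)
    rate = fuzzy_mutation_rate(diversity)
    parents = pop[:10]                                       # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N)                         # one-point crossover
        child = a[:cut] + b[cut:]
        child = [1 - g if random.random() < rate else g for g in child]
        children.append(child)
    pop = children

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", N)
```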
KEYWORDS

Fault Detection, Genetic algorithms, Fuzzy mutation, Distributed System


Technical Analysis of Selenium and Cypress as Functional Automation Framework for Modern Web Application Testing
Fatini Mobaraya and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
ABSTRACT
Automation testing has become increasingly necessary due to the nature of current software development projects, which comprise complex applications with shorter development times. Most companies in the industry have used Selenium extensively as a functional automation tool to verify that their web applications' functionalities work as expected. However, Selenium's limitation with waitTime has significantly affected its test execution and efficiency. Hence, this research study focuses on experimenting with a new automation tool on the market, Cypress, to overcome this limitation of Selenium. The study further compares the test execution results of Selenium and Cypress to observe each tool's effectiveness in writing and executing automation test scripts. The results of this study will be helpful in determining a better tool for automating dynamic modern web applications and provide insight into Cypress as a future automation testing tool.
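
As context for the wait-time limitation discussed above, the sketch below shows an explicit wait in Selenium's Python bindings (the URL and element locator are hypothetical); Cypress, by contrast, retries its commands and assertions automatically:

```python
# Minimal Selenium explicit-wait sketch (Python bindings).  The page URL and the
# element id "submit" are hypothetical examples, not from the study.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")               # hypothetical page
    # Instead of a fixed sleep, poll for up to 10 s until the element is clickable.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit"))      # hypothetical locator
    )
    button.click()
finally:
    driver.quit()
```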
KEYWORDS

Automation Testing, Regression Test Suite, Selenium, Cypress, JavaScript Automation Framework


Pseudo Random Number Generator Based on Look-up Table and Chaotic Maps
Mousa Farajallah1, Mohammed Abu Judeha1, Mohammed Abu Taha1, Omar Salhab1 and Noor Jweihana1, Nikita Rechal and Sooriyan Aliyoglu2, 1College of Information Technology and Computer Engineering, Palestine Polytechnic University, Hebron, Palestine and 2College of Applied Professions, Palestine Polytechnic University, Hebron, Palestine
ABSTRACT
Pseudo-random number generators (PRNGs) play an important role in many cryptographic applications. Many network security algorithms and protocols that are based on cryptography also require random or pseudo-random numbers at certain points. A PRNG is an algorithm used to generate a random sequence that has the same behavior as truly random numbers. Each generated number should not depend on the previously generated numbers, so that such numbers cannot be predicted. However, not all PRNGs are suitable for cryptographic applications. Therefore, various statistical tests can be applied to the obtained sequence in order to evaluate it and compare it with truly random sequences. A PRNG that passes all statistical tests can be considered a statistically secure PRNG. Based on some properties of chaos, such as randomness, unpredictability, and high sensitivity to the secret key, we propose a new PRNG. The proposed PRNG is based on a modified chaotic map presented in Farajallah's PhD thesis and on a proposed lookup table, which is created in such a way that the stored numbers are not duplicated in the same row, column, or diagonal. The proposed map and lookup table are used to produce non-linear and non-invertible functions, which are the main requirements of any secure PRNG. As a result, the behavior of the proposed PRNG simulates the behavior of true random number generators (TRNGs). The results obtained from the cryptographic analysis and the standard statistical National Institute of Standards and Technology (NIST) tests indicate the robustness of the proposed PRNG. Furthermore, it is robust against known cryptographic attacks and has strong non-linearity compared to other systems. A comparative study of efficiency, in terms of both speed and robustness against cryptanalysis, was carried out, and the results demonstrate the superiority of the proposed algorithm.
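
To convey the general idea only, here is a minimal chaos-based PRNG built on the standard logistic map; it uses neither the authors' modified map nor their lookup table, and it is not cryptographically secure:

```python
# Illustrative chaos-based PRNG using the standard logistic map, NOT the
# modified map or lookup table proposed in the paper.
def logistic_prng(seed: float, n_bytes: int, r: float = 3.99) -> bytes:
    """Return n_bytes pseudo-random bytes derived from logistic-map iterates."""
    x = seed                      # "secret key": 0 < seed < 1
    out = bytearray()
    for _ in range(100):          # discard transient iterates
        x = r * x * (1.0 - x)
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # quantise one iterate to a byte
    return bytes(out)

if __name__ == "__main__":
    print(logistic_prng(seed=0.654321, n_bytes=16).hex())
```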
KEYWORDS

PRNG, Cryptography, Randomness, Statistical Tests, Chaos.


Methodology and Architecture for Safety Management
Matthieu Carré1,2, Ernesto Exposito1 and Javier Ibañez-Guzmán1,2, 1Univ Pau & Pays Adour, E2S UPPA, LIUPPA, EA3000, Anglet, 64600, France and 2Renault S.A.S, 1 av. du Golf, Guyancourt, 78288, France
ABSTRACT
The design of complex systems, as in the case of autonomous vehicles, requires a specialized systems engineering methodology and an adapted modelling framework. In particular, the integration of non-functional requirements as important as safety requires from this methodological framework a well-adapted semantic expression of constraints, as well as their traceability during all phases of analysis, design and implementation. This paper focuses on the study of model-based autonomous system design and investigates the design flows and initiatives grappling with this complex computational model. The specialization of the ARCADIA methodology is illustrated in a real industrial case.
KEYWORDS

Model Based System Engineering, Safety, Autonomous vehicles, System Engineering analysis, System Engineering design.


Appropriating Theatre Methodology for Robot Behaviour
Julienne A. Greer, Department of Theatre Arts, University of Texas at Arlington, Arlington, Texas
ABSTRACT
Theatre arts is, and has long been, an effective tool for engaging humans in deeply emotional human behaviour. The discipline of theatre is a global and cultural performing arts practice expressing the human condition. The appropriation of the human-human relational experience for the human-robot relational experience can be articulated through an understanding of authentic human behaviour and the structure of dramatic theatrical elements: behaviour, gesture and relationship.
KEYWORDS

Human-Robot Interaction, Affective robotics, Social Robots, Applied Theatre


Performance Evaluation of Multi-Agent Agriculture System Using Markov Processes
Hayam Seireg1, Ahmed Elmahalawy2, Yasser Omar3, Adel S. El-Fishawy4 and Fathi E. Abd El-Samie5, 1,2Department of Computer Science & Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt, 3Arab Academy for Science, Technology & Maritime Transport, Cairo, Egypt, 4,5Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
ABSTRACT
The multi-agent agriculture system is a new AI application intended to address the problem of food shortage in the world and to narrow the gap between agricultural production and people's needs. In this paper we explore the modelling and simulation of this new trend in agriculture using AI together with Markov processes. Simulation is used to check the performance of the proposed model, and a steady-state analysis of the model is given. The steady-state performance indicates that agriculture automation can close the gap between actual production and people's needs. The main goal of using multi-agent technology is to support the objectives of the Food and Agriculture Organization (FAO) and to cover the gap between people's needs and agricultural production.
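
As a small illustration of the steady-state analysis mentioned above, the following sketch computes the stationary distribution of a made-up transition probability matrix (not the paper's model of the multi-agent agriculture system):

```python
# Steady-state analysis of a discrete-time Markov chain: solve pi P = pi with
# sum(pi) = 1.  The 3x3 matrix below is an illustrative example only.
import numpy as np

# Row-stochastic transition probability matrix P (rows sum to 1).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stationarity equations + normalisation
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state distribution:", pi)
```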
KEYWORDS

Markov processes, Steady state, Multi-agent system, Transition probability matrix, Complex real world systems.


A Survey of Random Forest Pruning Techniques
Minu Rose and Hani Ragab Hassen, Department of Mathematical and Computer Sciences, Heriot-Watt University, UAE
ABSTRACT
Random forest is an ensemble machine learning method developed by Leo Breiman in 2001. Since then, it has been considered a state-of-the-art solution in many machine learning applications. Compared with other ensemble methods, random forests exhibit superior predictive performance. However, empirical and statistical studies show that the random forest algorithm generates an unnecessarily large number of base decision trees. This can increase computational cost and prediction time, and occasionally decrease effectiveness. In this paper, we survey existing random forest pruning techniques and compare their performance. The research analyses the scope for improving random forest performance through techniques such as generating diverse and accurate decision trees, selecting a high-performing subset of decision trees from the forest, and using different genetic algorithms.
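
One simple pruning strategy of the kind surveyed, keeping only the individually best-performing trees on a validation split, can be sketched as follows; this is illustrative and not any specific paper's algorithm:

```python
# Post-hoc random forest pruning sketch: rank trees by individual validation
# accuracy, keep the top 20, and vote with that subset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def tree_pred(tree, X):
    # Map each tree's probability output back to the forest's class labels.
    return rf.classes_[np.argmax(tree.predict_proba(X), axis=1)]

scores = [accuracy_score(y_val, tree_pred(t, X_val)) for t in rf.estimators_]
top = [rf.estimators_[i] for i in np.argsort(scores)[-20:]]

# Majority vote (averaged class probabilities) over the pruned subset.
proba = np.mean([t.predict_proba(X_val) for t in top], axis=0)
pruned_pred = rf.classes_[np.argmax(proba, axis=1)]

print("full forest accuracy :", accuracy_score(y_val, rf.predict(X_val)))
print("pruned (20 trees)    :", accuracy_score(y_val, pruned_pred))
```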
KEYWORDS

Ensemble Learning, Random Forest, Pruning Techniques


Comparison of Classification Techniques for Wall Following Robot Navigation And Improvements to the KNN Algorithm
Sarah Madi and Riadh Baba-Ali, LRPE, USTHB, BP 32 El Alia, Bab Ezzouar, Algiers 16111, Algeria
ABSTRACT
Autonomous navigation is an important feature that allows a robot to move independently from one point to another without a tele-operator. This feature makes mobile robots useful in many tasks that require transportation, exploration, surveillance, guidance or inspection. Furthermore, autonomous robots deal with real-time environments that tend to be complex, non-linear and partially observed. They also operate with limited memory resources and tight time constraints. In this paper, we present an investigation related to mobile robot navigation. We first compare a group of classification algorithms using real traces of wall-following robot navigation. Then we focus on the k-Nearest Neighbors (KNN) algorithm, improving it to make it more applicable to autonomous robot navigation. We apply a nearest-neighbor set reduction technique to reduce the high running time of KNN. The results indicate that KNN is a competitive algorithm, especially after its running time is decreased significantly, by a factor of 19, while retaining KNN's existing strengths. Results are further improved by applying an attribute selection method.
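
As an illustration of nearest-neighbor set reduction (here Hart's condensed nearest-neighbor rule, which may differ from the technique used in the paper), the sketch below reduces the training set and compares KNN accuracy on synthetic data standing in for the wall-following traces:

```python
# Condensed nearest-neighbor reduction followed by KNN classification.
# A synthetic dataset stands in for the wall-following robot traces.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1500, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def condense(X, y):
    """Keep only the training points needed to classify the rest correctly (Hart's rule)."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        for i in range(len(X)):
            if i not in keep and nn.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)
                nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
                changed = True
    return np.array(keep)

idx = condense(X_tr, y_tr)
full = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
reduced = KNeighborsClassifier(n_neighbors=3).fit(X_tr[idx], y_tr[idx])
print("prototypes kept:", len(idx), "of", len(X_tr))
print("full KNN accuracy   :", full.score(X_te, y_te))
print("reduced KNN accuracy:", reduced.score(X_te, y_te))
```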
KEYWORDS

Machine Learning, Wall-Following Robot Navigation, CNN, Supervised Learning, KNN.


Self-organizing Algorithm for Massive Tractography Datasets Clustering With Outliers Elimination Based on Multiple Species Flocking Model
Chekir Amira, LRPE Laboratory, FEI, USTHB University, Algiers, Algeria
ABSTRACT
The study of white matter (WM) connectivity is of general interest in neuroscience and is achieved by analysing and clustering the streamlines that compose a tractography dataset. Clustering WM streamlines is challenging because of the complexity and huge size of WM tractography datasets, the variety of streamlines they contain, and the presence of outliers. Several WM clustering methods have been proposed in the literature to overcome these constraints; however, these methods remain static: once a streamline is assigned to a cluster, it stays there. In this paper, we propose a new distributed multi-agent framework that improves and adapts a bio-inspired model called Multiple Species Flocking (MSF) for WM streamline clustering and automatic outlier elimination. The basic MSF rules are modified and adapted to perform streamline clustering in higher dimensions. Specifically, each streamline is associated with a mobile agent that moves in a virtual space to form groups following the defined rules. Only agents assigned to similar streamlines form a flock, whereas agents assigned to streamlines dissimilar to all others are sidelined and considered outliers. The swarm-intelligence features of the approach, such as adaptivity, parallelism, dynamism, and decentralization, make our algorithm scalable to large datasets, very fast and accurate, as confirmed by experimental results on synthetic and real datasets.
KEYWORDS

White Matter, Clustering, Outliers detection, Multiple Species Flocking Model, Swarm intelligence, multi-agent system.


Deviations in Linear Dynamical Systems Subjected to Uncertainty: Upper Estimates
Mikhail Khlebnikov, Laboratory of Adaptive and Robust Systems, V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
ABSTRACT

In this talk, a linear dynamical system subject to uncertainty in the system matrix is considered. Using the linear matrix inequality (LMI) technique, we obtain upper bounds on the deviations in such linear systems. An LMI-based stabilizing feedback procedure is proposed that guarantees "as small as possible" deviations. The results of numerical simulations demonstrate the low conservatism of the obtained bounds.
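
For readers unfamiliar with the LMI machinery, the sketch below solves a basic Lyapunov feasibility LMI with CVXPY; the deviation bounds and feedback design of the talk itself are not reproduced:

```python
# Basic LMI feasibility sketch: find P > 0 with A^T P + P A < 0 (Lyapunov
# stability certificate), solved with CVXPY.  Illustrative only.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # an example stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", P.value)
```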

KEYWORDS

Linear Dynamical Systems, Deviations, Uncertainty, Robust Stability, LMIs


Speckle Denoising of Multipolarization Images by Hybrid Filters
Mohamed Yahia, Tarig Ali, Md Maruf Mortula, and Arampola Mudiyanselage Arampola, American University of Sharjah, Sharjah, UAE
ABSTRACT
Speckle filtering in synthetic aperture radar (SAR) and polarimetric SAR (PolSAR) images is essential for extracting significant information about homogeneous extended targets. Obtaining a good compromise between a high equivalent number of looks (ENL) and spatial detail preservation remains a challenge. In this paper, we validate the use of the infinite-number-of-looks (INLP) filtering technique on the multipolarization images (i.e., hh, hv and vv) by demonstrating their analogy with the eigenvalues of the covariance matrix. The filtered pixels obtained by a classical filter serve as inputs to the INLP filter. Results show that the hybrid filter outperforms conventional filtering techniques, such as the boxcar and refined sigma filters, in terms of noise reduction and spatial detail preservation.
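
As a minimal illustration of the classical filtering stage only (the INLP filter and the eigenvalue-based hybridization are not reproduced), the sketch below applies a boxcar filter to a synthetic speckled intensity image and reports the ENL on a homogeneous patch:

```python
# Boxcar (moving-average) speckle filtering sketch on a synthetic 1-look
# intensity image standing in for a real SAR channel (hh, hv or vv).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
clean = np.ones((128, 128)) * 100.0
clean[32:96, 32:96] = 300.0                             # bright extended target
speckled = clean * rng.exponential(1.0, clean.shape)    # multiplicative 1-look speckle

boxcar = uniform_filter(speckled, size=7)               # 7x7 boxcar filter

def enl(img):
    """Equivalent number of looks over a homogeneous patch: mean^2 / variance."""
    patch = img[:24, :24]
    return patch.mean() ** 2 / patch.var()

print("ENL before filtering:", round(enl(speckled), 2))
print("ENL after boxcar    :", round(enl(boxcar), 2))
```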
KEYWORDS

Synthetic aperture radar images, speckle filtering, boxcar filter, refined sigma filter, INLP filter


Enhancement in Leach-RFID Using Contention Avoidance Algorithm
Tarun Sharma and Sanchita Arora, Department of Computer Science & Engineering, India
ABSTRACT
A wireless sensor network (WSN) consists of a large number of sensor nodes and can be used as an important tool for collecting data in different situations. A major issue in WSNs is the energy consumption of the network. In this paper we present the RFID (radio frequency identification) protocol, in which nodes operate in three modes: sleep, active and ready. Energy consumption is lower in LEACH-RFID because some nodes are in sleep mode while others are in the active and ready modes. However, packet loss is higher in the LEACH-RFID protocol due to the non-synchronization of nodes. To overcome this problem, we use a contention avoidance algorithm. The simulation shows better results compared to the previous ones.
KEYWORDS

WSN, Leach-RFID, RTS, CTS


Implementation of VLAN via Wireless Networks Using OPNET Modeler
Tareq Al-Khraishi and Muhannad Quwaider, Department of Computer Engineering, Jordan University of Science and Technology, Irbid, Jordan
ABSTRACT
A VLAN is a logical rather than physical connection that allows hosts to be grouped together in the same broadcast domain, so that packets are only delivered to ports that belong to the same VLAN. By partitioning the network into VLANs, we can improve the efficiency of a wireless network and save bandwidth. Furthermore, implementing VLANs greatly improves wireless network security by decreasing the number of hosts that receive copies of frames broadcast by switches, so hosts holding critical data can be kept on a separate VLAN. This paper compares a plain wireless network with a wireless network that has VLANs deployed. The proposed network is evaluated in terms of average throughput and delay using file transfer under heavy traffic and web browsing applications. The simulation was carried out using the OPNET 14.5 Modeler, and the results show that using VLANs over the wireless network improved performance by decreasing traffic and thereby minimizing delay. In addition, implementing VLANs reduces network throughput, because the traffic that is received and forwarded has a positive relationship with throughput. Furthermore, we investigated improving the throughput of the wireless VLAN network by using ad hoc routing protocols. An evaluation and comparison of common ad hoc routing protocols such as AODV, DSR, OLSR, TORA and GPR is conducted to show the effect of the proposed VLAN on performance results such as throughput and delay.
KEYWORDS

WLAN, OPNET, AODV, Throughput, VLAN, Routing Protocols, Access Point.


An Overview of Auto-configuration Protocols in Mobile Ad Hoc Wireless Multi-hop Network
Adel R. Alharbi, Computer Engineering Department, University of Tabuk, Saudi Arabia
ABSTRACT
An ad hoc wireless IP multi-hop network is a collection of wireless, IP-capable nodes that start in an unknown physical formation in the vicinity of a wireless IP portal to a wired IP network. While some wireless nodes might be in radio range of the portal, other nodes might only be in radio range of one or more other nodes that in turn may be in range of the portal and/or other wireless nodes. IP datagrams travel from one node to another until they are delivered to the portal or the destination node. All wireless nodes are assumed to be one or more hops away from the wireless IP portal. This paper reviews a method to auto-configure a mobile ad hoc network and to route IP traffic using existing mobile ad hoc network routing protocols. This method has good characteristics in terms of protocol overhead, robustness, convergence time, and scalability. The mobile ad hoc network routing protocol that best meets these characteristics can be chosen for the given topology and operational profile. Finally, this method efficiently uses the address space allotted to the DHCP server.
KEYWORDS

Wireless LAN, communication systems routing, mobile communications, auto-configuration protocols, MANET.


Comparison of Time Series Prediction of Healthcare Emergency Department Indicators with ARIMA and Prophet
Diego Duarte1 and Julio Faerman2, 1FLAS (Faculty of Liberal Arts & Sciences) University of Greenwich, London, UK and 2Universidade Autonoma de Barcelona, Plaça Cívica, Barcelona, Spain
ABSTRACT

Predicting emergency department (ED) indicators as time series may benefit hospital planning, improving quality of care and optimising resources. This motivates the analysis of models that can forecast relevant key performance indicators (KPIs) for identifying future pressure. This paper analyses the Autoregressive Integrated Moving Average (ARIMA) method in comparison with Prophet, an additive regression forecasting model. The dataset analysed consists of hourly valued hospital indicators: Wait to be Seen Major in ED, Number of Attendances Major in ED, Unallocated Patients in ED with a DTA, and Number of Beds Available on the Medical Acute Unit. A comparison of the ARIMA and Prophet prediction models is the focus. Each model is designed to provide better predictions for different time series characteristics. The best prediction for each indicator is assessed based on accuracy, reliability bands and indicator meta-information.
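
A minimal sketch of the kind of comparison described, using statsmodels' ARIMA and the prophet package on a synthetic hourly series (the hospital KPIs and tuned model orders are not reproduced):

```python
# ARIMA vs Prophet forecast comparison on a synthetic hourly indicator.
# Requires the `statsmodels` and `prophet` packages.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from prophet import Prophet

rng = np.random.default_rng(0)
idx = pd.date_range("2019-01-01", periods=24 * 60, freq="H")   # 60 days, hourly
daily = 10 * np.sin(2 * np.pi * idx.hour / 24)                 # daily cycle
series = pd.Series(50 + daily + rng.normal(0, 3, len(idx)), index=idx)
train, test = series[:-24], series[-24:]                       # hold out the last day

# ARIMA forecast (order chosen arbitrarily for illustration).
arima = ARIMA(train, order=(2, 0, 1)).fit()
arima_fc = arima.forecast(steps=24)

# Prophet forecast (expects a dataframe with ds/y columns).
df = train.reset_index()
df.columns = ["ds", "y"]
m = Prophet(daily_seasonality=True).fit(df)
future = m.make_future_dataframe(periods=24, freq="H")
prophet_fc = m.predict(future)["yhat"].tail(24).to_numpy()

mae = lambda a, b: np.mean(np.abs(np.asarray(a) - np.asarray(b)))
print("ARIMA   MAE:", round(mae(arima_fc, test), 2))
print("Prophet MAE:", round(mae(prophet_fc, test), 2))
```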

KEYWORDS

Predicting, Healthcare, ARIMA, Prophet & Time Series


Ransomware Mitigation Using Dynamic Features and Machine Learning
Hiba Zuhair, Department of Systems Engineering, College of Information Engineering, Al-Nahrain University, Baghdad, Iraq
ABSTRACT

The escalating advancement of ransomware attacks has motivated researchers to enhance their detection and protection approaches to safeguard users and their information systems across cyberspace. However, such anti-ransomware approaches are still defeated by new ransomware behaviours in real-time applications, owing to partial ransomware characterization, limited decision settings, the need for frequent database updates, and inefficient security and privacy settings of the targeted information systems. Thus, advanced strategies are needed to boost the efficiency and effectiveness of existing anti-ransomware approaches and to mitigate ransomware threats and damage. For this purpose, a hybrid machine-learning-based ransomware detector is proposed in this paper. The proposed detector hybridizes the decision margins of Naïve Bayes and Decision Tree learners to holistically characterize and accurately categorize zero-day ransomware attacks with minimal errors. The proposed ransomware detector shows high performance and resilience on a flow of real-life data. Furthermore, the superiority of the proposed detector is demonstrated in a comparative experiment against prominent ransomware filters.
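
As a hedged sketch of hybridising Naive Bayes and Decision Tree learners (here via scikit-learn soft voting, which may differ from the paper's combination rule), with synthetic data standing in for dynamic behavioural features:

```python
# Hybrid Naive Bayes + Decision Tree detector via soft voting (illustrative;
# the paper's features, combination rule and ransomware corpus are not reproduced).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Stand-in for dynamic behavioural features (API calls, file-system activity, ...).
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)   # 1 = ransomware
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

hybrid = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=8, random_state=0))],
    voting="soft")                      # average the two learners' probabilities
hybrid.fit(X_tr, y_tr)
print(classification_report(y_te, hybrid.predict(X_te),
                            target_names=["benign", "ransomware"]))
```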

KEYWORDS

Anomaly-Based Detection, Signature-Based Detection, Dynamic Features, Machine Learning, Ransomware Attacks.