Search results for: prediction of publications
1646 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot will be able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, because the camera view is limited by the robot’s height and by the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process: Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The image dataset was divided into 70% of the images for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and the center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center once it became visible to the robot again. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking space and its boundary lines; and improvement of performance after the hardware/software integration is completed.
Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
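As a rough companion to the kinematics-based prediction and inverse-kinematics control described above, the sketch below outlines the idea in Python. It is not the authors' MATLAB implementation: the wheel radius, axle length, controller gains, and function names are assumptions made for illustration.

```python
import numpy as np

# Assumed robot geometry (not from the paper)
WHEEL_RADIUS = 0.03   # m
AXLE_LENGTH = 0.15    # m, distance between left and right wheels

def update_target_in_local_frame(target_xy, v, omega, dt):
    """Re-express a previously seen target in the robot's moving local frame,
    using only the commanded chassis velocities (dead reckoning)."""
    shifted = target_xy - np.array([v * dt, 0.0])     # robot moved forward by v*dt
    dtheta = omega * dt                               # robot rotated by omega*dt
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ shifted

def chassis_to_wheel_speeds(v, omega):
    """Inverse kinematics of a differential drive: chassis (v, omega) -> wheel angular speeds."""
    w_right = (v + 0.5 * AXLE_LENGTH * omega) / WHEEL_RADIUS
    w_left = (v - 0.5 * AXLE_LENGTH * omega) / WHEEL_RADIUS
    return w_left, w_right

def control_step(target_xy, k_v=0.8, k_w=2.0):
    """Simple proportional controller driving the chassis centre toward the target."""
    distance = np.hypot(target_xy[0], target_xy[1])
    heading_error = np.arctan2(target_xy[1], target_xy[0])
    v = k_v * distance
    omega = k_w * heading_error
    return chassis_to_wheel_speeds(v, omega)

# Example: predicted entrance point 0.5 m ahead and 0.1 m to the left of the robot
print(control_step(np.array([0.5, 0.1])))
```

Because the target position is propagated purely from commanded wheel speeds, any dead-reckoning drift accumulates until the parking-space centre becomes visible again, which is why the predicted point is only used as a temporary reference.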
Procedia PDF Downloads 132
1645 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches
Authors: Shani Brathwaite, Deborah Villarroel-Lamb
Abstract:
Wave run-up may be defined as the time-varying position of the landward extent of the water’s edge, measured vertically from the mean water level position. The hydrodynamics of the swash zone, and the accurate prediction of maximum wave run-up, play a critical role in the study of coastal engineering. The understanding of these processes is necessary for the modeling of sediment transport, beach recovery, and the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, there remains a lack of detailed knowledge in this area. In particular, there has been insufficient consideration of bed porosity, and ultimately of infiltration/exfiltration processes, in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It also pays particular attention to how well various empirical formulae can predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used, as well as porosity measurements collected using a double ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses. These physical parameters comprise wave and beach characteristics such as wave height, wave direction, period, beach slope, the magnitude of wave setup, and beach porosity. Most parameterizations of maximum wave run-up are described using differing parameters and do not always have good predictive capability. This study seeks to improve wave run-up prediction by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to improving the prediction of sediment transport, beach recovery, and the design of coastal engineering structures in Trinidad.
Keywords: beach porosity, empirical models, infiltration, swash, wave run-up
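A minimal sketch of the planned correlation analysis is shown below; the CSV file and column names are placeholders for illustration only, not the actual Trinidad field data set.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey table: one row per run-up event (column names are assumed)
df = pd.read_csv("runup_survey.csv")  # e.g. columns: runup_max, wave_height, period,
                                      # beach_slope, setup, porosity, infiltration_rate

predictors = ["wave_height", "period", "beach_slope", "setup", "porosity", "infiltration_rate"]
for col in predictors:
    r, p = stats.pearsonr(df[col], df["runup_max"])
    print(f"{col:18s} r = {r:+.2f}  (p = {p:.3f})")
```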
Procedia PDF Downloads 357
1644 Revealing of the Wave-Like Process in Kinetics of the Structural Steel Radiation Degradation
Authors: E. A. Krasikov
Abstract:
The dependence of material properties on neutron irradiation intensity (flux) is a key problem when using data from accelerated materials irradiation in test reactors to forecast their capacity for work under realistic (practical) operating conditions. Investigations of the dependence of reactor pressure vessel steel radiation degradation on fast neutron fluence (embrittlement kinetics) at low flux reveal instability in the form of scatter in the experimental data and the appearance of wave-like sections in the embrittlement kinetics. The disclosed oscillation of steel degradation is a sign of a cyclic self-recovery transformation of the steel structure, as takes place in self-organization processes. This assumption has received support through the discovery of similar ‘anomalous’ data in scientific publications and by means of our own additional experiments. The data obtained stimulate the search for ways to manage the radiation stability of structural steel (for example, by means of nanostructure modification to intensify the annihilation of radiation defects) for the creation of intelligent self-recovering materials. Expected results: development of radiation degradation theory and mechanisms; elaboration of more adequate radiation embrittlement models; improvement of surveillance specimen programs; development of methods and facilities for using accelerated-irradiation data to forecast capacity for work under realistic (practical) operating conditions; and a search for ways to create radiation-stable, self-recovering intelligent materials.
Keywords: degradation, radiation, steel, wave-like kinetics
Procedia PDF Downloads 304
1643 Modeling and Prediction of Zinc Extraction Efficiency from Concentrate by Operating Condition and Using Artificial Neural Networks
Authors: S. Mousavian, D. Ashouri, F. Mousavian, V. Nikkhah Rashidabad, N. Ghazinia
Abstract:
pH, temperature, and extraction time of each stage, agitation speed, and the delay time between stages affect the efficiency of zinc extraction from concentrate. In this research, the efficiency of zinc extraction was predicted as a function of the mentioned variables using artificial neural networks (ANNs). ANNs with different layer configurations were employed, and the results show that the network with 8 neurons in the hidden layer is in good agreement with the experimental data.
Keywords: zinc extraction, efficiency, neural networks, operating condition
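For illustration, the single-hidden-layer network with 8 neurons described above can be sketched with scikit-learn as follows; the data file and feature ordering are assumptions, and the original work may have used different tooling.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed feature layout: pH, temperature, extraction time, agitation speed, delay time
X = np.loadtxt("zinc_extraction.csv", delimiter=",", usecols=range(5))
y = np.loadtxt("zinc_extraction.csv", delimiter=",", usecols=5)  # extraction efficiency (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```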
Procedia PDF Downloads 545
1642 Women's Liberation: A Study of the Movement in Saudi Arabia
Authors: Rachel Hasan
Abstract:
The Kingdom of Saudi Arabia witnessed various significant social and political developments in 2018. The Crown Prince of the Kingdom of Saudi Arabia, Muhammad bin Salman, also serving as Deputy Prime Minister, has made several social, cultural, and political changes in the country under his grand National Transformation Program. The program provides a vision of a more economically viable, culturally liberal, and politically pleasant Saudi Arabia. One of the most significant and ground-breaking changes made under this program is awarding women long-awaited rights. Legislative changes were made to allow women to drive. Seemingly basic on the surface, driving rights for women nonetheless carry a much deeper meaning within the culture of Saudi Arabia and for the world outside. Ever since this right was awarded to women, world media have been interpreting the change in various colors. This paper aims to investigate the portrayal of gender rights in various online media publications and websites. The methodology applied was quantitative content analysis, used to analyze various aspects of the media's coverage of social and cultural changes with reference to women's rights. For the purpose of the research, convenience sampling was applied to eight international online articles from media websites. The articles discussed the lifting of the ban on females driving cars in Saudi Arabia as well as gender development for these women. These articles were analyzed for media frames, and various categories of analysis were developed, which highlighted the stance that was observed. Certain terms were conceptualized and operationalized and were also explained for a better understanding of the context.
Keywords: gender rights, media coverage, political change, women's liberation
Procedia PDF Downloads 109
1641 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from the effect of wave nonlinearities, arising from the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory. Comparison between the GN numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
Procedia PDF Downloads 284
1640 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or stop the progression of the disease. Currently, there are several ways to diagnose AD, such as medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, the blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and also cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), to perform data analysis and develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks, and decision trees to validate biomarkers in SAS Enterprise Miner. The data generated from ADNI contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients suffering from Alzheimer’s disease (AD). Participants’ samples were separated into two group pairs, healthy versus MCI and healthy versus AD, respectively. We used the two group pairs to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. Then we built models with 4 machine learning methods; the best AUCs of the two group pairs are 0.991 and 0.709, respectively. We want to stress the importance of the simple, less invasive, common blood (plasma) test for the early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study on the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
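A hedged sketch of the described pipeline (t-test filtering of plasma biomarkers followed by one of the classifiers, here logistic regression, evaluated by AUC) is given below; the file name, column names, and split ratio are assumptions, and the original analysis was performed in SAS rather than Python.

```python
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

# Hypothetical frame: plasma biomarker columns plus a binary label (1 = AD, 0 = healthy)
df = pd.read_csv("adni_plasma.csv")
biomarkers = [c for c in df.columns if c != "label"]

# t-test filtering: keep biomarkers that differ between the two groups (p < 0.05)
keep = [c for c in biomarkers
        if ttest_ind(df.loc[df.label == 1, c], df.loc[df.label == 0, c]).pvalue < 0.05]

X = StandardScaler().fit_transform(df[keep])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["label"], test_size=0.3,
                                          stratify=df["label"], random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```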
Procedia PDF Downloads 322
1639 Prediction of Turbulent Separated Flow in a Wind Tunnel
Authors: Karima Boukhadia
Abstract:
In the present study, the subsonic flow in an asymmetrical diffuser was simulated numerically using the code CFX 11.0 and its grid generator ICEM CFD. Two turbulence models were tested: k-ε and k-ω SST. The results obtained showed that the k-ε model markedly over-estimates the velocity close to the wall, whereas the k-ω SST model is qualitatively in good agreement with the experimental results of Buice and Eaton (1997). They also showed that the separation and reattachment of the fluid on the tilted wall strongly depend on its angle of inclination, and that the length of the separation zone increases with the angle of inclination of the lower wall of the diffuser.
Keywords: asymmetric diffuser, separation, reattachment, tilt angle, separation zone
Procedia PDF Downloads 576
1638 Prediction of Thermodynamic Properties of N-Heptane in the Critical Region
Authors: Sabrina Ladjama, Aicha Rizi, Azzedine Abbaci
Abstract:
In this work, we use the crossover model to formulate a comprehensive fundamental equation of state for the thermodynamic properties of several n-alkanes in the critical region that extends to the classical region. This equation of state is constructed on the basis of a comparison of selected measurements of pressure-density-temperature data and of isochoric and isobaric heat capacities. The model can be applied in a wide range of temperatures and densities around the critical point for n-heptane. It is found that the developed model represents most of the reliable experimental data accurately.
Keywords: crossover model, critical region, fundamental equation, n-heptane
Procedia PDF Downloads 474
1637 Atomistic Study of Structural and Phases Transition of TmAs Semiconductor, Using the FPLMTO Method
Authors: Rekab Djabri Hamza, Daoud Salah
Abstract:
We report first-principles calculations of the structural and magnetic properties of the TmAs compound in the zinc blende (B3) and CsCl (B2) structures, employing density functional theory (DFT) within the local density approximation (LDA). We use the full-potential linear muffin-tin orbital (FP-LMTO) method as implemented in the LMTART-MINDLAB code. Results are given for the lattice parameter (a), bulk modulus (B), and its first derivative (B’) in the different structures NaCl (B1) and CsCl (B2). The most important result of this work is the prediction of the possibility of a transition from cubic rocksalt NaCl (B1) to CsCl (B2) at 32.96 GPa for TmAs. These results use the LDA approximation.
Keywords: LDA, phase transition, properties, DFT
Procedia PDF Downloads 117
1636 Inherited Eye Diseases in Africa: A Scoping Review and Strategy for an African Longitudinal Eye Study
Authors: Bawa Yusuf Muhammad, Musa Abubakar Kana, Aminatu Abdulrahman, Kerry Goetz
Abstract:
Background: Inherited eye diseases are disorders that affect, globally, 1 in 1000 people. The six main world populations have created databases containing information on eye genotypes. Aim: The aim of the scoping review was to mine and present the available information to date on the genetics of inherited eye diseases within the African continent. Method: The literature search strategy was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). PubMed and Google Scholar were searched for articles on inherited eye diseases from inception to 20th June 2022. Both original and review articles that report on inherited, genetic or developmental/congenital eye diseases within the African continent were included in the research. Results: A total of 1162 citations were obtained, but only 37 articles were reviewed based on the inclusion and exclusion criteria. The highest output of publications on inherited eye diseases comes from South Africa and Tunisia (about 43%), followed by Morocco and Egypt (27%), then Sub-Saharan Africa and North Africa (13.5%), while the remaining articles (16.5%) originated from Nigeria, Ghana, Mauritania, Cameroon, Zimbabwe, and a combined article between Zimbabwe and Cameroon. Glaucoma and inherited retinal disorders represent the most studied diseases, followed by albinism and congenital cataracts, respectively. Conclusion: Despite the growing research from Tunisia, Morocco, Egypt and South Africa, Sub-Saharan Africa remains an almost unexplored region for the genetics of eye diseases.
Keywords: inherited eye diseases, Africa, scoping review, longitudinal eye study
Procedia PDF Downloads 57
1635 Realising the Socio-Economic Rights of Refugees Under Human Rights Law: A Case Study of South Africa
Authors: Taguekou Kenfack Alexie
Abstract:
For a long time, refugee protection has constituted one of the main concerns of the international community as a whole and of the South African government in particular. The focus of this paper is on the challenges refugees face in accessing their rights in South Africa. In particular, it analyses the legal framework for the protection of the socio-economic rights of refugees under international, regional and domestic law, and the extent to which these rights have been realized. The main hypothesis of the study centered on the fact that the social protection of refugees in South Africa is in conformity with international standards. To test this hypothesis, a qualitative research method was applied. Refugee-related legal instruments were analyzed, as well as academic publications, organizational reports and internet sources. The data analyzed revealed that there has been enormous progress in meeting international standards in the areas of education, emergency relief and assistance, and the protection of women and refugee children. The results also indicated that much remains to be desired in such areas as nutrition, shelter, health care, freedom of movement and, very importantly, employment and social security. The paper also seeks to address the obstacles which prevent the proper treatment of refugees and to make recommendations as to how the South African government can better regulate the treatment of refugees living in its territory. Recommendations include the amendment of the legal instruments that provide the normative framework for protection and the improvement of protection policies to reflect changing dynamics.
Keywords: international community, refugee, socioeconomic rights, social protection
Procedia PDF Downloads 282
1634 A Systematic Literature Review on the Prevalence of Academic Plagiarism and Cheating in Higher Educational Institutions
Authors: Sozon, Pok Wei Fong, Sia Bee Chuan, Omar Hamdan Mohammad
Abstract:
Owing to the widespread phenomenon of plagiarism and cheating in higher education institutions (HEIs), it is now difficult to ensure academic integrity and quality education. Moreover, the COVID-19 pandemic has intensified the issue by shifting educational institutions into virtual teaching and assessment modes. Thus, there is a need to carry out an extensive and holistic systematic review of the literature to highlight both the prevalence and the forms of plagiarism and cheating among HEIs. This paper systematically reviews the literature concerning academic plagiarism and cheating in HEIs to determine the most common forms and to suggest strategies for resolution and for boosting the academic integrity of students. The review included 45 articles and publications for the period from February 12, 2018, to September 12, 2022, in the Scopus database, aligned with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines in the selection, filtering, and reporting of the papers for review from which conclusions can be drawn. Based on the results, among the studies reviewed, 48% of the quantitative results of students were plagiarized or obtained through cheating, with 84% coming from the fields of Humanities. Moreover, Psychology and Social Sciences studies accounted for 9% and 7% of the articles, respectively. Based on the results, individual factors, institutional factors, and social and cultural factors have contributed to plagiarism and cheating cases in HEIs. This issue can be addressed through the establishment of ethical and moral development initiatives and modern academic policies and guidelines, supported by technological strategies for testing.
Keywords: plagiarism, cheating, systematic review, academic integrity
Procedia PDF Downloads 74
1633 A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario
Authors: Vinod Kumar Jaysaval, Prateek Agarwal
Abstract:
Performance prediction of airborne radar is a challenging and cumbersome task in a clutter scenario for different types of targets. A generalized model is required to predict the performance of radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to bring out the performance of airborne radar for different Pulse Repetition Frequencies (PRFs) as well as different types of targets. The model provides a platform to bring out different subsystem parameters for different applications and performance requirements under different types of clutter terrain.
Keywords: airborne radar, blind zone, clutter, probability of detection
Procedia PDF Downloads 470
1632 Dairy Products on the Algerian Market: Proportion of Imitation and Degree of Processing
Authors: Bentayeb-Ait Lounis Saïda, Cheref Zahia, Cherifi Thizi, Ri Kahina Bahmed, Kahina Hallali Yasmine Abdellaoui, Kenza Adli
Abstract:
Algeria is the leading consumer of dairy products in North Africa. This is a fact. However, the nutritional quality of these products remains unknown. The aim of this study is to characterise the dairy products available on the Algerian market in order to assess whether they constitute a healthy and safe choice. To do this, we collected data on the labelling of 390 dairy products, including cheese, yoghurt, UHT milk and milk drinks, infant formula and dairy creams. We assessed their degree of processing according to the NOVA classification, as well as the proportion of imitation products. The study was carried out between March 2020 and August 2023. The results show that 88% are ultra-processed: 84% for 'cheese', 92% for dairy creams, 92% for 'yoghurt', 100% for infant formula, 92% for margarines and 36% for UHT milk/dairy drinks. As for imitation/analogue dairy products, the study revealed the following proportions: 100% for infant formula, 78% for butter/margarine, 18% for UHT milk/milk-based drinks, 54% for cheese, 2% for camembert and 75% for dairy cream. The harmful effects of consuming ultra-processed products on long-term health are increasingly documented in dozens of publications. The findings of this study sound the alarm about the health risks to which Algerian consumers are exposed. Various scientific, economic and industrial bodies need to be involved in order to safeguard consumer health in both the short and long term. Food awareness and education campaigns should be organised.
Keywords: dairy, UPF, NOVA, yoghurt, cheese
Procedia PDF Downloads 35
1631 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a groundbreaking method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is meticulously designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by significant volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We meticulously collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among different financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market moves. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In the comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
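To make the GCN-LSTM pairing concrete, the following is a minimal PyTorch sketch rather than the authors' implementation; the tensor shapes, layer sizes, node-pooling step, and the placeholder adjacency matrix are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):              # a_hat: (N, N), h: (..., N, in_dim)
        return torch.relu(a_hat @ self.linear(h))

class GCNLSTM(nn.Module):
    """Per time step, a GCN mixes information across assets; an LSTM then models the sequence."""
    def __init__(self, n_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)     # e.g. next-day return of a target asset

    def forward(self, a_hat, x):               # x: (batch, time, n_nodes, n_features)
        h = self.gcn(a_hat, x)                 # (batch, time, n_nodes, gcn_dim)
        h = h.mean(dim=2)                      # pool over graph nodes -> (batch, time, gcn_dim)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])           # prediction from the last time step

# Toy shapes: 2 samples, 30 trading days, 10 assets, 6 features each (all assumed)
a_hat = torch.eye(10)                          # placeholder normalized adjacency
x = torch.randn(2, 30, 10, 6)
print(GCNLSTM(n_features=6)(a_hat, x).shape)   # -> torch.Size([2, 1])
```

In practice, the normalized adjacency would be built from the co-movement and sentiment correlations described in the abstract rather than from an identity matrix.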
Procedia PDF Downloads 65
1630 Determination of the Effective Economic and/or Demographic Indicators in Classification of European Union Member and Candidate Countries Using Partial Least Squares Discriminant Analysis
Authors: Esra Polat
Abstract:
Partial Least Squares Discriminant Analysis (PLSDA) is a statistical method for classification that consists of a classical Partial Least Squares Regression (PLSR) in which the dependent variable is a categorical one expressing the class membership of each observation. PLSDA can be applied in many cases when classical discriminant analysis cannot be applied, for example, when the number of observations is low and the number of independent variables is high. When there are missing values, PLSDA can be applied to the data that are available. Finally, it is well suited to cases where multicollinearity between the independent variables is high. The aim of this study is to determine the economic and/or demographic indicators that are effective in grouping the 28 European Union (EU) member countries and 7 candidate countries (including the potential candidates Bosnia and Herzegovina (BiH) and Kosova) by using the data set obtained from the World Bank database for 2014. Leaving political issues aside, the analysis is only concerned with the economic and demographic variables that have a potential influence on a country’s eligibility for EU entrance. Hence, in this study, both the performance of the PLSDA method in classifying the countries correctly into their pre-defined groups (candidate or member) and the differences between the EU countries and candidate countries in terms of these indicators are analyzed. As a result of the PLSDA, a percentage correctness of 100% indicates that all 35 countries are classified correctly. Moreover, the most important variables that determine the statuses of member and candidate countries in terms of economic indicators are identified as 'external balance on goods and services (% GDP)', 'gross domestic savings (% GDP)' and 'gross national expenditure (% GDP)', which means that for 2014 the economic structure of a country is the most important determinant of EU membership. Subsequently, the model was validated to prove its predictive ability by using the data set for 2015. For the prediction sample, 97.14% of the countries are correctly classified. An interesting result is obtained only for BiH, which is still a potential candidate for the EU but is predicted as an EU member when the indicator data set for 2015 is used as the prediction sample. Although BiH has made a significant transformation from a war-torn country to a semi-functional state, ethnic tensions, nationalistic rhetoric and political disagreements are still evident, which inhibit Bosnian progress towards the EU.
Keywords: classification, demographic indicators, economic indicators, European Union, partial least squares discriminant analysis
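The PLSDA mechanics described above (a PLSR fitted to dummy-coded class membership, with the predicted class taken as the largest response) can be sketched as follows; the input files, the number of components, and the scaling step are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical arrays: 35 rows (countries) x p World Bank indicators; layout is assumed
X = np.loadtxt("indicators_2014.csv", delimiter=",")   # economic/demographic indicators
y = np.loadtxt("membership.csv", dtype=int)            # 1 = EU member, 0 = candidate

scaler = StandardScaler().fit(X)
Y = np.column_stack([y == 0, y == 1]).astype(float)    # dummy-coded class membership

pls = PLSRegression(n_components=2)
pls.fit(scaler.transform(X), Y)

pred = pls.predict(scaler.transform(X)).argmax(axis=1)
print("Percentage correctness:", 100.0 * np.mean(pred == y))

# Indicator importance can be read from the weights of the predictive components
print("Top indicators on component 1:", np.argsort(-np.abs(pls.x_weights_[:, 0]))[:3])
```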
Procedia PDF Downloads 280
1629 Cities Idioms Together with ICT and Countries Interested in the Smart City: A Review of Current Status
Authors: Qasim HamaKhurshid HamaMurad, Normal Mat Jusoh, Uznir Ujang
Abstract:
The concept of the city with an infrastructure of information and communication technology (ICT) embraces several definitions depending on the meaning of the word 'smart': intelligent city, smart city, knowledge city, ubiquitous city, sustainable city, digital city. Many definitions of the city exist, but this chapter explores which one has been most widely acknowledged. From the literature analysis, it emerges that 'smart city' is the terminology most used in the literature, across digital databases, to indicate the smartness of a city. This paper explores the research on the smart city in seven main digital databases and journals from January 2015 to February 2020 in order to (a) examine, over time, the causes of the smart city phenomenon and related concepts in the literature of the last five years; (b) review the terminology, to see how and where the smart city is specified and how the different definitions relate; (c) consider, geographically, where the greatest concentrations of smart cities are in the world and whether Malaysia has been engaging with the smart city; and (d) establish how many papers about smart cities were published from Malaysia from 2015 to 2020. Three steps are followed to accomplish this goal. (1) A systematic literature review search strategy was built to gather a representative subset of papers on the smart city and other definitions, utilizing Google Scholar, Elsevier, Scopus, ScienceDirect, IEEE Xplore, Web of Science and Springer for January 2015 to February 2020. (2) A bibliometric map was formed based on the bibliometric evaluation, using the mapping technique of VOSviewer to visualize differences. (3) The VOSviewer application program was used to build the initial clusters. The bibliometric map visualizes the analytical findings, which targeted word harmony.
Keywords: bibliometric research, smart city, ICT, VOSviewer, urban modernization
Procedia PDF Downloads 202
1628 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India is the second largest country after China in terms of the number of diabetic patients, yet, to the best of our knowledge, not a single risk score for complications has ever been investigated. Diabetic retinopathy is a serious complication and is the topmost reason for visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, or IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of DR. Cox proportional hazard regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic curve (ROC). Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. The optimal cut-off point is chosen by Youden’s index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors and by patients themselves for self-evaluation. Furthermore, the five-year probabilities can be applied as well to forecast and monitor the condition of patients. This provides immense benefit in the real-world application of DR prediction in T2DM.
Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus
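A minimal sketch of a Cox proportional hazards risk model with elastic-net style regularization, using the lifelines library, is shown below; the column names, penalty settings, and 60-month horizon are assumptions, not the study's actual specification.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient table: follow-up time (months), DR event flag, and risk factors (names assumed)
df = pd.read_csv("t2dm_cohort.csv")  # columns: time, dr_event, age, hba1c, sbp, duration, bmi, ...

# Elastic-net style regularization guards against over-fitting (l1_ratio=0 -> ridge, 1 -> lasso)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="dr_event")
cph.print_summary()

# Five-year (60-month) probability of remaining DR-free for each patient; 1 - S(60) is the risk
surv_60 = cph.predict_survival_function(df, times=[60]).T.iloc[:, 0]
df["five_year_dr_risk"] = 1.0 - surv_60.values
print(df[["five_year_dr_risk"]].head())
```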
Procedia PDF Downloads 186
1627 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, households' economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. In this study, the design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and the model training was conducted using scikit-learn Python library functions. Model evaluation was executed using various metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms have accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that some of the features, like ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of their house, ‘Region’ they lived in, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household, are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts’ evaluation. Overall, the study shows ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction
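A hedged sketch of one branch of the described workflow (Random Forest with grid-searched hyper-parameters, evaluated by accuracy-type metrics and AUC-ROC) is given below; the file and feature names are placeholders rather than the actual EDHS variables, and a binary wealth label is assumed for simplicity.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report, roc_auc_score

# Hypothetical EDHS-style table; column names here are assumptions for illustration
df = pd.read_csv("edhs_households.csv")
X = pd.get_dummies(df.drop(columns=["wealth_status"]))   # one-hot encode categorical features
y = df["wealth_status"]                                   # e.g. 1 = wealthy, 0 = not (binary framing assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid={"n_estimators": [200, 500], "max_depth": [None, 10, 20]},
                    scoring="roc_auc", cv=5)
grid.fit(X_tr, y_tr)

best_rf = grid.best_estimator_
print(classification_report(y_te, best_rf.predict(X_te)))
print("AUC-ROC:", roc_auc_score(y_te, best_rf.predict_proba(X_te)[:, 1]))
```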
Procedia PDF Downloads 38
1626 Classification of Germinatable Mung Bean by Near Infrared Hyperspectral Imaging
Authors: Kaewkarn Phuangsombat, Arthit Phuangsombat, Anupun Terdwongworakul
Abstract:
Hard seeds will not grow and can cause mold during the sprouting process. Thus, the hard seeds need to be separated from the normal seeds. Near infrared hyperspectral imaging in the range of 900 to 1700 nm was implemented to develop a model, by partial least squares discriminant analysis, to discriminate the hard seeds from the normal seeds. The orientation of the seeds was also studied to compare the performance of the models. The model based on the hilum-up orientation achieved the best result, giving a coefficient of determination of 0.98 and a root mean square error of prediction of 0.07, with a classification accuracy of 100%.
Keywords: mung bean, near infrared, germinatability, hard seed
Procedia PDF Downloads 305
1625 CFD Modeling of Pollutant Dispersion in a Free Surface Flow
Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Said, Hervé Bournot, Georges Le Palec
Abstract:
In this work, we determine the turbulent dynamic structure of pollutant dispersion in a two-phase free surface flow. The numerical simulation was performed using ANSYS Fluent. The flow study is three-dimensional, unsteady and isothermal. The study domain was fitted with a rectangular obstacle to analyze its influence on the hydrodynamic variables and the progression of the pollutant. The numerical results show that the hydrodynamic model provides a prediction of the dispersion of a pollutant in an open channel flow and reproduces the recirculation and the trapping of the pollutant downstream, near the obstacle.
Keywords: CFD, free surface, pollutant dispersion, turbulent flows
Procedia PDF Downloads 545
1624 Understanding of Malaysian Community Disaster Resilience: Australian Scorecard Adaptation
Authors: Salizar Mohamed Ludin, Mohd Khairul Hasyimi Firdaus, Paul Arbon
Abstract:
Purpose: This paper aims to develop Malaysian Government and community-level critical thinking, planning and action for improving community disaster resilience by reporting Phase 1, Part 1 of a larger community disaster resilience measurement study about adapting the Torrens Resilience Institute Australian Community Disaster Resilience Scorecard to the Malaysian context. Methodology: Participatory action research encouraged key people involved in managing the six most affected areas in the 2014 flooding of Kelantan in Malaysia’s north-east to participate in discussions about adapting and self-testing the Australian Community Disaster Resilience Scorecard to measure and improve their communities’ disaster resilience. Findings: Communities need to strengthen their disaster resilience through better communication, cross-community cooperation, maximizing opportunities to compare their plans, actions and reactions with those reported in research publications, and aligning their community disaster management with reported best practice internationally, while acknowledging the need to adapt such practice to local contexts. Research implications: There is a need for a Malaysia-wide, simple-to-use, standardized disaster resilience scorecard to improve the quality, quantity and capability of healthcare and emergency services’ preparedness, and to facilitate urgent reallocation of aid. Value: This study is the first of its kind in Malaysia. The resulting community disaster resilience guideline, based on participants’ feedback about the Kelantan floods and scorecard self-testing, has the potential for further adaptation to suit contexts across Malaysia, as well as demonstrating how the scorecard can be adapted for international use.
Keywords: community disaster resilience, CDR Scorecard, participatory action research, flooding, Malaysia
Procedia PDF Downloads 336
1623 Prediction of Mental Health: Heuristic Subjective Well-Being Model on Perceived Stress Scale
Authors: Ahmet Karakuş, Akif Can Kilic, Emre Alptekin
Abstract:
A growing number of studies have been conducted to determine how well-being may be predicted using well-designed models. It is necessary to investigate the background of each feature in order to construct a viable Subjective Well-Being (SWB) model. We have picked suitable variables from the literature on SWB that are applicable to real-world data. The goal of this work is to evaluate the model by feeding it with SWB characteristics and then categorizing the stress levels using machine learning methods, to see how well it performs on a real dataset. Although this is a multiclass classification problem, we have achieved significant metric scores, which may be taken into account for a specific task.
Keywords: machine learning, multiclassification problem, subjective well-being, perceived stress scale
Procedia PDF Downloads 130
1622 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design
Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi
Abstract:
Due to material flow during processing, short fiber reinforced composite structures obtained by injection or compression molding generally present strong spatial microstructure variation. On the other hand, the quasi-static, dynamic, and fatigue behavior of these materials is highly dependent on microstructure parameters such as the fiber orientation distribution. Indeed, because of the complex damage mechanisms, the design of SFRC structures is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing the prediction of the damage behavior of real structures as a function of the microstructure's spatial distribution. To this aim, a statistical damage criterion including strain rate and fatigue effects at the local scale is introduced into a Mori and Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimental intrinsic link between damage under monotonic loading and fatigue life in order to build an abacus giving the Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of SFRC submitted to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. The latter is then used to fill out microstructure-dependent material cards in finite element analysis for design optimization in the case of complex loading histories. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (the PSA 3008 tailgate). The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emissions.
Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect
Procedia PDF Downloads 107
1621 Hidden Markov Model for the Simulation Study of Neural States and Intentionality
Authors: R. B. Mishra
Abstract:
The Hidden Markov Model (HMM) has been used in the prediction and determination of states that generate different neural activations as well as mental working conditions. This paper addresses two applications of the HMM. One is to determine the optimal sequence of states for the two neural states, Active (AC) and Inactive (IA), given the three emissions (observations), which correspond to the Not Working (NW), Waiting (WT) and Working (W) conditions of human beings. The other is the determination of the optimal sequence of intentionality, i.e., Believe (B), Desire (D), and Intention (I), as the states, with the three observations NW, WT and W. The computational results are encouraging and useful.
Keywords: hidden markov model, believe desire intention, neural activation, simulation
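As an illustration of the first application (finding the most likely AC/IA state sequence from NW/WT/W observations), a minimal Viterbi decoding sketch is given below; the start, transition, and emission probabilities are placeholders, not the values used in the paper.

```python
import numpy as np

states = ["AC", "IA"]                      # Active, Inactive
obs_names = ["NW", "WT", "W"]              # Not Working, Waiting, Working

# Illustrative model parameters (placeholders, not the paper's values)
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],              # P(next state | AC)
                  [0.4, 0.6]])             # P(next state | IA)
emit = np.array([[0.1, 0.3, 0.6],          # P(observation | AC)
                 [0.6, 0.3, 0.1]])         # P(observation | IA)

def viterbi(obs):
    """Most likely hidden-state sequence for an observation sequence (indices into obs_names)."""
    n, m = len(obs), len(states)
    logp = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        scores = logp[t - 1][:, None] + np.log(trans)     # score[from, to]
        back[t] = scores.argmax(axis=0)
        logp[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [states[i] for i in reversed(path)]

# Example: observed NW, WT, W, W  ->  most likely AC/IA sequence
print(viterbi([0, 1, 2, 2]))
```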
Procedia PDF Downloads 376
1620 Hagios Spyridon Church in Selymbria and Its Particular Standing in Middle Byzantine Architecture
Authors: Görkem Günay, Bilge Ar
Abstract:
Selymbria is an ancient maritime city, approximately 60 km west of Constantinople. Although it was a particularly important settlement during the Byzantine period, our knowledge about its Byzantine layer is rather sketchy. On the other hand, one of the Byzantine churches of Selymbria, namely Hagios Spyridon, which survived until the beginning of the 20th century, deserves special attention. The church is mainly known via textual and visual data from the end of the 19th and the beginning of the 20th century. These documents, together with some architectural pieces which most probably belonged to the church, indicate that Hagios Spyridon Church was built on the 'simple domed octagon' plan-scheme. Nothing from the building is preserved in situ today. However, this small church helps to fill a very important gap in the history of Middle Byzantine architecture and occupies a notable place in the on-going discussion of the origins of the 'domed octagon' churches of the Helladic paradigm and their link with the capital. This study aims to reexamine the now lost church of Hagios Spyridon in the context of the architectural developments of the Middle Byzantine period. In the presentation, the exact location and the architecture of the church will be clarified using the existing documents and the publications of previous scholars. Some new architectural pieces which possibly belonged to the church will be introduced, and interpretations of the existing restitution drawings will be made. The church will be architecturally compared with the oldest known example of the plan-scheme, Nea Moni on Chios, and its later local copies. The study of the Hagios Spyridon Church of Selymbria will, hopefully, contribute to the discussion of the possible influence of the capital on the plan-scheme and will help us to ask further questions about the close relations between Constantinopolitan and provincial architecture.
Keywords: Hagios Spyridon church, insular domed octagon, middle Byzantine architecture, Selymbria
Procedia PDF Downloads 203
1619 A Review on Artificial Neural Networks in Image Processing
Authors: B. Afsharipoor, E. Nazemi
Abstract:
Artificial neural networks (ANNs) are powerful tools for prediction that can be trained on a set of examples and are thus useful for nonlinear image processing. The present paper reviews several papers regarding applications of ANNs in image processing to shed light on the advantages and disadvantages of ANNs in this field. Different steps in the image processing chain, including pre-processing, enhancement, segmentation, object recognition, image understanding and optimization using ANNs, are summarized. Furthermore, results on using multiple artificial neural networks (MANNs) are presented.
Keywords: neural networks, image processing, segmentation, object recognition, image understanding, optimization, MANN
Procedia PDF Downloads 406
1618 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for the anomaly detection of Air Handling Unit (AHU) power consumption patterns. There is ample research work on the use of GAM for the prediction of power consumption at the office-building and nation-wide levels. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments of digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data of the AHU power consumption and cooling load of a building on an education campus in Singapore between Jan 2018 and Aug 2019 to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
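A minimal sketch of the GAM-based banding described above, using the pygam library, is shown below; the feature set (cooling load and hour of day), file names, and the 95% interval width are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from pygam import LinearGAM, s

# Hypothetical history exported from the digital twin (column names assumed)
hist = pd.read_csv("ahu_history.csv")          # columns: power_kw, cooling_load_kw, hour
X = hist[["cooling_load_kw", "hour"]].values
y = hist["power_kw"].values

# Smooth terms for each feature; gridsearch picks the smoothing penalties
gam = LinearGAM(s(0) + s(1)).gridsearch(X, y)

# Flag new readings that fall outside the 95% prediction band
new = pd.read_csv("ahu_live.csv")
X_new = new[["cooling_load_kw", "hour"]].values
lower, upper = gam.prediction_intervals(X_new, width=0.95).T
new["anomaly"] = (new["power_kw"] < lower) | (new["power_kw"] > upper)
new["deviation"] = np.maximum(new["power_kw"] - upper, lower - new["power_kw"]).clip(lower=0)
print(new[new["anomaly"]][["power_kw", "deviation"]])
```

The deviation column mirrors the described rule-based step: its magnitude can be thresholded to decide the next course of action for the facilities manager.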
Procedia PDF Downloads 154
1617 The Impact of Technology on Media Content Regulation
Authors: Eugene Mashapa
Abstract:
The information age has witnessed countless unprecedented technological developments, which call for regulatory capabilities that can match these cutting-edge technological trends. These changes have impacted patterns in the production, distribution, and consumption of media content, a space with which the Film and Publication Board (FPB) is concerned. Consequently, the FPB is keen to understand the nature and impact of these technological changes on media content regulation. This exploratory study sought to investigate how content regulators in high- and middle-income economies have adapted to the changes in this space, seeking insights into technological and operational innovations that facilitate continued relevance in this fast-changing environment. The study is aimed at developing recommendations that could assist and inform the organisation in regulating media content as it evolves. The overall research strategy of this analysis is applied research, and the analytical model adopted is a mixed research design guided by both qualitative and quantitative research instruments. It was revealed in the study that the FPB was significantly impacted by the unprecedented technological advancements in the media regulation space. Additionally, there exists a need for the FPB to understand the current and future penetration of 4IR technology in the industry and its impact on media governance and policy implementation. This will range from reskilling officials to align with the required technological skills, to developing technological innovations, as well as adopting co-regulatory or self-regulatory arrangements together with content distributors, as more content is distributed in higher volumes and with increased frequency. Importantly, initiating an interactive learning process for both FPB employees and the general public can assist the regulator and improve the FPB's operational efficiency and effectiveness.
Keywords: media, regulation, technology, film and publications board
Procedia PDF Downloads 106