Search results for: data security assurance
24318 Penguins Search Optimization Algorithm for Chaotic Synchronization System
Authors: Sofiane Bououden, Ilyes Boulkaibet
Abstract:
In terms of the security of the information signal, the meta-heuristic Penguins Search Optimization Algorithm (PeSOA) is applied to synchronize chaotic encryption communications, given the sensitive dependence on initial conditions of the chaotic generator oscillator. The objective of this paper is to use the PeSOA algorithm to explore the search space through random, iterative processes for the synchronization of symmetric keys at both transmission and reception. Simulation results show the effectiveness of the PeSOA algorithm in generating the symmetric keys of the encryption process and achieving synchronization.
Keywords: meta-heuristic, PeSOA, chaotic systems, encryption, synchronization optimization
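As a rough illustration of the idea, the sketch below uses a penguin-style population search to recover the secret parameter of a logistic-map key-stream generator by minimizing the mismatch between transmitter and receiver streams. The fitness function, population size, and shrinking dive-amplitude schedule are assumptions made for demonstration, not the authors' exact PeSOA formulation.

```python
# Hypothetical sketch: metaheuristic search for a chaotic-oscillator
# parameter that synchronizes transmitter and receiver key streams.
import random

def logistic_stream(r, x0=0.37, n=64):
    """Generate a chaotic key stream from the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

TX_R = 3.91                      # transmitter's secret map parameter
target = logistic_stream(TX_R)   # observed transmitter stream

def sync_error(r):
    """Mean absolute mismatch between receiver and transmitter streams."""
    rx = logistic_stream(r)
    return sum(abs(a - b) for a, b in zip(rx, target)) / len(target)

# Penguin-style search: candidates "dive" (perturb) around the best
# solution found so far, with shrinking dive amplitude over generations.
best_r, best_err = None, float("inf")
population = [random.uniform(3.57, 4.0) for _ in range(20)]
for generation in range(100):
    amplitude = 0.1 * (1 - generation / 100)   # oxygen-reserve analogue
    for r in population:
        err = sync_error(r)
        if err < best_err:
            best_r, best_err = r, err
    population = [min(4.0, max(3.57, best_r + random.gauss(0, amplitude)))
                  for _ in range(20)]

print(f"recovered r = {best_r:.4f}, sync error = {best_err:.6f}")
```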
Procedia PDF Downloads 195
24317 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of such data remains crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP
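As a rough illustration of the pipeline described, the sketch below extracts unigrams and bigrams from a handful of invented forum posts and fits an LDA topic model with scikit-learn. The posts, topic count, and vectorizer settings are assumptions for demonstration, not the study's actual corpus or configuration.

```python
# Minimal sketch of bigram extraction plus topic modelling on forum text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "best feeding schedule for laying hens in winter",
    "advice on breeding gilts for the first time",
    "local slaughter rules for smallholders",
    "safe disposal of fallen stock on a small farm",
]

# Unigrams and bigrams, dropping common English stop words.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
doc_term = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```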
Procedia PDF Downloads 99
24316 Longitudinal Analysis of Internet Speed Data in the Gulf Cooperation Council Region
Authors: Musab Isah
Abstract:
This paper presents a longitudinal analysis of Internet speed data in the Gulf Cooperation Council (GCC) region, focusing on the most populous city of each of the six countries: Riyadh, Saudi Arabia; Dubai, UAE; Kuwait City, Kuwait; Doha, Qatar; Manama, Bahrain; and Muscat, Oman. The study utilizes data collected from the Measurement Lab (M-Lab) infrastructure over a five-year period from January 1, 2019, to December 31, 2023. The analysis includes downstream and upstream throughput data for the cities, covering significant events such as the launch of 5G networks in 2019, the COVID-19-induced lockdowns in 2020 and 2021, and the subsequent recovery period and return to normalcy. The results showcase substantial increases in Internet speeds across the cities, highlighting improvements in both download and upload throughput over the years. All the GCC countries have achieved above-average Internet speeds that can conveniently support various online activities and applications with an excellent user experience.
Keywords: internet data science, internet performance measurement, throughput analysis, internet speed, measurement lab, network diagnostic tool
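A minimal sketch of the kind of longitudinal aggregation described above: yearly median download and upload throughput per city. The file name and column names are assumptions; M-Lab NDT data is typically queried via BigQuery rather than read from a local file.

```python
import pandas as pd

df = pd.read_csv("mlab_ndt_gcc.csv", parse_dates=["test_date"])
# Assumed columns: test_date, city, download_mbps, upload_mbps.

df["year"] = df["test_date"].dt.year
trend = (
    df[df["year"].between(2019, 2023)]
      .groupby(["city", "year"])[["download_mbps", "upload_mbps"]]
      .median()                       # median is robust to outlier tests
      .round(1)
)
print(trend.loc["Riyadh"])            # five-year throughput trend, one city
```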
Procedia PDF Downloads 62
24315 A Web Service Based Sensor Data Management System
Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh
Abstract:
The deployment of wireless sensor networks has increased rapidly; however, with the increased capacity and diversity of sensors, applications ranging from biological and environmental to military generate tremendous volumes of data, and more attention is placed on distributed sensing than on how to manage, analyze, retrieve, and understand the data generated. This makes it quite difficult to process live sensor data and run concurrent control and updates, because sensor data are heavyweight, complex, and slow to process. This work focuses on developing a web service platform for automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. This work also creates a web-service-based sensor data management system to monitor the physical movement of an individual wearing wireless network sensor technology (SunSPOT). The sensor detects the movement of that individual by sensing the acceleration along the X, Y, and Z axes, and then sends the sensed readings to a database interfaced with an internet platform. The collected sensed data determine the posture of the person, such as standing, sitting, and lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML, and MySQL. This system allows close real-time monitoring of an individual and captures their physical activity details without requiring physical presence for in-situ measurement, enabling remote observation instead of time-consuming in-person checks. These details can help in evaluating an individual's physical activity and generating feedback on medication. It can also help in keeping track of any mandatory physical activities required of the individual. These evaluations and feedback can help in maintaining a better health status for the individual and providing improved health care.
Keywords: HTML, java, javascript, MySQL, sunspot, UML, web-based, wireless network sensor
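A minimal sketch of the posture-detection logic described above: classifying standing, sitting, or lying down from three-axis acceleration. The thresholds and axis conventions are illustrative assumptions, not the SunSPOT system's actual calibration (the system itself is implemented in Java).

```python
import math

def classify_posture(ax, ay, az):
    """Classify posture from acceleration (in g) along X, Y, Z."""
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    if magnitude < 0.3:               # free-fall-like reading: sensor fault
        return "unknown"
    if abs(az) > 0.8 * magnitude:     # gravity mostly along the torso axis
        return "standing"
    if abs(ay) > 0.8 * magnitude:     # gravity mostly along the thigh axis
        return "sitting"
    return "lying down"

print(classify_posture(0.05, 0.10, 0.98))  # -> standing
```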
Procedia PDF Downloads 212
24314 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding and approach to providing healthcare. Healthcare data mining is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. This field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to get involved early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Algorithms such as k-means grouped patients, and association rule mining identified connections between treatments and patient responses. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
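An illustrative sketch of the predictive modelling mentioned above: a random forest predicting disease risk from historical patient features. The synthetic features and labels are assumptions for demonstration, not the project's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))      # e.g. age, BMI, blood pressure, glucose
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]   # per-patient risk scores
print(f"AUC = {roc_auc_score(y_test, probs):.2f}")
```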
Procedia PDF Downloads 76
24313 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification to build models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning: it selects a small subset of features, reducing the space according to a chosen evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier for discriminative feature selection; features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea; the proposed method achieves an average accuracy of about 95% across different datasets. Experimental results illustrate that the proposed algorithm increases the accuracy of disease prediction.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
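A minimal sketch of wrapper-based feature selection with a genetic algorithm and a KNN evaluator, as outlined above. The population size, rates, generation count, and benchmark dataset are illustrative assumptions, not the paper's exact heuristic.

```python
import random
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if not any(mask):
        return 0.0
    cols = [i for i, keep in enumerate(mask) if keep]
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, cols], y, cv=3).mean()

population = [[random.random() < 0.5 for _ in range(n_features)]
              for _ in range(20)]
for _ in range(15):                       # generations
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                 # truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_features)
        child = a[:cut] + b[cut:]         # one-point crossover
        i = random.randrange(n_features)  # single-gene mutation
        child[i] = not child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(f"selected {sum(best)}/{n_features} features, "
      f"accuracy = {fitness(best):.3f}")
```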
Procedia PDF Downloads 316
24312 Liability of AI in Workplace: A Comparative Approach Between Shari’ah and Common Law
Authors: Barakat Adebisi Raji
Abstract:
In the workplace, Artificial Intelligence (AI) has, in recent years, emerged as a transformative technology that revolutionizes how organizations operate and perform tasks. It is a technology with a significant impact on transportation, manufacturing, education, cyber security, robotics, agriculture, healthcare, and many other sectors. By harnessing AI technology, workplaces can enhance productivity, streamline processes, and make more informed decisions. Given the potential of AI to change the way we work and its impact on the labor market in years to come, employers understand that it entails legal challenges and risks despite its inherent advantages. Therefore, as AI continues to integrate into various aspects of the workplace, understanding the legal and ethical implications becomes paramount. Also central to this study is the question of who is held liable where AI commits any default: the person (company) who created the AI, the person who programmed the AI algorithm, or the person who uses the AI? Thus, the aim of this paper is to provide a detailed overview of how AI-related liabilities are addressed under each legal tradition and to shed light on potential areas of accord and divergence between the two legal cultures. The objectives of this paper are to (i) examine the ability of Common law and Islamic law to accommodate the issues and damage caused by AI in the workplace and the legality of compensation for such injury sustained; (ii) discuss the extent to which AI can be described as a legal personality able to bear responsibility; and (iii) examine the similarities and disparities between Common Law and Islamic jurisprudence on the liability of AI in the workplace. The methodology adopted in this work was qualitative, using a purely doctrinal research method in which information is gathered from primary and secondary sources of law, such as comprehensive materials found in journal articles, expert-authored books, and online news sources. A comparative legal method was also used to juxtapose the approaches of Islam and Common Law. The paper concludes that since AI, in its current legal state, is not recognized as a legal entity, operators or manufacturers of AI should be held liable for any damage that arises, and the determination of who bears the responsibility should depend on the circumstances surrounding each scenario. The study recommends the granting of legal personality to AI systems, the establishment of legal rights and liabilities for AI, the establishment of a holistic Islamic virtue-based AI ethics framework, and the consideration of Islamic ethics.
Keywords: AI, healthcare, agriculture, cyber security, common law, Shari'ah
Procedia PDF Downloads 37
24311 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters for the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
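A minimal sketch of the majority-vote ensemble described above, combining the three supervised learners the study lists. Synthetic data and default hyperparameters stand in for the real student records and the tuned settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("svm", SVC(random_state=0)),
        ("ann", MLPClassifier(max_iter=500, random_state=0)),
    ],
    voting="hard",          # majority vote over predicted class labels
)
score = cross_val_score(ensemble, X, y, cv=5).mean()
print(f"majority-vote accuracy = {score:.3f}")
```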
Procedia PDF Downloads 108
24310 Foundation of the Information Model for Connected-Cars
Authors: Hae-Won Seo, Yong-Gu Lee
Abstract:
Recent progress in the next generation of automobile technology is geared towards incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence that provides comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected-cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model that is necessary to define the driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents the time-variant traffic in the streets. Additionally, the modeled data structure is exemplified in a navigational scenario using UML. Optimal driving route searching is also discussed using the proposed data structure under dynamically changing road conditions.
Keywords: connected-car, data modeling, route planning, navigation system
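A hypothetical sketch of a time-variant road-condition record such as the information model above might define. The field names and access pattern are assumptions made for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RoadSegmentCondition:
    segment_id: str          # unique road-segment identifier
    observed_at: datetime    # time stamp of the observation
    mean_speed_kmh: float    # average traffic speed in the window
    vehicle_count: int       # vehicles observed in the window
    reporter_id: str         # car that shared the observation

@dataclass
class SegmentHistory:
    """Time-ordered observations let a route planner weight edges by recency."""
    segment_id: str
    observations: list = field(default_factory=list)

    def latest_speed(self) -> float:
        return max(self.observations,
                   key=lambda o: o.observed_at).mean_speed_kmh

history = SegmentHistory("A59-12")
history.observations.append(RoadSegmentCondition(
    "A59-12", datetime(2024, 5, 1, 8, 30), 42.0, 117, "car-7731"))
print(history.latest_speed())  # -> 42.0
```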
Procedia PDF Downloads 374
24309 The Impact of Artificial Intelligence on Human Rights Development
Authors: Kerols Seif Said Botros
Abstract:
The relationship between development and human rights has been debated for a long time. Various principles, from the right to development to development-based human rights, are applied to understand the dynamics between these two concepts. Despite the measures taken, the connection between development and human rights remains vague. Nevertheless, interest in the connection between these two concepts, and in the need to strengthen human rights, has increased in recent years. It is then examined whether a right to sustainable development is recognized in various human rights instruments, which lends support to the claim cited above. The study then cites domestic and international human rights treaties, as well as jurisprudence and regulations defining human rights institutions, to support this view.
Keywords: sustainable development, human rights, the right to development, the human rights-based approach to development, environmental rights, economic development, social sustainability, human rights protection, human rights violations, workers' rights, justice, security
Procedia PDF Downloads 54
24308 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors consist of important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification is done here using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction
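An illustrative sketch of the neighbourhood-based feature idea above: each pixel is described by its own intensity plus the mean of its 3x3 neighbourhood, then classified with an SVM. The tiny synthetic "image" and waterbody labels are assumptions for demonstration, not the paper's dataset.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.random((32, 32))            # stand-in for one spectral band
water_mask = image < 0.35               # pretend ground truth (waterbody)

neigh_mean = uniform_filter(image, size=3)   # 3x3 neighbourhood mean
X = np.column_stack([image.ravel(), neigh_mean.ravel()])
y = water_mask.ravel().astype(int)

svm = SVC(kernel="rbf").fit(X, y)
pred = svm.predict(X).reshape(image.shape)
print(f"training accuracy = {(pred == water_mask).mean():.2f}")
```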
Procedia PDF Downloads 340
24307 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants
Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann
Abstract:
Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
Keywords: automation, data collection, performance monitoring, recycling, refrigerators
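A hedged sketch of the attribute-extraction step described above: a pre-trained vision backbone supplies image features for a downstream classifier, and OCR reads the rating plate. The model choice, file name, and overall wiring are illustrative assumptions, not the plant's actual pipeline.

```python
import torch
from torchvision import models, transforms
from PIL import Image
import pytesseract

# Pre-trained backbone as a frozen feature extractor (transfer learning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # drop the ImageNet classifier head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

img = Image.open("fridge_0001.jpg").convert("RGB")   # assumed plant photo
with torch.no_grad():
    features = backbone(preprocess(img).unsqueeze(0))

# A small head trained on plant data would map these features to a
# material category; here we only show the call shape.
print(features.shape)                   # torch.Size([1, 512])

# OCR of the rating plate to recover model / refrigerant information.
label_text = pytesseract.image_to_string(img)
print(label_text)
```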
Procedia PDF Downloads 164
24306 Sales Patterns Clustering Analysis on Seasonal Product Sales Data
Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho
Abstract:
As a seasonal product is only in demand for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on observed sales data clustering; the proposed method helps in determining distribution strategies.
Keywords: clustering, distribution, sales pattern, seasonal product
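A minimal sketch of the sales-pattern clustering described above: weekly sales curves per outlet, normalized so that clustering compares shape rather than volume, then grouped with k-means. The synthetic curves and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weeks = np.arange(12)
early_peak = np.exp(-0.5 * ((weeks - 3) / 2.0) ** 2)   # demand peaks early
late_peak = np.exp(-0.5 * ((weeks - 9) / 2.0) ** 2)    # demand peaks late

outlets = np.vstack(
    [early_peak + rng.normal(0, 0.05, 12) for _ in range(20)] +
    [late_peak + rng.normal(0, 0.05, 12) for _ in range(20)]
)
# Normalize each outlet's curve to compare sales-pattern shape only.
outlets /= outlets.sum(axis=1, keepdims=True)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(outlets)
print(labels[:20], labels[20:])   # two distinct sales-pattern groups
```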
Procedia PDF Downloads 595
24305 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
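An illustrative sketch of the bootstrap comparison described above: refit a logistic model on resampled data and inspect the spread of a coefficient. Plain logistic regression stands in for the study's conditional logistic regression, and the data are synthetic assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                          # one risk factor
p = 1 / (1 + np.exp(-(0.8 * x - 0.2)))
y = rng.binomial(1, p)

betas = []
for _ in range(100):                            # 100 bootstrap samples
    idx = rng.integers(0, n, n)                 # resample with replacement
    model = sm.Logit(y[idx], sm.add_constant(x[idx])).fit(disp=False)
    betas.append(model.params[1])               # slope on the risk factor

print(f"beta = {np.mean(betas):.2f} +/- {np.std(betas):.2f}")
```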
Procedia PDF Downloads 493
24304 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data, illustrating their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and the future generation of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 363
24303 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: a Preliminary Study, Perceptions of Prospective Teachers
Authors: A. S. V Polgampala, F. Huang
Abstract:
This is an overview of what is entailed in an evaluation and the issues to be aware of when class observation is being done. This study examined the effects of evaluating the teaching practice of a 7-day ‘block teaching’ session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 teachers from 9 to 16 February 2017. Prospective science teachers, the participants of the study, were evaluated based on a newly introduced format by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display, and conclusion drawing/verification. It was observed that the trainees showed confidence in teaching those competencies and skills. Teacher educators’ dissatisfaction had a great impact on the evaluation process.
Keywords: evaluation, perceptions & perspectives, pre-service, science teaching
Procedia PDF Downloads 315
24302 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach to the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional ‘Dummy Dimension’ to the n-dimensional data, which helps plot two-dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2^n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ‘Dummy Axis’. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on and perpendicular to the ‘Dummy Axis’, respectively. This result indicates a geometrical interpretation of a linear data transformation and hence opportunities for a more informed choice of the factor ‘b’, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retains the same scaling factor as the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
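A small numerical check of the closing claim above: under Y = bX, many common distance metrics scale by exactly the same factor b as the features. The vectors and the value of b are arbitrary illustrative choices.

```python
import numpy as np

b = 2.5
x1 = np.array([1.0, 4.0, 2.0])
x2 = np.array([3.0, 0.0, 5.0])

d_before = np.linalg.norm(x1 - x2)          # Euclidean distance
d_after = np.linalg.norm(b * x1 - b * x2)
print(d_after / d_before)                   # -> 2.5, i.e. exactly b

d1_before = np.abs(x1 - x2).sum()           # Manhattan distance
d1_after = np.abs(b * x1 - b * x2).sum()
print(d1_after / d1_before)                 # -> 2.5 as well
```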
Procedia PDF Downloads 297
24301 Price Control: A Comprehensive Step to Control Corruption in the Society
Authors: Muhammad Zia Ullah Baig, Atiq Uz Zama
Abstract:
The motivation of the project is to facilitate the governance body, as well as the common man monitoring the rates of daily consumer products, to easily track expenses and control the budget with the help of a single SMS (message) or e-mail facility, and to manage the governance body through a task management system. The system will also be capable of finding irregularities committed by the concerned department in mitigating the complaints generated by customers, and it will provide solutions to overcome problems. We are building a system that can manage the price control system of any country, and we would be proud to offer this system free of cost to the Indian Government as well. The system can easily manage and control the price control departments of government all over the country. Price control departments run in different cities under the City District Government, so the system runs in different cities with different SMS codes, and a decentralized database ensures the non-functional requirements of the system (scalability, reliability, availability, security, safety). The customer requests the government's official price list for his/her city SMS code (price lists of all cities are available on the website or application), and the server forwards the price list through an SMS. If a product is not sold according to the price list, the customer generates a complaint through an SMS or using the website/smartphone application; the complaint is registered in the complaint database and forwarded to the inspection department, and when the complaint is entertained, the inspection department forwards a message about the complaint to the customer. The inspection department physically checks sellers who do not follow the price list. A major issue for the system, however, is corruption: an inspection officer may take a bribe and resolve the complaint (marking it as fake), in which case the customer will stop using the system. The major challenge is thus to distinguish fake from real complaints and to fight corruption within the department. To counter corruption, our strategy is to rank complaints: if the same type of complaint is generated repeatedly, it receives a high rank and the higher authority is also notified about it. The higher authority then reviews the complaint and its history, including which officer resolved such complaints in the past and what action was taken against those complaints; these data help in the decision-making process. If a complaint was resolved because the officer took a bribe, the higher authority will take action against that officer. When the price of any good is decided, the market/farmer representative is also present, and the price is decided with the mutual understanding of both parties; the system facilitates this decision-making process. The system shows the price history of any good, the inflation rate, the available supply, the demand, and the gap between supply and demand; these data help in the decision-making process.
Keywords: price control, goods, government, inspection, department, customer, employees
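A hedged sketch of the complaint-ranking idea described above: repeated complaints of the same type against the same seller raise the rank, and high-rank complaints are escalated to the higher authority. The threshold and record fields are illustrative assumptions.

```python
from collections import Counter

ESCALATION_THRESHOLD = 3   # assumed: 3 similar complaints trigger review

complaints = [
    {"seller": "S-104", "type": "overpricing-sugar"},
    {"seller": "S-104", "type": "overpricing-sugar"},
    {"seller": "S-221", "type": "overpricing-flour"},
    {"seller": "S-104", "type": "overpricing-sugar"},
]

# Rank = number of identical (seller, type) complaints on record.
ranks = Counter((c["seller"], c["type"]) for c in complaints)
for (seller, ctype), count in ranks.items():
    if count >= ESCALATION_THRESHOLD:
        print(f"escalate to higher authority: {seller} / {ctype} x{count}")
```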
Procedia PDF Downloads 411
24300 The Comparison of Safety Factor in Dry and Rainy Condition at Coal Bearing Formation. Case Study: Lahat Area South Sumatera Province, Indonesia
Authors: Teguh Nurhidayat, Nurhamid, Dicky Muslim, Zufialdi Zakaria, Irvan Sophian
Abstract:
This paper presents the role of climate change as a factor that induces landslides. The case study is located in Lahat Regency, South Sumatera Province, Indonesia. The study area has coal reserves of high economic value (mostly subbituminous to bituminous), which could be developed for open-pit coal mining in the future. Seams are found in the Muara Enim Formation, which lies in the South Sumatera Basin, a back-arc basin formed during the Tertiary as a result of the collision between the Indian and Eurasian plates. This study aims to unravel the relationship between slope stability and seasonal conditions in a tropical climate. Undisturbed soil samples were obtained in the field along with other geological data, and laboratory work was carried out to obtain the physical and mechanical properties of the soils. The methodology used to analyze slope stability is the Bishop method, which identifies the safety factor of the slope. Results show that slopes in rainy-season conditions are more prone to landslides than in the dry season. In the dry season, with a moisture content of 22.65%, the safety factor is 1.28 and the slope is in stable condition. When rain approaches and the moisture content increases to 97.8%, the slope becomes critical: under wet conditions the groundwater level rises, with γ (unit weight), c (cohesion), and φ (angle of friction) of 18.04 kN/m³, 5.88 kN/m², and 28.04°, respectively, which ultimately yields a safety factor FS of 1.01 (slope in unstable condition).
Keywords: rainfall, moisture content, slope analysis, landslide prone
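For context, Bishop's simplified method of slices computes the safety factor FS iteratively, since FS appears on both sides of the expression. The standard textbook form is sketched below (a general reference, not reproduced from the paper itself); W_i is the slice weight, b_i the slice width, u_i the pore pressure at the slice base, and α_i the base inclination.

```latex
% Bishop's simplified method of slices (standard form).
FS = \frac{\sum_{i} \dfrac{c'\,b_i + (W_i - u_i\,b_i)\tan\varphi'}{m_{\alpha,i}}}
          {\sum_{i} W_i \sin\alpha_i},
\qquad
m_{\alpha,i} = \cos\alpha_i\left(1 + \frac{\tan\alpha_i \tan\varphi'}{FS}\right)
```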
Procedia PDF Downloads 313
24299 Food Consumption and Adaptation to Climate Change: Evidence from Ghana
Authors: Frank Adusah-Poku, John Bosco Dramani, Prince Boakye Frimpong
Abstract:
Climate change is considered a principal threat to human existence and livelihoods. The persistence and intensity of droughts and floods in recent years have adversely affected food production systems and value chains, making it impossible to end global hunger by 2030. Thus, this study aims to examine the effect of climate change on food consumption for both farm and non-farm households in Ghana. An important focus of the analysis is to investigate how climate change affects alternative dimensions of food security, examine the extent to which these effects vary across heterogeneous groups, and explore the channels through which climate change affects food consumption. Finally, we conducted a pilot study to understand the significance of farm and non-farm diversification measures in reducing the harmful impact of climate change on farm households. The approach of this article is to use two secondary datasets and one primary dataset. The first secondary dataset is the Ghana Socioeconomic Panel Survey (GSPS), a household panel dataset collected during the period 2009 to 2019. The second dataset is monthly district rainfall and temperature gridded data from the Ghana Meteorological Agency, matched to the GSPS dataset at the district level. Finally, the primary data were obtained from a survey of farm and non-farm adaptation practices used by farmers in three regions in Northern Ghana. The study employed a household fixed-effects model to estimate the effect of climate change (measured by temperature and rainfall) on food consumption in Ghana, using the spatial and temporal variation in temperature and rainfall across the districts of Ghana to estimate the household-level model. Evidence of potential mechanisms through which climate change affects food consumption was explored in two steps: first, the potential mechanism variables were regressed on temperature, rainfall, and the control variables; second, the potential mechanism variables were included as extra covariates in the first model. The results revealed that extreme average temperature and drought have decreased food consumption as well as reduced the intake of important food nutrients such as carbohydrates, protein, and vitamins. The results further indicated that low rainfall increased food insecurity among households with no education compared with those with primary and secondary education. Non-farm activity and silos were revealed as the transmission pathways through which the effect of climate change on farm households can be moderated. Finally, the results indicated that over 90% of the smallholder farmers interviewed had no farm diversification adaptation strategies for climate change, and a little over 50% of the farmers owned unskilled or manual non-farm economic ventures, which makes it very difficult for the majority of the farmers to withstand climate-related shocks. These findings suggest that achieving the Sustainable Development Goal of Zero Hunger by 2030 needs an integrated approach, such as reducing the over-reliance on rainfed agriculture, educating farmers, and implementing non-farm interventions to improve food consumption in Ghana.
Keywords: climate change, food consumption, Ghana, non-farm activity
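A minimal sketch of the household fixed-effects estimation described above, relating food consumption to temperature and rainfall. Synthetic panel data and a simple within-transformation (demeaning by household) stand in for the GSPS data and the study's full specification.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_households, n_waves = 200, 3
df = pd.DataFrame({
    "household": np.repeat(np.arange(n_households), n_waves),
    "temp": rng.normal(28, 2, n_households * n_waves),
    "rain": rng.normal(100, 25, n_households * n_waves),
})
effect = rng.normal(0, 1, n_households)       # unobserved heterogeneity
df["consumption"] = (10 - 0.3 * df["temp"] + 0.02 * df["rain"]
                     + effect[df["household"]]
                     + rng.normal(0, 0.5, len(df)))

# Within transformation: demeaning by household absorbs fixed effects.
demeaned = df.groupby("household").transform(lambda s: s - s.mean())
X = np.column_stack([demeaned["temp"], demeaned["rain"]])
beta, *_ = np.linalg.lstsq(X, demeaned["consumption"], rcond=None)
print(f"temp effect = {beta[0]:.2f}, rain effect = {beta[1]:.3f}")
```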
Procedia PDF Downloads 7
24298 Using Learning Apps in the Classroom
Authors: Janet C. Read
Abstract:
UClan set up a collaboration with Lingokids to assess the Lingokids learning app's impact on learning outcomes in classrooms in the UK for children aged 3 to 5 years. Data gathered during the controlled study with 69 children include attitudinal data, engagement, and learning scores. The data show that children's enjoyment while learning was higher among those using the game-based app than among those using other traditional methods. It is worth pointing out that engagement when using the learning app was significantly higher than with other traditional methods among older children. According to the existing literature, there is a direct correlation between engagement, motivation, and learning. Therefore, this study provides relevant data points to conclude that the Lingokids learning app serves its purpose of encouraging learning through playful and interactive content. That being said, we believe that learning outcomes should be assessed with a wider range of methods in further studies. Likewise, it would be beneficial to assess the level of usability and playability of the app in order to evaluate the learning app from other angles.
Keywords: learning app, learning outcomes, rapid test activity, Smileyometer, early childhood education, innovative pedagogy
Procedia PDF Downloads 71
24297 Illicit Arms and the Emergence of Armed Groups in Nigeria
Authors: Halilu Babaji, Adamu Buba
Abstract:
Illicit arms and the emergence of armed groups have produced unprecedented political uncertainty in Nigeria. Twenty-first-century globalisation has established processes that have benefited a good number of militia groups, thereby boosting both the movement of illicit arms and the growth of terrorist groups, which are largely responsible for the long-standing threat to the national security and stability of the country. This has unleashed unforeseen consequences on the entire sub-region, following an inflow of weapons and armed fighters motivated by weak governance, insecurity, and poverty. The social, economic, and political environments make it a fertile breeding ground for the penetration and development of terrorist groups in Sub-Saharan Africa.
Keywords: arms, emergence, insecurity, groups
Procedia PDF Downloads 263
24296 Road Safety in the Great Britain: An Exploratory Data Analysis
Authors: Jatin Kumar Choudhary, Naren Rayala, Abbas Eslami Kiasari, Fahimeh Jafari
Abstract:
Great Britain has one of the safest road networks in the world. However, the consequences of any death or serious injury are devastating for loved ones, as well as for those who help the severely injured. This paper aims to analyse Great Britain's road safety situation and show the response measures for areas where the total damage caused by accidents can be significantly and quickly reduced. In this paper, we carry out an exploratory data analysis using STATS19 data. For the past 30 years, the UK has had a good record in reducing fatalities, ranking third based on the number of road deaths per million inhabitants. Around 165,000 accidents were reported in Great Britain in 2009, and the number decreased every year until 2019, when it fell below 120,000. The government continues to scale back road deaths by empowering responsible road users and by identifying and prosecuting the behaviours that make the roads less safe.
Keywords: road safety, data analysis, openstreetmap, feature expanding
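A minimal sketch of the exploratory analysis described above: yearly accident counts and the severity split from STATS19 records. The file name and column names are assumptions; the published STATS19 schema varies by release.

```python
import pandas as pd

df = pd.read_csv("stats19_accidents.csv", parse_dates=["date"])
# Assumed columns include: date, accident_severity, police_force.

yearly = df.groupby(df["date"].dt.year).size()
print(yearly.loc[2009:2019])          # downward trend across the decade

by_severity = df.groupby("accident_severity").size()
print(by_severity)                    # fatal / serious / slight split
```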
Procedia PDF Downloads 140
24295 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation: Antimicrobial, Antioxidant, and Physicochemical Investigations
Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E. A. Awad
Abstract:
Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meat is seldom reported. Therefore, this study aims at the protection of the aqueous crude extract of hibiscus flowers utilizing the spray-drying encapsulation technique. Results of Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and particle size analysis confirmed, respectively, the successful formation of the assembled capsules via strong interactions, the spherical rough microparticles, and a particle size of ~235 nm. The obtained microcapsules also enjoy higher thermal stability than the free extract. The obtained spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The segregated free-standing composite films were investigated, compared to the neat matrix, with several characterization techniques such as FTIR, SEM, thermal gravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against the microorganisms contaminating the meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activity to the PVA matrix. Application of the prepared films to real meat samples displayed low bacterial growth with only a slight increase in pH over a storage time of up to 10 days at 4 °C, as further evidence of meat safety. Moreover, the colors of the films did not change significantly until after 21 days, indicating spoilage of the meat samples. No doubt, the dual functionality of the prepared composite films paves the way towards combined active and smart food packaging applications, which would play a vital role in food hygiene, including quality control and assurance.
Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage
Procedia PDF Downloads 90
24293 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and less accurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension; hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose using the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the PCA+LDA method clearly outperforms the pseudo-inverse LDA method when large training data are available.
Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99
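A minimal sketch of the first approach above: PCA to shrink the dimension below the sample count (removing the null space that makes the within-class scatter matrix singular), then LDA, then KNN classification. Synthetic data stands in for the KDDcup99 / NSL-KDD feature vectors.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Few samples relative to the dimension, mimicking the SSS setting.
X, y = make_classification(n_samples=120, n_features=100,
                           n_informative=10, n_classes=2, random_state=0)

pipeline = make_pipeline(
    PCA(n_components=30),             # dimension now well below sample count
    LinearDiscriminantAnalysis(),     # scatter matrix is invertible here
    KNeighborsClassifier(n_neighbors=5),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```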
Procedia PDF Downloads 226
24292 Solutions for Comfort and Safety on Vibrations Resulting from the Action of the Wind on the Building in the Form of Portico with Four Floors
Authors: G. B. M. Carvalho, V. A. C. Vale, E. T. L. Cöuras Ford
Abstract:
With the aim of increasing the levels of comfort and safety of structures, the study of dynamic loads on buildings has been one of the focuses in the areas of control engineering, civil engineering, and architecture. Thus, this work presents a study based on simulation of the dynamics of buildings in the form of a portico subjected to wind action, and also presents a passive control action that exploits the dynamics of the structure itself, consequently representing an environmentally appropriate system. These control systems are called dynamic vibration absorbers.
Keywords: dynamic vibration absorber, structure, comfort, safety, wind behavior
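For context, the classical tuned-absorber relations sketched below (textbook results, not taken from the paper) show why matching an auxiliary mass-spring system to the structure's dynamics suppresses the response; μ is the ratio of absorber mass to structural mass.

```latex
% Undamped absorber: tuning its natural frequency to the excitation
% frequency \omega drives the primary mass response to zero.
\omega_a = \sqrt{\frac{k_a}{m_a}} = \omega
\quad\Longrightarrow\quad
X_{\text{primary}} = 0
% Den Hartog's optimal tuning for a damped absorber on an undamped
% primary structure:
\qquad
\frac{f_a}{f_n} = \frac{1}{1+\mu}, \qquad \mu = \frac{m_a}{m}
```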
Procedia PDF Downloads 407
24291 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance and by comparison with conventional methods. However, scope for future development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
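A hedged sketch of the statistical-test idea behind the method: a candidate rule "if A=a then D=d" is kept only when the decision distribution under A=a deviates significantly from chance. The test choice, baseline probability, and toy decision table are illustrative assumptions, not STRIM's exact procedure.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n = 500
A = rng.integers(0, 3, n)                       # condition attribute, 0..2
D = np.where(A == 1,
             rng.binomial(1, 0.9, n),           # A=1 strongly implies D=1
             rng.binomial(1, 0.5, n))           # otherwise D is random

for a in range(3):
    covered = D[A == a]
    # Test against the chance rate p=0.5 assumed for this toy table.
    test = binomtest(int(covered.sum()), len(covered), p=0.5)
    verdict = "rule" if test.pvalue < 0.01 else "no rule"
    print(f"if A={a} then D=1: p = {test.pvalue:.4f} -> {verdict}")
```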
Procedia PDF Downloads 396
24290 Cloud-Based Mobile-to-Mobile Computation Offloading
Authors: Ebrahim Alrashed, Yousef Rafique
Abstract:
Mobile devices have drastically changed the way we do things on the move. They are heavily relied on to perform tasks analogous to desktop computer capabilities. There has been a rapid increase in the computational power of these devices; however, battery technology is still the bottleneck of this evolution. The primary modern approach to tackling this issue is offloading computation to the cloud, which proves to be latency-expensive and to require high network bandwidth. In this paper, we explore efforts to perform barter-based mobile-to-mobile offloading. We define a protocol and present an architecture to facilitate the development of such a system. We further highlight the deployment and security challenges.
Keywords: computational offloading, power conservation, cloud, sandboxing
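An illustrative sketch of the kind of offload decision such a protocol might make: offload when the estimated local energy cost exceeds the cost of shipping the task to a nearby peer. All constants and the decision rule are assumptions for demonstration, not the paper's protocol.

```python
def should_offload(cycles, data_bytes,
                   local_joules_per_cycle=1e-9,
                   radio_joules_per_byte=5e-7,
                   peer_available=True):
    """Return True when offloading to a peer saves energy."""
    local_cost = cycles * local_joules_per_cycle      # compute locally
    transfer_cost = data_bytes * radio_joules_per_byte  # ship to a peer
    return peer_available and transfer_cost < local_cost

# A 2-billion-cycle task with a 1 MB payload: offloading wins here
# (2.0 J local vs 0.5 J radio under the assumed constants).
print(should_offload(cycles=2_000_000_000, data_bytes=1_000_000))
```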
Procedia PDF Downloads 388
24289 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations, and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation into data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, and user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g., table detection) and natural language processing (e.g., entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
Procedia PDF Downloads 113
24289 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregates, joins, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was devised to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks
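A hedged sketch of the regression idea above: fit an empirical formula that predicts query throughput from cluster configuration factors, then score candidate purchases under a fixed budget. The synthetic measurements and the linear functional form are illustrative assumptions, not the paper's actual formula.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: CPU benchmark score, memory (GB), number of physical hosts.
configs = np.array([
    [1200, 64, 4], [1200, 128, 4], [2400, 64, 4],
    [1200, 64, 8], [2400, 128, 8], [2400, 128, 16],
])
# Measured queries-per-minute on a benchmark SQL workload (synthetic).
throughput = np.array([21.0, 26.5, 33.0, 38.5, 71.0, 128.0])

model = LinearRegression().fit(configs, throughput)

# Score two candidate purchases assumed to cost the same:
few_big = [[2400, 128, 8]]     # fewer, more capable hosts
many_small = [[1200, 64, 16]]  # more, less capable hosts
print(model.predict(few_big)[0], model.predict(many_small)[0])
```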
Procedia PDF Downloads 232