Search results for: enterprise data warehouse
23152 Geological Mapping of Gabel Humr Akarim Area, Southern Eastern Desert, Egypt: Constrain from Remote Sensing Data, Petrographic Description and Field Investigation
Authors: Doaa Hamdi, Ahmed Hashem
Abstract:
The present study integrates ASTER and Landsat 8 data to discriminate and map alteration and/or mineralization zones and to delineate the different lithological units of the Humr Akarim Granites area. The study area is located at 24º9' to 24º13'N and 34º1' to 34º2'45"E, covering a total exposed surface area of about 17 km². It is characterized by rugged topography with low to moderate relief. Geologic fieldwork and petrographic investigations revealed that the basement complex of the study area is composed of metasediments, mafic dikes, older granitoids, and alkali-feldspar granites, and that the secondary minerals are mainly chlorite, epidote, clay minerals, and iron oxides. These minerals have specific spectral signatures in the visible near-infrared and short-wave infrared region (0.4 to 2.5 µm), so ASTER image processing was concentrated on the VNIR-SWIR spectrometric data in order to achieve the purposes of this study (geologic mapping of hydrothermal alteration zones and delineation of possible radioactive potentialities). Mapping of the hydrothermal alteration zones and discrimination of the lithological units were achieved through several image processing techniques, including color band composites (CBC) and data transformations such as band ratios (BR), band ratio codes (BRCs), principal component analysis (PCA), the Crosta technique, and minimum noise fraction (MNF). Field verification and petrographic investigation confirm the results of the ASTER and Landsat 8 data, and a geological map (scale 1:50,000) is proposed.
Keywords: remote sensing, petrography, mineralization, alteration detection
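Band-ratio mapping of the kind described reduces, per pixel, to dividing one band's reflectance by another's so that the absorption features of alteration minerals (chlorite, epidote, clays, iron oxides) stand out. A minimal sketch in plain Python, with toy 2-D band arrays and an illustrative threshold rather than the study's actual ASTER bands:

```python
def band_ratio(band_a, band_b, eps=1e-9):
    """Pixel-wise ratio of two co-registered bands given as 2-D lists of reflectance."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]

def alteration_mask(ratio_image, threshold):
    """Flag pixels whose ratio exceeds a scene-dependent threshold (assumed here)."""
    return [[r > threshold for r in row] for row in ratio_image]
```

In practice the ratio pairs (e.g. SWIR band combinations) and thresholds are chosen per target mineral; the mask is then overlaid on the color composites for field verification.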
Procedia PDF Downloads 164
23151 Measuring Student Teachers' Attitude and Intention toward Cell-Phone Use for Learning in Nigeria
Authors: Shittu Ahmed Tajudeen
Abstract:
This study examines student-teachers' attitude and intention towards cell-phone use for learning. The study involved one hundred and ninety (190) trainee teachers at one of the Institutes of Education in Nigeria. Data were collected through a questionnaire on a seven-point Likert-type scale and used to test the hypothesized model of the study using a Structural Equation Modeling approach. The findings revealed that Perceived Usefulness (PU), Perceived Ease of Use (PEU), Subjective Norm (SN), and Attitude significantly influence students' intention towards adoption of the cell-phone for learning, with perceived ease of use the strongest predictor of cell-phone use. The model exhibits a good fit with the data and explains student-teachers' attitude and intention towards the cell-phone for learning.
Keywords: cell-phone, adoption, structural equation modeling, technology acceptance model
Procedia PDF Downloads 453
23150 Architectural Framework to Preserve Information of Cardiac Valve Control
Authors: Lucia Carrion Gordon, Jaime Santiago Sanchez Reinoso
Abstract:
Taking the relation between digital preservation and the health field as a case study, an architectural model helps to explain these definitions. The principal goal of data preservation is to keep information for the long term. Regarding medical information, in order to perform a heart transplant, physicians need to preserve the organ in an adequate way. This comparison between the two perspectives, the medical and the technological, allows checking the similarities between the concepts of preservation. Digital preservation and medical advances are related at the same level of knowledge improvement.
Keywords: medical management, digital, data, heritage, preservation
Procedia PDF Downloads 420
23149 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but with suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give good results, with high accuracy and low computation time and error rate.
Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool
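ID3 chooses the attribute to split on by information gain, the entropy reduction a split achieves; C4.5 refines this with gain ratio, continuous attributes, and pruning. A minimal sketch of the ID3 criterion in Python, on toy data rather than the NH dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """ID3 split criterion: entropy reduction from splitting on one attribute."""
    n = len(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(subset) / n * entropy(subset)
                    for subset in by_value.values())
    return entropy(labels) - remainder
```

ID3 picks the attribute with the highest gain at each node and recurses; WEKA's J48 (its C4.5 implementation) automates this over the full dataset.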
Procedia PDF Downloads 338
23148 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from observation data when applying the remove-compute-restore technique. A comparison study between candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected for comparison over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study were used to compute residual data from both gravity field models, with a Digital Terrain Model (DTM) used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, fitted to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO model, through both error degree variance comparison and ground-truth evaluation.
Keywords: quasigeoid, gravity anomalies, covariance, GGM
Procedia PDF Downloads 137
23147 Analysis of Transformer Reactive Power Fluctuations during Adverse Space Weather
Authors: Patience Muchini, Electdom Matandiroya, Emmanuel Mashonjowa
Abstract:
Geomagnetically induced currents (GICs) are a ground-end manifestation of space weather phenomena. During significant geomagnetic storms, GICs flow along the electric power transmission cables connecting transformers and between the grounding points of power transformers. GICs have been studied in other regions and have been noted to affect power grid networks. In Zimbabwe, grid failures have been experienced, but it is yet to be proven whether these failures are due to GICs. The purpose of this paper is to characterize geomagnetically induced currents within a power grid network. The paper analyses geomagnetic data, including the Kp index, Dst index, and the G-scale from geomagnetic storms, together with power grid data, including reactive power, relay tripping, and alarms from high-voltage substations, and then correlates the two. The analysis was first carried out theoretically, by studying geomagnetic parameters, and then experimentally, with MATLAB used as the basic software to analyze the data. The latitudes of the substations were also scrutinized to assess whether location has an impact, since low-latitude areas, such as most parts of Zimbabwe, experience less severe geomagnetic variations. Based on theoretical and graphical analysis, a slight relationship between power system failures and GICs has been established. Further analyses can be done by installing instruments to measure any currents in the grounding of high-voltage transformers when geomagnetic storms occur. Mitigation measures can then be developed to minimize the susceptibility of the power network to GICs.
Keywords: adverse space weather, Dst index, geomagnetically induced currents, Kp index, reactive power
Procedia PDF Downloads 114
23146 A Study on the HTML5 Based Multi Media Contents Authority Tool
Authors: Heesuk Seo, Yongtae Kim
Abstract:
Online learning started in the 1990s; with the spread of the Internet, online education has moved through the era of the e-learning paradigm into the era of smart learning. Reflecting the anytime-anywhere nature of mobile devices, this shift has also changed the forms of learning available through learning content and interaction. We are developing a cloud system, 'TLINKS CLOUD', that allows users to configure a smart-learning environment without the need for additional infrastructure. Using big-data analysis of e-learning contents, we provide an integrated e-learning solution tailored to individual study.
Keywords: authority tool, big data analysis, e-learning, HTML5
Procedia PDF Downloads 406
23145 Arduino Pressure Sensor Cushion for Tracking and Improving Sitting Posture
Authors: Andrew Hwang
Abstract:
The average American worker sits for thirteen hours a day, often with poor posture and infrequent breaks, which can lead to health issues and back problems. The Smart Cushion was created to alert individuals to their poor posture and may potentially alleviate back problems and correct poor posture. The Smart Cushion is a portable, rectangular foam cushion with five strategically placed pressure sensors. It utilizes an Arduino Uno circuit board and specifically designed software, allowing it to collect data from the five pressure sensors and store the data on an SD card. The data are then compiled into graphs and compared to controlled postures. Before volunteers sat on the cushion, their levels of back pain were recorded on a scale of 1 to 10. Data were recorded for an hour of sitting, and then a new, corrected posture was suggested. After using the suggested posture for an hour, the volunteers described their level of discomfort on the same 1-to-10 scale. Different patterns of sitting postures were generated that were able to serve as early warnings of potential back problems. By using the Smart Cushion, the areas where different volunteers were applying the most pressure while sitting could be identified, and the sitting postures could be corrected. Further studies on the relationships between posture and specific regions of the body are necessary to better understand the origins of back pain; however, the Smart Cushion is sufficient for correcting sitting posture and preventing the development of additional back pain.
Keywords: Arduino Sketch Algorithm, biomedical technology, pressure sensors, Smart Cushion
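The comparison against controlled postures can be illustrated with a simple per-sensor deviation score. The sketch below (Python, mirroring the offline analysis of SD-card data rather than the Arduino firmware itself; the function names and the alert threshold are assumptions, not the paper's software) flags a sitting sample that departs too far from a calibrated baseline:

```python
def posture_deviation(readings, baseline):
    """Mean absolute deviation of the five sensor readings from a calibrated baseline."""
    assert len(readings) == len(baseline) == 5
    return sum(abs(r - b) for r, b in zip(readings, baseline)) / 5

def posture_alert(readings, baseline, threshold=50):
    """True when the deviation exceeds a (hypothetical) alert threshold."""
    return posture_deviation(readings, baseline) > threshold
```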
Procedia PDF Downloads 134
23144 Calculation of the Left Ventricle Wall Radial Strain and Radial SR Using Tagged Magnetic Resonance Imaging Data (tMRI)
Authors: Mohammed Alenezy
Abstract:
The function of cardiac motion can be used as an indicator of heart abnormality by evaluating the longitudinal, circumferential, and radial strain of the left ventricle. In this paper, radial strain and strain rate (SR) are studied using tagged MRI (tMRI) data during the cardiac cycle at the mid-ventricular level of the left ventricle. Materials and methods: The short-axis view of the left ventricle of five healthy humans (three males and two females) and four healthy male rats was imaged using the tagged magnetic resonance imaging (tMRI) technique, covering the whole cardiac cycle at the mid-ventricular level. Images were processed using ImageJ software to calculate the left ventricle wall radial strain and radial SR during the cardiac cycle. The peak radial strain for the human and rat heart was 40.7±1.44 and 46.8±0.68, respectively, occurring at 40% of the cardiac cycle in both. The peak diastolic and systolic radial SR for the human heart were -1.78±0.02 s-1 and 1.10±0.08 s-1, respectively, while for the rat heart they were -5.16±0.23 s-1 and 4.25±0.02 s-1, respectively. Conclusion: These results show the ability of tMRI data to characterize cardiac motion during the cardiac cycle, including the diastolic and systolic phases, which can serve as an indicator of cardiac dysfunction by estimating the left ventricle radial strain and radial SR at different locations in the cardiac tissue. This study confirms the validity of tagged MRI data for accurately describing cardiac radial motion.
Keywords: left ventricle, radial strain, tagged MRI, cardiac cycle
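The underlying measures are simple: radial strain is the change in wall radius relative to its reference value, and strain rate is its time derivative. A sketch in Python with illustrative numbers (not values from the tMRI data; the finite-difference form is one common discretization, an assumption here):

```python
def radial_strain(r0, r_t):
    """Lagrangian radial strain (%) relative to the reference wall radius r0."""
    return 100.0 * (r_t - r0) / r0

def strain_rate(strain_series, dt):
    """Finite-difference strain rate (s^-1) from a strain (%) time series
    sampled every dt seconds; strain is converted back to a fraction."""
    return [(s2 - s1) / 100.0 / dt
            for s1, s2 in zip(strain_series, strain_series[1:])]
```

A 40% peak strain, as reported for the human heart, corresponds to the wall radius thickening by a factor of 1.4 over its reference value.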
Procedia PDF Downloads 483
23143 Allocating Channels and Flow Estimation at Flood Prone Area in Desert, Example from AlKharj City, Saudi Arabia
Authors: Farhan Aljuaidi
Abstract:
The rapid expansion of AlKharj city, Saudi Arabia, towards the outlet of Wadi AlAin is critical for planners and decision makers. Two major projects, the Salman bin Abdulaziz University compound and a new industrial area, are being developed in this flood-prone area, where no clear channels are identified. The main contribution of this study is to divert the flow away from these vital projects by reconstructing new channels. To do so, Lidar data were used to generate contour lines for the actual elevation of the highways and local roads. These data were analyzed and compared to the contour lines derived from the 1:50,000 topographic maps. The magnitude of the expected flow was estimated using Snyder's model, based on morphometric data acquired from a DEM of the catchment area. The results indicate that the peak discharge reaches a maximum of 2694.3 m³/s, with a mean of 303.7 m³/s and a minimum of 74.3 m³/s. The runoff volume was estimated at a maximum of 252.2×10⁶ m³, a mean of 41.5×10⁶ m³, and a minimum of 12.4×10⁶ m³.
Keywords: desert flood, Saudi Arabia, Snyder's model, flow estimation
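Snyder's synthetic unit hydrograph derives the peak from two morphometric relations: a basin lag proportional to (L·Lc)^0.3 and a peak discharge inversely proportional to that lag. A sketch of one common SI form (the coefficients Ct and Cp are regional calibration values, assumed here, not those fitted for Wadi AlAin):

```python
def snyder_lag_hours(L_km, Lc_km, Ct=1.5):
    """Basin lag t_p = 0.75 * Ct * (L * Lc)**0.3 in hours, with L the main
    stream length and Lc the length to the basin centroid (both km)."""
    return 0.75 * Ct * (L_km * Lc_km) ** 0.3

def snyder_peak_discharge(area_km2, t_p_hours, Cp=0.6):
    """Unit-hydrograph peak Q_p = 2.75 * Cp * A / t_p in m3/s per cm of rainfall excess."""
    return 2.75 * Cp * area_km2 / t_p_hours
```

The DEM supplies L, Lc, and A; scaling the unit hydrograph by the design storm's rainfall excess then yields the peak discharges reported above.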
Procedia PDF Downloads 309
23142 Public Bus Transport Passenger Safety Evaluations in Ghana: A Phenomenological Constructivist Exploration
Authors: Enoch F. Sam, Kris Brijs, Stijn Daniels, Tom Brijs, Geert Wets
Abstract:
Notwithstanding the growing body of literature that recognises the importance of personal safety to public transport (PT) users, it remains unclear what criteria PT users consider regarding their safety. In this study, we explore the criteria PT users in Ghana use to assess bus safety. This knowledge affords a better understanding of PT users' risk perceptions and assessments, which may contribute to theoretical models of PT risk perception. We utilised a phenomenological research methodology, with data drawn from 61 purposively sampled participants. Data collection (through focus group discussions and in-depth interviews) and analysis were done concurrently to the point of saturation. Our inductive data coding and analysis, through constant comparison and content-analytic techniques, resulted in 4 code categories (conceptual dimensions), 27 codes (safety items/criteria), and 100 quotations (data segments). Of the safety criteria participants use to assess bus safety, vehicle condition, the driver's marital status, and the transport operator's safety record were the most considered, and with each criterion participants demonstrated its relevance to bus safety. These findings imply that investment in and maintenance of safer vehicles, responsible and safety-conscious drivers, and the prioritization of passengers' safety are key targets for public bus/minibus operators in Ghana.
Keywords: safety evaluations, public bus/minibus, passengers, phenomenology, Ghana
Procedia PDF Downloads 337
23141 Data-Driven Analysis of Velocity Gradient Dynamics Using Neural Network
Authors: Nishant Parashar, Sawan S. Sinha, Balaji Srinivasan
Abstract:
We investigate the unclosed terms in the evolution equation of the velocity gradient tensor (VGT) in compressible decaying turbulent flow. Velocity gradients in a compressible turbulent flow field influence several important nonlinear turbulent processes, such as cascading and intermittency. In an attempt to understand the dynamics of the velocity gradients, various researchers have tried to model the unclosed terms in the evolution equation of the VGT. The existing models proposed for these unclosed terms have limited applicability, mainly because of the complex structure of the higher-order gradient terms appearing in the equation. We investigate these higher-order gradients using data from direct numerical simulation (DNS) of compressible decaying isotropic turbulent flow. The gas kinetic method, aided by weighted essentially non-oscillatory (WENO) based flow reconstruction, is employed to generate the DNS data. By applying a neural network to the DNS data, we map the structure of the unclosed higher-order gradient terms in the evolution equation of the VGT onto the VGT itself. We validate our findings by performing an alignment-based study of the unclosed higher-order gradient terms obtained from the neural network against the strain-rate eigenvectors.
Keywords: compressible turbulence, neural network, velocity gradient tensor, direct numerical simulation
Procedia PDF Downloads 168
23140 Comparison of Authentication Methods in Internet of Things Technology
Authors: Hafizah Che Hasan, Fateen Nazwa Yusof, Maslina Daud
Abstract:
The Internet of Things (IoT) is a powerful industry system in which end-devices are interconnected and automated, allowing the devices to analyze data and execute actions based on the analysis. IoT leverages Radio-Frequency Identification (RFID) and Wireless Sensor Network (WSN) technology, including mobile devices and sensors, both of which have contributed to its evolution. However, as more devices are connected to each other over the Internet and data from various sources are exchanged between things, the confidentiality of the data becomes a major concern. This paper focuses on one of the major challenges in IoT, authentication, which is needed to ensure that data integrity and confidentiality are in place. A few solutions are reviewed, based on papers from the last few years. One proposed solution secures the communication between IoT devices and cloud servers with an Elliptic Curve Cryptography (ECC) based mutual authentication protocol, using Hyper Text Transfer Protocol (HTTP) cookies as a security parameter. The next proposed solution uses a keyed-hash scheme protocol to enable IoT devices to authenticate each other without the presence of a central control server. Another proposed solution uses a Physical Unclonable Function (PUF) based mutual authentication protocol; it emphasizes tamper-resistant and resource-efficient technology, paired with a 3-way handshake security protocol.
Keywords: Internet of Things (IoT), authentication, PUF, ECC, keyed-hash scheme protocol
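A keyed-hash scheme of the kind reviewed rests on challenge-response: each device proves possession of a pre-shared key by returning a keyed hash of the other's fresh nonce, with no central server involved. A sketch of one direction of such a handshake using Python's `hmac` (the actual protocol in the reviewed paper may differ in message framing and key management):

```python
import hashlib
import hmac
import secrets

def respond(shared_key, challenge):
    """Prove knowledge of the shared key by keyed-hashing the peer's nonce."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    """Constant-time comparison of the expected and received responses."""
    return hmac.compare_digest(respond(shared_key, challenge), response)

# One direction of a mutual handshake: A sends a fresh nonce, B responds,
# A verifies; the roles are then reversed for mutual authentication.
```

Because each challenge is a fresh random nonce, a recorded response cannot be replayed later, and `compare_digest` avoids timing side channels in the check.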
Procedia PDF Downloads 264
23139 Data Analysis Tool for Predicting Water Scarcity in Industry
Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse
Abstract:
Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available or the quality of the water used, or be more complex to measure and less direct, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can impact plant performance; propose improvement solutions; help industrialists choose the location of a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires its specific functional constraints to be made explicit. The system must be able to store a large amount of data from sensors, which is the main type of data in plants and their environment. In addition, manufacturers need near-real-time processing of information in order to make the best decisions (to be rapidly notified of an event that would significantly impact water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions.
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on a study case: learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system's ability to warn decision-makers with alerts created from the formalization of prefectoral decrees.
Keywords: data mining, industry, machine learning, shortage, water resources
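The shape of the 7-day river-level task can be illustrated with a naive recursive baseline: each forecast day is fed back as input for the next. This is only a stand-in for the model actually trained with CRISP-DM on the time-stamped InfluxDB data; the function and parameter names are assumptions:

```python
from collections import deque

def forecast_levels(history, horizon=7, window=3):
    """Naive rolling-mean forecast of daily river levels over `horizon` days.
    Each predicted day is appended to the window and used for the next step."""
    levels = deque(history[-window:], maxlen=window)
    out = []
    for _ in range(horizon):
        nxt = sum(levels) / len(levels)
        out.append(nxt)
        levels.append(nxt)
    return out
```

Such a baseline is what a learned model must beat before it is worth deploying behind Kapacitor alerts.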
Procedia PDF Downloads 121
23138 Recommendations Using Online Water Quality Sensors for Chlorinated Drinking Water Monitoring at Drinking Water Distribution Systems Exposed to Glyphosate
Authors: Angela Maria Fasnacht
Abstract:
The detection of anomalies due to the presence of contaminants, also known as early detection, in water treatment plants has become a critical point that deserves in-depth study for its improvement and adaptation to current requirements. The design of these systems requires detailed analysis and processing of the data in real-time, so it is necessary to apply various statistical methods appropriate to the data generated, such as Spearman's correlation, factor analysis, cross-correlation, and k-fold cross-validation. Statistical analysis and methods allow the evaluation of large data sets to model the behavior of variables; in this sense, statistical treatment can be considered a vital step in developing advanced machine-learning models that allow optimized data management in real-time, applied to early detection systems in water treatment processes. These techniques also facilitate the development of new technologies used in advanced sensors. In this work, these methods were applied to identify possible correlations between the measured parameters and the presence of the contaminant glyphosate in a single-pass system. The interaction between the initial concentration of glyphosate and the location of the sensors on the readings of the reported parameters was studied.
Keywords: glyphosate, emergent contaminants, machine learning, probes, sensors, predictive
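Spearman's correlation, the first of the listed methods, is simply the Pearson correlation computed on rank-transformed data, which makes it robust to the nonlinear but monotonic responses typical of water quality probes. A self-contained sketch in plain Python (libraries such as SciPy provide the same statistic):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed series."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho near ±1 between a measured parameter (e.g. conductivity) and glyphosate concentration would flag that parameter as a candidate proxy for the early detection system.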
Procedia PDF Downloads 121
23137 Generating Arabic Fonts Using Rational Cubic Ball Functions
Authors: Fakharuddin Ibrahim, Jamaludin Md. Ali, Ahmad Ramli
Abstract:
In this paper, we discuss data interpolation using the rational cubic Ball curve. To generate a curve with better, satisfactory smoothness, the curve segments must be connected with a certain amount of continuity; the continuity considered here is G1, and the conditions imposed are known as the G1 Hermite conditions. A simple application of the proposed method is to generate an Arabic font satisfying the required continuity.
Keywords: data interpolation, rational Ball curve, Hermite condition, continuity
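For reference, the cubic Ball basis is B0 = (1-t)², B1 = 2t(1-t)², B2 = 2t²(1-t), B3 = t²; the rational form in the paper attaches weights to these basis functions. A sketch of the non-rational case in Python (the tangent at t = 0 is 2(P1-P0), which is the quantity the G1 Hermite condition constrains at segment joins):

```python
def cubic_ball_point(p0, p1, p2, p3, t):
    """Point on a (non-rational) cubic Ball curve at parameter t in [0, 1].
    Control points are tuples; the rational curve would also weight each basis."""
    b0 = (1 - t) ** 2
    b1 = 2 * t * (1 - t) ** 2
    b2 = 2 * t ** 2 * (1 - t)
    b3 = t ** 2
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

The basis forms a partition of unity and interpolates P0 at t = 0 and P3 at t = 1, so a glyph outline is built by chaining segments whose end tangents are made parallel (G1) at the joins.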
Procedia PDF Downloads 429
23136 Teenagers’ Decisions to Undergo Orthodontic Treatment: A Qualitative Study
Authors: Babak Nematshahrbabaki, Fallahi Arezoo
Abstract:
Objective: The aim of this study was to describe teenagers' decisions to undergo orthodontic treatment through a qualitative study. Materials and methods: Twenty-three patients (12 girls), aged 12–18 years, at a dental clinic in Sanandaj, in the western part of Iran, participated. Face-to-face, semi-structured interviews and two focus group discussions were held to gather data, which were analyzed using the grounded theory method. Results: 'Decision-making' was the core category. During the data analysis, four main themes were developed: 'being like everyone else', 'being diagnosed', 'maintaining the mouth', and 'cultural-social and environmental factors'. Conclusions: Cultural-social and environmental factors play a crucial role in the decision to undergo orthodontic treatment. The teenagers were not fully conscious of these external influences: they thought their decision to undergo orthodontic treatment was independent, while it was in fact related to cultural-social and environmental factors.
Keywords: decision-making, qualitative study, teenager, orthodontic treatment
Procedia PDF Downloads 452
23135 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
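The outlier-removal step described above can be illustrated with a per-feature z-score filter: prediction points whose features fall outside the dispersion of the training parameter space are dropped. This is a simplified stand-in for FreqAI's actual outlier detection, with assumed names and a conventional 3-sigma threshold:

```python
def zscore_filter(rows, threshold=3.0):
    """Keep rows whose every feature lies within `threshold` standard
    deviations of that feature's mean across the training rows."""
    n = len(rows)
    dims = len(rows[0])
    means = [sum(r[d] for r in rows) / n for d in range(dims)]
    stds = [(sum((r[d] - means[d]) ** 2 for r in rows) / n) ** 0.5
            for d in range(dims)]
    return [r for r in rows
            if all(stds[d] == 0 or abs(r[d] - means[d]) / stds[d] <= threshold
                   for d in range(dims))]
```

Filtering predictions against the training distribution prevents the model from acting on market regimes it has never seen, which is the motivation the abstract gives for defining the parameter space explicitly.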
Procedia PDF Downloads 89
23134 Smart Sensor Data to Predict Machine Performance with IoT-Based Machine Learning and Artificial Intelligence
Authors: C. J. Rossouw, T. I. van Niekerk
Abstract:
The global manufacturing industry is utilizing the internet and cloud-based services to further explore the anatomy of, and optimize, manufacturing processes in support of the movement into the Fourth Industrial Revolution (4IR). From a third-world and African perspective, the 4IR is hindered by the fact that many manufacturing systems developed in the third industrial revolution are not inherently equipped to utilize the internet and services of the 4IR, hindering the progression of third-world manufacturing industries into the 4IR. This research focuses on the development of a non-invasive and cost-effective cyber-physical IoT system that exploits a machine's vibration to expose semantic characteristics in the manufacturing process and utilizes these results through a real-time cloud-based machine condition monitoring system, with the intention of optimizing the system. A microcontroller-based IoT sensor was designed to acquire a machine's mechanical vibration data, process it in real-time, and transmit it to a cloud-based platform via Wi-Fi and the internet. Time-frequency Fourier analysis was applied to the vibration data to form an image representation of the machine's behaviour. These data were used to train a Convolutional Neural Network (CNN) to learn semantic characteristics in the machine's behaviour and relate them to a state of operation, and to train a Convolutional Autoencoder (CAE) to detect anomalies in the data. Real-time edge-based artificial intelligence was achieved by deploying the CNN and CAE on the sensor to analyse the vibration, and a cloud platform was deployed to visualize the vibration data and the results of the CNN and CAE in real-time. The cyber-physical IoT system was deployed on a semi-automated metal granulation machine with a set of trained machine learning models. Using a single sensor, the system was able to accurately visualize three states of the machine's operation in real-time.
The system was also able to detect a variance in the material being granulated. The research demonstrates how non-IoT manufacturing systems can be equipped with edge-based artificial intelligence to establish a remote machine condition monitoring system.
Keywords: IoT, cyber-physical systems, artificial intelligence, manufacturing, vibration analytics, continuous machine condition monitoring
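An autoencoder such as the CAE flags anomalies through reconstruction error: vibration windows far from the training distribution reconstruct poorly. A minimal sketch of that scoring step (Python; the encoder and decoder themselves are omitted, and the threshold would be calibrated on healthy-machine data):

```python
def reconstruction_error(x, x_hat):
    """Mean squared error between an input window and its autoencoder reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomaly(x, x_hat, threshold):
    """Flag a window whose reconstruction error exceeds the calibrated threshold."""
    return reconstruction_error(x, x_hat) > threshold
```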
Procedia PDF Downloads 88
23133 Assessment of Land Suitability for Tea Cultivation Using Geoinformatics in the Mansehra and Abbottabad District, Pakistan
Authors: Nasir Ashraf, Sajid Rahid Ahmad, Adeel Ahmad
Abstract:
Pakistan is a major tea consumer country and ranked as the third largest importer of tea worldwide. Out of all beverage consumed in Pakistan, tea is the one with most demand for which tea import is inevitable. Being an agrarian country, Pakistan should cultivate its own tea and save the millions of dollars cost from tea import. So the need is to identify the most suitable areas with favorable weather condition and suitable soils where tea can be planted. This research is conducted over District Mansehra and District Abbottabad in Khyber Pakhtoonkhwah Province of Pakistan where the most favorable conditions for tea cultivation already exist and National Tea Research Institute has done successful experiments to cultivate high quality tea. High tech approach is adopted to meet the objectives of this research by using the remotely sensed data i.e. Aster DEM, Landsat8 Imagery. The Remote Sensing data was processed in Erdas Imagine, Envi and further analyzed in ESRI ArcGIS spatial analyst for final results and representation of result data in map layouts. Integration of remote sensing data with GIS provided the perfect suitability analysis. The results showed that out of all study area, 13.4% area is highly suitable while 33.44% area is suitable for tea plantation. The result of this research is an impressive GIS based outcome and structured format of data for the agriculture planners and Tea growers. Identification of suitable tea growing areas by using remotely sensed data and GIS techniques is a pressing need for the country. Analysis of this research lets the planners to address variety of action plans in an economical and scientific manner which can lead tea production in Pakistan to meet demand. 
This geomatics-based model and approach may be used to identify further areas for tea cultivation, so that import demand can be reduced by planting our own tea and the country can become self-sufficient in tea production.
Keywords: agrarian country, GIS, geoinformatics, suitability analysis, remote sensing
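The core raster step behind such a GIS suitability analysis is a weighted overlay of criteria layers. The sketch below is a minimal illustration of that idea; the rasters, weights, and class thresholds are hypothetical assumptions, not values from the study:

```python
import numpy as np

def suitability_overlay(criteria, weights, thresholds=(0.75, 0.5)):
    """Weighted-overlay suitability: criteria are rasters scaled to [0, 1]."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise the weights
    score = sum(w * c for w, c in zip(weights, criteria))
    classes = np.where(score >= thresholds[0], 2,         # 2 = highly suitable
              np.where(score >= thresholds[1], 1, 0))     # 1 = suitable, 0 = not
    return score, classes

# Hypothetical 3x3 criterion rasters: rainfall, slope (inverted), soil fitness
rain  = np.array([[0.9, 0.8, 0.2], [0.7, 0.6, 0.1], [0.9, 0.5, 0.3]])
slope = np.array([[0.8, 0.9, 0.3], [0.6, 0.7, 0.2], [0.9, 0.4, 0.2]])
soil  = np.array([[0.9, 0.7, 0.4], [0.8, 0.6, 0.3], [0.8, 0.5, 0.1]])

score, classes = suitability_overlay([rain, slope, soil], weights=[0.4, 0.3, 0.3])
pct_high = 100.0 * np.mean(classes == 2)
print(f"highly suitable: {pct_high:.1f}% of area")
```

In a real workflow each criterion raster would come from the DEM and imagery (slope, rainfall surfaces, soil maps) rather than from hand-typed arrays.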
Procedia PDF Downloads 389
23132 Machine Learning Algorithms for Rocket Propulsion
Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo
Abstract:
In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modeling complex situations. As a result, this technology helps reduce human intervention while producing accurate results. The methodology is also extensively used in aerospace engineering, a field that encompasses several high-complexity operations such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. It is therefore critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its safety and operation. This paper describes the use of machine learning algorithms for rocket propulsion, showing that the technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. The data-driven methods used for each application are described in depth and the obtained results are presented.
Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion
Procedia PDF Downloads 115
23131 Resource Sharing Issues of Distributed Systems Influences on Healthcare Sector Concurrent Environment
Authors: Soo Hong Da, Ng Zheng Yao, Burra Venkata Durga Kumar
Abstract:
The healthcare sector is a business that consists of providing medical services, manufacturing medical equipment and drugs, and providing medical insurance to the public. Most of the data stored in a healthcare database relates to patients' information, which must be accurate when accessed by authorized stakeholders. In distributed systems, one important issue is concurrency, as it ensures that shared resources are synchronized and remain consistent through multiple read and write operations by multiple clients. The concurrency problems in the healthcare sector are who gets access, and how the shared data is synchronized and kept consistent when two or more stakeholders attempt to access it simultaneously. In this paper, a framework suited to the distributed healthcare sector's concurrent environment is proposed. The framework defines four levels of database nodes: national center, regional center, referral center, and local center. Moreover, synchronization in the framework is not symmetrical: two synchronization techniques, complete and partial synchronization, are explained. Furthermore, for multiple clients accessing the data at the same time, synchronization types are discussed with cases at different levels and priorities to ensure data remains synchronized throughout the processes.
Keywords: resources, healthcare, concurrency, synchronization, stakeholders, database
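The mutual-exclusion idea underlying this kind of concurrency control can be shown with a minimal sketch; the record schema and counts below are hypothetical illustrations, not part of the proposed framework:

```python
import threading

class PatientRecord:
    """Toy shared record; a lock serialises concurrent writes (hypothetical schema)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.visit_count = 0

    def add_visit(self):
        with self._lock:                 # mutual exclusion: one writer at a time
            current = self.visit_count   # read-modify-write stays atomic
            self.visit_count = current + 1

record = PatientRecord()
threads = [threading.Thread(target=lambda: [record.add_visit() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(record.visit_count)   # 4000 with the lock; without it, updates could be lost
```

A distributed system replaces the in-process lock with distributed locking or transactional protocols, but the consistency goal, no lost updates across concurrent stakeholders, is the same.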
Procedia PDF Downloads 149
23130 Evaluation of Longitudinal Relaxation Time (T1) of Bone Marrow in Lumbar Vertebrae of Leukaemia Patients Undergoing Magnetic Resonance Imaging
Authors: M. G. R. S. Perera, B. S. Weerakoon, L. P. G. Sherminie, M. L. Jayatilake, R. D. Jayasinghe, W. Huang
Abstract:
The aim of this study was to measure and evaluate the longitudinal relaxation times (T1) in the bone marrow of an acute myeloid leukaemia (AML) patient in order to explore the potential for a prognostic biomarker using magnetic resonance imaging (MRI), a non-invasive prognostic approach to AML. MR image data were collected in DICOM format, and MATLAB Simulink software was used for the image processing and data analysis. For quantitative MRI data analysis, regions of interest (ROIs) were drawn on multiple image slices encompassing the vertebral bodies of L3, L4, and L5. T1 was evaluated using the T1 maps obtained. The estimated mean T1 of bone marrow was 790.1 ms at 3 T. The reported T1 value for healthy subjects (946.0 ms), however, is significantly higher than the present finding. This suggests that bone marrow T1 can be considered a potential prognostic biomarker for AML patients.
Keywords: acute myeloid leukaemia, longitudinal relaxation time, magnetic resonance imaging, prognostic biomarker
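The ROI-based evaluation of a T1 map reduces to averaging the pixel values inside the drawn region. A minimal sketch of that step, using a hypothetical T1 map rather than the study's data:

```python
import numpy as np

def roi_mean_t1(t1_map, roi_mask):
    """Mean T1 (ms) over a region of interest drawn on a T1 map."""
    return float(t1_map[roi_mask].mean())

# Hypothetical 4x4 T1 map (ms); the ROI covers a vertebral-body region
t1_map = np.array([[900, 910, 300, 310],
                   [880, 870, 320, 330],
                   [790, 800, 250, 260],
                   [780, 790, 240, 250]], dtype=float)
mask = np.zeros_like(t1_map, dtype=bool)
mask[2:4, 0:2] = True                    # ROI = four lower-left pixels
print(roi_mean_t1(t1_map, mask))         # (790 + 800 + 780 + 790) / 4 = 790.0
```

In practice the mask would be drawn per slice on L3-L5 and the per-slice means pooled, but the arithmetic is the same.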
Procedia PDF Downloads 531
23129 Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network
Authors: Li Kewen, Su Zhaoxin, Wang Xingmou, Zhu Jian Bing
Abstract:
Conventional reservoir prediction methods are not sufficient to explore the implicit relations between seismic attributes, and thus data utilization is low. In order to improve the classification accuracy of reservoir lithology prediction, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained using seismic attribute data and lithology data from the Shengli oilfield, and the nonlinear mapping between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve reservoir lithology classification, with an accuracy close to 70%. The study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
Keywords: convolutional neural network, lithology, prediction of reservoir, seismic attributes
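The squeeze-and-excitation operation that SENet adds to a convolutional backbone can be written in a few lines. This NumPy version with random weights is only a sketch of the mechanism (squeeze, excite, channel rescaling), not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    Squeeze: global average pool per channel; excite: FC-ReLU-FC-sigmoid;
    scale: reweight each channel by its learned gate."""
    z = x.mean(axis=(1, 2))                   # squeeze -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))   # excite  -> gates in (0, 1)
    return x * s[:, None, None]               # channel-wise rescaling

rng = np.random.default_rng(0)
x  = rng.normal(size=(8, 4, 4))     # e.g. 8 channels of seismic-attribute features
w1 = rng.normal(size=(2, 8)) * 0.1  # reduction ratio r = 4 (8 -> 2 -> 8)
w2 = rng.normal(size=(8, 2)) * 0.1
y  = se_block(x, w1, w2)
print(y.shape)
```

In the full architecture this block sits inside each residual unit, so the network learns which seismic-attribute channels to emphasise before the residual addition.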
Procedia PDF Downloads 177
23128 A Comparison of Caesarean Section Indications and Characteristics in 2009 and 2020 in a Saudi Tertiary Hospital
Authors: Sarah K. Basudan, Ragad I. Al Jazzar, Zeinah Sulaihim, Hanan M. Al-Kadri
Abstract:
Background: The cesarean section rate has been increasing in recent years, with a wide range of etiologies contributing to this rise. This study aimed to assess the indications, outcomes, and complications in Riyadh, Saudi Arabia. Methods: A retrospective cohort study was conducted at King Abdulaziz Medical City. The study includes two cohorts, G1 (2009) and G2 (2020), who met the inclusion criteria. The data were transferred to SPSS (Statistical Package for the Social Sciences) version 24 for analysis. Initial descriptive statistics were run for all variables, including numerical and categorical data. Numerical data were reported as mean and standard deviation, and categorical data as frequencies and percentages. Results: The data were collected from 399 women divided into two groups, G1 (199) and G2 (200). The mean age of all participants was 32±6 years; G1 and G2 differed significantly in mean age, 30±6 versus 34±5 respectively (p<0.001), indicating fertility delayed by four years. Moreover, a breech presentation was less likely to occur in G2 (OR 0.64, CI: 0.21-0.62, p<0.001). Nonetheless, maternal causes such as repeated C-sections and maternal medical conditions were more likely in G2 (OR 1.5, CI: 1.04-2.38, p=0.03 and OR 5.4, CI: 1.12-23.9, p=0.01, respectively). Furthermore, postpartum hemorrhage showed an increase of 12% in G2 (OR 5.4, CI: 2.2-13.4, p<0.001). G2 neonates were more likely to be admitted to the neonatal intensive care unit (NICU) (OR 16, CI: 7.4-38.7) and to special care baby (SCB) units (OR 7.2, CI: 3.9-13.1), both with p<0.001, compared to regular nursery admission. Conclusion: Multiple factors are contributing to the increase in the C-section rate in a Saudi tertiary hospital. The suggested factors were previous C-sections, abnormal fetal heart rate, malpresentation, and maternal or fetal medical conditions.
Keywords: cesarean sections, maternal indications, maternal complications, neonatal condition
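Odds ratios and confidence intervals of the kind reported above are conventionally computed from a 2x2 table. A small sketch with illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    exposed group: a with outcome, b without; unexposed: c with, d without."""
    or_ = (a * d) / (b * c)
    se  = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
    lo  = or_ * math.exp(-z * se)
    hi  = or_ * math.exp(z * se)
    return or_, lo, hi

# Illustrative counts only: 30/170 events in one cohort vs 20/180 in the other
or_, lo, hi = odds_ratio_ci(30, 170, 20, 180)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1 corresponds to a statistically significant difference between the two cohorts at the chosen level.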
Procedia PDF Downloads 88
23127 Optimal Cropping Pattern in an Irrigation Project: A Hybrid Model of Artificial Neural Network and Modified Simplex Algorithm
Authors: Safayat Ali Shaikh
Abstract:
Software has been developed for determining the optimal cropping pattern in an irrigation project, considering a land constraint, a water availability constraint, and a pick-up flow constraint, using a modified simplex algorithm. Artificial neural network (ANN) models have been developed to predict rainfall. An AR(1) model was used to generate 1000 years of rainfall data to train the ANN. Simulation was carried out with the expected rainfall data. Eight crops and three soil classes were considered in the optimization model. The area under each crop and each soil class was quantified using the modified simplex algorithm to obtain the optimum net return. The efficacy of the software has been tested using data from a large irrigation project in India.
Keywords: artificial neural network, large irrigation project, modified simplex algorithm, optimal cropping pattern
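The core of such an optimization is a linear program, maximize net return subject to land and water limits, solved by the simplex method. The following is a minimal textbook tableau-simplex sketch with two hypothetical crops and made-up coefficients; it illustrates the method generically, not the paper's modified algorithm or data:

```python
def simplex_max(c, A, b):
    """Tableau simplex for: max c.x  s.t.  A x <= b, x >= 0 (all b >= 0)."""
    m, n = len(A), len(c)
    # Build the tableau with slack variables appended
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])   # objective row
    basis = list(range(n, n + m))
    while True:
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:
            break                                     # optimal reached
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        _, piv_row = min(ratios)                      # minimum-ratio test
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]      # normalise pivot row
        for i in range(m + 1):                        # eliminate pivot column
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [vi - f * vp for vi, vp in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Two crops; land <= 100 ha, water <= 160 units; net returns 3 and 2 per ha
x, profit = simplex_max([3.0, 2.0], [[1.0, 1.0], [2.0, 1.0]], [100.0, 160.0])
print(x, profit)
```

The real model has eight crops times three soil classes as decision variables and additional pick-up flow constraints, but the solution mechanics are the same.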
Procedia PDF Downloads 203
23126 Two-Phase Sampling for Estimating a Finite Population Total in Presence of Missing Values
Authors: Daniel Fundi Murithi
Abstract:
Missing data is a real bane in many surveys. To overcome the problems caused by missing data, partial deletion and single imputation methods, among others, have been proposed. However, these are associated with problems such as discarding usable data and inaccuracy in reproducing known population parameters and standard errors. Regression and stochastic imputation assume that there is a variable with complete cases to be used as a predictor in estimating missing values in the other variable, and that the relationship between the two variables is linear, which might not be realistic in practice. In this project, we estimate the population total in the presence of missing values in two-phase sampling. Instead of regression or stochastic models, a nonparametric model-based regression is used in imputing missing values. An empirical study showed that nonparametric model-based regression imputation is better than mean, median, regression, and stochastic imputation at reproducing the variance of the population total estimate obtained when there were no missing values. Although regression and stochastic imputation were better than nonparametric model-based imputation at reproducing the population total estimates for one of the sample sizes considered, nonparametric model-based imputation may be used when the relationship between the outcome and predictor variables is not linear.
Keywords: finite population total, missing data, model-based imputation, two-phase sampling
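A nonparametric model-based imputation of the kind described can be sketched with a kernel-regression (Nadaraya-Watson) estimator, which makes no linearity assumption. The estimator choice, bandwidth, and toy data below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def nw_impute(x, y, bandwidth=1.0):
    """Fill missing y values with a Nadaraya-Watson kernel-regression estimate:
    a locally weighted mean of the observed y, weighted by a Gaussian kernel
    in x. No linear relationship between x and y is assumed."""
    y = y.copy()
    obs = ~np.isnan(y)
    for i in np.where(~obs)[0]:
        w = np.exp(-0.5 * ((x[obs] - x[i]) / bandwidth) ** 2)
        y[i] = np.sum(w * y[obs]) / np.sum(w)
    return y

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 1.0, np.nan, 9.0, 16.0, 25.0])    # y = x**2 with one gap
filled = nw_impute(x, y, bandwidth=0.8)
print(round(filled[2], 2))   # pulled toward the neighbours y(1)=1 and y(3)=9,
                             # unlike the column mean 10.2 of mean imputation
```

Because the estimate is local, it tracks a nonlinear outcome-predictor relationship where a global regression line would not.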
Procedia PDF Downloads 131
23125 The Effects of Multiple Levels of Intelligence in an Algebra 1 Classroom
Authors: Abigail Gragg
Abstract:
The goal of this research study was to determine whether implementing Howard Gardner's multiple levels of intelligence would enhance student achievement levels in an Algebra 1 College Preparatory class. This was done in every class by incorporating one of the eight levels of intelligence into small-group work in stations. Every class was conducted using small-group instruction. Achievement levels were measured through various forms of collected data that expressed student understanding in class through formative assessments versus student understanding on summative assessments. The data samples included assessments (i.e., summative and formative assessments), observable data, video recordings, a daily log book, student surveys, and checklists kept during the observation periods. Formative assessments were analyzed during each class period to measure in-class understanding. Summative assessments were dissected question by question for accuracy to review the effects of each intelligence implemented. The data were collated into a coding workbook for further analysis to identify the resulting themes of the research. These themes include: 1) there was no correlation between the multiple levels of intelligence and enhanced student achievement, 2) bodily-kinesthetic intelligence showed the most improvement on test questions, and 3) of all the intelligences, interpersonal intelligence most enhanced student understanding in class.
Keywords: stations, small-group instruction, multiple levels of intelligence, mathematics, Algebra 1, student achievement, secondary school, instructional pedagogies
Procedia PDF Downloads 111
23124 Performance Analysis of Multichannel OCDMA-FSO Network under Different Pervasive Conditions
Authors: Saru Arora, Anurag Sharma, Harsukhpreet Singh
Abstract:
To meet the growing need for high data rates and bandwidth, various efforts have been made toward efficient communication systems. Optical code division multiple access (OCDMA) over a free space optics (FSO) communication system is an effective way of providing transmission at high data rates with a low bit error rate and a low amount of multiple access interference. This paper demonstrates an OCDMA-over-FSO communication system up to a range of 7000 m at a data rate of 5 Gbps. Initially, an 8-user OCDMA-FSO system is simulated, and pseudo-orthogonal codes are used for encoding. A simulative analysis of various performance parameters, such as power and core effective area, that affect the bit error rate (BER) of the system is carried out. The analysis reveals that the transmission length is limited by the multiple access interference (MAI) effect, which arises as the number of users in the system increases.
Keywords: FSO, PSO, bit error rate (BER), OptiSystem simulation, multiple access interference (MAI), Q-factor
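The Q-factor and BER that such simulations report are linked by the standard Gaussian-noise relation for an intensity-modulated link; a small sketch with illustrative Q values:

```python
import math

def q_to_ber(q):
    """BER from Q-factor for an on-off-keyed link with Gaussian noise:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

for q in (3, 6, 7):
    print(f"Q = {q}: BER = {q_to_ber(q):.2e}")
```

The familiar rule of thumb falls out directly: Q = 6 corresponds to a BER of about 1e-9, the usual acceptance threshold for optical links, and each further increase in Q drives the BER down by orders of magnitude.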
Procedia PDF Downloads 365
23123 The Study of Implications on Modern Businesses Performances by Digital Communities: Case of Data Leak
Authors: Asim Majeed, Anwar Ul Haq, Ayesha Asim, Mike Lloyd-Williams, Arshad Jamal, Usman Butt
Abstract:
This study aims to investigate the impact of the leak of M&S customer data on digital communities. Modern businesses use digital communities as an important public relations tool for marketing purposes. This form of communication helps companies build better relationships with their customers and acts as another source of information. Communication between customers and organizations is not regulated, so users may post positive and negative comments. New platforms are being developed on a daily basis, and it is crucial for businesses not only to familiarize themselves with these but also to know how to reach their existing and prospective consumers. Digital communities are the driving force of marketing and communication in modern businesses, and they are continuously increasing and developing. This phenomenon is changing the way marketing is conducted. The current research discusses the implications for M&S's business performance after the data was exploited on digital communities; users contacted M&S and raised security concerns. M&S closed down its website for a few hours to try to resolve the issue and, the next day, made a public apology about the incident. This information proliferated across various digital communities and negatively impacted the M&S brand name, sales, and customers. A content analysis approach is used to collect qualitative data from 100 digital bloggers, including social media communities such as Facebook and Twitter. The results and findings provide useful new insights into the nature and form of digital users' security concerns. The findings have theoretical and practical implications. This research showcases a large corporation utilizing various digital community platforms and can serve as a model for future organizations.
Keywords: digital communities, performance, dissemination, implications, data exploitation
Procedia PDF Downloads 402