Search results for: classification algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3850

550 Selective Guest Accommodation in Zn(II) Bimetallic-Organic Coordination Frameworks

Authors: Bukunola K. Oguntade, Gareth M. Watkins

Abstract:

The synthesis and characterization of metal-organic frameworks (MOFs) is an area of coordination chemistry which has grown rapidly in recent years. Worldwide, there have been growing concerns about future energy supplies and their environmental impacts. A good number of MOFs have been tested for the adsorption of small molecules in the vapour phase. An important issue for potential applications of MOFs as gas adsorption and storage materials is the stability of their structure upon sorption. Therefore, studying the thermal stability of MOFs upon adsorption is important. The incorporation of two or more transition metals in a coordination polymer is a current challenge for designed synthesis. This work focused on the synthesis, characterization and small-molecule adsorption properties of three microporous complexes (one zinc monometallic and two bimetallic) involving Cu(II), Zn(II) and 1,2,4,5-benzenetetracarboxylic acid, prepared by the ambient precipitation and solvothermal methods. The complexes were characterized by elemental analysis, infrared spectroscopy, scanning electron microscopy, thermogravimetric analysis and X-ray powder diffraction. The N2 adsorption isotherms showed the complexes to be of Type III in the IUPAC classification, with very small pores capable only of small-molecule sorption. All the synthesized compounds were observed to contain water as a guest. Investigations of their inclusion properties for small molecules in the vapour phase showed water and methanol to be the only possible inclusion candidates, with 10.25 H2O in the monometallic complex [Zn4(H2B4C)2.5(OH)3(H2O)]·10H2O, which was not reusable after a complete structural collapse. The ambient precipitation bimetallic, [CuZnB4C(H2O)2]·5H2O, was found to be reusable and recoverable from structural collapse after adsorption of 5.75 H2O. In addition, Solvo-[CuZnB4C(H2O)2.5]·2H2O, obtained from the solvothermal method, showed two cycles of rehydration with 1.75 H2O and 0.75 MeOH inclusion, while the structure remained unaltered upon dehydration and adsorption.

Keywords: adsorption, characterization, copper, metal-organic frameworks, zinc

Procedia PDF Downloads 130
549 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
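For illustration, a minimal sketch of the model-comparison workflow described above could look like the following (not the authors' code). The CSV file and feature columns are hypothetical, and the "within +/- 10 runs" accuracy is computed as the share of predictions that land within 10 runs of the true score; XGBoost could be slotted in alongside the scikit-learn regressors in the same loop.

```python
# Hedged sketch of comparing regressors for score prediction.
# File name and feature columns are placeholders, not the study's data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

matches = pd.read_csv("ipl_matches.csv")                      # hypothetical file
features = ["runs_so_far", "wickets", "overs", "run_rate"]    # assumed features
X, y = matches[features], matches["final_score"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear Regression": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
    "SVM (RBF)": SVR(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # "accuracy" as the share of predictions within +/- 10 runs of the true score
    within_10 = np.mean(np.abs(pred - y_test) <= 10)
    print(f"{name}: {within_10:.2%} of predictions within 10 runs")
```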

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 43
548 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective way to track such changes. However, producing water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, including the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experimental testing indicates that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and that it can produce desirable contour lines when there are “holes” in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036) covering Xuzhou City, Jiangsu Province, China. Firstly, the near infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is also done in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e. Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
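As a rough illustration of the level-set workflow (not the authors' RSF implementation, which is not shipped by scikit-image), a water mask can be evolved on a stretched NIR band with the Chan-Vese model that scikit-image does provide; the GeoTIFF path is a placeholder and rasterio is assumed for reading the Landsat band.

```python
# Hedged sketch of level-set water extraction from a NIR band.
# Uses the Chan-Vese model available in scikit-image, not the RSF model.
import numpy as np
import rasterio                                  # assumed reader for the Landsat band
from skimage import img_as_float
from skimage.segmentation import chan_vese

with rasterio.open("landsat_2000_nir.tif") as src:   # hypothetical file
    nir = img_as_float(src.read(1))

# Linear stretch to [0, 1]; water has low NIR reflectance and separates from land.
nir = (nir - nir.min()) / (nir.max() - nir.min() + 1e-9)

# Evolve the level set; the zero level set converges to the lake boundary,
# including interior "holes" such as islands.
water_mask = chan_vese(nir, mu=0.1)
print("water pixels:", int(water_mask.sum()))
```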

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 362
547 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Materials and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) datasets and virtual non-contrast (VNC) datasets generated from both portal venous and arterial phase imaging were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e. gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phases showed a good correlation in most tissue types. However, the aortic attenuation was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
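A minimal sketch of the paired TNC-vs-VNC attenuation comparison is shown below; the HU values are illustrative placeholders, not study data.

```python
# Hedged sketch of a paired ROI attenuation comparison (illustrative values only).
import numpy as np
from scipy import stats

tnc_hu = np.array([45.0, 52.3, 8.1, 30.2, 55.6, -95.4])   # true non-contrast ROI means
vnc_hu = np.array([41.2, 49.8, 10.5, 27.9, 60.1, -90.3])   # virtual non-contrast ROI means

diff = tnc_hu - vnc_hu
print("mean difference: %.1f +/- %.1f HU" % (diff.mean(), diff.std(ddof=1)))
print("share of ROIs within 15 HU: %.0f%%" % (100 * np.mean(np.abs(diff) < 15)))

# Correlation and a paired test between the two reconstructions
r, _ = stats.pearsonr(tnc_hu, vnc_hu)
t, p = stats.ttest_rel(tnc_hu, vnc_hu)
print("Pearson r = %.2f, paired t-test p = %.3f" % (r, p))
```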

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 136
546 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. A student might be under-prepared in an entire course or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are some prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it through a general-purpose solver is viable for small instances only, while solving real-life-sized instances of such a model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
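A toy sketch of the offer-selection core of such a MIP, written with the PuLP modeller, is shown below; the units, costs, quality scores and budget are invented, and the authors' full model also covers timetabling, teacher availability and the remaining constraints.

```python
# Toy illustration of training-offer selection as a MIP (PuLP).
# Units, costs and quality scores are made up for the example.
import pulp

units = ["math_basics", "math_advanced", "physics_basics"]
cost = {"math_basics": 800, "math_advanced": 1200, "physics_basics": 900}
quality = {"math_basics": 5, "math_advanced": 8, "physics_basics": 6}
budget = 2000

prob = pulp.LpProblem("training_offer_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("activate", units, cat="Binary")

prob += pulp.lpSum(quality[u] * x[u] for u in units)          # teaching quality
prob += pulp.lpSum(cost[u] * x[u] for u in units) <= budget    # limited funds
prob += x["math_advanced"] <= x["math_basics"]                 # prerequisite

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("activated units:", [u for u in units if x[u].value() == 1])
```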

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 602
545 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System

Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple

Abstract:

This article describes a software package called ROSgeoregistration, intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin is provided which uses the real-time sensor pose and an image formation model to generate simulated imagery from the specified reference image, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google’s Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose for image-based Guidance, Navigation and Control (GNC) applications.
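For readers who want to prototype the kind of image-to-reference registration such a simulator is meant to exercise, a generic sketch (not the package's own algorithm) using ORB features and a RANSAC homography is given below; the image files are placeholders.

```python
# Hedged sketch of registering a sensed frame against a large reference mosaic:
# ORB keypoints, brute-force matching, and a RANSAC-estimated homography.
import cv2
import numpy as np

reference = cv2.imread("reference_mosaic.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
sensed = cv2.imread("sensed_frame.png", cv2.IMREAD_GRAYSCALE)         # hypothetical

orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_img, des_img = orb.detectAndCompute(sensed, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)[:200]

src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated sensed-to-reference homography:\n", H)
```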

Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation

Procedia PDF Downloads 99
544 Dietary Micronutrients and Health among Youth in Algeria

Authors: Allioua Meryem

Abstract:

Similar to much of the developing world, Algeria is currently undergoing an epidemiological transition. While malnutrition, undernutrition and infectious diseases used to be the main causes of poor health, today there is a higher proportion of chronic, non-communicable diseases (NCDs), including cardiovascular disease, diabetes mellitus, cancer, etc. According to estimates for Algeria from the World Health Organization (WHO), NCDs accounted for 63% of all deaths in 2010. The objective of this study was the assessment of eating habits and anthropometric characteristics in a group of youth aged 15 to 19 years in Tlemcen. This study was conducted on a total of 806 youth enrolled in a descriptive cross-sectional study. Nutritional status was classified according to the international IOTF standards: youth were defined as obese if they had a BMI ≥ 95th percentile, and youth with a BMI between the 85th and 95th percentiles were defined as overweight. Waist circumference (WC) was classified according to the HD criteria: moderate risk for WC ≥ 90th percentile and high risk for WC ≥ 95th percentile. The dietary assessment was based on a 24-hour dietary recall assisted by food records. The USDA nutrient database for the Nutrinux® program was used to analyze dietary intake. The nutrient adequacy ratio (NAR) was calculated by dividing the daily individual intake by the dietary recommended intake (DRI) for each nutrient. Overall, 9% of the population was overweight, 3% was obese and 7.5% had abdominal obesity. Foods eaten in moderation were chips, cookies and chocolate (1-3 times/day), with an increased consumption of fried foods during the week, and almost half of the youth consumed sugary drinks more than 3 times per week. We observed a decreased intake of energy and protein (P < 0.001, P = 0.003) and of SFA (P = 0.018), and lower NARs for phosphorus, iron, magnesium, vitamin B6, vitamin E, folate, niacin and thiamin, reflecting a lower consumption of fruit, vegetables, milk and milk products. The youth surveyed have eating habits that put them at risk of developing obesity and chronic disease.
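The NAR computation described above reduces to dividing each daily intake by its DRI; a small sketch with illustrative (non-study) values follows.

```python
# Sketch of the nutrient adequacy ratio (NAR): daily intake divided by the DRI.
# Intakes and DRIs below are illustrative, not the study's values.
daily_intake = {"iron_mg": 9.5, "folate_ug": 260.0, "vitamin_b6_mg": 1.0}
dri = {"iron_mg": 15.0, "folate_ug": 400.0, "vitamin_b6_mg": 1.3}

nar = {nutrient: daily_intake[nutrient] / dri[nutrient] for nutrient in dri}
for nutrient, ratio in nar.items():
    flag = "adequate" if ratio >= 1.0 else "below recommendation"
    print(f"{nutrient}: NAR = {ratio:.2f} ({flag})")
```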

Keywords: food intake, health, anthropometric characteristics, Algeria

Procedia PDF Downloads 538
543 Field Emission Scanning Microscope Image Analysis for Porosity Characterization of Autoclaved Aerated Concrete

Authors: Venuka Kuruwita Arachchige Don, Mohamed Shaheen, Chris Goodier

Abstract:

Autoclaved aerated concrete (AAC) is known for its light weight, easy handling, high thermal insulation, and extremely porous structure. Investigation of pore behavior in AAC is crucial for characterizing the material, standardizing design and production techniques, enhancing mechanical, durability, and thermal performance, studying the effectiveness of protective measures, and analyzing the effects of weather conditions. The significant details of pores are difficult to observe with acceptable accuracy. High-resolution field emission scanning electron microscope (FESEM) image analysis is a promising technique for investigating the pore behavior and density of AAC, and it is adopted in this study. A mercury intrusion porosimeter and a gas pycnometer were employed to characterize the porosity distribution and density parameters. The analysis considered three different densities of AAC blocks and three layers in the altitude direction within each block. A set of analysis steps is presented to extract and analyze the details of pore shape, pore size, pore connectivity, and pore percentages from FESEM images of AAC. Average pore behavior outcomes per unit area are presented. Comparison of the porosity distribution and density parameters revealed significant variations. FESEM imaging offered unparalleled insights into porosity behavior, surpassing the capabilities of the other techniques. The multi-stage analysis provides the porosity percentage occupied by various pore categories, the total porosity, the variation of pore distribution across AAC densities and layers, the numbers of two-dimensional and three-dimensional pores, the variation of apparent and matrix densities with respect to pore behavior, the variation of pore behavior with respect to aluminum content, and the relationships among shape, diameter, connectivity, and percentage in each pore classification.
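A minimal sketch of the kind of pore extraction such an image analysis involves is shown below, using scikit-image (Otsu threshold, connected components, per-pore descriptors); the micrograph path and pixel size are placeholders, and the study's exact multi-stage pipeline is not reproduced.

```python
# Hedged sketch of pore extraction from an electron micrograph with scikit-image.
import numpy as np
from skimage import io, filters, measure

image = io.imread("aac_fesem.png", as_gray=True)       # hypothetical micrograph
pixel_size_um = 0.05                                     # assumed scale

pores = image < filters.threshold_otsu(image)           # dark regions as pores
labels = measure.label(pores)
props = measure.regionprops(labels)

areas_um2 = np.array([p.area for p in props]) * pixel_size_um ** 2
print("pore count:", len(props))
print("porosity (area fraction): %.1f%%" % (100 * pores.mean()))
print("mean pore area: %.3f um^2" % areas_um2.mean())
print("mean equivalent diameter: %.2f um" %
      (np.mean([p.equivalent_diameter for p in props]) * pixel_size_um))
print("mean circularity: %.2f" %
      np.mean([4 * np.pi * p.area / (p.perimeter ** 2 + 1e-9) for p in props]))
```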

Keywords: autoclaved aerated concrete, density, imaging technique, microstructure, porosity behavior

Procedia PDF Downloads 65
542 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of future economic benefits or service potentials embodied in the asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of non-cash-generating assets and IPSAS 26 - Impairment of cash-generating assets have been designed considering this specificity. When measuring impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows expected to be received in the future. The article classifies public sector assets as non-cash-generating assets and cash-generating assets and also deals with the factors which should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating assets and non-cash-generating assets and the methods of their selection. The traditional and the expected cash flow approaches for calculation of the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring the asset, presentation of realistic information regarding the value of the assets should be ensured in financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach. The research process uses international accounting standards, theoretical research and publications of Georgian and foreign scientists.
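As a simple illustration of the two cash-flow approaches mentioned above, the sketch below computes value in use by discounting a single "most likely" cash-flow stream (traditional approach) and a probability-weighted stream (expected cash flow approach); all figures are invented.

```python
# Hedged illustration of value in use as discounted future cash flows.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.06

# Traditional approach: a single set of projected cash flows.
traditional = present_value([120.0, 120.0, 110.0, 100.0], rate)

# Expected cash flow approach: probability-weighted cash flow per year.
scenarios = [(0.3, [150, 150, 140, 130]),
             (0.5, [120, 120, 110, 100]),
             (0.2, [ 80,  80,  70,  60])]
expected = [sum(p * cfs[t] for p, cfs in scenarios) for t in range(4)]

print("traditional value in use: %.1f" % traditional)
print("expected-cash-flow value in use: %.1f" % present_value(expected, rate))
```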

Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value in use

Procedia PDF Downloads 139
541 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units

Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro

Abstract:

In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday’s measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee’s workload and the geographical position of the consumer units. This process is done manually by several experts with experience in the geographic formation of the region; it takes a large number of days to complete the final planning, and, because it is a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of the workload and 11.97% in the compactness of the groups.
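A hedged skeleton of the genetic + K-Means idea is sketched below: each chromosome encodes k centroids, the fitness combines compactness with a workload-imbalance penalty, and a simple selection/crossover/mutation loop searches for a balanced districting. The data are synthetic and this is not the authors' exact GBKMeans algorithm.

```python
# Hedged skeleton of a genetic search over centroid sets with a capacity penalty.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(size=(300, 2))        # consumer-unit coordinates (synthetic)
k, pop_size, generations = 6, 30, 60
target_load = len(points) / k

def fitness(centroids):
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    inertia = (d[np.arange(len(points)), labels] ** 2).sum()    # compactness
    loads = np.bincount(labels, minlength=k)
    imbalance = np.std(loads) / target_load                      # workload homogeneity
    return inertia + 10.0 * imbalance

population = rng.uniform(size=(pop_size, k, 2))
for _ in range(generations):
    scores = np.array([fitness(c) for c in population])
    parents = population[np.argsort(scores)[: pop_size // 2]]    # selection
    # crossover: blend two random parents; mutation: small Gaussian noise
    idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
    children = 0.5 * parents[idx[:, 0]] + 0.5 * parents[idx[:, 1]]
    children += rng.normal(scale=0.02, size=children.shape)
    population = np.concatenate([parents, children])

best = population[np.argmin([fitness(c) for c in population])]
print("best fitness:", round(float(fitness(best)), 2))
```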

Keywords: capacitated clustering, k-means, genetic algorithm, districting problems

Procedia PDF Downloads 193
540 Clinical Advice Services: Using Lean Chassis to Optimize Nurse-Driven Telephonic Triage of After-Hour Calls from Patients

Authors: Eric Lee G. Escobedo-Wu, Nidhi Rohatgi, Fouzel Dhebar

Abstract:

It is challenging for patients to navigate through healthcare systems after-hours. This leads to delays in care, patient/provider dissatisfaction, inappropriate resource utilization, readmissions, and higher costs. It is important to provide patients and providers with effective clinical decision-making tools to allow seamless connectivity and coordinated care. In August 2015, patient-centric Stanford Health Care established Clinical Advice Services (CAS) to provide clinical decision support after-hours. CAS is founded on key Lean principles: Value stream mapping, empathy mapping, waste walk, takt time calculations, standard work, plan-do-check-act cycles, and active daily management. At CAS, Clinical Assistants take the initial call and manage all non-clinical calls (e.g., appointments, directions, general information). If the patient has a clinical symptom, the CAS nurses take the call and utilize standardized clinical algorithms to triage the patient to home, clinic, urgent care, emergency department, or 911. Nurses may also contact the on-call physician based on the clinical algorithm for further direction and consultation. Since August 2015, CAS has managed 228,990 calls from 26 clinical specialties. Reporting is built into the electronic health record for analysis and data collection. 65.3% of the after-hours calls are clinically related. Average clinical algorithm adherence rate has been 92%. An average of 9% of calls was escalated by CAS nurses to the physician on call. An average of 5% of patients was triaged to the Emergency Department by CAS. Key learnings indicate that a seamless connectivity vision, cascading, multidisciplinary ownership of the problem, and synergistic enterprise improvements have contributed to this success while striving for continuous improvement.

Keywords: after hours phone calls, clinical advice services, nurse triage, Stanford Health Care

Procedia PDF Downloads 172
539 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform

Authors: Shih-Wen Hsiao, Yi-Cheng Tsao

Abstract:

In the field of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric forms of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and there are more and more complementary applications, the penetration rate of 3D scanning among the public is still limited by the relatively high price of the devices. On the other hand, the Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Therefore, related studies can be done with the Kinect at acceptable cost and data precision. Because the Kinect utilizes an optical mechanism to extract depth information, limitations arise from the straight path of the light. Thus, various angles are required sequentially to obtain the complete 3D information of the object when applying a single Kinect for 3D scanning. An integration process which combines the 3D data from different angles by certain algorithms is also required. This sequential scanning process costs much time, and the complex integration process often encounters technical problems. Therefore, this paper aimed to apply multiple Kinects simultaneously to the development of a rapid 3D mannequin scan platform and proposed suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion on the angles and number of Kinects is also described. An experiment applying multiple Kinects to the scanning of a 3D mannequin was constructed with the Microsoft API, and the results show that the time required for scanning and the technical threshold can be reduced for the fashion and garment design industries.
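A hedged sketch of the data-fusion step such a platform requires is shown below, using Open3D (0.10+ API) to refine the alignment between two sensors' point clouds with ICP and merge them; the capture files and the initial extrinsic guess are placeholders standing in for the rig calibration.

```python
# Hedged sketch of merging two Kinect captures into one mannequin cloud with ICP.
import numpy as np
import open3d as o3d

front = o3d.io.read_point_cloud("kinect_front.ply")   # hypothetical captures
back = o3d.io.read_point_cloud("kinect_back.ply")

init_guess = np.eye(4)        # assumed rough extrinsics from the rig calibration
result = o3d.pipelines.registration.registration_icp(
    back, front, max_correspondence_distance=0.02, init=init_guess,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

back.transform(result.transformation)       # bring the second view into the first frame
merged = (front + back).voxel_down_sample(voxel_size=0.005)
print("fused mannequin cloud:", merged)
```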

Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple Kinect sensor

Procedia PDF Downloads 364
538 Analysis of Aquifer Productivity in the Mbouda Area (West Cameroon)

Authors: Folong Tchoffo Marlyse Fabiola, Anaba Onana Achille Basile

Abstract:

Located in the western region of Cameroon, in the BAMBOUTOS department, the city of Mbouda belongs to the Pan-African basement. The water resources exploited in this region consist of surface water and groundwater from weathered and fractured aquifers within the same basement. To study the factors determining the productivity of aquifers in the Mbouda area, we adopted a methodology based on collecting data from boreholes drilled in the region, identifying different types of rocks, analyzing structures, and conducting geophysical surveys in the field. The results obtained allowed us to distinguish two main types of rocks: metamorphic rocks composed of amphibolites and migmatitic gneisses and igneous rocks, namely granodiorites and granites. Several types of structures were also observed, including planar structures (foliation and schistosity), folded structures (folds), and brittle structures (fractures and lineaments). A structural synthesis combines all these elements into three major phases of deformation. Phase D1 is characterized by foliation and schistosity, phase D2 is marked by shear planes and phase D3 is characterized by open and sealed fractures. The analysis of structures (fractures in outcrops, Landsat lineaments, subsurface structures) shows a predominance of ENE-WSW and WNW-ESE directions. Through electrical surveys and borehole data, we were able to identify the sequence of different geological formations. Four geo-electric layers were identified, each with a different electrical conductivity: conductive, semi-resistive, or resistive. The last conductive layer is considered a potentially aquiferous zone. The flow rates of the boreholes ranged from 2.6 to 12 m3/h, classified as moderate to high according to the CIEH classification. The boreholes were mainly located in basalts, which are mineralogically rich in ferromagnesian minerals. This mineral composition contributes to their high productivity as they are more likely to be weathered. The boreholes were positioned along linear structures or at their intersections.

Keywords: Mbouda, Pan-African basement, productivity, west-Cameroon

Procedia PDF Downloads 56
537 A Comparative Analysis of Clustering Approaches for Understanding Patterns in Health Insurance Uptake: Evidence from Sociodemographic Kenyan Data

Authors: Nelson Kimeli Kemboi Yego, Juma Kasozi, Joseph Nkruzinza, Francis Kipkogei

Abstract:

The study investigated the low uptake of health insurance in Kenya despite efforts to achieve universal health coverage through various health insurance schemes. Unsupervised machine learning techniques were employed to identify patterns in health insurance uptake based on sociodemographic factors among Kenyan households. The aim was to identify key demographic groups that are underinsured and to provide insights for the development of effective policies and outreach programs. Using the 2021 FinAccess Survey, the study clustered Kenyan households based on their health insurance uptake and sociodemographic features to reveal patterns in health insurance uptake across the country. The effectiveness of k-prototypes clustering, hierarchical clustering, and agglomerative hierarchical clustering in clustering based on sociodemographic factors was compared. The k-prototypes approach was found to be the most effective at uncovering distinct and well-separated clusters in the Kenyan sociodemographic data related to health insurance uptake based on silhouette, Calinski-Harabasz, Davies-Bouldin, and Rand indices. Hence, it was utilized in uncovering the patterns in uptake. The results of the analysis indicate that inclusivity in health insurance is greatly related to affordability. The findings suggest that targeted policy interventions and outreach programs are necessary to increase health insurance uptake in Kenya, with the ultimate goal of achieving universal health coverage. The study provides important insights for policymakers and stakeholders in the health insurance sector to address the low uptake of health insurance and to ensure that healthcare services are accessible and affordable to all Kenyans, regardless of their socio-demographic status. The study highlights the potential of unsupervised machine learning techniques to provide insights into complex health policy issues and improve decision-making in the health sector.
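A minimal sketch of the k-prototypes step described above, using the kmodes package on a mixed numeric/categorical table, is shown below; the CSV file and column layout are placeholders, not the FinAccess survey schema.

```python
# Hedged sketch of k-prototypes clustering on mixed sociodemographic data.
import pandas as pd
from kmodes.kprototypes import KPrototypes

households = pd.read_csv("finaccess_2021.csv")           # hypothetical extract
cols = ["age", "household_size", "region", "education", "insured"]
X = households[cols].to_numpy()
categorical_idx = [2, 3, 4]                                # assumed categorical columns

kproto = KPrototypes(n_clusters=4, init="Cao", random_state=42)
households["cluster"] = kproto.fit_predict(X, categorical=categorical_idx)

# Profile uptake by cluster to see which demographic groups are underinsured
print(households.groupby("cluster")["insured"].value_counts(normalize=True))
```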

Keywords: health insurance, unsupervised learning, clustering algorithms, machine learning

Procedia PDF Downloads 136
536 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects

Authors: Rafay Ahmed, Condon Lau

Abstract:

Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique to study bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture. The model allowed monitoring of the natural healing of bone while avoiding mechanical harm to the callus. Calvarial defects were created using a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) and served as in vivo defects. After 7 days, the animals were euthanized and their skulls were harvested. One additional defect per sample was created on the opposite parietal bone using the same calvarial defect procedure, to serve as a control defect. Raman spectroscopy (785 nm) was established to investigate bone parameters of three different skull surfaces: in vivo defects, control defects and normal surface. Principal component analysis (PCA) was utilized for the data analysis and interpretation of the Raman spectra and helped in the classification of groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ν1 phosphate mineral band. PC2 shows the major variation at 1448 cm⁻¹, which is the characteristic band of CH2 deformation and corresponds to collagens. The Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared to the surface and controls. Scanning electron microscope and optical microscope images show the formation of a newly generated matrix by means of bony bridges of collagens. The optical profiler shows that surface roughness increased by 30% from controls to in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
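A minimal sketch of the PCA step is shown below: the Raman spectra are stacked into a matrix and the loadings of the first two components are inspected for their dominant wavenumbers (e.g. the ~958 cm⁻¹ phosphate and ~1448 cm⁻¹ CH2 bands); the spectra file and wavenumber axis are placeholders.

```python
# Hedged sketch of PCA on stacked Raman spectra (rows = spectra, columns = wavenumbers).
import numpy as np
from sklearn.decomposition import PCA

data = np.loadtxt("raman_spectra.csv", delimiter=",")    # hypothetical spectra matrix
wavenumbers = np.linspace(400, 1800, data.shape[1])       # assumed wavenumber axis

pca = PCA(n_components=2)                                  # PCA mean-centres internally
scores = pca.fit_transform(data)
print("canonical score matrix shape:", scores.shape)

for i, loading in enumerate(pca.components_, start=1):
    peak = wavenumbers[np.argmax(np.abs(loading))]
    print(f"PC{i}: {pca.explained_variance_ratio_[i-1]:.1%} of variance, "
          f"largest loading near {peak:.0f} cm^-1")
```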

Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization

Procedia PDF Downloads 220
535 The Impact of Adopting Cross Breed Dairy Cows on Households’ Income and Food Security in the Case of Dejen Woreda, Amhara Region, Ethiopia

Authors: Misganaw Chere Siferih

Abstract:

This study assessed the impact of crossbreed dairy cows on household income and food security. The study area is found in Dejen Woreda, East Gojam Zone, Amhara region of Ethiopia. A random sampling technique was used to obtain a sample of 80 crossbreed dairy cow owners and 176 indigenous dairy cow owners. The study employed the food consumption score analytical framework to measure the food security status of the households. No statistically significant mean difference was found between crossbreed owners and indigenous owners. Logistic regression was employed to investigate the determinants of crossbreed dairy cow adoption; the results indicate that gender, education, labor number, cultivated land size, dairy cooperative membership, net income and food security status of the household are statistically significant independent variables explaining the binary dependent variable, crossbreed dairy cow adoption. Propensity score matching (PSM) was employed to analyze the impact of crossbreed dairy cow ownership on farmers’ income and food security. The average net income of crossbreed dairy cow owners was found to be significantly higher than that of indigenous dairy cow owners. Estimates of the average treatment effect on the treated (ATT) indicated that crossbreed dairy cow ownership raises household net income by 42%, 38.5%, 30.8% and 44.5% in the kernel, radius, nearest neighbor and stratification matching algorithms, respectively, as compared to indigenous dairy cow owners. However, the ATT estimates suggest that owning a crossbreed dairy cow does not significantly affect food security. Thus, crossbreed dairy cows enable farmers to increase income but not their food security in the study area. Finally, the study recommends establishing dairy cooperatives and advising farmers to become members of them, paying attention to promoting the impact of crossbreed dairy cows, and promoting nutrition-focused projects.
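A hedged sketch of nearest-neighbour propensity score matching for the ATT described above is given below; the survey file and column names are placeholders, and the study's kernel, radius and stratification matching variants are not reproduced.

```python
# Hedged sketch of 1:1 nearest-neighbour propensity score matching for the ATT.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("dejen_households.csv")                  # hypothetical survey extract
covariates = ["gender", "education", "labor_number", "land_size", "coop_member"]
treated = df["crossbreed_owner"] == 1

# Propensity of owning a crossbreed cow given the covariates
ps = LogisticRegression(max_iter=1000).fit(df[covariates], treated).predict_proba(df[covariates])[:, 1]

# Match each owner to the nearest non-owner on the propensity score (with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

treated_income = df.loc[treated, "net_income"].to_numpy()
control_income = df.loc[~treated, "net_income"].to_numpy()[idx.ravel()]
print("ATT on net income:", round(float(np.mean(treated_income - control_income)), 2))
```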

Keywords: crossbreed dairy cow, net income, food security, propensity score matching

Procedia PDF Downloads 60
534 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model

Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf

Abstract:

Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous cost of acquisition. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique which offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square deviation (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, the RMSEs were 2.8, 2.0, and 2.0 m, respectively. Advanced analysis was done for all of these cases, and it was noted that RMSE is reduced when the canopy cover is higher, when shadow and slopes are lower, and when clouds are distant from the analyzed site.
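The CHM comparison reduces to an RMSE between co-registered rasters; a small sketch with placeholder file names follows.

```python
# Hedged sketch of the RMSE between a stereo-derived CHM and a LiDAR reference,
# assuming both rasters are co-registered on the same grid (placeholder paths).
import numpy as np
import rasterio

with rasterio.open("chm_stereo.tif") as a, rasterio.open("chm_lidar.tif") as b:
    stereo, lidar = a.read(1).astype(float), b.read(1).astype(float)

valid = np.isfinite(stereo) & np.isfinite(lidar)
rmse = np.sqrt(np.mean((stereo[valid] - lidar[valid]) ** 2))
print("CHM RMSE vs LiDAR: %.2f m" % rmse)
```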

Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV

Procedia PDF Downloads 124
533 The Role of ICTS in Improving the Quality of Public Spaces in Large Cities of the Third World

Authors: Ayat Ayman Abdelaziz Ibrahim Amayem, Hassan Abdel-Salam, Zeyad El-Sayad

Abstract:

Nowadays, ICTs have spread extensively into everyday life in an unprecedented way. Great attention is paid to ICTs while the social aspect is ignored. With the immersive spread of the internet as well as smartphone applications and digital social networking, people have become more socially connected through virtual spaces instead of meeting in physical public spaces. Thus, this paper aims to find ways of implementing ICTs in public spaces so that these spaces regain their status as attractive places for people, incite meetings in real life, and create sustainable, lively city centers. One example of an urban space in the city center of Alexandria is selected for the study. Alexandria represents a large metropolitan city subjected to rapid transformation. Improving the quality of its public spaces will have great effects on the overall well-being of the city. The major roles that ICTs can play in the public space are: culture and art, education, planning and design, games and entertainment, and information and communication. Based on this classification, various examples and proposals of ICT interventions in public spaces are presented and analyzed to encourage good old-fashioned social interaction by creating the new social public place of this digital era. The paper will adopt methods such as a questionnaire for evaluating people’s willingness to accept the idea of using ICTs in public spaces, their needs and their proposals for an attractive place, and the technique of observation to understand people's behavior and their movement through the space, and finally will present an experimental design proposal for the selected urban space. Accordingly, this study will help to find design principles that can be adopted in the design of future public spaces to meet the needs of the digital era’s users, with the new concepts of social life respecting the rules of place-making.

Keywords: Alexandria sustainable city center, digital place-making, ICTs, social interaction, social networking, urban places

Procedia PDF Downloads 417
532 Physical and Mechanical Behavior of Compressed Earth Blocks Stabilized with Ca(OH)2 on Sub-Humid Warm Weather

Authors: D. Castillo T., Luis F. Jimenez

Abstract:

Compressed earth blocks (CEBs) constitute an alternative constructive element for building homes in regions with high levels of poverty and marginalization. Such is the case in Southeastern Mexico, where the population, predominantly indigenous, builds its houses with flimsy materials like wood and palm, vulnerable to the extreme weather of the area, because it does not have the financial resources to acquire concrete blocks. CEBs can provide several advantages over traditional vibro-compressed concrete blocks, such as the availability of materials, low manufacturing cost and reduced CO2 emissions to the atmosphere, since they are not subjected to a firing process. However, to improve their mechanical properties and their resistance to the adverse humidity and temperature conditions of sub-humid climate zones, the use of a chemical stabilizer is required; in this case, Ca(OH)2 was chosen. The Eades-Grim stabilization method was employed, according to ASTM C977-03. This method measures the optimum amount of lime required to stabilize the soil, increasing the pH to 12.4 or higher. The minimum amount of lime required in this experiment was 1% and the maximum was 10%. The material employed was an unconsolidated clay of low to medium plasticity (CL type according to the Unified Soil Classification System). Based on these results, the CEB manufacturing process was determined. The blocks obtained were 10x15x30 cm, made from a mixture of soil, water and lime in different proportions. These blocks were then left to dry outdoors and subjected to several physical and mechanical tests, such as compressive strength, absorption and drying shrinkage. The results were compared with the limits established by the Mexican standard NMX-C-404-ONNCCE-2005 for the construction of housing walls. In this manner, an alternative and sustainable material was obtained for the construction of rural households in the region, with better conditions of safety, comfort and cost.

Keywords: calcium hydroxide, chemical stabilization, compressed earth blocks, sub-humid warm weather

Procedia PDF Downloads 397
531 Evaluation of Traumatic Spine by Magnetic Resonance Imaging

Authors: Sarita Magu, Deepak Singh

Abstract:

Study Design: This prospective study was conducted at the Department of Radiodiagnosis, Pt. B.D. Sharma PGIMS, Rohtak, in 57 patients with spine injury on radiographs, or radiographically normal patients with neurological deficits, presenting within 72 hours of injury. Aims: To evaluate the role of magnetic resonance imaging (MRI) in spinal trauma patients, to compare MRI findings with the clinical profile and neurological status of the patient, and to correlate the MRI findings with the neurological recovery of the patient and predict the outcome. Material and Methods: The neurological status of the patients was assessed at the time of admission and discharge in all patients, and at a long-term interval of six months to one year in 27 patients, as per the American Spinal Injury Association (ASIA) classification. On MRI, cord injury was categorized into cord hemorrhage, cord contusion, cord edema only, and normal cord. Quantitative assessment of injury on MRI was done using mean canal compromise (MCC), mean spinal cord compression (MSCC) and lesion length. The neurological status at admission and the neurological recovery at discharge and long-term follow-up were compared with the various qualitative cord findings and quantitative parameters on MRI. Results: Cord edema and normal cord were associated with a favorable neurological outcome. Cord contusion showed lesser neurological recovery compared to cord edema. Cord hemorrhage was associated with the worst neurological status at admission and poor neurological recovery. Mean MCC, MSCC, and lesion length values were higher in patients presenting with ASIA grade A injury and showed decreasing trends towards ASIA grade E injury. Patients showing neurological recovery over the period of hospital stay and long-term follow-up had lower mean MCC, MSCC, and lesion length compared to patients showing no neurological recovery. The data were statistically significant, with a p value < .05. Conclusion: Cord hemorrhage and higher MCC, MSCC and lesion length have poor prognostic value in spine injury patients.

Keywords: spine injury, cord hemorrhage, cord contusion, MCC, MSCC, lesion length, ASIA grading

Procedia PDF Downloads 352
530 Discrimination of Bio-Analytes by Using Two-Dimensional Nano Sensor Array

Authors: P. Behera, K. K. Singh, D. K. Saini, M. De

Abstract:

Implementation of 2D materials in the detection of bio-analytes is highly advantageous in the field of sensing because of their high surface-to-volume ratio. We have designed our sensor array with differently functionalized cationic two-dimensional MoS₂, where surface modification was achieved with cationic thiol ligands of different functionality. Green fluorescent protein (GFP) was chosen as the signal transducer for its biocompatibility and anionic nature, which allows it to bind easily to the cationic MoS₂ surface, followed by fluorescence quenching. The addition of a bio-analyte to the sensor can decomplex the cationic MoS₂ and GFP conjugates, followed by the regeneration of GFP fluorescence. The fluorescence response patterns of the various analytes were collected and subjected to linear discriminant analysis (LDA) for classification. At first, 15 different proteins having a wide range of molecular weights and isoelectric points were successfully discriminated at 50 nM, with a detection limit of 1 nM. The sensor system was also applied in biofluids such as serum, where 10 different proteins at 2.5 μM were well separated. After the successful discrimination of protein analytes, the sensor array was implemented for bacteria sensing. Six different bacteria were successfully classified at OD = 0.05, with a detection limit corresponding to OD = 0.005. The optimized sensor array was able to classify uropathogens from non-uropathogens in urine medium. Further, the technique was applied for the discrimination of bacteria possessing resistance to different types and amounts of drugs. We elucidated the sensing mechanism through optical and electrodynamic studies, which indicate that the interaction between the bacteria and the sensor system was mainly due to electrostatic interactions, while the separation of native bacteria from their drug-resistant variants was due to van der Waals forces. There are two ways bacteria can be detected, i.e., through bacterial cells or lysates. Bacterial lysates contain intracellular information and are also safer to analyze as they do not contain live cells. Lysates of different drug-resistant bacteria were effectively patterned against the native strain. From unknown sample analysis, we found that discrimination of bacterial cells is more sensitive than that of lysates, but an analyst may prefer bacterial lysates over live cells for safer analysis.
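A minimal sketch of the LDA classification step is shown below: fluorescence response patterns (one row per replicate, one column per sensor-array element) are projected with linear discriminant analysis and scored by cross-validation; the data files and their layout are placeholders.

```python
# Hedged sketch of LDA on array-based fluorescence response patterns.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

responses = np.loadtxt("sensor_responses.csv", delimiter=",")   # hypothetical matrix
labels = np.loadtxt("analyte_labels.csv", dtype=str)             # e.g. protein names

lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(responses, labels)        # 2D canonical score plot coordinates
print("canonical scores shape:", scores.shape)

accuracy = cross_val_score(LinearDiscriminantAnalysis(), responses, labels, cv=5)
print("cross-validated classification accuracy: %.1f%%" % (100 * accuracy.mean()))
```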

Keywords: array-based sensing, drug resistant bacteria, linear discriminant analysis, two-dimensional MoS₂

Procedia PDF Downloads 139
529 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas

Authors: Anand Malik

Abstract:

The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor triggering extreme events, with a multi-fold effect on the high mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst along with the breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, and triggered flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state of India. As a result, a huge volume of water moving at high velocity created a catastrophe of the century, which resulted in the loss of a large number of human and animal lives and in damage to pilgrimage, tourism, agriculture and property. Thus, a comprehensive assessment of debris flow hazards requires GIS-based modeling using numerical methods. The aim of the present study is to analyze and map debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for the assessment of the spreading of the flow with maximum runout distances. An ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use/land cover change in the study area. The results for the study area show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas. The results are compared with existing available landslide/debris flow maps. ArcGIS software is used in preparing runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.

Keywords: debris flow, geospatial data, GIS based modeling, flow-R

Procedia PDF Downloads 269
528 Understanding the Heterogeneity of Polycystic Ovarian Syndrome: The Influence of Ethnicity and Body Mass

Authors: Hamza Ikhlaq, Stephen Franks

Abstract:

Background: Polycystic ovarian syndrome (PCOS) is one of the most common endocrine disorders affecting women of reproductive age. The aetiology behind PCOS is poorly understood but influencing ethnic, environmental, and genetic factors have been recognised. However, literature examining the impact of ethnicity is scarce. We hypothesised Body Mass Index (BMI) and ethnicity influence the clinical, metabolic, and biochemical presentations of PCOS, with an interaction between these factors. Methods: A database of 1081 women with PCOS and a control group of 72 women were analysed. BMIs were grouped using the World Health Organisation classification into normal weight, overweight and obese groups. Ethnicities were classified into European, South Asian, and Afro-Caribbean groups. Biochemical and clinical presentations were compared amongst these groups, and statistical analyses were performed to assess significance. Results: This study revealed ethnicity significantly influences biochemical and clinical presentations of PCOS. A greater proportion of South Asian women are impacted by menstrual cycle disturbances and hirsutism than European and Afro-Caribbean women. South Asian and Afro-Caribbean women show greater measures of insulin resistance and weight gain when compared to their European peers. Women with increased BMI are shown to have an increased prevalence of PCOS phenotypes alongside increased levels of insulin resistance and testosterone. Furthermore, significantly different relationships between the waist-hip ratio and measures of insulin and glucose control for Afro-Caribbean women were identified compared to other ethnic groups. Conclusions: The findings of this study show ethnicity significantly influence the phenotypic and biochemical presentations of PCOS, with an interaction between body habitus and ethnicity found. Furthermore, we provide further data on the influences of BMI on the manifestations of PCOS. Therefore, we highlight the need to consider these factors when reviewing diagnostic criteria and delivering clinical care for these groups.

Keywords: PCOS, ethnicity, BMI, clinical

Procedia PDF Downloads 107
527 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms to calculate the optimal operational planning of decentralized power and heat generators and storage systems. This also includes linear or mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step and the complexity of the problem to be solved by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization in such a way that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps. For example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both procedures with regard to calculation duration and optimality.
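A toy illustration of the aggregation idea, written with PuLP, is shown below: n identical CHP units are replaced by one aggregated unit whose power bounds are scaled by n, so each time step needs one decision variable instead of n, and the aggregated dispatch is disaggregated afterwards; prices and plant data are invented.

```python
# Hedged toy model of the aggregation approach: one scaled variable per time step.
import pulp

n_plants, p_min, p_max = 10, 2.0, 5.0          # identical CHP units (MW each, assumed)
prices = [42.0, 55.0, 61.0, 38.0]               # EUR/MWh per time step (illustrative)

prob = pulp.LpProblem("aggregated_vpp_dispatch", pulp.LpMaximize)
# One aggregated variable per time step instead of n individual ones
p_agg = [pulp.LpVariable(f"p_t{t}", lowBound=n_plants * p_min,
                         upBound=n_plants * p_max) for t in range(len(prices))]
prob += pulp.lpSum(prices[t] * p_agg[t] for t in range(len(prices)))   # revenue
prob.solve(pulp.PULP_CBC_CMD(msg=False))

# Second stage (disaggregation): split the aggregated output across the plants,
# here simply evenly; a small per-plant problem could refine this instead.
per_plant = [p.value() / n_plants for p in p_agg]
print("aggregated dispatch:", [p.value() for p in p_agg])
print("per-plant share:", per_plant)
```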

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 173
526 Innovation in "Low-Tech" Industries: Portuguese Footwear Industry

Authors: Antonio Marques, Graça Guedes

Abstract:

The Portuguese footwear industry has had, in the last five years, a remarkable performance in export values, the trade balance and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 until 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international footwear players. Only the Italian industry sells footwear with a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process, as suggested by Freeman and Soete, and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, and the strategy for data collection was the case study. The qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences between all of them, and these differences are related to the innovation strategy adopted. The companies focused on product and marketing innovation, oriented to their target market, have higher “turnover per worker” ratios than the companies focused on process innovation. However, all the footwear companies in this “low-tech” industry create value and contribute to a positive foreign trade of 1.310 million euros in 2013. The growth strategies implemented involve the participation of the sectoral organizations in several innovative projects. It is also evident that cooperation between all of them is a critical element in the performance achieved by the companies and the innovation observed. We can conclude that the Portuguese footwear sector has had, in recent years, an excellent performance (economic results, export values, trade balance, brands and international image) and that this performance is strongly related to the innovation strategy followed, the type of innovation and the networks in the cluster. A simplified model, called the “Ace of Diamonds”, is proposed by the authors and explains how this performance was reached by the seven companies that participated in the study (two of them are the leaders in the sector), and whether this model can be used in other traditional and “low-tech” industries.

Keywords: footwear, innovation, “low-tech” industry, Oslo manual

Procedia PDF Downloads 376
525 History of Pediatric Renal Pathology

Authors: Mostafa Elbaba

Abstract:

Because childhood renal diseases differ greatly from adult diseases, pediatric nephrology was founded as a specialty in 1965. Renal pathology as a specialty was introduced at the London Ciba Symposium in 1961. The history of renal pathology can be divided into two eras: the first starting in the 1650s with the invention of the microscope, and the second in the 1950s with the implementation of renal biopsy and the advent of electron microscopy and immunofluorescence studies. Prior to the 1950s, the study of diseased human kidneys was restricted to postmortem examination by gross pathology. In 1827, Richard Bright first described his triad of kidney disease, which was confirmed by morbid kidney changes at autopsy. In 1905, Friedrich Mueller coined the term "nephrosis" to describe the non-inflammatory, degenerative form of kidney disease, and F. Munk later added the term "lipoid nephrosis". The most profound influence on the classification of renal diseases came from the publication of Volhard and Fahr in 1914. In 1899, Carl Max Wilhelm Wilms described Wilms' tumor of the kidneys in children. Chronic pyelonephritis was a popular renal diagnosis and the most common recorded cause of uremia until the 1960s. Although kidney biopsy had been used as early as the 1930s for renal tumors, the earliest reports of its use in the diagnosis of medical kidney disease were by Iversen and Brun in 1951, followed by Alwall in 1952 and Pardo in 1953. The earliest intentional renal biopsies were performed in 1944 by Nils Alwall, but the procedure was abandoned after the death of one of his 13 biopsied patients. In 1950, Antonino Perez-Ara attempted renal biopsies, but his results went unnoticed because they were published in a little-known journal. In 1951, Claus Brun and Poul Iversen developed the biopsy procedure using an aspiration technique. The popularization of renal biopsy is credited to Robert Kark, who published his work in 1954. He perfected the technique of renal biopsy in the prone position using the Vim-Silverman needle and used intravenous pyelography to improve localization of the kidney.

Keywords: history, medicine, nephrology, pediatrics, pathology

Procedia PDF Downloads 58
524 Effect of Mineral Additives on Improving the Geotechnical Properties of Soils in Chlef

Authors: Messaoudi Mohammed Amin

Abstract:

The reduction of available land resources and the increased cost associated with the use of high-quality materials have led to the need for local soils to be used in geotechnical construction. However, the poor engineering properties of these soils pose difficulties for construction projects, and the soils need to be stabilized to improve their properties. In other words, unsuitable soils with low bearing capacity and high plasticity coupled with high instability are frequently encountered; hence, there is a need to improve the physical and mechanical characteristics of these soils to make them more suitable for construction. This can be done using different mechanical and chemical methods. Clayey soil stabilization has been practiced for quite some time by mixing additives, such as cement, lime and fly ash, into the soil to increase its strength. The aim of this project is to study the effect of using lime, natural pozzolana or a combination of both on the geotechnical characteristics of clayey soil. Test specimens were subjected to Atterberg limit tests, compaction tests, shear box tests and unconfined compression tests. Lime or natural pozzolana was added to the clayey soil in the ranges of 0-8% and 0-20%, respectively. In addition, combinations of lime and natural pozzolana were added to the clayey soil in the same ranges. Specimens were cured for 1, 7 and 28 days, after which they were tested in unconfined compression. Based on the experimental results, it was concluded that an important decrease in plasticity index was observed for the samples stabilized with the lime-natural pozzolana combination. In addition, the use of the lime-natural pozzolana combination modifies the clayey soil classification according to the Casagrande plasticity chart. Moreover, based on the favourable shear and compression strength results obtained, it can be concluded that clayey soil can be successfully stabilized by the combined action of lime and natural pozzolana; this combination also showed an appreciable improvement of the shear parameters. Finally, since natural pozzolana is much cheaper than lime, the addition of natural pozzolana to the lime-soil mix may become particularly attractive and can result in lower construction costs.

Keywords: clay, soil stabilization, natural pozzolana, atterberg limits, compaction, compressive strength, shear strength, curing

Procedia PDF Downloads 300
523 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Large amounts of open-source C and C++ code are now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as features, but they require a higher execution time, as the word embedding algorithms add complexity to the overall system.
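
As a rough illustration of the classification stage, the sketch below (not the authors' pipeline; the toy functions, hyperparameters and variable names are assumptions made for this example) tokenizes C function source, embeds the tokens, and trains a small BiLSTM classifier with Keras; the paper itself uses pretrained GloVe or fastText vectors and a far larger corpus.

# Minimal sketch (not the authors' pipeline): classifying C functions as
# vulnerable / not vulnerable with a BiLSTM over learned token embeddings.
# The tiny toy dataset and all hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

# Toy examples: function source text and a label (1 = buffer overflow present)
functions = [
    "void copy(char *src) { char buf[8]; strcpy(buf, src); }",
    "void copy(char *src) { char buf[8]; strncpy(buf, src, sizeof(buf) - 1); }",
]
labels = np.array([1, 0])

# Tokenize the (minimally normalized) source into integer sequences
tokenizer = Tokenizer(filters="", lower=False, split=" ")
tokenizer.fit_on_texts(functions)
X = pad_sequences(tokenizer.texts_to_sequences(functions), maxlen=50)

# BiLSTM classifier; in practice the Embedding layer would be initialized
# from GloVe or fastText vectors rather than trained from scratch.
model = Sequential([
    Embedding(input_dim=len(tokenizer.word_index) + 1, output_dim=64),
    Bidirectional(LSTM(32)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, verbose=0)

print(model.predict(X))   # predicted probability that each function is vulnerable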

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 86
522 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach

Authors: Nwachukwu Ifeanyi

Abstract:

Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
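
As a purely illustrative sketch of the multi-objective selection step described above (not part of the study; both surrogate models and all parameter ranges are invented placeholders standing in for CFD and structural simulations), the following code sweeps two blade design parameters and extracts the set of non-dominated (Pareto-optimal) designs that trade aerodynamic efficiency against a fatigue-load metric.

# Minimal sketch: Pareto-front extraction over a parametric blade sweep.
# The "models" below are made-up analytic placeholders for CFD and
# structural simulations; parameter names and ranges are assumptions.
import numpy as np

rng = np.random.default_rng(0)
twist = rng.uniform(0.0, 15.0, 200)    # candidate twist angles [deg]
chord = rng.uniform(0.5, 3.0, 200)     # candidate chord lengths [m]

# Placeholder surrogates for lift-to-drag ratio (maximize) and fatigue load (minimize)
lift_to_drag = 40 - 0.2 * (twist - 8.0) ** 2 + 2.0 * chord
fatigue_load = 1.0 + 0.5 * chord ** 2 + 0.05 * twist

def pareto_mask(maximize, minimize):
    """Return True for designs not dominated by any other design."""
    n = len(maximize)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        at_least_as_good = (maximize >= maximize[i]) & (minimize <= minimize[i])
        at_least_as_good[i] = False
        strictly_better = (maximize > maximize[i]) | (minimize < minimize[i])
        if np.any(at_least_as_good & strictly_better):
            mask[i] = False
    return mask

front = pareto_mask(lift_to_drag, fatigue_load)
print(f"{front.sum()} non-dominated designs out of {len(twist)}")
for t, c in zip(twist[front][:5], chord[front][:5]):
    print(f"twist = {t:5.2f} deg, chord = {c:4.2f} m")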

Keywords: computation, robotics, mathematics, simulation

Procedia PDF Downloads 56
521 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25 and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
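
A minimal sketch of the random reduction step is given below (not the authors' workflow; the synthetic terrain is an illustrative stand-in for the UAV point cloud, and scipy's linear griddata interpolation replaces the Kriging interpolation used in the study). It thins a point set to 75, 50, 25 and 5% of its points and reports the RMSE of each reduced-data DTM against the full-density DTM.

# Minimal sketch (illustrative assumptions throughout): random point cloud
# reduction and comparison of the interpolated DTMs against the full data set.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)
N = 20_000
x = rng.uniform(0, 1000, N)
y = rng.uniform(0, 1000, N)
z = 50 + 10 * np.sin(x / 150) + 8 * np.cos(y / 200) + rng.normal(0, 0.2, N)  # synthetic terrain

# Regular DTM grid and the reference surface from the full (100%) data set
gx, gy = np.meshgrid(np.linspace(0, 1000, 200), np.linspace(0, 1000, 200))
dtm_full = griddata((x, y), z, (gx, gy), method="linear")

for fraction in (0.75, 0.50, 0.25, 0.05):
    keep = rng.choice(N, size=int(fraction * N), replace=False)   # random reduction
    dtm_sub = griddata((x[keep], y[keep]), z[keep], (gx, gy), method="linear")
    rmse = np.sqrt(np.nanmean((dtm_sub - dtm_full) ** 2))
    print(f"{fraction:4.0%} of points -> RMSE vs full DTM: {rmse:.3f} m")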

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 150