Search results for: precision molding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1038

258 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites

Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira

Abstract:

The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Because they promise improved performance and weight reduction, thermoplastic nanocomposites are a promising new class of materials. These nanocomposites are of particular relevance for the automotive industry, where the CO2 emission limits imposed by European Commission (EC) regulations can be met without compromising a car's performance, by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, representing more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loadings. In comparison to other composites, such as fiber-based composites, the weight reduction they allow can positively affect processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, high interfacial strength between GBM and the polymer matrix is required for efficient stress transfer from GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and the GnPs: PP grafted with maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggest that the presence of GnPs can increase the mechanical strength of the polymer. However, the presence of GnPs was also found to reduce impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increased the impact resistance, suggesting that the compatibilizers enhance the adhesion between PP and GnPs. The increase in Young's modulus of the non-compatibilized nanocomposite relative to neat PP demonstrated that GnP incorporation improves the stiffness of the polymer. This trend can be related to the numerous physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young's modulus, confirms that GnP incorporation leads to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of compatibilizers did not affect Young's modulus and strain at yield compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers improved the nanocomposites' mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as a reference.

Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites

Procedia PDF Downloads 163
257 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, a scale effect arises during multi-scale LAI estimation. In this article, a typical farmland in a semi-arid region of Inner Mongolia, China, is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multidimensional Look-Up Table (LUT) is generated for LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on a Taylor expansion method and a computational geometry model, a scale transfer model considering both inter-class and intra-class differences is constructed for analysing the scale effect of LAI inversion over inhomogeneous surfaces. The results indicate that: (1) the LUT method based on classification and parameter sensitivity analysis is useful for LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with R² of 0.82 and root mean square error (RMSE) of 0.43 m²/m²; (2) the scale effect on LAI becomes more pronounced as image resolution decreases, with a maximum scale bias of more than 45%; (3) the inter-class scale effect is larger than the intra-class effect and can be corrected efficiently by the scale transfer model based on the Taylor expansion and computational geometry; after correction, the maximum scale bias is reduced to 1.2%.
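
As a rough illustration of the LUT inversion and scale-effect ideas described above, the Python sketch below matches an observed spectrum against a pre-computed look-up table and compares fine-scale versus aggregated retrievals. The toy exponential spectra, band count, and k-best averaging are assumptions made for the sketch; the paper's actual LUT would come from PROSPECT+SAIL simulations.

```python
import numpy as np

# Minimal sketch of a look-up-table (LUT) inversion for LAI, assuming the LUT
# reflectances were pre-computed elsewhere (e.g., from a PROSPECT+SAIL run).
rng = np.random.default_rng(0)
n_entries, n_bands = 5000, 50

lut_lai = rng.uniform(0.0, 6.0, n_entries)                       # candidate LAI values
lut_refl = np.exp(-0.4 * lut_lai)[:, None] + 0.02 * rng.normal(size=(n_entries, n_bands))  # toy spectra

def invert_lai(observed_refl, lut_refl, lut_lai, k=20):
    """Return LAI as the mean of the k best-matching LUT entries (RMSE cost)."""
    cost = np.sqrt(np.mean((lut_refl - observed_refl) ** 2, axis=1))
    best = np.argsort(cost)[:k]
    return lut_lai[best].mean()

# Scale-effect illustration: invert fine-resolution pixels, then compare the mean
# of the fine-scale LAI with the LAI inverted from the spatially averaged spectrum.
fine_pixels = lut_refl[rng.choice(n_entries, size=9)]             # a 3x3 block of "pixels"
lai_fine = np.array([invert_lai(p, lut_refl, lut_lai) for p in fine_pixels])
lai_coarse = invert_lai(fine_pixels.mean(axis=0), lut_refl, lut_lai)
scale_bias = 100 * (lai_coarse - lai_fine.mean()) / lai_fine.mean()
print(f"mean fine LAI {lai_fine.mean():.2f}, coarse LAI {lai_coarse:.2f}, bias {scale_bias:.1f}%")
```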

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 422
256 Electrical Conductivity as Pedotransfer Function in the Determination of Sodium Adsorption Ratio in Soil System in Managing Micro Level Farming Practices in India: An Effective Low Cost Technology

Authors: Usha Loganathan, Haresh Pandya

Abstract:

Analysis and correlation of soil properties represent an important starting point for precision agriculture and are currently promoted and implemented in the developed world. Establishing relationships among indices of soil salinity has always been a challenging task in salt-affected soils, which require unique approaches for their reclamation and management to sustain the long-term productivity of the soil. Soil salinity indices such as Electrical Conductivity (EC) and Sodium Adsorption Ratio (SAR) are normally used to characterize soils as either sodic or saline-sodic. Currently, determination of the soil sodium adsorption ratio is the more accepted and reliable measure of soil salinity. However, it involves arduous and protracted laboratory investigations, which calls for new and economical methods to determine SAR from a simple soil salinity index. A linear regression model to predict soil SAR from soil electrical conductivity has been developed and is presented in this paper, whereby soil SAR can be estimated as a pedotransfer function of soil EC. The present study was carried out in Orathupalayam (11.09-11.11 N latitude and 74.54-77.59 E longitude) in the vicinity of the Orathupalayam Reservoir of the Noyyal River Basin, India, over a period of 3 consecutive years from September 2013 through February 2016, in different locations chosen randomly across different seasons. The research findings are discussed in the light of micro-level farming practices in India and recommend this determination of SAR as a low-cost technology aiding the effective management of salt-affected agricultural land.
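
A minimal sketch of the pedotransfer idea follows, fitting SAR as a linear function of EC by ordinary least squares; the EC and SAR values and units below are invented placeholders, not the study's field data.

```python
import numpy as np

# Illustrative pedotransfer fit: predict SAR from EC with a simple linear regression.
ec = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.5, 7.8])      # EC, assumed dS/m (placeholder values)
sar = np.array([2.1, 3.4, 5.0, 6.2, 8.1, 10.3, 12.9, 15.2])  # SAR, (mmol/L)^0.5 (placeholder values)

slope, intercept = np.polyfit(ec, sar, deg=1)                 # ordinary least squares fit
predicted = slope * ec + intercept
r2 = 1 - np.sum((sar - predicted) ** 2) / np.sum((sar - sar.mean()) ** 2)

print(f"SAR = {slope:.2f} * EC + {intercept:.2f}  (R^2 = {r2:.3f})")
print("predicted SAR for EC = 4.5 dS/m:", round(slope * 4.5 + intercept, 2))
```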

Keywords: electrical conductivity, Orathupalayam, pedotransfer function, sodium adsorption ratio

Procedia PDF Downloads 231
255 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network

Authors: Harshit Mittal, Neeraj Garg

Abstract:

Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper discusses an approach that integrates the Canny edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment described, the data were collected manually by the authors from a webcam using Python scripts; to enlarge the dataset, augmentation was applied to the original images, which makes the model more robust. Further, the dataset of about 6000 colour images, distributed equally across 5 classes (i.e., 1, 2, 3, 4, 5), is pre-processed first into grayscale images and then with the Canny edge algorithm, with thresholds 1 and 2 both set to 150. After the dataset is built, the data are used to train a convolutional neural network model, giving an accuracy of 0.97834, precision of 0.97841, recall of 0.9783, and F1 score of 0.97832. For end users, a Python program opens a window for live hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and convolutional neural networks, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
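
The preprocessing and model stages described above can be sketched as follows; the image size, network architecture, and training call are illustrative assumptions, not the authors' exact configuration.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

def preprocess(bgr_image):
    """Grayscale conversion followed by Canny edges with both thresholds at 150,
    matching the preprocessing described in the abstract; 128x128 is an assumed size."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.resize(cv2.Canny(gray, 150, 150), (128, 128))
    return edges.astype("float32")[..., None] / 255.0          # add channel axis, scale to [0, 1]

# A small CNN for the 5 hand-symbol classes (1-5); the layer sizes are illustrative.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# x_train / y_train would hold the Canny-edge images and labels 0-4 collected from
# the webcam; they are assumed to exist here.
# model.fit(x_train, y_train, epochs=10, validation_split=0.2)
```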

Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network

Procedia PDF Downloads 34
254 Management Effects on Different Sustainable Agricultural with Diverse Topography

Authors: Kusay Wheib, Alexandra Kravchenko

Abstract:

Crop yields are influenced by many factors, including natural ones, such as soil and environmental characteristics of the agricultural land, as well as manmade ones, such as management applications. One of the factors that frequently affects crop yields in undulating Midwest landscapes is topography, which controls the movement of water and nutrients necessary for plant life. The main objective of this study is to examine how field topography influences the performance of different management practices in the undulating terrain of southwest Michigan. A total of 26 agricultural fields, ranging in size from 1.1 to 7.4 ha, from the Scale-Up at Kellogg Biological Station were included in the study. The two studied factors were crop species, with three levels, i.e., corn (Zea mays L.), soybean (Glycine max L.), and wheat (Triticum aestivum L.), and management practice, with three levels, i.e., conventional, low-input, and organic management. They were compared under three contrasting topographical settings, namely, summit (including summits and shoulders), slope (including backslopes), and depression (including footslopes and toeslopes). Yield data for the years 2007 through 2012 were processed, cleaned, and filtered, and the average yield was then calculated for each field, topographic setting, and year. Topographic parameters, including terrain, slope, curvature, flow direction, and wetness index, were computed in an ArcGIS environment for each topographic class of each field to assess their effects on yield. Results showed that topographical depressions produced the greatest yields in most of the studied fields, while managements with chemical inputs, both low-input and conventional, resulted in higher yields than organic management.

Keywords: sustainable agriculture, precision agriculture, topography, yield

Procedia PDF Downloads 92
253 Method Development for the Determination of Gamma-Aminobutyric Acid in Rice Products by LC-MS-MS

Authors: Cher Rong Matthew Kong, Edmund Tian, Seng Poon Ong, Chee Sian Gan

Abstract:

Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is a functional constituent of certain rice varieties. When consumed, it decreases blood pressure and reduces the risk of hypertension-related diseases. This has led to more research dedicated to the development of functional food products (e.g. germinated brown rice) with enhanced GABA content, and the development of these products has in turn increased the demand for instrument-based methods that can efficiently and effectively determine GABA content. Current analytical methods require analyte derivatisation and have significant disadvantages: they are labour-intensive and time-consuming and are subject to analyte loss due to the increased complexity of the sample preparation process. To address this, an LC-MS-MS method for the determination of GABA in rice products has been developed and validated. The developed method involves a relatively simple sample preparation process before analysis using HILIC LC-MS-MS. This method eliminates the need for derivatisation, thereby significantly reducing the labour and time associated with such an analysis. Using LC-MS-MS also allows for better differentiation of GABA from any potential co-eluting compounds in the sample matrix. Results obtained from the developed method demonstrated high linearity, accuracy, and precision for the determination of GABA (1 ng/L to 8 ng/L) in a variety of brown rice products. The method can significantly simplify sample preparation steps, improve the accuracy of quantitation, and increase the throughput of analyses, thereby providing a quick but effective alternative to established instrumental analysis methods for GABA in rice.

Keywords: functional food, gamma-aminobutyric acid, germinated brown rice, method development

Procedia PDF Downloads 234
252 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms, namely Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest, were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics minimum_false_positives, brier_score, accuracy, precision, recall, F1_score, and the Receiver Operating Characteristic (ROC) curve. Data analysis showed Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27 with respect to accuracy (in percent) and brier_score. The Naive Bayes algorithm outperforms the others, with a very low false positive rate and brier_score as well as good accuracy. The Naive Bayes classification results for predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public, so that mortality from gastric cancer can be reduced or avoided with this knowledge-mining work.
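
A hedged sketch of the classifier comparison described above is shown below, using scikit-learn and reporting accuracy and Brier score; the simulated feature matrix merely stands in for the 11 diet and lifestyle risk factors, which are not public here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

# Hypothetical stand-in for 240 subjects (160 controls, 80 cases) with 11 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(240, 11))
y = np.r_[np.zeros(160, dtype=int), np.ones(80, dtype=int)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]                     # positive-class probabilities
    print(f"{name}: accuracy={accuracy_score(y_te, clf.predict(X_te)):.2f}, "
          f"brier={brier_score_loss(y_te, proba):.2f}")
```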

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 125
251 The Impact of Artificial Intelligence on Pharmacy and Pharmacology

Authors: Mamdouh Milad Adly Morkos

Abstract:

Despite having the highest rates of mortality and morbidity in the world, low- and middle-income (LMIC) nations trail high-income nations in terms of the number of clinical trials, the number of qualified researchers, and the amount of research information specific to their populations. Efforts to address health inequities and to apply precision medicine may be hampered by a lack of local genomic data, clinical pharmacology and pharmacometrics competence, and training opportunities. These issues can be addressed through health care infrastructure development, which includes data gathering and well-designed clinical pharmacology training in LMICs. International cooperation aimed at enhancing education and infrastructure and promoting locally motivated clinical trials and research will also be advantageous. This paper outlines various instances where clinical pharmacology knowledge could be put to use, including pharmacogenomic opportunities that could lead to better clinical guideline recommendations. Examples of how clinical pharmacology training can be successfully implemented in LMICs are also provided, including clinical pharmacology and pharmacometrics training programmes in Africa and a Tanzanian researcher's personal experience while on a training sabbatical in the United States. These training initiatives will profit from advocacy for clinical pharmacologists' employment prospects and career development pathways, which are gradually becoming acknowledged and established in LMICs. The advancement of training and research infrastructure to increase clinical pharmacologists' knowledge in LMICs would be extremely beneficial because they have a significant role to play in global health.

Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory, clinical simulation, education, pharmacology, simulation, virtual learning, low- and middle-income, clinical pharmacology, pharmacometrics, career development pathways

Procedia PDF Downloads 40
250 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test

Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati

Abstract:

Rifampicin (RIF) is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei. RIF has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma and its application to a bioequivalence study. Chromatographic separation was achieved on an RP-C18 column (LaChrom, Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard regression curve was linear (r > 0.999) over a concentration range of 20.0 – 100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed good reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of a rifampicin tablet in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24) and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg/mL.h, and 29.41 ± 4.07 µg/mL.h, respectively. For the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg/mL.h, and 27.49 ± 4.01 µg/mL.h. From the bioequivalence study, the 90% CIs for the test/reference formulation ratio of the logarithmic transformations of Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
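
As a small worked example of the bioequivalence criterion mentioned above, the sketch below computes a 90% confidence interval for the test/reference geometric mean ratio of a log-transformed pharmacokinetic parameter; the paired subject values are invented, and the study's own crossover analysis may differ in detail.

```python
import numpy as np
from scipy import stats

# Illustrative 90% CI for the test/reference ratio of a log-transformed PK parameter
# (e.g. Cmax). The arrays below are invented paired subject values, not the study's data.
cmax_test = np.array([5.1, 6.2, 5.8, 5.5, 6.8, 5.0, 6.1, 5.9])
cmax_ref = np.array([4.8, 5.5, 5.3, 5.2, 5.9, 4.6, 5.4, 5.7])

diff = np.log(cmax_test) - np.log(cmax_ref)            # within-subject log differences
n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)                    # two-sided 90% CI uses the 95th percentile

ci_low = np.exp(diff.mean() - t_crit * se)
ci_high = np.exp(diff.mean() + t_crit * se)
print(f"geometric mean ratio 90% CI: {100 * ci_low:.1f}% - {100 * ci_high:.1f}%")
# Bioequivalence is typically concluded when this CI lies within 80.00-125.00%.
```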

Keywords: validation, HPLC, plasma, bioequivalence

Procedia PDF Downloads 270
249 Appraisal of Trace Elements in Scalp Hair of School Children in Kandal Province, Cambodia

Authors: Alireza Yavar, Sukiman Sarmani, Kok Siong Khoo

Abstract:

Trace element analysis of human hair has the potential to disclose retrospective information about an individual's nutritional status and exposure. The residents of villages in the Kandal province of Cambodia, due to dietary habits, lifestyle, and ecological conditions, are exposed to toxic elements, particularly arsenic (As). The purpose of this research was to evaluate the levels of toxic and essential elements in human scalp hair. Scalp hair samples from 12-17 school children from the three villages of Anglong Romiot (AR), Svay Romiot (SR) and Kampong Kong (KK) in the Kandal province of Cambodia were evaluated using the k0-instrumental neutron activation analysis method (k0-INAA). The samples were irradiated for 6 hours in a Malaysian Nuclear Agency (MNA) research reactor, and afterwards an HPGe detector was utilized to obtain the gamma peaks of radionuclides in the samples. We obtained profiles of 31 elements in human hair in the studied area, namely As, Au, Br, Ca, Ce, Co, Dy, Eu152m, Hg197, Hg203, Ho, Ir, K, La, Lu, Mn, Na, Pa, Pt195m, Pt197, Sb, Sc46, Sc47, Sm, Sn117m, W181, W187, Yb169, Yb175, Zn and Zn69m. The precision of the method was assessed by analysing ERM-DB001 human hair as a certified reference material (CRM), and the experimental results for ERM-DB001 were consistent with the certified values. Since arsenic (As) pollution is the major contamination in the studied area, correlations between the concentration of As and the other elements were determined by Pearson's correlation test; the results may be useful as a database source for toxic and essential elements in the hair of teenage individuals in the studied area.

Keywords: scalp human hair, toxic and essential elements, Kandal province of Cambodia, k₀- instrumental neutron activation method

Procedia PDF Downloads 84
248 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties

Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda

Abstract:

This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy, precision, low acquisition time, and high efficiency in capturing intricate geological features in small to medium size outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were remotely analyzed to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data are then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their appropriate conformity with the actual field data.

Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties

Procedia PDF Downloads 26
247 Infusion Pump Historical Development, Measurement and Parts of Infusion Pump

Authors: Samuel Asrat

Abstract:

Infusion pumps have become indispensable tools in modern healthcare, allowing for precise and controlled delivery of fluids, medications, and nutrients to patients. This paper provides an overview of the historical development, measurement, and parts of infusion pumps. The historical development of infusion pumps can be traced back to the early 1960s when the first rudimentary models were introduced. These early pumps were large, cumbersome, and often unreliable. However, advancements in technology and engineering over the years have led to the development of smaller, more accurate, and user-friendly infusion pumps. Measurement of infusion pumps involves assessing various parameters such as flow rate, volume delivered, and infusion duration. Flow rate, typically measured in milliliters per hour (mL/hr), is a critical parameter that determines the rate at which fluids or medications are delivered to the patient. Accurate measurement of flow rate is essential to ensure the proper administration of therapy and prevent adverse effects. Infusion pumps consist of several key parts, including the pump mechanism, fluid reservoir, tubing, and control interface. The pump mechanism is responsible for generating the necessary pressure to push fluids through the tubing and into the patient's bloodstream. The fluid reservoir holds the medication or solution to be infused, while the tubing serves as the conduit through which the fluid travels from the reservoir to the patient. The control interface allows healthcare providers to program and adjust the infusion parameters, such as flow rate and volume. In conclusion, infusion pumps have evolved significantly since their inception, offering healthcare providers unprecedented control and precision in delivering fluids and medications to patients. Understanding the historical development, measurement, and parts of infusion pumps is essential for ensuring their safe and effective use in clinical practice.

Keywords: dip, ip, sp, is

Procedia PDF Downloads 31
246 Hands-off Parking: Deep Learning Gesture-based System for Individuals with Mobility Needs

Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Joshue Perez, Javier Araluce

Abstract:

Nowadays, individuals with mobility needs face a significant challenge when parking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for an improperly placed vehicle. To address this issue, the following paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracking body markers, with a special emphasis on hand gestures. Hand detection is performed by generating 21 reference points for each hand. Subsequently, after data capture, the project employs a multilayer perceptron (MLP) for in-depth gesture classification. This tandem of MediaPipe's extraction prowess and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on an in-house dataset. To prove domain adaptation, a framework based on the Robot Operating System (ROS) as the communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system is transitioned to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
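
A minimal sketch of the landmark-to-command pipeline described above: MediaPipe Hands provides 21 landmarks per hand, and a multilayer perceptron maps the flattened coordinates to a command. The command labels, network size, and placeholder training data are assumptions made for illustration.

```python
import cv2
import numpy as np
import mediapipe as mp
from sklearn.neural_network import MLPClassifier

# MediaPipe Hands yields 21 (x, y, z) landmarks per hand -> a 63-value feature vector.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

# Placeholder training so the sketch runs end-to-end; the real classifier would be
# trained on the in-house gesture dataset with meaningful labels.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
X_demo = np.random.rand(30, 63)
y_demo = np.random.choice(["FORWARD", "REVERSE", "STOP"], 30)   # hypothetical command labels
clf.fit(X_demo, y_demo)

def landmarks_from_frame(bgr_frame):
    """Return a (63,) feature vector, or None if no hand is detected."""
    result = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

def frame_to_command(bgr_frame):
    """Map one camera frame to a parking command string."""
    features = landmarks_from_frame(bgr_frame)
    if features is None:
        return "STOP"                                           # fail-safe when no hand is visible
    return clf.predict(features.reshape(1, -1))[0]
```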

Keywords: gesture detection, mediapipe, multilayer perceptron, robot operating system

Procedia PDF Downloads 62
245 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

Scattered or regularly distributed data can be interpolated with either approximate or exact methods; some of these methods are suited to interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize, and compare smoothing in the distribution patterns of tuberculosis (TB) in the Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns by 3-D curve-fitting techniques, especially biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are generally represented as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. This smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become a conventional method because of its high precision, simplicity, and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012 – 2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, and this is consistent with some spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be assessed at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
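
The study fits biharmonic splines in MATLAB; as a rough Python analogue of the same idea, the sketch below fits a smoothed thin-plate-spline surface to scattered XYZ triplets and evaluates it both on a regular grid and at an arbitrary point. The coordinates and "TB count" values are invented placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fit a noise-suppressing spline surface to scattered (X, Y, Z) triplets, where Z
# would be TB counts at spatial coordinates (values invented here for the sketch).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(40, 2))                                        # spatial coordinates
z = 50 + 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(scale=5, size=40)      # noisy "TB counts"

# smoothing > 0 gives a least-squares (noise-suppressing) fit rather than exact interpolation
surface = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=10.0)

# Evaluate on a regular grid for surface/contour maps, and at an arbitrary point,
# which is the flexibility highlighted in the abstract.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_z = surface(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("fitted value at (37.5, 62.1):", surface([[37.5, 62.1]])[0])
```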

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 217
244 Microwave-Assisted 3D Porous Graphene for Its Multi-Functionalities

Authors: Jung-Hwan Oh, Rajesh Kumar, Il-Kwon Oh

Abstract:

Porous graphene has extensive potential applications in a variety of fields such as hydrogen storage, CO oxidation, gas separation, supercapacitors, fuel cells, nanoelectronics, oil adsorption, and so on. However, the generation of carbon atom vacancies to form precise small holes has not been extensively studied as a way to prevent the agglomeration of graphene sheets and to obtain porous graphene with a high surface area. Recently, many research efforts have been devoted to developing physical and chemical synthetic approaches for porous graphene, but the physical methods have a very high manufacturing cost and the chemical methods consume many hours to produce porous graphene. Herein, we propose a porous graphene containing holes with atomic-scale precision, obtained by embedding metal nanoparticles through microwave irradiation, for hydrogen storage and CO oxidation multi-functionalities. This proposed synthetic method is appropriate for fast and convenient production of three-dimensional nanostructures, which have nanoholes on the graphene surface as a consequence of microwave irradiation. The metal nanoparticles are dispersed quickly on the graphene surface and generate uniform nanoholes on the graphene nanosheets. The morphology and structure of the porous graphene were examined by scanning electron microscopy (SEM), transmission electron microscopy (TEM) and Raman spectroscopy, respectively. The metal nanoparticle-embedded porous graphene exhibits a microporous volume of 2.586 cm³ g⁻¹ with an average pore radius of 0.75 nm. HR-TEM analysis was carried out to further characterize the microstructures. By investigating the Raman spectra, we can understand the structural changes of the graphene. The results of this work demonstrate the possibility of producing a new class of porous graphene. Furthermore, the newly acquired knowledge of diffusion into graphene can provide useful guidance for the development of nanostructure growth.

Keywords: CO oxidation, hydrogen storage, nanocomposites, porous graphene

Procedia PDF Downloads 352
243 Exploring the Use of Drones for Corn Borer Management: A Case Study in Central Italy

Authors: Luana Centorame, Alessio Ilari, Marco Giustozzi, Ester Foppa Pedretti

Abstract:

Maize is one of the most important agricultural cash crops in the world, involving three different chains: food, feed, and bioenergy production. Nowadays, the European corn borer (ECB), Ostrinia nubilalis, is, to the best of the authors' knowledge, the most important pest for maize growers to control. The ECB is harmful to maize; young larvae are responsible for minor damage to the leaves, while the most serious damage is tunneling by older larvae that burrow into the stalk. Soon after, larvae can affect the cobs, and it has been found that ECB can foster mycotoxin contamination; this is why it is crucial to control it. There are multiple control methods available: agronomic, biological, and microbiological means, agrochemicals, and genetically modified plants. Meanwhile, the European Union's policy focuses on the transition to sustainable supply chains, which translates into the goal of reducing the use of agrochemicals by 50%. The current work aims to compare agrochemical treatment of ECB with biological control through beneficial insects released by drones. The methodology includes field trials of both chemical and biological control, considering a farm in central Italy as a case study. To assess the mechanical and technical efficacy of drones with respect to standard machinery, the available literature was consulted. The findings are positive because drones allow operators to get into the field promptly, in difficult conditions, and at lower cost compared to traditional techniques. At the same time, it is important to consider the limits of drones regarding pilot certification, no-fly zones, etc. In the future, it will be necessary to deepen the topic with real field applications of both systems, expanding the scenarios in which drones can be used and the type of material distributed.

Keywords: beneficial insects, corn borer management, drones, precision agriculture

Procedia PDF Downloads 73
242 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), applied to the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, which involved selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to discover the most effective one with respect to the confusion matrix, accuracy, and precision. The CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
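
A hedged sketch of the classical-ML part of such a comparison on the Wisconsin Diagnostic dataset (bundled with scikit-learn) follows; the CNN branch and XGBoost are omitted to keep the sketch dependency-light, and default hyperparameters are used, so the numbers will not match the paper's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

# Breast Cancer Wisconsin Diagnostic dataset, shipped with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, y_pred), 4),
          "precision:", round(precision_score(y_te, y_pred), 4))
    print(confusion_matrix(y_te, y_pred))                      # rows: true class, cols: predicted
```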

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 46
241 Development of an Interactive and Robust Image Analysis and Diagnostic Tool in R for Early Detection of Cervical Cancer

Authors: Kumar Dron Shrivastav, Ankan Mukherjee Das, Arti Taneja, Harpreet Singh, Priya Ranjan, Rajiv Janardhanan

Abstract:

Cervical cancer is one of the most common cancers among women worldwide, and it can be cured if detected early. Manual pathology, which is typically utilized at present, has many limitations. The current gold standard for cervical cancer diagnosis is exhaustive and time-consuming because it relies heavily on the subjective knowledge of oncopathologists, which leads to misdiagnosis and missed diagnoses, resulting in false negatives and false positives. To reduce the time and complexity associated with early diagnosis, we require an interactive diagnostic tool for early detection, particularly in developing countries where cervical cancer incidence and related mortality are high. Incorporation of digital pathology in place of manual pathology for cervical cancer screening and diagnosis can increase precision and strongly reduce the chances of error in a time-specific manner. Thus, we propose a robust and interactive cervical cancer image analysis and diagnostic tool, which can process both histopathological and cytopathological images to identify abnormal cells in the least amount of time and in settings with minimum resources. Furthermore, the incorporation of a set of specific parameters that are typically referred to for the identification of abnormal cells, with the help of the open-source software R, is one of the major highlights of the tool. The software has the ability to automatically identify and quantify morphological features, color intensity, sensitivity, and other parameters digitally to differentiate abnormal from normal cells, which may improve and accelerate screening and early diagnosis, ultimately leading to timely treatment of cervical cancer.

Keywords: cervical cancer, early detection, digital pathology, screening

Procedia PDF Downloads 145
240 Use of Real Time Ultrasound for the Prediction of Carcass Composition in Serrana Goats

Authors: Antonio Monteiro, Jorge Azevedo, Severiano Silva, Alfredo Teixeira

Abstract:

The objective of this study was to compare carcass and in vivo real-time ultrasound (RTU) measurements and their capacity to predict the composition of Serrana goats up to 40% of maturity. Twenty-one females (11.1 ± 3.97 kg) and twenty-one males (15.6 ± 5.38 kg) were used to make in vivo measurements with a 5 MHz probe (ALOKA 500V scanner) at the 9th-10th and 10th-11th thoracic vertebrae (uT910 and uT1011, respectively), at the 1st-2nd, 3rd-4th, and 4th-5th lumbar vertebrae (uL12, uL34 and uL45, respectively), and also at the 3rd-4th sternebrae (EEST). Images were recorded of the RTU measurements of the Longissimus thoracis et lumborum (LTL) muscle depth (EM), width (LM), perimeter (PM), and area (AM), of the subcutaneous fat thickness (SFD) above the LTL, and of the depth of tissues of the sternum (EEST) between the 3rd-4th sternebrae. All RTU images were analyzed using the ImageJ software. After slaughter, the carcasses were stored at 4 ºC for 24 h. After this period, the carcasses were divided, and the left half was entirely dissected into muscle, dissected fat (subcutaneous fat plus intermuscular fat), and bone. Prior to dissection, measurements equivalent to those obtained in vivo with RTU were recorded. Using Statistica 5, correlation and regression analyses were performed. The prediction of carcass composition was achieved by a stepwise regression procedure, with live weight and RTU measurements with and without transformation of the variables to the same dimension. The RTU and carcass measurements, except for the SFD measurements, showed high correlation (r > 0.60, P < 0.001). The RTU measurements and live weight showed the ability to predict carcass composition for muscle (R2 = 0.99, P < 0.001), subcutaneous fat (R2 = 0.41, P < 0.001), intermuscular fat (R2 = 0.84, P < 0.001), dissected fat (R2 = 0.71, P < 0.001), and bone (R2 = 0.94, P < 0.001). The transformation of the variables allowed a slight increase in precision, but at the cost of an increase in the number of variables, except for the subcutaneous fat prediction. In vivo measurements by RTU can be applied to predict kid goat carcass composition from 5 RTU measurements and the live weight.

Keywords: carcass, goats, real time, ultrasound

Procedia PDF Downloads 238
239 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy

Authors: J. Alvarez, J. Pimienta, R. Sarmiento

Abstract:

Diode lasers operating in free-running mode present frequency shifts and line broadening determined by external factors such as temperature, current, or mechanical vibrations, and are therefore of limited use in applications such as spectroscopy, metrology, and the cooling of atoms, among others. Different configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and use optical feedback, either with the help of a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows coarse adjustment of the working wavelength within a wide range, typically ~10 nm, by slightly varying the angle of the diffraction grating. Two settings are commonly used for this purpose: the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction, and characterization of a compact extended laser cavity in the Littrow configuration. The designed cavity is compact and was machined from an aluminum block using computer numerical control (CNC); it has a mass of only 380 g. The design was tested on laser diodes with different wavelengths, 650 nm, 780 nm, and 795 nm, but can be equally efficient at other wavelengths. This report details the results obtained from the extended cavity working at a wavelength of 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding rubidium D2 line. By modulating the current and with the help of phase detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated, which allowed the stabilization of the laser to a transition of the hyperfine structure of rubidium with a proportional-integral (PI) controller circuit made with precision operational amplifiers.

Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure

Procedia PDF Downloads 193
238 Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image

Authors: Nur Nadhirah Rusyda Rosnan, Nursuhaili Najwa Masrol, Nurul Fatiha MD Nor, Mohammad Zafrullah Mohammad Salim, Sim Choon Cheak

Abstract:

Accurate inventories of oil palm planted areas are crucial for plantation management, as they impact the overall economy and production of oil. One of the technological advancements in the oil palm industry is semi-automated palm counting, which is replacing conventional manual palm counting via digitized aerial imagery. Most of the semi-automated palm counting methods that have been developed were limited to mature palms, whose canopy size is well represented in satellite images. Immature palms were therefore often left out, since their canopies are barely visible in satellite images. In this paper, an approach using a morphological filter and high-resolution satellite imagery is proposed to detect immature palm trees, making it possible to count the number of immature oil palm trees. The method begins by applying an erosion filter with an appropriate window size of 3 m to the high-resolution satellite image. The eroded image is further segmented using watershed segmentation to delineate immature palm tree regions. Local minimum detection is then used, because it is hypothesized that immature oil palm trees are located at local minima of the grayscale image within an oil palm field setting. The detection points generated from the local minima are displaced to the center of the immature oil palm region and thinned, so that only one detection point is left to represent a tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. The proposed method was able to achieve more than 90% accuracy when compared with the ground truth, with an overall F-measure score of up to 0.91.
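
The erosion, watershed, and local-minimum steps described above can be sketched with scikit-image as follows; the random stand-in image, pixel size, and window size are assumptions, and the real workflow would load high-resolution satellite imagery instead.

```python
import numpy as np
from skimage.morphology import erosion
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from scipy import ndimage as ndi

# Stand-in grayscale scene; real imagery would be loaded here.
rng = np.random.default_rng(0)
gray = rng.uniform(0.3, 0.7, size=(200, 200)).astype("float32")
pixel_size_m = 0.5                                              # assumed ground sampling distance
win = max(3, int(round(3.0 / pixel_size_m)))                    # erosion window covering ~3 m

eroded = erosion(gray, np.ones((win, win)))                     # morphological erosion filter

# Local minima of the eroded surface (peaks of its negative) seed the watershed.
minima_coords = peak_local_max(-eroded, min_distance=win)
markers = np.zeros_like(eroded, dtype=int)
markers[tuple(minima_coords.T)] = np.arange(1, len(minima_coords) + 1)
labels = watershed(eroded, markers)                             # one region per candidate palm

# Thin each region to a single detection point at its centroid.
centroids = ndi.center_of_mass(np.ones_like(labels), labels, index=np.unique(labels[labels > 0]))
print(f"{len(centroids)} candidate immature palms detected")
```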

Keywords: immature palm count, oil palm, precision agriculture, remote sensing

Procedia PDF Downloads 41
237 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators

Authors: N. Naz, A. D. Domenico, M. N. Huda

Abstract:

Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated using highly deformable materials, motivated by bioinspired designs. The high dexterity and adaptability to external environments during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator, also known as the SPA, remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in the direct 3D printing of different soft materials can significantly reduce the repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the Soft Pneumatic Actuator (SPA) utilizing both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic wire. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated using a lab oven. The bending behaviour is observed experimentally at different compressor air pressures to ensure uniform bending without any failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used. The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM includes large deformation, contact between surfaces, and gravity influences. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary condition is applied to the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from the FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A comparison study based on performance analysis is done between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D printed SPAs have been used to develop a three-finger soft pneumatic gripper and have been tested for handling delicate objects.

Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator

Procedia PDF Downloads 58
236 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer A. Aljohani

Abstract:

The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants. Recently, the B.1.1.529 variant, labeled omicron, was detected in South Africa. The huge spread of COVID-19 has affected many lives and has placed exceptional pressure on healthcare systems worldwide. Everyday life and the global economy have also been at stake. This research aims to predict COVID-19 disease in its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. The large number of COVID-19 cases has placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease; the dataset comprises symptoms of 5434 patients. This paper also compares several supervised ML techniques within the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This result can help effectively in identifying COVID-19 infection cases.
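
A minimal sketch of the described architecture follows: PCA feature reduction feeding a classifier under 10-fold cross-validation. The simulated 0/1 symptom matrix merely stands in for the UCI dataset, and random forest is shown as one of the compared classifiers; the number of components and trees are assumptions.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate, StratifiedKFold

# Simulated binary symptom indicators as a stand-in for the 5434-patient dataset.
rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(5434, 20)).astype(float)
y = rng.integers(0, 2, size=5434)                               # COVID-19 positive/negative labels

pipeline = Pipeline([
    ("pca", PCA(n_components=10)),                              # feature dimensionality reduction
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(pipeline, X, y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
for metric in ("accuracy", "precision", "recall", "f1", "roc_auc"):
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```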

Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network

Procedia PDF Downloads 66
235 Food Insecurity Assessment, Consumption Pattern and Implications of Integrated Food Security Phase Classification: Evidence from Sudan

Authors: Ahmed A. A. Fadol, Guangji Tong, Wlaa Mohamed

Abstract:

This paper provides a comprehensive analysis of food insecurity in Sudan, focusing on consumption patterns and their implications, employing the Integrated Food Security Phase Classification (IPC) assessment framework. Years of conflict and economic instability have driven large segments of the population in Sudan into crisis levels of acute food insecurity according to the IPC. A substantial number of people are estimated to currently face emergency conditions, with an additional sizeable portion categorized under less severe but still extreme hunger levels. In this study, we explore the multifaceted nature of food insecurity in Sudan, considering its historical, political, economic, and social dimensions. An analysis of consumption patterns and trends was conducted, taking into account cultural influences, dietary shifts, and demographic changes. Furthermore, we employ logistic regression and random forest analysis to identify significant independent variables influencing food security status in Sudan. Random forest clearly outperforms logistic regression in terms of area under the curve (AUC), accuracy, precision, and recall. Forward projections of the IPC for Sudan anticipate that 15 million individuals will face Crisis level (IPC Phase 3) or worse acute food insecurity conditions between October 2023 and February 2024. Of this total, 60% are concentrated in Greater Darfur, Greater Kordofan, and Khartoum State, with Greater Darfur alone representing 29%. These findings emphasize the urgent need for both short-term humanitarian aid and long-term strategies to address Sudan's deepening food insecurity crisis.

Keywords: food insecurity, consumption patterns, logistic regression, random forest analysis

Procedia PDF Downloads 36
234 Emerging Therapeutic Approach with Dandelion Phytochemicals in Breast Cancer Treatment

Authors: Angel Champion, Sadia Kanwal, Rafat Siddiqui

Abstract:

Harnessing phytochemicals from plant sources presents a novel opportunity to prevent or treat malignant diseases, including breast cancer. Chemotherapy lacks precision in targeting cancerous cells while sparing normal cells, and a phytopharmaceutical approach may offer a solution. Dandelion, a common weed, is rich in phytochemicals and provides a safer, more cost-effective alternative with lower toxicity than traditional pharmaceuticals for conditions such as breast cancer. In this study, an in-vitro experiment will be conducted using the ethanol extract of dandelion on triple-negative MDA-231 breast cancer cell lines. The polyphenolic analysis revealed that the dandelion extract, particularly from the root and leaf (both cut and sifted), had the most potent antioxidant properties, and the powdered leaf extract exhibited the strongest antioxidant activity. The extract shows promising effects on cell proliferation and apoptosis in breast cancer cells, highlighting its potential for targeted therapeutic interventions. Standardizing methods for dandelion use is crucial for future clinical applications in cancer treatment. Combining plant-derived compounds with cancer nanotechnology holds potential for effective strategies in battling malignant diseases. Utilizing liposomes as carriers for phytoconstituent anti-cancer agents offers improved solubility and bioavailability, immunoregulatory effects that advance anticancer immune function, and reduced toxicity. This integrated approach of natural products and nanotechnology has significant potential to revolutionize healthcare globally, especially in underserved communities where herbal medicine is prevalent.

Keywords: apoptosis, antioxidant activity, cancer nanotechnology, phytopharmaceutical

Procedia PDF Downloads 27
233 The Impact of Bim Technology on the Whole Process Cost Management of Civil Engineering Projects in Kenya

Authors: Nsimbe Allan

Abstract:

The study examines the impact of Building Information Modeling (BIM) on the cost management of engineering projects, focusing specifically on the Mombasa Port Area Development Project. The objective of this research venture is to determine the mechanisms through which Building Information Modeling (BIM) facilitates stakeholder collaboration, reduces construction-related expenses, and enhances the precision of cost estimation. Furthermore, the study investigates barriers to execution, assesses the impact on the project's transparency, and suggests approaches to maximize resource utilization. The study, selected for its practical significance and intricate nature, conducted a Systematic Literature Review (SLR) using credible databases, including ScienceDirect and IEEE Xplore. To constitute the diverse sample, 69 individuals, including project managers, cost estimators, and BIM administrators, were selected via stratified random sampling. The data were obtained using a mixed-methods approach, which prioritized ethical considerations. SPSS and Microsoft Excel were applied to the analysis. The research emphasizes the crucial role that project managers, architects, and engineers play in the decision-making process (47% of respondents). Furthermore, a significant improvement in cost estimation accuracy was reported by 70% of the participants. It was found that the implementation of BIM resulted in enhanced project visibility, which in turn optimized resource allocation and facilitated the process of budgeting. In brief, the study highlights the positive impacts of Building Information Modeling (BIM) on collaborative decision-making and cost estimation, addresses challenges related to implementation, and provides solutions for the efficient assimilation and understanding of BIM principles.

Keywords: cost management, resource utilization, stakeholder collaboration, project transparency

Procedia PDF Downloads 30
232 Assistive Kitchenware Design for Hemiparetics

Authors: Daniel F. Madrinan-Chiquito

Abstract:

Hemiparesis affects about eight out of ten stroke survivors, causing weakness or the inability to move one side of the body. One-sided weakness can affect the arms, hands, legs, or facial muscles. People with one-sided weakness may have trouble performing everyday activities such as eating, cooking, dressing, and using the bathroom. Rehabilitation treatments, exercises at home, and assistive devices can help with mobility and recovery. Historically, such treatments and devices were developed within the fields of medicine and biomedical engineering. However, innovators outside of the traditional medical device community, such as industrial designers, have recently brought their knowledge and expertise to assistive technologies. Primary and secondary research was conducted in three parts. The primary research collected data by interviewing several occupational therapists currently treating stroke patients and by surveying patients with hemiparesis and hemiplegia. The secondary research collected data through observation and testing of products currently marketed for one-handed users. Modern kitchenware available in the market for people with an acquired brain injury has deficiencies in both aesthetic and functional value. Object design for people with hemiparesis or hemiplegia has not been meaningfully explored. Most cookware is designed for use with two hands and leaves little room for adaptation to the needs of one-handed individuals. This project focuses on the design and development of two kitchenware devices. These devices assist hemiparetics with cooking-related tasks such as holding, grasping, cutting, slicing, chopping, grating, and other essential activities. These intentionally designed objects will improve the quality of life of hemiparetics by enabling greater independence and providing an enhanced ability for precision tasks in a cooking environment.

Keywords: assistive technologies, hemiparetics, industrial design, kitchenware

Procedia PDF Downloads 76
231 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis

Authors: Sevilay Çankaya

Abstract:

Detecting deception in children's interviews is essential for assessing statement veracity. A widely used method for deception detection is inducing cognitive load, the logic underlying the cognitive interview (CI), and its effectiveness with adults is well established. This meta-analysis delves into the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. Additionally, the effect of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The current meta-analysis includes ten effect sizes identified through database searches. For the effect size calculation, Hedges' g was used with a random-effects model in CMA version 2. Heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-0.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participants' characteristics, interview setting, and characteristics of the interviewer were coded as possible moderators to explain this variance. Age was a significant moderator (β = .021, p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participants' characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and characteristics of the interviewer (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): inducing cognitive load increased the total number of events recalled when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes, and inherent participant traits may not significantly alter the relationship. These findings shed light on the need for more focused, age-specific methods when using cognitive load measures. Further studies in this field may improve the precision and dependability of deceit detection in children's interviews.
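
The analysis pools Hedges' g effect sizes under a random-effects model and reports I² heterogeneity (the study used CMA version 2). As a rough illustration of the underlying computation, the Python sketch below derives Hedges' g from per-study summary statistics and pools the estimates with a DerSimonian-Laird random-effects estimator; the input numbers are invented and this is not the authors' analysis pipeline.

    import numpy as np

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Hedges' g and its sampling variance for two independent groups."""
        sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))  # pooled SD
        d = (m1 - m2) / sp                               # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)                  # small-sample correction factor
        v_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        return j * d, j**2 * v_d

    # invented per-study summaries: (mean, sd, n) for cognitive-load vs. control groups
    studies = [((3.1, 1.2, 25), (2.8, 1.1, 25)),
               ((4.0, 1.5, 40), (3.2, 1.4, 42)),
               ((2.6, 0.9, 30), (2.7, 1.0, 28))]
    gv = [hedges_g(*a, *b) for a, b in studies]
    g = np.array([x[0] for x in gv])
    v = np.array([x[1] for x in gv])

    # DerSimonian-Laird random-effects pooling
    w = 1 / v
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed)**2)                     # Cochran's Q
    k = len(g)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = 1 / (v + tau2)
    g_pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100               # I² heterogeneity in percent

    print(f"pooled g = {g_pooled:.3f}, "
          f"95% CI [{g_pooled - 1.96 * se:.3f}, {g_pooled + 1.96 * se:.3f}], I^2 = {i2:.0f}%")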

Keywords: deceit detection, cognitive load, memory recall, children interviews, meta-analysis

Procedia PDF Downloads 35
230 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B. Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified "friend controls" and the other using a random sample of non-drug users (controls) who then identified "friend cases." Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p = 0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: Random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
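
The study assessed model precision with bootstrapped coefficient estimates and predictive performance with ROC curves. The Python sketch below illustrates that workflow on synthetic data, substituting an ordinary (unconditional) logistic regression from scikit-learn for the conditional logistic regression used with the matched design; the variables and data are illustrative only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # synthetic risk-factor data: X holds 3 risk factors, y is 1 for cases and 0 for controls
    n = 300
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=1.5, size=n) > 0).astype(int)

    # bootstrap the coefficients to gauge the precision of the predictive model
    boot_coefs = []
    for _ in range(100):
        idx = rng.integers(0, n, size=n)  # resample rows with replacement
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        boot_coefs.append(model.coef_.ravel())
    boot_coefs = np.array(boot_coefs)
    print("bootstrap SE of each coefficient:", boot_coefs.std(axis=0).round(3))

    # compare predictive properties via the area under the ROC curve
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print("ROC AUC on the fitting data:", round(auc, 3))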

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 471
229 Hyperparameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as fine-tuning hyperparameters that include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
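
The optimization step described above checks all feasible hyperparameter combinations and keeps the best performer. The Keras/TensorFlow sketch below illustrates such an exhaustive search over a small grid (number of filters, learning rate, batch size) on dummy image data; the grid values, input size, and five distress classes are assumptions for illustration, not the authors' exact configuration.

    import itertools
    import numpy as np
    import tensorflow as tf

    # dummy stand-in for the asphalt pavement image dataset: 200 RGB images, 5 classes
    # (transverse cracking, longitudinal cracking, alligator cracking, pothole, intact)
    x = np.random.rand(200, 64, 64, 3).astype("float32")
    y = np.random.randint(0, 5, size=200)

    def build_cnn(n_filters, learning_rate):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(64, 64, 3)),
            tf.keras.layers.Conv2D(n_filters, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(n_filters * 2, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(5, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    # exhaustive search over a small grid, keeping the best validation accuracy
    grid = {"n_filters": [16, 32], "learning_rate": [1e-3, 1e-4], "batch_size": [16, 32]}
    best = (None, -1.0)
    for n_filters, lr, bs in itertools.product(*grid.values()):
        model = build_cnn(n_filters, lr)
        hist = model.fit(x, y, epochs=3, batch_size=bs, validation_split=0.2, verbose=0)
        val_acc = max(hist.history["val_accuracy"])
        if val_acc > best[1]:
            best = ({"n_filters": n_filters, "learning_rate": lr, "batch_size": bs}, val_acc)

    print("best hyperparameters:", best[0], "validation accuracy:", round(best[1], 3))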

Keywords: pavement distress, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 60