Search results for: large organizations
2190 Interactive IoT-Blockchain System for Big Data Processing
Authors: Abdallah Al-ZoubI, Mamoun Dmour
Abstract:
The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. The number of active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and is expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation, and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic reasons such as supply chain interruptions, shortage of shipping containers, and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed on the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4 running Raspbian, with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance, and costs.
Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed, and throughput are met.
Keywords: IoT devices, blockchain, Ethereum, big data
Procedia PDF Downloads 150
2189 Risk Assessment of Trace Metals in the Soil Surface of an Abandoned Mine, El-Abed Northwestern Algeria
Authors: Farida Mellah, Abdelhak Boutaleb, Bachir Henni, Dalila Berdous, Abdelhamid Mellah
Abstract:
Context/Purpose: El Abed, one of the largest lead and zinc mining operations in northwestern Algeria for more than thirty years, has been inactive since 2004, leaving large amounts of accumulated mining waste exposed to wind erosion and rain near agricultural lands. Materials & Methods: This study aims to determine the concentrations and sources of heavy metals in randomly collected surface soil samples. Chemical analyses were performed using an iCAP 7000 Series ICP optical emission spectrometer, together with a set of environmental quality indicators: the enrichment factor (normalized to iron and aluminum references), the geoaccumulation index, and a geographic information system (GIS) for mapping the spatial distribution. Results: The results indicated that the average metal concentrations were As = 30.82, Pb = 1219.27, Zn = 2855.94, and Cu = 5.3 mg/kg; on this basis, all metals except Cu exceeded the geochemical background values of the Earth's crust. Environmental quality indicators were calculated from the concentrations of trace metals such as lead, arsenic, zinc, copper, iron, and aluminum. Interpretation: This study investigated the concentrations and sources of trace metals; using quality indicators and statistical methods, lead, zinc, and arsenic were attributed to anthropogenic sources, while copper was of natural origin. Based on GIS spatial analysis, many hot spots were identified in the El-Abed region. Conclusion: These results could help in the development of future treatment strategies aimed primarily at eliminating materials from mining waste.
Keywords: soil contamination, trace metals, geochemical indices, El Abed mine, Algeria
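The two indices named in the abstract have standard forms: the geoaccumulation index Igeo = log2(C / (1.5 B)) and the enrichment factor normalized to a reference element such as Fe. A minimal sketch, assuming illustrative upper-crust background values (the paper's own reference values are not given in the abstract, so these numbers are placeholders):

```python
import math

# Assumed upper-crust background values (mg/kg); illustrative placeholders,
# not the reference values used in the paper.
BACKGROUND = {"As": 4.8, "Pb": 17.0, "Zn": 67.0, "Cu": 28.0}
FE_BACKGROUND = 39_200.0  # assumed Fe background, mg/kg

def igeo(conc, background):
    """Geoaccumulation index: Igeo = log2(C / (1.5 * B))."""
    return math.log2(conc / (1.5 * background))

def enrichment_factor(conc, background, fe_sample, fe_background=FE_BACKGROUND):
    """Enrichment factor normalized to Fe: (C/Fe)_sample / (C/Fe)_crust."""
    return (conc / fe_sample) / (background / fe_background)

# Mean concentrations reported in the abstract (mg/kg)
means = {"As": 30.82, "Pb": 1219.27, "Zn": 2855.94, "Cu": 5.3}
for metal, c in means.items():
    print(f"{metal}: Igeo = {igeo(c, BACKGROUND[metal]):.2f}")
```

With these placeholder backgrounds, Pb and Zn land in the highest Igeo contamination classes while Cu stays below zero, consistent with the abstract's anthropogenic-versus-natural split.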
Procedia PDF Downloads 71
2188 Short-Term Effects of Extreme Temperatures on Cause Specific Cardiovascular Admissions in Beijing, China
Authors: Deginet Aklilu, Tianqi Wang, Endwoke Amsalu, Wei Feng, Zhiwei Li, Xia Li, Lixin Tao, Yanxia Luo, Moning Guo, Xiangtong Liu, Xiuhua Guo
Abstract:
Extreme temperature-related cardiovascular diseases (CVDs) have become a growing public health concern. However, the impact of temperature on cause-specific CVDs has not been well studied in the study area. The objective of this study was to assess the impact of temperature on cause-specific cardiovascular hospital admissions in Beijing, China. We obtained data from 172 large general hospitals from the Beijing Public Health Information Center Cardiovascular Case Database and the China Meteorological Administration, covering 16 districts in Beijing from 2013 to 2017. We used a time-stratified case-crossover design with a distributed lag non-linear model (DLNM) to estimate the impact of temperature, at lags of up to 27 days, on CVD admissions. The temperature data were stratified as cold (extreme and moderate) and hot (moderate and extreme). Within five years (January 2013-December 2017), a total of 460,938 (male 54.9% and female 45.1%) CVD admission cases were reported. The exposure-response relationship for hospitalization was described by a "J" shape for total and cause-specific admissions. An increase in the six-day moving average temperature from moderately hot (30.2 °C) to extremely hot (36.9 °C) resulted in a significant increase in CVD admissions of 16.1% (95% CI = 12.8%-28.9%). However, the effect of cold temperature exposure on CVD admissions over a lag time of 0-27 days was found to be non-significant, with a relative risk of 0.45 (95% CI = 0.378-0.55) for extreme cold (-8.5 °C) and 0.53 (95% CI = 0.47-0.60) for moderate cold (-5.6 °C). The results of this study indicate that exposure to extremely high temperatures is strongly associated with an increase in cause-specific CVD admissions. These findings may help raise awareness among the general population, government, and private sectors regarding the effects of weather conditions on CVD.
Keywords: admission, Beijing, cardiovascular diseases, distributed lag non-linear model, temperature
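DLNM output is typically reported on the log-relative-risk scale, and the percent changes and confidence intervals quoted above follow from simple exponentiation. A hedged sketch of that conversion (the standard error below is illustrative, not taken from the study):

```python
import math

def rr_from_beta(beta, se, z=1.96):
    """Relative risk and 95% CI from a log-scale coefficient (as in DLNM output)."""
    rr = math.exp(beta)
    return rr, math.exp(beta - z * se), math.exp(beta + z * se)

def percent_change(rr):
    """Percent change in admissions implied by a relative risk."""
    return (rr - 1.0) * 100.0

# The abstract's 16.1% increase for extreme heat corresponds to RR ~1.161
rr, lo, hi = rr_from_beta(math.log(1.161), se=0.03)  # se is illustrative only
print(f"RR = {rr:.3f} -> {percent_change(rr):.1f}% increase")
```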
Procedia PDF Downloads 63
2187 Integrated Decision Support for Energy/Water Planning in Zayandeh Rud River Basin in Iran
Authors: Safieh Javadinejad
Abstract:
In order to make well-informed decisions on long-term system planning, resource managers and policymakers need to understand the interconnections between energy and water use and production, that is, the energy-water nexus. Planning and assessment issues include the development of strategies for reducing the vulnerabilities of water and energy systems to climate change while also cutting greenhouse gas emissions. To deliver useful decision support for climate adaptation policy and planning, it is essential to understand the regionally specific features of the energy-water nexus and both the history and projected future of the water and energy supply systems. This helps decision makers understand current water-energy system conditions and the capacity for future adaptation planning. This research presents an integrated hydrology/energy modeling platform that supports water-energy analyses based on a detailed representation of local conditions. The platform links the Water Evaluation and Planning (WEAP) system and the Long-range Energy Alternatives Planning (LEAP) system to create a full picture of water-energy processes. This allows water managers and policy makers to readily understand the links between energy system improvements and hydrological processes, and to see how future climate change will affect water-energy systems. The Zayandeh Rud river basin in Iran is selected as a case study to show the results and application of the analysis. This region is known for the close integration of its electric power and water sectors. The linkages between water, energy, and climate change and possible adaptation strategies are described, along with early insights from applications of the integrated modeling system.
Keywords: climate impacts, hydrology, water systems, adaptation planning, electricity, integrated modeling
Procedia PDF Downloads 292
2186 Measurement of Radon Exhalation Rate, Natural Radioactivity, and Radiation Hazard Assessment in Soil Samples from the Surrounding Area of Kasimpur Thermal Power Plant Kasimpur (U. P.), India
Authors: Anil Sharma, Ajay Kumar Mahur, R. G. Sonkawade, A. C. Sharma, R. Prasad
Abstract:
In coal-fired thermal power stations, a large amount of fly ash is produced after the burning of coal. Fly ash is spread and distributed in the surrounding area by air and may be deposited on the soil of the region surrounding the power plant. Coal contains elevated levels of natural radionuclides, and fly ash may therefore increase the radioactivity in the soil around the power plant. Radon atoms entering the pore space from the mineral grains are transported by diffusion and advection through this space until they in turn decay or are released into the atmosphere. In the present study, soil samples were collected from the region around the Kasimpur Thermal Power Plant, Kasimpur, Aligarh (U.P.). Radon activity and radon surface and mass exhalation rates were measured by the "sealed can technique" using LR-115 type II nuclear track detectors. Radon activities vary from 92.9 to 556.8 Bq m-3 with a mean value of 279.8 Bq m-3. Surface exhalation rates (EX) in these samples are found to vary from 33.4 to 200.2 mBq m-2 h-1 with an average value of 100.5 mBq m-2 h-1, whereas mass exhalation rates (EM) vary from 1.2 to 7.7 mBq kg-1 h-1 with an average value of 3.8 mBq kg-1 h-1. Activity concentrations of radionuclides were measured in these samples by using a low-level NaI (Tl) based gamma-ray spectrometer. Activity concentrations of 226Ra, 232Th, and 40K vary from 12 to 49 Bq kg-1, 24 to 49 Bq kg-1, and 135 to 546 Bq kg-1, with overall mean values of 30.3 Bq kg-1, 38.5 Bq kg-1, and 317.8 Bq kg-1, respectively. Radium equivalent activity has been found to vary from 80.0 to 143.7 Bq kg-1 with an average value of 109.7 Bq kg-1. The absorbed dose rate varies from 36.1 to 66.4 nGy h-1 with an average value of 50.4 nGy h-1, and the corresponding outdoor annual effective dose varies from 0.044 to 0.081 mSv with an average value of 0.061 mSv. Values of the external and internal hazard indices Hex and Hin in this study vary from 0.21 to 0.38 and 0.27 to 0.50, with average values of 0.29 and 0.37, respectively.
The results will be discussed in light of various factors.
Keywords: natural radioactivity, radium equivalent activity, absorbed dose rate, gamma ray spectroscopy
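The derived quantities in this abstract follow the standard UNSCEAR formulas, so the reported averages can be checked directly from the mean activity concentrations. A small sketch (the coefficients are the commonly used UNSCEAR values; treat them as assumptions if the paper used different ones):

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Ra-eq (Bq/kg) = A_Ra + 1.43*A_Th + 0.077*A_K (standard weighting)."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def absorbed_dose_rate(a_ra, a_th, a_k):
    """Outdoor absorbed dose rate in air, nGy/h (UNSCEAR coefficients)."""
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

def annual_effective_dose(d_ngy_h, occupancy=0.2, conversion=0.7):
    """Outdoor annual effective dose (mSv): D * 8760 h * occupancy * 0.7 Sv/Gy."""
    return d_ngy_h * 8760 * occupancy * conversion * 1e-6

def external_hazard_index(a_ra, a_th, a_k):
    """H_ex = A_Ra/370 + A_Th/259 + A_K/4810."""
    return a_ra / 370 + a_th / 259 + a_k / 4810

# Mean activity concentrations from the abstract (Bq/kg)
ra, th, k = 30.3, 38.5, 317.8
print(radium_equivalent(ra, th, k))   # ~109.8 (abstract reports 109.7)
print(absorbed_dose_rate(ra, th, k))  # ~50.5 (abstract reports 50.4)
```

The computed averages reproduce the reported 109.7 Bq/kg, 50.4 nGy/h, 0.061 mSv, and Hex of 0.29 to within rounding.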
Procedia PDF Downloads 362
2185 Radiofrequency Ablation: A Technique in the Management of Low Anal Fistula
Authors: R. Suresh, C. B. Singh, A. K. Sarda
Abstract:
Background: Over the decades, several surgical techniques have been developed to treat anal fistulas, with variable success rates and complications. A large amount of work has been done over the years on radiofrequency excision of the fistula, but none on ablating the tract. One can therefore consider obliteration of an anal fistula by radiofrequency ablation (RFA). Material and Methods: A randomized controlled clinical trial was conducted at Lok Nayak Hospital, where a total of 40 patients were enrolled in the study and randomly assigned to Group I (fistulectomy) (n=20) or Group II (RFA) (n=20). The aim of the study was to compare the efficacy of RFA of the fistula versus fistulectomy in the treatment of a low anal fistula and to evaluate RFA as an effective alternative to fistulectomy, with time taken for wound healing as the primary outcome and post-operative pain and time taken to return to work as secondary outcomes. Patients with simple low anal fistulas, a single internal and external opening, and not more than two secondary tracts were included. Patients with a high complex fistula, fistulas communicating with a cavity, or fistulas due to conditions such as tuberculosis, Crohn's disease, or malignancy were excluded from the study. Results: Both groups were comparable with respect to age, sex ratio, and type of fistula. The mean healing time was significantly shorter in Group II (41.02 days) than in Group I (62.68 days). The mean operative time was significantly shorter in Group II (21.40 min) than in Group I (28.50 min). The mean time taken to return to work was significantly shorter in Group II (8.30 days) than in Group I (12.01 days). There was no significant difference in post-operative hospital stay, mean post-operative pain score, wound infection, or recurrence between the two groups.
Conclusion: Patients who underwent RFA of the fistula had shorter wound healing time, operative time, and time taken to return to work than those who underwent fistulectomy; RFA therefore shows outcomes comparable to fistulectomy in the treatment of low anal fistula.
Keywords: fistulectomy, low anal fistula, radio frequency ablation, wound healing
Procedia PDF Downloads 344
2184 Development of a Predictive Model to Prevent Financial Crisis
Authors: Tengqin Han
Abstract:
Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt and may thus suffer a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economy, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity. Among these risks, the credit system is the most significant one. Due to the long terms and large balances of mortgages, it is critical to monitor risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model of the probability of delinquency. Through univariate analysis the data is cleaned up, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data from 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
Keywords: delinquency, mortgage, model development, model validation
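The KS statistic used for validation here is the maximum gap between the cumulative score distributions of non-delinquent and delinquent accounts. A minimal, library-free sketch (the variable names are illustrative):

```python
import bisect

def ks_statistic(scores_good, scores_bad):
    """KS: max vertical gap between the empirical CDFs of model scores
    for non-delinquent ("good") and delinquent ("bad") accounts."""
    goods, bads = sorted(scores_good), sorted(scores_bad)
    cuts = sorted(set(goods + bads))
    ks = 0.0
    for c in cuts:
        cdf_good = bisect.bisect_right(goods, c) / len(goods)
        cdf_bad = bisect.bisect_right(bads, c) / len(bads)
        ks = max(ks, abs(cdf_good - cdf_bad))
    return 100 * ks  # in points, as in the abstract (e.g. KS = 31)
```

A KS of 31 means that at the best single score cutoff, the cumulative distributions of the two groups differ by 31 points.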
Procedia PDF Downloads 228
2183 The Uses of Photodynamic Therapy versus Anti-vascular Endothelial Growth Factor in the Management of Acute Central Serous Chorioretinopathy: Systematic Review and Meta-Analysis
Authors: Hadeel Seraj, Mohammed Khoshhal, Mustafa Alhamoud, Hassan Alhashim, Anas Alsaif, Amro Abukhashabah
Abstract:
Central serous chorioretinopathy (CSCR) is an idiopathic retinal disease characterized by localized serous detachment of the neurosensory retina at the macula. To date, there is no high-quality evidence on recent updates in treating acute CSCR focusing on photodynamic therapy (PDT) and anti-vascular endothelial growth factor (anti-VEGF). Hence, this review aims to systematically review the latest treatment strategies for acute CSCR. Methodology: The following electronic databases were used for a comprehensive and systematic literature review: MEDLINE, EMBASE, and Cochrane. In addition, we analyzed studies comparing PDT with placebo, anti-VEGF with placebo, or PDT with anti-VEGF in treating acute CSCR eyes with no previous intervention. Results: Seven studies were included, with a total of 292 eyes. The overall positive results were significantly higher among patients who received PDT compared to control groups (OR = 7.96, 95% CI, 3.02 to 20.95, p < 0.001). The proportions of positive results were 81.0% and 97.1% among patients who received anti-VEGF and PDT, respectively, with no statistically significant differences between the groups. In addition, there were no significant differences between anti-VEGF and control groups. In contrast, PDT was significantly associated with lower recurrence odds than the control groups (OR = 0.12, 95% CI, 0.04 to 0.39, p = 0.042). Conclusion: According to our findings, PDT showed higher positive results than anti-VEGF in acute CSCR. In addition, PDT was significantly associated with a lower recurrence rate than the control group. However, these findings need to be confirmed and updated by large-scale, well-designed RCTs.
Keywords: central serous chorioretinopathy, acute CSCR, photodynamic therapy, anti-vascular endothelial growth factor
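The pooled effect sizes above are odds ratios with Wald-type confidence intervals computed on the log scale. A sketch of that computation for a single 2x2 table (the counts below are hypothetical, not the review's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = events in treatment, b = non-events in treatment,
    c = events in control,   d = non-events in control.
    CI uses the standard log-scale Wald formula."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(a=30, b=5, c=10, d=20)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```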
Procedia PDF Downloads 79
2182 Theoretical Prediction on the Lifetime of Sessile Evaporating Droplet in Blade Cooling
Authors: Yang Shen, Yongpan Cheng, Jinliang Xu
Abstract:
Effective blade cooling is of great significance for improving the performance of turbines. Mist cooling is emerging as a promising alternative to traditional single-phase cooling. In mist cooling, the injected droplet evaporates rapidly and cools down the blade surface through the absorbed latent heat; the lifetime of the evaporating droplet therefore becomes critical for the design of the blade's cooling passages. So far there have been extensive studies on droplet evaporation, but an isothermal model is applied in most of them. In fact, the surface cooling effect can affect droplet evaporation greatly and can prolong the droplet evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the "constant contact radius" (CCR) mode, the "constant contact angle" (CCA) mode, and the "stick-slip" (SS) mode. The dimensionless number E0 is introduced to indicate the strength of the evaporative cooling; it is defined based on the thermal properties of the liquid and the atmosphere. Our model accurately predicts the evaporation lifetime, as validated against available experimental data. The temporal variations of droplet volume, contact angle, and contact radius are then presented under the CCR, CCA, and SS modes, and the following conclusions are obtained.
1) The larger the dimensionless number E0, the longer the lifetime in all three evaporation modes; 2) the droplet volume over time still follows the "2/3 power law" in the CCA mode, as in the isothermal model without the cooling effect; 3) in the SS mode, a large transition contact angle reduces the evaporation time in the CCR mode and increases the time in the CCA mode, so the overall lifetime is increased; 4) a correction factor for predicting the instantaneous volume of the droplet is derived to predict the droplet lifetime accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation.
Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis
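The "2/3 power law" cited in conclusion 2 is the classical result that, in the CCA mode, droplet volume raised to the power 2/3 decays linearly with time. A quick numerical check under that assumption (V0 and tf are arbitrary illustrative values, not from the paper):

```python
# CCA-mode model: V(t) = V0 * (1 - t/tf)**1.5, which implies
# V(t)**(2/3) = V0**(2/3) * (1 - t/tf), i.e. linear decay in t.
V0, tf = 1.0, 100.0

def volume(t):
    """Droplet volume under the 2/3 power law, CCA mode."""
    return V0 * (1 - t / tf) ** 1.5

ts = [0, 25, 50, 75]
v23 = [volume(t) ** (2 / 3) for t in ts]

# For equally spaced samples of a linear curve the successive
# differences are constant.
diffs = [v23[i + 1] - v23[i] for i in range(len(v23) - 1)]
print(diffs)
```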
Procedia PDF Downloads 142
2181 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes. This task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. Our method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first is transformation, which is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of DNA barcodes, the Fourier transform, and power spectrum signal processing.
The second is approximation, which is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
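The transformation phase can be sketched end to end: EIIP coding maps each nucleotide to a pseudopotential value, and the power spectrum of the DFT of that numeric signal becomes the feature vector handed to the wavelet network. A stdlib-only sketch (the sample barcode string is made up):

```python
import cmath

# Standard EIIP (electron-ion interaction pseudopotential) values per base
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(barcode):
    """Map a DNA barcode to its EIIP numeric signal, then return the
    power spectrum |X[k]|^2 of its discrete Fourier transform."""
    x = [EIIP[base] for base in barcode]
    n = len(x)
    spectrum = []
    for k in range(n):
        xk = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        spectrum.append(abs(xk) ** 2)
    return spectrum

ps = power_spectrum("ATGCGTAC")  # hypothetical 8-base fragment
```

In practice an FFT (e.g. numpy.fft) replaces the direct DFT loop for real barcode lengths.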
Procedia PDF Downloads 318
2180 The Vulnerability of Farmers in Valencia Negros Oriental to Climate Change: El Niño Phenomenon and Malnutrition
Authors: J. K. Pis-An
Abstract:
Objective: The purpose of the study was to examine the vulnerability of farmers to the effects of climate change, specifically the El Niño phenomenon felt in the Philippines in 2009-2010. Methods: A KAP survey determined behavioral responses to the effects of El Niño; body mass index was measured; and dietary assessment was carried out using 24-hour food recall. Results: 75% of the respondents claimed that crop yields significantly decreased during the drought. Farmers' households are large: 51.6% are composed of 6-10 family members, and 68% have annual incomes below Php 100,00. Anthropometric assessment showed a prevalence of chronic energy deficiency grade 1 of 17% among females, with 28.57% in the low-normal range, while among males the figures were 10% for chronic energy deficiency grade 1, 18.33% for low normal, and 31.67% for obese grade 1. Dietary assessment showed macronutrient intakes of carbohydrates, protein, and fat below recommended amounts for 31.6% of respondents, along with micronutrient deficiencies in calcium, iron, vitamin A, thiamine, riboflavin, niacin, and vitamin C. Conclusion: The majority of the rural population is engaged in the farming livelihood that makes up the backbone of their economic growth. Placing the current nutritional status of the farmers in the context of food security, there are reasons to believe that this status will worsen if extreme climatic conditions once again prevail in the region. Because farmers rely primarily on home-grown crops for their food supply, a reduction in farm production during drought is expected to adversely affect dietary intake. The local government should therefore institute programs to increase food resiliency and to prioritize the health of the population as the moving force for productivity and development.
Keywords: World Health Organization, United Nations Framework Convention on Climate Change, anthropometric, macronutrient, micronutrient
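The anthropometric categories in the results follow the WHO BMI cutoffs for adults. A small sketch of the classification (the "low normal" band of 18.5-20.0 is an assumption inferred from the survey's reporting, not a WHO class):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def ced_grade(bmi_value):
    """WHO chronic energy deficiency (CED) grades for adults, plus an
    assumed 'low normal' band (18.5-20.0) matching the survey's categories."""
    if bmi_value < 16.0:
        return "CED grade 3"
    if bmi_value < 17.0:
        return "CED grade 2"
    if bmi_value < 18.5:
        return "CED grade 1"
    if bmi_value < 20.0:
        return "low normal"
    if bmi_value < 25.0:
        return "normal"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

print(ced_grade(bmi(45.0, 1.60)))  # hypothetical respondent
```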
Procedia PDF Downloads 444
2179 Computational Fluid Dynamics Modeling of Physical Mass Transfer of CO₂ by N₂O Analogy Using One Fluid Formulation in OpenFOAM
Authors: Phanindra Prasad Thummala, Umran Tezcan Un, Ahmet Ozan Celik
Abstract:
Removal of CO₂ by MEA (monoethanolamine) in structured packing columns depends highly on the gas-liquid interfacial area and film thickness (liquid load). CFD (computational fluid dynamics) can be used to find the interfacial area and film thickness, and their impact on mass transfer in gas-liquid flow, in any column geometry. In general, modeling approaches used in CFD derive mass transfer parameters from standard correlations based on penetration or surface renewal theories. In order to avoid the effect of the assumptions involved in deriving those correlations and to model mass transfer based solely on fluid properties, state-of-the-art approaches like the one-fluid formulation are useful. In this work, the one-fluid formulation was implemented and evaluated for modeling the physical mass transfer of CO₂ by N₂O analogy in the OpenFOAM CFD software. The N₂O analogy avoids the effect of chemical reactions on absorption and allows studying the amount of physical CO₂ mass transfer possible in a given geometry. The computational domain in the current study was a flat plate with gas and liquid flowing in countercurrent directions. The effects of operating parameters such as flow rate, MEA concentration, and angle of inclination on physical mass transfer are studied in detail. Liquid-side mass transfer coefficients obtained by the simulations are compared to correlations available in the literature, and it was found that the one-fluid formulation effectively captures the effects of interface surface instabilities on the mass transfer coefficient with higher accuracy. The high mesh refinement required near the interface region was found to be a limiting factor for applying this approach to large-scale simulations. Overall, the one-fluid formulation is found promising for CFD studies involving CO₂ mass transfer.
Keywords: one fluid formulation, CO₂ absorption, liquid mass transfer coefficient, OpenFOAM, N₂O analogy
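The N₂O analogy estimates the physical solubility of CO₂ in reactive MEA solutions from measurable N₂O solubilities, on the assumption that the ratio of Henry's law constants in the amine solution equals that in water. A sketch of the ratio logic (the numeric Henry constants below are placeholders, not values from the paper):

```python
def henry_co2_amine(h_n2o_amine, h_co2_water, h_n2o_water):
    """N2O analogy for physical solubility:
    He(CO2, amine) = He(N2O, amine) * He(CO2, water) / He(N2O, water).
    All Henry's law constants must share the same units."""
    return h_n2o_amine * h_co2_water / h_n2o_water

# Placeholder constants (same arbitrary units) purely to show the ratio logic
h_co2 = henry_co2_amine(h_n2o_amine=4000.0, h_co2_water=3000.0, h_n2o_water=4100.0)
print(round(h_co2, 1))
```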
Procedia PDF Downloads 220
2178 Anti-Parasite Targeting with Amino Acid-Capped Nanoparticles Modulates Multiple Cellular Processes in Host
Authors: Oluyomi Stephen Adeyemi, Kentaro Kato
Abstract:
Toxoplasma gondii is the etiological agent of toxoplasmosis, a common parasitic disease capable of infecting a range of hosts, including nearly one-third of the human population. Current treatment options for toxoplasmosis patients are limited. Consequently, toxoplasmosis represents a large global burden that is further compounded by the shortcomings of the current therapeutic options. These factors underscore the need for better anti-T. gondii agents and/or new treatment approaches. In the present study, we sought to find out whether preparing and capping nanoparticles (NPs) with amino acids would enhance specificity toward the parasite versus the host cell. The selection of amino acids was premised on the fact that T. gondii is auxotrophic for some amino acids. The amino acid-capped nanoparticles (amino-NPs) were synthesized, purified, and characterized following established protocols. Next, we tested the anti-T. gondii activity of the amino-NPs using an in vitro experimental model of infection. Overall, our data support enhanced and selective action by the amino-NPs against the parasite versus the host cells. The findings are promising and provide additional support for exploring the prospects of NPs as alternative anti-parasite agents. In addition, the anti-parasite action of the amino-NPs indicates that the nutritional requirements of the parasite may represent a viable target in the development of better alternative anti-parasite agents. Furthermore, the data suggest that the anti-parasite mechanism of the amino-NPs involves multiple cellular processes, including the production of reactive oxygen species (ROS), modulation of hypoxia-inducible factor-1 alpha (HIF-1α), and activation of the kynurenine pathway. Taken together, the findings further highlight the prospects of NPs as an alternative source of anti-parasite agents.
Keywords: drug discovery, infectious diseases, mode of action, nanomedicine
Procedia PDF Downloads 112
2177 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites
Authors: S. D. El Wakil, M. Pladsen
Abstract:
Fiber-reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and the exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen is a woven-reinforced graphite fiber/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This measure was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rates and the rows varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed, using Minitab software, in order to obtain the combination that would minimize the drilling-induced damage.
Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites
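The damage metric and the ANOVA step can be sketched directly. Only the damage definition below is taken from the abstract; the hole diameters and groupings are invented for illustration, and a hand-rolled one-way ANOVA over feed-rate groups stands in for the Minitab analysis:

```python
def damage_percent(d_drilled, d_nominal):
    """Drilling-induced damage as defined in the abstract:
    |d_drilled - d_nominal| / d_nominal * 100."""
    return abs(d_drilled - d_nominal) / d_nominal * 100.0

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over damage values grouped by
    (for example) feed rate."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical drilled diameters (mm) against a 12.5 mm nominal,
# grouped by two illustrative feed rates
low_feed = [damage_percent(d, 12.5) for d in (12.55, 12.57)]
high_feed = [damage_percent(d, 12.5) for d in (12.72, 12.75)]
print(one_way_anova_f([low_feed, high_feed]))
```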
Procedia PDF Downloads 390
2176 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate
Authors: Susan Diamond
Abstract:
Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.
Keywords: deep learning, machine learning, cognitive computing, model training
Procedia PDF Downloads 209
2175 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection
Authors: Hamidullah Binol, Abdullah Bal
Abstract:
Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can collect large amounts of spatial and spectral data on the observed objects concurrently, yielding detection capabilities that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detecting fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for two-class detection problems. The conventional FKT has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The proposed approach aims to segment the fat content of ground meat by treating fat as the target class to be separated from the remaining classes (the clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for fat ratio in ground meat.
Keywords: food (ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods
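The two-class mechanics of the Fukunaga-Koontz transform can be sketched in a few lines of NumPy. This is a hedged illustration of the linear FKT only, with synthetic target/clutter data; the paper's kernel version additionally maps the data into a kernel-induced feature space:

```python
import numpy as np

def fkt_basis(x1, x2):
    """Linear Fukunaga-Koontz transform: after whitening with the summed
    covariance, both classes share eigenvectors, and their eigenvalues
    sum to one, so the best features for one class are the worst for
    the other."""
    s1 = np.cov(x1, rowvar=False)
    s2 = np.cov(x2, rowvar=False)
    evals, evecs = np.linalg.eigh(s1 + s2)      # whitening transform
    w = evecs @ np.diag(evals ** -0.5)
    lam, v = np.linalg.eigh(w.T @ s1 @ w)       # eigenvalues in [0, 1], ascending
    return w @ v, lam

# Toy data: the target class varies along axis 0, the clutter along axis 2.
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3)) * [3.0, 1.0, 1.0]
clutter = rng.normal(size=(500, 3)) * [1.0, 1.0, 3.0]
basis, lam = fkt_basis(target, clutter)
print(np.round(lam, 2))   # values near 1 mark target-dominated directions
```

Projections onto the basis vectors with eigenvalues near 1 emphasize the target class, while those near 0 emphasize the clutter, which is what makes the transform natural for target-versus-background segmentation.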
Procedia PDF Downloads 431
2174 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention as an encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. Because capturing conditions vary from image to image, algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows differ between images. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
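The two named pre-processing ingredients, the YCrCb conversion and the morphological erosion/dilation, can be sketched in NumPy. This is an illustrative sketch assuming the standard ITU-R BT.601 conversion and a fixed 3x3 structuring element; the paper's own pipeline avoids fixed parameters:

```python
import numpy as np

def rgb_to_ycrcb(img):
    """ITU-R BT.601 conversion; img is float RGB scaled to [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.713 * (r - y) + 128.0
    cb = 0.564 * (b - y) + 128.0
    return np.stack([y, cr, cb], axis=-1)

def _neighbours(mask):
    """Stack of the 3x3 neighbourhood of every pixel (zero-padded)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def erode(mask):
    return _neighbours(mask).all(axis=0)

def dilate(mask):
    return _neighbours(mask).any(axis=0)

# Opening (erosion then dilation) removes specks smaller than the 3x3 element.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
opened = dilate(erode(mask))
```

Shadows mainly depress the luma channel Y while leaving the chroma channels Cr and Cb comparatively stable, which is why chroma-based segmentation followed by morphological cleanup is a common starting point for shadow-robust detection.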
Procedia PDF Downloads 216
2173 Surface Deformation Studies in South of Johor Using the Integration of InSAR and Resistivity Methods
Authors: Sirajo Abubakar, Ismail Ahmad Abir, Muhammad Sabiu Bala, Muhammad Mustapha Adejo, Aravind Shanmugaveloo
Abstract:
Over the years, land subsidence has been a serious threat, mostly to urban areas. Land subsidence is the sudden sinking or gradual downward settling of the ground surface with little or no horizontal motion. In most areas, land subsidence is a slow process covering a large area and therefore sometimes goes unnoticed. The south of Johor is the area of interest for this project because it is undergoing rapid urbanization. The objective of this research is to evaluate and identify potential deformation in the south of Johor using integrated remote sensing and 2D resistivity methods. Interferometric synthetic aperture radar (InSAR), a remote sensing technique, has the potential to map coherent displacements at centimeter to millimeter resolution. The persistent scatterer interferometry (PSI) stacking technique was applied to Sentinel-1 data to detect ground deformation in the study area. Dipole-dipole resistivity profiling was conducted in three areas to determine the subsurface features there. The interpreted subsurface features were then correlated with the remote sensing results to predict the possible causes of subsidence and uplift in the south of Johor. Based on the results obtained, West Johor Bahru (0.63 mm/year) and Ulu Tiram (1.61 mm/year) are undergoing uplift, attributed to possible geological uplift. On the other hand, East Johor Bahru (-0.26 mm/year) and Senai (-1.16 mm/year) are undergoing subsidence, attributed to possible fracturing and loading by granitic boulders. Land subsidence must be taken seriously, as it can cause serious damage to infrastructure and human life; it should be monitored and preventive actions taken to avert disasters.
Keywords: interferometric synthetic aperture radar, persistent scatterer, minimum spanning tree, resistivity, subsidence
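The quoted velocities are, in essence, linear rates fitted to each persistent scatterer's displacement time series. A minimal sketch, with synthetic data standing in for real Sentinel-1 PSI output and the Senai rate assumed as ground truth:

```python
import numpy as np

def deformation_rate(days, disp_mm):
    """Least-squares linear velocity in mm/year from a displacement series."""
    years = np.asarray(days) / 365.25
    slope, _ = np.polyfit(years, disp_mm, 1)
    return slope

# Synthetic scatterer subsiding at -1.16 mm/year (the Senai figure) with
# 0.2 mm measurement noise, one acquisition every 12 days for three years.
rng = np.random.default_rng(1)
days = np.arange(0, 3 * 365, 12)
disp = -1.16 * days / 365.25 + rng.normal(0.0, 0.2, days.size)
print(round(deformation_rate(days, disp), 2))
```

A negative slope indicates subsidence and a positive slope uplift, matching the sign convention used for the four sites above.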
Procedia PDF Downloads 147
2172 Recovery of Selenium from Scrubber Sludge in Copper Process
Authors: Lakshmikanth Reddy, Bhavin Desai, Chandrakala Kari, Sanjay Sarkar, Pradeep Binu
Abstract:
The sulphur dioxide gases generated as a by-product of the smelting and converting of copper concentrate contain selenium, apart from zinc, lead, copper, cadmium, bismuth, antimony, and arsenic. The gaseous stream is treated in a waste heat boiler, electrostatic precipitator, and scrubbers to remove coarse particulate matter in order to produce commercial-grade sulfuric acid. The gas cleaning section of the acid plant uses water to scrub the smelting gases. The sludge that settled at the bottom of the scrubber was analyzed in the present investigation and found to contain 30 to 40 wt% copper and up to 40 wt% selenium. The sludge collected during blow-down is directly recycled to the smelter for copper recovery; however, the selenium is expected to vaporize again under the high oxidation potential during smelting and converting, causing selenium to accumulate in the sludge. In the present investigation, a roasting process has been developed to recover the selenium from the sludge before copper recovery at the smelter. Selenium is associated with copper in the sludge as copper selenide, as determined by X-ray diffraction and electron microscopy. Thermodynamic and thermogravimetric studies revealed that the copper selenide phase present in the sludge was amenable to oxidation at 600°C, forming oxides of copper and selenium (Cu-Se-O). The dissociation of selenium from the copper oxide was made possible by sulfatation with sulfur dioxide between 450 and 600°C, resulting in the formation of CuSO₄ (s) and SeO₂ (g). Lab-scale trials were carried out in a vertical tubular furnace to determine the optimum roasting conditions with respect to roasting time, temperature, and O₂:SO₂ molar ratio. Using these optimum conditions, up to 90 wt% of the selenium could be recovered from the sludge as SeO₂ vapor in a large-scale commercial roaster. The roasted sludge, free of selenium and containing oxides and sulfates of copper, can now be recycled in the smelter for copper recovery.
Keywords: copper, selenium, copper selenide, sludge, roasting, SeO₂
Procedia PDF Downloads 205
2171 Community Perceptions on Honey Quality in Tobacco Growing Areas in Kigoma Region, Tanzania
Authors: Pilly Kagosi, Cherestino Balama
Abstract:
Beekeeping plays a major role in improving biodiversity, increasing household income, and supporting crop production through pollination. Tobacco farming is also a main source of household income for smallholder farmers. In Kigoma, tobacco production has increased and is perceived to threaten honey quality. This study explored community perceptions of honey quality in tobacco and non-tobacco growing areas. The study was conducted in Kigoma Region, Tanzania. Districts and villages were purposively sampled based on the large numbers of people engaged in beekeeping activities and tobacco farming. Socioeconomic data were collected and analysed using the Statistical Package for the Social Sciences and content analysis. Stakeholder perceptions of honey quality were analysed using a Likert scale. The majority of respondents agreed that tobacco farming greatly affects honey quality because honey from beehives near tobacco farms tastes bitter and is sometimes irritating, which they associated with nicotine content and the agrochemicals applied to tobacco crops, although they could not distinguish bitterness caused by agrochemicals from that caused by bee fodder. Furthermore, it was revealed that the chemicals applied to tobacco and vegetables have a negative effect on bees and honey quality. Respondents believe that setting beehives near tobacco farms might contaminate the honey and therefore affect its quality. Beekeepers are not aware that nicotine-like content can also come from other bee fodder, such as miombo species, which has no effect on human beings. In fact, tobacco farming does not affect the quality of bee products when farmers properly manage tobacco flowers and handle honey correctly; the big challenges in tobacco farming are the chemicals applied to the crops and the harvesting of bee fodder trees for curing tobacco. The study recommends training the community on proper management of tobacco and proper handling of bee products.
Keywords: community, honey, perceptions, tobacco
Procedia PDF Downloads 144
2170 Legal Problems with the Thai Political Party Establishment
Authors: Paiboon Chuwatthanakij
Abstract:
Countries around the world are governed in different ways, and many depend on their people to administer the country. Thailand, for example, vests sovereignty in the Thai people under its constitution; however, the Thai voting system cannot respond quickly enough under the current political management system. The sovereignty of the Thai people is expressed through representatives chosen at elections, who set new policy directions for the country in the House and the Cabinet. This is particularly important if democracy is to develop under the current political institutions. The Organic Act on Political Parties 2007 governs party establishment today and is causing confrontations within the system: many political parties will soon be abolished, and many have already been subsidized. This research analyzes the legal problems with political party establishment under the Organic Act on Political Parties 2007, focusing on the freedom of each political establishment as compared with effective political operation. Textbooks and academic papers from studies at home and abroad are referenced. The study revealed that the Organic Act on Political Parties 2007 imposes strict provisions on political structure regarding the number of members and the number of branches within the political party system, and that such requirements must be completed within one year. Under the existing laws, small parties are not able to participate with the bigger parties: they can fulfil the formal requirements in the cities but fail to coalesce because the current laws do not allow them to unite as one. It is important to allow all independent political parties to join the current political structure, yet board members cannot help the smaller parties become a large organization under existing Thai law. Creating a new establishment that functions efficiently throughout all branches would be one solution to these legal problems between political parties. Under such an arrangement, individual political parties could participate with the bigger parties during elections. Until the current political institutions change their system to accommodate public opinion, these Thai laws will continue to be a problem for all political parties in Thailand.
Keywords: coalesced, political party, sovereignty, elections
Procedia PDF Downloads 314
2169 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device
Authors: Pradakshina Sharma, Jagriti Narang
Abstract:
Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) that exploits the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of an origami electrochemical paper analytical device (ePAD) further enhances the sensor's appeal: paper-based testing provides a cost-effective platform for point-of-care diagnostics. Such sensors are referred to as eco-designed analytical tools due to their efficient production, eco-friendly substrate, and the potential to simplify waste management by incinerating the sensor after measurement. In this research, the foldability of paper has been used to design and create a 3D multifaceted biosensor that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. Aptamers are used here because they are considered a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both validated with a potentiostat, were used to measure the analytical response of the biosensor. The aptamer-modified electrode served as a signal modulation platform for hybridization with the target CHIKV antigen, whose presence was determined by a decline in the current produced by its interaction with an anionic mediator, methylene blue (MB). A detection limit of 1 ng/ml and a broad linear range of 1 ng/ml-10 µg/ml for the CHIKV antigen are reported.
Keywords: biosensors, ePAD, arboviral infections, point of care
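As an aside on how such figures are typically obtained, the sketch below fits a hypothetical signal-off calibration (current falling linearly with the logarithm of concentration) and derives a detection limit from assumed blank statistics. The data points, blank current, and noise level are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration points: MB reduction peak current (uA) vs
# CHIKV antigen concentration (ng/ml); current falls as antigen binds.
conc = np.array([1.0, 10.0, 100.0, 1e3, 1e4])
current = np.array([9.8, 8.1, 6.0, 4.1, 2.2])

slope, intercept = np.polyfit(np.log10(conc), current, 1)

# Concentration where the signal drops 3 standard deviations below the
# blank (assumed blank statistics, in the spirit of the IUPAC definition).
i_blank, sd_blank = 10.0, 0.15
lod = 10 ** ((i_blank - 3 * sd_blank - intercept) / slope)
print(f"slope = {slope:.2f} uA/decade, LOD ~ {lod:.1f} ng/ml")
```

With these invented numbers the fitted detection limit lands near the low end of the linear range, which is the qualitative shape of the result the abstract reports.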
Procedia PDF Downloads 97
2168 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows abnormal rhythms and underlying conditions to be detected. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which further complicates visual diagnosis and can deeply delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals of different origins and features to test the model's ability to generalize. Performance of the model for R-peak detection in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
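For contrast with the deep-learning approach, a naive classical baseline for R-peak detection (an adaptive amplitude threshold plus a refractory period, not the paper's IncResU-Net) can be sketched as follows, with a synthetic ECG standing in for real data:

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory_s=0.25):
    """Naive baseline: local maxima above mean + 2*std, separated by a
    physiological refractory period (not the paper's learned model)."""
    thresh = np.mean(ecg) + 2.0 * np.std(ecg)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] >= thresh and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)

# Synthetic ECG at fs = 250 Hz: one narrow Gaussian R-wave per second,
# riding on low-amplitude baseline wander.
fs, beats = 250, 5
t = np.arange(beats * fs) / fs
ecg = sum(np.exp(-((t - (b + 0.5)) ** 2) / (2 * 0.01 ** 2)) for b in range(beats))
ecg = ecg + 0.05 * np.sin(2 * np.pi * 0.3 * t)
peaks = detect_r_peaks(ecg, fs)
print(len(peaks), np.diff(peaks))
```

Simple detectors like this degrade quickly under realistic noise and morphology changes, which is precisely the gap that learned models such as the one in this paper aim to close.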
Procedia PDF Downloads 186
2167 Improving Graduate Student Writing Skills: Best Practices and Outcomes
Authors: Jamie Sundvall, Lisa Jennings
Abstract:
A decline in the writing skills and abilities of students entering graduate school has become a focus for university systems within the United States. This decline has become a national trend that requires reflection on the intervention strategies used to address the deficit and on unintended consequences as outcomes in the profession. Social work faculty are challenged to increase written scholarship within the academic setting. However, when a large number of students in each course have writing deficits, focus shifts away from content, the ability to demonstrate competency, and the application of core social work concepts. This qualitative study focuses on the experiences of online faculty who support increasing scholarship through writing and who follow best practices in preparing students academically to improve written presentation in classroom work. The study outlines best practices to improve written academic presentation, especially in an online setting. The research also highlights how a student's ability to show competency and application of concepts may be overlooked in the online setting. This can lead to new social workers who are prepared academically but may be unable to advocate effectively and present their thinking in their written documentation. The intended progression of writing across all levels of higher education moves from summary, to application, and into abstract problem solving. Initial findings indicate that it is important to reflect on the practices used to address writing deficits in terms of academic writing, competency, and application. It is equally important to reflect on how these methods of intervention affect students post-graduation. Specifically, for faculty, it is valuable to assess a social worker's ability to engage in continuity of documentation and advocacy at the micro, mezzo, macro, and international levels of practice.
Keywords: intervention, professional impact, scholarship, writing
Procedia PDF Downloads 139
2166 A Serious Game to Upgrade the Learning of Organizational Skills in Nursing Schools
Authors: Benoit Landi, Hervé Pingaud, Jean-Benoit Culie, Michel Galaup
Abstract:
Serious games have been widely disseminated in the field of digital learning. They have proved their utility in improving skills through virtual environments that simulate the field where new competencies have to be acquired and assessed. This paper describes how we created CLONE, a serious game whose purpose is to help nurses create an efficient work plan in a hospital care unit. In CLONE, the number of patients to take care of is similar to the reality of the job, going far beyond what is currently practiced in nursing school classrooms. This similarity with the operational field proportionally increases the number of activities to be scheduled. Moreover, the team is very often composed of both registered nurses and nurse assistants, who must share the work in accordance with regulatory obligations. Therefore, on the one hand, building a short-term plan is a complex task with a large amount of data to deal with, and on the other, good clinical practices have to be applied systematically. We present how a reference plan was defined by formulating an optimization problem using the expertise of teachers. This formulation ensures the gameplay feasibility of the scenario, which was produced and enhanced throughout the game design process. It was also crucial to steer a player toward a specific gaming strategy. Since one of our most important learning outcomes is a clear understanding of the workload concept, its factual calculation for each caregiver over time and its inclusion in the nurse's reasoning during plan elaboration are focal points. We demonstrate how to modify the game scenario to create a digital environment in which these somewhat abstract principles can be understood and applied. Finally, we report on a pilot with a thousand undergraduate nursing students.
Keywords: care planning, workload, game design, hospital nurse, organizational skills, digital learning, serious game
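The workload idea at the heart of the game can be illustrated with a toy greedy planner: each activity has a duration and a role constraint, and a caregiver's workload is the running sum of assigned minutes. This is a hypothetical sketch for illustration only, not CLONE's optimization formulation:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    patient: str
    minutes: int
    needs_nurse: bool   # True: registered nurse only; False: assistant allowed

def plan(activities, nurses, assistants):
    """Greedy sketch: longest activities first, each given to the
    least-loaded caregiver allowed to perform it."""
    load = {name: 0 for name in nurses + assistants}
    schedule = {name: [] for name in load}
    for act in sorted(activities, key=lambda a: -a.minutes):
        pool = nurses if act.needs_nurse else nurses + assistants
        who = min(pool, key=lambda n: load[n])
        load[who] += act.minutes
        schedule[who].append(act.patient)
    return load, schedule

acts = [Activity("P1", 30, True), Activity("P2", 20, False),
        Activity("P3", 45, True), Activity("P4", 15, False),
        Activity("P5", 25, False)]
load, schedule = plan(acts, ["RN1", "RN2"], ["NA1"])
print(load)
```

Even this crude heuristic makes the pedagogical point visible: the regulatory role constraint and the balancing objective interact, which is exactly the reasoning the game asks students to internalize.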
Procedia PDF Downloads 191
2165 Information Visualization Methods Applied to Nanostructured Biosensors
Authors: Osvaldo N. Oliveira Jr.
Abstract:
The control of molecular architecture inherent in some experimental methods for producing nanostructured films has had a great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing in which biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. This presentation provides an overview of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, computational and statistical methods have been used to optimize performance. Multidimensional projection techniques such as Sammon's mapping have proved more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and leishmaniasis. Optimization of biosensing may combine another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity with impedance spectroscopy. Also discussed is the possible convergence of technologies, through which machine learning and other computational methods may be used to treat biosensor data within an expert system for clinical diagnosis.
Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique
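For readers unfamiliar with it, Sammon's mapping minimizes a distance-preservation stress by iterative descent. The sketch below uses plain gradient descent from a PCA initialization (Sammon's original method uses a pseudo-Newton update with a "magic factor" around 0.3), with two synthetic clusters standing in for serum-sample feature vectors:

```python
import numpy as np

def sammon(x, dims=2, iters=200, lr=0.5):
    """Gradient descent on the Sammon stress, initialised from PCA
    (a sketch, not a production implementation)."""
    n = len(x)
    iu = np.triu_indices(n, 1)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    np.fill_diagonal(d, 1.0)                 # avoid 0/0 on the diagonal
    c = d[iu].sum()
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    y = xc @ vt[:dims].T                     # PCA initialisation
    stress = []
    for _ in range(iters):
        delta = np.linalg.norm(y[:, None] - y[None, :], axis=-1)
        np.fill_diagonal(delta, 1.0)
        err = d - delta
        stress.append((err[iu] ** 2 / d[iu]).sum() / c)
        w = err / (delta * d)                # pairwise error weights
        np.fill_diagonal(w, 0.0)
        grad = (-2.0 / c) * (w[:, :, None] * (y[:, None] - y[None, :])).sum(axis=1)
        y -= lr * grad
    return y, stress

# Two well-separated 3D clusters, a stand-in for two serum-sample groups.
rng = np.random.default_rng(2)
a = rng.normal(0.0, 0.3, (12, 3))
b = rng.normal(4.0, 0.3, (12, 3))
y, stress = sammon(np.vstack([a, b]))
```

Unlike plain PCA, the Sammon stress penalizes relative error on small distances, which is why the technique is good at keeping nearby samples (for example, sera with similar antibody levels) distinguishable in the 2D map.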
Procedia PDF Downloads 337
2164 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking
Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid
Abstract:
The trend toward renewable energy resources has been amplified by global warming and related environmental complications in the 21st century. Recent research has strongly emphasized generating electrical power from renewable resources such as solar, wind, hydro, and geothermal. Photovoltaic cells have become widely used for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module; as adoption grows, module prices have fallen, giving customers confidence in using this source for their electricity. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at a given surface, and this point shifts over a wide range depending on manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions, and the aging of the cells. Two improved algorithms are proposed in this article for maximum power point tracking (MPPT); the most widely used algorithms are 'Incremental Conductance' and 'Perturb and Observe'. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing previous techniques, this paper presents an improved and reformed approach to harvesting the maximum power point of photovoltaic cells. A thorough review of previous ideas was carried out before constructing the improvement to the traditional MPPT technique; each technique has its own merits and limitations under various weather conditions. An improved technique combining the use of both 'Perturb and Observe' and 'Incremental Conductance' is introduced.
Keywords: duty cycle, MPPT (maximum power point tracking), perturb and observe (P&O), photovoltaic module
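The 'Perturb and Observe' half of the proposal is simple enough to sketch directly: keep stepping the duty cycle in the same direction while measured power rises, and reverse when it falls. The panel model below is a toy concave power curve, an assumption for illustration, not real PV physics:

```python
def perturb_and_observe(measure, duty0=0.5, step=0.01, iters=50):
    """Classic P&O: perturb the converter duty cycle, observe the power,
    and reverse the perturbation direction whenever power drops."""
    duty = duty0
    v, i = measure(duty)
    p_prev = v * i
    direction = +1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        v, i = measure(duty)
        p = v * i
        if p < p_prev:
            direction = -direction   # overshot the peak: turn around
        p_prev = p
    return duty

# Toy panel: power is a concave curve peaking at duty = 0.62 (an assumed
# model, not measured PV behaviour); report it as (voltage, current).
def measure(duty):
    p = 100.0 - 400.0 * (duty - 0.62) ** 2
    return p / 5.0, 5.0

d = perturb_and_observe(measure)
print(round(d, 2))
```

The sketch also exposes P&O's known weakness, a steady-state oscillation of one step around the peak, which is exactly what hybridizing it with Incremental Conductance aims to suppress.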
Procedia PDF Downloads 176
2163 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus
Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo
Abstract:
Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of less-polluting internal combustion vehicles and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for public transport vehicles, especially those operating in the congested areas of cities. Even though the first electric buses are increasingly offered on the market, autonomy remains the central problem for battery-fed vehicles with long daily routes and little time available for recharging; at present, batteries are still too large and heavy to guarantee the required autonomy. Therefore, in order to maximize energy management on the vehicle, the optimization of driving profiles offers a faster and cheaper contribution to improving vehicle autonomy. In this paper, following the authors' previous work on electric vehicles in public transport and on energy management strategies in electric mobility, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution provides the bus driver with suggestions on the driving style to adopt. The results of a test in a real case are shown to highlight the effectiveness of the proposed solution in terms of energy saving.
Keywords: eco-drive, electric bus, energy management, prototype
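A back-of-the-envelope energy model shows why driving-style suggestions pay off for a battery-fed bus: lowering the peak speed between stops cuts the unrecovered braking share of the kinetic energy. All coefficients below (mass, rolling resistance, regeneration fraction, speeds) are illustrative assumptions, not the prototype's parameters:

```python
def traction_energy_kwh(mass_kg, v_peak_ms, distance_m, crr=0.008, regen=0.6):
    """Rough energy for one stop-to-stop hop: kinetic energy (partly
    recovered by regenerative braking) plus rolling resistance; aero drag
    ignored at urban bus speeds. Coefficients are assumptions."""
    kinetic = 0.5 * mass_kg * v_peak_ms ** 2 * (1 - regen)
    rolling = crr * mass_kg * 9.81 * distance_m
    return (kinetic + rolling) / 3.6e6   # joules to kWh

# A gentler profile peaking at 40 km/h instead of 50 km/h over the same
# 400 m hop, for an assumed 9-tonne minibus:
e_fast = traction_energy_kwh(9000, 50 / 3.6, 400)
e_slow = traction_energy_kwh(9000, 40 / 3.6, 400)
print(round(e_fast - e_slow, 3), "kWh saved per hop")
```

Multiplied over the hundreds of stop-to-stop hops in a daily route, savings of this order are what make real-time driving-style advice a cheap lever on autonomy.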
Procedia PDF Downloads 142
2162 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities
Authors: Shoba Rathilal
Abstract:
High unemployment rates alongside a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the unemployment rate is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit of experienced and qualified potential employees is reported in South Africa, causing fierce rivalry among firms. Employers report that graduates are rarely able to meet the demands of the job, as there are gaps in their knowledge and conceptual understanding and in the other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, unemployment rates and the suitability of graduates appear to be skewed by race and social class, the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and entrepreneurial. This requires a re-envisioning of the selection, sequencing, and pacing of learning, teaching, and assessment in higher education curricula and pedagogy. At a particular higher education institution in South Africa, design thinking (DT) and project-based learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a 'distinctive education' that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. These methodologies require students to solve real-world applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services. The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through the effective integration of theory and practice. In principle, both approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as 'smooth' as anticipated. This presentation reports on an analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through surveys, evaluation feedback at workshops, and the content of project reports, and were analyzed using document, content, and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT range from the capacity of both staff and students to engage with them, to the contextual realities of higher education institutions, administrative processes, and resources. At the same time, implementation was enabled through the allocation of strategic funding and capacity development workshops, although these factors could not achieve maximum impact. The presentation concludes with recommendations on how DT and PBL could be adapted for differing contexts.
Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development
Procedia PDF Downloads 77
2161 Rejuvenation of Aged Kraft-Cellulose Insulating Paper Used in Transformers
Authors: Y. Jeon, A. Bissessur, J. Lin, P. Ndungu
Abstract:
Most transformers use cellulose paper, chemically modified through the Kraft process, as an effective insulator. Cellulose ageing and oil degradation are directly linked to fouling of the transformer and the accumulation of large quantities of waste insulating paper. In addition to the technical difficulties, this proves costly for power utilities. Currently, no cost-effective method for the rejuvenation of cellulose paper has been documented or proposed, since renewal of used insulating paper is implemented as the best option. This study proposes and contrasts different methods for rejuvenating accelerated-aged cellulose insulating paper by chemical and bio-bleaching processes. Of the three bleaching methods investigated, two are chemical, conventional chlorine-based sodium hypochlorite (m/v) and chlorine-free hydrogen peroxide (v/v), whilst the third is a bio-bleaching technique that uses a bacterium isolate, Acinetobacter strain V2. For chemical bleaching, bleaching reagent strengths of 0.3 %, 0.6 %, 0.9 %, 1.2 %, 1.5 % and 1.8 % over 4 hrs were analyzed. Bio-bleaching implemented the bacterium isolate, Acinetobacter strain V2, to bleach the aged Kraft paper over 4 hrs. The determination of alpha cellulose content, degree of polymerization and viscosity was carried out on Kraft-cellulose insulating paper before and after bleaching. Overall, the investigated chemical and bio-bleaching techniques were successful and effective in treating degraded, accelerated-aged Kraft-cellulose insulating paper, though to varying extents. Optimum conditions for chemical bleaching were attained at bleaching strengths of 1.2 % (m/v) NaOCl and 1.5 % (v/v) H2O2, yielding alpha cellulose contents of 82.4 % and 80.7 % and degrees of polymerization of 613 and 616, respectively. 
Bio-bleaching using Acinetobacter strain V2 proved to be the superior technique, with an alpha cellulose level of 89.0 % and a degree of polymerization of 620. Chemical bleaching techniques require careful and controlled clean-up treatments, as they are chlorine- and hydrogen peroxide-based, while bio-bleaching is an extremely eco-friendly technique.
Keywords: alpha cellulose, bio-bleaching, degree of polymerization, Kraft-cellulose insulating paper, transformer, viscosity
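The three treatments can be compared side by side using the optima reported in the abstract. The following minimal Python sketch (the dictionary keys and field names are illustrative, not from the study) tabulates those figures and selects the treatment with the highest alpha cellulose content:

```python
# Reported optima from the abstract: alpha cellulose content (%) and
# degree of polymerization (DP) for each bleaching treatment.
results = {
    "NaOCl 1.2% (m/v)": {"alpha_cellulose": 82.4, "dp": 613},
    "H2O2 1.5% (v/v)": {"alpha_cellulose": 80.7, "dp": 616},
    "Acinetobacter strain V2": {"alpha_cellulose": 89.0, "dp": 620},
}

# Rank treatments by alpha cellulose content, the abstract's headline measure.
best = max(results, key=lambda name: results[name]["alpha_cellulose"])
print(best)  # Acinetobacter strain V2
```

Ranking by degree of polymerization instead would give the same ordering here, since the bio-bleached paper also shows the highest DP (620).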
Procedia PDF Downloads 270