Search results for: conventionally manufacturing techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8289

1659 Geospatial Techniques for Impact Assessment of Canal Rehabilitation Program in Sindh, Pakistan

Authors: Sumaira Zafar, Arjumand Zaidi, Muhammad Arslan Hafeez

Abstract:

Indus Basin Irrigation System (IBIS) is the largest contiguous irrigation system in the world, comprising the Indus River and its tributaries, canals, distributaries, and watercourses. A major challenge faced by IBIS is transmission losses through seepage and leaks, which account for 41 percent of the total water diverted from the river, about 40 percent of which is lost through watercourses. Irrigation system rehabilitation programs in Pakistan are focused on improvement of the canal system at the watercourse level (tertiary channels). Under these irrigation system management programs, more than 22,800 watercourses out of 43,000 (12,900 kilometers) have been improved or lined. Evaluation of the improvement work is required at this stage to verify the success of the programs. In this paper, the emerging technologies of GIS and satellite remote sensing are used for impact assessment of watercourse rehabilitation work in Sindh. To evaluate the efficiency of the improved watercourses, a few parameters are selected, such as soil moisture along watercourses, availability of water at the tail end, and changes in cultivable command areas. Improved watercourse details and maps were acquired from the National Program for Improvement of Watercourses (NPIW) and the Space and Upper Atmospheric Research Commission (SUPARCO). High-resolution Google Earth satellite images for the years 2004 to 2013 were used for digitizing command areas. Temporal maps of cultivable command areas show a noticeable increase in the cultivable land served by improved watercourses. Field visits were conducted to validate the results. Interviews with farmers and landowners also reveal their overall satisfaction in terms of availability of water at the tail end and increased crop production.
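The core of the temporal command-area comparison is computing the area of each digitized polygon per year. A minimal sketch using the shoelace formula, with hypothetical projected vertex coordinates (in km) standing in for the digitized outlines, not the study's actual data:

```python
def polygon_area_km2(coords):
    """Shoelace formula: area of a simple polygon given (x, y) vertices in km."""
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical digitized command-area outlines for two years.
area_2004 = polygon_area_km2([(0, 0), (4, 0), (4, 3), (0, 3)])   # 12.0 km^2
area_2013 = polygon_area_km2([(0, 0), (5, 0), (5, 4), (0, 4)])   # 20.0 km^2
pct_increase = 100.0 * (area_2013 - area_2004) / area_2004       # ~66.7 %
```

Comparing the per-year totals in this way yields the temporal change in cultivable command area described above.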

Keywords: geospatial, impact assessment, watercourses, GIS, remote sensing, seepage, canal lining

Procedia PDF Downloads 326
1658 Reactivation of Hydrated Cement and Recycled Concrete Powder by Thermal Treatment for Partial Replacement of Virgin Cement

Authors: Gustave Semugaza, Anne Zora Gierth, Tommy Mielke, Marianela Escobar Castillo, Nat Doru C. Lupascu

Abstract:

The generation of Construction and Demolition Waste (CDW) has increased enormously worldwide due to the growing need for construction, renovation, and demolition of building structures. Several studies have investigated the use of CDW materials in the production of new concrete and reported lower mechanical properties for the resulting concrete. Many other researchers have considered the possibility of using Hydrated Cement Powder (HCP) to replace part of the Ordinary Portland Cement (OPC), but only very few have investigated the use of Recycled Concrete Powder (RCP) from CDW. Partial replacement of OPC in new concrete is intended to decrease the CO₂ emissions associated with OPC production. However, RCP and HCP need treatment to produce new concrete of the required mechanical properties. Thermal treatment has proven to improve HCP properties before use. Previous research has shown that for using HCP in concrete, optimum results are achievable by heating HCP between 400°C and 800°C. The optimum heating temperature depends on the type of cement used to make the Hydrated Cement Specimens (HCS), the crushing and heating method for the HCP, and the curing method for the Rehydrated Cement Specimens (RCS). This research assessed the quality of the recycled materials using techniques such as X-ray Diffraction (XRD), Differential Scanning Calorimetry (DSC) and Thermogravimetry (TG), Scanning Electron Microscopy (SEM), and X-ray Fluorescence (XRF). The recycled materials were thermally pretreated at temperatures from 200°C to 1000°C. Additionally, the research investigated to what extent the thermally treated recycled cement could partially replace OPC and whether the new concrete produced would achieve the required mechanical properties. The mechanical properties were evaluated on the RCS, obtained by mixing the Dehydrated Cement Powder and Dehydrated Recycled Powder (DCP and DRP) with water (w/c = 0.6 and w/c = 0.45).
Compressive strength was measured with a compression testing machine, and flexural strength was assessed using the three-point bending test.
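The two strength tests reduce to standard formulas: compressive strength is load over cross-sectional area, and three-point flexural strength is σ = 3FL/(2bd²). A small sketch with hypothetical specimen dimensions and loads (not values from the study):

```python
def compressive_strength_mpa(load_kn, side_mm):
    """Cube compressive strength: load divided by cross-sectional area (N/mm^2 == MPa)."""
    area_mm2 = side_mm * side_mm
    return load_kn * 1000.0 / area_mm2

def flexural_strength_mpa(load_kn, span_mm, width_mm, depth_mm):
    """Three-point bending strength of a rectangular prism: sigma = 3 F L / (2 b d^2)."""
    f_n = load_kn * 1000.0
    return 3.0 * f_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Hypothetical specimens: 100 mm cube failing at 250 kN;
# 40 x 40 x 160 mm prism on a 100 mm span failing at 2 kN.
sigma_c = compressive_strength_mpa(250, 100)       # 25.0 MPa
sigma_f = flexural_strength_mpa(2, 100, 40, 40)    # 4.6875 MPa
```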

Keywords: hydrated cement powder, dehydrated cement powder, recycled concrete powder, thermal treatment, reactivation, mechanical performance

Procedia PDF Downloads 129
1657 Financial Burden of Occupational Slip and Fall Incidences in Taiwan

Authors: Kai Way Li, Lang Gan

Abstract:

Slip and fall incidents are common in Taiwan. They can result in injuries and even fatalities. Official statistics indicate that more than 15% of all occupational incidents were slip/fall related. All workers in Taiwan are required by law to join the workers' insurance program administered by the Bureau of Labor Insurance (BLI). The BLI is a government agency under the supervision of the Ministry of Labor. Workers file claims with the BLI for insurance compensation when they suffer fatalities or injuries at work. Injury statistics based on workers' compensation claims have rarely been studied. The objective of this study was to quantify the injury statistics and financial cost of slip-fall incidents based on the BLI compensation records. Compensation records in the BLI from 2007 to 2013 were retrieved. All the original application forms, approval opinions, and results for workers' compensation were in hardcopy and stored in the BLI warehouses. Photocopies of the claims, excluding the personal information of the applicants (or the victim, if deceased), were obtained. The content of the filing forms was coded in an Excel worksheet for further analyses. Descriptive statistics were performed to analyze the data. There were a total of 35,024 slip/fall-related claims, including 82 deaths, 878 disabilities, and 34,064 injuries/illnesses. It was found that the average loss for the death cases was 40 months. The total amount paid for these cases was 86,913,195 NTD. For the disability cases, the average loss was 367.36 days. The total amount paid for these cases was almost 2.6 times that for the death cases (233,324,004 NTD). For the injury/illness cases, the average loss was 58.78 days. The total amount paid for these cases was approximately 13 times that for the death cases (1,134,850,821 NTD). Of the applicants/victims, 52.3% were male.
There were more males than females among the death, disability, and injury/illness cases. Most (57.8%) of the female victims were between 45 and 59 years old, while most of the male victims (62.6%) were between 25 and 39 years old. Most of the victims were in the manufacturing industry (26.41%), followed by the construction industry (22.20%) and the retail industry (13.69%). For the fatality cases, head injury was the main cause of immediate or eventual death (74.4%). For the disability cases, foot (17.46%) and knee (9.05%) injuries were the leading problems. The compensation claims other than fatality and disability were mainly associated with injuries of the foot (18%), hand (12.87%), knee (10.42%), back (8.83%), and shoulder (6.77%). The slip/fall cases studied indicate that the ratios among the death, disability, and injury/illness counts were 1:10:415, and the ratios of the amounts paid by the BLI for the three categories were 1:2.6:13. These results indicate the significance of slip-fall incidents across different severities. Such information should be incorporated into slip-fall prevention programs in industry.
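The reported ratios can be checked directly from the counts and payment totals given in the abstract:

```python
# Figures reported in the abstract.
deaths, disab, injury = 82, 878, 34_064
paid = (86_913_195, 233_324_004, 1_134_850_821)  # NTD: death, disability, injury/illness

count_ratio = (1.0, disab / deaths, injury / deaths)   # ~ (1, 10.7, 415.4) -> "1:10:415"
cost_ratio = tuple(p / paid[0] for p in paid)          # ~ (1, 2.7, 13.1) -> "1:2.6:13"
```

The computed values round to roughly 1:10.7:415 and 1:2.7:13.1, consistent with the rounded ratios stated in the text.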

Keywords: epidemiology, slip and fall, social burden, workers’ compensation

Procedia PDF Downloads 307
1656 A Mixed Methods Study: Evaluation of Experiential Learning Techniques throughout a Nursing Curriculum to Promote Empathy

Authors: Joan Esper Kuhnly, Jess Holden, Lynn Shelley, Nicole Kuhnly

Abstract:

Empathy serves as a foundational nursing principle inherent in the nurse's ability to form the relationships through which patients are cared for. Evidence supports including empathy in nursing and healthcare education, but there are limited data on which methods are effective for doing so. Building evidence supports experiential and interactive learning methods as effective ways for students to gain insight and perspective from a personalized experience. The purpose of this project is to evaluate learning activities designed to promote the attainment of empathic behaviors across five levels of the nursing curriculum. Quantitative analysis will be conducted on data from pre- and post-learning activities using the Toronto Empathy Questionnaire. The main hypothesis, that simulation learning activities will increase empathy, will be examined using a repeated-measures Analysis of Variance (ANOVA) on pre and post Toronto Empathy Questionnaire scores for three simulation activities (Stroke, Poverty, Dementia). Pearson product-moment correlations will be conducted to examine the relationships between continuous demographic variables, such as age, credits earned, and years practicing, and the dependent variable of interest, post-test Toronto Empathy scores. Krippendorff's method of content analysis will be conducted to identify the quantitative incidence of empathic responses. The researchers will use Colaizzi's descriptive phenomenological method to describe the students' simulation experience and understand its impact on caring and empathy behaviors, employing bracketing to maintain objectivity. The results will be presented in answer to multiple research questions, and the discussion will relate the results to educational pedagogy in the nursing curriculum as it concerns the attainment of empathic behaviors.
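For a single pre/post comparison, the repeated-measures F statistic reduces to the square of the paired t-statistic, which can be computed in pure Python. The scores below are fabricated for illustration, not study data:

```python
import math

def paired_t(pre, post):
    """Paired t-statistic for pre/post scores (one cell of the repeated-measures design).
    For two time points, the repeated-measures ANOVA F equals this t squared."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical Toronto Empathy Questionnaire scores before/after a simulation.
pre = [42, 45, 39, 50, 47, 44]
post = [46, 48, 41, 53, 50, 45]
t = paired_t(pre, post)   # positive t -> scores rose after the simulation
```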

Keywords: curriculum, empathy, nursing, simulation

Procedia PDF Downloads 94
1655 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this effort, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds up under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed is a substantial time saver with high measurement value: it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
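Once the U-net produces a binary mask, per-crystal measurement starts with connected-component labelling. A pure-Python sketch of the labelling-plus-area step, a stand-in for the object delimitation algorithm described above, operating on a toy mask:

```python
def label_crystals(mask):
    """4-connected component labelling of a binary segmentation mask (lists of 0/1).
    Returns a label grid and a dict of {label: area in pixels}."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    areas = {}
    next_label = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                next_label += 1
                stack = [(i, j)]
                labels[i][j] = next_label
                area = 0
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                areas[next_label] = area
    return labels, areas

# Toy mask containing two separate "crystals" of 3 pixels each.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
grid, areas = label_crystals(mask)
```

From the per-label areas (and, with a boundary walk, perimeters), the frequency distributions described in the abstract follow directly.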

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 146
1654 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution

Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi

Abstract:

Lead (Pb²⁺), a cation, is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based paints and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosives manufacturing, storage batteries, and the alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the accepted 'safe' level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can lead to health problems such as kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are a kind of hydrogel capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shapes of ordinary hydrogels do not change extensively during swelling, the tremendous swelling capacity of superabsorbents means their shape changes broadly. Because of their superb response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest for numerous industrial applications, for instance those exploiting their water retention properties. Natural-based superabsorbent hydrogels have attracted much attention in medical, pharmaceutical, baby diaper, agricultural, and horticultural applications because of their non-toxicity, biocompatibility, and biodegradability.
Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) as a crosslinking agent and potassium persulfate as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was studied. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effect of the clay content of the hydrogel on its metal ion uptake behavior was studied. The NC hydrogels may be considered good candidates for environmental applications to retain more water and to remove heavy metals.
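The batch adsorption metrics behind such studies are simple: percent removal, and the equilibrium capacity q_e = (C0 - Ce)V/m. A sketch with hypothetical concentrations, not measurements from this work:

```python
def removal_percent(c0, ce):
    """Percent Pb2+ removed: (C0 - Ce) / C0 * 100, concentrations in mg/L."""
    return 100.0 * (c0 - ce) / c0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium capacity q_e = (C0 - Ce) * V / m, in mg Pb2+ per g of hydrogel."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical batch run: 50 mg/L Pb2+ initial, 4 mg/L residual,
# 0.1 L of solution treated with 0.05 g of nanocomposite.
removal = removal_percent(50, 4)                 # 92.0 %
qe = adsorption_capacity(50, 4, 0.1, 0.05)       # 92.0 mg/g
```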

Keywords: adsorption, hydrogel, nanocomposite, super adsorbent

Procedia PDF Downloads 168
1653 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization, making use of their feature extraction capabilities in hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. From 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images reproduce the characteristic haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than on the initial dataset and with other augmentation techniques. Among these models, ImageNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively.
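The Fréchet inception distance used to compare the generators measures the distance between the mean and covariance of real versus generated feature distributions. Under a simplifying diagonal-covariance assumption (real FID uses full covariance matrices and a matrix square root), it reduces to a few lines:

```python
import math

def fid_diagonal(mu_r, var_r, mu_g, var_g):
    """Fréchet distance between two Gaussians with diagonal covariances:
    ||mu_r - mu_g||^2 + sum_i (sqrt(var_r_i) - sqrt(var_g_i))^2.
    Lower is better; 0 means identical feature statistics."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_r, mu_g))
    cov_term = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(var_r, var_g))
    return mean_term + cov_term

# Identical feature statistics give 0; shifting one mean component by 1 gives 1.
fid_same = fid_diagonal([0.5, 1.0], [1.0, 4.0], [0.5, 1.0], [1.0, 4.0])     # 0.0
fid_shift = fid_diagonal([0.5, 1.0], [1.0, 4.0], [1.5, 1.0], [1.0, 4.0])    # 1.0
```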

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 62
1652 Effect of Plasma Discharge Power on Activation Energies of Plasma Poly(Ethylene Oxide) Thin Films

Authors: Sahin Yakut, H. Kemal Ulutas, Deniz Deger

Abstract:

The Plasma Assisted Physical Vapor Deposition (PAPVD) method was used to produce plasma poly(ethylene oxide) (pPEO) thin films. Depositions were carried out at various plasma discharge powers (0, 2, 5, and 30 W) at a film thickness of 500 nm. The capacitance and dielectric dissipation of the thin films were measured over the 0.1 Hz to 10⁷ Hz frequency range and the 173-353 K temperature range with an impedance analyzer. The AC conductivity (σac) and activation energies were then derived from the capacitance and dielectric dissipation. The σac of conventional PEO (the PEO precursor) was measured to determine the effect of the plasma discharge. Differences were observed between the AC conductivities of PEO and pPEO depending on plasma discharge power. For this reason, structural characterization techniques such as Differential Scanning Calorimetry (DSC) and Fourier Transform Infrared Spectroscopy (FT-IR) were applied to the pPEO thin films. The structural analysis showed that the crosslinking density is plasma-power dependent: it increases with increasing plasma discharge power, and this increase appears as increasing dynamic glass transition temperatures in the DSC results. The FTIR results also show shifts in the frequencies of some bond vibrations produced after fragmentation caused by the plasma discharge. The dynamic glass transition temperatures obtained from the AC conductivity results for pPEO are consistent with the DSC results. The activation energies exhibit Arrhenius behavior and decrease with increasing plasma discharge power. This behavior supports the suggestion that long polymer chains and long oligomers are fragmented into smaller oligomers or radicals.
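The activation energies follow from an Arrhenius fit: the slope of ln σac versus 1/T equals -Ea/kB. A sketch that recovers a known Ea from synthetic data (illustrative values, not the paper's measurements):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def activation_energy_ev(temps_k, sigma_ac):
    """Least-squares slope of ln(sigma) vs 1/T; E_a = -slope * k_B (Arrhenius)."""
    x = [1.0 / t for t in temps_k]
    y = [math.log(s) for s in sigma_ac]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return -slope * K_B_EV

# Synthetic conductivities generated with E_a = 0.30 eV; the fit should recover it.
ea_true = 0.30
temps = [200.0, 250.0, 300.0, 350.0]
sigma = [1e-6 * math.exp(-ea_true / (K_B_EV * t)) for t in temps]
ea_fit = activation_energy_ev(temps, sigma)   # ~0.30 eV
```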

Keywords: activation energy, dielectric spectroscopy, organic thin films, plasma polymer

Procedia PDF Downloads 284
1651 Formulation Development, Process Optimization and Comparative Study of Poorly Compressible Drugs Ibuprofen and Acetaminophen Using Direct Compression and Top Spray Granulation Technique

Authors: Abhishek Pandey

Abstract:

Ibuprofen and acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine, and postoperative conditions, and in the management of spondylitis, osteoarthritis, and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency to stick to the punches of a tablet punching machine, while acetaminophen is not ordinarily compressible into a tablet formulation because acetaminophen crystals are very hard and brittle and fracture easily when compressed, producing capping and lamination tablet defects; therefore, the wet granulation method is used to make them compressible. The aim of the study was to prepare ibuprofen and acetaminophen tablets by direct compression and the top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, lactose anhydrous (DCL21), and microcrystalline cellulose (Avicel PH 101). In order to obtain the best or optimized formulation, nine different formulations were generated, among which batches F7, F8, and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared using the top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which parameters such as inlet air temperature, spray rate, peristaltic pump rpm, % LOD, granule properties, blending time, and hardness were optimized. Batch T3 was chosen as the optimized batch on the basis of physical and chemical evaluation. Finally, the formulations prepared by both techniques were compared.
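One of the optimized process parameters, % LOD (loss on drying), is a simple moisture calculation. The sample weights below are hypothetical:

```python
def percent_lod(wet_g, dry_g):
    """Loss on drying: moisture lost during drying as a percent of the wet weight."""
    return 100.0 * (wet_g - dry_g) / wet_g

# Hypothetical fluid-bed sample: 10.00 g of wet granules dry to 9.72 g.
lod = percent_lod(10.00, 9.72)   # ~2.8 %
```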

Keywords: direct compression, top spray granulation, process optimization, blending time

Procedia PDF Downloads 343
1650 Hormone Replacement Therapy (HRT) and Its Impact on the All-Cause Mortality of UK Women: A Matched Cohort Study 1984-2017

Authors: Nurunnahar Akter, Elena Kulinskaya, Nicholas Steel, Ilyas Bakbergenuly

Abstract:

Although Hormone Replacement Therapy (HRT) is an effective treatment for ameliorating menopausal symptoms, it has mixed effects on different health outcomes, increasing, for instance, the risk of breast cancer. Because of this, many symptomatic women are left untreated. Untreated menopausal symptoms may result in other health issues, which eventually place an extra burden and cost on the health care system. All-cause mortality analysis may capture the net benefits and risks of HRT, yet it has received far less attention in HRT studies. This study investigated the impact of HRT on all-cause mortality using electronically recorded primary care data from The Health Improvement Network (THIN), which broadly represents the female population of the United Kingdom (UK). The study entry date was the record of the first HRT prescription from 1984 onwards, and patients were followed up until death, transfer to another GP practice, or the study end date of January 2017. 112,354 HRT users (cases) were matched with 245,320 non-users by age at HRT initiation and general practice (GP). The hazards of all-cause mortality associated with HRT were estimated by a parametric Weibull-Cox model adjusting for a wide range of important medical, lifestyle, and socio-demographic factors. Multilevel multiple imputation techniques were used to deal with missing data. This study found that over 32 years of follow-up, combined HRT reduced the hazard ratio (HR) of all-cause mortality by 9% (HR: 0.91; 95% Confidence Interval, 0.88-0.94) in women aged 46 to 65 at first treatment, compared to non-users of the same age. Age-specific mortality analyses found that combined HRT decreased mortality by 13% (HR: 0.87; 95% CI, 0.82-0.92), 12% (HR: 0.88; 95% CI, 0.82-0.93), and 8% (HR: 0.92; 95% CI, 0.85-0.98) in the 51 to 55, 56 to 60, and 61 to 65 age groups at first treatment, respectively.
There was no association between estrogen-only HRT and women’s all-cause mortality. The findings from this study may help to inform the choices of women at menopause and to further educate the clinicians and resource planners.
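A hazard ratio and its confidence interval come from exponentiating the model's log-hazard coefficient and its standard error. A sketch using a standard error chosen to reproduce the headline interval above (the SE itself is not given in the abstract):

```python
import math

def hazard_ratio_ci(log_hr, se, z=1.96):
    """Hazard ratio with a 95% CI from a log-hazard estimate and its standard error."""
    hr = math.exp(log_hr)
    return hr, math.exp(log_hr - z * se), math.exp(log_hr + z * se)

# Hypothetical model output consistent with the reported HR 0.91 (0.88-0.94).
log_hr = math.log(0.91)
se = 0.017
hr, lo, hi = hazard_ratio_ci(log_hr, se)   # ~0.91 (0.88-0.94)
```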

Keywords: hormone replacement therapy, multiple imputations, primary care data, the health improvement network (THIN)

Procedia PDF Downloads 150
1649 Understanding Student Engagement through Sentiment Analytics of Response Times to Electronically Shared Feedback

Authors: Yaxin Bi, Peter Nicholl

Abstract:

The rapid advancement of information and communication technologies (ICT) is profoundly influencing every aspect of Higher Education. It has transformed traditional teaching, learning, assessment, and feedback into a new era of Digital Education. This also introduces many challenges in capturing and understanding student engagement with their studies in Higher Education. The School of Computing at Ulster University has developed a Feedback And Notification (FAN) online tool that has been used to send students links to personalized feedback on their submitted assessments and to record how frequently students review the shared feedback as well as how quickly they collect it. The feedback is initially delivered via a personal email directing the student to a URL link that maps to the feedback created by the academic marker, typically a Word or PDF report including comments and the final mark for the work submitted approximately three weeks before. When the student clicks on the link, their personal feedback is viewable in the browser. The FAN tool provides the academic marker with a report of when and how often a student viewed the feedback via the link. This paper presents an investigation into student engagement through analysis of the interaction timestamps and the frequency of review by the student. We propose an approach to modeling interaction timestamps and use sentiment classification techniques to analyze data collected over the last five years for a set of modules. The data studied spans a number of final-year and second-year modules in the School of Computing. The paper presents the details of the quantitative analysis methods and further describes students' interactions with the feedback over time on each module studied.
We have grouped the students into different engagement bands based on the sentiment analysis results, and our proposed model then suggests early targeted intervention for the set of students seen to be under-performing.
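The timestamp modelling described above starts from simple response-time features. A sketch with hypothetical timestamps and illustrative thresholds (the study's actual grouping comes from sentiment classification, not these cut-offs):

```python
from datetime import datetime

def hours_to_first_view(sent, first_click):
    """Hours between the feedback email being sent and the first click on the link."""
    return (first_click - sent).total_seconds() / 3600.0

def engagement_group(hours, views):
    """Band engagement by response speed and review frequency.
    The thresholds are purely illustrative, not those of the FAN study."""
    if hours <= 24 and views >= 3:
        return "high"
    if hours <= 72:
        return "medium"
    return "low"

# Hypothetical student: email sent 09:00, first viewed 21:30 the same day, 4 views.
sent = datetime(2024, 3, 1, 9, 0)
delay = hours_to_first_view(sent, datetime(2024, 3, 1, 21, 30))  # 12.5 hours
group = engagement_group(delay, views=4)                         # "high"
```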

Keywords: feedback, engagement, interaction modelling, sentiment analysis

Procedia PDF Downloads 84
1648 Developments in Performance of Autistic Students in the Egyptian School System

Authors: Magy Atef Awad Attia

Abstract:

The objective of this study was to examine the effect of social stories on the social interaction of students with autism. The sample was a Level 5 student with autism at a university demonstration school, diagnosed by a physician with high-functioning autism, since he was able to read, write, and calculate and was studying in an inclusive classroom. However, he still had difficulty with social interaction, participation in group social activities, and communication. He could not learn how to develop friendships or build relationships, behaved inappropriately in social contexts, and did not understand complex social situations. In addition, he seemed unaware of time and place. He was unable to understand his own feelings or those of others and consequently could not express his emotions appropriately. He did not understand or use non-verbal language for communicating with friends and lacked common interests or shared emotion with those around him. He greeted others inappropriately or showed no interest in greeting, made no eye contact, and used inadequate language. He was selected by purposive sampling, and his parents consented to his participation in this study. The research instruments were the social stories lesson plan and the social stories picture book; the instrument used for data collection was a social interaction evaluation for autistic students. This was experimental research with a one-group pre-test/post-test design: after the pre-test, the intervention was conducted using social stories, and then the post-test was implemented. Results, reported on a rating scale, revealed that the student showed better social interaction after being taught with social stories.

Keywords: autism, autistic behavior, social stories, social interaction

Procedia PDF Downloads 20
1647 Tourism Oriented Planning Experience in the Historical City Center of Trabzon (Turkey) with Strategic Spatial Planning Approach: Evaluation of Approach and Process

Authors: Emrehan Ozcan, Dilek Beyazlı

Abstract:

The development of tourism depends on an accurate planning approach as well as on the right planning process; this dependency is also a key factor in ensuring the sustainability of tourism. The types of tourism, social expectations, planning practice, and the socio-economic and cultural structure of the region are determinants of planning approaches for tourism development. Tourism plans prepared for historic city centers are usually based on the revitalization of cultural and historical values. The preservation and development of the tourism potential of historic city centers are important for providing an economic contribution to the locality, creating livable solutions for local residents, and ensuring the sustainability of tourism. This research reports and discusses a planning approach intended to deliver tourism development based on historical and cultural values. The historical and cultural assets of the historical city center of Trabzon, which has a settlement history of approximately 4,000 years and is located on the Black Sea coast of Turkey, have worn away over the years and are losing their tourism potential. A planning study using the strategic spatial planning approach was carried out for Trabzon, for which no tourism-oriented planning study had previously been undertaken. The stages of the planning process provided by the strategic spatial planning approach are: assessment of the current situation; vision, strategies, and actions; action planning; design and implementation of actions; and monitoring-evaluation. In the discussion section, the advantages, planning process, and methods and techniques of the approach are discussed with respect to the possibilities and constraints of tourism planning. In this context, it is aimed to set out the tourism planning process, its stages, and its implementation tools within the scope of the strategic spatial planning approach by comparing the approaches used in tourism-oriented/priority planning of historic city centers.
Suggestions on the position and effect of the preferred planning approach in the existing spatial planning practice are the outputs of the study.

Keywords: cultural heritage, tourism-oriented planning, Trabzon, strategic spatial planning

Procedia PDF Downloads 242
1646 Modelling, Assessment, and Optimisation of Rules for Selected Umgeni Water Distribution Systems

Authors: Khanyisile Mnguni, Muthukrishnavellaisamy Kumarasamy, Jeff C. Smithers

Abstract:

Umgeni Water is a water board that supplies most parts of KwaZulu-Natal with bulk potable water. Currently, Umgeni Water runs its distribution system based on required reservoir levels and demands and does not consider the energy cost at different times of the day, the number of pump switches, or background leakages. Including these constraints can reduce operational cost, energy usage, and leakages, and increase performance. Optimising pump schedules can reduce energy usage and costs while adhering to hydraulic and operational constraints. Umgeni Water has installed online hydraulic software, WaterNet Advisor, that allows different operational scenarios to be run prior to implementation in order to optimise the distribution system. This study will investigate operational scenarios using optimisation techniques and WaterNet Advisor for a local water distribution system. Based on studies reported in the literature, introducing pump scheduling optimisation can reduce energy usage by approximately 30% without any change in infrastructure. Including tariff structures in an optimisation problem can reduce pumping costs by 15%, while including leakages decreases cost by 10%, and pressure drops in the system can be up to 12 m. Genetic optimisation algorithms are widely used due to their ability to solve nonlinear, non-convex, and mixed-integer problems. Other methods, such as branch-and-bound linear programming, have also been used successfully. A suitable optimisation method will be chosen based on its efficiency. The objective of the study is to reduce energy usage, operational cost, and leakages, and the feasibility of the optimal solution will be checked using WaterNet Advisor. This study will provide an overview of the optimisation of hydraulic networks and progress made to date in multi-objective optimisation for a selected sub-system operated by Umgeni Water.
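The pump-scheduling idea described above can be sketched as a small genetic algorithm over a 24-hour on/off schedule. This is a toy illustration, not the WaterNet Advisor model: the tariff, pump size, demand profile, reservoir limits, and switch penalty are all invented values, and the fitness simply combines energy cost with penalties for pump switches and reservoir-level violations.

```python
import random

# Hypothetical time-of-use tariff (currency units per kWh) over 24 hours:
# cheap off-peak at night, expensive morning and evening peaks.
TARIFF = [0.5] * 6 + [1.5] * 4 + [1.0] * 8 + [1.8] * 4 + [0.5] * 2
PUMP_KW = 200.0          # pump power draw when running
INFLOW_PER_HOUR = 800.0  # m3/h pumped into the reservoir when the pump is on
DEMAND = [500.0] * 24    # m3/h drawn by consumers (flat demand, for simplicity)
RES_MIN, RES_MAX = 2000.0, 10000.0  # reservoir level limits (m3)
RES_START = 5000.0

def cost_and_feasible(schedule):
    """Energy cost of a 24-bit on/off schedule plus reservoir feasibility."""
    level, cost, switches = RES_START, 0.0, 0
    for h, on in enumerate(schedule):
        if on:
            cost += PUMP_KW * TARIFF[h]
            level += INFLOW_PER_HOUR
        level -= DEMAND[h]
        if h and schedule[h] != schedule[h - 1]:
            switches += 1
        if not (RES_MIN <= level <= RES_MAX):
            return cost, switches, False
    return cost, switches, True

def fitness(schedule, switch_penalty=50.0):
    cost, switches, ok = cost_and_feasible(schedule)
    return cost + switch_penalty * switches + (0.0 if ok else 1e6)

def ga(pop_size=60, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(24)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, 23)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:            # mutation: flip one hour
                i = rng.randrange(24)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
cost, switches, ok = cost_and_feasible(best)
```

Under these assumed numbers the GA settles on a schedule that pumps mainly in off-peak hours while keeping the reservoir within its limits.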

Keywords: energy usage, pump scheduling, WaterNet Advisor, leakages

Procedia PDF Downloads 78
1645 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions, and gross domestic product (GDP) for Israel using time series analysis over the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables in the sample. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there exists a positive unidirectional causality running from coal consumption to the consumption of petroleum products and the direct combustion of crude oil, there exists a negative unidirectional causality running from natural gas consumption to the consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use, and economic output, but that the associations can differ by energy source in the case of Israel over the period 1980-2010.
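The Phillips-Perron/Johansen/VECM machinery above requires a full econometrics package. Purely as a hedged illustration of the underlying error-correction idea, the following sketches the simpler two-step Engle-Granger procedure on synthetic cointegrated series; the data and the y = 2x long-run relation are invented, and this is not the paper's estimation.

```python
import random

random.seed(0)
T = 200
# Synthetic "GDP" x follows a random walk; "energy use" y tracks it with
# stationary noise, so x and y are cointegrated by construction.
x = [0.0]
for _ in range(T - 1):
    x.append(x[-1] + random.gauss(0, 1))
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]

def ols_slope(u, v):
    """Slope of v regressed on u (both demeaned)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

# Step 1: estimate the long-run relation y = beta * x + e; keep residuals e_t.
beta = ols_slope(x, y)
resid = [yi - beta * xi for xi, yi in zip(x, y)]

# Step 2: short-run equation: regress the change in y on the lagged
# disequilibrium e_{t-1}. A negative coefficient (gamma) means y adjusts
# back toward the long-run equilibrium, i.e. error correction is at work.
dy = [y[t] - y[t - 1] for t in range(1, T)]
lag_resid = resid[:-1]
gamma = ols_slope(lag_resid, dy)
```

With this construction, beta recovers a value near 2 and gamma comes out negative, the signature of error correction that the VECM generalises to multiple variables.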

Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis

Procedia PDF Downloads 631
1644 Optimum Turbomachine Preliminary Selection for Power Regeneration in Vapor Compression Cool Production Plants

Authors: Sayyed Benyamin Alavi, Giovanni Cerri, Leila Chennaoui, Ambra Giovannelli, Stefano Mazzoni

Abstract:

Concerns about primary energy consumption, emissions of pollutants (including CO2), and sustainability call for methodologies to lower the power absorbed per unit of a given product. Cool production plants based on vapour compression are widely used for many applications: air conditioning, food conservation, domestic refrigerators and freezers, special industrial processes, etc. In the field of cool production, the amount of yearly consumed primary energy is enormous; thus, saving some percentage of it would have a large worldwide impact on energy consumption and the related energy sustainability. Among the various techniques to reduce the power required by a Vapour Compression Cool Production Plant (VCCPP), the technique based on power regeneration by means of an Internal Direct Cycle (IDC) is considered in this paper. The power produced by the IDC reduces the power needed per unit of cool power produced by the VCCPP. The paper contains the basic concepts that lead to the development of IDCs and the proposed options for using the IDC power. Among the various selections of turbomachines, Best Economically Available Technologies (BEATs) have been explored. Being based on vehicle engine turbochargers, they have been taken into consideration for this application. According to the BEAT database and similarity rules, the best turbomachine selection leads to the minimum nominal power required by the VCCPP main compressor. Results obtained by installing the prototype in a purpose-designed test bench will be discussed and compared with the expected performance. Forecasts for upgraded VCCPPs in various applications will be given and discussed: 4-6% savings are expected for air-conditioning cooling plants and 15-22% for cryogenic plants.
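The similarity-rule preselection mentioned above can be illustrated by matching the duty point's dimensionless specific speed against candidate machines. The candidate "database" and its figures below are purely illustrative placeholders, not real BEAT data, and the duty-point numbers are invented.

```python
import math

def specific_speed(omega, Q, dh):
    """Dimensionless specific speed: omega [rad/s], Q volumetric flow
    [m3/s], dh specific stage work [J/kg]."""
    return omega * math.sqrt(Q) / dh ** 0.75

# Hypothetical turbocharger compressor stages with their design-point
# specific speeds (illustrative values only).
CANDIDATES = {
    "TC-small":  0.5,
    "TC-medium": 0.8,
    "TC-large":  1.2,
}

def select(omega, Q, dh):
    """Pick the candidate whose design specific speed is closest to the
    duty point's, a crude stand-in for full similarity-rule matching."""
    ns = specific_speed(omega, Q, dh)
    name = min(CANDIDATES, key=lambda k: abs(CANDIDATES[k] - ns))
    return name, ns

name, ns = select(omega=12000.0, Q=0.05, dh=30000.0)
```

For this invented duty point the specific speed works out to about 1.18, so the "TC-large" stage would be preselected for detailed evaluation.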

Keywords: refrigeration plant, vapour pressure amplifier, compressor, expander, turbine, turbomachinery selection, power saving

Procedia PDF Downloads 410
1642 The Advantages of Using DNA-Barcoding for Determining Fraud in Seafood

Authors: Elif Tugce Aksun Tumerkan

Abstract:

Although seafood is an important part of the human diet and is categorized as a highly traded food commodity internationally, it generally remains overlooked in the global food security context. Food product authentication is of central interest, with the aim of both avoiding commercial fraud and addressing risks that might be harmful to human health and safety. In recent years, with increasing consumer demand regarding food content and its transparency, instrumental analyses for determining food fraud have emerged based on analytical methodologies such as proteomics and metabolomics. While fish and seafood were previously consumed fresh, with advancing technology the consumption of processed or packaged seafood has increased. After seafood is processed or packaged, morphological identification is impossible once some of the external features have been removed. The main fish and seafood quality-related issues are the authentication of seafood contents, such as mislabelled products, which may be contaminated or replaced, partly or completely, by lower-quality or cheaper ones. For all of these reasons, reliable, consistent, and easily applicable analytical methods are needed to assure correct labelling and to verify seafood products. DNA-barcoding methods have become popular, robust tools used in taxonomic research on endangered or cryptic species in recent years, and they are also used for food traceability. Compared with proteomic and metabolomic analyses, DNA-based methods offer the chance to identify all types of food, whether raw, spiced, or processed. This advantage arises because DNA is a comparatively more stable molecule than proteins and other molecules. Furthermore, because DNA is found in all organisms and its sequence varies between species, DNA-based analysis is preferable. This review was performed to clarify the main advantages of using DNA-barcoding for determining seafood fraud relative to other techniques.
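The core barcoding step, assigning a query sequence to the closest reference barcode, can be sketched as below. The sequences, species assignments, and divergence thresholds are toy values invented for illustration, not real COI barcodes.

```python
# Toy DNA-barcoding lookup: assign a query sequence to the reference
# species whose barcode it matches most closely.
REFERENCES = {
    "Thunnus thynnus": "ATGGCACTGAGCCTGATTACGA",
    "Gadus morhua":    "ATGGTTCTCAGCTTGCTAATGA",
    "Salmo salar":     "ATGCCACTTAGCATGTTTACGA",
}

def distance(a, b):
    """Proportion of mismatching positions (p-distance, equal lengths)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def identify(query, threshold):
    """Best-matching species, or None if even the best match diverges
    more than the threshold (i.e. no confident identification)."""
    best = min(REFERENCES, key=lambda sp: distance(query, REFERENCES[sp]))
    d = distance(query, REFERENCES[best])
    return (best, d) if d <= threshold else (None, d)

# A query identical to the cod barcode except one substitution; with
# these short toy sequences that is ~4.5% divergence, so a relaxed
# threshold accepts the match.
query = "ATGGTTCTCAGCTTGCTAATGG"
species, d = identify(query, threshold=0.05)
```

Real barcoding pipelines use alignment and curated reference libraries (e.g. BOLD), but the assignment logic reduces to this nearest-reference comparison.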

Keywords: DNA-barcoding, genetic analysis, food fraud, mislabelling, packaged seafood

Procedia PDF Downloads 145
1642 Data Projects for “Social Good”: Challenges and Opportunities

Authors: Mikel Niño, Roberto V. Zicari, Todor Ivanov, Kim Hee, Naveed Mushtaq, Marten Rosselli, Concha Sánchez-Ocaña, Karsten Tolle, José Miguel Blanco, Arantza Illarramendi, Jörg Besier, Harry Underwood

Abstract:

One of the application fields for data analysis techniques and technologies gaining momentum is the area of social good or “common good”, covering cases related to humanitarian crises, global health care, or ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges when developing data-driven projects, lagging behind in comparison with other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to solve, cultural and political barriers, the skills of main stakeholders and the technological resources available, the motivation to be engaged in such projects, or the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing its current state and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines to understand the main challenges and opportunities for this type of data project, as well as identifying the main differential issues compared to “classical” data projects in general. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to empirically confront the framework with a real example.

Keywords: data-driven projects, humanitarian operations, personal and sensitive data, social good, stakeholder analysis

Procedia PDF Downloads 309
1641 A Personality-Based Behavioral Analysis on eSports

Authors: Halkiopoulos Constantinos, Gkintoni Evgenia, Koutsopoulou Ioanna, Antonopoulou Hera

Abstract:

E-sports and e-gaming have emerged in recent years as internet use has become universal, and e-gamers are the new reality in our homes. The excessive involvement of young adults with e-sports has already been revealed, and adverse consequences have been reported in research over the past few years, but the issue has not yet been fully studied. The present research, conducted in Greece, studies the psychological profile of video game players and provides information on the personality traits, habits, and emotional status that affect online gamers' behaviors, in order to help professionals and policy makers address the problem. Three standardized self-report questionnaires were administered to participants, who were young male and female adults aged 19-26 years. The Profile of Mood States (POMS) scale was used to evaluate people's perceptions of their everyday-life mood; the personality features that can be traced back to people's habits and anticipated reactions were measured by the Eysenck Personality Questionnaire (EPQ); and the Trait Emotional Intelligence Questionnaire (TEIQue) was used to measure which cognitive parameters (gamers' beliefs) and emotional parameters (gamers' emotional abilities) mainly affected or predicted gamers' behaviors, leisure-time activities, and gaming behaviors. Data mining techniques were used to analyze the data, based on machine learning algorithms available in the software package R. The research findings attempt to determine how personality traits, emotional status, and emotional intelligence influence and correlate with e-sports and gamers' behaviors, and to help policy makers and stakeholders take action, shape social policy, and prevent adverse consequences for young adults. The need for further research, prevention, and treatment strategies is also addressed.
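The data-mining step above was carried out in R; purely as an illustration of one simple technique such an analysis might involve, here is a nearest-centroid classifier on invented trait vectors. The features (neuroticism, extraversion, trait emotional intelligence), scores, and risk labels are entirely hypothetical and are not the study's data.

```python
import math

# Synthetic training data: (neuroticism, extraversion, trait EI) -> label.
TRAIN = [
    ((8.0, 3.0, 2.5), "at-risk"),
    ((7.5, 2.0, 3.0), "at-risk"),
    ((6.8, 3.5, 2.0), "at-risk"),
    ((3.0, 7.0, 7.5), "low-risk"),
    ((2.5, 6.5, 8.0), "low-risk"),
    ((4.0, 7.5, 6.8), "low-risk"),
]

def centroids(samples):
    """Per-label mean vector of the training samples."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(s / counts[label] for s in sums[label])
            for label in sums}

def predict(vec, cents):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda label: math.dist(vec, cents[label]))

cents = centroids(TRAIN)
label = predict((7.2, 2.8, 2.6), cents)
```

A profile close to the high-neuroticism, low-EI cluster is assigned the "at-risk" label; real analyses would of course validate any such model on held-out data.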

Keywords: e-sports, e-gamers, personality traits, POMS, emotional intelligence, data mining, R

Procedia PDF Downloads 211
1640 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Among them, trailing-edge flaps have been proven to be effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture capabilities rather than alleviating blade loads. A software tool has been developed especially for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem that gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic algorithm optimisation tool especially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps is constructed. The design optimisation tool is employed to carry out design case studies. The results of design case studies on the AWT-27 wind turbine reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in the average power; this effect, however, diminishes dramatically as the size increases. For constant-speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by as much as about 5%. However, by re-designing the blade pretwist, the overall improvement can reach as high as 12%.
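The genetic-algorithm design loop described above can be sketched with a toy real-coded GA over flap location and size. The "power gain" surface below is an illustrative stand-in for the aerodynamic evaluator (built so the peak sits near 70% span with diminishing returns in flap size, echoing the reported findings); it is not the WTAero model, and all numbers are invented.

```python
import math, random

def power_gain(loc, size):
    """Toy surrogate: loc is the flap centre as a fraction of span,
    size is the flap length fraction; returns a percent power gain."""
    placement = max(0.0, 1.0 - ((loc - 0.7) / 0.3) ** 2)  # peak at 70% span
    sizing = 1.0 - math.exp(-8.0 * size)                  # diminishing returns
    return 5.0 * placement * sizing

def ga(pop_size=40, generations=120, seed=7):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.2, 0.95), rng.uniform(0.0, 0.3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -power_gain(*ind))
        elite = pop[: pop_size // 4]          # keep the best quarter
        pop = elite[:]
        while len(pop) < pop_size:
            (l1, s1), (l2, s2) = rng.sample(elite, 2)
            loc = 0.5 * (l1 + l2) + rng.gauss(0, 0.02)   # blend + mutate
            size = 0.5 * (s1 + s2) + rng.gauss(0, 0.01)
            pop.append((min(max(loc, 0.2), 0.95),
                        min(max(size, 0.0), 0.3)))
        # next generation assembled from elites and their offspring
    return max(pop, key=lambda ind: power_gain(*ind))

best_loc, best_size = ga()
```

On this surrogate the GA converges to a flap centred near 70% span, mirroring the qualitative result of the design case studies.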

Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 318
1639 A Convolutional Neural Network Approach to Predict Pes Planus Using Plantar Pressure Mapping Images

Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor

Abstract:

Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in predicting patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. All subjects (with and without pes planus) were then evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: Of a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was evaluated, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14% and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and by doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
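The forward pass of the kind of CNN used above can be sketched in pure Python: one 3x3 convolution with ReLU, 2x2 max pooling, and a logistic output mapping the pooled features to a pes planus probability. The 6x6 "pressure map", kernel, and output weights are fixed illustrative values, not parameters trained on the study's data.

```python
import math

def conv2d(img, kernel):
    """Valid 3x3 convolution over a square grid, followed by ReLU."""
    n, k = len(img), len(kernel)
    out = []
    for i in range(n - k + 1):
        row = []
        for j in range(n - k + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(k) for b in range(k))
            row.append(max(0.0, s))          # ReLU activation
        out.append(row)
    return out

def maxpool2(fmap):
    """Non-overlapping 2x2 max pooling."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def predict(img, kernel, weights, bias):
    pooled = maxpool2(conv2d(img, kernel))
    flat = [v for row in pooled for v in row]
    z = sum(w * v for w, v in zip(weights, flat)) + bias
    return 1.0 / (1.0 + math.exp(-z))        # sigmoid -> probability

# Toy 6x6 "pressure map" with elevated midfoot pressure, the pattern
# associated with a flat foot.
img = [[0, 0, 0, 0, 0, 0],
       [0, 2, 2, 2, 2, 0],
       [0, 2, 9, 9, 2, 0],
       [0, 2, 9, 9, 2, 0],
       [0, 2, 2, 2, 2, 0],
       [0, 0, 0, 0, 0, 0]]
kernel = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]   # centre-weighted detector
prob = predict(img, kernel, weights=[0.02] * 4, bias=-1.0)
```

Training would adjust the kernel and output weights by backpropagation; here the fixed values simply demonstrate how a high midfoot-pressure map produces a high predicted probability.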

Keywords: foot disorder, machine learning, neural network, pes planus

Procedia PDF Downloads 339
1638 Development and Characterization of Multiphase Hydrogel Systems for Wound Healing

Authors: Rajendra Jangde, Deependra Singh

Abstract:

The present work aimed at releasing an antimicrobial and debriding agent in a sustained manner at the wound surface. In order to provide long-lasting antimicrobial action and a moist environment in the wound space, a biocompatible moist system was developed for complete healing. In the present study, a biocompatible moist system of PVA-gelatin hydrogel was developed that is capable of carrying multiple agents (quercetin and carbopol) in a controlled manner for effective and complete wound healing. Carbopol and quercetin were prepared by thin-film hydration techniques, and the optimized system was incorporated into a PVA-gelatin slurry. PVA-gelatin hydrogels were prepared by the freeze-thaw method. The prepared dispersion was cast into films to prepare the multiphase hydrogel system and characterized by in vitro and in vivo studies. Results revealed a uniform dispersion of microspheres in the three-dimensional matrix of the PVA-gelatin hydrogel, observed at different magnifications. The in vitro release data showed a typical biphasic release pattern, i.e., a burst release followed by a slower sustained release over 5 days. The prepared system was found to be stable under both normal and accelerated conditions. Histopathological study showed a significant (p<0.05) increase in fibroblast cells, collagen fibres, and blood vessel formation. All parameters, such as wound contraction, tensile strength, and histopathological and biochemical parameters (hydroxyproline content, protein level, etc.), were significant (p<0.05) in comparison to the control group. The present results suggest accelerated re-epithelialization under a moist wound environment with the delivery of multiple drugs effective at different stages of the wound healing cascade, with minimum disturbance of the wound bed.

Keywords: multiphase hydrogel, optimization, quercetin, wound healing

Procedia PDF Downloads 225
1637 Process of Analysis, Evaluation and Verification of the 'Real' Redevelopment of the Public Open Space at the Neighborhood’s Stairs: Case Study of Serres, Greece

Authors: Ioanna Skoufali

Abstract:

The present study is directed towards adaptation to climate change, closely related to the phenomenon of the urban heat island (UHI). This issue is widespread and common to different urban realities, but particularly to Mediterranean cities, which are characterized by a dense urban fabric. The attention of this work on the redevelopment of open space is focused on mitigation techniques aimed at solving local problems related to urban morphology, such as microclimatic parameters and summer thermal comfort conditions. This quantitative analysis, evaluation, and verification survey involves the methodological elaboration applied to a real case study in Serres, with the experimental support of the ENVI-met Pro V4.1 and BioMet software, developed: i) in two phases concerning the ante-operam (phase a1 # 2013) and the post-operam (phase a2 # 2016); ii) in scenario A (+25% of green # 2017). The first study identifies the main intervention strategies, namely the application of cool pavements, the increase of green surfaces, the creation of a water surface, and external fans; moreover, it meets the minimum targets set by the National Program 'Bioclimatic improvement project for public open space', EPPERAA (ESPA 2007-2013), related to the four environmental parameters illustrated below: TAir = 1.5 °C, TSurface = 6.5 °C, CDH = 30%, and PET = 20%. In addition, the second study proposes a greater potential for improvement than the post-operam intervention by increasing the vegetation within the district towards the SW/SE. The final objective of this in-depth design is to be transferable to homogeneous cases of urban regeneration processes, with evident effects on the efficiency of microclimatic mitigation and thermal comfort.

Keywords: cool pavements, microclimate parameters (TAir, Tsurface, Tmrt, CDH), mitigation strategies, outdoor thermal comfort (PET & UTCI)

Procedia PDF Downloads 182
1636 Remote Sensing and Geographic Information Systems for Identifying Water Catchments Areas in the Northwest Coast of Egypt for Sustainable Agricultural Development

Authors: Mohamed Aboelghar, Ayman Abou Hadid, Usama Albehairy, Asmaa Khater

Abstract:

Sustainable agricultural development of the desert areas of Egypt under the pressure of irrigation water scarcity is a significant national challenge. Existing water harvesting techniques on the northwest coast of Egypt do not ensure the optimal use of rainfall for agricultural purposes. Basin-scale hydrology potentialities were studied to investigate how the available annual rainfall could be used to increase agricultural production. All data related to agricultural production were included in the form of geospatial layers. Thematic classification of Sentinel-2 imagery was carried out to produce the land cover and crop maps following the FAO system of land cover classification. Contour lines and spot height points were used to create a digital elevation model (DEM). The DEM was then used to delineate basins, sub-basins, and water outlet points using the Soil and Water Assessment Tool (Arc SWAT). The main soil units of the study area were identified from Land Master Plan maps. Climatic data were collected from existing official sources. The amounts of precipitation, surface water runoff, and potential and actual evapotranspiration for the years 2004 to 2017 are shown as results of Arc SWAT. The land cover map showed that the two tree crops (olive and fig) cover 195.8 km2, while herbaceous crops (barley and wheat) cover 154 km2. The maximum elevation was 250 meters above sea level, while the lowest was 3 meters below sea level. The study area receives a massive but variable amount of precipitation; however, the water harvesting methods are inappropriate for storing water for agricultural purposes.
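The basin-delineation step that Arc SWAT performs on the DEM rests on a simple idea that can be sketched directly: each cell drains to its steepest-descent neighbour (the D8 rule), and following that chain from any cell leads to an outlet. The 4x4 elevation grid below is illustrative, not the study-area DEM, and real tools also handle pits, flats, and flow accumulation.

```python
# Toy DEM (elevations in metres); water should drain to the low corner.
DEM = [
    [50, 45, 40, 38],
    [48, 42, 35, 30],
    [47, 41, 33, 25],
    [46, 40, 32, 20],
]

def lowest_neighbour(r, c):
    """Steepest-descent neighbour of (r, c), or None at a pit/outlet."""
    best, best_z = None, DEM[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < len(DEM) and 0 <= cc < len(DEM[0]):
                if DEM[rr][cc] < best_z:
                    best, best_z = (rr, cc), DEM[rr][cc]
    return best

def outlet(r, c):
    """Follow the D8 flow path from (r, c) until no lower neighbour exists."""
    cell = (r, c)
    while True:
        nxt = lowest_neighbour(*cell)
        if nxt is None:
            return cell
        cell = nxt

outlets = {outlet(r, c) for r in range(4) for c in range(4)}
```

On this grid every cell drains to the single outlet at the lowest corner, which is exactly the kind of outlet-point identification the study derives from its DEM.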

Keywords: water catchments, remote sensing, GIS, sustainable agricultural development

Procedia PDF Downloads 95
1635 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on it to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set at 80% and the testing set at 20%. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (a grid used to tune and select optimum parameters) is combined with 5-fold cross-validation to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared approaches. Of the proposed algorithms, LASSO regression achieved the better predictive performance, acquiring PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and achieves higher estimation performance than models in the literature.
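The pipeline above (normalise, 80/20 split, LASSO, grid search over the regularisation strength) can be sketched in pure Python. The two-feature synthetic "story point" data below stands in for the 21-project dataset, and for brevity the grid is scored on the held-out set rather than by the paper's 5-fold cross-validation.

```python
import random

random.seed(3)
# Synthetic data: feature 1 drives effort, feature 2 is irrelevant noise.
data = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(50)]
target = [3.0 * a + random.gauss(0, 0.5) for a, b in data]

# --- preprocessing: z-score the features, centre the target ---
def col_stats(rows):
    out = []
    for col in zip(*rows):
        m = sum(col) / len(col)
        s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
        out.append((m, s))
    return out

st = col_stats(data)
X = [[(v - m) / s for v, (m, s) in zip(row, st)] for row in data]
ym = sum(target) / len(target)
y = [t - ym for t in target]

# --- 80/20 train-test split ---
Xtr, Xte, ytr, yte = X[:40], X[40:], y[:40], y[40:]

# --- LASSO by cyclic coordinate descent with soft-thresholding ---
def lasso(X, y, alpha, iters=300):
    p = len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(xi[j] * (yi - sum(wk * xk for wk, xk in zip(w, xi))
                               + w[j] * xi[j]) for xi, yi in zip(X, y))
            z = sum(xi[j] ** 2 for xi in X)
            w[j] = (max(abs(rho) - alpha, 0.0) * (1 if rho > 0 else -1)) / z
    return w

def mse(w, X, y):
    return sum((yi - sum(wk * xk for wk, xk in zip(w, xi))) ** 2
               for xi, yi in zip(X, y)) / len(y)

# --- grid search over the regularisation strength alpha ---
grid = [0.1, 1.0, 10.0, 100.0]
best_alpha = min(grid, key=lambda a: mse(lasso(Xtr, ytr, a), Xte, yte))
w = lasso(Xtr, ytr, best_alpha)
test_mse = mse(w, Xte, yte)
```

As expected for LASSO, the informative feature keeps a large coefficient while the noise feature is shrunk toward zero; in practice one would use a library implementation with proper cross-validation.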

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 41
1634 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of the EU27

Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer

Abstract:

At the United Nations Climate Change Conference 2015, a global agreement on limiting climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g., heating, plug loads, mobility), and therefore a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease in the environmental impacts of every product. As a result, the implementation of the two-degree goal highly depends on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the efficiency of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. Thereby, for each European country, the decarbonization strategies of the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with simultaneous compensation of the missing energy by non-renewable energy carriers such as lignite and natural gas, resulted in an increase in the global warming potential of the electricity grid mix. Only after two years was this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
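At its core, the grid-mix GWP calculation above is a share-weighted sum of per-carrier emission factors, which a scenario model then evaluates year by year. The factors (kg CO2-eq/kWh) and mix shares below are round illustrative numbers, not the projects' inventory data.

```python
# Illustrative cradle-to-gate emission factors per carrier (kg CO2-eq/kWh).
FACTORS = {"lignite": 1.1, "gas": 0.45, "nuclear": 0.01,
           "wind": 0.01, "solar": 0.05}

def mix_gwp(shares):
    """GWP of one kWh of grid electricity for a given generation mix."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(share * FACTORS[carrier] for carrier, share in shares.items())

# Two scenario years for a hypothetical country: a fossil-heavy mix and
# a decarbonised mix with a high renewables share.
mix_2015 = {"lignite": 0.40, "gas": 0.25, "nuclear": 0.15,
            "wind": 0.15, "solar": 0.05}
mix_2030 = {"lignite": 0.10, "gas": 0.25, "nuclear": 0.05,
            "wind": 0.40, "solar": 0.20}

gwp_2015 = mix_gwp(mix_2015)
gwp_2030 = mix_gwp(mix_2030)
```

Repeating this calculation per country and per year, with shares taken from the scenario trends, yields exactly the kind of time-dynamic GWP profile the abstract describes.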

Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis

Procedia PDF Downloads 169
1633 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning

Authors: Jean Berger, Mohamed Barkaoui

Abstract:

Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute efficient path plans in near real time are mainly limited to providing few-move solutions. A new information-theoretic, open-loop decision model explicitly incorporating false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists of minimizing expected entropy, considering anticipated possible observation outcomes over a given time horizon. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target's location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit-target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes on a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternative heuristics.
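The belief update at the core of the model above can be sketched as a Bayesian update over candidate cells that explicitly accounts for both the detection probability and the false-alarm probability of the sensor. The cell names and probability values below are illustrative, not the paper's formulation or parameters.

```python
def update(belief, cell, observed_detection, pd=0.8, pf=0.1):
    """Posterior target-occupancy belief after one look at `cell`.

    pd: probability the sensor detects the target when it is in `cell`.
    pf: probability of a false alarm when the target is elsewhere.
    """
    posterior = {}
    for c, prior in belief.items():
        if observed_detection:
            like = pd if c == cell else pf   # detection could be a false alarm
        else:
            like = (1 - pd) if c == cell else (1 - pf)
        posterior[c] = prior * like
    total = sum(posterior.values())
    return {c: v / total for c, v in posterior.items()}

# Uniform prior over three candidate cells; look at cell "A" twice,
# once seeing nothing and once seeing a detection.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
after_miss = update(belief, "A", observed_detection=False)
after_hit = update(belief, "A", observed_detection=True)
```

A miss at "A" lowers its occupancy belief without zeroing it (the sensor is imperfect), while a detection raises it without certainty (it may be a false alarm); the planner's expected-entropy objective is computed over exactly these anticipated posteriors.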

Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm

Procedia PDF Downloads 344
1632 Destruction of Coastal Wetlands in Harper City-Liberia: Setting Nature against the Future Society

Authors: Richard Adu Antwako

Abstract:

Coastal wetland destruction and its consequences have recently taken center stage in global discussions. This phenomenon is no gray area to humanity, as coastal wetland-human interaction has been ingrained in civilizations since the earliest times, amidst the demanding use of wetland resources to meet human necessities. The severity of coastal wetland destruction parallels the growth of civilizations, and it is against this backdrop that this paper interrogates the causes of coastal wetland destruction in Harper City in Liberia, compares the degree of coastal wetland stressors to the non-equilibrium thermodynamic scale, and suggests integrated coastal zone management to address the problems. Literature complemented the primary data gleaned via global positioning system devices, field observation, questionnaires, and interviews. Multiple sampling techniques were used to generate data from the sand miners, institutional heads, fisherfolk, community-based groups, and other stakeholders. Non-equilibrium thermodynamic theory remains vibrant in discerning ecological stability, and it is employed here to further understand coastal wetland destruction in Harper City, Liberia, and to measure coastal wetland stresses in terms of amplitude and elasticity. Non-equilibrium thermodynamics postulates that coastal wetlands are capable of assimilating resources (inputs) as well as discharging products (outputs); however, when the input-output relationship stretches beyond the thresholds of the coastal wetlands, it leads to coastal wetland disequilibrium. Findings revealed that sand mining, mangrove removal, and crude dumping have transformed the coastal wetlands, resulting in water pollution, flooding, habitat loss, and disfigured beaches in Harper City in Liberia. This paper demonstrates that the coastal wetlands are being converted into development projects and agricultural fields, thus endangering the future society against nature.

Keywords: amplitude, crude dumping, elasticity, non-equilibrium thermodynamics, wetland destruction

Procedia PDF Downloads 122
1631 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students; here, we decided to create an e-learning framework for children. We have identified a few issues that teachers face with their online classes. When there are numerous students in an online classroom, how does a teacher recognize each student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of every student. A key challenge in e-learning is online examinations, because students can easily cheat during them, so exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving toward automation; however, automatic answer evaluation is currently available only for MCQ-based questions, so checkers have a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses; it is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide automatic evaluation of subjective responses based on keywords supplied to the computer alongside the student input, resulting in a fair distribution of marks; in addition, it saves time and manpower. We used deep learning, machine learning, image processing, and natural language processing technologies to develop these research components.
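The keyword-based marking of subjective answers described above can be sketched in a few lines. This is a minimal illustration only, assuming a plain keyword list and marks allocated in proportion to keyword hits; the abstract does not specify the platform's actual scoring model, and the function and example answer are hypothetical:

```python
def score_answer(answer: str, keywords: list[str], total_marks: float = 10.0) -> float:
    """Award marks in proportion to how many expected keywords appear in the answer."""
    words = set(answer.lower().split())                 # normalize the student's answer
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return round(total_marks * hits / len(keywords), 2) # linear marks allocation

marks = score_answer(
    "Photosynthesis converts light energy into chemical energy in chloroplasts",
    ["photosynthesis", "light", "chemical", "chloroplasts", "glucose"],
)
print(marks)  # 8.0 -- four of the five expected keywords matched
```

A production system would also need stemming and synonym handling so that paraphrased answers are not unfairly penalized, which is presumably where the abstract's natural language processing components come in.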

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 134
1630 An Assessment of the Drainage Network System in Nigerian Urban Areas Using Geographic Information Systems: A Case Study of Bida, Niger State

Authors: Yusuf Hussaini Atulukwu, Daramola Japheth, Tabitit S. Tabiti, Daramola Elizabeth Lara

Abstract:

In view of the limitations recently faced by the township, poorly constructed and in some cases non-existent drainage facilities have resulted in incessant flooding in parts of the community, posing a threat to life, property, and the environment. This research addresses the issue by showing the spatial distribution of the drainage network in urban Bida using Geographic Information System techniques. Relevant features were extracted from an existing base map of Bida using on-screen digitization, and x, y, z data of existing drainages were acquired with a handheld Global Positioning System (GPS) receiver. These data were loaded into ArcGIS 9.2 software and stored in a relational database structure, which was used to produce the spatial drainage network data of the township. The results revealed that about 40% of the drainages are blocked with sand and refuse, and 35% are water-logged as a result of buildings erected across erosion channels, dilapidated bridges, and the lack of drainage along major roads. The study thus concluded that the drainage network system in the Bida community is not in good working condition, and urgent measures must be initiated to avoid future disasters, especially with the rainy season setting in. Based on these findings, the study recommends that people within the locality avoid dumping municipal waste in drainage paths, and that drains blocked by sand or weeds be cleared by the authority concerned. Likewise, the authority should ensure that contracts for drainage construction are awarded to professionals, and that all natural drainage channels created by erosion are addressed to avoid future disasters.
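The workflow of storing handheld-GPS drainage points in a relational database and then summarizing drain condition can be sketched as follows. The table layout, coordinates, and condition codes here are illustrative assumptions, and an in-memory SQLite database stands in for the relational store behind the ArcGIS layer described in the abstract:

```python
import sqlite3

# Create a small relational table for GPS-surveyed drainage points.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE drainage_points (
    id INTEGER PRIMARY KEY,
    x REAL, y REAL, z REAL,   -- GPS easting, northing, elevation
    condition TEXT            -- e.g. 'clear', 'sand_blocked', 'water_logged'
)""")

# Hypothetical field readings from a handheld GPS receiver.
points = [
    (6.01, 9.08, 152.0, "sand_blocked"),
    (6.02, 9.09, 151.5, "clear"),
    (6.03, 9.10, 151.2, "water_logged"),
]
conn.executemany(
    "INSERT INTO drainage_points (x, y, z, condition) VALUES (?, ?, ?, ?)", points
)

# Percentage of drains in each condition, mirroring the 40% / 35% figures reported.
for cond, pct in conn.execute(
    "SELECT condition, "
    "ROUND(100.0 * COUNT(*) / (SELECT COUNT(*) FROM drainage_points), 1) "
    "FROM drainage_points GROUP BY condition ORDER BY condition"
):
    print(cond, pct)
```

In the actual study these attribute records would sit behind digitized line features in the ArcGIS geodatabase, so the same grouping query could drive a thematic map of blocked versus clear drains.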

Keywords: drainage network, spatial, digitization, relational database, waste

Procedia PDF Downloads 316