Search results for: integrated definition for process description capture (IDEF3) method
31730 Biomimetics and Additive Manufacturing for Industrial Design Innovation
Authors: Axel Thallemer, Martin Danzer, Dominik Diensthuber, Aleksandar Kostadinov, Bernhard Rogler
Abstract:
Nature has always inspired the creative mind, to a greater or lesser extent. Introduced around the 1950s, biomimetics served as a systematic method that treats the natural world as a ‘pattern book’ of technical solutions, with the aim of creating innovative products. Unfortunately, this technique is prone to failure when performed as a mere reverse engineering of a natural system or appearance. By contrast, a solution that looks at the principles behind a natural design promises a better outcome. One such example is the case study presented here, which traces the design process of three distinctive grippers. The devices have biomimetic properties on two levels: firstly, they use a kinematic chain found in beaks, and secondly, they have a biomimetic structural geometry, which was realized using additive manufacturing. As a next step, the manufacturing method was evaluated to estimate its efficiency for commercial production. The results show that the fabrication procedure is still at an early stage and thus cannot yet guarantee satisfactory results. To summarize the study, we claim that a novel solution can be derived using principles from nature; however, for the solution to be actualized successfully, there are parameters that remain beyond the designers' reach. Nonetheless, industrial designers can contribute to product innovation using biomimetics.
Keywords: biomimetics, innovation, design process, additive manufacturing
Procedia PDF Downloads 191

31729 Removal of Na₂SO₄ by Electro-Confinement on Nanoporous Carbon Membrane
Authors: Jing Ma, Guotong Qin
Abstract:
We report electro-confinement desalination (ECMD), a desalination method combining electric field effects and confinement effects using nanoporous carbon membranes as the electrode. A carbon membrane with an average pore size of 8.3 nm was prepared by an organic sol-gel method. The precursor of the support was prepared by curing a porous phenol resin tube. Resorcinol-formaldehyde sol was coated on the porous tubular resin support, and the membrane was obtained by carbonisation of the coated support. A well-bonded top layer with a thickness of 35 μm was supported by the macroporous support. Measurements of the molecular weight cut-off using polyethylene glycol confirmed the average pore size of 8.3 nm. High salt rejection can be achieved because water molecules need not overcome high energy barriers in the confined space, while a large inherent dehydration energy is required for hydrated ions to enter the nanochannels. Additionally, a carbon membrane with an applied electric field can be used as an integrated membrane electrode combining the effects of confinement and an electric potential gradient. Such a membrane electrode can repel co-ions and attract counter-ions, using pressure as the driving force for mass transport. When the carbon membrane was set as the cathode, the rejection of SO₄²⁻ was 94.89%, while the removal of Na⁺ was less than 20%. We then set the carbon membrane as the anode to treat the effluent water from the cathode chamber; the rejections of SO₄²⁻ and Na⁺ reached 100% and 88.86%, respectively. ECMD promises to be an energy-efficient method for salt rejection.
Keywords: nanoporous carbon membrane, confined effect, electric field, desalination, membrane reactor
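The rejection figures quoted above follow from comparing feed and permeate concentrations. As a minimal sketch (the concentration values below are hypothetical, chosen only to reproduce the reported 94.89% SO₄²⁻ rejection; they are not measurements from the study):

```python
def rejection_percent(feed_conc, permeate_conc):
    """Percent rejection of a dissolved species, from feed and permeate concentrations."""
    return (1 - permeate_conc / feed_conc) * 100

# Hypothetical concentrations in mg/L, not values from the paper
feed_so4 = 1000.0
permeate_so4 = 51.1
print(round(rejection_percent(feed_so4, permeate_so4), 2))  # 94.89
```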
Procedia PDF Downloads 125

31728 Assessment of a Coupled Geothermal-Solar Thermal Based Hydrogen Production System
Authors: Maryam Hamlehdar, Guillermo A. Narsilio
Abstract:
To enhance the feasibility of utilising geothermal hot sedimentary aquifers (HSAs) for clean hydrogen production, one approach is the implementation of solar-integrated geothermal energy systems. This detailed modelling study conducts a thermo-economic assessment of an advanced Organic Rankine Cycle (ORC)-based hydrogen production system that uses low-temperature geothermal reservoirs, with a specific focus on HSAs, over a 30-year period. In the proposed hybrid system, solar-thermal energy is used to raise the temperature of the water extracted from the geothermal production well. This temperature increase leads to a higher steam output, powering the turbine and subsequently enhancing the electricity output for running the electrolyser. Thermodynamic modelling of a parabolic trough solar (PTS) collector is developed and integrated with the modelling of a geothermal-based configuration, which includes a closed regenerator cycle (CRC), a proton exchange membrane (PEM) electrolyser, and a thermoelectric generator (TEG). Following this, the study investigates the impact of solar energy use on the temperature enhancement of the geothermal reservoir and assesses the resulting consequences for the lifecycle performance of the hydrogen production system in comparison with a standalone geothermal system. The results indicate that, with an appropriate solar collector area, a combined solar-geothermal hydrogen production system outperforms a standalone geothermal system in both cost and rate of production. These findings underscore that a solar-assisted geothermal hybrid system holds the potential to generate lower-cost hydrogen with enhanced efficiency, thereby boosting the appeal of numerous low- to medium-temperature geothermal sources for hydrogen production.
Keywords: clean hydrogen production, integrated solar-geothermal, low-temperature geothermal energy, numerical modelling
Procedia PDF Downloads 69

31727 A Cluster Randomised Controlled Trial Investigating the Impact of Integrating Mass Drug Administration Treating Soil Transmitted Helminths with Mass Dog Rabies Vaccination in Remote Communities in Tanzania
Authors: Felix Lankester, Alicia Davis, Safari Kinung'hi, Catherine Bunga, Shayo Alkara, Imam Mzimbiri, Jonathan Yoder, Sarah Cleaveland, Guy H. Palmer
Abstract:
Achieving the London Declaration goal of a 90% reduction in neglected tropical diseases (NTDs) by 2030 requires cost-effective strategies that attain high and comprehensive coverage. The first objective of this trial was to assess the impact on cost and coverage of employing a novel integrative One Health approach linking two NTD control programs: mass drug administration (MDA) for soil-transmitted helminths (STH) in humans and mass dog rabies vaccination (MDRV). The second objective was to compare the coverage achieved by the MDA, a community-wide deworming intervention, with that of the existing national primary school-based deworming program (NSDP), with particular focus on the proportion of primary school-age children reached and their school enrolment status. Our approach was unconventional because, in line with the One Health approach to disease control, it coupled the responsibilities and resources of the Ministries responsible for human and animal health into one program with the shared aim of preventing multiple NTDs. The trial was carried out in hard-to-reach pastoral communities comprising 24 villages of the Ngorongoro District, Tanzania, randomly allocated to either Arm A (MDA and MDRV), Arm B (MDA only) or Arm C (MDRV only). Objective one: The percentage of people in each target village that received treatment through MDA in Arms A and B was 63% and 65%, respectively (χ² = 1, p = 0.32). The percentage of dogs vaccinated in Arms A and C was 70% and 81%, respectively (χ² = 9, p = 0.003). It took 33% less time for a single person and a dog to attend the integrated delivery than two separate events. Cost per dose (including delivery) was lower under the integrated strategy, with delivery of deworming and rabies vaccination reduced by $0.13 (54%) and $0.85 (19%) per dose, respectively. 
Despite a slight reduction in the proportion of village dogs vaccinated in the integrated event, both the integrated and non-integrated strategies achieved the target threshold of 70% required to eliminate rabies. Objective two: The percentages of primary school-age children enrolled in school that were reached by this trial (73%) and by the existing NSDP (80%) were not significantly different (F = 0.9, p = 0.36). However, of the primary school-age children treated in this trial, 46% were not enrolled in school. Furthermore, 86% of the people treated would have been outside the reach of the NSDP because they were not of primary school age or were primary school-age children not enrolled in school. The comparable reach, the substantial reductions in cost per dose delivered, and the decrease in participants’ time support this integrated One Health approach to controlling multiple NTDs. Further, the recorded level of non-enrolment at primary school suggests that, in remote areas, school-based delivery strategies could miss a large fraction of school-age children, and that programs that focus delivery solely at the primary school level will miss a substantial proportion of both primary school-age children and other individuals from the community. We have shown that these populations can be effectively reached through extramural programs.
Keywords: canine mediated human rabies, integrated health interventions, mass drug administration, neglected tropical disease, One Health, soil-transmitted helminths
Procedia PDF Downloads 181

31726 Definition, Barriers to and Facilitators of Moral Distress as Perceived by Neonatal Intensive Care Physicians
Authors: M. Deligianni, P. Voultsos, E. Tsamadou
Abstract:
Background/Introduction: Moral distress is a common occurrence for health professionals working in neonatal critical care. Despite a growing number of critically ill neonatal and pediatric patients, only a few articles related to moral distress as experienced by neonatal physicians have been published in recent years. Objectives/Aims: The aim of this study was to define and identify barriers to and facilitators of moral distress based on the perceptions and experiences of neonatal physicians working in neonatal intensive care units (NICUs). This pilot study is part of a larger nationwide project. Methods: A multicenter qualitative descriptive study using focus group methodology was conducted. In-depth interviews lasting 45 to 60 minutes were audio-recorded. Once the data were transcribed, conventional content analysis was used to develop the definition and categories, as well as to identify the barriers to and facilitators of moral distress. Results: Participants defined moral distress broadly in the context of neonatal critical care, and a wide variation in definitions was observed. The physicians' responses to moral distress included a range of feelings and situational reactions. The overarching categories that emerged from the data were patient-related, family-related, and physician-related factors. Moreover, organizational factors may constitute major facilitators of moral distress among neonatal physicians in NICUs. Note, however, that moral distress may be regarded as an essential component of caring for neonates in critical care. The present study provides further insight into the moral distress experienced by physicians working in Greek NICUs. 
Discussion/Conclusions: Understanding how neonatal physicians define moral distress and what contributes to its development is foundational to developing targeted strategies for mitigating the prevalence of moral distress among neonatal physicians in the context of NICUs.
Keywords: critical care, moral distress, neonatal physician, neonatal intensive care unit, NICU
Procedia PDF Downloads 150

31725 Study the Effect of Friction on Barreling Behavior during Upsetting Process Using Anand Model
Authors: H. Mohammadi Majd, M. Jalali Azizpour, V. Tavaf, A. Jaderi
Abstract:
In upsetting processes, contact friction significantly influences metal flow, the stress-strain state, and process parameters. Furthermore, tribological conditions influence workpiece deformation and its dimensional precision. A viscoplastic constitutive law, the Anand model, was applied to represent the inelastic deformation behavior in the upsetting process. This paper presents research results on the influence of the contact friction coefficient on workpiece deformation in the upsetting process, based on finite element simulation. The technique was tested with upsetting simulations of three different specimens and the corresponding materials, and it can be successfully employed to predict the deformation of the upsetting process.
Keywords: friction, upsetting, barreling, Anand model
Procedia PDF Downloads 336

31724 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Rather than presenting plain information, classifying different aspects of browsing such as bookmarks, history, and downloads into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, carries security constraints, and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and providing better privacy/security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage and runs an algorithm to classify it into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as support vector machines and neural networks. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. 
This solution can partition the model into multiple chunks, which in turn facilitates lower memory usage than loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification to different use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
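The abstract's choice of Naive Bayes for low-memory classification can be illustrated with a toy multinomial Naive Bayes classifier. This is a generic sketch, not the authors' engine: the categories, training documents, and class names below are invented for illustration, and a real webpage classifier would use DOM-derived features rather than raw word counts.

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes over bag-of-words features."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.totals = defaultdict(int)
        vocab = set()
        for doc, c in zip(docs, labels):
            for w in doc.lower().split():
                self.word_counts[c][w] += 1
                self.totals[c] += 1
                vocab.add(w)
        self.vocab_size = len(vocab)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = math.log(self.priors[c])
            for w in doc.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score
                lp += math.log((self.word_counts[c][w] + 1)
                               / (self.totals[c] + self.vocab_size))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented two-category training set, purely for illustration
clf = TinyNaiveBayes().fit(
    ["football match score goal", "movie film actor cinema"],
    ["sports", "entertainment"],
)
print(clf.predict("goal score in the match"))  # sports
```

The per-class state is just two small tables of counts, which is why this family of models suits memory-constrained devices and can be partitioned by category, as the abstract describes.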
Procedia PDF Downloads 163

31723 An Integrated Emergency Management System for the Tourism Industry in Oman
Authors: Majda Al Salti
Abstract:
The tourism industry is considered globally as one of the leading industries due to its noticeable contribution to countries' gross domestic product (GDP) and job creation. However, tourism is vulnerable to crises and disasters, which requires preparedness. Given its limited capabilities, there is a need to improve links and understanding between the tourism industry and the emergency services, thus facilitating future emergency response to any potential incident. This study aims to develop the concept of an integrated emergency management system for the tourism industry. The study used face-to-face semi-structured interviews to evaluate the level of crisis and disaster preparedness of the tourism industry in Oman. The findings suggest that there is a lack of understanding of crisis and disaster management, and hence the preparedness level among Oman tourism authorities appears to be below expectation. Therefore, inter- and intra-sector integration and collaboration in the tourism sector are important at the pre-disaster stage. Such integration can help the tourism industry in Oman prepare for future incidents as well as identify its requirements in times of crisis for an effective response.
Keywords: tourism, emergency services, crisis, disaster
Procedia PDF Downloads 119

31722 Simulation Study on Polymer Flooding with Thermal Degradation in Elevated-Temperature Reservoirs
Authors: Lin Zhao, Hanqiao Jiang, Junjian Li
Abstract:
Polymers injected into elevated-temperature reservoirs inevitably suffer from thermal degradation, resulting in severe viscosity loss and poor flooding performance. However, for polymer flooding in such reservoirs, present simulators fail to provide accurate results for lack of a description of thermal degradation. In light of this, the objectives of this paper are to provide a simulation model for polymer flooding with thermal degradation and to study the effect of thermal degradation on polymer flooding in elevated-temperature reservoirs. Firstly, a thermal degradation experiment was conducted to obtain the degradation law of polymer concentration and viscosity. Different types of polymers were degraded in a thermostatic tank at elevated temperatures. Afterward, based on the obtained law, a streamline-assisted model was proposed to simulate the degradation process under in-situ flow conditions. Model validation was performed with field data from a well group of an offshore oilfield. Finally, the effect of thermal degradation on polymer flooding was studied using the proposed model. Experimental results showed that the polymer concentration remained unchanged, while the viscosity degraded exponentially with time. The polymer viscosity was functionally dependent on the polymer degradation time (PDT), which represents the time elapsed since the injection of the polymer particle. Tracing the real flow path of each polymer particle was therefore required, which is why the presented simulation model is streamline-assisted. An equation relating PDT to time of flight (TOF) along a streamline was built from the law of polymer particle transport. Based on the field polymer samples and dynamic data, the new model proved its accuracy. 
The study of the degradation effect on polymer flooding indicated that: (1) the viscosity loss increased exponentially with TOF in the main body of the polymer slug and remained constant in the slug front; (2) the response time of polymer flooding was delayed, but the effective time was prolonged; (3) the breakthrough of subsequent water was eased; (4) the capacity of the polymer to adjust the injection profile was diminished; (5) the incremental recovery was reduced significantly. In general, the effect of thermal degradation on polymer flooding performance was rather negative. This paper provides a more comprehensive insight into polymer thermal degradation in both the physical process and field application. The proposed simulation model offers an effective means of simulating the polymer flooding process with thermal degradation. The negative effect of thermal degradation suggests that polymer thermal stability should be given full consideration when designing a polymer flooding project in elevated-temperature reservoirs.
Keywords: polymer flooding, elevated-temperature reservoir, thermal degradation, numerical simulation
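The exponential viscosity decay reported in the experiments can be written as μ(t) = μ₀·e^(−kt). The sketch below assumes such a first-order decay; the initial viscosity and rate constant are hypothetical placeholders, not values measured in the study:

```python
import math

def degraded_viscosity(mu0, k, t):
    """First-order (exponential) thermal degradation of polymer viscosity.

    mu0 : initial viscosity (mPa·s)
    k   : degradation rate constant (1/day), hypothetical here
    t   : elapsed degradation time, i.e. PDT (days)
    """
    return mu0 * math.exp(-k * t)

# Illustrative values only: a 40 mPa·s solution with k = 0.05 per day
for t in (0, 10, 30):
    print(t, round(degraded_viscosity(40.0, 0.05, t), 2))
```

In the streamline-assisted model described above, the argument `t` would be the polymer degradation time recovered from the TOF along each streamline.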
Procedia PDF Downloads 143

31721 A Study of Basic and Reactive Dyes Removal from Synthetic and Industrial Wastewater by Electrocoagulation Process
Authors: Almaz Negash, Dessie Tibebe, Marye Mulugeta, Yezbie Kassa
Abstract:
Large-scale textile industries use large amounts of toxic chemicals, which are very hazardous to human health and environmental sustainability. In this study, the removal of various dyes from the effluents of textile industries using the electrocoagulation process was investigated. The studied dyes were Reactive Red 120 (RR-120), Basic Blue 3 (BB-3), and Basic Red 46 (BR-46), which were found in samples collected from the effluents of three major textile factories in the Amhara region, Ethiopia. For maximum removal, the dye BB-3 required an acidic pH of 3, RR-120 a basic pH of 11, and BR-46 neutral pH 7 conditions. BB-3 required a longer treatment time of 80 min than BR-46 and RR-120, which required 30 and 40 min, respectively. The best removal efficiencies of 99.5%, 93.5%, and 96.3% were achieved for BR-46, BB-3, and RR-120, respectively, from synthetic wastewater containing 10 mg L⁻¹ of each dye at an applied potential of 10 V. The method was applied to real textile wastewaters, and 73.0 to 99.5% removal of the dyes was achieved, indicating that electrocoagulation can be used as a simple and reliable method for the treatment of real wastewater from textile industries. It is a potentially viable and inexpensive tool for the treatment of textile dyes. Analysis of the electrochemically generated sludge by X-ray diffraction, scanning electron microscopy, and Fourier transform infrared spectroscopy revealed the expected crystalline aluminum oxides, bayerite (Al(OH)₃) and diaspore (AlO(OH)), in the sludge. An amorphous phase was also found in the floc. Textile industry owners should be aware of the impact of discharging effluents on the ecosystem and should use the investigated electrocoagulation method for effluent treatment before discharging into the environment.
Keywords: electrocoagulation, aluminum electrodes, Basic Blue 3, Basic Red 46, Reactive Red 120, textile industry, wastewater
Procedia PDF Downloads 53

31720 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry
Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim
Abstract:
Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed on its performance in reducing GHG. After that, each shall propose a reduction target higher than the previous target every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO₂ GHGs (CF₄, NF₃, N₂O, SF₆, and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) values of these non-CO₂ gases are much higher than that of CO₂, which means they have a greater effect on global warming than CO₂. GHG calculation methods for the electronics industry are provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they will be discussed at the ISO/TC 146 meeting. As discussed earlier, being precise and accurate in calculating non-CO₂ GHG is becoming more important. Thus, this study aims to discuss the implications of the calculation methods by comparing the methods of the IPCC and EPA. In conclusion, after analyzing the two methods, the EPA method is more detailed and also provides a calculation for N₂O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA: the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S., which means it was developed to address the environmental issues of the U.S. 
Semiconductor factory ‘A’ measured F-gas emissions according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE; the observed emission factor shows a higher DRE than the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factor (where possible) at the time of reporting its Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method
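The role of the DRE in F-gas accounting can be sketched as CO₂e = (gas input) × (1 − DRE) × GWP: the higher the measured DRE, the lower the reported emission. The gas quantity, DRE values, and GWP below are illustrative placeholders, not factors from the IPCC or EPA guidelines:

```python
def co2e_emission(gas_kg, dre, gwp):
    """CO2-equivalent emission of an F-gas after abatement.

    gas_kg : mass of gas fed to the process (kg)
    dre    : destruction/removal efficiency, 0..1 (illustrative here)
    gwp    : global warming potential (illustrative here)
    """
    return gas_kg * (1 - dre) * gwp

# 100 kg of an F-gas with an assumed GWP of 7000: raising the DRE
# from 90% to 95% halves the reported CO2e emission.
print(round(co2e_emission(100, 0.90, 7000)))  # 70000 kg CO2e
print(round(co2e_emission(100, 0.95, 7000)))  # 35000 kg CO2e
```

This is why a factory-specific DRE measurement, as in the case of factory ‘A’, can materially change a reported inventory relative to default factors.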
Procedia PDF Downloads 288

31719 Economics of Fish-Plantain Integrated Farm Enterprise in Southern Nigeria
Authors: S. O. Obasa, J. A. Soaga, O. I. Afolabi, N. A. Bamidele, O. E. Babalola
Abstract:
Attempts to improve the income of the rural population are a welcome development in Nigeria. Integrated fish-crop farming has been suggested as a means of raising farm income, reducing wastage, and mitigating the risk component in production through the complementarity gain. A feeding trial was carried out to investigate the replacement of maize with fermented unripe plantain (Musa paradisiaca) peel meal in the diet of Nile tilapia, Oreochromis niloticus. The economics of the integrated enterprise was assessed using budgetary analysis techniques. The analysis incorporated the material and labour costs as well as the returns from the sale of matured fish and plantain. A total of 60 fingerlings of Nile tilapia (1.70±0.1 g) were stocked at 10 per plastic tank. Two iso-nitrogenous diets containing 35% crude protein, in which maize meal was replaced by fermented unripe plantain peel meal at 0% (FUP0/control diet) and 100% (FUP100), were formulated and prepared. The fingerlings were fed at 5% body weight per day for 56 days. The lowest feed conversion ratio, 1.39 in fish fed diet FUP100, was not significantly different (P > 0.05) from the highest, 1.42, in fish fed the control diet. The highest percentage profit, 88.85% in fish fed diet FUP100, was significantly higher than the 66.68% in fish fed diet FUP0, while the profit index of 1.89 in fish fed diet FUP100 was significantly different from the 1.67 in fish fed diet FUP0. Therefore, fermented unripe plantain peel meal can completely replace maize in the diet of O. niloticus fingerlings. The profitability assessment shows that the net income from the integration was ₦463,000 per hectare, an increase of ₦87,750.00, representing a 12.2% gain over separate production.
Keywords: fish-crop, income, Nile tilapia, waste management
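The performance indicators used in the trial reduce to simple ratios: the feed conversion ratio (feed consumed per unit weight gain, lower is better), the percentage profit, and the profit index (revenue per unit cost). The feed, revenue, and cost figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def feed_conversion_ratio(feed_given_g, weight_gain_g):
    """FCR: grams of feed consumed per gram of fish weight gained (lower is better)."""
    return feed_given_g / weight_gain_g

def percent_profit(revenue, cost):
    """Profit as a percentage of total cost."""
    return (revenue - cost) / cost * 100

def profit_index(revenue, cost):
    """Profit index as commonly used in feed trials: revenue per unit cost."""
    return revenue / cost

# Hypothetical trial figures, not data from the study
print(feed_conversion_ratio(700, 500))       # 1.4
print(round(percent_profit(1890, 1000)))     # 89
print(round(profit_index(1890, 1000), 2))    # 1.89
```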
Procedia PDF Downloads 505

31718 Facility Data Model as Integration and Interoperability Platform
Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes
Abstract:
Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems. In particular, this involves increased usage of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, providing a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to provide such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, starting from input data acquisition, through ontology concept definition, and finally ontology concept population. At the beginning, a core facility ontology was developed, representing the generic facility infrastructure comprised of the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, extension and then population of the core facility ontology were performed. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, how these ontology models supported the integration and interoperability of the overall airport energy management system was analyzed as well.
Keywords: airport ontology, energy management, facility data model, ontology modeling
Procedia PDF Downloads 448

31717 Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang
Abstract:
Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, MOD faces two important challenges: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect object edges with invariance to changes in illumination and shadowing. However, to reduce execution time in real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is used. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, real testing results show that the proposed HUG features achieve better classification performance than HOG features.
Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine
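The building block behind both HOG and the proposed HUG features is a magnitude-weighted histogram of gradient orientations. The sketch below computes such a histogram for a tiny grayscale image using central differences; it is a generic HOG-style illustration, not the authors' HUG formulation:

```python
import math

def orientation_histogram(image, bins=9):
    """Magnitude-weighted histogram of gradient orientations for a grayscale
    image given as a list of pixel rows; a simplified HOG-style building block."""
    h, w = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # central difference in x
            gy = image[y + 1][x] - image[y - 1][x]   # central difference in y
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(angle / (180 / bins)) % bins] += mag   # magnitude-weighted vote
    return hist

# A tiny image with a vertical edge: all gradient energy lands in the 0° bin
img = [[0, 0, 10, 10]] * 4
print(orientation_histogram(img))
```

A detector would compute such histograms over local cells, normalize and concatenate them into a descriptor, and feed the result to a linear SVM, as described above.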
Procedia PDF Downloads 594

31716 Treatment of Rice Industry Waste Water by Flotation-Flocculation Method
Authors: J. K. Kapoor, Shagufta Jabin, H. S. Bhatia
Abstract:
Polyamine flocculants were synthesized by poly-condensation of diphenylamine and epichlorohydrin using 1,2-diaminoethane as a modifying agent. The polyelectrolytes were prepared with epichlorohydrin-diphenylamine molar ratios of 1:1, 1.5:1, 2:1, and 2.5:1. The flocculation performance of these polyelectrolytes was evaluated with rice industry wastewater. The polyelectrolytes were used in conjunction with alum for the coagulation-flocculation process. Prior to coagulation-flocculation, an air flotation technique was used with the aim of removing the oil and grease content from the wastewater. Significant improvement was observed in the removal of oil and grease after air flotation, which removed 91.7% of the oil and grease from the rice industry wastewater. After the coagulation-flocculation method, the polyelectrolyte with an epichlorohydrin-diphenylamine molar ratio of 1.5:1 showed the best results for the removal of pollutants from rice industry wastewater. The highest efficiencies of turbidity and TSS removal with this polyelectrolyte were found to be 97.5% and 98.2%, respectively. The results also reveal 86.8% removal of COD and 87.5% removal of BOD from the rice industry wastewater. Thus, we demonstrate the optimization of a coagulation-flocculation technique appropriate for wastewater treatment.
Keywords: coagulation, flocculation, air flotation technique, polyelectrolyte, turbidity
Procedia PDF Downloads 480
31715 Development of 3D Particle Method for Calculating Large Deformation of Soils
Authors: Sung-Sik Park, Han Chang, Kyung-Hun Chae, Sae-Byeok Lee
Abstract:
In this study, a grid-free three-dimensional (3D) particle method was developed for analyzing large deformation of soils, instead of the ordinary finite element method (FEM) or finite difference method (FDM). In the 3D particle method, the governing equations were discretized by particle interaction models corresponding to differential operators such as gradient, divergence, and Laplacian. The Mohr-Coulomb failure criterion was incorporated into the 3D particle method to determine soil failure. The yielding and hardening behavior of soil before failure was also considered by varying the viscosity of the soil. First, an unconfined compression test was carried out, and the large deformation following soil yielding or failure was simulated by the developed 3D particle method. The results were also compared with those of the commercial FEM software PLAXIS 3D. The developed 3D particle method was able to simulate the 3D large deformation of soils due to soil yielding and to calculate the variation of normal and shear stresses following clay deformation.Keywords: particle method, large deformation, soil column, confined compressive stress
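The Mohr-Coulomb criterion used to flag soil failure can be sketched directly. The cohesion, friction angle and stress values below are hypothetical, chosen only to illustrate the check:

```python
import math

def mohr_coulomb_fails(sigma_n, tau, cohesion, phi_deg):
    """Return True if the shear stress exceeds the Mohr-Coulomb strength
    tau_f = c + sigma_n * tan(phi) on a plane with normal stress sigma_n
    (compression taken positive)."""
    tau_f = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return abs(tau) > tau_f

# Example: c = 10 kPa, phi = 30 deg, sigma_n = 50 kPa
# strength tau_f = 10 + 50 * tan(30 deg), roughly 38.9 kPa
below = mohr_coulomb_fails(50.0, 30.0, 10.0, 30.0)  # shear below strength
above = mohr_coulomb_fails(50.0, 45.0, 10.0, 30.0)  # shear above strength
print(below, above)
```

In the particle method, such a check would be evaluated per particle at each step to switch between pre-failure (viscous) and failed behavior.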
Procedia PDF Downloads 572
31714 Active Learning Management for Teacher's Professional Courses in Curriculum and Instruction, Faculty of Education Thaksin University
Authors: Chuanphit Chumkhong
Abstract:
This research aimed 1) to study the effects of Active Learning management among 3rd year students enrolled in teacher’s profession courses and 2) to assess the students’ satisfaction with courses using the Active Learning approach. The population for the study consisted of 442 3rd year undergraduate students from 11 education programs enrolled in two teacher education courses in 2015: Curriculum Development and Learning Process Management. Respondents for the evaluation of satisfaction with Active Learning management comprised 432 students. The instruments used in the research included a detailed course description and a rating-scale questionnaire on Active Learning. The data were analyzed using the arithmetic mean and standard deviation. The results of the study reveal the following: 1. Overall, students gain a better understanding of Active Learning through actual practice in course activities. Students have the opportunity to exchange knowledge and skills, and the Active Learning teaching activities make students interested in the content and motivate them to seek knowledge on their own. 2. Overall, 3rd year students are satisfied with the Active Learning management at a ‘high’ level, with a mean score (μ) of 4.12 and a standard deviation (σ) of .51. By individual items, students are satisfied with the 10 elements in the two courses at a ‘high’ level, with mean scores (μ) between 3.79 and 4.41 and standard deviations (σ) between .68 and .79.Keywords: active learning teaching model, teacher’s professional courses, curriculum and instruction
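The reported μ and σ are the plain arithmetic mean and population standard deviation of the rating-scale responses. A minimal sketch with hypothetical 5-point Likert responses (the study's raw data are not reproduced here):

```python
import math

def mean_sd(ratings):
    """Population mean (mu) and standard deviation (sigma), as typically
    reported for satisfaction rating scales."""
    n = len(ratings)
    mu = sum(ratings) / n
    sigma = math.sqrt(sum((r - mu) ** 2 for r in ratings) / n)
    return mu, sigma

# Hypothetical 5-point Likert responses from a handful of students
mu, sigma = mean_sd([4, 5, 4, 3, 4, 5, 4])
print(round(mu, 2), round(sigma, 2))
```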
Procedia PDF Downloads 248
31713 The Relationships between Human Resource Management and Entrepreneurship: Case Study SME in Thailand
Authors: Bella Llego
Abstract:
This study aims to investigate the relationships between human resource management and entrepreneurship from the viewpoints of owner-managers and employees within SMEs in Thailand. The research used a qualitative, phenomenological method, focusing on women in top management positions and their career paths, with participants selected by purposive sampling. The results showed that human resource management relates positively to corporate entrepreneurship: the recruitment process, worker training, professional career development and the reward system all influence entrepreneurs’ knowledge and the innovation of corporate entrepreneurship in a reliable way. The key informants further suggested that women’s career experiences predisposed them to find an alternative route to entrepreneurship, despite having reached top management. Understanding the factors that successfully contribute to the development of women entrepreneurs from a career development perspective is a critical endeavour for any type of organization.Keywords: entrepreneurship, firm performance, human resource management, work efficiency
Procedia PDF Downloads 270
31712 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning-driven approaches based on vibration responses have attracted growing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously-generated measurements of the intact structure to unlabeled damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering; its architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering and supervised classification algorithms into an integrated approach to assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network does not rely on any labeled data of damage scenarios but only on several samples of the intact structure, which indicates a significant superiority in model adaptability and feasible applicability in practice.Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
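The pseudo-label step rests on fuzzy-clustering memberships. A minimal one-dimensional sketch of the standard fuzzy c-means membership rule (not the paper's optimized variant; the centroids and sample value are hypothetical):

```python
def fuzzy_memberships(x, centroids, m=2.0):
    """Fuzzy c-means membership of sample x to each centroid, with
    fuzzifier m: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)).
    Memberships sum to 1 and act as soft pseudo-labels."""
    d = [abs(x - c) for c in centroids]
    if 0.0 in d:  # sample sits exactly on a centroid: hard assignment
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** p for dj in d) for di in d]

# Toy example: a feature value between two damage-state centroids
u = fuzzy_memberships(1.0, [0.0, 4.0])
print([round(v, 2) for v in u])
```

In the network, such memberships over learned autoencoder features would supply the soft labels that are then propagated to unlabeled damage scenarios.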
Procedia PDF Downloads 97
31711 Transpersonal Model of an Individual's Creative Experience
Authors: Anatoliy Kharkhurin
Abstract:
The modifications that the prefix ‘trans-’ refers to start within a person. This presentation focuses on the transpersonal, that which goes beyond the individual (trans-personal) to encompass wider aspects of the humanities, specifically peak experience as the culminating stage of the creative act. It proposes a model according to which the peak experience results from a harmonious vibration of four spheres, which transcend an individual’s capacities and bring one to a qualitatively different level of experience. Each sphere represents an aspect of creative activity: superconscious, intellectual, emotive and active. Each sphere corresponds to one of four creative functions: authenticity, novelty, aesthetics, and utility, respectively. The creative act starts in the superconscious sphere: the supreme pleasure of Creation is reflected in creative pleasure, which is realized in creative will. These three instances serve as the source of force axes, which penetrate the other spheres and, at the point of infiltration, establish restrictive, expansive, and integrative principles, respectively; the latter balances the other two and ensures a harmonious vibration within a sphere. This Hegelian-like triad is realized within each sphere in the form of creative capacities. The intellectual sphere nurtures the capacities to invent and to elaborate, integrated by the capacity to conceptualize. The emotive sphere nurtures satiation and restrictive capacities, integrated by the capacity to balance. The active sphere nurtures goal orientation and stabilization capacities, integrated by the capacity for self-expression. All four spheres vibrate within each other, with the superconscious sphere at the core of the structure followed by the intellectual, emotive, and active spheres, respectively, thereby reflecting the path of creative production. If the spheres vibrate in phase, their amplitudes amplify the creative energy; if in antiphase, the amplitudes reduce it. Thus, the creative act is perceived as a continuum, with perfectly harmonious vibration within and between the spheres at one end and perfectly disharmonious vibration at the other.Keywords: creativity, model, transpersonal, peak experience
Procedia PDF Downloads 354
31710 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality
Authors: Heichia Wang, Yalan Chao
Abstract:
Social inequality is a persistent problem, and education is one of the ways to solve it. At present, vulnerable groups often have less geographic access to educational resources; communication equipment, however, is easier for them to obtain. Now that information and communication technology (ICT) has entered the field of education, we can take advantage of the convenience it provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can hold discussions in an online chat room without limitations of time or place. However, precisely because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, in an environment that is not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstanding between teachers and students. To help teachers and students better understand each other's views, this study aims to analyze students' short discussion texts and classify the students into several types of learning-problem groups. Unlike most neural network programs, this study also attempts to extend the possibly incomplete descriptions in short texts with external resources prior to classification, in order to improve classification performance. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze short discussion content between teachers and students in an ICT education environment and divides students into several main types of learning-problem groups; sub-categories within each major type are further clustered to indicate each student's specific problems. In the empirical process, the chat records between teachers and students and the course materials are pre-processed, and a matching system compares the most similar parts of the teaching material with each student's chat history to improve future classification performance. The short text classification function then uses the CNN to classify the enriched chat records into several major learning problems based on theory-driven titles. By applying these modules, this research hopes to clarify students' main learning problems and inform teachers where the focus of future courses should lie, thus improving the ICT education environment.Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network
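A full CNN is beyond a sketch, but the core convolution-and-pooling step a text CNN applies to a token sequence can be illustrated in miniature. The one-dimensional "embeddings" and filter weights below are hypothetical toy values:

```python
def conv1d_maxpool(embeddings, kernel):
    """One text-CNN feature: slide a window of len(kernel) over the token
    embeddings (here 1-D, one value per token), take the dot product at
    each position, then max-pool over positions."""
    k = len(kernel)
    scores = [
        sum(kernel[j] * embeddings[i + j] for j in range(k))
        for i in range(len(embeddings) - k + 1)
    ]
    return max(scores)

# Toy "sentence" of 6 token embeddings and one bigram filter
feature = conv1d_maxpool([0.1, 0.9, 0.8, 0.2, 0.0, 0.5], [1.0, 1.0])
print(round(feature, 2))
```

A real text CNN stacks many such filters over multi-dimensional embeddings and feeds the pooled features into a classifier over the learning-problem categories.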
Procedia PDF Downloads 128
31709 Value Proposition and Value Creation in Network Environments: An Experimental Study of Academic Productivity via the Application of Bibliometrics
Authors: R. Oleko, A. Saraceni
Abstract:
The aim of this research is to provide a rigorous evaluation of the existing academic productivity in relation to value proposition and value creation in networked environments. Bibliometrics is a rigorous approach used to structure existing literature in an objective and reliable manner; to that aim, a thorough bibliometric analysis was performed in order to assess the large volume of information encountered in a structured and reliable way. A clear distinction between networks and service networks was considered indispensable in order to capture the effects of each network type's properties on value creation processes. Via bibliometric parameters, this review captures the state of the art in both value proposition and value creation. The results provide a rigorous assessment of the annual scientific production, the most influential journals, and the leading corresponding-author countries. By means of citation analysis, the most frequently cited manuscripts and countries for each network type were identified. Moreover, by means of co-citation analysis, existing collaborative patterns were detected through the creation of reference co-citation networks and country collaboration networks. Co-word analysis was also performed in order to provide an overview of the conceptual structure of both networks and service networks. The acquired results provide a rigorous and systematic assessment of the existing scientific output in networked settings and thus contribute to a better understanding of the distinct impact of service networks on value proposition and value creation when compared to regular networks. The implications derived can serve as a guide for informed decision-making by practitioners during network formation and provide a structured evaluation that can stand as a basis for future research in the field.Keywords: bibliometrics, co-citation analysis, networks, service networks, value creation, value proposition
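Reference co-citation counting, the basis of the co-citation networks described, can be sketched as follows. The papers and reference labels are hypothetical:

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(reference_lists):
    """Count how often each pair of references is cited together.
    Each inner list is the bibliography of one manuscript; a pair's
    count is the number of manuscripts citing both references."""
    counts = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

# Three hypothetical manuscripts and their reference lists
papers = [["A", "B", "C"], ["A", "B"], ["B", "C"]]
counts = co_citation_counts(papers)
print(counts[("A", "B")], counts[("B", "C")], counts[("A", "C")])
```

The resulting pair counts are the edge weights of the co-citation network; the same counting over author countries yields the country collaboration network.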
Procedia PDF Downloads 203
31708 Selecting Graduates for the Interns' Award by Using Multisource Feedback Process: Does It Work?
Authors: Kathryn Strachan, Sameer Otoom, Amal AL-Gallaf, Ahmed Al Ansari
Abstract:
Introduction: Introducing a reliable method to select graduates for an award in higher education can be challenging, but it is not impossible. Multisource feedback (MSF) is a popular assessment tool that relies on evaluations by different groups of people, including physicians and non-physicians. It is useful for assessing several domains, including professionalism, communication and collaboration, and may be useful for selecting the best interns to receive a university award. Methods: 16 graduates responded to an invitation to participate in the student award, which was conducted by the Royal College of Surgeons in Ireland - Medical University of Bahrain (RCSI Bahrain) using the MSF process. Five individuals from the following categories rated each participant: physicians, nurses, and fellow students. RCSI Bahrain graduates were assessed in the following domains: professionalism, communication, and collaboration. The mean and standard deviation were calculated, and the award was given to the graduate who scored highest among his/her colleagues. Cronbach's coefficient was used to determine the questionnaire's internal consistency and reliability, and factor analysis was conducted to examine construct validity. Results: 16 graduates participated in the RCSI Bahrain interns' award based on the MSF process, giving a 16.5% response rate. The instrument was found to be suitable for factor analysis and showed a three-factor solution representing 79.3% of the total variance. Reliability analysis of internal consistency indicated that the full scale of the instrument had high internal consistency (Cronbach's α 0.98). Conclusion: This study found the MSF process to be reliable and valid for selecting the best graduates for the interns' award. However, the low response rate may suggest that the process is not feasible for allowing the majority of students to participate in the selection process. Further research may be required to support the feasibility of the MSF process in selecting graduates for the university award.Keywords: MSF, RCSI, validity, Bahrain
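Cronbach's α, the internal-consistency statistic reported above, has a simple closed form: α = k/(k-1) · (1 - Σ item variances / variance of totals). A sketch with hypothetical ratings (three questionnaire items, four raters; the study's raw data are not reproduced):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` is one list of
    scores per questionnaire item, aligned by rater."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / var(totals))

# Hypothetical ratings: 3 items scored by 4 raters each
alpha = cronbach_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]])
print(round(alpha, 2))
```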
Procedia PDF Downloads 342
31707 The Effect of Bath Composition for Hot-Dip Aluminizing of AISI 4140 Steel
Authors: Aptullah Karakas, Murat Baydogan
Abstract:
Hot-dip aluminizing (HDA) is one of several aluminizing methods used to form wear-, corrosion- and oxidation-resistant aluminide layers on a surface. In this method, the substrate is dipped into a molten aluminum bath, held in the bath for several minutes, and cooled to room temperature in air. A subsequent annealing after the HDA process is generally performed. The main advantage of HDA is its very low investment cost in comparison with other aluminizing methods such as chemical vapor deposition (CVD), pack aluminizing and metalizing. In the HDA process, Al or Al-Si molten baths are mostly used. In this study, however, three different Al alloys, Al4043 (Al-Si), Al5356 (Al-Mg) and Al7020 (Al-Zn), were used as the molten bath in order to see their effects on the morphological and mechanical properties of the resulting aluminide layers. AISI 4140 low-alloy steel was used as the substrate. The parameters of the HDA process were bath composition, bath temperature, and dipping time. These parameters were considered within a Taguchi L9 orthogonal array. After the HDA process and subsequent diffusion annealing, coating thickness measurement, microstructural analysis and hardness measurement of the aluminide layers were conducted. The optimum process parameters were evaluated according to coating morphology, such as cracks, Kirkendall porosity and hardness of the coatings. According to the results, a smooth and clean aluminide layer with less Kirkendall porosity and fewer cracks was observed on the sample that was aluminized in the molten Al7020 bath at 700 °C for 10 minutes and subsequently diffusion annealed at 750 °C. The hardness of the aluminide layer was between 1100 and 1300 HV, and the coating thickness was approximately 400 µm. The results were promising in that a hard and thick aluminide layer with little Kirkendall porosity and cracking could be formed. It is, therefore, concluded that an Al7020 bath may be used in the HDA process on an AISI 4140 steel substrate.Keywords: hot-dip aluminizing, microstructure, hardness measurement, diffusion annealing
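The Taguchi L9(3^4) orthogonal array mentioned above is a fixed standard design: nine runs cover up to four three-level factors so that every pair of columns contains each level combination exactly once. A sketch of the array with a programmatic check of that orthogonality property (the assignment of the three HDA factors to columns is the study's choice, not shown here):

```python
from collections import Counter
from itertools import combinations

# Standard Taguchi L9(3^4) orthogonal array, levels coded 1-3. Three of
# the four columns would carry bath alloy, bath temperature, dipping time.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array):
    """Every pair of columns must contain each of the 9 ordered
    level pairs exactly once."""
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        pairs = Counter(zip(cols[a], cols[b]))
        if len(pairs) != 9 or any(c != 1 for c in pairs.values()):
            return False
    return True

print(len(L9), is_orthogonal(L9))
```

The payoff is that 9 experiments, instead of the full 3^3 = 27 factorial for three factors, suffice to estimate each factor's main effect.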
Procedia PDF Downloads 76
31706 Risk Factors for Defective Autoparts Products Using Bayesian Method in Poisson Generalized Linear Mixed Model
Authors: Pitsanu Tongkhow, Pichet Jiraprasertwong
Abstract:
This research investigates risk factors for defective products in autoparts factories. Under a Bayesian framework, a generalized linear mixed model (GLMM) in which the dependent variable, the number of defective products, has a Poisson distribution is adopted. Its performance is compared with the Poisson GLM under a Bayesian framework. The factors considered are production process, machines, and workers. The products coded RT50 are observed. The study found that the Poisson GLMM is more appropriate than the Poisson GLM. For the production Process factor, the highest risk of producing defective products is Process 1, for the Machine factor, the highest risk is Machine 5, and for the Worker factor, the highest risk is Worker 6.Keywords: defective autoparts products, Bayesian framework, generalized linear mixed model (GLMM), risk factors
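The paper's Poisson GLMM is fit under a full Bayesian framework with process, machine, and worker effects; a far simpler Bayesian illustration of Poisson count data is the conjugate Gamma prior, which gives a closed-form posterior for a single defect rate. All counts and prior parameters below are hypothetical:

```python
def gamma_posterior(alpha0, beta0, counts):
    """Conjugate Bayesian update for a Poisson rate: with a
    Gamma(alpha0, beta0) prior and observed counts, the posterior is
    Gamma(alpha0 + sum(counts), beta0 + n). Returns the posterior
    parameters and the posterior mean rate."""
    alpha = alpha0 + sum(counts)
    beta = beta0 + len(counts)
    return alpha, beta, alpha / beta

# Hypothetical daily defect counts for two production processes
_, _, rate1 = gamma_posterior(1.0, 1.0, [5, 7, 6, 8])
_, _, rate2 = gamma_posterior(1.0, 1.0, [2, 1, 3, 2])
print(round(rate1, 2), round(rate2, 2))
```

Comparing posterior rates across factor levels is the intuition behind the risk ranking; the GLMM additionally models all factors jointly and adds random effects, which requires MCMC rather than a closed form.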
Procedia PDF Downloads 570
31705 Integrated Braking and Traction Torque Vectoring Control Based on Vehicle Yaw Rate for Stability Improvement of All-Wheel-Drive Electric Vehicles
Authors: Mahmoud Said Jneid, Péter Harth
Abstract:
EVs with independent wheel driving greatly improve vehicle stability in poor road conditions. Wheel torques can be precisely controlled through electric motors driven using advanced technologies. As a result, various types of advanced chassis assistance systems (ACAS) can be implemented. This paper proposes an integrated torque vectoring control based on wheel slip regulation in both braking and traction modes. For generating the corrective yaw moment, the vehicle yaw rate and sideslip angle are monitored. The corrective yaw moment is distributed into traction and braking torques based on an equal-opposite components approach. The proposed torque vectoring control scheme is validated in simulation and the results show its superiority when compared to conventional schemes.Keywords: all-wheel-drive, electric vehicle, torque vectoring, regenerative braking, stability control, traction control, yaw rate control
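The equal-opposite torque split described above can be sketched with a simple proportional yaw-rate controller. This is not the paper's controller; the gain, track width, wheel radius, and the assumption of two driven wheels per side are all hypothetical stand-ins:

```python
def torque_vectoring(yaw_ref, yaw_meas, k_p=800.0, track=1.6, wheel_r=0.3):
    """Corrective yaw moment from the yaw-rate error (P control),
    distributed as equal and opposite longitudinal forces on the left and
    right sides; with two driven wheels per side, each wheel carries half
    its side's force, converted to torque via the wheel radius."""
    m_z = k_p * (yaw_ref - yaw_meas)   # corrective yaw moment [Nm]
    f_side = m_z / track               # per-side longitudinal force [N]
    t_left = -f_side * wheel_r / 2     # per-wheel torque, left side [Nm]
    t_right = f_side * wheel_r / 2     # per-wheel torque, right side [Nm]
    return m_z, t_left, t_right

# Understeer example: measured yaw rate 0.25 rad/s, reference 0.30 rad/s
m_z, t_l, t_r = torque_vectoring(0.30, 0.25)
print(round(m_z, 1), round(t_l, 2), round(t_r, 2))
```

The equal-opposite split keeps the total traction demand unchanged while generating the corrective moment; in braking mode the same split is applied to regenerative braking torques.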
Procedia PDF Downloads 83
31704 Examining Postcolonial Corporate Power Structures through the Lens of Development Induced Projects in Africa
Authors: Omogboyega Abe
Abstract:
This paper examines the relationships between socio-economic inequalities of power, race and wealth engendered by corporate structures and domination in postcolonial Africa. The paper further considers how land, as an epitome of property and power for the locals, paved the way for capitalist accumulation and control in the hands of transnational corporations. European colonization of Africa was contingent on settler colonialism, under which properties, including land, were re-modified as extractive resources for primitive accumulation. In developing Africa's extractive resources, transnational corporations (TNCs) usurped states' structures and domination over native land. The usurpation, or corporate capture, that persists to date has led to remonstrations and an arguably counter-productive approach to development projects. In some communities, the mere mention of extractive companies triggers resentment. The paradigm of state capture and state autonomy is simply inadequate either to describe or to resolve the play of forces or actors responsible for severe corporate-induced human rights violations in emerging markets. Moreover, even if deadly working conditions are conceived as some regulatory failure, it is hard to tell whose failure. The analysis in this paper is that the complexity and ambiguity evidenced by the multiple regimes and the political and economic forces shaping production, consumption, and distribution of socio-economic variables are not exceptional in emerging markets. Instead, the varied experience in developing countries provides a window for seeing what we face in understanding and theorizing the structure and operation of the global economic and regulatory order in general.Keywords: colonial, emerging markets, business, human rights, corporation
Procedia PDF Downloads 66
31703 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique
Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi
Abstract:
The sand casting process involves pattern making, mould making, metal pouring and shake-out. Every step in the sand moulding process is critical for the production of good quality castings. However, waste generated during the sand moulding operation and lack of quality cause performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced in the sand moulding process become visible only in the final product (the casting), which results in an increased amount of scrap, reduced sales and increased costs in the foundry. The purpose of this research is to propose a six sigma (DMAIC: Define, Measure, Analyze, Improve and Control) intervention in sand moulding foundries to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six sigma is a data-driven method for process improvement that aims to eliminate variation in business processes using statistical control methods. Six sigma focuses on business performance improvement through quality initiatives using Ishikawa's seven basic tools of quality. The objectives of six sigma are to eliminate features that affect productivity, profit and the meeting of customers' demands, and it has become one of the most important techniques for attaining competitive advantage. Competitive advantage for sand casting foundries in South Africa means improved plant maintenance processes, improved product quality and proper utilization of resources, especially scarce resources. Defects such as sand inclusions, flashes and sand burn-on were identified, using the six sigma technique, as resulting from inefficiencies in the sand moulding process; the causes were found to be wrong mould design due to the pattern used and poor ramming of the moulding sand in the foundry. Six sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved to ensure quality improvement in the foundry. The process capability of the sand moulding process was measured to understand the current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity and competitive advantage.Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)
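The process-capability measurement in the Measure phase reduces to the standard Cp/Cpk formulas. A sketch with hypothetical moulding-sand moisture data and specification limits (not the foundry's actual figures):

```python
import math

def process_capability(samples, lsl, usl):
    """Cp and Cpk from the sample mean and standard deviation:
    Cp = (USL - LSL) / 6s,  Cpk = min(USL - mean, mean - LSL) / 3s."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return cp, cpk

# Hypothetical green-sand moisture contents (%) against spec limits 2.8-4.2
cp, cpk = process_capability(
    [3.5, 3.6, 3.4, 3.5, 3.7, 3.3, 3.5, 3.5], 2.8, 4.2
)
print(round(cp, 2), round(cpk, 2))
```

Cp near Cpk indicates a well-centred process; a capability below about 1.33 would flag the moulding step for improvement in the Improve phase.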
Procedia PDF Downloads 195
31702 Evaluation of Arsenic Removal in Soils Contaminated by the Phytoremediation Technique
Authors: V. Ibujes, A. Guevara, P. Barreto
Abstract:
The concentration of arsenic represents a serious threat to human health: it is a bioaccumulative toxic element and is transferred through the food chain. In Ecuador, values of 0.0423 mg/kg As are registered in potatoes from the slopes of the Tungurahua volcano. The increase of arsenic contamination in Ecuador is mainly due to mining activity, since the process of gold extraction generates toxic tailings containing mercury. In the province of Azuay, due to mining activity, the soil reaches concentrations of 2,500 to 6,420 mg/kg As, whereas in the province of Tungurahua arsenic concentrations of 6.9 to 198.7 mg/kg are found due to volcanic eruptions. Given this arsenic contamination, the present investigation addresses the remediation of soils in the provinces of Azuay and Tungurahua by the phytoremediation technique and the definition of an extraction methodology based on analysis of arsenic in the soil-plant system. The methodology consists of selecting the two types of plants that show the best arsenic removal capacity in synthetic 60 μM As solutions, the lowest mortality percentage and the best resistance in hydroponics. The arsenic concentrations for each plant were obtained by taking 10 ml aliquots and analyzing them with ICP-OES (inductively coupled plasma optical emission spectrometry) equipment. Soils were contaminated with synthetic arsenic solutions by the capillarity method to achieve arsenic concentrations of 13 and 15 mg/kg. Subsequently, the two types of plants were evaluated for their ability to reduce the concentration of arsenic in the soils over 7 weeks. The global variance for the soil types was obtained with the InfoStat program. To measure the changes in arsenic concentration in the soil-plant system, the Rhizo and Wenzel arsenic extraction methodologies were used, followed by analysis with ICP-OES (Optima 8000, Perkin Elmer). As a result, the selected plants were bluegrass and llanten, due to their high arsenic removal percentages of 55% and 67% and low mortality rates of 9% and 8%, respectively. In conclusion, the Azuay soil with an initial concentration of 13 mg/kg As reached concentrations of 11.49 and 11.04 mg/kg As for bluegrass and llanten respectively, and from the initial concentration of 15 mg/kg As it reached 11.79 and 11.10 mg/kg As for bluegrass and llanten after 7 weeks. The Tungurahua soil with an initial concentration of 13 mg/kg As reached concentrations of 11.56 and 12.16 mg/kg As for bluegrass and llanten respectively, and from the initial concentration of 15 mg/kg As it reached 11.97 and 12.27 mg/kg As for bluegrass and llanten after 7 weeks. The best arsenic extraction methodology for the soil-plant system is Wenzel's.Keywords: bluegrass, llanten, phytoremediation, soil of Azuay, soil of Tungurahua, synthetic arsenic solution
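The soil concentrations above translate into removal efficiencies via the simple percentage formula; checking the Azuay 13 mg/kg case with the figures quoted in the abstract:

```python
def removal_percent(initial, final):
    """Percentage of arsenic removed, from initial and final
    soil concentrations (same units, e.g. mg/kg)."""
    return (initial - final) / initial * 100

# Azuay soil, 13 mg/kg initial concentration, after 7 weeks
bluegrass = removal_percent(13.0, 11.49)
llanten = removal_percent(13.0, 11.04)
print(round(bluegrass, 1), round(llanten, 1))
```

Note these soil-phase removals are far smaller than the 55% and 67% removals measured in the synthetic hydroponic solutions during plant selection.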
Procedia PDF Downloads 103
31701 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade
Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim
Abstract:
Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the facade with performance in mind can improve the performance of the building and lower its maintenance cost. In general, however, optimizing two or more performance factors confronts the limits of time and computational tools. The optimization phase typically repeats a series of processes that generate alternatives and analyze them until the desired performance is achieved. In particular, as geometric complexity or precision increases, the computational resources and time needed to find the required performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristic techniques learned through experimentation and experience can reduce wasted resources. This study proposes and verifies a method that uses machine learning on the design geometry and quantitative performance to optimize a double building envelope composed of perforated panels. The proposed method achieves the required performance with fewer resources by supplementing the existing approach, which cannot efficiently evaluate the complex shape of the perforated panel.Keywords: building envelope, machine learning, perforated metal, multi-factor optimization, façade
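One way to read the proposed heuristic is as surrogate-assisted search: spend the expensive performance analyses sparingly, and pre-screen the remaining candidates with a cheap learned predictor. The sketch below uses a toy one-variable "perforation ratio" objective and a 1-nearest-neighbour surrogate; every function and number is a stand-in, not the paper's model:

```python
import random

def expensive_performance(perforation):
    """Stand-in for a costly daylight/energy simulation of a panel design;
    here the toy optimum lies near a 0.35 perforation ratio."""
    return -(perforation - 0.35) ** 2

def surrogate_search(n_candidates=200, n_evals=20, seed=7):
    """Evaluate a few designs with the expensive model, rank a larger
    candidate pool with a 1-nearest-neighbour surrogate, then spend one
    more expensive evaluation on the most promising candidate."""
    rng = random.Random(seed)
    evaluated = {}
    for _ in range(n_evals):  # small expensive-evaluation budget
        x = rng.random()
        evaluated[x] = expensive_performance(x)

    def surrogate(x):  # predict from the closest already-evaluated design
        nearest = min(evaluated, key=lambda e: abs(e - x))
        return evaluated[nearest]

    pool = [rng.random() for _ in range(n_candidates)]
    promising = max(pool, key=surrogate)          # cheap pre-screening
    evaluated[promising] = expensive_performance(promising)
    return max(evaluated, key=evaluated.get)      # best truly-evaluated design

best = surrogate_search()
print(0.0 <= best <= 1.0)
```

A real implementation would replace the toy objective with the simulation, the single ratio with the panel's geometric parameters, and the nearest-neighbour lookup with a trained regression model, but the budget-splitting logic is the same.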
Procedia PDF Downloads 224