Search results for: Mobile listening applications
1616 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity
Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz
Abstract:
The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing rapidly across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows, more powerful, quality-of-service-aware, and power-efficient computing units become necessary. Recently, cellular connectivity has drawn increasing attention as a means of ensuring reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Excess HOs may lead to “ping-pong” between the UAVs and the serving cells, degrading quality of service and increasing energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (Rural, Semi-rural and Urban areas). We also consider the impact of the decision distance at which the drone may make a switching decision. Our results show that the Q-learning-based algorithm significantly reduces the average number of HOs compared to a baseline case in which the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e., Rural, Semi-rural, or Urban.
Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance
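The core idea of the abstract above — tabular Q-learning that penalizes each handover so the policy learns to stay on a cell rather than ping-pong — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the state space, reward of -1 per handover, and all hyper-parameter values are assumptions.

```python
import random

def q_learning_handover(episodes=500, n_cells=3, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch: state = currently serving cell,
    action = cell to attach to at the next decision point.
    The reward penalizes each handover (cell switch)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_cells) for a in range(n_cells)}
    for _ in range(episodes):
        state = rng.randrange(n_cells)
        for _ in range(20):  # decision points along the flight path
            if rng.random() < eps:  # epsilon-greedy exploration
                action = rng.randrange(n_cells)
            else:
                action = max(range(n_cells), key=lambda a: q[(state, a)])
            reward = -1.0 if action != state else 0.0  # cost of a handover
            best_next = max(q[(action, a)] for a in range(n_cells))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = action
    return q
```

After training, the learned Q-values favor staying on the serving cell over switching, which is the behavior the paper's reward shaping encourages.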
Procedia PDF Downloads 110
1615 Effect of Fire Retardant Painting Product on Smoke Optical Density of Burning Natural Wood Samples
Authors: Abdullah N. Olimat, Ahmad S. Awad, Faisal M. AL-Ghathian
Abstract:
Natural wood is used in many applications in Jordan, such as furniture, partition construction, and cupboards. Smoke produced by the combustion of selected wood samples was studied experimentally. Smoke generated from the burning of natural wood is considered a major cause of death in furniture fires. The critical parameter for life safety in fires is the time available for escape, so the visual obscuration due to smoke released during a fire is taken into consideration. The amount of smoke produced by burning wood directly affects the time available for the occupants to escape. To protect the lives of building occupants during fire growth, fire retardant painting products were tested. The tested samples of natural wood include Beech, Ash, Beech Pine, and white Beech Pine. A smoke density chamber manufactured by Fire Testing Technology was used to measure smoke properties. The test procedure was carried out according to ISO 5659. The wood samples, in a horizontal orientation, were exposed to a non-flaming vertical radiant heat flux of 25 kW/m². The main objective of the current study is to carry out experimental tests on samples of natural woods to evaluate the ability to escape in case of fire and the fire safety requirements. Specific optical density, transmittance, thermal conductivity, and mass loss are the main measured parameters. Comparisons between painted and unpainted samples are also carried out for the selected woods.
Keywords: extinction coefficient, optical density, transmittance, visibility
Procedia PDF Downloads 238
1614 A Structure-Switching Electrochemical Aptasensor for Rapid, Reagentless and Single-Step, Nanomolar Detection of C-Reactive Protein
Authors: William L. Whitehouse, Louisa H. Y. Lo, Andrew B. Kinghorn, Simon C. C. Shiu, Julian A. Tanner
Abstract:
C-reactive protein (CRP) is an acute-phase reactant and a sensitive indicator for sepsis and other life-threatening pathologies, including systemic inflammatory response syndrome (SIRS). Currently, clinical turn-around times for established CRP detection methods range from 30 minutes to hours, or even days, from centralized laboratories. Here, we report the development of an electrochemical biosensor using redox-probe-tagged DNA aptamers functionalized onto cheap, commercially available screen-printed electrodes. Binding-induced conformational switching of the CRP-targeting aptamer induces a specific and selective signal-ON event, which enables single-step and reagentless detection of CRP in as little as 1 minute. The aptasensor dynamic range spans 5-1000 nM (R = 0.97) or 5-500 nM (R = 0.99) in 50% diluted human serum, with an LOD of 3 nM, corresponding to sensitivity two orders of magnitude below the clinically relevant cut-off for CRP. The sensor is stable for up to one week and can be reused numerous times, as judged from repeated real-time dosing and dose-response assays. By decoupling binding events from the signal induction mechanism, structure-switching electrochemical aptamer-based sensors (SS-EABs) provide considerable advantages over their adsorption-based counterparts. Our work expands the repertoire of such sensors reported in the literature and is the first instance of an SS-EAB for reagentless CRP detection. We hope this study can inspire further investigations into the suitability of SS-EABs for diagnostics, which will aid translational R&D toward fully realized devices aimed at point-of-care applications or for use more broadly by the public.
Keywords: structure-switching, C-reactive protein, electrochemical, biosensor, aptasensor
Procedia PDF Downloads 71
1613 Steel Industry Waste as Recyclable Raw Material for the Development of Ferrous-Aluminum Alloys
Authors: Arnold S. Freitas Neto, Rodrigo E. Coelho, Erick S. Mendonça
Abstract:
The study aims to assess whether high-purity iron powder in iron-aluminum alloys can be replaced by SAE 1020 steel chips, with an atomic proportion of 50% for each element. Chips of SAE 1020 are discarded in industrial processes; thus, using SAE 1020 as a replacement for iron increases the sustainability of ferrous alloys by recycling industrial waste. The alloys were processed by high-energy milling, of which the main advantage is the minimal loss of raw material. The raw material for three of the six samples was high-purity iron powder and recyclable aluminum cans. For the other three samples, the high-purity iron powder was replaced with chips of SAE 1020 steel. The process started with the separate milling of the aluminum and SAE 1020 steel chips to obtain the powders. Subsequently, the raw materials were mixed in the pre-defined proportions, milled together for five hours, and then subjected to closed-die hot compaction at 500 °C. Thereafter, the compacted samples underwent the heat treatments known as sintering and solubilization. All samples were sintered for one hour, and four samples were solubilized for either 4 or 10 hours under well-controlled atmosphere conditions. Lastly, the composition and hardness of the samples were analyzed by optical microscopy, scanning electron microscopy, and hardness testing. The results showed similar chemical compositions and interesting hardness levels with low standard deviations. This verified that SAE 1020 steel chips can be a low-cost alternative to high-purity iron powder and could possibly replace it in industrial applications.
Keywords: Fe-Al alloys, high energy milling, iron-aluminum alloys, metallography characterization, powder metallurgy, recycling ferrous alloy, SAE 1020 steel recycling
Procedia PDF Downloads 360
1612 Nanopharmaceutical: A Comprehensive Appearance of Drug Delivery System
Authors: Mahsa Fathollahzadeh
Abstract:
The various nanoparticles employed in drug delivery applications include micelles, liposomes, solid lipid nanoparticles, polymeric nanoparticles, functionalized nanoparticles, nanocrystals, cyclodextrins, dendrimers, and nanotubes. Micelles, composed of amphiphilic block copolymers, can encapsulate hydrophobic molecules, allowing for targeted delivery. Liposomes, vesicular structures made up of phospholipids, can encapsulate both hydrophobic and hydrophilic molecules, providing a flexible platform for delivering therapeutic agents. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are designed to improve the stability and bioavailability of lipophilic drugs. Polymeric nanoparticles, such as poly(lactic-co-glycolic acid) (PLGA), are biodegradable and can be engineered to release drugs in a controlled manner. Functionalized nanoparticles, coated with targeting ligands or antibodies, can specifically target diseased cells or tissues. Nanocrystals, engineered to have specific surface properties, can enhance the solubility and bioavailability of poorly soluble drugs. Cyclodextrins, doughnut-shaped molecules with hydrophobic cavities, can form complexes with hydrophobic molecules, allowing for improved solubility and bioavailability. Dendrimers, branched polymers with a central core, can be designed to deliver multiple therapeutic agents simultaneously. Nanotubes and metallic nanoparticles, such as gold nanoparticles, offer real-time tracking capabilities and can be used to detect biomolecular interactions. The use of these nanoparticles has revolutionized the field of drug delivery, enabling targeted and controlled release of therapeutic agents, reduced toxicity, and improved patient outcomes.
Keywords: nanotechnology, nanopharmaceuticals, drug-delivery, proteins, ligands, nanoparticles, chemistry
Procedia PDF Downloads 55
1611 Low Voltage and High Field-Effect Mobility Thin Film Transistor Using Crystalline Polymer Nanocomposite as Gate Dielectric
Authors: Debabrata Bhadra, B. K. Chaudhuri
Abstract:
Low-voltage operation of organic thin-film transistors (OFETs) is currently a prevailing issue. We have fabricated an anthracene thin-film transistor (TFT) with an ultrathin layer (~450 nm) of poly(vinylidene fluoride) (PVDF)/CuO nanocomposite as the gate insulator. We obtained a device with excellent electrical characteristics at low operating voltages (<1 V). Films with different numbers of layers were also prepared to optimize the gate insulator for various static dielectric constants (εr). The capacitance density, leakage current at 1 V gate voltage, and electrical characteristics of OFETs with single- and multi-layer films were investigated. The device was found to have a high field-effect mobility of 2.27 cm²/Vs, a threshold voltage of 0.34 V, an exceptionally low subthreshold slope of 380 mV/decade, and an on/off ratio of 10⁶. Such a favorable combination of properties means that these OFETs can be operated successfully at voltages below 1 V. A very simple fabrication process has been used, along with a stepwise poling process, to enhance the pyroelectric effects on device performance. The output characteristics of the OFET changed after poling and exhibited a linear current-voltage relationship, evidence of large polarization. The temperature-dependent response of the device was also investigated. The stable performance of the OFET after the poling operation makes it reliable in temperature-sensing applications. Such high-ε CuO/PVDF gate dielectrics appear to be highly promising candidates for organic non-volatile memory and sensor field-effect transistors (FETs).
Keywords: organic field effect transistors, thin film transistor, gate dielectric, organic semiconductor
Procedia PDF Downloads 245
1610 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System
Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur
Abstract:
Sign Language (SL) is used by deaf people and by others who cannot speak but can hear, or who have problems with spoken languages due to a disability. It is a visual gesture language that makes use of one or both hands, the arms, face, and body to convey meanings and thoughts. An SL automation system is an effective way to provide an interface for communicating with hearing people using a computer. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This work also presents a literature review of the SL corpora available for various sign languages over the years. An ISL generation system requires a written form of SL, and certain techniques are available for writing SL. The system uses the Hamburg Sign Language Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin-panel interface to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be developed and stored in a database, and these notations can be converted into corresponding SiGML files manually. The system takes a natural-language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, transportation services, etc. This work will help researchers understand the various techniques used for writing SL and for building sign language generation systems.
Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian sign language (ISL), sign language
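The dictionary described above associates words of a spoken language with their sign notations. A minimal sketch of that association, assuming a simple (language, word) → notation mapping — the entry format and placeholder notation strings are illustrative, not the system's actual HamNoSys records:

```python
def build_sign_lookup(entries):
    """Multilingual dictionary sketch: map a (language, word) pair to a
    HamNoSys notation string. Notation strings here are placeholders,
    not real HamNoSys glyph sequences."""
    return {(lang, word.lower()): ham for lang, word, ham in entries}

def lookup_sign(lookup, lang, word):
    """Case-insensitive lookup; returns None for out-of-vocabulary words."""
    return lookup.get((lang, word.lower()))
```

In the real system, the retrieved notation would then be converted to a SiGML file that drives the 3D avatar animation.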
Procedia PDF Downloads 232
1609 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject ID from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
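The normalizing-flow machinery underlying AP-CDE rests on the change-of-variables identity: for a one-to-one transform z = f(y), log p(y) = log p_z(f(y)) + log |det ∂f/∂y|. A minimal one-dimensional sketch with an affine map and a standard Gaussian base density (not the authors' architecture, which uses a learned multi-layer flow and a split latent [zₚ, zₙ]):

```python
import math

def gaussian_logpdf(z):
    """Log-density of the standard normal base distribution."""
    return -0.5 * (z * z + math.log(2 * math.pi))

def affine_flow_logdensity(y, mu, sigma):
    """Change-of-variables for the one-to-one affine map z = (y - mu) / sigma:
    log p(y) = log p_z(z) + log |dz/dy|, the core identity behind
    normalizing flows."""
    z = (y - mu) / sigma
    return gaussian_logpdf(z) + math.log(1.0 / sigma)
```

For this affine flow the result coincides with the N(mu, sigma²) log-density, which makes the identity easy to verify by hand.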
Procedia PDF Downloads 99
1608 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model
Authors: Fu Jia
Abstract:
The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller, and heavier structures, deeper foundations, and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping, and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.
Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping
Procedia PDF Downloads 268
1607 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across the AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring a prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan
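The balancing of link quality against residual node energy mentioned above can be illustrated with a Dijkstra variant whose edge cost blends the link cost with the receiving node's energy depletion. This is a sketch of an energy-aware metric for contrast with plain shortest-path routing, not the AEAR algorithm itself; the graph, energy values, and weighting factor are invented for illustration.

```python
import heapq

def energy_aware_path(graph, energy, source, target, w_energy=0.5):
    """Shortest path under a combined cost: link cost plus a penalty
    proportional to the relay node's depleted energy (1 - residual).
    graph: {node: {neighbor: link_cost}}; energy: {node: residual in [0, 1]}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph[u].items():
            # Penalize relaying through low-energy nodes.
            nd = d + cost + w_energy * (1.0 - energy[v])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:  # assumes target is reachable
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]
```

With the penalty active, a slightly longer route through well-charged nodes beats the nominally shorter route through a nearly drained relay, which is the intuition behind prolonging network lifespan.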
Procedia PDF Downloads 39
1606 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization
Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay
Abstract:
In recent years, there has been a growing focus on advanced manufacturing processes, and one such emerging process is wire electric discharge machining (WEDM). WEDM is a precision machining process specifically designed for cutting electrically conductive materials with exceptional accuracy. It removes material from the workpiece metal through spark erosion facilitated by electricity. Initially developed as a method for the precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts and methods in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mold dies, machining of insulating materials, and manufacturing of micro products. WEDM has found particularly extensive use in the high-precision machining of complex workpieces with varying hardness and intricate shapes. During the cutting process, a wire with a diameter starting from 0.18 mm is employed. The evaluation of EDM performance typically revolves around two critical factors: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of machining parameters on the quality characteristics of EDM, an Analysis of Variance (ANOVA) was conducted. This statistical analysis aimed to determine the significance of the various machining parameters and their relative contributions in controlling the response of the EDM process. Through this analysis, optimal levels of the machining parameters were identified to achieve desirable material removal rates and surface roughness.
Keywords: WEDM, MRR, optimization, surface roughness
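The ANOVA step described above ranks machining parameters by comparing between-group to within-group variance. A minimal one-way F-statistic computation, on invented toy data rather than the paper's measurements:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group mean square
    to within-group mean square. A large F indicates that the factor
    (e.g., a machining parameter level) explains much of the variation
    in the response (e.g., MRR or surface roughness)."""
    k = len(groups)                      # number of factor levels
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

Comparing the F value against the critical F distribution quantile (or a p-value) then decides whether a parameter's contribution is statistically significant.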
Procedia PDF Downloads 77
1605 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO), utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently be unable to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
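The control flow of greedy layer-wise pretraining — train each layer on the representation produced by the already-trained layers beneath it, then use the stack jointly — can be sketched with deliberately tiny "layers". This toy uses a single scale parameter per layer fitted to normalize variance; it is a sketch of the training schedule only, not of the DNN used in the study.

```python
class ScaleLayer:
    """Toy 1-D 'layer' whose only parameter is a scale factor."""
    def __init__(self):
        self.scale = 1.0

    def forward(self, x):
        return self.scale * x

def pretrain_layer(layer, inputs, target_std=1.0):
    """'Pretrain' one layer in isolation: fit its scale so the layer's
    output standard deviation matches target_std."""
    mean = sum(inputs) / len(inputs)
    std = (sum((x - mean) ** 2 for x in inputs) / len(inputs)) ** 0.5
    layer.scale = target_std / std if std else 1.0

def greedy_layerwise_pretrain(layers, data):
    """Greedy schedule: each layer is trained on the frozen output of
    the layers below it, one layer at a time, bottom to top."""
    rep = list(data)
    for layer in layers:
        pretrain_layer(layer, rep)              # train this layer alone
        rep = [layer.forward(x) for x in rep]   # frozen features for the next layer
    return layers
```

In a real DNN, each `pretrain_layer` step would train an autoencoder or similar objective, and a final end-to-end fine-tuning pass would follow; the greedy bottom-up ordering is the part this sketch shows.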
Procedia PDF Downloads 89
1604 Review and Analysis of Sustainable-Based Risk Management in Humanitarian Supply Chains
Authors: Marinko Maslaric, Maja Jokic
Abstract:
In the search for fast and long-term responses, sustainable logistics and supply chain research has developed substantial theories and hypotheses oriented toward market requirements. Nevertheless, there are certain misunderstandings about how the implementation of sustainability principles (social, economic, and environmental) and concepts should work in practice, more specifically, within a humanitarian supply chain management context. This paper focuses on the review and analysis of risk management concepts in the humanitarian supply chain in order to identify their compliance with sustainability principles. In this direction, the study looks for strategies that suggest: minimization of environmental impacts through the reduction of resource consumption, reduction of logistics costs, including supply chain costs, minimization of transportation and service costs, improvement of the quality performance of the supply chain and logistics, and reduction of supply chain delivery time. While addressing defense, trade, and humanitarian logistics needs, the research is aligned with the UN Sustainable Development Goals, standards, and performance measures. It starts with relevant strategies for the identification of risk indicators and ends with the suggestion of valuable strategic approaches for their minimization or total prevention. Finally, a content analysis proposes a suitable methodological structure for the creation of the most sustainable strategy in the risk management of the humanitarian supply chain. The content analysis accompanies a thorough, consistent, and methodical literature review for a potential disaster risk management plan. Thereupon, the propositions of this research identify gaps in the contemporary literature in order to guide the analysis and to suggest an appropriate sustainable risk-reduction master plan. The aim is to secure high-quality logistics practices in hazardous events.
Keywords: humanitarian logistics, sustainability, supply chain risk, risk management plan
Procedia PDF Downloads 241
1603 Optimizing the Window Geometry Using Fractals
Authors: K. Geetha Ramesh, A. Ramachandraiah
Abstract:
In an internal building space, daylight becomes a powerful source of illumination. The challenge, therefore, is to develop means of utilizing both direct and diffuse natural light in buildings while maintaining and improving occupants' visual comfort, particularly at greater distances from the daylight-admitting windows. The geometrical features of windows in a building have a significant effect on the daylight provided. The main goal of this research is to develop an innovative window geometry that will effectively provide the daylight component adequately, together with the internal reflected component (IRC) and also the external reflected component (ERC), if any. This involves the exploration of a light-redirecting system using fractal geometry for windows, in order to penetrate and distribute daylight more uniformly to greater depths, minimizing heat gain and glare, and also to reduce building energy use substantially. Of late, the creation of fractal-geometry windows and the daylight illuminance such windows produce has become an interesting study. The amount of daylight can change significantly based on the window geometry and sky conditions. This leads to (i) the exploration of various fractal patterns suitable for window designs, and (ii) the quantification of the effect of a chosen fractal window based on the relationship between the fractal pattern, size, orientation, and glazing properties for optimizing daylighting. Many natural lighting applications are able to predict the behaviour of light in a room through a traditional opening, i.e., a regular window. The conventional prediction methodology involves the evaluation of the daylight factor, the internal reflected component, and the external reflected component. Having evaluated the daylight illuminance level for a conventional window, the technical performance of a fractal window for optimal daylighting is studied and compared with that of a regular window. The methodologies involved are highlighted in this paper.
Keywords: daylighting, fractal geometry, fractal window, optimization
Procedia PDF Downloads 301
1602 Use of Smartwatches for the Emotional Self-Regulation of Individuals with Autism Spectrum Disorder (ASD)
Authors: Juan C. Torrado, Javier Gomez, Guadalupe Montero, German Montoro, M. Dolores Villalba
Abstract:
One of the most challenging aspects of the executive dysfunction of people with Autism Spectrum Disorders is behavior control. This is related to a deficit in their ability to regulate, recognize, and manage their own emotions. Some researchers have developed applications for tablets and smartphones to practice strategies for relaxation and emotion recognition. However, these cannot be applied at the very moment of temper outbursts, anger episodes, or anxiety, since they require the user to carry the device, start the application, and be helped by caretakers. Also, some of these systems are developed either for obsolete technologies (old versions of tablet devices, PDAs, outdated smartphone operating systems) or for specific devices (self-developed or proprietary ones) that create differentiation between the users and the rest of the individuals in their context. For this project, we selected smartwatches. Focusing on emergent technologies ensures a long lifespan for the developed products, because the derived products are intended to be available at the very moment the technology becomes popular, not later. We also focused our research on commercial smartwatches, since this easily avoids differentiation and so lowers the users' abandonment rate. We have developed a smartwatch system along with a smartphone authoring tool to display self-regulation strategies. These micro-prompting strategies are composed of pictograms, animations, and timers, and they are designed by means of the authoring tool: when both devices synchronize their data, the smartwatch holds the self-regulation strategies, which are triggered when the smartwatch sensors detect a remarkable rise in heart rate and movement. The system is currently being tested in an educational center for people with ASD in Madrid, Spain.
Keywords: assistive technologies, emotion regulation, human-computer interaction, smartwatches
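The sensor-based trigger described above — prompting a self-regulation strategy when heart rate rises markedly above baseline while movement is elevated — can be sketched as a simple threshold rule. The thresholds and window shapes below are illustrative assumptions, not values reported by the study.

```python
def should_trigger(hr_window, accel_window, hr_baseline,
                   hr_rise=20.0, accel_threshold=1.5):
    """Fire a micro-prompting strategy when BOTH conditions hold:
    mean heart rate in the recent window exceeds the personal baseline
    by at least hr_rise (bpm), and peak accelerometer magnitude in the
    window exceeds accel_threshold (g). All thresholds are hypothetical."""
    hr_now = sum(hr_window) / len(hr_window)
    accel_now = max(accel_window)
    return hr_now - hr_baseline >= hr_rise and accel_now >= accel_threshold
```

A deployed system would smooth the signals and personalize the baseline per user; the AND of the two conditions is what keeps ordinary exercise or a startled pulse alone from firing the prompt.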
Procedia PDF Downloads 298
1601 Analysis of the Learning Effectiveness of the Steam-6e Course: A Case Study on the Development of Virtual Idol Product Design as an Example
Authors: Mei-Chun. Chang
Abstract:
STEAM (Science, Technology, Engineering, Art, and Mathematics) represents a cross-disciplinary and learner-centered teaching model that cultivates students' ability to link theory with the presentation of real situations, thereby improving their various abilities. This study explores students' learning performance after using the 6E model in STEAM teaching for a professional course in the digital media design department of a technical college, as well as the difficulties faced in STEAM curriculum design and implementation and the corresponding countermeasures. In this study, through industry experts' work experience, activity exchanges, course teaching, and hands-on experience, learners think about the design and development value of virtual idol products that meet the needs of users and employ AR/VR technology to innovate their product applications. Applying action research, the investigation took 35 junior students from the department of digital media design of the school where the researcher teaches as the research subjects. The teaching research was conducted over two stages spanning ten weeks and 30 sessions. This research collected data and conducted quantitative and qualitative analyses through a 'design draft sheet', 'student interview records', the 'STEAM Product Semantic Scale', and the 'Creative Product Semantic Scale (CPSS)'. Research conclusions are presented, and relevant suggestions are proposed as a reference for teachers and follow-up researchers. The contribution of this study is to teach college students to develop original virtual idols and product designs, improve learning effectiveness through STEAM teaching activities, and effectively cultivate innovative and practical cross-disciplinary design talents.
Keywords: STEAM, 6E model, virtual idol, learning effectiveness, practical courses
Procedia PDF Downloads 127
1600 Effect of Austenitizing Temperature, Soaking Time and Grain Size on Charpy Impact Toughness of Quenched and Tempered Steel
Authors: S. Gupta, R. Sarkar, S. Pathak, D. H. Kela, A. Pramanick, P. Talukdar
Abstract:
Low alloy quenched and tempered steels are typically used in cast railway components such as knuckles, yokes, and couplers. Since these components experience extensive impact loading during their service life, adequate impact toughness of these grades needs to be ensured to avoid catastrophic failure of parts in service. Because of the general availability of Charpy V-notch test equipment, the Charpy test is the most common and economical means of evaluating the impact toughness of materials and is generally used in quality control applications. With this backdrop, an experiment was designed to evaluate the effect of austenitizing temperature, soaking time and resultant grain size on the Charpy impact toughness and the related fracture mechanisms in a quenched and tempered low alloy steel, with the aim of optimizing the heat treatment parameters (i.e. austenitizing temperature and soaking time) with respect to impact toughness. In the first phase, samples were austenitized at different temperatures, viz. 760, 800, 840, 880, 920 and 960°C, followed by quenching and tempering at 600°C for 4 hours. In the next phase, samples were subjected to different soaking times (0, 2, 4 and 6 hours) at a fixed austenitizing temperature (980°C), followed by quenching and tempering at 600°C for 4 hours. The samples corresponding to the different test conditions were then subjected to instrumented Charpy tests at -40°C, and the energies absorbed were recorded. Subsequently, the microstructure and fracture surface of samples corresponding to the different test conditions were observed under a scanning electron microscope, and the corresponding grain sizes were measured. In the final stage, austenitizing temperature, soaking time and measured grain sizes were correlated with the impact toughness and the fracture morphology and mechanism.
Keywords: heat treatment, grain size, microstructure, retained austenite, impact toughness
Procedia PDF Downloads 342
1599 Experimental Research on Neck Thinning Dynamics of Droplets in Cross Junction Microchannels
Authors: Yilin Ma, Zhaomiao Liu, Xiang Wang, Yan Pang
Abstract:
Microscale droplets play an increasingly important role in various applications, including medical diagnostics, material synthesis, chemical engineering, and cell research, owing to their high surface-to-volume ratio and tiny scale, which can significantly improve reaction rates, enhance heat transfer efficiency, enable high-throughput parallel studies, and reduce reagent usage. As a mature technique for manipulating small amounts of liquid, droplet microfluidics achieves precise control of droplet parameters such as size, uniformity, and structure, and has thus been widely adopted in engineering and scientific research across multiple fields. Necking processes of the droplet in cross junction microchannels are experimentally and theoretically investigated, and the dynamic mechanisms of neck thinning in two different regimes are revealed. According to the evolutions of the minimum neck width and the thinning rate, the necking process is further divided into stages, and the main driving force during each stage is identified. Effects of the flow rates and the cross-sectional aspect ratio on the necking process, as well as the neck profile at different stages, are provided in detail. The distinct features of the two regimes in the squeezing stage are well captured by theoretical estimates of the effective flow rate, and the variations of the actual flow rates in the different channels are reasonably reflected by the channel width ratio. In the collapsing stage, a quantitative relation between the minimum neck width and the remaining time is constructed to identify the physical mechanism.
Keywords: cross junction, neck thinning, force analysis, inertial mechanism
Procedia PDF Downloads 111
1598 Application of Water Soluble Polymers in Chemical Enhanced Oil Recovery
Authors: M. Shahzad Kamal, Abdullah S. Sultan, Usamah A. Al-Mubaiyedh, Ibnelwaleed A. Hussein
Abstract:
Oil recovery from reservoirs using conventional techniques like water flooding is less than 20%. Enhanced oil recovery (EOR) techniques are applied to recover additional oil. Surfactant-polymer flooding is a promising EOR technique used to recover residual oil from reservoirs. Water soluble polymers are used to increase the viscosity of the displacing fluid, while surfactants increase the capillary number by reducing the interfacial tension between the oil and the displacing fluid. Hydrolyzed polyacrylamide (HPAM) is widely used in polymer flooding applications due to its low cost and other desirable properties. HPAM works well in low-temperature, low-salinity environments. In the presence of salts, however, HPAM viscosity decreases due to the charge-screening effect, and it can precipitate at high temperatures. Various strategies have been adopted to extend the application of water soluble polymers to high-temperature, high-salinity (HTHS) reservoirs. These include the addition of monomers to the acrylamide chain that can protect it against thermal hydrolysis. In this work, the rheological properties of various water soluble polymers were investigated to identify suitable polymer and surfactant-polymer systems for HTHS reservoirs. Polymer concentration ranged from 0.1 to 1% (w/v). The effects of temperature, salinity and polymer concentration were investigated using both steady shear and dynamic measurements. An acrylamido tertiary butyl sulfonate based copolymer showed better performance under HTHS conditions than HPAM. Moreover, a thermoviscosifying polymer showed excellent rheological properties, with an increase in viscosity observed with increasing temperature. This property is highly desirable for EOR applications.
Keywords: rheology, polyacrylamide, salinity, enhanced oil recovery, polymer flooding
Procedia PDF Downloads 412
1597 Characterization of Natural Polymers for Guided Bone Regeneration Applications
Authors: Benedetta Isella, Aleksander Drinic, Alissa Heim, Phillip Czichowski, Lisa Lauts, Hans Leemhuis
Abstract:
Introduction: Membranes for guided bone regeneration perform an essential barrier function between the soft tissue and the regenerating bone tissue. Bioabsorbable membranes are desirable in this field as they do not require a secondary surgery for removal, decreasing patient surgical risk. Collagen was the first bioabsorbable alternative introduced on the market, but its degradation time may be too fast to guarantee bone regeneration, so optimisation is needed. Silk fibroin, being biocompatible, slowly bioabsorbable, and processable into different scaffold types, could be a promising alternative. Objectives: The objective is to compare the general performance of a silk fibroin membrane for guided bone regeneration with current collagen alternatives, developing suitable standardized tests for mechanical and morphological characterization. Methods: Silk fibroin and collagen-based membranes were compared morphologically and chemically, with techniques such as SEM imaging, and mechanically, with techniques such as tensile and suture retention strength (SRS) tests. Results: Silk fibroin revealed a high degree of reproducibility in surface density. The SRS of silk fibroin (0.76 ± 0.04 N), although lower than that of collagen, was still comparable to native tissues such as the internal mammary artery (0.56 N), and the same can be said of its general mechanical behaviour in tensile tests. The SRS could be increased by an increase in thickness. Conclusion: Silk fibroin is a promising material in the field of guided bone regeneration, occupying the interesting position of not being considered, from the regulatory perspective, a product containing cells or tissues of animal origin, and having longer degradation times than collagen.
Keywords: guided bone regeneration, mechanical characterization, membrane, silk fibroin
Procedia PDF Downloads 45
1596 Load Carrying Capacity of Soils Reinforced with Encased Stone Columns
Authors: S. Chandrakaran, G. Govind
Abstract:
Stone columns are effectively used to improve the bearing strength of soils and in many other geotechnical applications. In soft soils, loaded stone columns undergo large settlements due to insufficient lateral confinement, and geosynthetic encasement has proved to be a solution to this problem. This paper presents the results of a laboratory experimental study carried out with model stone columns with and without encasement. Sand was used for making the test beds, with grain sizes from 0.075 mm to 4.75 mm. A woven geotextile produced by Gareware Ropes India, with a mass per unit area of 240 g/m² and a tensile strength of 52 kN/m, was used for the present investigation. Tests were performed with a large-scale direct shear box and also with scaled laboratory plate load tests. Stone columns of 50 mm and 75 mm diameter were used, and the size of the stones used for making the columns was also varied: two stone sizes were used, namely smaller and larger. Results indicate an increase in the angle of internal friction and in the shear strength of the soil when stone columns are encased. With 50 mm diameter stone columns, an average increase of 7% in shear strength and 4.6% in angle of internal friction was achieved; when the larger stones were used, the increase in shear strength was 12.2% and in angle of internal friction 5.4%. When the stone column diameter was increased to 75 mm, the increases in shear strength and angle of internal friction were 7.9% and 7.5% with the smaller stones, and 7.7% and 5.48% with the larger stones, respectively. Similar results were obtained in the plate load tests.
Keywords: stone columns, encasement, shear strength, plate load test
Procedia PDF Downloads 236
1595 Studies on the Use of Sewage Sludge in Agriculture or in Incinerators
Authors: Catalina Iticescu, Lucian Georgescu, Mihaela Timofti, Dumitru Dima, Gabriel Murariu
Abstract:
The amounts of sludge resulting from the treatment of domestic and industrial wastewater can create serious environmental problems if no solutions are found to eliminate them. At present, the predominant method of sewage sludge disposal is to store it and use it in agricultural applications. Sewage sludge has fertilizer properties and can be used to enrich agricultural soils due to its nutrient content. In addition to nutrients for plant growth (nitrogen and phosphorus), the sludge also contains heavy metals in varying amounts. An increasingly used alternative is the incineration of sludge: thermal processes can convert large amounts of sludge into useful energy. The sewage sludge analyzed for the present paper was extracted from the Wastewater Treatment Plant (WWTP) Galati, Romania. The physico-chemical parameters determined were pH, nutrients and heavy metals. The determination methods were electrochemical, spectrophotometric and energy dispersive X-ray (EDX) analyses. The tests on the nutrient content of the sewage sludge showed that the existing nutrients can be used to increase the fertility of agricultural soils. The conclusion reached was that this sludge can be safely used on agricultural land, with good agricultural productivity results. To be able to use sewage sludge as a fuel, its calorific value must be known. For wet sludge, the calorific value is low, while for dry sludge it is high; higher and lower calorific values are determined only for dry solids. The apparatus used to determine the calorific value was a Parr 6755 Solution Calorimeter (Parr Instrument Company, USA, 2010 model). The calorific values of the studied sludge indicate that it can be used successfully in incinerators. Mixed with coal, it can also be used to produce electricity. The advantages are that it reduces the cost of obtaining electricity and considerably reduces the amount of sewage sludge.
Keywords: agriculture, incinerators, properties, sewage sludge
Procedia PDF Downloads 172
1594 Assessment of the Impact of the Application of Kinesiology Taping on Joint Position Sense in Knee Joint
Authors: Anna Słupik, Patryk Wąsowski, Anna Mosiołek, Dariusz Białoszewski
Abstract:
Introduction: Kinesiology Taping is one of the most popular techniques used for treatment and for supporting physiological processes in sports medicine and physiotherapy. It is often used by athletes to support the sensorimotor skills of the lower limbs. The aim of the study was to determine the effect of a muscle Kinesiology Taping application on joint position sense during active movement. Material and methods: The study involved 50 healthy people between 18 and 30 years of age, 30 men and 20 women (mean age 23.24 years). The participants were divided into two groups. The study group received a Kinesiology Taping application (muscle application, type Y, over the quadriceps femoris muscle), while the remaining people received an application made of plaster (placebo group). Testing was performed prior to applying the tape, with the application in place (after 30 minutes), 24 hours into wear, and after removing the tape. Each test evaluated joint position sense as the Error of Active Reproduction of Joint Position. Results: The study revealed no significant differences in measurements between the study group and the placebo group (p > 0.05). There were no significant differences over time across all four measurements in the group with the Kinesiology Taping application, which was confirmed by pairwise comparisons (p > 0.05). The placebo group likewise showed no significant differences over time (p > 0.05). There was no significant difference between the errors committed in the direction of flexion and those in extension. Conclusions: 1. The muscle Kinesiology Taping application had no significant effect on knee joint proprioception, so its use to improve sensorimotor performance seems unjustified. 2. The absence of differences between the Kinesiology Taping and placebo applications indicates that the clinical effect of the stretched tape is minimal or absent. 3. The results are a basis for continuing with prospective, randomized trials on a larger study group.
Keywords: joint position sense, kinesiology taping, knee joint, proprioception
Procedia PDF Downloads 405
1593 Design and Optimization of an Electromagnetic Vibration Energy Converter
Authors: Slim Naifar, Sonia Bradai, Christian Viehweger, Olfa Kanoun
Abstract:
Vibration provides an interesting source of energy since it is available in many indoor and outdoor applications. Nevertheless, in order to have an efficient design of the harvesting system, vibration converters have to satisfy certain criteria in terms of robustness, compactness and energy outcome. In this work, an electromagnetic converter based on the mechanical spring principle is proposed. The designed harvester is formed by a coil oscillating around ten ring magnets on a mechanical spring. The proposed design overcomes one of the main limitations of moving-coil converters by avoiding contact between the coil wires and the mechanical spring, which leads to better robustness. In addition, the whole system can be implemented in the cavity of a screw. Different parameters of the harvester were investigated by the finite element method, including the magnet size, the coil winding number and diameter, and the excitation frequency and amplitude. A prototype was realized and tested. Experiments were performed for 0.5 g to 1 g acceleration. The experimental setup consists of an electrodynamic shaker as an external artificial vibration source, with a laser sensor measuring the applied displacement and excitation frequency. Together with the laser sensor, a controller unit, and an amplifier, the shaker is operated in a closed loop, which allows controlling the vibration amplitude. The resonance frequency of the proposed design is around 24 Hz. Results indicate that the harvester can generate maximum open circuit peak-to-peak voltages of 612 mV and 1150 mV at resonance for 0.5 g and 1 g acceleration, corresponding to 1.34 mW and 4.75 mW output power respectively. Tuning the frequency to other values is also possible by adding mass to the moving part of the converter or by changing the mechanical spring stiffness.
Keywords: energy harvesting, electromagnetic principle, vibration converter, moving coil
Procedia PDF Downloads 298
1592 Mathematical Modelling, Simulation and Prototype Designing of Potable Water System on Basis of Forward Osmosis
Authors: Ridhish Kumar, Sudeep Nadukkandy, Anirban Roy
Abstract:
Reverse osmosis was developed in 1960. Over the years, this technique has been widely accepted all over the world for applications ranging from seawater desalination to municipal water treatment. Forward osmosis (FO) is one of the foremost technologies for low-energy water purification. In this study, we have carried out a detailed analysis of the selection, design, and pricing of a prototype potable water system for purifying water in emergency situations. The portable, lightweight purification system is envisaged to be driven by FO; the pouch will serve as an emergency water filtration device. The current effort employs a model to understand the interplay of membrane permeability and area on the rate of purification of water from an impure or brackish source. The draw solution for the FO pouch is a combination of salt and sugar, such that its dilution results in an oral rehydration solution (ORS), a boon for dehydrated patients. The effort goes a step further to estimate the cost and pricing of such a prototype. While the mathematical model yields the best membrane combination (compositions taken from the literature) in terms of permeability and area, the pricing takes into account the feasibility of offering such a solution as a retail item. The product is envisaged to be a market competitor for the combination of packaged drinking water and ORS (costing around $0.5 combined) and thus, to be feasible, has to be priced in the same range with margins sufficient for distribution. A business plan and production scheme have accordingly been formulated so that the product can be a feasible solution for unprecedented calamities and emergency situations.
Keywords: forward osmosis, water treatment, oral rehydration solution, prototype
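The interplay of permeability and area described in the abstract can be sketched with the classical linear FO flux law and van 't Hoff osmotic pressures. All parameter values below (draw and feed molarities, permeability, pouch area) are illustrative assumptions, not the authors' data, and concentration polarization is neglected:

```python
# Minimal sketch of a forward-osmosis flux model, assuming the linear
# flux law J_w = A * (pi_draw - pi_feed) and van 't Hoff osmotic
# pressures. All parameter values are illustrative assumptions.

R = 0.08314  # gas constant, L·bar/(mol·K)

def osmotic_pressure(molarity, ions=2, temp_k=298.0):
    """van 't Hoff estimate: pi = i * M * R * T (bar)."""
    return ions * molarity * R * temp_k

def water_flux(permeability, pi_draw, pi_feed):
    """Water flux J_w (L/m^2/h) for permeability A (L/m^2/h/bar)."""
    return permeability * (pi_draw - pi_feed)

# Assumed values: 1 M NaCl-like draw (salt/sugar pouch), brackish feed
# ~0.05 M, membrane permeability 1.0 L/m^2/h/bar, pouch area 0.05 m^2.
pi_d = osmotic_pressure(1.0)
pi_f = osmotic_pressure(0.05)
jw = water_flux(1.0, pi_d, pi_f)
litres_per_hour = jw * 0.05
print(f"driving pressure: {pi_d - pi_f:.1f} bar, flux: {jw:.1f} L/m2/h")
print(f"pouch output: {litres_per_hour:.2f} L/h")
```

Real FO fluxes would be considerably lower once internal concentration polarization is included; the sketch only shows how permeability and area trade off against the purification rate.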
Procedia PDF Downloads 185
1591 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals are currently moving to cloud computing, and many consider it a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for, and profitability of, cloud computing infrastructure, diverse computing processes can be executed in the cloud environment. Many organizations and individuals around the world depend on cloud computing infrastructure to host their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve and optimize it. A good task scheduler should adapt its scheduling technique to the changing environment and the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), has been proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a stochastic optimization search method, used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and maximize resource utilization by balancing the load among the virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
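The pheromone-driven task-to-VM assignment loop can be sketched as below. This is a minimal illustration of ACO-style scheduling with assumed task lengths, VM speeds, and ACO parameters (ant count, evaporation rate), not the authors' ACO-TS implementation:

```python
import random

# Sketch of ACO-style cloud task scheduling: tasks have fixed lengths,
# VMs have fixed speeds, and makespan is the largest per-VM finish time.
# All parameter values are illustrative assumptions.

def makespan(assign, lengths, speeds):
    loads = [0.0] * len(speeds)
    for task, vm in enumerate(assign):
        loads[vm] += lengths[task] / speeds[vm]
    return max(loads)

def aco_schedule(lengths, speeds, ants=20, iters=50, rho=0.1, seed=1):
    random.seed(seed)
    n, m = len(lengths), len(speeds)
    tau = [[1.0] * m for _ in range(n)]       # pheromone per (task, vm)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            # Each ant assigns every task to a VM, biased by pheromone
            # and a simple heuristic (faster VM = more attractive).
            assign = []
            for t in range(n):
                w = [tau[t][v] * speeds[v] for v in range(m)]
                assign.append(random.choices(range(m), weights=w)[0])
            cost = makespan(assign, lengths, speeds)
            if cost < best_cost:
                best, best_cost = assign, cost
        # Evaporate pheromone, then reinforce the best-so-far schedule.
        for t in range(n):
            for v in range(m):
                tau[t][v] *= (1.0 - rho)
            tau[t][best[t]] += 1.0 / best_cost
    return best, best_cost

lengths = [40, 30, 20, 10, 25, 35]   # task lengths (e.g. MI)
speeds = [10, 20]                    # VM speeds (e.g. MIPS)
plan, cost = aco_schedule(lengths, speeds)
print(plan, round(cost, 2))
```

Minimizing the makespan this way implicitly balances load: a schedule that overloads one VM has a large per-VM finish time and receives less pheromone reinforcement.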
Procedia PDF Downloads 422
1590 Corrosion of Concrete Reinforcing Steel Bars Tested and Compared Between Various Protection Methods
Authors: P. van Tonder, U. Bagdadi, B. M. D. Lario, Z. Masina, T. R. Motshwari
Abstract:
This paper analyses how concrete reinforcing steel bars corrode and how corrosion can be minimised through various protection methods, such as metal-based paint, alloying, cathodic protection and electroplating. Samples of carbon steel bars were protected using these four methods. Tests performed on the samples included durability, electrical resistivity and bond strength. The durability results indicated relatively low corrosion rates for alloying, cathodic protection, electroplating and metal-based paint. The resistivity results indicate that all samples experienced a downward trend, despite erratic fluctuations in the data, pointing to an inverse relationship between electrical resistivity and corrosion rate. The results indicated lowered bond strengths when the reinforced concrete was cured in seawater rather than in normal water. They also showed that higher design compressive strengths lead to higher bond strengths, which can be used to compensate for the loss of bond strength due to corrosion in a real-world application. In terms of implications, all protection methods have the potential to resist corrosion effectively in real-world applications, especially alloying, cathodic protection and electroplating. The metal-based paint underperformed by comparison, most likely due to the nature of paint in general, which can fade and chip away, exposing the steel to corrosion. For alloying, stainless steel is the suggested material of choice, and Y-bars are highly recommended, as smooth bars have a much lower bond strength. Cathodic protection performed best of all in protecting the samples from corrosion; however, its real-world application would require a significant evaluation of the feasibility of such a method.
Keywords: protection methods, corrosion, concrete, reinforcing steel bars
Procedia PDF Downloads 176
1589 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has proved to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex, and experimental measurement of the thermal properties under laboratory conditions cannot accurately predict their variation under site conditions. The specific heat capacity and the heat conductivity coefficient are therefore two variables that are treated as constant in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, with their values related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The study is conducted on 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag, GGBFS). It is concluded that the maximum temperature does not change when a constant conductivity coefficient is assumed, but a variable specific heat capacity must be taken into account; regarding the time at which the concrete's central node reaches its maximum temperature, a variable specific heat capacity can likewise have a considerable effect on the final result. The use of GGBFS also has more influence than fly ash.
Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
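The linear decrease of the two thermal properties with degree of hydration described above can be sketched as follows; the reference values and reduction factors are illustrative assumptions, not the study's fitted coefficients:

```python
# Sketch of hydration-dependent thermal properties, assuming the linear
# decrease with degree of hydration alpha described in the abstract.
# Reference values (k0, c0) and reduction factors are illustrative
# assumptions chosen in the typical range for concrete.

def conductivity(alpha, k0=2.6, reduction=0.25):
    """Thermal conductivity (W/m/K) at degree of hydration alpha in [0, 1]."""
    return k0 * (1.0 - reduction * alpha)

def specific_heat(alpha, c0=1050.0, reduction=0.15):
    """Specific heat capacity (J/kg/K) at degree of hydration alpha."""
    return c0 * (1.0 - reduction * alpha)

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f}: k={conductivity(alpha):.2f} W/m/K, "
          f"c={specific_heat(alpha):.0f} J/kg/K")
```

In a finite-difference or finite-element temperature model, these functions would simply be evaluated at each node's current degree of hydration instead of using fixed constants.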
Procedia PDF Downloads 80
1588 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa
Authors: Davies Obaromi, Qin Yongsong, James Ndege
Abstract:
To interpolate scattered or regularly distributed data, there are approximate and exact methods; some of these methods are suited to data on a regular grid and others to an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns by 3-D curve fitting techniques, especially biharmonic splines, which can easily suppress noise by seeking a least-squares fit rather than exact interpolation. The datasets are represented generally as XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become the conventional method owing to its high precision, simplicity and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, consistent with spatial statistical analyses previously carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be evaluated at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
Keywords: linear splines, biharmonic splines, tuberculosis, South Africa
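This kind of surface fitting can be sketched with SciPy's thin-plate-spline radial basis functions, a close analogue of MATLAB's biharmonic ('v4') gridding; the coordinates and TB counts below are synthetic illustrations, not the Eastern Cape data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of biharmonic-style surface fitting of scattered XYZ triplets
# with a thin-plate-spline RBF. Coordinates and counts are synthetic.

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(40, 2))            # sample locations (km)
tb = 50 + 0.5 * xy[:, 0] + rng.normal(0, 5, 40)   # noisy TB counts

# smoothing > 0 gives a least-squares fit rather than exact
# interpolation, suppressing noise as described in the abstract.
surface = RBFInterpolator(xy, tb, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate on a regular grid for surface and contour plots; note the
# fitted surface can equally be evaluated at arbitrary locations.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z = surface(grid).reshape(gx.shape)
print(z.shape, float(z.min()), float(z.max()))
```

The `z` array would feed directly into `contourf` or a 3-D surface plot to reproduce the kind of maps described in the abstract.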
Procedia PDF Downloads 240
1587 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are built on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) based on the objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically reach the targeted net head and the maximum total-to-total internal efficiency.
Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
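The genetic-algorithm loop at the core of such a procedure can be sketched as below, with a cheap analytic function standing in for the expensive RANS evaluation. The two design variables and the mock efficiency surface are illustrative assumptions, not the actual turbine parametrization:

```python
import random

# Minimal sketch of a GA geometry-optimization loop. A cheap analytic
# "efficiency" replaces the CFD/RANS evaluation; in the real procedure
# an ANN surrogate would play this role to speed up the search.

def mock_efficiency(blade_angle, hub_ratio):
    """Stand-in for CFD: peaks at an assumed optimum (20 deg, 0.4)."""
    return 0.9 - 0.001 * (blade_angle - 20.0) ** 2 - 0.5 * (hub_ratio - 0.4) ** 2

def genetic_optimize(pop_size=30, generations=40, seed=3):
    random.seed(seed)
    pop = [(random.uniform(5, 40), random.uniform(0.2, 0.6))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: mock_efficiency(*g), reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            child[0] += random.gauss(0, 1.0)             # mutate blade angle
            child[1] += random.gauss(0, 0.02)            # mutate hub ratio
            children.append(tuple(child))
        pop = parents + children
    best = max(pop, key=lambda g: mock_efficiency(*g))
    return best, mock_efficiency(*best)

(best_angle, best_hub), eff = genetic_optimize()
print(f"best geometry: angle={best_angle:.1f} deg, hub={best_hub:.2f}, "
      f"efficiency={eff:.3f}")
```

Each generation's evaluated geometries would be appended to the database described above, which is also what trains the ANN surrogate in the full procedure.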
Procedia PDF Downloads 151