Search results for: creativity and creative performance
1098 Synthesis and Characterization of AFe₂O₄ (A = Ca, Co, Cu) Nano-Spinels: Application to Hydrogen Photochemical Production under Visible Light Irradiation
Authors: H. Medjadji, A. Boulahouache, N. Salhi, A. Boudjemaa, M. Trari
Abstract:
Hydrogen from renewable sources, such as solar, is referred to as green hydrogen. The water-splitting process using semiconductors as photocatalysts has attracted significant attention due to its potential for addressing the energy crisis and environmental pollution. Spinel ferrites of the MFe₂O₄ type have attracted broad interest in diverse energy conversion processes, including fuel cells and photoelectrocatalytic water splitting. This work focuses on preparing iron-based nano-spinels AFe₂O₄ (A = Ca, Co, and Cu) as photocatalysts using the nitrate method. These materials were characterized both physically and optically and subsequently tested for hydrogen generation under visible light irradiation. Various techniques were used to investigate the properties of the materials, including TGA-DTA, X-ray diffraction (XRD), Fourier Transform Infrared Spectroscopy (FTIR), UV-visible spectroscopy, Scanning Electron Microscopy with Energy Dispersive X-ray Spectroscopy (SEM-EDX), and X-ray Photoelectron Spectroscopy (XPS). XRD analysis confirmed the formation of pure phases at 850°C, with crystallite sizes of 31 nm for CaFe₂O₄, 27 nm for CoFe₂O₄, and 40 nm for CuFe₂O₄. The energy gaps, calculated from the recorded diffuse reflectance data, are 1.85 eV for CaFe₂O₄, 1.27 eV for CoFe₂O₄, and 1.64 eV for CuFe₂O₄. SEM micrographs showed homogeneous grains with uniform shapes and medium porosity in all samples. EDX elemental analysis confirmed the absence of contaminating elements, highlighting the high purity of the materials prepared via the nitrate route. XPS spectra revealed the presence of Fe³⁺ and O in all samples. Additionally, XPS analysis revealed the presence of Ca²⁺, Co²⁺, and Cu²⁺ on the surfaces of the CaFe₂O₄, CoFe₂O₄, and CuFe₂O₄ spinels, respectively. The photocatalytic activity was evaluated by measuring H₂ evolution through the water-splitting process. The best performance was achieved with CaFe₂O₄ in a neutral medium (pH ~ 7), yielding 189 µmol at an optimal temperature of ~50°C. The highest hydrogen production for CoFe₂O₄ and CuFe₂O₄ was obtained at pH ~ 12, with releases of 65 and 85 µmol, respectively, under visible light irradiation at the same optimal temperature. Various conditions were investigated, including the pH of the solution, the use of hole scavengers, and recyclability.
Keywords: hydrogen, MFe₂O₄, nitrate route, spinel ferrite
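The crystallite sizes quoted above are the kind of values typically estimated from XRD peak broadening with the Scherrer equation; the abstract does not state the authors' exact procedure, so the sketch below is only an illustration with assumed inputs (Cu Kα wavelength, hypothetical peak position and FWHM):

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer estimate D = K*lambda / (beta*cos(theta)); beta is the FWHM in radians."""
    beta = math.radians(fwhm_deg)            # peak broadening (FWHM), radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle, radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical spinel (311) peak parameters, for illustration only
print(round(scherrer_size_nm(fwhm_deg=0.28, two_theta_deg=35.5), 1), "nm")  # ~30 nm
```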
Procedia PDF Downloads 40
1097 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization
Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi
Abstract:
Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. Under these conditions, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using extremely diluted hydrofluoric acid (HF + H₂O). The as-formed rolls had a microscale length and a width of approximately 1 µm. The rolls wound 3-10 times, and the thickness of the film forming the rolls was about 10 nm. Thus, it is promising for applications as a distinct device material. These rolls functioned as capsules and/or pipelines. To date, the number of rolls and the roll length have been controlled by the anodization conditions. In general, controlling the position and winding state of the rolls is required for device applications; however, this has not been discussed. Grooves formed on the silicon surface before anodization might be useful to control the bubbles. In this study, we investigated the effect of the grooves on the position and shape of the rolls. The surfaces of the silicon wafers were anodized. The starting material was p-type (100) single-crystalline silicon wafers with a resistivity of 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen. The average width and depth of the grooves were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume. The C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL). The atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls were formed along the groove. This indicates that the orientation of the rolls can be controlled by the grooves. For a lattice-like groove, the rolls formed inside the lattice and along its long sides. In other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls were formed and the winding states were not uniform when the lattice size was too large. On the other hand, no rolls were formed for a small lattice. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves made by lithography instead of sandpaper and the diamond pen. Furthermore, rolls containing nanoparticles will be formed for nanodevices.
Keywords: silicon roll, anodization, fine bubble, microstructure
Procedia PDF Downloads 29
1096 Management of Permits and Regulatory Compliance Obligations for the East African Crude Oil Pipeline Project
Authors: Ezra Kavana
Abstract:
This article analyses the role that East African countries play in enforcing crude oil pipeline regulations. The paper finds that countries are more likely to take responsibility for enforcing these regulations if they have larger networks of gathering and transmission lines and if their citizens are more liberal and more pro-environment. Pipeline operations, transportation costs, new pipeline construction, and environmental effects are all heavily controlled. All facets of pipeline systems and the facilities connected to them are governed by statutory bodies. To support the project manager on such new pipeline projects, companies building and running these pipelines typically employ personnel and consultants who specialize in these permitting processes. The primary permits that may be necessary for pipelines carrying different commodities are discussed in this paper. National, regional, and local municipalities each have their own permits. Through their right-of-way group, the contractor's project compliance leadership is typically directly responsible for obtaining those permits, which are usually issued by government agencies. The full list of local permits needed for a planned pipeline can only be established after a careful field investigation. A country's government regulates pipelines that are entirely within its borders. With a few exceptions, state regulations governing ratemaking and safety have been enacted to be consistent with regulatory requirements. Countries that produce a lot of energy are typically more involved in regulating pipelines than countries that produce little to no energy. To identify the proper regulatory authority, it is important to research the several government agencies that regulate pipeline transportation. Additionally, it is crucial that the scope determination of a planned project engage various external professionals with experience in linear facilities, as well as the company's pipeline construction and environmental professionals, to identify and obtain any necessary design clearances, permits, or approvals. These professionals can offer precise estimates of the costs and time needed to process the necessary permits. Governments with a stronger energy sector, on the other hand, are less likely to take on control. However, the performance of the pipeline and national enforcement activities are not significantly affected by whether a government has taken on control. Financial fines are the most efficient government enforcement instrument because they greatly reduce incidents and property damage.
Keywords: crude oil, pipeline, regulatory compliance, construction permits
Procedia PDF Downloads 99
1095 The Impact of Artificial Intelligence on Agricultural Machines and Plant Nutrition
Authors: Kirolos Gerges Yakoub Gerges
Abstract:
Autonomous agricultural machines act in stochastic surroundings and, therefore, should be capable of perceiving the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, mainly deep learning. Deep convolutional neural networks excel in labeling and perceiving colour images, and since the cost of RGB cameras is low, the hardware cost of accurate perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to carry out real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle), and the capacity to provide correct real-time perception of agricultural environments is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses within the image. As the network is still being designed and optimized, only a qualitative analysis of the technique is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to offer cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
Keywords: centrifuge pump, hydraulic energy, agricultural applications, irrigation, axial flux machines, axial flux applications, coreless machines, PM machines, autonomous agricultural machines, deep learning, safety, visual perception
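As a rough illustration of the superclass remapping described above, the sketch below collapses per-pixel predictions over 400 fine-grained classes into the agricultural superclasses with a lookup table; the actual class-to-superclass mapping used by the authors is not given in the abstract, so the assignments here are placeholders:

```python
import numpy as np

SUPERCLASSES = ["human", "animal", "sky", "road", "field", "shelterbelt", "obstacle"]

# Placeholder mapping: every fine class defaults to "obstacle", a few are reassigned
superclass_of = np.full(400, SUPERCLASSES.index("obstacle"), dtype=np.int64)
superclass_of[[0, 1, 2]] = SUPERCLASSES.index("human")    # e.g. person-like classes
superclass_of[[10, 11]] = SUPERCLASSES.index("animal")    # e.g. cow, sheep

def remap(pred):
    """pred: HxW array of fine class IDs output by the segmentation network."""
    return superclass_of[pred]                             # HxW array of superclass IDs

pred = np.random.randint(0, 400, size=(4, 4))              # stand-in for a network output
print(remap(pred))
```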
Procedia PDF Downloads 28
1094 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs
Authors: S. Lertsakulpasuk, S. Athichanagorn
Abstract:
As depletion-drive gas reservoirs are abandoned when there is insufficient production rate due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a new, promising approach to increase gas recovery by maintaining reservoir pressure at much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs commonly found in the Gulf of Thailand was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase the pressure of the gas reservoir in order to drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters. For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood when the gas rate is still high since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer size and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better because water is more likely to cause an early breakthrough when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel study. The idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated. This detailed study will help a practicing engineer to understand the benefits of such a method and to implement it with minimum cost and risk.
Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs
Procedia PDF Downloads 445
1093 Hydrothermal Aging Behavior of Continuous Carbon Fiber Reinforced Polyamide 6 Composites
Authors: Jifeng Zhang, Yongpeng Lei
Abstract:
Continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites have potential for application in the automotive industry due to their high specific strength and stiffness. However, PA6 resin is sensitive to moisture in a hydrothermal environment, and CF/PA6 composites may undergo several physical and chemical changes, such as plasticization, swelling, and hydrolysis, which induce a reduction of mechanical properties. So far, little research has been reported on the assessment of the effects of hydrothermal aging on the mechanical properties of continuous CF/PA6 composites. This study deals with the effects of hydrothermal aging on the moisture absorption and mechanical properties of polyamide 6 (PA6) and polyamide 6 reinforced with continuous carbon fibers (CF/PA6) by immersion in distilled water at 30 ℃, 50 ℃, 70 ℃, and 90 ℃. Degradation of mechanical performance has been monitored as a function of the water absorption content and the aging temperature. The experimental results reveal that, under the same aging condition, the PA6 resin absorbs more water than the CF/PA6 composite, while the water diffusion coefficient of the CF/PA6 composite is higher than that of the PA6 resin because of interfacial diffusion channels. In the mechanical degradation process, an exponential reduction in tensile strength and elastic modulus is observed in the PA6 resin as the aging temperature and water absorption content increase. The degradation trend of the flexural properties of CF/PA6 is the same as that of the tensile properties of the PA6 resin. Moreover, the water content plays a decisive role in mechanical degradation compared with the aging temperature. In contrast, the hydrothermal environment has a mild effect on the tensile properties of the CF/PA6 composites. The elongation at break of the PA6 resin and CF/PA6 reaches its highest value when their water contents reach 6% and 4%, respectively. Dynamic mechanical analysis (DMA) and scanning electron microscopy (SEM) were also used to explain the mechanism of the change in mechanical properties. After exposure to the hydrothermal environment, the Tg (glass transition temperature) of the samples decreases dramatically with increasing water content. This reduction can be ascribed to the plasticization effect of water. For the unaged specimens, the fiber surfaces are coated with resin and the main fracture mode is fiber breakage, indicating good adhesion between fiber and matrix. However, as the absorbed water content increases, the fracture mode transforms to fiber pullout. Finally, based on the Arrhenius methodology, a predictive model relating temperature and water content has been presented to estimate the retention of mechanical properties for PA6 and CF/PA6.
Keywords: continuous carbon fiber reinforced polyamide 6 composite, hydrothermal aging, Arrhenius methodology, interface
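The Arrhenius-based prediction mentioned at the end of the abstract typically amounts to fitting ln(k) against 1/T for a degradation-rate constant k; the sketch below shows such a fit with made-up rate constants at the four aging temperatures (the authors' actual data and model form are not reproduced here):

```python
import numpy as np

T_C = np.array([30.0, 50.0, 70.0, 90.0])     # aging temperatures, deg C
k = np.array([0.010, 0.028, 0.071, 0.160])   # hypothetical degradation rates, 1/day

T_K = T_C + 273.15
R = 8.314                                    # gas constant, J/(mol*K)
slope, intercept = np.polyfit(1.0 / T_K, np.log(k), 1)   # ln k = ln A - Ea/(R*T)
Ea, A = -slope * R, np.exp(intercept)
print(f"apparent activation energy Ea ≈ {Ea / 1000:.1f} kJ/mol")

# Extrapolated rate at an unseen temperature, e.g. 60 deg C
print(A * np.exp(-Ea / (R * (60.0 + 273.15))))
```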
Procedia PDF Downloads 122
1092 Self-Assembled ZnFeAl Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID) and is among the most frequently detected pharmaceuticals in environmental samples and among the most widespread drugs in the world. Its concentration in the environment is reported to be between 10 and 160 ng L⁻¹. In order to improve the abatement efficiency of this compound for water source protection and reclamation, the development of innovative technologies is mandatory. AOPs (advanced oxidation processes) are known to be highly efficient towards the oxidation of organic pollutants. Among the promising combined treatments, photo-Fenton processes using layered double hydroxides (LDHs) have attracted significant consideration, especially due to their composition flexibility, high surface area and tailored redox features. This work presents self-supported Fe, Mn or Ti on ZnFeAl LDHs, obtained by co-precipitation followed by the reconstruction method, as novel efficient photocatalysts for Fenton-like catalysis. Fe, Mn or Ti/ZnFeAl LDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory agent ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively, by means of a lab-scale system consisting of a batch reactor equipped with a UV lamp (17 W). The present study compares the degradation of ibuprofen in aqueous solution under UV light irradiation using four different types of LDHs. The newly prepared Ti/ZnFeAl 4:1 catalyst results in the best degradation performance. After 60 minutes of light irradiation, the ibuprofen removal efficiency reaches 95%. The slowest degradation of the ibuprofen solution occurs in the case of the Fe/ZnFeAl 4:1 LDH (67% removal efficiency after 60 minutes of the process). The evolution of ibuprofen degradation during the photo-Fenton process is also studied using Ti/ZnFeAl 2:1 and 4:1 LDHs in the presence and absence of H₂O₂. It is found that after 60 min the use of Ti/ZnFeAl 4:1 LDH in the presence of 100 mg/L H₂O₂ leads to the fastest degradation of the ibuprofen molecule. After 120 min, both catalysts, Ti/ZnFeAl 4:1 and 2:1, result in the same removal efficiency (98%). In the absence of H₂O₂, ibuprofen degradation reaches only 73% removal efficiency after 120 min of the degradation process. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: layered double hydroxide, advanced oxidation process, micropollutant, heterogeneous Fenton
Procedia PDF Downloads 230
1091 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Also, metacognition has been suggested as a concept to describe the nature of team learning as being more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. The team learning of previously very independent members of two teaching teams is analyzed. Applied science universities are training future professionals with ever more diversified and multidisciplinary skills. The size of the units of teaching and learning is increasingly larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and learning experiences. This occurs in student teams. Secondly, the teaching of multidisciplinary skills requires multidisciplinary and team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted. Team role stress has been analyzed in project teams. Projects typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two course units in parallel in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups, and they are on different campuses. Both are run in parallel within an 8-week period. Both of them are taught by a group of four teachers with several years of teaching experience, but individually. The team role stress scale survey is administered to both teaching groups at the beginning of the course and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stage of the two teaching groups is different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and to understand the maturity of functional and well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams that deliver highly intangible learning outcomes are sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.
Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 225
1090 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres at suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The software program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results of the study showed that ELSS could be used to differentiate between different sizes of cells, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics
Procedia PDF Downloads 82
1089 The Effectiveness of Blended Learning in Pre-Registration Nurse Education: A Mixed Methods Systematic Review and Meta-Analysis
Authors: Albert Amagyei, Julia Carroll, Amanda R. Amorim Adegboye, Laura Strumidlo, Rosie Kneafsey
Abstract:
Introduction: Classroom-based learning has persisted as the mainstream model of pre-registration nurse education. This model is often rigid, teacher-centered, and unable to support active learning and the practical learning needs of nursing students. Health Education England (HEE), a public body of the Department of Health and Social Care, hypothesises that blended learning (BL) programmes may address health system and nursing profession challenges, such as nursing shortages and lack of digital expertise, by exploring opportunities for providing predominantly online, remote-access study, which may increase nursing student recruitment by offering alternative pathways to nursing other than the traditional classroom route. This study will provide evidence for blended learning strategies adopted in nursing education as well as examine nursing students' learning experiences concerning the challenges and opportunities related to using blended learning within nursing education. Objective: This review will explore the challenges and opportunities of BL within pre-registration nurse education from the student's perspective. Methods: The search was completed within five databases. Eligible studies were appraised independently by four reviewers. The JBI convergent segregated approach for mixed methods reviews was used to assess and synthesize the data. The study's protocol has been registered with the International Prospective Register of Systematic Reviews (PROSPERO) under registration number CRD42023423532. Results: Twenty-seven (27) studies (21 quantitative and 6 qualitative) were included in the review. The study confirmed that BL positively impacts nursing students' learning outcomes, as demonstrated by the findings of the meta-analysis and meta-synthesis. Conclusion: The review compared BL to traditional learning, simulation, laboratory, and online learning with respect to nursing students' learning and programme outcomes as well as learning behaviour and experience. The results show that BL can effectively improve nursing students' knowledge, academic achievement, critical skills, and clinical performance as well as enhance learner satisfaction and programme retention. The review findings outline that students' background characteristics, BL design, and format significantly impact the success of a BL nursing programme.
Keywords: nursing student, blended learning, pre-registration nurse education, online learning
Procedia PDF Downloads 53
1088 Navigating through Uncertainty: An Explorative Study of Managers’ Experiences in China-foreign Cooperative Higher Education
Abstract:
To drive practical interpretations and applications of various policies in building transnational education joint ventures, middle managers learn to navigate through uncertainties and ambiguities. However, the current literature says very little about those middle managers' experiences, perceptions, and practices. This paper takes an empirical approach and aims to uncover the middle managers' experiences by conducting interviews, campus visits, and document analysis. Following a qualitative research approach, the researchers gathered information from a mixture of fourteen foreign and Chinese managers. Their perceptions of China-foreign cooperation in higher education and their perceived roles have offered important, valuable insights into this group of people's attitudes and management performance. The diverse cultural and demographic backgrounds contributed to the significance of the study. There are four key findings. One, middle managers' immediate micro-contexts and individual attitudes are the top two influential factors in managers' performance. Two, the foreign middle managers showed a stronger sense of self-identity in risk-taking. Three, the Chinese middle managers preferred to see difficulties as part of their assigned responsibilities. Four, middle managers in independent universities demonstrated a stronger sense of belonging and fewer frustrations than middle managers in secondary institutes. The researchers propose that training for managers in a transnational educational setting should consider these findings when selecting fitting topics and content. In particular, middle managers should be better prepared to anticipate their everyday jobs in the micro-environment; hence, information concerning sponsor organizations' working culture is as essential as knowing the national and local regulations and the socio-cultural context. Different case studies can help the managers to recognize and celebrate the diversity in transnational education. Situational stories can help them become aware of the diverse and wide range of work contexts so that they will not feel left alone when facing challenges without relevant previous experience or training. Though this research is a case study based in the Chinese transnational higher education setting, the implications could be relevant and comparable to other transnational higher education situations and help to continue expanding the potential applications in this field.
Keywords: educational management, middle manager performance, transnational higher education
Procedia PDF Downloads 167
1087 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on the groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling, they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. During the groundwater modeling using Processing MODFLOW, constant head boundary conditions are used at the north and south boundaries of the study area. For the east and west boundaries of the study area, head-dependent flow boundary conditions are used. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed that the root mean square error is 1.89 m and the NSE is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as a land-use type. This calibrated model was run for a climate change scenario and a well operation scenario. Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions for the high and low climate change scenarios, respectively. The groundwater head varies for the high climate change scenario from 13 m to 82 m, whereas for the low climate change scenario, it varies from 13 m to 76 m. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m. However, if a shutdown of the pumps is assumed, the head varies in the range of 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
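The two calibration statistics reported above (RMSE and NSE) are standard and easy to reproduce; a minimal sketch with hypothetical observed and simulated heads (not the study's data) is:

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum of squared errors / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Hypothetical heads (m) at a few observation wells, for illustration only
obs = [14.2, 22.5, 35.0, 47.8, 61.3, 76.9]
sim = [15.0, 21.7, 36.4, 46.2, 63.0, 75.5]
print(f"RMSE = {rmse(obs, sim):.2f} m, NSE = {nse(obs, sim):.3f}")
```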
Procedia PDF Downloads 153
1086 Teaching Behaviours of Effective Secondary Mathematics Teachers: A Study in Dhaka, Bangladesh
Authors: Asadullah Sheikh, Kerry Barnett, Paul Ayres
Abstract:
Despite significant progress in access, equity and public examination success, poor student performance in mathematics in secondary schools has become a major concern in Bangladesh. A substantial body of research has emphasised the important contribution of teaching practices to student achievement. However, this has not been investigated in Bangladesh. Therefore, the study sought to find out the effectiveness of mathematics teaching practices as a means of improving secondary school mathematics in the Dhaka Municipality City (DMC) area, Bangladesh. The purpose of this study was twofold: first, to identify the 20 highest performing secondary schools in mathematics in DMC, and second, to investigate the teaching practices of mathematics teachers in these schools. A two-phase mixed method approach was adopted. In the first phase, secondary source data were obtained from the Board of Intermediate and Secondary Education (BISE), Dhaka, and value-added measures were used to identify the 20 highest performing secondary schools in mathematics. In the second phase, a concurrent mixed method design, where qualitative methods were embedded within a dominant quantitative approach, was utilised. A purposive sampling strategy was used to select fifteen teachers from the 20 highest performing secondary schools. The main sources of data were classroom teaching observations and teacher interviews. The data from teacher observations were analysed with descriptive and nonparametric statistics. The interview data were analysed qualitatively. The main findings showed teachers adopt a direct teaching approach which incorporates orientation, structuring, modelling, practice, questioning and teacher-student interaction that creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates that teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively. This is the first study to investigate the teaching behaviours of effective secondary mathematics teachers within Dhaka, Bangladesh. It contributes an international dimension to the field of educational effectiveness and raises questions about existing constructivist approaches. Further, it contributes important insights about teaching behaviours that can be used to inform the development of evidence-based policy and practice on quality teaching in Bangladesh.
Keywords: effective teaching, mathematics, secondary schools, student achievement, value-added measures
Procedia PDF Downloads 241
1085 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
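A minimal sketch of the regression step (an SVM regressor evaluated with R² and RMSE on calibration and prediction sets) is shown below; the data are synthetic stand-ins for UVE-selected NIR variables, and the authors' preprocessing, UVE selection and hyperparameters are not reproduced:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 25))                    # stand-in for UVE-selected NIR variables
y = 3.0 * X[:, 0] + 1.5 * X[:, 5] + rng.normal(scale=0.3, size=60)   # stand-in gel strength

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_cal, y_cal)

for name, Xs, ys in [("calibration", X_cal, y_cal), ("prediction", X_val, y_val)]:
    yhat = model.predict(Xs)
    print(name, "R2 =", round(r2_score(ys, yhat), 3),
          "RMSE =", round(float(np.sqrt(mean_squared_error(ys, yhat))), 3))
```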
Procedia PDF Downloads 40
1084 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: TESCAN, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 111
1083 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are two of the U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, many predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to ensure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared in combination with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select an optimization strategy with the fewest errors, lowest cost, highest productivity, or maximum potential results. In a variety of industries, including engineering, science, management, mathematics, finance, and medicine, optimization is widely employed. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a significant improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
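A deliberately small sketch of genetic-algorithm feature selection wrapped around a Gradient Boosting Regressor, in the spirit of the phases described above, is given below; the data, population size and GA settings are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for weekly climate features (temperature, precipitation, vegetation, ...)
X = rng.normal(size=(200, 12))
y = 5 * X[:, 0] - 3 * X[:, 3] + rng.normal(scale=1.0, size=200)   # stand-in weekly case counts

def fitness(mask):
    """Cross-validated negative MAE of a GBR trained on the selected features."""
    if mask.sum() == 0:
        return -1e9
    model = GradientBoostingRegressor(random_state=0)
    return cross_val_score(model, X[:, mask.astype(bool)], y,
                           scoring="neg_mean_absolute_error", cv=3).mean()

# Tiny genetic algorithm: truncation selection, uniform crossover, bit-flip mutation
pop = rng.integers(0, 2, size=(10, X.shape[1]))
for _ in range(5):
    parents = np.array(sorted(pop, key=fitness, reverse=True)[:4])
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)        # uniform crossover
        flip = (rng.random(X.shape[1]) < 0.1).astype(int)           # mutation mask
        children.append(np.abs(child - flip))                       # flip the selected bits
    pop = np.vstack([parents, np.array(children)])

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), " MAE ≈", -fitness(best))
```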
Procedia PDF Downloads 67
1082 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA sequences and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful, numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction for each genome. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly to processed data obtained through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
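As an illustration of the attention-based multiple-instance step described above, here is a toy bag-level classifier in PyTorch; the embedding dimension, layer sizes and pooling scheme are assumptions for the sketch and not the metagenome2vec architecture itself:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Toy attention MIL head: a patient is a bag of read/genome embeddings, and the
    attention weights indicate which instances drive the disease prediction."""
    def __init__(self, emb_dim=64, hidden=32):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.clf = nn.Linear(emb_dim, 1)

    def forward(self, bag):                          # bag: (n_instances, emb_dim)
        w = torch.softmax(self.attn(bag), dim=0)     # (n_instances, 1) attention weights
        pooled = (w * bag).sum(dim=0)                # attention-weighted bag embedding
        return torch.sigmoid(self.clf(pooled)), w.squeeze(-1)

bag = torch.randn(500, 64)                           # one synthetic patient: 500 embeddings
prob, weights = AttentionMIL()(bag)
print(float(prob), weights.shape)
```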
Procedia PDF Downloads 126
1081 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving
Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco
Abstract:
Augmented reality promises to be part of future driving: its immersive technology can show directions and maps, identifying important places with graphic elements when the car driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex activity in which situations commonly occur that require the immediate attention of the car driver to make decisions that help avoid accidents. Therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG Myoware are integrated into a driving test platform together with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, obtaining driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors provide continuous analog signals that need conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering eliminates unwanted frequency bands so that the signal is interpretable and noise-free before it is converted from an analog into a digital signal for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbor) classification methods and decision trees, which enable the study of the data, the identification of patterns, and the determination, by classification, of different effects of augmented reality on drivers. The expected results of this project are a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving.
Keywords: augmented reality, driving, physiological signals, test platform
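A minimal sketch of the classification stage (k-NN and a decision tree on per-window features extracted from the physiological signals) is shown below; the feature matrix and labels are synthetic placeholders, not data from the platform:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))       # stand-in per-window features (mean, variance, band power, ...)
y = rng.integers(0, 2, size=120)    # stand-in labels: with / without the AR display

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy ≈ {acc:.2f}")
```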
Procedia PDF Downloads 142
1080 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low heat conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial in industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricous properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design was employed, involving 24 tests, to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented comparable performance with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. On the other hand, all tool tips were subjected to uniform flank wear during the whole cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
Procedia PDF Downloads 279
1079 Performance of HVOF Sprayed Ni-20Cr and Cr₃C₂-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal Fired Boiler
Authors: Tejinder Singh Sidhu
Abstract:
Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing a suitable industrial alloy, and depositing protective coatings. However, the protection system to be selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times. Coatings can add value to products up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performance of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process has been investigated. The coatings were developed on a Fe-based superalloy, and experiments were performed in an actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, with each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature. Both coatings deposited on the Fe-based superalloy imparted better hot corrosion resistance than the uncoated alloy. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one in the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.
Keywords: hot corrosion, coating, HVOF, oxidation
Procedia PDF Downloads 85
1078 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules that are tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which leads to difficulties in comparing them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) we provide a novel method to compute the performance measures regarding unsupervised proposals; otherwise, they would require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
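By way of illustration only, the sketch below computes per-document F1 for two hypothetical extractors and compares them with a paired Wilcoxon signed-rank test; the abstract does not name the specific measures or tests used in the proposal, so this is an assumed, generic instantiation of a statistically sound comparison:

```python
import numpy as np
from scipy.stats import wilcoxon

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical per-document (tp, fp, fn) counts for two extractors on the same corpus
docs_a = [(18, 2, 3), (40, 5, 1), (25, 4, 6), (30, 1, 2), (22, 3, 5)]
docs_b = [(15, 4, 6), (38, 6, 3), (20, 6, 11), (29, 2, 3), (19, 5, 8)]
f1_a = [f1(*d) for d in docs_a]
f1_b = [f1(*d) for d in docs_b]

stat, p_value = wilcoxon(f1_a, f1_b)   # paired, non-parametric comparison
print("mean F1:", round(float(np.mean(f1_a)), 3), "vs", round(float(np.mean(f1_b)), 3),
      " p =", round(float(p_value), 4))
```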
Procedia PDF Downloads 2481077 Characterization of Volatiles of Botrytis cinerea in Blueberry Using Solid Phase Micro Extraction, Gas Chromatography Mass Spectrometry
Authors: Ahmed Auda, Manjree Agarwal, Giles Hardy, Yonglin Ren
Abstract:
Botrytis cinerea is a major pest of many plants. It can attack a wide range of plant parts, including buds, flowers, leaves, stems, and fruit. However, B. cinerea can be confused with other diseases that cause the same damage. There are many species of Botrytis and more than one strain of each. Botrytis might infect the foliage of nursery stock stored through winter in damp conditions. There are no known resistant plants. Botrytis must have nutrients or a food source before it infests the plant. Nutrients leaking from wounded plant parts or dying tissue, such as old flower petals, provide the required nutrients. From this food, the fungus becomes more aggressive and invades healthy tissue. Dark to light brown rot forms in the infected tissue. High humidity conditions support the growth of this fungus. However, we suppose that selection pressure can act on the morphological and neurophysiological filter properties of the receiver and on both the biochemical and the physiological regulation of the signal. Communication is implied when signal and receiver evolve toward more and more specific matching. On the other hand, receivers respond to portions of a body odor bouquet which is released into the environment not as an (intentional) signal but as an unavoidable consequence of metabolic activity or tissue damage. Each year, Botrytis species can cause considerable economic losses to plant crops. Even with the application of strict quarantine and control measures, these fungi can still find their way into crops and cause the imposition of onerous restrictions on exports. Blueberry fruit mould caused by fungal infection usually results in major losses during post-harvest storage. Therefore, the management of infection in the early stages of disease development is necessary to minimize losses. The overall purpose of this study is to develop sensitive, cheap, quick, and robust diagnostic techniques for the detection of B. cinerea in blueberry. The specific aim was to investigate the performance of volatile organic compounds (VOCs) in the detection and discrimination of blueberry fruits infected by fungal pathogens, with an emphasis on Botrytis, in the early post-harvest storage stage.Keywords: botrytis cinerea, blueberry, GC/MS, VOCs
Procedia PDF Downloads 2441076 Professional Development in EFL Classroom: Motivation and Reflection
Authors: Iman Jabbar
Abstract:
Within the scope of professionalism, and in order to compete with the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In our field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, he or she should endeavor to enhance the students’ skills, such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his or her competence and performance. Reflective practice, which is an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students’ learning as it affects the quality of learning English in the classroom, in addition to determining success or failure as well as language achievement. This is a qualitative study grounded in the interpretive perspectives of teachers and students regarding the process of professional development. The study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies that they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers’ professional context? The study is organized into two parts, theoretical and practical. The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development by providing definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study since it deals with two issues. It involves the research design, methodology, and methods of data collection, sampling, and data analysis. It ends with an overall discussion of the findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.Keywords: professional development, motivation, reflection, learning
Procedia PDF Downloads 4521075 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of or changes to the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in Wisetex, with varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared alongside the Wisetex models to determine the accuracy of the predictions and to identify architecture parameters that can affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably well with the experimental results; however, deviations are evident due to assumptions present within the models.Keywords: 3D woven composites, compression, preforms, textile composites
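As an illustration of how dry-compression data of this kind can be analysed, the sketch below fits a simple power-law thickness-pressure model to hypothetical measurements and inverts it to estimate the pressure needed for a target thickness. The model form, the data points, and the target thickness are assumptions for illustration and are not taken from the study.

```python
# Minimal sketch, not the authors' analysis: fitting a power-law compression
# model t(P) = a * P**(-b) to hypothetical dry-compression data (preform
# thickness vs applied pressure). All numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def power_law(pressure_kpa, a, b):
    # Thickness in mm as a function of compaction pressure in kPa
    return a * pressure_kpa ** (-b)

pressure = np.array([10, 25, 50, 100, 200, 400])        # kPa (hypothetical)
thickness = np.array([5.8, 5.1, 4.6, 4.2, 3.9, 3.6])    # mm  (hypothetical)

params, _ = curve_fit(power_law, pressure, thickness, p0=(6.0, 0.1))
a_fit, b_fit = params
print(f"a = {a_fit:.2f} mm, b = {b_fit:.3f}")

# The fitted curve can then be compared against simulated predictions or used
# to estimate the pressure needed to reach a target preform thickness.
target_thickness = 4.0  # mm
required_pressure = (a_fit / target_thickness) ** (1.0 / b_fit)
print(f"Estimated pressure for {target_thickness} mm: {required_pressure:.0f} kPa")
```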
Procedia PDF Downloads 1361074 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate the skin penetration of xenobiotics based on a high-throughput permeability model. The greatest advantage of Skin PAMPA is that it allows more tests to be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the Skin PAMPA system. Material and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has a low permeability, and of Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membranes were characterized using Confocal Laser Raman Spectroscopy, with a stabilized laser at 785 nm, a 10-second integration time, and 2 accumulations. The membrane behaviour results in the PAMPA system were statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra obtained in the 800-1200 cm⁻¹ region, which is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids, and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with a self-organization capability similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA proved to be similar to one another and to have properties comparable to those of the stratum corneum.Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
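For readers unfamiliar with how PAMPA permeation is quantified, the sketch below computes an effective permeability (Pe) using a commonly reported form of the PAMPA equation, neglecting lag time and membrane retention. The well volumes, membrane area, incubation time, and concentrations are invented and are not the authors' experimental values.

```python
# Minimal sketch, not the authors' protocol: effective permeability (Pe) as
# commonly reported for PAMPA-type assays. All assay values are hypothetical.
import math

def pampa_pe(c_acceptor, c_donor_initial, v_donor, v_acceptor, area_cm2, t_s):
    """Effective permeability (cm/s), neglecting lag time and membrane retention."""
    c_equilibrium = c_donor_initial * v_donor / (v_donor + v_acceptor)
    factor = (v_donor * v_acceptor) / ((v_donor + v_acceptor) * area_cm2 * t_s)
    return -factor * math.log(1.0 - c_acceptor / c_equilibrium)

# Hypothetical assay: 0.30 mL wells, 0.30 cm^2 membrane, 4 h incubation
pe = pampa_pe(c_acceptor=12.0,            # µM found in the acceptor well
              c_donor_initial=100.0,      # µM dosed into the donor well
              v_donor=0.30, v_acceptor=0.30,   # mL (= cm^3)
              area_cm2=0.30, t_s=4 * 3600)
print(f"Pe ≈ {pe:.2e} cm/s")
```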
Procedia PDF Downloads 5091073 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites
Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze
Abstract:
Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the processes of compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process that promotes the formation of pieces in an ordinary crystalline state instead of the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short time of the sintering procedure, which finally gives an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal, and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly Al₂O₃, Si₃N₄, TiO₂, ZrB₂, etc.). SPS has been shown to effectively densify a variety of ceramic systems, including Al₂O₃, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity, and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al₂O₃ are investigated by systematically varying the sintering temperature, holding time, and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites. Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. The structure, physical-chemical, mechanical, and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering
Procedia PDF Downloads 3771072 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. However, over the last decade there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations, improving the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models according to their scope, mathematical formulation, structure, objectives, solution approach, software used, and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude the paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide, and long-term perspective. Finally, prior applications of SCM in other industries have to be taken into account in order to model CSCs, but not without reforming generic concepts to match the unique characteristics of the construction industry.Keywords: construction supply chain management, modeling, operations research, optimization, simulation
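As a minimal illustration of the kind of mathematical programming model the review calls for, the sketch below solves a tiny, hypothetical supplier-to-site material allocation problem as a linear program. The suppliers, sites, costs, capacities, and demands are all invented, and the formulation is far simpler than any realistic CSC model discussed in the literature.

```python
# Illustrative sketch only (not from the reviewed literature): a tiny linear
# program allocating material shipments from two hypothetical suppliers to
# two construction sites at minimum transport cost.
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [s1->siteA, s1->siteB, s2->siteA, s2->siteB] (tonnes)
cost = [4.0, 6.0, 5.0, 3.0]             # cost per tonne shipped (hypothetical)

# Supplier capacity constraints: total shipped from each supplier <= capacity
A_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = [80, 70]                          # tonnes available at suppliers 1 and 2

# Site demand constraints: total delivered to each site == demand
A_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [60, 50]                          # tonnes required at sites A and B

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print("Optimal shipments (tonnes):", np.round(res.x, 1))
print("Minimum transport cost:", round(res.fun, 1))
```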
Procedia PDF Downloads 5031071 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy
Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright
Abstract:
The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that will rapidly respond to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context about the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated through two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the states which actually occur during the manufacturing operation. Using the Anylogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cellular make-up of the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell, and a quality control cell. The factory shop provides manufactured parts to a number of clients, and there are substantial variations in part configurations; new part designs are continually being introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control; the necessary real-time information is readily available to the decision maker at any point in time. For comparative analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. From the results obtained in the empirical study, it was seen that increasing the efficiency of the factory communication system increases the degree of adherence of a job to the expected schedule. The performance of the downstream production flow, fed by the parallel upstream flow of information on the factory state, also increased.Keywords: information entropy, communication in manufacturing, mass customisation, scheduling
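A minimal sketch of the entropy measures described above is given below, assuming the cell-state probabilities are already known. The state categories, probability values, and the choice of base-2 logarithm (bits) are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of the two entropy measures: structural entropy uses the
# scheduled (expected) state distribution of a cell, operational entropy uses
# the distribution of states actually observed during the run. Numbers invented.
import numpy as np

def shannon_entropy(probabilities):
    """H = -sum(p * log2 p), ignoring zero-probability states."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical state probabilities for one cell (e.g. idle/setup/busy/blocked)
scheduled = [0.10, 0.10, 0.75, 0.05]   # as planned in the schedule
observed  = [0.20, 0.15, 0.50, 0.15]   # as logged during the simulated run

structural_entropy = shannon_entropy(scheduled)
operational_entropy = shannon_entropy(observed)
print(f"Structural entropy:  {structural_entropy:.3f} bits")
print(f"Operational entropy: {operational_entropy:.3f} bits")
# A larger gap between the two suggests a system drifting out of control,
# i.e. more unplanned uncertainty that the communication system must resolve.
```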
Procedia PDF Downloads 2471070 Integrating Wearable-Textiles Sensors and IoT for Continuous Electromyography Monitoring
Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove
Abstract:
Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material that was embroidered onto a fabric substrate. The sensor was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module. The Wi-Fi-enabled module then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles. The signal quality was comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. The sensor is soft, stretchable, and washable, making it suitable for long-term use. This makes it ideal for use in clinical settings, where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers. The volunteers were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV). The frequency-domain analysis was used to extract features such as the power spectrum. The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable and that can successfully acquire and transmit EMG signals. The results of this study demonstrate that a wearable textile sensor can be developed that meets the requirements of being soft, stretchable, washable, and capable of acquiring and transmitting EMG signals. This sensor has the potential to improve the way EMG is used in a variety of settings.Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor
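To illustrate the signal-analysis step described above, the sketch below computes the RMS, ARV, and a Welch power spectrum for a synthetic surrogate EMG signal. The sampling rate, filter band, and the median-frequency feature are assumptions for illustration rather than details taken from the study.

```python
# Minimal sketch, assuming a sampled EMG signal and a known sampling rate:
# time-domain features (RMS, ARV) and a frequency-domain power spectrum.
# The signal below is synthetic band-limited noise used as a stand-in for EMG.
import numpy as np
from scipy.signal import welch, butter, filtfilt

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
b, a = butter(4, [20, 450], btype="band", fs=fs)
emg = filtfilt(b, a, rng.standard_normal(t.size))   # surrogate EMG signal

# Time-domain features
rms = np.sqrt(np.mean(emg ** 2))              # root mean square
arv = np.mean(np.abs(emg))                    # average rectified value

# Frequency-domain feature: power spectral density and median frequency
freqs, psd = welch(emg, fs=fs, nperseg=1024)
cumulative = np.cumsum(psd)
median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

print(f"RMS = {rms:.3f}, ARV = {arv:.3f}, median frequency = {median_freq:.1f} Hz")
```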
Procedia PDF Downloads 751069 Window Seat: Examining Public Space, Politics, and Social Identity through Urban Public Transportation
Authors: Sabrina Howard
Abstract:
'Window Seat' uses public transportation as an entry point for understanding the relationship between public space, politics, and social identity construction. This project argues that by bringing people of different races, classes, and genders into 'contact' with one another, public transit operates as a site of exposure, as people consciously and unconsciously perform social identity within these spaces. These performances offer a form of freedom that we associate with being in urban spaces while simultaneously rendering certain racialized, gendered, and classed bodies vulnerable to violence. Furthermore, due to its exposing function, public transit operates as a site through which we, as urbanites and scholars, can read social injustice and reflect on the work that is necessary to become a truly democratic society. The major questions guiding this research are: How does using public transit as the entry point provide unique insights into the relationship between social identity, politics, and public space? What ideas do Americans hold about public space, and how might these ideas reflect a liberal yearning for a more democratic society? To address these research questions, 'Window Seat' critically examines ethnographic data collected on public buses and trains in Los Angeles, California, and online news media. It analyzes these sources through literature in socio-cultural psychology, sociology, and political science. It investigates the 'everyday urban hero' narrative, or popular news stories that feature an individual or group of people acting against discriminatory or 'Anti-American' behavior on public buses and trains. 'Window Seat' studies these narratives to assert that by circulating stories of civility in news media, United Statesians construct and maintain ideas of the 'liberal city,' which is characterized by ideals of freedom and democracy. Furthermore, for those involved, these moments create an opportunity to perform the role of the Good Samaritan, an identity that is wrapped up in liberal beliefs in diversity and inclusion. This research expands conversations in urban studies by making a case for the political significance of urban public space. It demonstrates how these sites serve as spaces through which liberal beliefs are circulated and upheld through identity performance.Keywords: social identity, public space, public transportation, liberalism
Procedia PDF Downloads 206