Search results for: strain engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4535

95 Geotechnical Evaluation and Sizing of the Reinforcement Layer on Soft Soil in the Construction of the North Triage Road Clover, in Brasilia Federal District, Brazil

Authors: Rideci Farias, Haroldo Paranhos, Joyce Silva, Elson Almeida, Hellen Silva, Lucas Silva

Abstract:

The constant growth of the vehicle fleet in large cities demands dynamic engineering solutions for traffic flow, and the Federal District (DF), Brazil, is no different. Brasilia, the capital of Brazil and a UNESCO World Cultural Heritage Site, was planned for 500 thousand inhabitants; today more than 3 million people circulate in the city, with a fleet of more than one vehicle for every two inhabitants. The growth of the city toward the northern region required urban planning solutions for this constantly growing fleet. In this context, a complex of viaducts, road accesses, new carriageways, and the duplication of the Bragueto bridge over Lake Paranoá in the northern part of the city was designed, giving access to the BR-020 highway, denominated the Clover of North Triage (TTN). In the geopedological context, the region is composed of hydromorphic soils, with the water table present at some times of the year. From the geotechnical point of view, these are soils with SPT < 4 and undrained shear strength Su < 50 kPa. According to the urban planning of Brasília, special engineering structures cannot rise above the urban landscape, so as not to contrast with the urban character established by the architects Lúcio Costa and Oscar Niemeyer, who were hired to design the new capital of Brazil. This urban criterion created a technical impasse, resulting in the need to 'bury' the structures and, in turn, the access ramps at different levels, in regions of low-bearing-capacity soil and an outcropping water table, which motivated this study and design. For the adoption of the appropriate solution, Standard Penetration Test (SPT), vane test, dynamic probing light (DPL), and auger boring campaigns were carried out. By comparing the results of these tests, soil resistance profiles and water levels were established for the studied sections.
Geometric factors, such as existing sidewalks and the lack of elevation for discharging deep drainage water, ruled out traditional techniques for the total removal of the soft soils, and the design also sought to avoid temporary drawdown and shoring of excavations. Thus, a structural layer was designed to reinforce the subgrade by 'needling' the soft soil, without the need for longitudinal drains. In this context, the article presents the geological and geotechnical studies carried out, as well as the sizing of the reinforcement layer over the soft soil. The main objective of this solution is to allow execution of the civil works without interfering with the roads in use, to allow execution of services in rainy periods, and to provide a solution compatible with the drainage characteristics and with the reinforcement of the soft soil.
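The sizing logic described above can be sketched as a simple undrained bearing-capacity check for a granular layer over soft soil. This is a minimal illustration only; the applied pressure, Su value, factor of safety, and load-spread assumption below are invented, not the project's data, and the fill self-weight is ignored for simplicity.

```python
# Hedged sizing sketch: find the thickness H of a granular reinforcement layer
# so that load spread through the fill brings the pressure reaching the soft
# soil below its factored undrained bearing capacity q_ult = Nc * Su, Nc = 5.14.
# All numerical inputs are illustrative assumptions.

def required_layer_thickness(q_applied, su, width=1.0, fs=1.5, spread_v_h=2.0):
    """Smallest thickness H (m) for a strip load of `width` m, assuming a
    2(V):1(H) load spread through the fill (h/2 extra width on each side)."""
    q_allow = 5.14 * su / fs          # factored undrained bearing capacity, kPa
    h = 0.0
    while h <= 5.0:
        spread_width = width + 2.0 * h / spread_v_h
        if q_applied * width / spread_width <= q_allow:
            return round(h, 2)
        h += 0.05
    return None                        # layer alone insufficient within 5 m

print(required_layer_thickness(q_applied=100.0, su=25.0))  # kPa inputs -> 0.2 m
```

For a 100 kPa strip load on soil with Su = 25 kPa, about 0.2 m of spreading layer already satisfies this (very simplified) check; a real design would add the fill weight, the geosynthetic "needling" contribution, and settlement criteria.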

Keywords: layer, reinforcement, soft soil, clover of north triage

Procedia PDF Downloads 228
94 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature, although the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy for remanufactured products, maximizing total profit and minimizing product recovery costs, was formulated and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models.
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system built from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance was carried out using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
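The topology-shift behavior described above (routing changing as the carbon price rises) can be illustrated with a tiny reverse-logistics assignment problem. This is not the paper's physical-programming model: the two collection centers, two facilities, and all costs, emissions, supplies, and capacities below are invented for illustration, and the problem is small enough to brute-force instead of using an LP solver.

```python
# Toy reverse-logistics sketch: route end-of-life units from collection centers
# C0, C1 to remanufacturing facilities F0, F1 at minimum cost, where each
# route's unit cost = transport cost + carbon_price * route emissions.
# All numbers are invented; real models would use an LP/MILP solver.

def optimal_flows(carbon_cost_per_t=0.0):
    transport = {("C0", "F0"): 4.0, ("C0", "F1"): 6.0,
                 ("C1", "F0"): 5.0, ("C1", "F1"): 3.0}    # $ per unit
    emissions = {("C0", "F0"): 0.10, ("C0", "F1"): 0.01,
                 ("C1", "F0"): 0.04, ("C1", "F1"): 0.01}  # t CO2e per unit
    supply = {"C0": 100, "C1": 80}        # units collected at each center
    capacity = {"F0": 120, "F1": 100}     # remanufacturing capacities
    cost = {r: transport[r] + carbon_cost_per_t * emissions[r] for r in transport}
    best = None
    for x00 in range(supply["C0"] + 1):          # C0 -> F0 flow
        for x10 in range(supply["C1"] + 1):      # C1 -> F0 flow
            x01, x11 = supply["C0"] - x00, supply["C1"] - x10
            if x00 + x10 > capacity["F0"] or x01 + x11 > capacity["F1"]:
                continue                          # facility capacity violated
            total = (x00 * cost[("C0", "F0")] + x01 * cost[("C0", "F1")]
                     + x10 * cost[("C1", "F0")] + x11 * cost[("C1", "F1")])
            if best is None or total < best[0]:
                best = (total, {"C0->F0": x00, "C0->F1": x01,
                                "C1->F0": x10, "C1->F1": x11})
    return best

cheap = optimal_flows(0.0)      # no carbon price: C0 ships everything to F0
priced = optimal_flows(200.0)   # high carbon price: C0 reroutes to cleaner F1
```

With these invented numbers, the zero-carbon optimum sends all of C0's supply on the cheap but emissions-heavy C0-F0 route; at a high carbon price the optimal topology flips, mirroring the "distinct shifts in topology" the abstract reports.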

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 160
93 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery systems, in which drugs are administered in multiple stages and at specified intervals by patients, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques, and most of which are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers sought ways to largely solve its problems. These efforts led to controlled drug release systems, which have many advantages: with controlled release, the concentration of the drug in the body is kept at a certain level. Compared with carbon nanotubes, graphene is biodegradable and non-toxic, and its lower price makes it cost-effective for industrialization. Moreover, the highly reactive, wide surfaces of graphene plates make graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Compared with pristine graphene, the resulting graphene oxide is heavier and bears carboxyl, hydroxyl, and epoxy groups. Graphene oxide is therefore very hydrophilic, dissolves easily in water, and forms a stable solution. Because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can connect with other functional groups such as amines, esters, and polymers, bringing new features to the surface of graphene.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer that does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and for increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified by chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated by oxalyl chloride to increase its reactivity toward amines. In the presence of chitosan, an amidation reaction was then performed to form amide grafts, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman spectroscopy, TGA, and SEM. Loading and release capacities were determined by UV-visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH dependence was identified from DOX release from the GO-CS nanosheets at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
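The 99% loading capacity quoted above is the kind of figure typically derived from UV-visible absorbance via the Beer-Lambert law. The sketch below shows that generic calculation only; the absorbance readings and molar absorptivity are invented values, not the study's measurements.

```python
# Generic loading-efficiency calculation from UV-Vis data (Beer-Lambert:
# c = A / (epsilon * l)). Illustrative numbers only, not the study's data.

def loading_efficiency(abs_initial, abs_supernatant, epsilon=11500.0, path_cm=1.0):
    """Fraction of drug bound to the carrier, from the absorbance of the drug
    solution before loading and of the unbound drug left in the supernatant.
    Note: epsilon and path length cancel in the ratio; they are kept here only
    to make the Beer-Lambert step explicit."""
    c_initial = abs_initial / (epsilon * path_cm)    # total drug concentration
    c_free = abs_supernatant / (epsilon * path_cm)   # unbound drug concentration
    return (c_initial - c_free) / c_initial

print(round(loading_efficiency(0.80, 0.008), 2))  # -> 0.99
```

An initial absorbance of 0.80 and a residual supernatant absorbance of 0.008 would correspond to the 99% loading the abstract reports.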

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

Procedia PDF Downloads 120
92 Experimental Research of Canine Mandibular Defect Reconstruction with the Controlled Meshy Titanium Alloy Scaffold Fabricated by Electron Beam Melting Combined with BMSCs-Encapsulating Chitosan Hydrogel

Authors: Wang Hong, Liu Chang Kui, Zhao Bing Jing, Hu Min

Abstract:

Objective: We observed the repair effect on canine mandibular defects of a meshy Ti6Al4V scaffold fabricated by electron beam melting (EBM) combined with bone marrow mesenchymal stem cells (BMMSCs) encapsulated in chitosan hydrogel. Methods: Meshy titanium scaffolds were prepared by EBM of commercial Ti6Al4V powder. The scaffolds were 24 mm long, 5 mm wide, and 8 mm high. Pore size and porosity were evaluated by scanning electron microscopy (SEM). Chitosan/Bio-Oss hydrogel was prepared from chitosan, β-sodium glycerophosphate, and Bio-Oss powder. BMMSCs were harvested from canine iliac crests, seeded in the titanium scaffolds, and encapsulated in the chitosan/Bio-Oss hydrogel. Cell viability was evaluated with cell counting kit-8 (CCK-8), and osteogenic differentiation ability was evaluated by alkaline phosphatase (ALP) activity and gene expression of OC, OPN, and Col I. The constructs were assembled by injecting the BMMSCs/chitosan/Bio-Oss hydrogel into the meshy Ti6Al4V scaffolds and allowing it to solidify. Box-shaped bone defects 24 mm long were made at the mid-portion of the mandible of adult beagles. The defects were randomly filled with BMMSCs/chitosan/Bio-Oss + titanium, chitosan/Bio-Oss + titanium, or titanium alone; autogenous iliac crest grafts served as the control group in 3 beagles. Radionuclide bone imaging was used to monitor new bone tissue at 2, 4, 8, and 12 weeks after surgery. CT examinations were made on the day of surgery and at 4, 12, and 24 weeks after surgery. The animals were sacrificed at 4, 12, and 24 weeks after surgery, and bone formation was evaluated by histology and micro-CT. Results: The pores of the scaffolds were interconnected, the pore size was about 1 mm, and the average porosity was about 76%. The pore size of the hydrogel was 50-200 μm and its average porosity was approximately 90%. The hydrogel solidified at 37 °C within 10 minutes.
The viability and osteogenic differentiation ability of the BMMSCs were not affected by the titanium scaffolds or the hydrogel. Radionuclide bone imaging showed an increasing tendency of revascularization and bone regeneration in all groups at 2, 4, and 8 weeks after the operation, with no further changes at 12 weeks; the tendency was more obvious in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group. CT, micro-CT, and histology showed that new bone formation increased over time, with more new bone regenerated in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group than in the other two groups. At 24 weeks, the autogenous group had achieved bone union, and the BMMSCs/chitosan/Bio-Oss + titanium group showed extensive new bone formed around the scaffolds, with more new bone inside the central pores of the scaffolds than in the chitosan/Bio-Oss + titanium and titanium-alone groups. The differences were significant. Conclusion: The titanium scaffolds fabricated by EBM had a controlled porous structure, good bone conduction, and good biocompatibility. The chitosan/Bio-Oss hydrogel was injectable, plastic, thermosensitive, and biocompatible. The meshy Ti6Al4V scaffold produced by EBM, combined with BMMSCs encapsulated in chitosan hydrogel, had a good capacity for mandibular bone defect repair.

Keywords: mandibular reconstruction, tissue engineering, electron beam melting, titanium alloy

Procedia PDF Downloads 445
91 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling

Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky

Abstract:

Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors such as fluid salinity, pH, temperature, and rock properties affect particle detachment. Formation damage is especially unfavorable near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles: adsorption of nanoparticles on fines and rock surfaces alters the zeta-potential of the surfaces and enhances the attachment force between the rock and fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock in the pre-treatment stage and (2) fines migration and permeability reduction during water production after the pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture. The system of equations allows for an exact solution: the non-self-similar wave-interaction problem was solved by the method of characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing the injection of nanoparticles and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop. The solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program.
The laboratory study includes two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment; and (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The results of the analytical modeling showed a significant reduction in the outlet fines concentration and in formation damage, in good agreement with the core-flood data. The exact solution accurately describes the fines breakthrough and evaluates the positive effect of the nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media: for the laboratory case presented, the reduction of permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage.
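The mechanism the abstract describes, nanoparticles raising the attachment force so fewer fines detach and plug pores, can be caricatured in a zero-dimensional sketch. This is my simplification, not the authors' analytical model: the retention concentrations, damage coefficient, and the assumption that all detached fines are recaptured in pore throats are invented for illustration.

```python
# Minimal 0-D fines-migration sketch (assumptions mine, not the paper's exact
# model): fines detach when attached concentration exceeds a critical retention
# level; nanoparticle pre-treatment raises that level, so fewer fines detach,
# strain in pore throats, and damage permeability (k/k0 = 1 / (1 + beta*sigma)).

def permeability_after_production(pvi, sigma_attached=0.10,
                                  sigma_crit_untreated=0.02,
                                  sigma_crit_treated=0.08,
                                  beta=500.0, treated=False):
    """Return k/k0 after `pvi` pore volumes injected of production.
    sigma_*: volumetric fines concentrations (invented values);
    beta: formation-damage coefficient (invented value)."""
    sigma_crit = sigma_crit_treated if treated else sigma_crit_untreated
    detached = max(0.0, sigma_attached - sigma_crit)
    # worst case: all detached fines are strained in pore throats by 1 PVI
    sigma_strained = detached * min(1.0, pvi)
    return 1.0 / (1.0 + beta * sigma_strained)

k_ref = permeability_after_production(1.0, treated=False)  # no pre-treatment
k_np = permeability_after_production(1.0, treated=True)    # with nanoparticles
```

Even this caricature reproduces the qualitative result: the pre-treated case retains a substantially higher fraction of the initial permeability after 1 PVI than the untreated reference.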

Keywords: nano-particles, formation damage, permeability, fines migration

Procedia PDF Downloads 622
90 Integrated Planning, Designing, Development and Management of Eco-Friendly Human Settlements for Sustainable Development of Environment, Economic, Peace and Society of All Economies

Authors: Indra Bahadur Chand

Abstract:

This paper focuses on the need to develop and apply global protocols and policies in the planning, design, development, and management of systems of eco-towns and eco-villages, so that sustainable development is assured from the perspectives of the environment, the economy, peace, and harmonized social dynamics. This perspective is essential for the development of civilized and eco-friendly human settlements in the towns and rural areas of a nation, a milestone toward a happy and sustainable lifestyle for rural and urban communities. The urban population of most towns in developing economies has increased tremendously as rural people have migrated to them over the past three decades. Consequently, urban lifestyles in most towns are stressed by environmental pollution, water crises, congested traffic, energy crises, food crises, and unemployment. Eco-towns and eco-villages should be developed in which the lifestyle of all residents is sustainable and happy.
The built environment of a settlement should minimize the problems of non-ecological CO2 emissions, unbalanced utilization of natural resources, environmental degradation, natural calamities, ecological imbalance, energy crises, water scarcity, waste management, food crises, unemployment, and deterioration of cultural and social heritage. Indicators such as the ratio of public to private land ownership; the ratio of vegetated land to settlement area; the ratio of people travelling in vehicles to those on foot; the proportion of people employed outside the town or village; the recycling rate of waste materials; the water consumption level; the ratio of people to vehicles; the ratio of road network length to town or village area; the share of renewable energy in total energy consumption; the share of religious and recreational areas in the total built-up area; the annual suicide rate; the annual rate of traffic injuries and deaths; and the proportion of food consumption met by agro-food production within the town will be used to assist in the design and monitoring of each eco-town and eco-village. Eco-towns and eco-villages should be planned and developed to offer sustainable infrastructure and utilities that manage CO2 levels in individual homes and settlements, home energy use, transport, food and consumer goods, water supply, and waste management, while conserving historical heritage, healthy neighborhoods, the natural landscape, and biodiversity, and developing green infrastructure. Eco-towns and eco-villages should be developed on the basis of master planning and architecture that shape and define the settlement and its form. Master planning and engineering should focus on delivering the sustainability criteria of eco-towns and eco-villages; this will involve working with the specific landscape and natural resources of each locality.

Keywords: eco-town, ecological habitation, master plan, sustainable development

Procedia PDF Downloads 180
89 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method of modelling the transient heat transfer processes occurring in the vehicle underhood region during the thermal soak phase. The heat retained from the soak period benefits the subsequent cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits in both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay, in obtaining accurate cool-down trajectories of the fluid and metal temperatures, and in predicting the temperatures at the end of the soak period. Method development was investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient thermal modelling approach for the full vehicle over 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFLOW software, further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle test data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and it also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained over the 9-hour thermal soak provide vital information and a basis for the further development of reduced-order modelling in future work. This allows a fast-running model to be developed and embedded in a holistic study of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part during the first stage of the 9-hour soak, and that the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in heat retention modelling. The developed method demonstrates software integration for simulating buoyancy-driven heat transfer in the vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
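Why buoyancy dominates the early soak phase can be seen from a back-of-envelope Rayleigh number estimate for the air in the engine bay. The sketch below is illustrative only: the temperature difference, length scale, and air properties are round-number assumptions, not values from the study.

```python
# Back-of-envelope check: Rayleigh number Ra = g * beta * dT * L^3 / (nu * alpha)
# for air in an underhood gap. Air properties are rough tabulated values at a
# ~350 K film temperature; dT and L are illustrative assumptions.

def rayleigh_air(delta_t, length, t_film=350.0):
    """Ra for air, treating it as an ideal gas (beta = 1 / T_film)."""
    g = 9.81          # gravity, m/s^2
    beta = 1.0 / t_film   # thermal expansion coefficient, 1/K
    nu = 2.1e-5       # kinematic viscosity of air near 350 K, m^2/s
    alpha = 3.0e-5    # thermal diffusivity of air near 350 K, m^2/s
    return g * beta * delta_t * length ** 3 / (nu * alpha)

ra = rayleigh_air(delta_t=60.0, length=0.3)  # hot engine ~60 K above bay air
# Ra of order 1e7-1e8 here: well beyond the onset of natural convection,
# so buoyancy-driven flow is significant early in the soak, as the study found.
```

As the components cool and delta_t shrinks, Ra drops with it, which is consistent with the abstract's observation that buoyancy matters most during the first stage of the soak.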

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 154
88 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of an instructional design using blended learning on the learning of radian measures among engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods; it requires the physical presence of both lecturer and student in a mathematics computer laboratory, while giving students an element of control over time, place, path, or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities, namely radian measure activities extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions, and collaborations. These activities required students to write formative assessment tests, whose purpose was to find out about the students' understanding of radian measures, including the errors and misconceptions they displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students' weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions of and feedback on the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected.
The Algebraic Insight Framework is a subset of symbol sense that allows a student to enter expressions into a computer-assisted system correctly and efficiently. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy, which demonstrates the procedures followed to reach solutions of mathematical problems. The researcher explained mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students' understanding of radian measures were distributed. Results showed that the approach enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense-making to the classroom discussions. The data revealed that sense-making through the reinvention of rules and formulae helped the students enhance their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme for all first-year students. This will prepare students who are computer-illiterate to become conversant with Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students' autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 311
87 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. 
Among them, GAs attract attention because they do not require a considerable amount of data in advance when dealing with complex problems, and they make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to describe degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. To solve the inverse problem using the combined method, this study uses only the first mode shapes of the structure as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
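The inverse-identification idea, using a GA to find the stiffness degradation that reproduces measured modal data, can be shown in a toy form. This is not the paper's ABAQUS-coupled implementation: the forward model below is a one-parameter surrogate (frequency proportional to the square root of stiffness) standing in for a finite element solver, and all GA settings are illustrative.

```python
# Toy GA sketch of stiffness identification from a measured frequency.
# The forward model is a stand-in for an FE solver; all numbers are invented.

import random

def first_frequency(stiffness_factor):
    # surrogate forward model: f ~ sqrt(k); real work would run an FE analysis
    return 100.0 * stiffness_factor ** 0.5

def identify_damage(f_measured, pop_size=30, generations=60, seed=1):
    """Search for the stiffness-degradation factor in [0, 1] whose predicted
    first frequency best matches f_measured, by a simple elitist GA."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    fitness = lambda k: -abs(first_frequency(k) - f_measured)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += rng.gauss(0.0, 0.02)       # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

k_hat = identify_damage(f_measured=80.0)  # true factor would be (80/100)^2 = 0.64
```

No gradients of the forward model are needed, which is exactly why GAs suit FE-coupled inverse problems where sensitivities are expensive or unavailable; the price is many forward evaluations per generation.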

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 276
86 Advancing Agriculture through Technology: An Abstract of Research Findings

Authors: Eugene Aninagyei-Bonsu

Abstract:

Introduction: Agriculture has been a cornerstone of human civilization, ensuring food security and livelihoods for billions of people worldwide. In recent decades, rapid advancements in technology have revolutionized the agricultural sector, offering innovative solutions to enhance productivity, sustainability, and efficiency. This abstract summarizes key findings from a research study that explores the impacts of technology in modern agriculture and its implications for future food production systems. Methodologies: The research study employed a mixed-methods approach, combining quantitative data analysis with qualitative interviews and surveys to gain a comprehensive understanding of the role of technology in agriculture. Data was collected from various stakeholders, including farmers, agricultural technicians, and industry experts, to capture diverse perspectives on the adoption and utilization of agricultural technologies. The study also utilized case studies and literature reviews to contextualize the findings within the broader agricultural landscape. Major Findings: The research findings reveal that technology plays a pivotal role in transforming traditional farming practices and driving innovation in agriculture. Advanced technologies such as precision agriculture, drone technology, genetic engineering, and smart irrigation systems have significantly improved crop yields, reduced environmental impact, and optimized resource utilization. Farmers who have embraced these technologies have reported increased productivity, enhanced profitability, and improved resilience to environmental challenges. Furthermore, the study highlights the importance of accessible and affordable technology solutions for smallholder farmers in developing countries. 
Mobile applications, sensor technologies, and digital platforms have enabled small-scale farmers to access market information, weather forecasts, and agricultural best practices, empowering them to make informed decisions and improve their livelihoods. The research emphasizes the need for targeted policies and investments to bridge the digital divide and promote equitable technology adoption in agriculture. Conclusion: In conclusion, this research underscores the transformative potential of technology in agriculture and its critical role in advancing sustainable food production systems. The findings suggest that harnessing technology can address key challenges facing the agricultural sector, including climate change, resource scarcity, and food insecurity. By embracing innovation and leveraging technology, farmers can enhance their productivity, profitability, and resilience in a rapidly evolving global food system. Moving forward, policymakers, researchers, and industry stakeholders must collaborate to facilitate the adoption of appropriate technologies, support capacity building, and promote sustainable agricultural practices for a more resilient and food-secure future.

Keywords: technology development in modern agriculture, influence of information technology access in agriculture, analysis of agricultural technology development, analysis of frontier agricultural IoT technology

Procedia PDF Downloads 38
85 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on our bodies, to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allows them to recognize their early hypoxia signs and symptoms; here it was applied to Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science Technology Engineering Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude of up to 22,000 ft (FL220), or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with a chest strap sensor pre- and post-HH exposure. A pulse oximeter registered levels of oxygen saturation (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure in most of the SACs, while Tc measurements showed slight but consistent decreases.
All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task-focus issues as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
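The per-subject desaturation model described above (a sixth-order polynomial fit during exposure, fifth or fourth order during recovery) can be sketched with an ordinary least-squares polynomial fit. The SpO2 trace below is synthetic and purely illustrative, not data from the study:

```python
import numpy as np

def fit_spo2_profile(t, spo2, degree):
    """Fit a polynomial of the given degree to an SpO2 time series,
    as in the abstract's per-subject desaturation model."""
    coeffs = np.polyfit(t, spo2, degree)
    return np.poly1d(coeffs)

# Synthetic (illustrative) SpO2 trace: gradual desaturation at altitude.
t_exp = np.linspace(0, 10, 50)                  # minutes of HH exposure
spo2_exp = 98 - 1.2 * t_exp + 0.02 * t_exp**2   # toy decline, not real data

model_exp = fit_spo2_profile(t_exp, spo2_exp, degree=6)   # 6th order: exposure
residual = np.max(np.abs(model_exp(t_exp) - spo2_exp))
print(f"max fit residual: {residual:.3f} %SpO2")
```

A recovery segment would be fitted the same way with `degree=5` or `degree=4`; comparing predicted and observed curves then amounts to evaluating the fitted polynomial against held-out samples.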

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 116
84 Respiratory Health and Air Movement Within Equine Indoor Arenas

Authors: Staci McGill, Morgan Hayes, Robert Coleman, Kimberly Tumlin

Abstract:

The interaction and relationships between horses and humans have been shown to be positive for physical, mental, and emotional wellbeing; however, the equine spaces where these interactions occur do include some environmental risks. There are 1.7 million jobs associated with the equine industry in the United States, in addition to recreational riders, owners, and volunteers who interact with horses for substantial amounts of time daily inside built structures. One specialized facility, an “indoor arena”, is a semi-indoor structure used for exercising horses and exhibiting skills during competitive events. Typically, indoor arenas have a sand or sand mixture as the footing or surface over which the horse travels, and increasingly, silica sand is being recommended due to its durable nature. It was previously identified in a semi-qualitative survey that the majority of individuals using indoor arenas have environmental concerns about dust: 27% (90/333) of respondents reported respiratory issues or allergy-like symptoms while riding, and 21.6% (71/329) reported these issues while standing on the ground observing or teaching. Frequent headaches and/or lightheadedness were reported by 9.9% (33/333) of respondents while riding and by 4.3% (14/329) while on the ground. Horse respiratory health is also negatively impacted, with 58% (194/333) of respondents indicating horses cough during or after time in the indoor arena. Instructors who spent time in indoor arenas self-reported more respiratory issues than individuals who identified as smokers, highlighting the health relevance of understanding these unique structures. To further elucidate environmental concerns and self-reported health issues, 35 facility assessments were conducted in a cross-sectional sampling design in the states of Kentucky and Ohio (USA). Data, including air speeds, were collected in a grid fashion at 15 points within the indoor arenas and then mapped spatially using kriging in ArcGIS.
From the spatial maps, standard variances were obtained, and differences were analyzed using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA). There were no differences in the variance of air speeds across facility orientation, presence and type of roof ventilation, climate control systems, amount of openings, or use of fans. Variability of the air speeds in the indoor arenas was 0.25 or less. Further analysis yielded that average air speeds within the indoor arenas were lower than 100 ft/min (0.51 m/s), which is considered still air in other animal facilities. The lack of air movement means that dust clearance relies on particle size and weight rather than ventilation. While further work on respirable dust is necessary, this characterization of the semi-indoor environment where animals and humans interact indicates insufficient airflow to eliminate or reduce respiratory hazards. Finally, engineering solutions that address air movement deficiencies within indoor arenas or mitigate particulate matter are critical to ensuring exposures do not lead to adverse health outcomes for equine professionals, volunteers, participants, and horses within these spaces.
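The group comparison described above (per-arena air-speed variances tested across design factors with ANOVA) can be sketched as follows; the grids are synthetic stand-ins, and the grouping factor and values are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic 15-point air-speed grids (m/s) for arenas grouped by a design
# factor (e.g. presence of roof ventilation) -- illustrative values only.
vented   = [rng.normal(0.4, 0.1, 15) for _ in range(6)]
unvented = [rng.normal(0.4, 0.1, 15) for _ in range(6)]

# Per-arena variance of the 15 grid measurements, as in the abstract.
var_vented   = [np.var(g, ddof=1) for g in vented]
var_unvented = [np.var(g, ddof=1) for g in unvented]

# One-way ANOVA on the variances across the two groups.
f_stat, p_value = f_oneway(var_vented, var_unvented)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

The study's MANOVA would extend this by testing several response variables jointly; the one-way case above shows the core comparison of variance across a single factor.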

Keywords: equine, indoor arena, ventilation, particulate matter, respiratory health

Procedia PDF Downloads 117
83 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory

Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang

Abstract:

As part of the working environment of the fuze, the spin rate decaying law of a projectile in exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. In addition, it is significant for devices used in simulation tests of the fuze exterior ballistic environment, for the flight stability and dispersion accuracy of gun projectiles, and for the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destroying mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change law of projectile angular velocity in exterior ballistics, such as the Roggla formula, the exponential function formula, and the power function formula. However, these formulas are mostly semi-empirical, due to the poor test conditions and insufficient test data at the time they were derived. These formulas can hardly meet the design requirements of modern fuzes because they are not accurate enough and have a narrow range of applications. In order to provide more accurate ballistic environment parameters for the design of a hemispherical-head projectile fuze, the projectile’s spin rate decaying law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified as a hemispherical head, a cylindrical part, a rotating band part, and an anti-truncated conical tail.
The main assumptions are as follows: a) the shape and mass are symmetrical about the longitudinal axis; b) there is a smooth transition between the ball head and the cylindrical part; c) the air flow on the outer surface is treated as a flat-plate flow with the same area as the expanded outer surface of the projectile, and the boundary layer is turbulent; d) the polar damping moment attributed to the wrench hole and rifling marks on the projectile is not considered; e) the groove of the rifling on the rotating band is uniform, smooth, and regular. The impacts of the four parts on the aerodynamic moment of projectile rotation were obtained by aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. In the whole trajectory with the maximum range angle (38°), the absolute error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is no more than 7%. Therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically, by coupling the model with the projectile mass motion equations of exterior ballistics.
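Since the abstract notes that the model is a first-order nonlinear ODE with no analytical solution, a numerical integration sketch is shown below. The damping law, the coefficient `k`, and the initial spin rate are placeholder assumptions, not the paper's derived polar-damping moment:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order nonlinear spin-decay model; the paper's actual
# polar-damping law is not reproduced here, so k and the omega-exponent
# are assumptions chosen only to show the solution procedure.
k = 0.02          # damping coefficient, 1/s (assumed)
omega0 = 1800.0   # initial spin rate, rad/s (assumed)

def spin_decay(t, omega):
    # d(omega)/dt = -k * omega^1.5 / omega0^0.5: nonlinear in omega.
    return -k * omega**1.5 / omega0**0.5

sol = solve_ivp(spin_decay, (0.0, 30.0), [omega0], rtol=1e-8, atol=1e-8)
omega_end = sol.y[0, -1]
print(f"spin rate after 30 s: {omega_end:.1f} rad/s")
```

In the full problem, as the abstract states, this equation would be integrated together with the projectile's mass motion equations, so `k` would itself vary along the trajectory with air density and velocity.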

Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation

Procedia PDF Downloads 147
82 Improving Road Infrastructure Safety Management Through Statistical Analysis of Road Accident Data. Case Study: Streets in Bucharest

Authors: Dimitriu Corneliu-Ioan, Gheorghe Frațilă

Abstract:

Romania has one of the highest rates of road deaths among European Union Member States, and there is a concern that the country will not meet its goal of "zero deaths" by 2050. The European Union also aims to halve the number of people seriously injured in road accidents by 2030. Therefore, there is a need to improve road infrastructure safety management in Romania. The aim of this study is to analyze road accident data through statistical methods to assess the current state of road infrastructure safety in Bucharest. The study also aims to identify trends and make forecasts regarding serious road accidents and their consequences. The objective is to provide insights that can help prioritize measures to increase road safety, particularly in urban areas. The research utilizes statistical analysis methods, including exploratory analysis and descriptive statistics. Databases from the Traffic Police and the Romanian Road Authority are analyzed using Excel. Road risks are compared with the main causes of road accidents to identify correlations. The study emphasizes the need for better quality and more diverse collection of road accident data for effective analysis in the field of road infrastructure engineering. The research findings highlight the importance of prioritizing measures to improve road safety in urban areas, where serious accidents and their consequences are more frequent. There is a correlation between the measures ordered by road safety auditors and the main causes of serious accidents in Bucharest. The study also reveals the significant social costs of road accidents, amounting to approximately 3% of GDP, emphasizing the need for collaboration between local and central administrations in allocating resources for road safety. This research contributes to a clearer understanding of the current road infrastructure safety situation in Romania. 
The findings provide critical insights that can aid decision-makers in allocating resources efficiently and in cooperating across institutions to achieve sustainable road safety. The data used for this study were collected from the Traffic Police and the Romanian Road Authority. The data processing involves exploratory analysis and descriptive statistics in Excel. The analysis allows for a better understanding of the factors contributing to the current road safety situation and helps inform managerial decisions to eliminate or reduce road risks. The study addresses the state of road infrastructure safety in Bucharest and analyzes the trends and forecasts regarding serious road accidents and their consequences. It studies the correlation between road safety measures and the main causes of serious accidents. To improve road safety, cooperation between local and central administrations towards joint financial efforts is important. This research highlights the need for statistical data processing methods to substantiate managerial decisions in road infrastructure management. It emphasizes the importance of improving the quality and diversity of road accident data collection. The research findings provide a critical perspective on the current road safety situation in Romania and offer insights to identify appropriate solutions to reduce the number of serious road accidents in the future.
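The exploratory and descriptive analysis described above (performed by the authors in Excel) can equally be sketched in a few lines of Python; the records, column names, and values below are hypothetical, not Traffic Police data:

```python
import pandas as pd

# Hypothetical accident records in the spirit of the abstract's analysis;
# the real databases are not public, so these rows are illustrative only.
accidents = pd.DataFrame({
    "year":     [2019, 2019, 2020, 2020, 2021, 2021],
    "severity": ["serious", "fatal", "serious", "serious", "fatal", "serious"],
    "cause":    ["speeding", "speeding", "pedestrian", "speeding",
                 "pedestrian", "alcohol"],
})

# Descriptive statistics: accident counts per year and per reported cause,
# the kind of tabulation used to rank causes against road-risk measures.
by_year = accidents.groupby("year").size()
by_cause = accidents["cause"].value_counts()
print(by_year)
print(by_cause)
```

Trend and forecast work would then fit a time-series model to `by_year`; the point here is only the shape of the tabulation step.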

Keywords: road death rate, strategic objective, serious road accidents, road safety, statistical analysis

Procedia PDF Downloads 86
81 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System

Authors: A. Chávez, A. Rodríguez, F. Pinzón

Abstract:

Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. Disposal sites, known as landfills, are locations where pollution problems and damage to human health are reduced: they are technically designed and operated using engineering principles, storing the residue in a small area, compacting it to reduce volume, and covering it with soil layers, thereby controlling the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning and site selection for disposal, and monitoring and control of the selected processes, the dilemma of the leachate remains: its extreme concentration of pollutants devastates soil, flora, and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; removes suspended and non-settleable solids; removes nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria, forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa, and rotifers that process the organic carbon source and oxygen, as well as the nitrogen and phosphorus that are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses, and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns), generating biomass that is easily separated by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen (O) bound in nitrate, releasing nitrogen (N) in the gas phase.
Thus, the negative effects of the presence of ammonia or phosphorus are avoided. Overall, the activated sludge system involves about 8 hours of hydraulic retention time, which does not meet the demand for nitrification, which occurs on average at an MLSS value of 3,000 mg/L. The extended aeration system works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average retention time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogotá, Colombia, in which the leachate was subjected to activated sludge and extended aeration treatment in a sequencing batch reactor (SBR) before discharge into water bodies, avoiding ecological collapse. The system worked with a retention time of 8 days and a 30 L capacity, removing more than 90% of BOD and COD from initial values of 1720 mg/L and 6500 mg/L, respectively. By promoting deliberate nitrification, the commercial use of diffused aeration systems for sludge leachate from landfills is expected to be possible.
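The reported >90% removal can be checked with the standard removal-efficiency formula. The influent values are taken from the abstract; the effluent concentrations below are hypothetical, chosen only to illustrate removals just above 90%:

```python
def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percentage removal of a pollutant across the treatment step:
    100 * (C_in - C_out) / C_in."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Initial loads from the abstract; effluent values are assumed for the sketch.
bod_in, cod_in = 1720.0, 6500.0    # mg/L (reported influent)
bod_out, cod_out = 150.0, 600.0    # mg/L (hypothetical effluent)

print(f"BOD removal: {removal_efficiency(bod_in, bod_out):.1f}%")
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f}%")
```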

Keywords: sludge, landfill, leachate, SBR

Procedia PDF Downloads 273
80 Comparative Studies on the Needs and Development of Autotronic Maintenance Training Modules for the Training of Automobile Independent Workshop Service Technicians in North – Western Region, Nigeria

Authors: Muhammad Shuaibu Birniwa

Abstract:

Automobile Independent Workshop Service Technicians (popularly called roadside mechanics) are technical personnel who repair most of the automobile vehicles in Nigeria. The majority of these mechanics acquired their skills through apprenticeship training. Modern vehicles imported into the country pose great challenges to present-day automobile technicians, particularly in carrying out maintenance repairs of the latest (autotronic) vehicles, because the technicians lack autotronic skills competency. To find a solution to these problems, a study was carried out in the North-Western region of Nigeria to produce suitable maintenance training modules that can be used to train the technicians, so that they can upgrade or acquire the competencies needed for successful maintenance repair of the autotronic vehicles running every day on the nation’s roads. A cluster sampling technique was used to obtain a sample from the population. The population of the study comprises all autotronic-inclined lecturers, instructors, and independent workshop service technicians within the North-Western region of Nigeria. There are seven states (Jigawa, Kaduna, Kano, Katsina, Kebbi, Sokoto, and Zamfara) in the study area, and these serve as clusters in the population. Five (5) states were randomly selected to serve as the sample: Jigawa, Kano, Katsina, Kebbi, and Zamfara. The entire population of these five states, 183 in all (lecturers 44, instructors 49, and autotronic independent workshop service technicians 90), was used in the study because of its manageable size.
183 copies of the autotronic maintenance training module questionnaire (AMTMQ), with 174 and 149 question items respectively, were administered and collected by the researcher with the help of assistants; they were administered to 44 polytechnic lecturers in departments of mechanical engineering, 49 instructors in skills acquisition centres/polytechnics, and 90 master craftsmen of autotronic-inclined independent workshops. Data collected for answering research questions 1, 3, 4, and 5 were analysed using SPSS software version 22; grand means and standard deviations were used to answer the research questions. Analysis of Variance (ANOVA) was used to test null hypotheses one (1) to three (3), and a t-test was used to analyse hypotheses four (4) and five (5), all at the 0.05 level of significance. The research revealed that all the objectives, contents/tasks, facilities, delivery systems, and evaluation techniques contained in the questionnaire were required for the development of the autotronic maintenance training modules for independent workshop service technicians in the North-Western zone of Nigeria. The skills upgrade training conducted by the federal government in collaboration with SURE-P, NAC, and SMEDAN was not successful because the educational status of the target population was not considered in drafting the needed training modules. The mode of training used also failed to take cognizance of the theoretical preparation of the trainees, especially in basic science, which rendered the programme ineffective and insufficient for the tasks on the ground.
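The analysis pipeline described above (group means, ANOVA across the three respondent groups, a t-test between two of them, all at α = 0.05) can be sketched outside SPSS as follows; the Likert-style responses below are randomly generated placeholders, not AMTMQ data:

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(42)

# Hypothetical 5-point Likert responses from the three respondent groups;
# group sizes match the abstract (44, 49, 90), the values do not.
lecturers   = rng.integers(3, 6, 44)
instructors = rng.integers(3, 6, 49)
technicians = rng.integers(2, 6, 90)

# ANOVA across the three groups (as for null hypotheses 1-3) ...
f_stat, p_anova = f_oneway(lecturers, instructors, technicians)
# ... and a t-test between two groups (as for hypotheses 4-5).
t_stat, p_ttest = ttest_ind(lecturers, instructors)

alpha = 0.05
print(f"ANOVA  p = {p_anova:.3f} -> {'reject' if p_anova < alpha else 'retain'} H0")
print(f"t-test p = {p_ttest:.3f} -> {'reject' if p_ttest < alpha else 'retain'} H0")
```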

Keywords: autotronics, roadside, mechanics, technicians, independent

Procedia PDF Downloads 73
79 A Flipped Learning Experience in an Introductory Course of Information and Communication Technology in Two Bachelor's Degrees: Combining the Best of Online and Face-to-Face Teaching

Authors: Begona del Pino, Beatriz Prieto, Alberto Prieto

Abstract:

Two opposite approaches to teaching can be considered: in-class learning (teacher-oriented) versus virtual learning (student-oriented). The best-known example of the latter is the Massive Open Online Course (MOOC). Both methodologies have pros and cons, and nowadays there is an increasing trend towards combining them. Blended learning is considered a valuable tool for improving learning, since it combines student-centred interactive e-learning with face-to-face instruction. The aim of this contribution is to exchange and share the experience and research results of a blended-learning project that took place at the University of Granada (Spain). The research objective was to show how combining the didactic resources of a MOOC with in-class teaching, interacting directly with students, can substantially improve academic results as well as student acceptance. The proposed methodology is based on the use of flipped-learning techniques applied to the subject ‘Fundamentals of Computer Science’ in the first year of two degrees: Telecommunications Engineering and Industrial Electronics. In this proposal, students acquire the theoretical knowledge at home through a MOOC platform, where they watch video lectures, take self-evaluation tests, and use other academic multimedia online resources. Afterwards, they attend in-class sessions where they carry out other activities in order to interact with teachers and the rest of the students (discussing the videos, resolving doubts, practical exercises, etc.), trying to overcome the disadvantages of self-regulated learning. The results are obtained through the students’ grades and their assessment of the blended experience, based on an opinion survey conducted at the end of the course.
The major findings of the study are the following: The percentage of students passing the subject has grown from 53% (average from 2011 to 2014 using traditional learning methodology) to 76% (average from 2015 to 2018 using blended methodology). The average grade has improved from 5.20±1.99 to 6.38±1.66. The results of the opinion survey indicate that most students preferred blended methodology to traditional approaches, and positively valued both courses. In fact, 69% of students felt ‘quite’ or ‘very’ satisfied with the classroom activities; 65% of students preferred the flipped classroom methodology to traditional in-class lectures, and finally, 79% said they were ‘quite’ or ‘very’ satisfied with the course in general. The main conclusions of the experience are the improvement in academic results, as well as the highly satisfactory assessments obtained in the opinion surveys. The results confirm the huge potential of combining MOOCs in formal undergraduate studies with on-campus learning activities. Nevertheless, the results in terms of students’ participation and follow-up have a wide margin for improvement. The method is highly demanding for both students and teachers. As a recommendation, students must perform the assigned tasks with perseverance, every week, in order to take advantage of the face-to-face classes. This perseverance is precisely what needs to be promoted among students because it clearly brings about an improvement in learning.
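The headline improvement in pass rates (53% to 76%) can be tested for statistical significance with a two-proportion z-test. The abstract does not report enrolment numbers, so the cohort sizes below are assumptions (100 students per cohort) chosen only to illustrate the calculation:

```python
from math import sqrt, erf

def two_proportion_z(pass1, n1, pass2, n2):
    """Two-sided two-proportion z-test via the normal approximation."""
    p1, p2 = pass1 / n1, pass2 / n2
    p = (pass1 + pass2) / (n1 + n2)                # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p2 - p1) / se
    # Two-sided p-value: 2 * (1 - Phi(|z|)), Phi built from erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 53% vs 76% pass rates from the abstract; n = 100 per cohort is assumed.
z, p = two_proportion_z(pass1=53, n1=100, pass2=76, n2=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Under these assumed cohort sizes the difference is significant at any conventional level; with the real multi-year enrolments the evidence would be stronger still.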

Keywords: blended learning, educational paradigm, flipped classroom, flipped learning technologies, lessons learned, massive open online course, MOOC, teacher roles through technology

Procedia PDF Downloads 181
78 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. Lack of information on the thermal conductivity of dense and complex liquids, at the parameters relevant to industrial developments, is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is particularly motivated by the goal of modifying, for the first time, the heat conduction problem so that it leads to polynomial velocity and temperature profiles, giving an algorithm for the investigation of transport properties and their nonlinear behaviors in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps are developed between 1.5×10⁵/ωp and 3.0×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from earlier plasma λ0 values by 2%-20%, depending on Γ and κ. It has been shown that the results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
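The Yukawa liquids referred to above are systems of dust grains interacting through a screened-Coulomb pair potential, characterized by the coupling Γ and screening κ parameters. A minimal sketch of that potential in reduced units is shown below; the κ values are illustrative, not the plasma states simulated in the study:

```python
import numpy as np

def yukawa_potential(r, lambda_d):
    """Screened-Coulomb (Yukawa) pair potential in reduced units, where the
    grain charge and the Wigner-Seitz radius a are set to 1:
    phi(r) = exp(-r / lambda_D) / r."""
    return np.exp(-r / lambda_d) / r

# kappa = a / lambda_D is the screening parameter; higher kappa means the
# interaction decays faster with distance. Values below are illustrative.
for kappa in (1.0, 2.0, 3.0):
    phi_at_contact = yukawa_potential(1.0, 1.0 / kappa)
    print(f"kappa = {kappa}: phi(r=1) = {phi_at_contact:.4f}")
```

An HNEMD thermal-conductivity run would use the force from this potential in a full molecular-dynamics loop with a Gaussian thermostat; the snippet only fixes the interaction model that such a simulation assumes.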

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 274
77 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and the key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
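The reduced cross-section method under discussion can be sketched for the standard fire case, where Eurocode 5 fixes the zero strength layer at d0 = 7 mm. The beam dimensions, the notional charring rate for solid softwood, and the three-sided exposure below are illustrative assumptions:

```python
def effective_cross_section(b, h, t_min, beta_n=0.8, d0=7.0):
    """Eurocode 5 reduced cross-section method, standard fire exposure:
    d_ef = d_char,n + k0 * d0, with the fixed d0 = 7 mm zero strength layer
    discussed in the abstract. beta_n = 0.8 mm/min is the notional charring
    rate for solid softwood; charring on three sides (two vertical faces
    and the bottom) is assumed for a beam under a slab."""
    k0 = min(t_min / 20.0, 1.0)          # k0 grows linearly up to 20 min
    d_ef = beta_n * t_min + k0 * d0      # effective charring depth, mm
    b_fi = b - 2 * d_ef                  # width: two exposed vertical sides
    h_fi = h - d_ef                      # height: one exposed bottom side
    return b_fi, h_fi

# 30 min of standard fire on a 140 x 300 mm softwood beam (illustrative).
b_fi, h_fi = effective_cross_section(b=140.0, h=300.0, t_min=30.0)
print(f"effective section: {b_fi:.0f} x {h_fi:.0f} mm")
```

The study's point is precisely that for parametric fires both `beta_n` and `d0` in this calculation need improved, exposure-dependent values rather than the fixed standard-fire ones used here.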

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 168
76 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process

Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski

Abstract:

As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to varying extents, by physical and chemical factors. A specific state of equilibrium settles in a functioning fermentation system between the environmental conditions and the rate of the biochemical reactions and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, significant are the pH value, the type and availability of the culture medium (in simple terms, the C/N ratio), and the presence of toxic substances. One important element influencing the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Of all the known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Interest in the ultrasound field and in installations operating within existing systems stems principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor can induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation. In this case, a special role is ascribed to the disintegration of the biomass that is subsequently subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released. The released substances – either in dissolved or colloidal form – are immediately available to anaerobic bacteria for biodegradation. 
To ensure maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve a particle size below 50 μm. It has been demonstrated in many research works, and in systems operating at technical scale, that immediately after substrate ultrasonication the content of organic matter (characterized by the COD, BOD5 and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to the immediate sonolysis of solid substances contained in the biomass and to the release of cell material, and consequently to an intensification of the hydrolytic phase of fermentation. It results in a significant reduction of fermentation time and an increased production of the gaseous metabolites of anaerobic bacteria. Because the disintegration of Virginia fanpetals biomass with ultrasound in order to intensify its conversion is a novel technique, it is often underestimated by the operators of agricultural biogas plants. It has, however, many advantages that have a direct impact on its technological and economic superiority over the biomass conversion methods applied thus far. As of now, ultrasound disintegrators for biomass conversion are not mass-produced but are built by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are largely determined by their manufacturers’ knowledge and skills in the fields of acoustics and electronic engineering.

Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals

Procedia PDF Downloads 369
75 The Influence of Human Movement on the Formation of Adaptive Architecture

Authors: Rania Raouf Sedky

Abstract:

Adaptive architecture relates to buildings specifically designed to adapt to their residents and their environments. In designing a biologically adaptive system, observing how living creatures in nature constantly adapt to different external and internal stimuli can be a great inspiration. The issue is not just how to create a system that is capable of change but also how to find the quality of change and determine the incentive to adapt. The research examines the possibilities of transforming spaces using the human body as an active tool. It also aims to design and build an effective dynamic structural system that can be applied at an architectural scale, integrating these elements into a new adaptive system that allows us to conceive a new way to design, build and experience architecture in a dynamic manner. The main objective was to address the possibility of a reciprocal transformation between the user and the architectural element, so that the architecture can adapt to the user as the user adapts to the architecture. The motivation is the desire to address the psychological benefits of an environment that can respond, and thus empathize with human emotions, through its ability to adapt to the user. Adaptive kinematic structures have been discussed in architectural research for more than a decade, and these investigations have proven effective in developing kinetic, responsive and adaptive structures and in their contribution to 'smart architecture'. A wide range of strategies has been used in building complex kinetic and robotic system mechanisms to achieve convertibility and adaptability in engineering and architecture. One of the main contributions of this research is to explore how the physical environment can change its shape to accommodate different spatial configurations based on the movement of the user’s body. The main focus is on the relationship between materials, shape, and interactive control systems. 
The intention is to develop a scenario in which the user can move and the structure interacts without any physical contact. This soft, shape-shifting language and interaction control technology will provide new possibilities for enriching human-environment interactions. How can we imagine a space that perceives its users through physical gestures and visual expressions and responds accordingly? How can we imagine a space whose interaction depends not only on preprogrammed operations but on real-time feedback from its users? The research also raises important questions for the future: what would be the appropriate structure to exhibit physical interaction with a dynamic world? These structures have obvious advantages in terms of energy performance and the ability to adapt to the needs of users. The research highlights the interface between remote sensing and a responsive environment to explore the possibility of an interactive architecture that adapts and responds to user movements. This study concludes with a strong belief in the future of responsive kinetic structures: we envision that they will advance current structural practice and fundamentally change the way spaces are experienced.

Keywords: adaptive architecture, interactive architecture, responsive architecture, tensegrity

Procedia PDF Downloads 160
74 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How

Authors: Rachel Barker

Abstract:

The exponential pace of today’s truly knowledge-intensive world increasingly confronts organizations with formidable challenges. Organizations are thus introduced to the unfamiliar lexicon of a new paradigm concerning who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that there is a lack of research on measuring knowledge sharing through a multi-layered structure of ideas founded on philosophical assumptions, where presuppositions and commitments require actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question stems from the observation that, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge about who should do it, what should be done, and how. The main premise of this research is the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge, in which learning becomes the norm. The theoretical constructs were derived from the three components of knowledge management theory, namely the technical, communication and human components, and it is suggested that this knowledge infrastructure could ensure effective management. 
While it is recognized that it might be somewhat problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures, and innovation. These CSFs were identified through a comprehensive literature review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa that relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of the items, and correct the wording of issues. Through analysis of the collected surveys, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. The reliability of the instrument was calculated with Cronbach’s alpha for the two sections of the instrument, at the organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, the participants realised the importance of consolidating their knowledge assets to create value that is sustainable over time.
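The reliability check mentioned above, Cronbach's alpha, can be sketched in a few lines. The toy item scores below are invented for illustration; the formula itself is the standard one (k/(k-1) times one minus the ratio of summed item variances to the variance of respondent totals).

```python
# Hedged sketch of the Cronbach's alpha reliability statistic used in the
# study. The response data are made-up toy values, not the survey results.

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three Likert items answered by four hypothetical respondents:
scores = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
alpha = cronbach_alpha(scores)  # roughly 0.82 for this toy data
```

Values around 0.7 or above are conventionally read as acceptable internal consistency for such instrument sections.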

Keywords: innovation, intellectual capital, knowledge sharing, performance measures

Procedia PDF Downloads 196
73 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines there can seem to be enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multidisciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across the disciplines had to design a conference call, a visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of the different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for the research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for the conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. 
Each session involved presentations by visual artists, communications students and computing researchers, with interdisciplinary dialogue facilitated by alumni chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, while the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects. Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 207
72 Investigation of Resilient Circles in Local Community and Industry: Waju-Traditional Culture in Japan and Modern Technology Application

Authors: R. Ueda

Abstract:

Today, global society is seeking resilient partnerships among local organizations and individuals that realize multi-stakeholder relationships. Although this is proposed by the modern global framework of sustainable development, such affiliation can arguably be found in the traditional local community in Japan, and that traditional spirit tacitly persists in the modern context of disaster mitigation in society and the economy. This research therefore aims to clarify and analyze the implications for the global world through actual case studies. Regional and urban resilience is the ability of multiple stakeholders to cooperate flexibly and to adapt in response to changes in circumstances caused by disasters, but various conflicts affect the coordination of disaster relief measures. These conflicts arise not only from a lack of communication and an insufficient network, but also from the difficulty of jointly drawing a common context from fragmented information. This is because of the weakness of modern engineering, which focuses on the maintenance and restoration of individual systems. Here, local ‘circles’ holistically include the local community and interact periodically. Focusing on examples of resilient organizations and the wisdom created in communities, what can be seen throughout history is a virtuous cycle in which information and knowledge are structured, the context to be adapted to becomes clear, and adaptation at a higher level is made possible, whereby collaboration between organizations is deepened and expanded. The wisdom of the solid and autonomous disaster prevention formed by the historical community called ‘Waju’ – an area surrounded by a circular embankment to protect the settlement from floods – lives on in the efforts of the coastal industrial island of today. Industrial companies there collaborate to create a circle including common evacuation space, road access improvement and infrastructure recovery. 
These days, people here are adopting new interface technology: large-scale augmented reality (AR) for more than a hundred people visualizes detailed tsunami and liquefaction hazards. Shared experiences of the simulated disaster space and circles of mutual discussion reinforce resilience, and the spirit of collaboration lies at the center of the circle. A consistent key point is this virtuous cycle, in which structured information and knowledge clarify the context to be adapted to and enable adaptation at a higher level, deepening and expanding collaboration between organizations. This writer believes that both self-governing human organizations and the societal implementation of technical systems are necessary. Infrastructure should be instituted autonomously by associations of companies and other entities in industrial areas, working closely with local governments. To develop advanced disaster prevention and multi-stakeholder collaboration, partnerships among industry, government, academia and citizens are important.

Keywords: industrial recovery, multi-stakeholders, traditional culture, user experience, Waju

Procedia PDF Downloads 114
71 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior toward a desired outcome. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). 
Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. A single number therefore controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires the specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 two-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
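The flavour of the statistical-thermodynamic reasoning behind such models can be sketched with a minimal two-state occupancy calculation. This is not the authors' model or the Pt number itself; the copy numbers, free energies, and the single-operator competition below are illustrative assumptions only.

```python
import math

# Minimal statistical-thermodynamic sketch: RNA polymerase and one repressor
# (TF) compete for a single operator; the transcription rate is taken to be
# proportional to the probability that RNAP occupies it. All numbers are
# hypothetical and for illustration only.

def transcription_rate(n_tf, dG_tf, n_rnap=1500, dG_rnap=-5.0, rate_max=1.0):
    """n_tf, n_rnap: copy numbers; dG in units of kT (more negative = tighter)."""
    w_rnap = n_rnap * math.exp(-dG_rnap)   # Boltzmann weight of RNAP-bound state
    w_tf = n_tf * math.exp(-dG_tf)         # Boltzmann weight of repressor-bound state
    p_rnap = w_rnap / (1.0 + w_rnap + w_tf)
    return rate_max * p_rnap

# Raising repressor expression lowers the promoter's output, which is the
# qualitative effect the abstract says the measured binding energies explain:
rate_low_tf = transcription_rate(n_tf=10, dG_tf=-12.0)
rate_high_tf = transcription_rate(n_tf=1000, dG_tf=-12.0)
```

In this picture, co-dependent factors such as TF level and binding energy enter only through the ratio of Boltzmann weights, which hints at why a single dimensionless number can summarize them.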

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 473
70 Rainwater Management: A Case Study of Residential Reconstruction of Cultural Heritage Buildings in Russia

Authors: V. Vsevolozhskaia

Abstract:

Since 1990, energy-efficient development concepts have constituted both a turning point in civil engineering and a challenge for an environmentally friendly future. Energy and water currently play an essential role in the sustainable economic growth of the world in general and Russia in particular: the efficiency of the water supply system is the second most important parameter for energy consumption according to the British assessment method, while the water-energy nexus has been identified as a focus for accelerating sustainable growth and developing effective, innovative solutions. The activities considered in this study were aimed at organizing and executing the renovation of properties in residential buildings located in St. Petersburg, specifically buildings with local or federal historical heritage status under the control of the St. Petersburg Committee for the State Inspection and Protection of Historic and Cultural Monuments (KGIOP) and UNESCO. Even after reconstruction, these buildings still fall into energy efficiency class D. Russian Government Resolution No. 87 on the structure and required content of project documentation contains a section entitled ‘Measures to ensure compliance with energy efficiency and equipment requirements for buildings, structures, and constructions with energy metering devices’. It mentions the need to install collectors and meters, which only measure energy consumption, neglecting the main purpose: to make buildings more energy-efficient, potentially even reaching energy efficiency class A. The least explored aspects of energy-efficient technology in the Russian Federation remain the water balance and the possibility of implementing rain and meltwater collection systems. These modern technologies are used exclusively for new buildings due to the lack of a government directive to create project documentation, during the planning of major renovations and reconstruction, that would include the collection and reuse of rainwater. 
Energy-efficient technology for rain and meltwater collection is currently applied only to new buildings, even though research has shown that using rainwater is safe and offers a huge step forward in terms of eco-efficiency and water innovation. Where conservation is mandatory, making changes to protected sites is prohibited. In most cases, the protected site is the cultural heritage building itself, including the main walls and roof; however, the installation of a second water supply system and the collection of rainwater would not affect the protected building itself. Water efficiency in St. Petersburg is currently addressed only through the installation of flow-regulating pipeline shutoff valves. The development of technical guidelines for the use of grey- and/or rainwater to meet the needs of residential buildings during reconstruction or renovation is not yet complete. The ideas for water treatment, collection and distribution systems presented in this study should be taken into consideration during the reconstruction or renovation of residential cultural heritage buildings under the protection of KGIOP and UNESCO. The methodology applied also has the potential to be extended to other cultural heritage sites in northern countries and regions with an average annual rainfall of over 600 mm, enough to cover average toilet-flush needs.
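The 600 mm rainfall threshold mentioned above can be checked with a back-of-the-envelope water balance. The roof area, runoff and filter coefficients, number of residents, and per-capita flush volume below are all illustrative assumptions, not figures from the study.

```python
# Hedged back-of-the-envelope sketch: can roof runoff in a ~600 mm/year
# climate cover a building's toilet-flush demand? All inputs are hypothetical.

def annual_harvest_m3(roof_area_m2, rainfall_mm, runoff_coeff=0.8, filter_eff=0.9):
    """Collectable rainwater volume per year [m^3]."""
    return roof_area_m2 * (rainfall_mm / 1000.0) * runoff_coeff * filter_eff

def annual_flush_demand_m3(residents, litres_per_person_day=24):
    """Toilet-flush demand per year [m^3] (e.g. 4 flushes x 6 L per person per day)."""
    return residents * litres_per_person_day * 365 / 1000.0

harvest = annual_harvest_m3(roof_area_m2=400, rainfall_mm=600)
demand = annual_flush_demand_m3(residents=15)
# For this hypothetical building, harvest (~173 m^3) exceeds demand (~131 m^3).
```

Under these assumptions the harvested volume covers flushing with some margin, which is consistent with the abstract's claim that 600 mm or more of annual rainfall can meet average toilet-flush needs.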

Keywords: cultural heritage, energy efficiency, renovation, rainwater collection, reconstruction, water management, water supply

Procedia PDF Downloads 92
69 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

Research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, most of them reflect pre-war knowledge of the theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and climate change, it is beneficial to develop aircraft piston engines by adopting the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research on innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution; consequently, the required level of safety and better functionality compared to today’s plug system can be guaranteed. In this framework, the research work focuses on a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine’s capability for efficient combustion of ecological fuels. New, redundant elements of the control system can also improve aircraft safety. 
The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) to the determination of the optimal ignition angle (the angle of maximum torque at a given operating point). The results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (in steps of 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (in steps of 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (different energy discharge) were studied. The analysis aimed at optimizing the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption; thus, including spark management in the optimization is mandatory to decrease fuel consumption significantly. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
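The core search described above, finding the angle of maximum torque at a fixed operating point, can be sketched as a sweep over ignition advance. The quadratic torque curve below is a made-up stand-in for the engine model, and all numeric values are illustrative assumptions.

```python
# Hedged sketch of maximum-brake-torque (MBT) ignition timing search at one
# operating point. The torque curve is a hypothetical stand-in, not the
# paper's engine model.

def torque(advance_deg, mbt_deg=22.0, t_max=1400.0, k=0.9):
    """Illustrative brake torque [Nm] versus ignition advance [deg BTDC]."""
    return t_max - k * (advance_deg - mbt_deg) ** 2

def find_mbt(advances):
    """Return the advance giving maximum torque over a discrete sweep."""
    return max(advances, key=torque)

sweep = [a * 0.5 for a in range(0, 81)]  # 0 to 40 deg in 0.5 deg steps
best = find_mbt(sweep)
```

In the actual study this inner search would be repeated across the speed, load, and altitude grid listed in the abstract, with torque coming from the engine simulation rather than a closed-form curve.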

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 387
68 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening up new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make a significant leap beyond traditional agricultural techniques possible. The indoor farming sector in particular will benefit the most from these solutions. Vertical farming is a new field of research in which mechanical engineering can bring the knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and fully integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development, such as automatic tools to perform different operations on soil and plants, and on research introducing the extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multi-platform or tele-operated robots, with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multi-platform systems still presents innumerable challenges that require a strongly multidisciplinary approach from the design, development, and implementation phases onward. 
The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to verify the real effectiveness of the systems created and to evaluate any weaknesses so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed by the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepen the knowledge of these types of crops and guarantee the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 42
67 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without a shift in how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. 
Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 of both 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, the spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions were accurate. The ANN determined that the strongest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed in this way, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
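The modeling step above can be sketched outside RapidMiner. The following is a hedged illustration, not the authors' workflow: a gradient boosted trees classifier with the hyperparameters reported in the abstract (140 trees, maximum depth 7), fit to synthetic stand-ins for the inputs; the column roles, data, and label rule are all my own assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data: column 0 plays the role of problem number,
# column 1 response time, columns 2-17 the 16 fNIR optodes.
rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 18))
# Toy labels driven by "response time" (col 1) and "optode #16" (col 17),
# mimicking the kind of predictors the abstract reports as most important.
y = (X[:, 1] + X[:, 17] > 0).astype(int)

# Hyperparameters reported in the abstract: 140 trees, max depth 7.
model = GradientBoostingClassifier(n_estimators=140, max_depth=7,
                                   random_state=0)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")

# The learned feature importances should concentrate on the two columns
# that actually drive the toy labels.
top2 = set(np.argsort(model.feature_importances_)[-2:])
print(sorted(top2))
```

Inspecting `feature_importances_` is the sklearn analogue of the predictor ranking the abstract reports from RapidMiner.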

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 121
66 Increased Stability of Rubber-Modified Asphalt Mixtures to Swelling, Expansion and Rebound Effect during Post-Compaction

Authors: Fernando Martinez Soto, Gaetano Di Mino

Abstract:

The application of rubber in bituminous mixtures requires attention and care during mixing and compaction. Rubber modifies the properties of the mixture because it reacts with the internal structure of the bitumen at high temperatures, changing the performance of the blend (an interaction process between solvents and the binder-rubber-aggregate system). The main change is an increase in the viscosity and elasticity of the binder due to the larger rubber particle sizes used in the dry process; however, this positive effect is counteracted by short mixing times, compared to the wet technology, and by the transport processes, curing time, and post-compaction of the mixtures. Therefore, negative effects such as swelling of the rubber particles, rebound of the specimens, and thermal changes caused by differential expansion of the internal structure can change the mechanical properties of the rubberized blends. Based on the dry technology, different asphalt-rubber binders using devulcanized or natural rubber (truck and bus tread rubber) have served to demonstrate these effects, and how to mitigate them, in two dense-gap-graded rubber-modified asphalt concrete (RUMAC) mixes, enhancing the stability, workability, and durability of samples compacted with the Superpave gyratory compactor. This paper specifies the procedures developed in the Department of Civil Engineering of the University of Palermo from September 2016 to March 2017 for characterizing the post-compaction and mix stability of one conventional mixture (hot-mix asphalt without rubber) and two gap-graded rubberized asphalt mixes, with granulometry for rail sub-ballast layers and a nominal aggregate size of Ø22.4 mm according to the European standard. 
Thus, the main purpose of this laboratory research is the application of ambient ground rubber from scrap tires, processed at conventional temperature (20 ºC), inside hot bituminous mixtures (160-220 ºC) as a substitute for 1.5%, 2%, and 3% by weight of the total aggregates (3.2%, 4.2%, and 6.2%, respectively, by volume of the limestone aggregates, of bulk density equal to 2.81 g/cm³), considered as aggregate and not as part of the asphalt binder. The reference bituminous mixture was designed with 4% of binder and ±3% of air voids, manufactured with a conventional B50/70 bitumen at mixing-compaction temperatures of 160 ºC-145 ºC to guarantee the workability of the mixes. The rubber proportions proposed are #60-40% for the mixtures with 1.5% to 2% of rubber and #20-80% for the mixture with 3% of rubber (as an example, 60% of Ø0.4-2 mm and 40% of Ø2-4 mm). The temperature of the asphalt cement is between 160-180 ºC for mixing and 145-160 ºC for compaction, according to the optimal viscosity values obtained with a Brookfield viscometer and with 'ring and ball' and penetration tests. These crumb rubber particles act as a rubber aggregate in the mixture, with sizes varying between 0.4 mm and 2 mm in a first fraction and 2-4 mm in a second fraction. Ambient ground rubber with a specific gravity of 1.154 g/cm³ is used; the rubber is free of loose fabric, wire, and other contaminants. Optimal results reducing the swelling effect were found in real beams and cylindrical specimens of each HMA mixture. Different factors that affect the interaction process, such as temperature, rubber particle sizes, and the number of cycles and pressures of compaction, are explained.
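As a back-of-the-envelope check (my own sketch, not from the paper), a dosage by weight of aggregates can be related to a volumetric share through the two densities quoted in the abstract. Note that this simple mass-to-volume ratio gives figures in the same range as, but not identical to, the quoted 3.2%, 4.2%, and 6.2%, so the authors' exact volumetric convention may differ:

```python
# Densities quoted in the abstract.
RHO_AGG = 2.81      # g/cm³, bulk density of the limestone aggregates
RHO_RUBBER = 1.154  # g/cm³, specific gravity of ambient ground rubber

def volume_percent(mass_percent: float) -> float:
    """Volume of rubber relative to the volume of the aggregate mass it replaces."""
    return mass_percent * RHO_AGG / RHO_RUBBER

for w in (1.5, 2.0, 3.0):
    print(f"{w:.1f}% by weight -> {volume_percent(w):.1f}% by volume")
```

The ratio RHO_AGG / RHO_RUBBER ≈ 2.4 is why a small dosage by weight roughly doubles when expressed by volume: the rubber is far less dense than the limestone it replaces.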

Keywords: crumb-rubber, gyratory compactor, rebounding effect, superpave mix-design, swelling, sub-ballast railway

Procedia PDF Downloads 244