Search results for: Dielectric constant
75 The Effects of Lithofacies on Oil Enrichment in Lucaogou Formation Fine-Grained Sedimentary Rocks in Santanghu Basin, China
Authors: Guoheng Liu, Zhilong Huang
Abstract:
For more than ten years, oil and gas have been produced from marine shales such as the Barnett Shale. In recent years, major breakthroughs have also been made in lacustrine shale gas exploration, for example in the Yanchang Formation of the Ordos Basin in China. The Lucaogou Formation shale, which is also lacustrine, has likewise yielded high production in recent years: wells M1, M6, and ML2 produce 5.6, 37.4, and 13.56 tons of oil per day, respectively. Lithologic identification and classification of reservoirs are the basis of, and key to, oil and gas exploration. Lithology and lithofacies clearly control the distribution of oil and gas in lithological reservoirs, so a detailed description of reservoir lithology and lithofacies is of great significance. Lithofacies is an intrinsic property of rock formed under certain sedimentary conditions. Fine-grained sedimentary rocks such as shale formed under different sedimentary conditions display great particularity and distinctiveness; hence, to the best of our knowledge, no consistent and unified criteria or methods exist for defining and classifying the lithofacies of fine-grained sedimentary rocks. Consequently, a multi-parameter, multi-disciplinary approach is necessary. A series of qualitative descriptions and quantitative analyses was used to determine the lithofacies characteristics of the Lucaogou Formation fine-grained sedimentary rocks in the Santanghu Basin and their effect on oil accumulation. The qualitative descriptions include core description, petrographic thin-section observation, fluorescent thin-section observation, cathodoluminescence observation, and scanning electron microscope observation. The quantitative analyses include X-ray diffraction, total organic carbon (TOC) analysis, Rock-Eval II pyrolysis, Soxhlet extraction, porosity and permeability analysis, and oil saturation analysis.
Three types of lithofacies are well developed in the study area: organic-rich massive shale lithofacies, organic-rich laminated and cloddy hybrid sedimentary lithofacies, and organic-lean massive carbonate lithofacies. The organic-rich massive shale lithofacies mainly includes massive shale and tuffaceous shale, of which quartz and clay minerals are the major components. The organic-rich laminated and cloddy hybrid sedimentary lithofacies contains laminae and cloddy structures; rocks of this lithofacies chiefly consist of dolomite and quartz. The organic-lean massive carbonate lithofacies mainly contains massively bedded fine-grained carbonate rocks, of which fine-grained dolomite accounts for the main part. The organic-rich massive shale lithofacies contains the highest content of free hydrocarbons and solid organic matter, and more pores are developed in it. The organic-lean massive carbonate lithofacies contains the lowest content of solid organic matter and develops the fewest pores. The organic-rich laminated and cloddy hybrid sedimentary lithofacies develops the largest number of cracks and fractures. In summary, the organic-rich massive shale lithofacies is the most favorable lithofacies type, while the organic-lean massive carbonate lithofacies cannot support large-scale oil accumulation.
Keywords: lithofacies classification, tuffaceous shale, oil enrichment, Lucaogou formation
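The lithofacies assignments described above combine TOC, dominant mineralogy from XRD, and sedimentary structure observed in core. A minimal decision-rule sketch of that logic; the 2.0 wt% TOC cut-off and the mineral groupings are hypothetical illustrations, not thresholds reported by the authors:

```python
def classify_lithofacies(toc_pct, dominant_mineral, laminated):
    """Assign one of the three lithofacies described in the abstract.

    toc_pct          -- total organic carbon content (wt%)
    dominant_mineral -- most abundant mineral from XRD ('quartz', 'clay', 'dolomite', ...)
    laminated        -- True if laminae / cloddy structures are observed in core
    """
    ORGANIC_RICH_TOC = 2.0  # hypothetical cut-off separating organic-rich from organic-lean
    if toc_pct >= ORGANIC_RICH_TOC and laminated and dominant_mineral in ("dolomite", "quartz"):
        return "organic-rich laminated and cloddy hybrid sedimentary lithofacies"
    if toc_pct >= ORGANIC_RICH_TOC and dominant_mineral in ("quartz", "clay"):
        return "organic-rich massive shale lithofacies"
    if toc_pct < ORGANIC_RICH_TOC and dominant_mineral == "dolomite":
        return "organic-lean massive carbonate lithofacies"
    return "unclassified"
```

In practice such rules would be calibrated against the full multi-parameter data set rather than fixed a priori.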
Procedia PDF Downloads 218
74 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
Earth, known as the green planet, is a terrestrial planet and the fifth largest planet of the solar system. Plants are not distributed uniformly around the world, and even the variation of plant species differs within a single region. Plants are not limited to one field such as botany; they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine a world without the oxygen that is produced mostly by plants, and their influence is all the more evident because no other living species could exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are also the main components of agricultural activities, from which many countries benefit; plants therefore have an impact on the political and economic situation and future of countries. Given the importance of plants and their roles, their study is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that contributes to other research and future studies. Moreover, plants survive in different places and regions by means of adaptations, which are special factors that help them in hard living conditions. Weather is one of the parameters that affect plant life and existence in an area, and recognition of plants under different weather conditions opens a new window of research in the field. Only natural images are usable when weather conditions are considered as new factors; the result will thus be a generalized and useful system. In order to obtain a general system, the distance from the camera to the plants is considered as another factor.
The other factor considered is the change of light intensity in the environment over the course of the day. Adding these factors poses a considerable challenge to building an accurate and robust system, so the development of an efficient plant recognition system is both essential and effective. One important component of a plant is the leaf, which can be used to implement automatic plant recognition systems without human interaction. Given the nature of the images used, the characteristics of the plants were investigated, and leaves were selected as the most reliable plant parts. Four plant species were specified with the goal of classifying them accurately. The current paper is devoted to the principal directions of the proposed methods and the implemented system, the image dataset, and the results. The algorithm and classification procedure are explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
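Whatever detector supplies the keypoints (SIFT's difference-of-Gaussians, Harris corners, or FAST), SIFT-style recognition ultimately matches descriptor vectors between images, typically keeping only matches that pass Lowe's nearest-neighbour ratio test. A minimal pure-Python sketch of that matching step; the toy descriptors and the 0.75 ratio are illustrative, not the paper's parameters:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query_desc, train_desc, ratio=0.75):
    """Match each query descriptor to its nearest training descriptor,
    keeping the match only when the nearest neighbour is clearly better
    than the second nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_desc):
        # distances to every training descriptor, sorted ascending
        dists = sorted((euclidean(q, t), ti) for ti, t in enumerate(train_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))  # (query index, train index)
    return matches
```

Real SIFT descriptors are 128-dimensional; the same function applies unchanged, only the vectors are longer.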
Procedia PDF Downloads 275
73 Study of Secondary Metabolites of Sargassum Algae: Anticorrosive and Antibacterial Activities
Authors: Prescilla Lambert, Christophe Roos, Mounim Lebrini
Abstract:
For several years, the Caribbean islands and West Africa have had to deal with massive arrivals of the brown seaweed Sargassum. Although this macroalga constitutes a habitat for a great diversity of marine organisms, it is also an additional stress factor for the marine environment (e.g., coral reefs). In addition, the accumulation and subsequent large-scale decomposition of Sargassum spp. biomass on the coast releases toxic gases (H₂S and NH₃), which disrupts the economic, health, and tourism activities of the island and of the other territories concerned. These algal blooms originate from ocean eutrophication, accentuated by global warming, and scientists unfortunately predict a significant recurrence of Sargassum strandings in the years to come. It is therefore essential to find solutions by putting in place a sustainable management plan for this phenomenon. Martinique, a small island in the Caribbean arc, is one of the many areas affected by Sargassum strandings. Since 2011, degradation of the materials present in this region has constantly increased, largely because of the toxic and corrosive gases released by the decomposing algae. In order to protect structures and vulnerable building materials while limiting the use of synthetic, petroleum-based molecules as much as possible, research is being conducted on molecules of natural origin. Thanks to a chemical composition that comprises molecules with interesting properties, algae such as Sargassum could potentially help to solve many of these issues. This study therefore focuses on the green extraction and characterization of molecules from the species Sargassum fluitans and Sargassum natans present in Martinique. The secondary metabolites found in these extracts showed variability in yield rates due to local climatic conditions.
The tests carried out shed light on the anticorrosive and antibacterial potential of the algae, so these extracts can be described as natural inhibitors. The effect of varying the inhibitor concentration was tested electrochemically using electrochemical impedance spectroscopy and polarization curves. The analysis of the electrochemical results obtained by direct immersion in the extracts and with self-assembled molecular layers (SAMs) for the Sargassum fluitans III, Sargassum natans I, and Sargassum natans VIII species was conclusive in both acidic and alkaline environments. The excellent results reveal an inhibitory efficacy of 88% at 50 mg/L for the crude extract of Sargassum fluitans III and efficacies greater than 97% for the chemical families of Sargassum fluitans III. Similarly, microbiological tests also suggest a bactericidal character: results for the Sargassum fluitans III crude extract show a minimum inhibitory concentration (MIC) of 0.005 mg/mL against Gram-negative bacteria and a MIC greater than 0.6 mg/mL against Gram-positive bacteria. These results make it possible to address local and international issues while valorizing a biomass rich in biodegradable molecules. The next step in this study will therefore be the evaluation of the toxicity of Sargassum spp.
Keywords: Sargassum, secondary metabolites, anticorrosive, antibacterial, natural inhibitors
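Inhibitory efficacies such as the 88% quoted above are conventionally derived from the charge-transfer resistances measured by electrochemical impedance spectroscopy, with and without inhibitor. A sketch of the standard formula; the resistance values in the usage example are illustrative, not measured data:

```python
def inhibition_efficiency(rct_blank, rct_inhibited):
    """Corrosion inhibition efficiency (%) from charge-transfer resistances:

        IE% = (Rct_inh - Rct_blank) / Rct_inh * 100

    rct_blank     -- charge-transfer resistance without inhibitor (ohm·cm²)
    rct_inhibited -- charge-transfer resistance with inhibitor (ohm·cm²)
    """
    return (rct_inhibited - rct_blank) / rct_inhibited * 100
```

For example, a hypothetical blank resistance of 12 ohm·cm² rising to 100 ohm·cm² with inhibitor corresponds to 88% efficiency.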
Procedia PDF Downloads 70
72 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of two- and three-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers because, for several reasons, very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is particularly motivated by the first modification of the heat conduction problem, whose governing equations lead to polynomial velocity and temperature profiles, into an algorithm for investigating transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a nonequilibrium molecular dynamics (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps are taken between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ0 data by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
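Simulations of Yukawa liquids rest on the screened Coulomb (Yukawa) pair potential, usually written in reduced units with distance scaled by the Wigner-Seitz radius and κ as the screening parameter. A minimal sketch of the reduced potential and the corresponding pair force used in such molecular dynamics codes:

```python
import math

def yukawa_potential(x, kappa):
    """Reduced Yukawa (screened Coulomb) pair potential u(x) = exp(-kappa*x) / x,
    where x = r / a_ws (Wigner-Seitz radius) and kappa is the screening parameter."""
    return math.exp(-kappa * x) / x

def yukawa_force(x, kappa):
    """Magnitude of the reduced pair force, -du/dx:
    exp(-kappa*x) * (1 + kappa*x) / x**2."""
    return math.exp(-kappa * x) * (1 + kappa * x) / x ** 2
```

At κ = 0 the potential reduces to the bare Coulomb form 1/x; increasing κ shortens the interaction range, which is why transport coefficients such as λ depend on both κ and the coupling parameter Γ.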
Procedia PDF Downloads 273
71 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen
Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez
Abstract:
Bee-pollen is an apicultural food product with growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15 - 20 GAE/g), and carotenoids (600 - 900 µg/g). These properties depend on the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g., China, the United States, Japan, and Spain, among others. Beekeepers use traps at the entrance of the hive where bee-pollen is collected; after the removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. The climatic conditions also trigger the proliferation of microorganisms and insects. This, added to the fact that beekeepers usually do not have adequate bee-pollen processing systems, leads to deficiencies in the quality and safety of the product. The Andean region of South America, lying on the equator, typically has a high bee-pollen production of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level and receives extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids than those collected in other places. Considering this, the improvement of bee-pollen processing facilities through technical modifications and the implementation of an integrated cleaning and drying system in an apiary in the area was proposed.
The beehives were modified by installing alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, a cyclone-type system complementary to screening equipment was implemented for the separation of impurities. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed larger quantities of product to be processed with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, which is appropriate for the conservation of bee-pollen. The contents of functional and nutritional compounds were not adversely affected; an increase of up to 25% in phenol content and non-significant decreases in carotenoid content and antioxidant activity were observed.
Keywords: beekeeping, drying, food processing, food safety
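Drying to a target moisture content of about 6% can be modelled, to a first approximation, with the Lewis (Newton) thin-layer drying model, which is widely used for sizing dryer residence times. The rate constant and moisture values below are hypothetical and not taken from the study:

```python
import math

def moisture_ratio(t_h, k_per_h):
    """Lewis (Newton) thin-layer drying model: MR(t) = exp(-k * t)."""
    return math.exp(-k_per_h * t_h)

def drying_time(m0, m_target, m_eq, k_per_h):
    """Time (h) to dry from initial moisture m0 to m_target (dry basis),
    given equilibrium moisture m_eq and drying rate constant k (1/h).

    MR = (m - m_eq) / (m0 - m_eq)  =>  t = -ln(MR) / k
    """
    mr = (m_target - m_eq) / (m0 - m_eq)
    return -math.log(mr) / k_per_h
```

With an assumed k of 0.35 h⁻¹, drying from 25% to 6% moisture (equilibrium 4%) would take roughly 6.7 hours; in practice k is fitted from measured drying curves for each dryer configuration.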
Procedia PDF Downloads 103
70 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin
Authors: Hend Ben Tkhayat , Khaled Al Zahabi, Husam Younes
Abstract:
Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, is an established agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, thereby lowering the blood glucose level. It is usually combined with agents of other classes, such as insulin sensitizers or metformin. VG is currently marketed only as an immediate-release tablet administered twice daily. In this project, we aimed to formulate extended-release tableted lipid microparticles of VG with a zero-order release profile that could be administered once daily, for the patient's convenience. Method: The spray-congealing technique was used to prepare the VG microparticles. Compritol® was heated to 10 °C above its melting point, and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. The VG dispersed in the molten Compritol® was added dropwise to molten Gelucire® 50/13 and PEG (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized, and the Carbomer® was added. The melt was pumped through the two-fluid nozzle of a Buchi® spray-congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape heated to the temperature of the pumped mix. The physicochemical properties of the VG-loaded microparticles were characterized using a Mastersizer, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), and X-ray diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China), and an in vitro dissolution study was performed using an Agilent dissolution tester (Agilent, USA). The dissolution test was carried out at 37 ± 0.5 °C for 24 hours in three different dissolution media and time phases.
The quantitative analysis of VG in the samples was performed using a validated high-pressure liquid chromatography (HPLC-UV) method. Results: The microparticles were spherical with a narrow size distribution and smooth surfaces. DSC and XRD analyses confirmed that the crystallinity of VG was lost after incorporation into the amorphous carriers. The total yields of the different formulas were between 70% and 80%, and the VG content in the microparticles was between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. Adjusting the hydrophilic/hydrophobic ratio of the excipients, their concentrations, and the molecular weights of the carriers resulted in tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic polymer, was characterized by a time-dependent profile with an important burst effect; this was decreased by adding Compritol® as a lipophilic carrier to retard the release of VG, which is highly soluble in water. PEG (400, 6000, and 35000) was used for its gelling effect, which led to constant-rate delivery and a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for extended release of VG were successfully prepared, and a zero-order profile was achieved.
Keywords: vildagliptin, spray congealing, microparticles, controlled release
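Zero-order kinetics means the cumulative amount released grows linearly with time, Q(t) = k₀·t. A sketch of how such a profile can be fitted to dissolution data by least squares through the origin; the release data in the test are synthetic, not the study's measurements:

```python
def fit_zero_order(times_h, released_pct):
    """Least-squares fit of the zero-order model Q(t) = k0 * t (through the origin).

    times_h      -- sampling times (h)
    released_pct -- cumulative percentage released at each time
    Returns (k0, r_squared), where r_squared indicates goodness of fit.
    """
    # closed-form slope for a line through the origin: k0 = sum(t*Q) / sum(t^2)
    k0 = sum(t * q for t, q in zip(times_h, released_pct)) / sum(t * t for t in times_h)
    mean_q = sum(released_pct) / len(released_pct)
    ss_res = sum((q - k0 * t) ** 2 for t, q in zip(times_h, released_pct))
    ss_tot = sum((q - mean_q) ** 2 for q in released_pct)
    return k0, 1 - ss_res / ss_tot
```

An R² close to 1 for this model, compared against first-order or Higuchi fits, is the usual evidence for claiming a zero-order profile.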
Procedia PDF Downloads 120
69 Supporting a Moral Growth Mindset Among College Students
Authors: Kate Allman, Heather Maranges, Elise Dykhuis
Abstract:
Moral growth mindset (MGM) is the belief that one has the capacity to become a more moral person, as opposed to a fixed conception of one's moral ability and capacity (Han et al., 2018). Building on Dweck's work on incremental implicit theories of intelligence (2008), moral growth mindset (Han et al., 2020) extends growth mindsets into the moral dimension. The concept of MGM can help researchers understand how mindsets and interventions impact character development, and it has been shown to be connected to voluntary service engagement (Han et al., 2018). Understanding the contexts in which MGM can be cultivated could help promote the further cultivation of character, in addition to prosocial behaviors such as service engagement, which may in turn promote larger-scale engagement in social-justice-oriented thoughts, feelings, and behaviors. College in particular may be a place to intentionally cultivate a growth mindset toward moral capacities, given the unique developmental and maturational components of the college experience, including contextual opportunity (Lapsley & Narvaez, 2006) and an independence requiring the constant consideration, revision, and internalization of personal values (Lapsley & Woodbury, 2016). In a semester-long, quasi-experimental study, we examined the impact on participants' MGM of a pedagogical approach designed to cultivate college student character development. With an intervention group (n = 69) and a control group (n = 97; pre-course: 27% men, 66% women; 68% White, 18% Asian, 2% Black, <1% Hispanic/Latino), we investigated whether college courses that intentionally incorporate character education pedagogy (Lamb, Brant, & Brooks, 2021) affect a variety of psychosocial variables associated with moral thoughts, feelings, identity, and behavior (e.g., moral growth mindset, honesty, compassion).
The intervention group consisted of 69 undergraduate students (pre-course: 40% men, 52% women; 68% White, 10.5% Black, 7.4% Asian, 4.2% Hispanic/Latino) who voluntarily enrolled in five undergraduate courses that encouraged students to engage with key concepts and methods of character development through the application of research-based strategies and personal reflection on goals and experiences. Moral growth mindset was measured using the four-item Moral Growth Mindset scale (Han et al., 2020), with items such as 'You can improve your basic morals and character considerably,' rated on a six-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Higher MGM scores indicate a stronger belief that one can become a more moral person with personal effort. Reliability was Cronbach's α = .833 at Time 1 and α = .772 at Time 2. An analysis of covariance (ANCOVA) was conducted to explore whether post-course MGM scores differed between the intervention and control groups when controlling for pre-course MGM scores. The ANCOVA indicated significant differences in MGM between groups post-course, F(1, 163) = 8.073, p = .005, R² = .11, with descriptive statistics indicating that intervention scores were higher than control scores post-course. The results indicate that intentional character development pedagogy can be leveraged to support the development of moral growth mindset and related capacities in undergraduate settings.
Keywords: moral personality, character education, incremental theories of personality, growth mindset
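The reliability coefficients reported above are Cronbach's alpha, computed from item-level scores with the standard formula α = k/(k-1) · (1 − Σ item variances / variance of total scores). A minimal sketch; the toy data in the test are illustrative, not the study's responses:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores -- list of k items, each a list of the respondents' scores
                   for that item (all items scored by the same respondents).
    Population variances are used, as in most statistics packages.
    """
    k = len(item_scores)
    # total score per respondent across all items
    totals = [sum(resp) for resp in zip(*item_scores)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))
```

Perfectly correlated items give α = 1, and α falls as the items agree less, which is why values above roughly .7 (such as the .833 and .772 reported) are conventionally treated as acceptable.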
Procedia PDF Downloads 145
68 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management
Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin
Abstract:
The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. Ultrafiltration (UF) in particular is a very promising technology for water reuse, as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies in the literature deal with the operational optimization of UF as a tertiary treatment for water reuse at semi-industrial scale. This study therefore aims to assess permeate water quality and to optimize the operating parameters (maximizing productivity and minimizing irreversible fouling) through the operation of a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France, where the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank. The UF process was thus operated as a tertiary treatment at constant flux, with a combination of CB and chlorinated AB used for better fouling management. A 200 kDa hollow-fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performance. By comparison, the best sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs.
The optimized condition stands out from the others with a water recovery rate above 92%, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate water quality met the water reuse guidelines of the World Health Organization (WHO), the French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, the total suspended solids, biochemical oxygen demand, and turbidity were reduced to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹, and < 0.5 NTU, respectively; Escherichia coli and enterococci showed > 5 log removal, and the other required microbiological analyses were below the detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive above the detection limit but below the quantification limit, while, interestingly, the UF feed and UF permeate tested negative for SARS-CoV-2 by PCR assay. In summary, this work confirms the great interest of UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus
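Two of the figures of merit above, the water recovery rate and the membrane permeability, follow from simple definitions. A sketch of both; the volumes and the transmembrane pressure in the test are hypothetical, not the pilot plant's logged values:

```python
def water_recovery_rate(permeate_volume_l, backwash_volume_l):
    """Net water recovery (%) over a filtration period:
    (permeate produced - permeate consumed by backwashes) / permeate produced * 100."""
    return (permeate_volume_l - backwash_volume_l) / permeate_volume_l * 100

def permeability(flux_lmh, tmp_bar):
    """Membrane permeability (L·m-2·h-1·bar-1) from the operating flux
    (L·m-2·h-1) and the transmembrane pressure (bar). Its decline over
    time is the usual indicator of fouling."""
    return flux_lmh / tmp_bar
```

For instance, producing 1000 L of permeate while spending 80 L on backwashes gives a 92% recovery rate, consistent with the > 92% reported for the optimized condition.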
Procedia PDF Downloads 112
67 Investigating the Thermal Comfort Properties of Mohair Fabrics
Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman
Abstract:
Mohair, obtained from the Angora goat, is a luxury fiber recognized as one of the best-quality natural fibers. Expanding the use of mohair into technical and functional textile products requires a better understanding of how the use of mohair in fabrics affects their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fiber composition and fabric structural parameters on conductive and convective heat transfer in order to obtain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or where the wearer is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated, firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure, and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument, and clothing insulation (clo) was calculated from these measurements.
The thermal properties of the fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on the fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. Regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of air trapped within the fabric structure: the very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples, as confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to let dry heat move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others, even though the weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which make them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation but also prevent the free flow of air that allows thermal convection.
Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance
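The derivation of thermal resistance and clothing insulation from the Alambeta measurements follows standard definitions: R = thickness / conductivity, and 1 clo = 0.155 m²·K/W. A sketch of that conversion; the example thickness and conductivity in the test are illustrative values, not the measured mohair data:

```python
def thermal_resistance(thickness_mm, conductivity_w_mk):
    """Conductive thermal resistance R = h / lambda (m²·K/W).

    thickness_mm       -- fabric thickness in millimetres
    conductivity_w_mk  -- effective thermal conductivity in W/(m·K)
    """
    return (thickness_mm / 1000.0) / conductivity_w_mk

def to_clo(resistance_m2kw):
    """Convert thermal resistance (m²·K/W) to clothing insulation units,
    using the definition 1 clo = 0.155 m²·K/W."""
    return resistance_m2kw / 0.155
```

This makes the thickness dependence discussed above explicit: for a fixed effective conductivity (dominated by trapped air), resistance and clo scale linearly with thickness.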
Procedia PDF Downloads 139
66 Implementation of Performance Management and Development System: The Case of the Eastern Cape Provincial Department of Health, South Africa
Authors: Thanduxolo Elford Fana
Abstract:
Rationale and Purpose: Performance management and development systems are central to effective and efficient service delivery, especially in highly labour-intensive sectors such as South African public health. Performance management and development systems seek to ensure that good employee performance is rewarded accordingly, while those who underperform are developed so that they can reach their full potential. An effectively and efficiently implemented performance management system motivates employees and improves employee engagement. The purpose of this study is to examine the implementation of the performance management and development system and the challenges encountered during its implementation in the Eastern Cape Provincial Department of Health. Methods: A qualitative research approach and a case study design were adopted in this study. The primary data were collected through observations, focus group discussions with employees, a group interview with shop stewards, and in-depth interviews with supervisors and managers, from April 2019 to September 2019. There were 45 study participants. In-depth interviews were held with 10 managers at facility level, including the chief executive officer, the chief medical officer, assistant directors in human resources management, patient administration, operations and finance, two area managers, and two operational nursing managers. A group interview was conducted with five shop stewards, and an in-depth interview was held with one shop steward from the group. Five focus group discussions were conducted with clinical and non-clinical staff. The focus group discussions were supplemented with an in-depth interview with one person from each group in order to counter the group effect. Observations included moderation committee, contracting, and assessment meetings. Findings: The study shows that the performance management and development system was not properly implemented.
There was non-compliance with performance management and development system policy guidelines in terms of timelines for contracting, evaluation, payment of incentives to good performers, and management of poor performance. The study revealed that the system is ineffective in raising the performance of employees and unable to help employees grow. Performance bonuses were no longer paid to qualifying employees. The study also revealed that lack of capacity and commitment, poor communication, constant policy changes, financial constraints, weak and highly bureaucratic management structures, and union interference were challenges encountered during the implementation of the performance management and development system. Lastly, employees and supervisors were rating themselves three irrespective of how well or badly they performed. Conclusion: Performance management is regarded as vital to improving the performance of the health workforce and healthcare service delivery among populations. Effective implementation of a performance management and development system depends on well-capacitated and unbiased management at facility level. Therefore, there is an urgent need to improve communication, link performance management to rewards, and capacitate staff on the performance management and development system, as it is key to improved public health sector outcomes and performance.
Keywords: challenges, implementation, performance management and development system, public hospital
Procedia PDF Downloads 135
65 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge’s vibrations as realistically as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The most commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure’s vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a bridge catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than the normative specifications do.
Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction
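The moving load model the abstract compares against can be sketched compactly: reduce the simply supported bridge to its first bending mode and drive it with the modal force of whichever axles are currently on the span. All bridge and train parameters below are illustrative assumptions, not values from the study's bridge catalogue, and raising `zeta` mimics the additional damping method:

```python
# Sketch of the moving load model (MLM): a simply supported bridge is
# reduced to its first bending mode and excited by a sequence of static
# axle loads crossing at constant speed. All parameters are illustrative.
import numpy as np

L = 25.0          # span [m]
mu = 15000.0      # mass per unit length [kg/m]
f1 = 4.0          # first natural frequency [Hz]
zeta = 0.015      # damping ratio (additional damping would raise this)
v = 250 / 3.6     # train speed [m/s]
axles = np.arange(16) * 26.0      # axle positions along the train [m]
P = 170e3                          # axle load [N]

w1 = 2 * np.pi * f1
m_star = mu * L / 2               # modal mass of mode 1
dt = 1e-4
t_end = (axles[-1] + L) / v + 2.0
q = qd = 0.0
amax = 0.0
for n in range(int(t_end / dt)):
    t = n * dt
    x = v * t - axles              # position of each axle on the bridge
    on = (x > 0) & (x < L)
    F = P * np.sum(np.sin(np.pi * x[on] / L))  # modal force of mode 1
    qdd = F / m_star - 2 * zeta * w1 * qd - w1**2 * q
    qd += qdd * dt                 # semi-implicit Euler time stepping
    q += qd * dt
    amax = max(amax, abs(qdd))     # midspan acceleration = |q''|*sin(pi/2)
print(f"peak midspan modal acceleration ~ {amax:.2f} m/s^2")
```

Because each axle enters as a pure static force, this model ignores the energy exchanged with the vehicle's suspended masses, which is exactly what the additional damping factor is meant to compensate for.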
Procedia PDF Downloads 160
64 Conceptualizing the Moroccan Amazigh
Authors: Sanaa Riaz
Abstract:
The free people, Amazigh (plural Imazighen), often known by the more popular exonym Berber, are spread across several North African countries, with the highest population in Morocco, and have been substantially misunderstood and differentially showcased by entities ranging from western-school-educated scholars to human, health and women’s rights organizations, to the state and the international community. This paper is an examination of the various conceptualizations of the Imazighen. With the spread of the Arab Spring movement to oust monarchical and dictatorial rulers across the Middle East and North Africa, the Moroccan monarchy introduced various reform programs to win public favor. These included social, economic and educational reforms to incorporate marginalized groups such as the Imazighen. The monarchy has ushered Amazigh representation into public offices and the public landscape through the Amazigh script, even though theirs has been an oral culture. After the Arab Spring, the Justice and Development Party, an Islamist party, took over in Morocco due to its accessibility to the masses. In September 2021, unlike the cases of Egypt and Tunisia, where military and constitutional means were sought, Morocco successfully removed the party from power through the ballot, resulting in a real victory for the neutral monarchy and its representation as a moderate, secular and liberal force for the nation. As a result, supporting the perpetuation of Amazigh linguistic identity also became synonymous with making a secular statement as a Muslim. It has led to the telling of Amazigh identity at state museums as one representing an indigenous, pure, diverse, culturally rich and united Morocco. Reform efforts have also prioritized an amiable look at the economic and familial links of Moroccan Jews, with the few thousand families still left in the country, and a showcasing through museums and cultural centers of the Jewish identity as Moroccan first.
In that endeavor, it is interesting to note the coverage of Jews as indigenous to Morocco through the embracing of their “folk” cultural and religious practices, those that are not continued outside Morocco. In this epistemology, the concept of the Moroccan Jew becomes similar to that of the indigenous Amazigh, both cherished as the oldest peoples of Morocco and symbols of its unity and resilience. In the urban discourse, Amazigh identity is a concept that continues to be part of the deliberations of elites and scholars graduating from French schools on the incorporation of rural and illiterate Morocco into economic and educational advancement. Yet, with the constant influx of migrants from Western Sahara into cities like Fez and Marrakesh, Amazigh has often been described as an umbrella term for those of “mixed” ethnic ancestry who constitute the country’s free population. In sum, Amazigh identity highlights the changing discourse on marginalized communities, human rights, representation, Moroccan nationhood, and regional and transnational politics. The aim of this paper is to analyze perceptions of Amazigh identity in Morocco after the 2021 ousting of the Islamist party, using data from state-sponsored museum displays and cultural centers collected in Summer 2022 and scholarly analyses of Amazigh identity, representation and rights in Morocco.
Keywords: Amazigh identity, Morocco, representation, state politics
Procedia PDF Downloads 89
63 Contamination by Heavy Metals of Some Environmental Objects in Adjacent Territories of Solid Waste Landfill
Authors: D. Kekelidze, G. Tsotadze, G. Maisuradze, L. Akhalbedashvili, M. Chkhaidze
Abstract:
Statement of Problem: The problem of solid waste, a dangerous source of environmental pollution, is an urgent issue for Georgia, as there are no waste-treatment or waste-incineration plants. Urban peripheral and rural areas, frequently along small rivers, are occupied by landfills without any permission. The study of the pollution of some environmental objects in the territories adjacent to a solid waste landfill in Tbilisi was carried out in 2020-2021, within the framework of the project “Ecological monitoring of the landfills surrounding areas and population health risk assessment”. Research objects: The goal of this research was to assess the ecological state of environmental objects (soil cover and surface water) in the territories adjacent to the solid waste landfill, on the basis of changes in heavy metal (HM) concentrations with distance from the landfill. An open sanitary landfill for solid domestic waste in Tbilisi is located in the suburb of Lilo, surrounded by densely populated villages. The content of the following HMs was determined in soil and river water samples: Pb, Cd, Cu, Zn, Ni, Co, Mn. Methodology: The HM content in the samples was measured using flame atomic absorption spectrophotometry (Perkin-Elmer AAnalyst 200 spectrophotometer) in accordance with ISO 11466 and GOST R 53218-2008. Results and discussion: The data obtained confirmed the migration of HMs, mainly as a function of distance from the landfill, which can be explained by areal emissions and open storage; the metals could also enter the soil cover under the influence of wind and precipitation. Concentrations of Pb, Cd, Cu and Zn consistently increase on approaching the landfill. High concentrations of Pb and Cd are characteristic of the soil cover of the adjacent territories around the landfill at distances of 250 and 500 meters. They create a dangerous zone, since these metals can later migrate into plants and enter rivers and lakes.
Concentrations higher than the maximum permissible concentrations (MPC) for surface waters of Georgia are observed for Pb and Cd. One of the reasons for the low concentration of HMs in river water may be high turbidity: suspended particles are good natural sorbents, which results in low concentrations of dissolved forms. Concentrations of Cu, Ni and Mn increase in winter, since in this season the rivers are fed by groundwater. Conclusion: The soil cover of the areas adjacent to the landfill in Lilo is contaminated with HMs. High concentrations in soils are characteristic of lead and cadmium. Concentrations elevated in comparison with the MPC for surface waters adopted in Georgia are also observed for Pb and Cd at checkpoints along and 1000 m below the landfill downstream. The data obtained confirm the migration of HMs to the territories adjacent to the landfill and to the Lochini River. Since the migration and toxicity of metals also depend on the presence of their mobile forms in water bodies, samples of bottom sediments should be taken as well. Bottom sediments reflect a long-term picture of pollution; they accumulate HMs and represent a constant source of secondary pollution of water bodies. The study of the physicochemical forms of metals is one of the priority areas for further research.
Keywords: landfill, pollution, heavy metals, migration
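The MPC comparison the abstract describes is a simple exceedance screening: divide each measured concentration by its permissible limit and flag ratios above one. The concentrations and MPC values below are hypothetical placeholders, not the study's measurements or Georgia's actual limits:

```python
# Sketch of an MPC-exceedance screening for river-water samples.
# All concentrations and MPC values are hypothetical placeholders.

measured_ug_l = {"Pb": 18.0, "Cd": 1.4, "Cu": 4.0, "Zn": 30.0}
mpc_ug_l      = {"Pb": 10.0, "Cd": 1.0, "Cu": 1000.0, "Zn": 1000.0}

def exceedance(measured, mpc):
    """Return metal -> measured/MPC ratio; ratios > 1 indicate exceedance."""
    return {m: measured[m] / mpc[m] for m in measured}

ratios = exceedance(measured_ug_l, mpc_ug_l)
flagged = sorted(m for m, r in ratios.items() if r > 1.0)
print(flagged)  # → ['Cd', 'Pb']
```

The same calculation applied to bottom-sediment samples, as the authors suggest, would expose accumulated pollution that the dissolved-phase ratios understate.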
Procedia PDF Downloads 99
62 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium
Authors: Binbin Chen, Dennis Y. C. Leung
Abstract:
Aluminium (Al)-air cells hold a high volumetric capacity density of 8.05 Ah cm⁻³, benefiting from the trivalence of Al ions. Additional benefits of the Al-air cell are its low price and environmental friendliness. Furthermore, the Al energy conversion process is characterized by 100% recyclability in theory. Along with a large base of raw material reserves, Al attracts considerable attention as a promising material to be integrated within the global energy system. However, despite early successful applications in military services, several problems prevent Al-air cells from widespread civilian use. The most serious issue is the parasitic corrosion of Al when it contacts the electrolyte. To overcome this problem, super-pure Al alloyed with various traces of metal elements is used to increase corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during the production process. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives increase the internal ohmic resistance and hamper cell performance. So far, these methods have not provided satisfactory solutions to the problems within Al-air cells. For the operation of alkaline Al-air cells, there are still other minor problems. One of them is the formation of aluminium hydroxide in the electrolyte, which decreases the ionic conductivity of the electrolyte. Another is the carbonation process within the gas diffusion layer of the cathode, which blocks the porosity for gas diffusion. Both of these hinder the performance of cells. The present work addresses the above problems by building an Al-air cell operation system consisting of four components. A top electrolyte tank containing fresh electrolyte is located at a high level, so that it can drive the electrolyte flow by gravity.
A mechanically rechargeable Al-air cell is fabricated with low-cost materials, including low-grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with an elaborate channel is designed to separate the hydrogen generated by corrosion, which is collected by a gas collection device. In the first section of the research work, we investigated the performance of the mechanically rechargeable Al-air cell with a constant flow rate of electrolyte to ensure the repeatability of the experiments. Then the whole system was assembled, and the feasibility of its operation was demonstrated. During the experiment, pure hydrogen is collected by the collection device, which holds potential for various applications. By collecting this by-product, high utilization efficiency of the aluminium is achieved. Considering both the electricity and the hydrogen generated, an overall utilization efficiency of around 90% or even higher is achieved under different working voltages. The fluidic electrolyte removes aluminium hydroxide precipitate and solves the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from abundant secondary Al. The system could also be applied to other metal-air cells and is suitable for emergency power supplies, power plants and other applications. The low-cost feature implies great potential for commercialization. Further optimization, such as scaling up and refinement of fabrication, will help turn the technology into practical market offerings.
Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge
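The utilization bookkeeping behind the "around 90%" figure can be sketched from stoichiometry: Al is consumed electrochemically (Al → Al³⁺ + 3e⁻) and by parasitic corrosion (2Al + 6H₂O → 2Al(OH)₃ + 3H₂), and collecting the corrosion hydrogen lets that share of the Al count as utilized too. The input numbers below are illustrative assumptions, not the paper's measurements:

```python
# Sketch of aluminium utilization accounting for an Al-air cell where
# the corrosion hydrogen is collected. Input numbers are illustrative.

F = 96485.0        # Faraday constant [C/mol]
M_AL = 26.98       # molar mass of Al [g/mol]
VM = 24.0          # approx. molar volume of gas at room temp [L/mol]

def al_utilization(charge_C, h2_collected_L, al_consumed_g):
    n_electrochem = charge_C / (3 * F)    # mol Al -> electricity (3 e- per Al)
    n_h2 = h2_collected_L / VM            # mol H2 collected
    n_corrosion = (2.0 / 3.0) * n_h2      # mol Al -> collected H2 (2 Al : 3 H2)
    n_total = al_consumed_g / M_AL
    return (n_electrochem + n_corrosion) / n_total

# Hypothetical run: 5400 C delivered, 0.25 L H2 collected, 0.80 g Al lost.
eff = al_utilization(5400.0, 0.25, 0.80)
print(f"overall Al utilization ~ {eff:.1%}")
```

Any gap between this ratio and 100% represents Al lost to hydrogen that escaped collection or to uncollected side reactions.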
Procedia PDF Downloads 283
61 Numerical Modeling of Timber Structures under Varying Humidity Conditions
Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan
Abstract:
Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response for this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain, even at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for the behavior of wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. The variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in the timber members is used as an input to the mechanical analysis.
In the mechanical analysis, the total strain is assumed to be the sum of the elastic strain, the viscoelastic strain, the mechano-sorptive strain, and the strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to the experimental results. The calibration is made to experiments carried out on wooden blocks subjected to uniaxial compressive loading in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model’s suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections subjected to constant load and varying humidity, is possible.
Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber
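The strain decomposition described above can be written out explicitly; the rate form given for the mechano-sorptive term is one common spring-dashpot formulation and is offered as a sketch, not necessarily the exact form calibrated in the study:

```latex
% Total strain as the sum of its components:
\varepsilon_{\mathrm{tot}}
  = \varepsilon_{e} + \varepsilon_{ve} + \varepsilon_{ms} + \varepsilon_{u}
% where \varepsilon_{e} is the elastic strain, \varepsilon_{ve} the
% viscoelastic strain, \varepsilon_{ms} the mechano-sorptive strain, and
% \varepsilon_{u} the free shrinkage/swelling strain.
%
% A typical one-dimensional mechano-sorptive rate law couples the strain
% rate to the magnitude of the moisture content change \dot{u}:
\dot{\varepsilon}_{ms} = m \,\sigma\, \lvert \dot{u} \rvert
% with m a calibrated mechano-sorptive compliance and \sigma the acting
% stress; the coupled two-dimensional model generalizes m to a matrix
% linking the loading and transverse material directions.
```

The |du/dt| dependence is what makes the creep accumulate under humidity cycling even when the stress is constant, which is why varying-humidity tests are needed for calibration.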
Procedia PDF Downloads 244
60 Optimization of Metal Pile Foundations for Solar Power Stations Using Cone Penetration Test Data
Authors: Adrian Priceputu, Elena Mihaela Stan
Abstract:
Our research addresses a critical challenge in renewable energy: improving efficiency and reducing the costs associated with the installation of ground-mounted photovoltaic (PV) panels. The most commonly used foundation solution is metal piles, with various sections adapted to the soil conditions and the structural model of the panels, although direct foundation systems are also sometimes used, especially in brownfield sites. Although metal micropiles are generally the first design option, understanding and predicting their bearing capacity, particularly under varied soil conditions, remains an open research topic. CPT Method and Current Challenges: Metal piles are favored for PV panel foundations due to their adaptability, but existing design methods rely heavily on costly and time-consuming in situ tests. The cone penetration test (CPT) offers a more efficient alternative by providing valuable data on soil strength, stratification, and other key characteristics with fewer resources. During the test, a cone-shaped probe is pushed into the ground at a constant rate. Sensors within the probe measure the resistance of the soil to penetration, divided into cone penetration resistance and sleeve friction resistance. Despite some existing CPT-based design approaches for metal piles, these methods are often cumbersome and difficult to apply. They vary significantly with soil type and foundation method, and traditional approaches like the LCPC method involve complex calculations and extensive empirical data. That method was developed by testing 197 piles over a wide range of ground conditions, but the tested piles were very different from those used for PV pile foundations, making the method less accurate and practical for steel micropiles. Project Objectives and Methodology: Our research aims to develop a calculation method for metal micropile foundations using CPT data, simplifying the complex relationships involved.
The goal is to estimate the pullout bearing capacity of piles without additional laboratory tests, streamlining the design process. To achieve this, a case study was selected that will serve for the development of an 80 ha solar power station. Four testing locations were chosen, spread throughout the site. At each location, two types of steel profiles (H160 and C100) were embedded into the ground at various depths (1.5 m and 2.0 m). The piles were tested for pullout capacity under natural and inundated soil conditions. CPT tests conducted nearby served as calibration points. The results served as the basis for a preliminary equation for estimating pullout capacity. Future Work: The next phase involves validating and refining the proposed equation on additional sites by comparing CPT-based forecasts with in situ pullout tests. This validation will enhance the accuracy and reliability of the method, potentially transforming the foundation design process for PV panels.
Keywords: cone penetration test, foundation optimization, solar power stations, steel pile foundations
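A CPT-based pullout estimate of the general kind the project pursues integrates the sleeve friction over the embedded depth and scales it by an empirical factor. The layer data, reduction factor, and profile perimeter below are illustrative assumptions, not the correlation actually developed in the study:

```python
# Sketch of a CPT-based pullout estimate for a driven steel micropile:
# shaft capacity is integrated from CPT sleeve friction fs over the
# embedded depth, scaled by an empirical reduction factor alpha.
# All input values are illustrative assumptions.

def pullout_capacity_kN(layers, perimeter_m, alpha=0.7):
    """layers: list of (thickness_m, fs_kPa) from the CPT sounding.

    Q_t = alpha * perimeter * sum(fs_i * dz_i)
    """
    return alpha * perimeter_m * sum(dz * fs for dz, fs in layers)

# Hypothetical CPT profile over 2.0 m embedment for an H160-type profile
# (developed shaft perimeter taken roughly as 0.9 m):
profile = [(0.5, 30.0), (0.5, 45.0), (0.5, 60.0), (0.5, 60.0)]
q = pullout_capacity_kN(profile, perimeter_m=0.9)
print(f"estimated pullout capacity ~ {q:.1f} kN")
```

Calibrating alpha separately for natural and inundated conditions, as the test program implies, would let the same CPT sounding bracket both design cases.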
Procedia PDF Downloads 53
59 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework
Authors: Robert Pocklington
Abstract:
Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or those bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful lives make private study almost impossible, and the countless parents whose evenings and weekends have become a nightmare of trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods that facilitate full learning of the key content within the classroom. A methodology that could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992 to teach English to over 4,000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from long-term memory and convert them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase, until it comes easily. This automation of the lexicon and of correct grammar use greatly facilitates all other language and conversational activities.
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours; this is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many different options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. There are many students in our classes who have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think ‘If they don’t study, and fail, it’s their problem’. It is now becoming the teacher’s problem. Teachers are ever more in need of methods that make their pupils successful learners; this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.
Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language
Procedia PDF Downloads 109
58 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode
Authors: Haohua Zong, Marios Kotsonis
Abstract:
The plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator which utilizes a pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm³, orifice diameter: 2 mm) together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ) is designed to enable repetitive operation. A time-resolved particle image velocimetry (TR-PIV) system working at 10 kHz is used to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring and jet) remain the same for single-shot mode and repetitive working mode. The shock wave is issued prior to jet eruption. Two distinct vortex rings are formed in one cycle: the first is produced by the starting jet, whereas the second is related to the shock wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply, then decreases almost linearly. Afterwards, an alternate occurrence of multiple jet stages and refresh stages is observed. By monitoring the dynamic evolution of the exit velocity in one cycle, some integral performance parameters of the PSJA can be deduced. As the frequency increases, the jet intensity in the steady phase decreases monotonically.
In the investigated frequency range, the jet duration time drops from 250 µs to 210 µs and the peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN·s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined as the ratio of jet mechanical energy to capacitor energy, does not show a significant difference (on the order of 0.01%). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies: one corresponds to the discharge frequency, while the other accounts for the alternation frequency of the jet stage and refresh stage within one cycle. The alternation period (approximately 300 µs) is independent of the discharge frequency and is possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of jet stage and refresh stage. Results show that the dynamic response of the exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order under-damped system. The oscillation frequency of the exit velocity, namely the alternation frequency, is positively proportional to the exit area, but inversely proportional to the cavity volume and throat length. The theoretical value of the alternation period (approximately 305 µs) agrees well with the experimental value.
Keywords: plasma, synthetic jet, actuator, frequency effect
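The stated scaling (alternation frequency growing with exit area, falling with cavity volume and throat length) matches a Helmholtz-resonator estimate, f = (c/2π)·√(A/(V·L)). The sketch below evaluates it for the actuator's stated cavity volume and orifice diameter; the throat length and speed of sound are assumed values, not taken from the paper:

```python
# Helmholtz-resonator estimate of the jet/refresh alternation frequency:
# f = (c / 2*pi) * sqrt(A / (V * L)). Cavity volume and orifice diameter
# are from the abstract; throat length and speed of sound are assumptions.
import math

c = 343.0              # speed of sound [m/s], ambient-air assumption
V = 424e-9             # cavity volume [m^3] (424 mm^3)
d = 2e-3               # orifice diameter [m]
L_throat = 2e-3        # throat length [m] -- hypothetical value

A = math.pi * (d / 2) ** 2
f = (c / (2 * math.pi)) * math.sqrt(A / (V * L_throat))
period_us = 1e6 / f
print(f"alternation period ~ {period_us:.0f} us")
```

With these assumptions the period lands in the same ~300 µs range as the reported value, and the formula reproduces the scaling with A, V and L that the paper describes.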
Procedia PDF Downloads 251
57 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors at a broad range of Reynolds numbers. The model was tested by ¹³CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. However, ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross-section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. 
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model to an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa and Ωr, which are respectively denominated the convection coefficient cofactor, axial dispersion coefficient cofactor, and radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa and Ωr are monotonically correlated with the Reynolds number. This is expected to be the case based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations
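A second-order linear ODE with constant coefficients, the form the reduced model takes, admits a closed-form solution via its characteristic roots. A hedged illustration for a semi-infinite bed with first-order consumption, D·C″ − u·C′ − k·C = 0; the parameter values are illustrative, not the paper's fitted ones:

```python
import math

def axial_profile(z, C0, D, u, k):
    """Concentration along a semi-infinite bed from the constant-coefficient ODE
    D*C'' - u*C' - k*C = 0, with C(0) = C0 and C bounded downstream.

    The bounded solution keeps only the negative characteristic root:
    r = (u - sqrt(u^2 + 4*D*k)) / (2*D).
    """
    r = (u - math.sqrt(u * u + 4.0 * D * k)) / (2.0 * D)
    return C0 * math.exp(r * z)

# Illustrative (assumed) parameter values in SI units.
C = axial_profile(z=0.05, C0=1.0, D=1e-5, u=1e-3, k=0.1)
```

The profile decays monotonically downstream, as expected for a consumed tracer.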
56 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult system unit (Vector S7, Catapult Innovations) inserted in a vest in a pouch between the scapulae. All data were analyzed following common procedures. Variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to relative horizontal force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. Time when peak velocity (MSS) was achieved (zero acceleration); 2. Time after peak velocity drops by 0.4 m/s; 3. Time after peak velocity drops by 0.6 m/s; and 4. When the integrated distance from the GPS/GNSS signal reaches 40 m. 
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed utilizing intraclass correlation coefficients (ICC). For goodness-of-fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity decays, and the 40 m end had the highest RSS values. For inter-trial reliability, the end of sprint detection techniques that were defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should implement end of sprint detection either when peak velocity is determined or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. However, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into the usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
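The model parameters above (MSS and τ) map directly onto the FVP metrics, and the velocity-drop end-detection methods can be sketched on synthetic data. This is a simplified illustration that neglects air resistance; the MSS and τ values are plausible assumptions, not the study's results:

```python
import math

def model_velocity(t, mss, tau):
    """Mono-exponential sprint model commonly used for GNSS-based FVP profiling."""
    return mss * (1.0 - math.exp(-t / tau))

def fvp_metrics(mss, tau):
    """Simplified relative FVP metrics (air resistance neglected, an assumption):
    F0 = MSS/tau (N/kg), V0 = MSS (m/s), Pmax = F0*V0/4 (W/kg)."""
    f0 = mss / tau
    return f0, mss, f0 * mss / 4.0

def end_by_velocity_drop(ts, vs, drop):
    """Sprint-end methods 2 and 3: first time velocity falls `drop` m/s below
    the peak reached so far."""
    peak = float("-inf")
    for t, v in zip(ts, vs):
        peak = max(peak, v)
        if peak - v >= drop:
            return t
    return ts[-1]

mss, tau = 8.5, 1.2                  # plausible, not measured, values
ts = [i * 0.1 for i in range(101)]   # 10 s sampled at 10 Hz, matching the GNSS rate
vs = [model_velocity(t, mss, tau) if t <= 5.0 else
      model_velocity(5.0, mss, tau) - 0.2 * (t - 5.0) for t in ts]  # synthetic decay
f0, v0, pmax = fvp_metrics(mss, tau)
t_end = end_by_velocity_drop(ts, vs, 0.4)
```

On this synthetic trace the 0.4 m/s drop criterion fires about 2 s after the peak, well before the artificial deceleration ends.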
55 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for that. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. 
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired through a variety of daily real-world road conditions, and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
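The core operation of the CNN layers described above is a learned 2-D convolution over image frames. A generic, library-free illustration (not the vendor's actual algorithm) showing how a small kernel responds to a vertical edge in a toy frame:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation, the core operation a CNN layer applies
    before its nonlinearity (in a real CNN the kernel weights are learned)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Toy 'frame': dark background with a bright vertical stripe, a stand-in for a
# vehicle edge; a Sobel-x kernel responds strongly on either side of it.
frame = np.zeros((5, 5))
frame[:, 2] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
response = conv2d_valid(frame, sobel_x)
```

The response is positive entering the stripe, zero on it, and negative leaving it, which is how stacked convolutional layers build up edge and shape detectors.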
54 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments
Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño
Abstract:
Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in 'Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD). Also, a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 °C and 0.69 °C for the HWT and EHWT, respectively. This indicates good agreement between the simulated and actual temperature values. 
The data included in the analysis were taken at different locations of probe punctures within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J·s⁻¹ for HWT and EHWT, respectively. These values decreased abruptly during the first 10 seconds, and a gradual decrease was observed thereafter. Data on heat flux are necessary in the design of heaters: if heat flux is underestimated, the heating component of a machine will not be able to provide the heat required by certain operations, while over-estimation results in wasted energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored for modeling various systems and processes.
Keywords: heat transfer, heat treatment, mango, modeling and simulation
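The role of heat flux in heater sizing can be illustrated with a lumped-capacitance sketch. This is a rough, small-Biot-number approximation with assumed order-of-magnitude properties; the paper itself solves the full 3-D transient conduction problem, which this does not replace:

```python
import math

def lumped_temp(t, T0, T_inf, h, area, rho, cp, volume):
    """Lumped-capacitance estimate of fruit temperature during hot water
    treatment: T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(rho*cp*V) * t).

    Valid only for small Biot number; a whole mango needs the full transient
    conduction model, so treat this as a rough bound.
    """
    k = h * area / (rho * cp * volume)
    return T_inf + (T0 - T_inf) * math.exp(-k * t)

def surface_heat_flow(h, area, T_inf, T_surface):
    """Convective heat flow (J/s) at the fruit surface, the quantity that
    sizes the heater."""
    return h * area * (T_inf - T_surface)

# Assumed, order-of-magnitude properties for a ~300 g fruit in 48 °C water.
T = lumped_temp(t=600, T0=28.0, T_inf=48.0, h=500.0,
                area=0.02, rho=1050.0, cp=3800.0, volume=3e-4)
q0 = surface_heat_flow(500.0, 0.02, 48.0, 28.0)
```

The initial heat flow q0 is largest at immersion, when the surface-to-water temperature difference is greatest, mirroring the abrupt early decline of heat flux reported above.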
53 Contemporary Paradoxical Expectations of the Nursing Profession and Revisiting the 'Nurses' Disciplinary Boundaries: India's Historical and Gendered Perspective
Authors: Neha Adsul, Rohit Shah
Abstract:
Background: The global history of nursing is exclusively a history of deep contradictions as it seeks to negotiate inclusion in an already gendered world. Although a powerful 'clinical gaze' exists, nurses have toiled to re-negotiate and subvert the 'medical gaze' by practicing the 'therapeutic gaze' to tether 'care' back into nursing practice. This helps address the duality of the 'body' and 'mind', wherein the patient is not just limited to being an object of medical inquiry. Nevertheless, there has been a consistent effort over the years to fit 'nursing' into being an art or an emerging science. Especially with advances in hospital-based techno-centric medical practices, the boundaries between technology and nursing practices are becoming more blurred as the technical process becomes synonymous with nursing, eroding the essence of nursing care. Aim: This paper examines the history of nursing and offers insights into how gendered relations and the ideological belief in 'nursing as gendered work' have contributed to the subjugation of the nursing profession. It further aims to provide insights into the patriarchally imbibed techno-centrism that negates the gendered caregiving which lies at the crux of a nurse's work. Method: A literature search was carried out using Google Scholar, Web of Science and PubMed databases. Search words included: technology and nursing, medical technology and nursing, history of nursing, sociology and nursing, and nursing care. The history of nursing is presented in a discussion that weaves together the historical events of the 'Birth of the Clinic' and the shift from 'bed-side medicine' to 'hospital-based medicine', which legitimized the exploitation of patients' bodies under the 'medical gaze', alongside the emergence of nursing as acquiescent to instrumental, technical, positivist and dominant views of medicine. 
The resultant power asymmetries leave nurses in contemporary practice constantly struggling to juggle between being the physician's 'operational right arm' and harbouring the subjective understanding of patients needed to refrain from de-humanizing nursing care. Findings: The nursing profession suffers from being rendered invisible due to gendered relations with patrifocal societal roots. This perpetuates a notion rooted in empiricism and has resulted in theoretical and epistemological fragmentation of the understanding of body and mind as separate entities. Nurses operate within this structure while constantly being at the brink of being pushed beyond legitimate professional boundaries and labeled 'unscientific', as their work does not always corroborate and align with the existing dominant positivist lines of inquiry. Conclusion: When understood in this broader context of how nursing as a practice has evolved over the years, it provides a particularly crucial testbed for understanding contemporary gender relations. Not because nurses like to live in a gendered work trap, but because the gendered relations at work are written in a covert narcissistic patriarchal milieu that fails to recognize the value of the intangible yet utmost necessary 'caring work' in nursing. This research urges and calls for preserving and revering the humane aspect of nursing care alongside the emerging tech-savvy expectations of nursing work.
Keywords: nursing history, technocentric, power relations, scientific duality
52 Application of Pedicled Perforator Flaps in Large Cavities of the Breast
Authors: Neerja Gupta
Abstract:
Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Reconstruction of the breast includes a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstruction surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, singly or as an adjunct, are based on a near-constant perforator vessel; a ratio of 2:1 at its entry into a flap is good to maintain vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAP are localized at 1-2 cm from the sternum. At 1-2 mm in diameter, a single perforator is sufficient to maintain the flap. LICAP has a dominant perforator in the 6th-11th spaces, while LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent reconstruction of the breast with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and finances available. Three patients underwent vertical LTAP, the distal limit of the flap being the inframammary crease. Three patients underwent MICAP, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area is determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed with dermal bleed. The patients' immediate pre-operative, post-operative and follow-up photographs were compared independently by two breast surgeons. 
Patients were given the licensed BREAST-Q questionnaire for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing for contralateral symmetrisation. The specimen dimensions ranged from 8x6.8x4 cm to 19x16x9 cm. The reconstructed breast volume ranged from 30 percent to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12x5x4.5 cm. One LTAP underwent marginal necrosis and delayed wound healing due to seroma. Three patients had phyllodes tumours, of which one was borderline and two were benign on final histopathology. The other three patients had invasive ductal cancer and have completed their radiation. At the median follow-up of 7 months, satisfaction scores were 90 for physical wellbeing and 85 for surgical results. Surgeons scored fair to good on the Harvard score. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to three-eighths of the breast volume. LTAP is preferred for tumours in the central, upper, and outer quadrants of the breast and MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and on adequate venous and cavity drainage.
Keywords: breast, oncoplasty, pedicled, perforator
51 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine
Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska
Abstract:
This work is part of a project which aims at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and engine capability for efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine with 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm³, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the An-2, M-18 'Dromader', DHC-3 'Otter', DC-3 'Dakota', GAF-125 'Hawk' and Y5 aircraft. The main problems of the engine include the imbalanced work of the cylinders: non-uniformity of the mixture in each cylinder results in non-uniformity of their work. In a radial engine, the cylinder arrangement causes the mixture movement to take place either in accordance with (lower cylinders) or opposite to (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of uneven work of individual cylinders. The phenomenon is most intense at low speed, and the non-uniformity is visible in the waveform of cylinder pressure. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with a fuel film. The study shows that gravity affects the movement of the fuel film inside the radial engine intake channels. In both the lower and the upper inlet channels the film flows downwards, since gravity assists the movement of the film in the lower cylinder channels and opposes it in the upper cylinder channels. 
Real tests on the ASz62IR aircraft engine were conducted in transient conditions (rapid changes of the excess air ratio in each cylinder). Calculations were conducted for the mass of fuel reaching the cylinders theoretically and actually, and on this basis the fuel evaporation factors 'x' were determined. A simplified model of the fuel supply to the cylinder was therefore adopted. The model includes the time constant of the fuel film τ, the number of engine cycles γ needed to transport non-evaporated fuel along the intake pipe, and the time Δt between successive cycles. The results of the model parameter identification are presented in the form of radar graphs. The figures show the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder causes changes in the formation of the fuel-air mixture and thus in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual algorithms for ignition control. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: radial engine, ignition system, non-uniformity, combustion process
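The fuel-film behaviour described above (a film with time constant τ delaying fuel delivery) is in the spirit of classic wall-wetting models. A minimal discrete-time sketch with assumed parameter values, not the identified ones:

```python
def fuel_film_step(m_film, m_inj, x, tau, dt):
    """One time step of a simple wall-film fuel model: a fraction x of the
    injected fuel joins the film, and the film evaporates with time constant tau.

    Returns (new film mass, fuel mass reaching the cylinder this step).
    """
    evaporated = m_film * dt / tau
    m_film_new = m_film + x * m_inj - evaporated
    delivered = (1.0 - x) * m_inj + evaporated
    return m_film_new, delivered

# Step increase in injected fuel: delivered mass lags behind injection because
# part of each dose is first stored in the film (all values are assumptions).
m_film, tau, x, dt = 0.0, 0.05, 0.3, 0.01
history = []
for _ in range(100):
    m_film, delivered = fuel_film_step(m_film, m_inj=1.0, x=x, tau=tau, dt=dt)
    history.append(delivered)
```

The delivered mass starts at 70% of the injected dose and converges to 100% once the film reaches equilibrium, which is the transient lag such models are meant to capture.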
50 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method
Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner
Abstract:
In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals' method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 out of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest speed, without thinking of the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximise speed whilst keeping energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology, therefore, needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; and (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. 
For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers' natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming
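One consistent way to compare the propulsive efficiency of different UUS kinematics, as requirement (iii) demands, is a Froude-style ratio of useful thrust power to total mechanical power input. A sketch with illustrative numbers only; the paper's exact efficiency definition may differ:

```python
def propulsive_efficiency(thrust, speed, power_input):
    """Froude-style propulsive efficiency: useful (thrust) power divided by
    total mechanical power input, a common basis for comparing kinematics."""
    return thrust * speed / power_input

# Illustrative magnitudes: 40 N mean thrust at 2 m/s sustained speed,
# 250 W mechanical power input (assumed values, not measured data).
eta = propulsive_efficiency(thrust=40.0, speed=2.0, power_input=250.0)
```

A lower-pace kinematic set with the same efficiency but lower power input would then represent a better choice for longer events.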
49 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students designed it through a parametric model in the Grasshopper3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement in a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil to a sinusoidal curve and a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as do the distances between them, and that the elasticity of the construction be achieved through a modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element participates in its coherence and stability. 
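The sinusoidal curve driving the coil joint can be sampled with a fixed node count, mirroring the constraint that the number of physical nodes stays constant while the curve deforms. A small sketch with illustrative dimensions, not the project's actual parameters:

```python
import math

def sinusoidal_path(n, amplitude, wavelength, length):
    """Sample n evenly spaced points along a sinusoidal (serpentine) curve.

    The node count n stays constant regardless of amplitude or wavelength,
    analogous to the fixed number of nodes in the physical construction.
    """
    pts = []
    for i in range(n):
        x = length * i / (n - 1)
        y = amplitude * math.sin(2.0 * math.pi * x / wavelength)
        pts.append((x, y))
    return pts

# Illustrative dimensions in metres; re-running with a different amplitude
# reshapes the curve while the 21 nodes remain the same.
pts = sinusoidal_path(n=21, amplitude=0.25, wavelength=1.0, length=2.0)
```

In a parametric model such as the Grasshopper definition described, amplitude and wavelength would be the sliders, while n is locked to the built object's node count.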
Depending on the positions of the elements relative to the support level, different perspectives of the adjacent space are created. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods: Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. Filling elements and nodes were made of MDF surfaces. During the design process, four trapezoidal patterns were selected to function as filling elements, and a different engraved facet was made on each to support their assembly. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
Keywords: polymorphic, locomotion, sinusoidal curves, parametric
48 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated from highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. At present, however, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in the direct 3D printing of soft materials can significantly reduce this repetitive manual work, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse SPAs fabricated with both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different air compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used. 
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out in Ansys Workbench with a Yeoh second-order hyperelastic material model. The model includes large deformation, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure elements are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary condition is applied on the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from the FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed-air source with a pressure gauge. A comparison study based on performance analysis is done between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
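For reference, the second-order Yeoh model used in such FEM analyses has the strain-energy form W = C10(I1 - 3) + C20(I1 - 3)^2. A minimal Python sketch of the resulting nominal stress in incompressible uniaxial tension follows; the coefficients in the example call are illustrative placeholders, not fitted Ecoflex values:

```python
def yeoh2_uniaxial_stress(stretch, c10, c20):
    """Nominal (engineering) stress for incompressible uniaxial tension
    under a second-order Yeoh model:
        W = C10*(I1 - 3) + C20*(I1 - 3)**2
    For stretch lam (incompressible): I1 = lam^2 + 2/lam, and
        P = 2*(lam - lam^-2) * dW/dI1.
    """
    lam = stretch
    i1 = lam**2 + 2.0 / lam                   # first strain invariant
    dw_di1 = c10 + 2.0 * c20 * (i1 - 3.0)     # dW/dI1
    return 2.0 * (lam - lam**-2) * dw_di1

# Illustrative coefficients only (Pa); not fitted to Ecoflex data.
print(yeoh2_uniaxial_stress(1.5, c10=0.02e6, c20=0.001e6))
```

At stretch 1.0 the stress is zero, as required of an unloaded reference state; the quadratic term lets the model capture the stiffening of silicones at larger strains.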
Procedia PDF Downloads 89
47 Using Low-Calorie Gas to Generate Heat and Electricity
Authors: Аndrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko
Abstract:
Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gases, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. With the right approach, however, low-calorie gas fuel can become a valuable source of energy, which determines the relevance of developing utilization technologies for low-calorific gases. As an example, this work considers one way of utilizing coal mine gas, since Ukraine ranks fourth in the world in coal mine gas emissions (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released in 70-80 percent of existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has a 21 times greater impact on the greenhouse effect than carbon dioxide, so the problem of its disposal has become increasingly important in the context of climate, ecology and environmental protection; these emissions cause negative effects of both local and global nature. The efforts of the United Nations and the World Bank led to the adoption of the 'Zero Routine Flaring by 2030' initiative, dedicated to ending the flaring of these gases and instead utilizing them to generate heat and electricity. This study proposes to use coal mine gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels made it possible to choose a suitable engine and to estimate the influence of fuel composition on its techno-economic indicators. The engine most suitable for low-calorie gas is one with pre-combustion chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of 1100 kW gas engines of type GD100 (10GDN 207/2 x 254) fueled by natural gas. 
By using pre-combustion chamber jet ignition together with quality (mixture) control, the GD100-type engines implement the concept of burning lean fuel mixtures, which in turn decreases the concentration of harmful substances in the exhaust gases. The main problems of coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect the combustion process and engine service life, the instability of its composition, and weak ignition. In some cases these problems can be solved by adapting the engine design to coal mine gas (changing the compression ratio, increasing the fuel injection quantity, changing the ignition timing, increasing spark plug energy, etc.). It is shown that fueling prechamber engines with coal mine gas does not lead to significant changes in the indicated parameters (ηi = 0.43 - 0.45). However, the volumetric fuel consumption increases significantly, which requires an increased fuel injection quantity to maintain the nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary GD100-type gas engines can significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.
Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization
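As a rough illustration of why the volumetric fuel consumption rises, the lower heating value (LHV) of the mixture can be estimated from its volumetric composition. The sketch below uses typical handbook per-component LHVs; the 40% methane composition is hypothetical, chosen from the 25-60% range quoted above:

```python
# Typical lower heating values per component, MJ/m^3 (handbook values).
LHV_MJ_PER_M3 = {
    "CH4": 35.8,   # methane
    "CO": 12.6,    # carbon monoxide
    "H2": 10.8,    # hydrogen
    "CO2": 0.0,    # inert
    "N2": 0.0,     # inert
}

def mixture_lhv(volume_fractions):
    """Volume-weighted LHV of a gas mixture, MJ/m^3."""
    assert abs(sum(volume_fractions.values()) - 1.0) < 1e-6
    return sum(frac * LHV_MJ_PER_M3[gas]
               for gas, frac in volume_fractions.items())

# Hypothetical coal mine gas: 40% methane, the rest inert components.
coal_mine_gas = {"CH4": 0.40, "CO2": 0.10, "N2": 0.50}
print(mixture_lhv(coal_mine_gas))  # ~14.3 MJ/m^3, vs ~35.8 for pure methane
```

With less than half the heating value of natural gas per cubic metre, roughly twice the gas volume must be injected per cycle to hold the nominal power, which is exactly the increased injection quantity noted above.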
Procedia PDF Downloads 264
46 Hydraulic Headloss in Plastic Drainage Pipes at Full and Partially Full Flow
Authors: Velitchko G. Tzatchkov, Petronilo E. Cortes-Mejia, J. Manuel Rodriguez-Varela, Jesus Figueroa-Vazquez
Abstract:
Hydraulic headloss, expressed through the values of the friction factor f and Manning's coefficient n, is an important parameter in designing drainage pipes. These values are normally taken from manufacturer recommendations, often without sufficient experimental support. To our knowledge, there is currently no standard procedure for hydraulically testing such pipes. As a result of research carried out at the Mexican Institute of Water Technology, a laboratory testing procedure was proposed and applied to 6- and 12-inch diameter polyvinyl chloride (PVC) and high-density dual-wall polyethylene (HDPE) drainage pipes. While the PVC pipe is characterized by naturally smooth interior and exterior walls, the dual-wall HDPE pipe has a corrugated exterior wall and, although considered smooth, a slightly wavy interior wall. The pipes were tested at full and partially full pipe flow conditions. The tests for full pipe flow were carried out on a 31.47 m long pipe at flow velocities between 0.11 and 4.61 m/s. Water was supplied by gravity from a 10 m-high tank in some of the tests, and from a 3.20 m-high tank in the rest. Pressure was measured independently with piezometer readings and pressure transducers, and the flow rate was measured by an ultrasonic meter. For partially full pipe flow, the pipe was placed inside an existing 49.63 m long zero-slope (horizontal) channel. The flow depth was measured by piezometers located along the pipe, for flow rates between 2.84 and 35.65 L/s measured by a rectangular weir. The observed flow profiles were then compared to computer-generated theoretical gradually varied flow profiles for different Manning's n values. It was found that Manning's n, normally assumed constant for a given pipe material, in fact depends on flow velocity and pipe diameter for full pipe flow, and on flow depth for partially full pipe flow. 
Contrary to the expected higher values of n and f for the HDPE pipe, virtually the same values were obtained for the smooth-interior-wall PVC pipe and the slightly wavy-interior-wall HDPE pipe. The explanation for this was found in Henry Morris' theory of smooth turbulent conduit flow over isolated roughness elements. Following Morris, three flow regimes are possible in a rough conduit: isolated-roughness (or semi-smooth turbulent) flow, wake-interference (or hyper-turbulent) flow, and skimming (or quasi-smooth) flow. Isolated-roughness flow is characterized by friction-drag turbulence over the wall between the roughness elements, with independent vortex generation and dissipation around each roughness element. In this regime, the wake and vortex-generation zones at each element develop and dissipate before reaching the next element; the longitudinal spacing of the roughness elements and their height are the important influencing factors. Given the slightly wavy form of the HDPE pipe's interior wall, the flow in this type of pipe belongs to this category. Based on that theory, an equation for the hydraulic friction factor was obtained. The obtained coefficient values will be incorporated into the Mexican design standards.
Keywords: drainage plastic pipes, hydraulic headloss, hydraulic friction factor, Manning's n
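The finding that Manning's n varies with pipe diameter at full flow is consistent with the standard identity linking n to the Darcy-Weisbach friction factor, f = 8 g n^2 / R^(1/3) in SI units, where R is the hydraulic radius (D/4 for a full circular pipe): holding n constant forces f to vary with R, and vice versa. A minimal Python sketch of the conversion (the n = 0.009 value is illustrative, not one of the measured results):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def darcy_f_from_manning_n(n, hydraulic_radius_m):
    """Darcy-Weisbach friction factor from Manning's n (SI units),
    using the uniform-flow identity f = 8*g*n^2 / R**(1/3)."""
    return 8.0 * G * n**2 / hydraulic_radius_m**(1.0 / 3.0)

def manning_n_from_darcy_f(f, hydraulic_radius_m):
    """Inverse conversion: Manning's n from Darcy-Weisbach f."""
    return math.sqrt(f * hydraulic_radius_m**(1.0 / 3.0) / (8.0 * G))

# Full 12-inch pipe: D = 0.3048 m, hydraulic radius R = D/4.
R = 0.3048 / 4.0
f = darcy_f_from_manning_n(0.009, R)    # n = 0.009: illustrative value
n_back = manning_n_from_darcy_f(f, R)   # round trip recovers n
print(f, n_back)
```

The R^(1/3) factor makes the two coefficients diameter-coupled, so reporting a single n per pipe material implicitly assumes a reference diameter.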
Procedia PDF Downloads 281