Search results for: engineering work package
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16327

247 Screening of Antagonistic/Synergistic Effect between Lactic Acid Bacteria (LAB) and Yeast Strains Isolated from Kefir

Authors: Mihriban Korukluoglu, Goksen Arik, Cagla Erdogan, Selen Kocakoglu

Abstract:

Kefir is a traditional fermented refreshing beverage known for its valuable and beneficial properties for human health. Yeast species, lactic acid bacteria (LAB) strains and, to a lesser extent, acetic acid bacteria strains live together in a natural matrix named the “kefir grain”, which is formed from various proteins and polysaccharides. Because different microbial species coexist in the slimy kefir grain, it has been thought that synergistic effects could take place between microorganisms belonging to different genera and species. In this research, yeasts and LAB were isolated from kefir samples obtained from the Uludag University Food Engineering Department. The cell morphology of the isolates was screened by microscopic examination. Gram reactions of the bacterial isolates were determined by the Gram staining method, and catalase activity was also examined. After the microscopic/morphological, physical and enzymatic properties of all isolates had been observed, they were assigned to the LAB or yeast groups according to their physicochemical responses to the applied examinations. As part of this research, the antagonistic/synergistic efficacy of the identified five LAB and five yeast strains towards each other was determined individually by the disk diffusion method. The antagonistic or synergistic effect is one of the most important properties in a co-culture system in which different microorganisms live together. The synergistic effect should be promoted, whereas the antagonistic effect should be prevented, in order to provide an effective culture for kefir fermentation. The aim of this study was to determine the microbial interactions between the identified yeast and LAB strains, and whether their effect is antagonistic or synergistic. If there is a strain which inhibits or retards the growth of other strains found in the kefir microflora, this indicates the presence of an antagonistic effect in the medium. Such a negative influence should be prevented, whereas microorganisms which have a synergistic effect on each other should be promoted by combining them in the kefir grain. Standardisation is the most desired property for industrial production. Each microorganism found in the microbial flora of a kefir grain should be identified individually. The members of the microbial community found in the glue-like kefir grain may then be redesigned as a starter culture, taking into account the efficacy of each microorganism towards the others in kefir processing. The main aim of this research was to shed light on more effective production of kefir grain and to contribute to the standardisation of kefir processing in the food industry.

Keywords: antagonistic effect, kefir, lactic acid bacteria (LAB), synergistic, yeast

Procedia PDF Downloads 255
246 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of control efforts during sudden disease outbreaks. This study uses environmental data from two U.S. Federal Government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values, to ensure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy that yields the smallest error and the lowest cost while offering the greatest productivity or potential. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method for input feature selection, based on Harmony Search integrated with a Genetic Algorithm, is introduced and shows a substantial improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
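
The model-comparison step can be pictured with a short scikit-learn sketch. This is an illustrative assumption rather than the authors' code: the feature matrix, target series and hyperparameters below are synthetic stand-ins for the weekly environmental variables and case counts, and the Harmony Search/Genetic Algorithm feature-selection wrapper is omitted.

```python
# Illustrative sketch (not the study's code): compare regressors for weekly
# dengue case forecasting on a synthetic environmental feature matrix.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # stand-ins for temperature, rainfall, NDVI, ...
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)  # synthetic weekly cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Huber": HuberRegressor(),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f} "
          f"MSE={mse:.2f} RMSE={np.sqrt(mse):.2f}")
```

In a full pipeline of the kind described above, the metaheuristic feature selector would wrap this loop, scoring candidate feature subsets by the resulting RMSE.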

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 33
245 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2

Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen

Abstract:

Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops or other oil sources can only satisfy part of the existing demand for transportation. Because they are clean, green and viable for mass production, microalgae are regarded as a possible biodiesel feedstock for a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, possesses the potential to metabolize glycerol to produce lipids. Hence, it is being considered as a promising microalgae-based oil source for biodiesel production and other applications. The aims of this study were to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies based on the optimal results from the shake-flask system in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) concentrations were increased five-fold (to 50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were utilized to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity, 131 mg/L/h, when used as the carbon source, and proved to be a suitable substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production. With the fed-batch operation strategy, 132.60 g of biomass and 69.15 g of lipid were harvested; the lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was lower, at 107 mg/L/h. With the semi-batch operation strategy, the overall lipid productivity reached 158 mg/L/h owing to the shorter cultivation time; the harvested biomass and lipid reached 232.62 g and 126.61 g, respectively, and the lipid yield improved from 0.20 to 0.24 g/g glycerol. In addition, the product costs of the three operation strategies were calculated. The lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction compared with the batch operation strategy.
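
For readers less familiar with the reported fermentation metrics, the two quantities quoted throughout, overall lipid productivity (mg/L/h) and lipid yield (g/g glycerol), follow the standard definitions below; the symbols are generic and not taken from the paper:

\[
P_{\mathrm{lipid}} = \frac{C_{\mathrm{lipid,\,final}}}{t_{\mathrm{culture}}}\ \left[\mathrm{mg\,L^{-1}\,h^{-1}}\right],
\qquad
Y_{L/G} = \frac{m_{\mathrm{lipid}}}{m_{\mathrm{glycerol\ consumed}}}\ \left[\mathrm{g\,g^{-1}}\right]
\]

Under these definitions, the reported yield of 0.24 g/g simply means that 0.24 g of lipid was recovered per gram of glycerol consumed, and the shorter cultivation time of the semi-batch runs is what raises the productivity from 107 to 158 mg/L/h.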

Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel

Procedia PDF Downloads 130
244 Performance of HVOF Sprayed Ni-20Cr and Cr₃C₂-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal-Fired Boiler

Authors: Tejinder Singh Sidhu

Abstract:

Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing suitable industrial alloys, and depositing protective coatings. However, the protection system selected for a particular application must be practical, reliable, and economically viable. Owing to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times. Coatings can add value to products of up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performance of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process has been studied. The coatings were deposited on a Fe-based superalloy, and experiments were performed in the actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature. Both coatings deposited on the Fe-based superalloy imparted better hot corrosion resistance than the uncoated superalloy. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one under the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.

Keywords: hot corrosion, coating, HVOF, oxidation

Procedia PDF Downloads 56
243 STEAM and Project-Based Learning: Equipping Young Women with 21st Century Skills

Authors: Sonia Saddiqui, Maya Marcus

Abstract:

UTS STEAMpunk Girls is an educational program for young women (aged 12-16), to empower them to be more informed and active members of the 21st century workforce. With the number of STEM graduates on the decline, especially among young women, an additional aim of the program is to trial a STEAM (Science, Technology, Engineering, Arts/Humanities/Social Sciences, Mathematics), inter-disciplinary approach to improving STEM engagement. In-line with UNESCO’s recent focus on promoting ‘transversal competencies’ in future graduates, the program utilised co-design, project-based learning, entrepreneurial processes, and inter-disciplinary learning. The program consists of two phases. Taking a participatory design approach, the first phase (co-design workshops) provided valuable insight into student perspectives around engaging young women in STEM and inter-disciplinary thinking. The workshops positioned 26 young women from three schools as subject matter experts (SMEs), providing a platform for them to share their opinions, experiences and findings around the STEAM disciplines. The second (pilot) phase put the co-design phase findings into practice, with 64 students from four schools working in groups to articulate problems with real-world implications, and utilising design-thinking to solve them. The pilot phase utilised project-based learning to engage young women in entrepreneurial and STEAM frameworks and processes. Scalable program design and educational resources were trialed to determine appropriate mechanisms for engaging young women in STEM and in STEAM thinking. Across both phases, data was collected via longitudinal surveys to obtain pre-program, baseline attitudinal information, and compare that against post-program responses. Preliminary findings revealed students’ improved understanding of the STEM disciplines, industries and professions, improved awareness of STEAM as a concept, and improved understanding regarding inter-disciplinary and design thinking. Program outcomes will be of interest to high-school educators in both STEM and the Arts, Humanities and Social Sciences fields, and will hopefully inform future programmatic approaches to introducing inter-disciplinary STEAM learning in STEM curriculum.

Keywords: co-design, STEM, STEAM, project-based learning, inter-disciplinary

Procedia PDF Downloads 179
242 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered as a single agent connected through fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, this paper presents a distributed leader-following method. The improved control protocol takes the importance of each motor’s speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by using control parameters optimization, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, the simplified models are unrealistic, such as single-integrator and double-integrator. And previous algorithms require the acceleration information of the leader available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which gets rid of the leader acceleration. The presented scheme optimizes synchronization performance, as well as provides satisfactory robustness. What’s more, the existing algorithms can obtain a stable synchronous system; however, the obtained stable system may encounter some disturbances that may destroy the synchronization. Focus on this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group non-identical motors. The failures are modeled by nonlinear functions, and the sliding mode observer is designed to estimate the angular speed and nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results have shown that all followers asymptotically converge to a consistent state when one follower fails to follow the virtual leader during a large enough disturbance, which illustrates the good performance of synchronization control accuracy.
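
A minimal numerical sketch of the leader-following consensus idea is given below, assuming simplified single-integrator speed dynamics rather than the paper's full observer-based, fault-tolerant protocol; the network, gains and speeds are invented for illustration.

```python
# Minimal sketch (assumed single-integrator speed dynamics, not the paper's full model):
# leader-following consensus of four motor speeds over a fixed, undirected network.
import numpy as np

A = np.array([[0, 1, 0, 1],      # adjacency matrix of the undirected communication graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
b = np.array([1, 0, 0, 0], dtype=float)    # only motor 0 receives the virtual leader
w_ref = 100.0                              # leader (reference) speed, rad/s
w = np.array([80.0, 95.0, 110.0, 70.0])    # initial motor speeds
eps = 0.1                                  # step size (must respect the graph spectrum)

for _ in range(200):
    coupling = A @ w - A.sum(axis=1) * w   # sum_j a_ij * (w_j - w_i)
    w = w + eps * (coupling + b * (w_ref - w))

print(np.round(w, 3))  # all speeds converge toward the leader speed 100.0
```

The weighted grouping and fault-tolerant observer described in the abstract would replace the fixed gains and the simple pinning term used here.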

Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 103
241 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS

Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh

Abstract:

A simple, sensitive and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids and herbicides) in food commodities, using phenolic resin based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from the food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques, such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction and Brunauer-Emmett-Teller surface area analysis, were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat and black gram. The recoveries of all twenty-six pesticides and herbicides were within the acceptable range (70-120%) according to the SANCO guideline, with relative standard deviation values < 15%. The limit of detection and limit of quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample clean-up process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better at removing co-extractives than PSA, without compromising the recoveries of the pesticides from the food matrices. Further, ACFs remove the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments. The developed method will be cost effective because the ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is therefore more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains and fruits.
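
The abstract does not state how the detection and quantification limits were derived; one common calibration-based convention (an assumption here, in the spirit of ICH-type guidance) expresses them through the standard deviation of the response, \(\sigma\) (from blanks or calibration residuals), and the calibration-curve slope, \(S\):

\[
\mathrm{LOD} = \frac{3.3\,\sigma}{S},
\qquad
\mathrm{LOQ} = \frac{10\,\sigma}{S}
\]

Computed per analyte and per matrix, such definitions naturally give analyte-dependent ranges of the kind reported above (0.38-3.69 ng/mL and 1.26-12.19 ng/mL).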

Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials

Procedia PDF Downloads 244
240 Usage of Cyanobacteria in Batteries: Saving Money, Enhancing Storage Capacity, Improving Portability, and Supporting the Ecology

Authors: Saddam Husain Dhobi, Bikrant Karki

Abstract:

The main objective of this paper is to save money, help balance the ecosystem of terrestrial organisms, help control global warming, and enhance the storage capacity of batteries, with suitable weight and thinness, by using cyanobacteria in the battery. To fulfill this purpose, different methods can be combined: analytical, biological, chemical, theoretical and physical, together with some engineering design. Using these different methods, a special type of battery can be produced that has a long life, a high storage capacity and a clean environmental profile, saves money, and makes use of the byproduct of cyanobacteria, i.e. glucose. Cyanobacteria are a special type of bacteria that produce different types of extracellular glucose and oxygen with the help of a little sunlight, water and carbon dioxide, and they can survive in freshwater, in marine environments and on land. In this process, more O₂ is produced than by plants, due to the rapid growth rate of cyanobacteria. The materials required to produce glucose with the help of cyanobacteria are easily available. Since CO₂ is a greenhouse gas that causes global warming, utilizing this gas helps preserve the ecological balance, while the byproduct glucose (C₆H₁₂O₆) can be used as a raw material for the battery and the escaping O₂ is utilized by living organisms. The glucose produced by cyanobacteria enters the Krebs cycle (citric acid cycle), in which the glucose is completely oxidized and all the available energy of the glucose molecule is released in the form of electrons and protons. If suitable anodes and cathodes are used, these electrons and protons can be captured to produce the required electric current with the help of the byproduct of cyanobacteria. According to the Virginia Tech bio-battery and Sony, 13 enzymes and air are used to produce nearly 24 electrons from a single glucose unit, giving an output power of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in mobile devices. When cyanobacteria are used in a battery, carbon dioxide can be reduced, global warming mitigated, and the storage capacity of the battery enhanced to more than 10 times that of a lithium battery, while saving money and balancing the ecology. In this way, energy can be produced from cyanobacteria and used in batteries for different benefits. In addition, due to their mass, size and easy cultivation, they help maintain the size of the battery. Hence, cyanobacteria can be used to make batteries of suitable size, with enhanced storage capacity, environmental benefits, portability and so on.

Keywords: anode, byproduct, cathode, cyanobacteria, glucose, storage capacity

Procedia PDF Downloads 321
239 Influence of the Location of Flood Embankments on the Condition of Oxbow Lakes and Riparian Forests: A Case Study of the Middle Odra River Beds on the Example of Dragonflies (Odonata), Ground Beetles (Coleoptera: Carabidae) and Plant Communities

Authors: Magda Gorczyca, Zofia Nocoń

Abstract:

Past and current studies from different countries have shown that river engineering leads to environmental degradation and the extinction of many species, often those protected by local and international wildlife conservation laws. Over the years, the main focus of river utilization has shifted from industrial applications to recreation and wildlife preservation, with an emphasis on maintaining biodiversity, which plays a significant role in mitigating climate change. Thus an opportunity has appeared to recreate flooding areas and natural habitats, which are very rare on a European scale. Additionally, river restoration helps to avoid floods and periodic droughts, which are usually very damaging to the economy. In this research, the biodiversity of dragonflies and ground beetles was analyzed in the context of plant communities and forest stand structure. The results were enriched with data from past and current literature. A comparison was made between two parts of the Odra river: a part where the oxbow lake and riparian forest were separated from the river bed by an embankment, and a part of the river with floodplains left intact. A validity assessment of embankment relocation was made on the basis of the research results. In the period between May and September, insects were collected, phytosociological analyses were carried out, and forest stand structure properties were specified. In the part of the river not separated by embankments, rare and protected plant species were recorded (e.g., Trapa natans, Salvinia natans), as well as a greater species and quantitative diversity of dragonflies. The ground beetle fauna, however, was richer in the area separated by the embankment. Even though the research was done during only one season and in a limited area, the results can be a starting point for further, extended research and may contribute to securing legal wildlife protection and restoration of the studied area. During the research, the presence of the invasive species Impatiens parviflora, Echinocystis lobata, and Procyon lotor was observed, which may lead to a loss of the natural values of the studied areas.

Keywords: carabidae, floodplains, middle Odra river, Odonata, oxbow lakes, riparian forests

Procedia PDF Downloads 125
238 Prediction of Fluid-Induced Deformation Using Cavity Expansion Theory

Authors: Jithin S. Kumar, Ramesh Kannan Kandasami

Abstract:

Geomaterials are generally porous in nature due to the presence of discrete particles and interconnected voids. The porosity present in these geomaterials plays a critical role in many engineering applications such as CO₂ sequestration, wellbore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern the changes in porosity, which in turn affect the permeability and stiffness of the medium. Injecting fluid into geomaterials results in permeation, which exhibits small or negligible deformation of the soil skeleton, followed by cavity expansion, fingering or fracturing (different forms of instability) due to large deformation, especially when the flow rate is greater than the ability of the medium to permeate the fluid. The complexity of this problem increases because the geomaterial behaves like both a solid and a fluid under certain conditions. Thus it is important to understand this multiphysics problem, in which, in addition to permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. The analytical models generally use Darcy's law or diffusion equations to capture the fluid flow during permeation, while elastic-plastic models (Mohr-Coulomb and Modified Cam-Clay) are used to predict the solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid entering the medium. Very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton, and the porosity changes during fluid injection and the coupled elastic-plastic deformation are not clearly understood. In this study, the phenomena of permeation and instabilities such as cavity and finger/fracture formation will be quantified extensively by performing experiments using a novel experimental setup, in addition to utilizing image processing techniques. This experimental study will describe the fluid flow and soil deformation characteristics under different boundary conditions. Further, a refined coupled semi-analytical model will be developed to capture the physics involved in quantifying the deformation behaviour of the geomaterial during fluid injection.
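
For concreteness, the permeation side of the analytical models mentioned above is commonly written as Darcy flow coupled to fluid mass conservation; the form below is a generic sketch, not the authors' exact formulation:

\[
\mathbf{q} = -\frac{k}{\mu}\,\nabla p,
\qquad
\frac{\partial\,(\phi\,\rho_f)}{\partial t} + \nabla\cdot(\rho_f\,\mathbf{q}) = 0
\]

Here \(\mathbf{q}\) is the Darcy flux, \(k\) the permeability, \(\mu\) the fluid viscosity, \(p\) the pore pressure, \(\phi\) the porosity and \(\rho_f\) the fluid density. In a coupled poro-elastic-plastic description, \(\phi\) and \(k\) evolve with the skeleton deformation, which is precisely the coupling that drives the transition from quiet permeation to cavity expansion or fingering once the injection rate exceeds what the medium can permeate.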

Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model

Procedia PDF Downloads 51
237 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in the hope of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data, illustrates the indispensability of these technologies in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful problem instances. The paper also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data in biology.
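
As a toy illustration of the "polynomial-time but data-heavy" workloads mentioned above, the sketch below parallelises an all-to-all sequence comparison across CPU cores; the random sequences and the simple Hamming metric are placeholders, not part of the paper.

```python
# Illustrative sketch only: an "all-to-all" comparison (quadratic in the number of
# sequences) parallelised across CPU cores with the standard library.
from itertools import combinations
from multiprocessing import Pool
import random

def hamming(pair):
    a, b = pair
    return sum(x != y for x, y in zip(a, b))   # mismatches between equal-length sequences

if __name__ == "__main__":
    random.seed(0)
    seqs = ["".join(random.choice("ACGT") for _ in range(200)) for _ in range(300)]
    pairs = list(combinations(seqs, 2))          # ~45,000 pairwise comparisons
    with Pool() as pool:
        dists = pool.map(hamming, pairs, chunksize=500)
    print(len(dists), "pairwise distances; min =", min(dists), "max =", max(dists))
```

Scaling this pattern from hundreds of sequences to millions is exactly where the data volume, rather than the algorithmic complexity, forces the move to HPC.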

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 344
236 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) methods have been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively, so FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in noisy environments and in finding closely spaced modes. The CWT technique, which is currently being developed, works on the time-frequency plane and shows reasonable performance under such conditions. A further advantage of the wavelet transform over the other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (natural frequencies, mode shapes, and damping ratios) of a three-storey steel frame, built in the Concordia University laboratory, using ambient vibration. The frame is made of galvanized steel, with a length of 60 cm, a width of 27 cm and a height of 133 cm, with no bracing along the long or short span. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor, and a gateway received the data and sent it to the PC using the Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated 5 times in each direction, using hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by means of a ridge detection method. In this paper, a partial-derivative ridge detection technique is applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods have been compared, and it is demonstrated that CWT has the best performance in terms of accuracy in a noisy environment. The modal parameters, namely natural frequencies, damping ratios and mode shapes, are identified by all three methods.
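
A compact sketch of the CWT ridge idea follows, assuming a complex Morlet wavelet and a synthetic two-mode acceleration record; this is not the authors' partial-derivative ridge algorithm, just the simpler per-time maximum of the scalogram.

```python
# Minimal sketch (not the paper's processing chain): Morlet-based CWT of a
# synthetic acceleration signal, with the instantaneous frequency taken as the
# ridge (per-time maximum of the scalogram magnitude).
import numpy as np

fs = 512.0                                   # sampling rate used in the test, Hz
t = np.arange(0, 10, 1 / fs)                 # 10 s synthetic record
x = np.sin(2 * np.pi * 3.2 * t) + 0.5 * np.sin(2 * np.pi * 9.8 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)   # stand-in for measured response

def morlet_cwt(signal, freqs, fs, w0=6.0):
    """Simplified CWT with a complex Morlet mother wavelet (no admissibility correction)."""
    n = signal.size
    out = np.empty((freqs.size, n), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)            # pseudo-frequency to scale mapping
        m = int(min(10 * scale, (n - 1) // 2))       # truncate the wavelet support
        k = np.arange(-m, m + 1) / scale
        wavelet = np.exp(1j * w0 * k) * np.exp(-0.5 * k**2) / np.sqrt(scale)
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
    return out

freqs = np.linspace(2.0, 15.0, 60)               # analysed frequency band, Hz
W = morlet_cwt(x, freqs, fs)
ridge = freqs[np.argmax(np.abs(W), axis=0)]      # instantaneous (ridge) frequency per time step
print(f"dominant ridge frequency ≈ {np.median(ridge):.2f} Hz")  # ≈ 3.2 Hz, the stronger mode
```

Replacing the simple argmax with a local-maxima, partial-derivative search over the time-frequency plane gives the ridge tracking described in the abstract.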

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 272
235 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to impact the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls for Qatar under changing climatic conditions based on the IPCC's AR4, allowing a later revision of the proposed design standards, relevant for projects with a longer design life. The future climate has been investigated based on the climate models assessed in the IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES), using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both a wet scenario (NCAR-CCSM) and a dry scenario (CSIRO-MK3.5) were extracted for the Qatari grid points in the climate models for three periods: the current climate (2010-2039), the medium-term climate (2040-2069) and the end-of-century climate (2070-2099). A homogeneous region comprising the Qatari grid points was formed, and an L-moments-based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall in the medium term (2040-2069), but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed that take climate change into account for the 2070-2099 period, by averaging the results from the two scenarios. The IPCC's AR4 predicts that the rainfall intensity for a 5-year return period event with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate. Similarly, the rainfall intensity for a more extreme event, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should incorporate safety factors that take the predicted effects of climate change into due consideration.
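
To make the frequency-analysis step concrete, the sketch below fits an at-site Gumbel (EV1) distribution to an invented annual-maximum series and reads off return-period quantiles; the actual study uses an L-moments-based regional approach, and the +71% uplift quoted from the abstract is applied here purely as a worked example of a climate factor.

```python
# Illustrative sketch only (invented 24-h annual maxima; the study itself uses a
# regional L-moments procedure rather than this at-site MLE fit).
import numpy as np
from scipy import stats

ams = np.array([38., 55., 42., 71., 60., 49., 80., 35., 66., 58.,
                45., 90., 52., 61., 74., 40., 57., 69., 48., 83.])  # hypothetical AMS, mm

loc, scale = stats.gumbel_r.fit(ams)              # MLE fit of the EV1 (Gumbel) distribution
for T in (5, 25, 100):                            # return periods, years
    x_T = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-yr design rainfall ≈ {x_T:6.1f} mm")

x_100 = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)
print(f"100-yr value with an illustrative +71% climate factor ≈ {1.71 * x_100:6.1f} mm")
```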

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 370
234 CRISPR/Cas9 Based Gene Stacking in Plants for Virus Resistance Using Site-Specific Recombinases

Authors: Sabin Aslam, Sultan Habibullah Khan, James G. Thomson, Abhaya M. Dandekar

Abstract:

Losses due to viral diseases are posing a serious threat to crop production. A quick breakdown of resistance to viruses like Cotton Leaf Curl Virus (CLCuV) demands the application of a proficient technology to engineer durable resistance. Gene stacking has recently emerged as a potential approach for integrating multiple genes in crop plants. In the present study, recombinase technology has been used for site-specific gene stacking. A target vector (pG-Rec) was designed for engineering a predetermined specific site in the plant genome whereby genes can be stacked repeatedly. Using Agrobacterium-mediated transformation, the pG-Rec was transformed into Coker-312 along with Nicotiana tabacum L. cv. Xanthi and Nicotiana benthamiana. The transgene analysis of target lines was conducted through junction PCR. The transgene positive target lines were used for further transformations to site-specifically stack two genes of interest using Bxb1 and PhiC31 recombinases. In the first instance, Cas9 driven by multiplex gRNAs (for Rep gene of CLCuV) was site-specifically integrated into the target lines and determined by the junction PCR and real-time PCR. The resulting plants were subsequently used to stack the second gene of interest (AVP3 gene from Arabidopsis for enhancing cotton plant growth). The addition of the genes is simultaneously achieved with the removal of marker genes for recycling with the next round of gene stacking. Consequently, transgenic marker-free plants were produced with two genes stacked at the specific site. These transgenic plants can be potential germplasm to introduce resistance against various strains of cotton leaf curl virus (CLCuV) and abiotic stresses. The results of the research demonstrate gene stacking in crop plants, a technology that can be used to introduce multiple genes sequentially at predefined genomic sites. The current climate change scenario highlights the use of such technologies so that gigantic environmental issues can be tackled by several traits in a single step. After evaluating virus resistance in the resulting plants, the lines can be a primer to initiate stacking of further genes in Cotton for other traits as well as molecular breeding with elite cotton lines.

Keywords: cotton, CRISPR/Cas9, gene stacking, genome editing, recombinases

Procedia PDF Downloads 123
233 Sample Hospital Buildings as Modern Health Facilities in Early Republican Turkey

Authors: Mehmet Sener, Emre Kishali

Abstract:

The establishment of the republic brought radical changes related to the modernization of life in early republican Turkey, given the revolutions in its socio-economic, cultural and political aspects. These changes also strongly influenced city planning and the architectural medium, and arrangements related to health facility production had an important place amongst them. While health services were undergoing great transformations in all respects, the socio-cultural and architectural framework of these facilities necessitated the adoption of new conceptual approaches, which led to the construction of new hospital buildings by the republican state under the name 'Sample Hospital'. In this period, the state constructed sample hospitals in several cities (Adana, Ankara, Erzurum, İstanbul, Konya, Sivas and Trabzon) with the aim of providing a good example for further hospitals, embodying all the characteristics of a contemporary health complex of that day. In this study, these hospitals will first be elucidated with regard to their historical evolution and current situation. Then, since they are among the most significant modern heritage buildings of the republican period, ways of relating these complexes to the rapidly evolving contemporary world will be discussed, by proposing solutions and approaches from the fields of city planning, architectural preservation, engineering and architectural history, together with an awareness of the socio-economic conditions, health services and architectural medium of Turkey. These hospitals are complexes composed of building ensembles which have functional relationships with each other. Strategies will therefore be proposed for the preservation, renovation and refurbishment of these complexes, with an awareness of the possible conflict between conservation practices and today's health facility standards. Accordingly, the addition or removal of some elements of the complexes, or suggested architectural changes for the modernization of these health facilities, will be investigated in light of the requirements of contemporary architectural design for health facilities. Since these hospitals are highly complex structures with vastly changing design and construction standards, they cannot be used without adopting the necessary architectural and technological interventions. The adaptive re-use of these buildings, rather than demolition, and the preservation of their overall character therefore become inevitable for sustaining this health facility heritage in Turkey. In this context, a multidisciplinary analysis of the 'Sample Hospital' concept and buildings in modern Turkish architectural history will be made, within the framework of the adaptive reuse of these health complexes.

Keywords: adaptive re-use, conservation, early republican Turkey, sample hospital

Procedia PDF Downloads 218
232 Perovskite Nanocrystals and Quantum Dots: Advancements in Light-Harvesting Capabilities for Photovoltaic Technologies

Authors: Mehrnaz Mostafavi

Abstract:

Perovskite nanocrystals and quantum dots have emerged as leaders in the field of photovoltaic technologies, demonstrating exceptional light-harvesting abilities and stability. This study investigates the substantial progress and potential of these nano-sized materials in transforming solar energy conversion. The research delves into the foundational characteristics and production methods of perovskite nanocrystals and quantum dots, elucidating their distinct optical and electronic properties that render them well-suited for photovoltaic applications. Specifically, it examines their outstanding light absorption capabilities, enabling more effective utilization of a wider solar spectrum compared to traditional silicon-based solar cells. Furthermore, this paper explores the improved durability achieved in perovskite nanocrystals and quantum dots, overcoming previous challenges related to degradation and inconsistent performance. Recent advancements in material engineering and techniques for surface passivation have significantly contributed to enhancing the long-term stability of these nanomaterials, making them more commercially feasible for solar cell usage. The study also delves into the advancements in device designs that incorporate perovskite nanocrystals and quantum dots. Innovative strategies, such as tandem solar cells and hybrid structures integrating these nanomaterials with conventional photovoltaic technologies, are discussed. These approaches highlight synergistic effects that boost efficiency and performance. Additionally, this paper addresses ongoing challenges and research endeavors aimed at further improving the efficiency, stability, and scalability of perovskite nanocrystals and quantum dots in photovoltaics. Efforts to mitigate concerns related to material degradation, toxicity, and large-scale production are actively pursued, paving the way for broader commercial application. In conclusion, this paper emphasizes the significant role played by perovskite nanocrystals and quantum dots in advancing photovoltaic technologies. Their exceptional light-harvesting capabilities, combined with increased stability, promise a bright future for next-generation solar cells, ushering in an era of highly efficient and cost-effective solar energy conversion systems.

Keywords: perovskite nanocrystals, quantum dots, photovoltaic technologies, light-harvesting, solar energy conversion, stability, device designs

Procedia PDF Downloads 47
231 Computational Fluid Dynamics Analysis of Sit-Ski Aerodynamics in Crosswind Conditions

Authors: Lev Chernyshev, Ekaterina Lieshout, Natalia Kabaliuk

Abstract:

Sit-skis enable individuals with limited lower limb or core movement to ski unassisted with confidence. The rise in popularity of the Winter Paralympics has seen an influx of engineering innovation, especially for the Downhill and Super-Giant Slalom events, where athletes reach speeds as high as 160 km/h. The growth of the sport has inspired recent research into sit-ski aerodynamics. Crosswinds are expected in mountain climates and can therefore greatly affect a skier's maneuverability and aerodynamics. This research investigates the impact of crosswinds on the drag force of a Paralympic sit-ski using Computational Fluid Dynamics (CFD). A Paralympic sit-ski with a model of a skier, a leg cover, a bucket seat, and a simplified suspension system was used for CFD analysis in ANSYS Fluent. The hybrid initialisation tool and the SST k-ω turbulence model were used with two tetrahedral mesh bodies of influence. Crosswinds of 10, 30, and 50 km/h acting perpendicular to the sit-ski's direction of travel were simulated, in combination with straight-line skiing speeds of 60, 80, and 100 km/h. Following initialisation, 150 iterations of both first- and second-order steady-state solvers were used before switching to a transient solver with a computational time of 1.5 s and a time step of 0.02 s to allow the solution to converge. The CFD results were validated against wind tunnel data. The results suggested that, across all crosswind and sit-ski speeds, on average 64% of the total drag on the ski was due to the athlete's torso. The suspension made the second largest contribution to the overall sit-ski drag force, averaging 27%, followed by the leg cover at 10%, while the seat contributed a negligible 0.5% of the total drag force, averaging 1.2 N across the conditions studied. The crosswind increased the total drag force at all skiing speeds studied, with the drag on the athlete's torso and suspension being the most sensitive to changes in the crosswind magnitude. The effect of the crosswind on the ski drag reduced as the simulated skiing speed increased: at 60 km/h, the drag force on the torso increased by 154% as the crosswind increased from 10 km/h to 50 km/h, whereas at 100 km/h the corresponding drag force increase was roughly halved (75%). The analysis of the flow and pressure field characteristics for a sit-ski in crosswind conditions indicated that the flow separation location and wake size correlated with the magnitude and direction of the crosswind relative to the straight-line skiing direction. The findings can inform aerodynamic improvements in sit-ski design and increase skiers' medalling chances.
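
A back-of-envelope calculation (not from the paper) of the resultant relative wind and yaw angle for each simulated combination helps explain why the crosswind's relative influence weakens as skiing speed rises.

```python
# Illustrative sketch: resultant relative-wind speed and yaw angle for the
# simulated crosswind / skiing-speed combinations (perpendicular crosswind assumed).
import math

for v_ski in (60, 80, 100):            # straight-line skiing speed, km/h
    for v_cross in (10, 30, 50):       # crosswind speed, km/h
        v_rel = math.hypot(v_ski, v_cross)
        yaw = math.degrees(math.atan2(v_cross, v_ski))
        print(f"ski {v_ski:>3} km/h, crosswind {v_cross:>2} km/h -> "
              f"relative wind {v_rel:5.1f} km/h at yaw {yaw:4.1f} deg")
```

At 60 km/h a 50 km/h crosswind yaws the relative wind by about 40 degrees, whereas at 100 km/h the same crosswind gives only about 27 degrees, consistent with the smaller relative drag increase reported at higher skiing speeds.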

Keywords: sit-ski, aerodynamics, CFD, crosswind effects

Procedia PDF Downloads 48
230 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools

Authors: Raymond K. Jonkers

Abstract:

The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness and depends on controlled system input variables and their weightings. These variables represent business performance measures, captured within a strategic balanced scorecard. The weighting of these measures enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus in this study was on the performance of tangible assets within the scorecard, rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required for supporting the respective systems. The resulting scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, and aggregated and visualized in terms of the overall state of capabilities achieved. This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include controls for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision-making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant amongst the different management disciplines. In terms of assessing an organization's ability to adopt this approach, consideration was given to the P3M3 management maturity model.
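
To illustrate the weighted-aggregation and what-if idea in concrete terms, here is a hypothetical sketch; the measure names, values and weights are invented, and the paper's empirically derived model from factorial experiments is not reproduced.

```python
# Hypothetical sketch of weighted capability aggregation with a simple "what-if" control.
measures = {                      # each expressed as achieved/required, capped at 1.0
    "upgrade_funding_vs_demand": 0.70,
    "certifications_achieved":   0.90,
    "pm_to_cm_ratio_norm":       0.60,
    "support_capacity_vs_req":   0.80,
}
weights = {                       # stakeholder-assigned priorities (sum to 1.0)
    "upgrade_funding_vs_demand": 0.35,
    "certifications_achieved":   0.15,
    "pm_to_cm_ratio_norm":       0.30,
    "support_capacity_vs_req":   0.20,
}

def capability_effectiveness(m, w):
    """Weighted sum of normalised measures, as a stand-in for the empirical model."""
    return sum(w[k] * min(m[k], 1.0) for k in m)

baseline = capability_effectiveness(measures, weights)
what_if = dict(measures, upgrade_funding_vs_demand=0.90)   # interactive "what-if" adjustment
print(f"baseline effectiveness: {baseline:.2f}")
print(f"what-if (funding raised to 0.90): {capability_effectiveness(what_if, weights):.2f}")
```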

Keywords: management, systems, performance, scorecard

Procedia PDF Downloads 301
229 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues under extreme compression conditions have attracted increasing attention over the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, there is still a lack of models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high-energy-density regime. In terms of macroscopic hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES) and the Reynolds-averaged Navier-Stokes equations (RANS), have reached relatively acceptable consensus in the low-energy-density regime. In the high-energy-density regime, however, they cannot be applied directly due to the occurrence of dissociation, ionization, dramatic changes in the equation of state, thermodynamic properties, etc., which may make the governing equations invalid in some coupled situations. From the micro/meso-scale point of view, methods based on Molecular Dynamics (MD) as well as Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability of a metal lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has some limits in its application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process due to its 'ab initio' feature; 2) because of the computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, relaxation time choices, etc., and series of tests have been conducted to determine the optimized parameters; 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micro-mechanism of RMI in the high-energy-density regime.

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 208
228 Microorganisms in Fresh and Stored Bee Pollen Originated from Slovakia

Authors: Vladimíra Kňazovická, Mária Dovičičová, Miroslava Kačániová, Margita Čanigová

Abstract:

The aim of the study was to test the storage of bee pollen at room temperature and in cold store, and to describe microorganisms originated from it. Fresh bee pollen originating in West Slovakia was collected in May 2010. It was tested for presence of particular microbial groups using dilution plating method, and divided into two parts with different storage (in cold store and at room temperature). Microbial analyses of pollen were repeated after one year of storage. Several bacterial strains were isolated and tested using Gram staining, for catalase and fructose-6-phosphate-phosphoketolase presence, and by rapid ID 32A (BioMérieux, France). Micromycetes were identified at genus level. Fresh pollen contained coliform bacteria, which were not detected after one year of storage in both ways. Total plate count (TPC) of aerobes and anaerobes and of yeasts in fresh bee pollen exceeded 5.00 log CFU/g. TPC of aerobes and anaerobes decreased below 2.00 log CFU/g after one year of storage in both ways. Count of yeasts decreased to 2.32 log CFU/g (at room temperature) and to 3.66 log CFU/g (in cold store). Microscopic filamentous fungi decreased from 3.41 log CFU/g (fresh bee pollen) to 1.13 log CFU/g (at room temperature) and to 1.89 log CFU/g (in cold store). In fresh bee pollen, 12 genera of micromycetes were identified in the following order according to their relative density: Penicillium > Mucor > Absidia > Cladosporium, Fusarium > Alternaria > Eurotium > Aspergillus, Rhizopus > Emericella > Arthrinium and Mycelium sterilium. After one year at room temperature, only three genera were detected in bee pollen (Penicillium > Aspergillus, Mucor) and after one year in cold store, seven genera were detected (Mucor > Penicillium, Emericella > Aspergillus, Absidia > Arthrinium, Eurotium). From the plates designated for anaerobes, eight colonies originating in fresh bee pollen were isolated. Among them, a single yeast isolate occurred. Other isolates were G+ bacteria, with a total of five rod shaped. In three out of these five, catalase was absent and fructose-6-phosphate-phosphoketolase was present. Bacterial isolates originating in fresh pollen belonged probably to genus Bifidobacterium or relative genera, but their identity was not confirmed unequivocally. In general, cold conditions are suitable for maintaining the natural properties of foodstuffs for a longer time. Slight decrease of microscopic fungal number and diversity was recorded in cold temperatures compared with storage at room temperature.

Keywords: bacteria, bee product, microscopic fungi, biosystems engineering

Procedia PDF Downloads 307
229 Genome-Wide Mining of Potential Guide RNAs for Streptococcus pyogenes and Neisseria meningitidis CRISPR-Cas Systems for Genome Engineering

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

The clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) system can facilitate targeted genome editing in organisms. A dual or single guide RNA (gRNA) can program the Cas9 nuclease to cut target DNA in particular regions, thus introducing targeted mutations either via error-prone non-homologous end-joining repair or via the incorporation of foreign DNA by homologous recombination between donor DNA and the target region. Despite the high demand for such a promising technology, developing a well-organized procedure for the reliable mining of potential gRNA target sites in large genomic data sets is still challenging. Hence, we aimed to perform high-throughput detection of target sites defined by the specific PAMs of not only the common Streptococcus pyogenes (SpCas9) but also the Neisseria meningitidis (NmCas9) CRISPR-Cas system. Previous research has confirmed the successful application of such RNA-guided Cas9 orthologs for effective gene targeting and subsequent genome manipulation. However, Cas9 orthologs need their particular PAM sequence for DNA cleavage activity, and activity levels depend on the sequence of the protospacer and on specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM, followed by a constant length of the target site, for the two Cas9 orthologs, we created a reliable procedure to explore possible gRNA sequences. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: i) coding strand searching, ii) anti-coding strand searching, iii) both-strand searching, and iv) paired-gRNA searching. Finally, a complete list of all potential gRNAs, along with their locations, strands, and PAM sequence orientations, can be provided for SpCas9 as well as for another potential Cas9 ortholog (NmCas9). The in silico design of potential gRNAs in a genome of interest can accelerate functional genomic studies. Consequently, the application of such a novel genome editing tool (CRISPR/Cas technology) will be enhanced by its increased versatility and efficiency.
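
The strand-wise PAM scan described above can be sketched in a few lines; this is an illustrative assumption rather than the authors' pipeline, taking SpCas9's PAM as NGG with a 20-nt protospacer and NmCas9's as the commonly reported NNNNGATT with a 24-nt protospacer.

```python
# Minimal sketch of PAM-based gRNA target mining on both strands (not the authors' tool).
import re

PAMS = {"SpCas9": r"(?=([ACGT]{20})[ACGT]GG)",        # 20-nt protospacer + NGG (assumed)
        "NmCas9": r"(?=([ACGT]{24})[ACGT]{4}GATT)"}   # 24-nt protospacer + NNNNGATT (assumed)

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_guides(seq, system):
    pattern = PAMS[system]
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for m in re.finditer(pattern, s):
            # positions on the '-' strand are indexed on the reverse complement
            hits.append((system, strand, m.start(), m.group(1)))
    return hits

demo = "ATGCTGACGGTTAGCATCGGCCGATTACGTAGCTAGGCTAGCTAGGATTACGATCGATCGGATCGATTGG"
for hit in find_guides(demo, "SpCas9") + find_guides(demo, "NmCas9"):
    print(hit)
```

The paired-gRNA mode mentioned in the abstract would then filter these hits for pairs on opposite strands within a chosen spacing window.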

Keywords: CRISPR/Cas9 genome editing, gRNA mining, SpCas9, NmCas9

Procedia PDF Downloads 232
226 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures

Authors: Tomoko Fukuyama, Osamu Senbu

Abstract:

Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and corrosion of the PC steel are currently major issues in prestressed concrete structures. One obstacle to nondestructive corrosion detection of PC steel is the plastic duct that covers it; the insulating property of the duct makes nondestructive diagnosis difficult, so a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of this research is the development of an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete. Ideally, the measurements should be conducted from the surface of structural members so that the diagnosis is nondestructive. In the present experiment, a prestressed concrete member was simplified as a layered specimen to simulate the current path between an input and an output electrode on the member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied electric field was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which relate to charge reactions activated by the electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance, and the shape of the plot is a good way to grasp electrical relaxation. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time; hence, the Bode diagram is also used to describe charge reactions in the present paper. From the experimental results, the alternating current impedance method appears to be applicable to measurements on insulative materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reflect differences in the material configuration. This is because the charge mobility reflects the variety of substances, and the measuring frequency of the electric field determines the migration length of the charges under its influence. However, the technique could not distinguish differences in material thickness, from which it is inferred that identifying the size of an air void or the thickness of a corrosion product layer in prestressed concrete will be difficult.
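
For readers unfamiliar with how the two representations are obtained from the same measurement, the short Python sketch below computes the complex impedance of an idealized layered specimen (an electrolyte resistance in series with one parallel R-C element standing in for an insulating layer) over the 10⁶ Hz to 10⁻² Hz sweep and derives the Bode magnitude/phase and the Nyquist coordinates from it. The circuit values are hypothetical placeholders, not parameters fitted to the experiment.

    import numpy as np

    # Hypothetical circuit values standing in for one insulating layer.
    R_s, R_layer, C_layer = 1e2, 1e6, 1e-9     # ohm, ohm, farad (assumed)

    f = np.logspace(6, -2, 200)                # 10^6 Hz down to 10^-2 Hz
    omega = 2 * np.pi * f
    Z_layer = R_layer / (1 + 1j * omega * R_layer * C_layer)
    Z = R_s + Z_layer                          # total complex impedance

    magnitude_dB = 20 * np.log10(np.abs(Z))    # Bode magnitude
    phase_deg = np.degrees(np.angle(Z))        # Bode phase
    nyquist = np.column_stack((Z.real, -Z.imag))  # Nyquist plot coordinates

    print(f"at {f[0]:.0e} Hz: |Z| = {magnitude_dB[0]:.1f} dB, "
          f"phase = {phase_deg[0]:.1f} deg")
    print(f"at {f[-1]:.0e} Hz: |Z| = {magnitude_dB[-1]:.1f} dB, "
          f"phase = {phase_deg[-1]:.1f} deg")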

Keywords: capacitance, conductance, prestressed concrete, susceptance

Procedia PDF Downloads 390
225 Flexible Coupling between Gearbox and Pump (High Speed Machine)

Authors: Naif Mohsen Alharbi

Abstract:

This paper presents a failure that occurred on a flexible coupling installed in an oil and gas operation, together with the maintenance ideas implemented on the coupling, which transmits high torque from a gearbox to a pump. The machine train consists of a steam turbine that drives the pump, with a gearbox located between them for speed reduction. Objective: The main objectives of the investigation are to identify the root causes, resolve them, and develop the technology, design, or bad actor. Ultimately, the aim is to sustain operational productivity and to ensure better technology, quality, and design through the solutions. This report provides the study intentionally for continuous operation optimization, to utilize the advanced opportunity, and to implement improvement. Method: The method used in this project was a focused root cause analysis procedure that incorporated engineering analysis and measurements. The analysis covered measurement of the complete coupling dimensions, including membrane thickness, hubs, bore diameter, and total length; the flexible coupling was dismantled to diagnose how deeply it had been affected. Failure modes were defined so that the causes could be identified and verified, and vibration analysis and metallurgical testing were performed. Finally, several solutions were applied using advanced tools (described in detail). Results and observations: Design capacity: the coupling capacity is inadequate to cover 100% of the operating conditions; therefore, a design modification so that the service factor is at least 2.07 is crucial to address this issue and prevent recurrence of a similar scenario, especially for the new upgrading project. Discharge fluctuation: high torque on the flexible coupling was encountered during operation; therefore, the discharge valve behaviour, tuning, set point, and general condition were re-evaluated and modified, and the outcome can be used as a baseline for the upcoming coupling design project. Metallurgy test: the material of the flexible coupling membranes (discs) was tested in the laboratory as part of a detailed metallurgical investigation, and a better material grade was selected for the operating conditions.

Keywords: high speed machine, reliability, flexible coupling, rotating equipment

Procedia PDF Downloads 48
224 Study into the Interactions of Primary Limbal Epithelial Stem Cells and hTCEpi Using a Tissue Engineered Cornea

Authors: Masoud Sakhinia, Sajjad Ahmad

Abstract:

Introduction: Although knowledge of the composition and structure of the limbal niche has progressed substantially during the past decade, much remains to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimum conditions needed to expand LESCs effectively in vitro while preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a 3D collagen matrix. The effects of these two fibroblast types on LESCs and on the hTCEpi corneal epithelial cell line were subsequently determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model that could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before remarks on cell proliferation within the varying stroma can be made. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, while also exhibiting a relatively low expression level of P63, a marker for undifferentiated LESCs. Conclusion: We have shown the potential for the construction of a tissue engineered human cornea using a 3D collagen matrix and described preliminary results on the effects of stromata consisting of limbal and corneal fibroblasts, respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.

Keywords: cornea, limbal stem cells, tissue engineering, PCR

Procedia PDF Downloads 257
223 Insectivorous Medicinal Plant Drosera Ecology and Its Biodiversity Conservation through Tissue Culture and Sustainable Biotechnology

Authors: Sushil Pradhan

Abstract:

Biotechnology contributes to sustainable development in several ways, such as biofertilizer production, biopesticide production, management of environmental pollution, tissue culture, and biodiversity conservation in vitro, in vivo, and in situ. The insectivorous medicinal plant Drosera burmannii Vahl belongs to the family Droseraceae under the order Caryophyllales (Dicotyledoneae, Angiospermae), which has 31 (thirty-one) living genera and 194 species, besides 7 (seven) extinct (fossil) genera. Locally it is known as "Patkanduri" in Odia; its Hindi name is "Mukhajali" and its English name is "Sundew". The earliest species of Drosera, Drosera indica L. (Indian Sundew), was first reported in 1753 by Carolus Linnaeus. The most recently reported species, described by Fleischmann A., Robinson A. S., McPherson S., Heinrich V., Gironella E., and Madulid D. A. (2011), is Drosera ultramafica from Malaysia. More than 50% of Drosera species have been reported from Australia, followed by South Africa. India harbours only three species: D. indica L., D. burmannii Vahl, and D. peltata L. From Odisha, D. burmannii Vahl is reported here for the first time from the district of Subarnapur near Sonepur (Arjunpur Reserve Forest Area). The Drosera plant is autotrophic, but to supplement its nitrogen (N2) requirement it also adopts a heterotrophic (insectivorous/carnivorous) mode of nutrition. The plant is mostly red in colour and about 20-30 cm in height, with beautiful pink or white pentamerous flowers. Plants grow luxuriantly from November to February in shady, moist places near small running-water streams. Medicinally, it is a popular local herb used by traditional doctors (Kabiraj and Baidya) to treat cold and cough in children during the rainy season. In the present field investigation, an attempt has been made to understand the unique reproductive phase and life cycle of the plant, thereby planning for its conservation and propagation through various techniques of tissue culture and biotechnology. More importantly, besides morphological and anatomical studies, a cytological investigation is being carried out to determine the chromosome number and genomics of the species, as no such report yet exists for Drosera burmannii Vahl. The ecological significance and biodiversity conservation of Drosera, with special reference to energy, environmental, and chemical engineering, are discussed in this paper.

Keywords: insectivorous, medicinal, drosera, biotechnology, chromosome, genome

Procedia PDF Downloads 362
222 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo Vázquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One of the important requirements in many code generation projects is making some of the model parameters tunable. This allows the model parameters to be updated without performing the code generation again. This paper studies the concept of embedded code generation with the MATLAB/Simulink Coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, making parameters tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters of generated runtime modules tunable. Three techniques are examined for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, several test Simulink models were developed and used to evaluate the proposed approaches. A brief summary of the results follows. First, a parameter that is defined as tunable and used to set the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after code generation, and this update will affect the values of all elements defined in terms of the tunable parameter. For instance, if the parameter K=1 is defined as tunable in the code generation process and is used as the gain of a gain block in Simulink, the gain value of that block is equal to 1 in the TwinCAT environment after code generation; but the value of K can then be changed to a new value (e.g., K=2) in TwinCAT, without any new code generation in MATLAB, and the gain value of the gain block changes to 2. Secondly, adding a callback function in the form of a "pre-load function", "post-load function", or "start function" does not make the parameters tunable without performing a new code generation. Such MATLAB files are simply run before the code generation is performed, and the parameters defined or calculated in them are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model does not make the parameters flexible, since the MATLAB files are not attached to the generated code; to change the parameters defined or calculated in these files, the code generation has to be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask is an efficient method of changing the values of the latter parameters after code generation. For instance, if the tunable parameter K is used to calculate the values of two other parameters, K1 and K2, and the value of K is updated in the TwinCAT environment after code generation, the values of K1 and K2 are also updated without any new code generation.
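
The TwinCAT/Simulink toolchain itself cannot be reproduced here, so the following Python sketch only illustrates, by analogy, the distinction summarized above: a value frozen into generated code as a literal (the callback-style case) versus derived parameters resolved through an indirection layer at run time (the role the mask plays for K1 and K2). All names in the sketch are hypothetical.

    # Conceptual sketch only (Python, not the TwinCAT/Simulink API).

    def generate_code_fixed(K):
        """Mimics the callback-style workflow: K is evaluated once and the
        resulting literal is frozen into the generated module."""
        return f"def gain_block(u):\n    return u * {K}\n"   # K baked in as text

    class TunableModule:
        """Mimics the mask-style workflow: the generated module keeps a
        reference to the tunable parameter table and derives K1, K2 from K
        at run time."""
        def __init__(self, params):
            self.params = params               # shared, editable parameter table

        def gain_block(self, u):
            K = self.params["K"]
            K1, K2 = 2 * K, K + 1              # derived parameters track K
            return u * K, u * K1, u * K2

    params = {"K": 1}
    fixed_src = generate_code_fixed(params["K"])  # "code generation" with K = 1
    module = TunableModule(params)

    params["K"] = 2                               # edit after generation
    print(fixed_src)                              # still multiplies by 1
    print(module.gain_block(10.0))                # (20.0, 40.0, 30.0): follows K = 2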

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 204
221 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting

Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava

Abstract:

A fundamental investigation is performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM) in order to improve the specific strength and wear resistance of SS 316L. Firstly, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt.% Gr: increments of about 25% (from 194 to 242 HV) and 70% (from 502 to 850 MPa) were obtained in the hardness and yield strength of the composite, respectively. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman mapping results show a uniform distribution of graphene inside the matrix. The electron backscatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the building direction, and the grains become finer with the addition of Gr. Most mechanical components are subjected to several types of wear conditions, so improving the wear behaviour of the component is essential; hence, apart from strength and hardness, the tribological behaviour of the composite was also measured under dry sliding conditions. The solid lubrication provided by Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%; the surface roughness of the worn surface is also reduced by up to 70%, as measured by 3D surface profilometry. Finally, it can be concluded that SLM is an efficient method of fabricating cutting-edge metal matrix nanocomposites with reinforcements such as Gr, which are very difficult to fabricate through conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, because little literature is available in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.

Keywords: selective laser melting, graphene, composite, mechanical property, tribological property

Procedia PDF Downloads 111
220 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India

Authors: Kulin Dave, Kapil Mohan

Abstract:

Earthquakes are considered the most destructive rapid-onset disasters to which human beings are exposed. The losses they cause are sufficient reason for careful consideration in the design of structures and facilities. Seismic hazard analysis is one tool that can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zone map of India and thus has high seismicity, which is why it was selected for analysis. In total, bore-log data from 8 locations in and around Rapar district were studied. Soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses were carried out, with the Masing hysteretic re/unloading formulation, for comparison. The commercially available DEEPSOIL v. 7.0 software was used for the analysis. In this study, the ground response is quantified in terms of the acceleration time-history generated at the top of the soil column, the response spectrum at 5% damping, and the Fourier amplitude spectrum. Moreover, the variation with depth of peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio, and mobilized shear stress is also calculated. From the study, the PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives conservative estimates of maximum displacement compared to the EL analysis, while the maximum strains predicted by the two analyses are very close. Overall, the NL analysis is more effective and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and accounts for stresses mobilized due to pore water pressure.
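
As a minimal illustration of how the small-strain stiffness is obtained from the bore-log-derived properties, the Python sketch below applies the standard relation Gmax = ρVs² layer by layer. The layer names, densities, and shear wave velocities are hypothetical placeholders, not the Rapar bore-log values.

    # Minimal sketch: small-strain shear modulus from density and Vs.
    layers = [
        {"name": "silty clay",     "rho": 1800.0, "Vs": 180.0},  # kg/m^3, m/s (assumed)
        {"name": "sandy silt",     "rho": 1900.0, "Vs": 250.0},
        {"name": "weathered rock", "rho": 2200.0, "Vs": 760.0},
    ]

    for layer in layers:
        G_max = layer["rho"] * layer["Vs"] ** 2       # Gmax = rho * Vs^2, in Pa
        print(f"{layer['name']:>14s}: Gmax = {G_max / 1e6:8.1f} MPa")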

Keywords: DEEPSOIL v 7.0, ground response analysis, pressure-dependent modified Kondner-Zelasko model, MKZ model, response spectra, shear wave velocity

Procedia PDF Downloads 114
219 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance using Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales exhibit significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in the ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
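
A minimal Python sketch of the single-point estimate used in such studies is given below: the autocorrelation of the longitudinal fluctuation is integrated up to its first zero crossing to give an integral time scale, which Taylor’s hypothesis converts to a length via the local mean velocity. The synthetic AR(1) signal, sampling rate, and mean wind speed are placeholders for the SLTEST and CSIRO tower records.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 20.0                                    # sampling frequency, Hz (assumed)
    n = int(10 * 60 * fs)                        # 10-minute record
    U_mean = 8.0                                 # mean wind speed, m/s (assumed)

    # Synthetic longitudinal fluctuation u'(t): a simple AR(1) process used
    # only as a stand-in for the measured velocity time series.
    u_prime = np.empty(n)
    u_prime[0] = 0.0
    noise = rng.normal(scale=0.3, size=n)
    for i in range(1, n):
        u_prime[i] = 0.98 * u_prime[i - 1] + noise[i]

    # Normalized autocorrelation, integrated to its first zero crossing.
    acf = np.correlate(u_prime, u_prime, mode="full")[n - 1:]
    acf /= acf[0]
    zero = np.argmax(acf <= 0.0)
    zero = zero if zero > 0 else acf.size        # fall back to the full record
    T_u = acf[:zero].sum() / fs                  # integral time scale, s (rectangle rule)
    L_u = U_mean * T_u                           # Taylor's hypothesis: L_u = U * T_u

    print(f"T_u = {T_u:.2f} s, L_u = {L_u:.1f} m")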

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 108
218 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, the efficient characterization of extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology, which allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although, in the case of exceedances of significant wave data (H_s), the 2-parameter Weibull distribution and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto distribution, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u, and references that treat it as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the 2-parameter Weibull, and the Exponential distributions are presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for efficient estimation of very-low-occurrence events.
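
A minimal Python sketch of the comparison described above follows: exceedances of H_s over a threshold u are fitted with the Generalised Pareto, 2-parameter Weibull, and Exponential distributions (location fixed at the threshold) and ranked by AIC. The synthetic H_s record, the 95th-percentile threshold choice, and the use of AIC as the ranking criterion are assumptions for illustration; they do not reproduce the buoy analysis.

    import numpy as np
    from scipy import stats

    # Synthetic significant wave height record (placeholder for buoy data).
    rng = np.random.default_rng(1)
    h_s = stats.weibull_min.rvs(c=1.6, scale=1.8, size=20_000, random_state=rng)

    u = np.quantile(h_s, 0.95)                  # threshold u (assumed choice)
    excess = h_s[h_s > u] - u                   # exceedances over the threshold

    candidates = {
        "Generalised Pareto": stats.genpareto,
        "2-parameter Weibull": stats.weibull_min,
        "Exponential": stats.expon,
    }
    for name, dist in candidates.items():
        params = dist.fit(excess, floc=0)       # location fixed at the threshold
        k = len(params) - 1                     # free parameters (loc is fixed)
        ll = dist.logpdf(excess, *params).sum() # log-likelihood of the fit
        aic = 2 * k - 2 * ll
        print(f"{name:>20s}: AIC = {aic:8.1f}")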

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 242