Search results for: rigidity constraint
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 573

63 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, ignoring the risk of an unbalanced grid – a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10% – important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output - rather than extrapolating historic data - paints a more realistic picture and reveals important departures from results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid - and assigning it different thresholds - reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
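A CVaR constraint of the kind described above can be linearized with the standard Rockafellar-Uryasev auxiliary variables and solved as a linear program. The following is a minimal illustrative sketch, not the author's model: it sizes three hypothetical technologies (wind, solar, dispatchable) so that the CVaR of the supply shortfall at a 5% risk threshold stays non-positive, using made-up costs and capacity-factor scenarios.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n = 200, 3                      # weather scenarios, technologies
alpha = 0.95                       # CVaR confidence level (5% risk threshold)
demand = 100.0                     # load to cover (MW)
cost = np.array([1.0, 0.8, 3.0])   # per-MW capacity cost: wind, solar, dispatchable

# Per-unit output (capacity factor) of each technology in each scenario
cf = np.column_stack([rng.uniform(0.0, 1.0, S),   # wind
                      rng.uniform(0.0, 0.5, S),   # solar
                      np.ones(S)])                # dispatchable: always available

# Decision variables: x (n capacities), t (VaR level), u (S shortfall slacks)
# Rockafellar-Uryasev: CVaR = t + (1/((1-alpha)*S)) * sum(u_s) <= cvar_limit
cvar_limit = 0.0
c = np.concatenate([cost, [0.0], np.zeros(S)])

# u_s >= demand - cf_s . x - t   rewritten as   -cf_s.x - t - u_s <= -demand
A1 = np.hstack([-cf, -np.ones((S, 1)), -np.eye(S)])
b1 = -demand * np.ones(S)
# CVaR constraint row
A2 = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
A_ub = np.vstack([A1, A2])
b_ub = np.concatenate([b1, [cvar_limit]])

bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S  # t is free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x = res.x[:n]
print("capacities:", np.round(x, 1), "cost:", round(res.fun, 1))
```

Storage and cross-border exchange decisions would extend the same formulation; the key point is that the CVaR constraint remains linear in the scenario variables, so tightening the risk threshold simply reshapes the feasible region rather than changing the solver.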

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 153
62 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for magnetic fields in the universe is very large, because the field strengths and especially their orientations carry big uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the most straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle tracking method conserve the particle's total energy, so that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects.
Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses are treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with the typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy (with energy losses). Additionally, we plot the distance amplification relative to rectilinear propagation as a function of the traveled distance and of the particle's magnetic rigidity (without energy losses) or the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
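A standard choice for the energy-conserving tracking requirement discussed above is the Boris push, whose pure-magnetic update is an exact rotation of the momentum vector and therefore preserves its norm. The sketch below is our illustration, not the authors' code: it propagates a 1 EeV proton through a uniform 1 nG field and recovers the expected Larmor radius of roughly 1.1 Mpc.

```python
import numpy as np

Q = 1.602176634e-19              # proton charge (C)
M_P = 1.67262192e-27             # proton rest mass (kg)
C = 2.99792458e8                 # speed of light (m/s)
MPC = 3.0857e22                  # meters per megaparsec
E_EV = 1.0e18                    # 1 EeV proton
B = np.array([0.0, 0.0, 1e-13])  # uniform 1 nG field along z (T)

# Ultra-relativistic: |p| ~ E/c.  Boris rotation conserves |p| exactly.
p = np.array([E_EV * Q / C, 0.0, 0.0])  # initial momentum along x (kg m/s)
x = np.zeros(3)
r_larmor = np.linalg.norm(p) / (Q * np.linalg.norm(B))  # analytic radius (m)

def boris_step(x, p, dt):
    """Pure-magnetic Boris push: rotate p about B, then drift the position."""
    gamma_m = np.sqrt((np.linalg.norm(p) / C) ** 2 + M_P ** 2)  # gamma * m
    t = (Q * dt / (2.0 * gamma_m)) * B
    p_prime = p + np.cross(p, t)
    p_new = p + np.cross(p_prime, 2.0 * t / (1.0 + t @ t))
    return x + (p_new / gamma_m) * dt, p_new

period = 2.0 * np.pi * r_larmor / C       # gyration period (v ~ c)
dt = period / 2000.0
traj = [x.copy()]
for _ in range(2000):                      # one full gyration
    x, p = boris_step(x, p, dt)
    traj.append(x.copy())
traj = np.array(traj)

print("Larmor radius: %.2f Mpc" % (r_larmor / MPC))
```

In a cell-based field model, `B` would be looked up (and orientation-blended near cell borders) at each step instead of held constant, but the norm-preserving rotation is unchanged, which is exactly the property the tracking method needs.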

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 117
61 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania Sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortage of fodder during the dry season is a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which can diversify the sources of feed and increase the amount of protein for grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole Panicum maximum, Panicum maximum + Sesbania sesban, Panicum maximum + pigeonpea, sole Sesbania sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, whilst pigeonpea was direct seeded at a spacing of 1 m x 1 m. P. maximum seeds were drilled at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m; in between the rows of trees, P. maximum seeds were drilled. The dry matter yield harvesting times were separated by a six-month timeframe. A 0.25 m² quadrant randomly placed on 3 points per plot was used as the sampling area when harvesting P. maximum. There was a significant difference (P < 0.05) across the 3 harvests and in total dry matter. P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total; the second and third harvests showed no significant difference from the pigeonpea intercrop. The results were in this order for all 3 harvests: P. maximum (541.2c, 1209.3b and 1557b) kg ha⁻¹ ≥ P. maximum + pigeonpea (157.2b, 926.7b and 1129b) kg ha⁻¹ > P. maximum + S. sesban (36.3a, 282a and 555a) kg ha⁻¹. Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was a significant difference (P < 0.05) in seed yield for the trees: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum.
The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land: P. maximum + pigeonpea saves a substantial share (37%) of land, which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Panicum grass is low in crude protein though high in carbohydrates, so there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea; this will provide a balanced diet at low cost.
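The LER figure reported above follows directly from the component yields (intercrop yield divided by sole-crop yield, summed over crops). As a check, this small sketch of ours recomputes the P. maximum + pigeonpea ratio from the totals given in the abstract; the small difference from the published 1.37 reflects rounding in the source values.

```python
def land_equivalent_ratio(intercrop_yields, sole_yields):
    """LER = sum over component crops of (intercrop yield / sole yield).
    LER > 1 means the intercrop uses land more efficiently than sole crops."""
    return sum(i / s for i, s in zip(intercrop_yields, sole_yields))

# Totals reported above (kg/ha): grass dry matter and pigeonpea seed yield
ler = land_equivalent_ratio(
    intercrop_yields=[2212.0, 862.7],   # P. maximum + pigeonpea plots
    sole_yields=[3307.0, 1240.3],       # sole P. maximum, sole pigeonpea
)
print(round(ler, 2))  # close to the reported 1.37
```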

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 138
60 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components

Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia

Abstract:

Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts; more than 90% of the material may be removed in the milling operation to obtain the final shape. This results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after releasing the clamps. It is a major challenge in the machining of monolithic parts, causing billions in economic losses every year. Three sources are directly related to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate AA7175 with a thickness of 60 mm was used for the raw block. The initial residual stress distribution in the block was measured using a layer-removal method. A stress-mapping technique was developed to implement the initial stress distribution into the part; it is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied on the machined surface of a plate to predict the distortion, and the predicted distortion was compared with experimental results. It is found that the effect of the machining-induced residual stress on the distortion of a thick plate is very limited; the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, and very few studies on this topic have been reported in the literature.
A coupled thermo-mechanical FE model was developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process. When the heat source passed over the part surface, a thin layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. It was found that the initial residual stress makes the major contribution to the part distortion. The machining-induced stress has limited influence on the distortion when the wall thickness is larger than a certain value, and the thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures; the predictions were compared with the experimental measurements, showing good agreement. By optimizing the position of the part inside the raw plate using the developed numerical models, the part distortion can be reduced by 50%.

Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses

Procedia PDF Downloads 39
59 Design and Manufacture of Removable Nosecone Tips with Integrated Pitot Tubes for High Power Sounding Rocketry

Authors: Bjorn Kierulf, Arun Chundru

Abstract:

Over the past decade, collegiate rocketry teams have emerged across the country with various goals: space, liquid-fueled flight, etc. A critical piece of the development of knowledge within a club is the use of so-called "sounding rockets," whose goal is to take in-flight measurements that inform future rocket design. Common measurements include acceleration from inertial measurement units (IMUs) and altitude from barometers. With a properly tuned filter, these measurements can be used to estimate velocity, but they are susceptible to noise, offset, and filter settings. Instead, velocity can be measured more directly and more instantaneously using a pitot tube, which operates by measuring the stagnation pressure. At supersonic speeds, an additional thermodynamic property is necessary to constrain the upstream state; one possibility is the stagnation temperature, measured by a thermocouple in the pitot tube. The routing of the pitot tube from the nosecone tip down to a pressure transducer is complicated by the nosecone's structure. Commercial off-the-shelf (COTS) nosecones come with a removable metal tip (without a pitot tube). This provides the opportunity to make custom tips with integrated measurement systems without making the nosecone from scratch. The main design constraint is how the nosecone tip is held down onto the nosecone: using the tension in a threaded rod anchored to a bulkhead below. Because the threaded rod connects into a threaded hole in the center of the nosecone tip, the pitot tube follows a winding path, and the pressure fitting is off-center. Two designs will be presented in the paper: one with a curved pitot tube, and a coaxial design that eliminates the need for the winding path by routing pressure through a structural tube. Additionally, three manufacturing methods will be presented for these designs: bound powder filament metal 3D printing, stereolithography (SLA) 3D printing, and traditional machining.
These employ three different materials: copper, steel, and a proprietary resin. These manufacturing methods and materials are relatively low cost and thus accessible to student researchers. The designs and materials cover multiple use cases, based on how fast the sounding rocket is expected to travel and how important heating effects are - both to measure and to avoid melting. This paper will include drawings showing key features and an overview of the design changes necessitated by each manufacturing method. It will also include a look at the successful use of these nosecone tips and the data they have gathered to date.
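For subsonic flight, the stagnation-pressure measurement maps to airspeed through the compressible pitot relation, and the stagnation temperature mentioned above then fixes the static temperature. The following is a minimal sketch with made-up transducer readings, not the team's flight code:

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air (J/(kg K))

def velocity_from_pitot(p_total, p_static, t_total):
    """Subsonic compressible pitot relation: recover the Mach number from the
    stagnation/static pressure ratio, then airspeed via the measured
    stagnation temperature. Valid below Mach 1 (no normal shock)."""
    mach = math.sqrt((2.0 / (GAMMA - 1.0)) *
                     ((p_total / p_static) ** ((GAMMA - 1.0) / GAMMA) - 1.0))
    t_static = t_total / (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)
    return mach * math.sqrt(GAMMA * R_AIR * t_static)

# Hypothetical readings: 140 kPa stagnation vs 101.3 kPa static, T0 = 300 K
v = velocity_from_pitot(140e3, 101.3e3, 300.0)
print(round(v), "m/s")
```

Above Mach 1 a bow shock forms ahead of the probe, and the Rayleigh pitot formula replaces the isentropic ratio used here - which is why the supersonic case needs the extra thermocouple measurement discussed in the abstract.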

Keywords: additive manufacturing, machining, pitot tube, sounding rocketry

Procedia PDF Downloads 154
58 Stroke Prevention in Patients with Atrial Fibrillation and Co-Morbid Physical and Mental Health Problems

Authors: Dina Farran, Mark Ashworth, Fiona Gaughran

Abstract:

Atrial fibrillation (AF), the most prevalent cardiac arrhythmia, is associated with an increased risk of stroke, contributing to heart failure and death. In this project, we aim to improve patient safety by screening for stroke risk among people with AF and co-morbid mental illness. To do so, we started by conducting a systematic review and meta-analysis on prevalence, management, and outcomes of AF in people with Serious Mental Illness (SMI) versus the general population. We then evaluated oral anticoagulation (OAC) prescription trends in people with AF and co-morbid SMI in King’s College Hospital. We also evaluated the association between mental illness severity and OAC prescription in eligible patients in South London and Maudsley (SLaM) NHS Foundation Trust. Next, we implemented an electronic clinical decision support system (eCDSS) consisting of a visual prompt on patient electronic Personal Health Records to screen for AF-related stroke risk in three Mental Health of Older Adults wards at SLaM. Finally, we assessed the feasibility and acceptability of the eCDSS by qualitatively investigating clinicians’ perspectives of the potential usefulness of the eCDSS (pre-intervention) and their experiences and their views regarding its impact on clinicians and patients (post-intervention). The systematic review showed that people with SMI had low reported rates of AF. AF patients with SMI were less likely to receive OAC than the general population. When receiving warfarin, people with SMI, particularly bipolar disorder, experienced poor anticoagulation control compared to the general population. Meta-analysis showed that SMI was not significantly associated with an increased risk of stroke or major bleeding when adjusting for underlying risk factors. The main findings of the first observational study were that among AF patients having a high stroke risk, those with co-morbid SMI were less likely than non-SMI to be prescribed any OAC, particularly warfarin. 
After 2019, there was no significant difference between the two groups. In the second observational study, patients with AF and co-morbid SMI were less likely to be prescribed any OAC compared to those with dementia, substance use disorders, or common mental disorders, adjusting for age, sex, stroke, and bleeding risk scores. Among AF patients with co-morbid SMI, warfarin was less likely to be prescribed to those with alcohol or substance dependency, serious self-injury, hallucinations or delusions, and impairment of activities of daily living. In the intervention, clinicians were asked to confirm the presence of AF, clinically assess stroke and bleeding risks, record risk scores in clinical notes, and refer patients at high risk of stroke to OAC clinics. Clinicians reported many potential benefits of the eCDSS, including improved clinical effectiveness, better identification of patients at risk, safer and more comprehensive care, consistency in decision making, and time savings. Identified potential risks included rigidity in decision-making, overreliance, reduced critical thinking, false positive recommendations, annoyance, and increased workload. This study presents a unique opportunity to quantify AF patients with mental illness who are at high risk of severe outcomes using electronic health records. This has the potential to improve health outcomes and, therefore, patients' quality of life.
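The stroke-risk assessment step above typically rests on the CHA₂DS₂-VASc score used for AF patients. A minimal sketch of that standard scoring rule might look as follows; this is our illustration, not the study's eCDSS code.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation.
    Age >= 75 and prior stroke/TIA score 2 points; other factors score 1."""
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0                # congestive heart failure
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

# Example patient: 78-year-old woman with hypertension
print(cha2ds2_vasc(78, True, False, True, False, False, False))  # 4
```

A visual prompt like the one deployed in the study would surface such a score alongside a bleeding-risk score, leaving the anticoagulation decision to the clinician.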

Keywords: atrial fibrillation, stroke, mental health conditions, electronic clinical decision support systems

Procedia PDF Downloads 36
57 On-Site Coaching of Freshly Graduated Nurses to Improve the Quality of Clinical Handover and Avoid Clinical Error

Authors: Sau Kam Adeline Chan

Abstract:

The World Health Organization has listed 'Communication during Patient Care Handovers' as one of its top five patient safety initiatives. Clinical handover means the transfer of accountability and responsibility for clinical information from one health professional to another. The main goal of clinical handover is to convey the patient's current condition and treatment plan accurately. Ineffective communication at the point of care is globally regarded as the main cause of sentinel events. Situation, Background, Assessment and Recommendation (SBAR), a communication tool, is widely regarded as effective in healthcare settings. Nonetheless, scenario-based programs in nursing school or attendance at workshops on SBAR are not enough for freshly graduated nurses to apply it competently in complex clinical practice. To what extent and in what depth information should be conveyed during the handover process is not easy to learn. As such, on-site coaching is essential to upgrade their expertise in the usage of SBAR and, ultimately, to avoid clinical error. On-site coaching for all freshly graduated nurses on the usage of SBAR in clinical handover commenced in August 2014. During the preceptorship period, freshly graduated nurses were coached by a preceptor. After that, they were gradually assigned to take care of a group of patients independently. Nurse leaders would join their shift handover process at the patient's bedside; feedback and support were given to them accordingly. Discrepancies in their clinical handover process were shared with them and documented for further improvement work. Owing to manpower constraints among nurse leaders, about 30 coaching sessions were provided to each nurse in a year. A staff satisfaction survey was conducted to gauge their feelings about the coaching and to look into areas for further improvement. The number of clinical errors avoided was documented as well.
The nurses reported a significant improvement, particularly in their confidence and knowledge in the clinical handover process. In addition, a sense of empowerment developed when liaising with senior and experienced nurses. Their proficiency in applying SBAR was enhanced, and they became more alert to the critical criteria of an effective clinical handover. Most importantly, the accuracy of transferring the patient's condition improved and repetition of information was avoided. Clinical errors were prevented and quality patient care was ensured. Using SBAR as a communication tool looks simple; the tool only provides a framework to guide the handover process. Nevertheless, without on-site training, loopholes in clinical handover still exist, patient safety is affected, and clinical errors still happen.

Keywords: freshly graduated nurse, competency of clinical handover, quality, clinical error

Procedia PDF Downloads 138
56 Virtual Engineers on Wheels: Transitioning from Mobile to Online Outreach

Authors: Kauser Jahan, Jason Halvorsen, Kara Banks, Kara Natoli, Elizabeth McWeeney, Brittany LeMasney, Nicole Caramanna, Justin Hillman, Christopher Hauske, Meghan Sparks

Abstract:

Virtual Engineers on Wheels (ViEW) is a revised version of our established mobile K-12 outreach program, Engineers on Wheels, developed in response to the pandemic. The goal of ViEW has stayed the same as in prior years: to provide K-12 students and educators with the necessary resources to pique interest in the expanding fields of engineering. In these trying times, the Virtual Engineers on Wheels outreach has adapted its medium of instruction to be more seamless with the online approach to teaching and outreach. In the midst of COVID-19, providing a safe transfer of information has become a constraint for research. The focus has become how to uphold a level of quality instruction without diminishing the safety of those involved, by promoting proper health practices and giving hope to students as well as their families. Furthermore, ViEW has created resources on effective strategies that minimize the risk factors of COVID-19 and inform families that there is still a promising future ahead. To attain these goals while staying true to the hands-on learning that is so crucial to young minds, the approach is online video lectures followed by experiments within different engineering disciplines. ViEW has created a comprehensive website that students can use to explore the different fields of study. One of the experiments entails teaching about drone usage and how it might play a factor in the future of unmanned deliveries. Some of the other experiments focus on the differences in mask materials and their effectiveness, as well as their environmental outlook. Having students perform experiments from home gives them a safe environment to learn at their own pace while still providing the quality instruction that would normally be achieved in the classroom. Contact information is readily available on the website to provide interested parties with a means to ask questions.
As it currently stands, women and certain minority groups are underrepresented in engineering/STEM-related fields, so alongside the desire to grow interest, helping balance the scales is one of the main priorities of ViEW. In previous years, ViEW surveyed students before and after instruction to see whether their perception of engineering had changed. In general, it is understood that being exposed to engineering/STEM at a young age increases the chances that it will be pursued later in life.

Keywords: STEM, engineering outreach, teaching pedagogy, pandemic

Procedia PDF Downloads 117
55 A Method to Identify the Critical Delay Factors for Building Maintenance Projects of Institutional Buildings: Case Study of Eastern India

Authors: Shankha Pratim Bhattacharya

Abstract:

In general, building repair and renovation projects are minor in nature and receive less attention, as the primary cost involvement is relatively small. Although building repair and maintenance projects look simple, they involve much complexity during execution. Much of the existing research indicates that uncertain situations are usually linked with maintenance projects; these may not be read properly in the planning stage and finally lead to time overrun. Building repair and maintenance become essential and periodic after commissioning of the building. In institutional buildings, regular maintenance projects also include addition, alteration, and modification activities. Increases in student admissions, new departments and sections, new laboratories and workshops, and upgrading of existing laboratories are very common in institutional buildings in developing nations like India. Such projects become very critical because they involve space problems, architectural design issues, structural modification, etc. One of the prime factors in institutional building maintenance and modification projects is the time constraint: mostly, the work is required to be executed within a specific non-working period. The present research considered only institutional buildings in the eastern part of India to analyse repair and maintenance project delays. A general survey was conducted among technical institutes to find the causes and corresponding nature of construction delay factors. Five technical institutes are considered in the present study, with repair, renovation, modification and extension types of projects. Construction delay factors are categorically subdivided into four groups: material, manpower (works), contract and site. The survey data were collected on the nature of the delay responsible for a specific project and on the absolute amount of delay through the proposed and actual duration of work.
In the first stage of the paper, a relative importance index (RII) is proposed for the delay factors. The occurrence of the delay factors is also judged by their frequency and severity. The delay factors are then rated and linked with the type of work. In the second stage, a regression analysis is executed to establish an empirical relationship between the actual time of a project and the percentage of delay; it also indicates the impact of the factors on delay responsibility. Ultimately, the present paper makes an effort to identify the critical delay factors for repair and renovation projects in eastern Indian institutional buildings.
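The RII used in the first stage is conventionally computed as RII = ΣW / (A × N), where W are the respondents' ratings of a factor, A is the highest possible rating, and N is the number of responses. A short sketch with hypothetical survey data (our illustration, not the study's data set):

```python
def relative_importance_index(ratings, max_rating=5):
    """RII = sum(ratings) / (max_rating * number_of_responses).
    Ranges over (0, 1]; higher values mean a more critical delay factor."""
    return sum(ratings) / (max_rating * len(ratings))

# Hypothetical respondent ratings (1 = least severe, 5 = most severe)
factors = {
    "material delivery delay": [5, 4, 4, 3, 5],
    "site access restriction": [2, 3, 2, 1, 3],
}
ranked = sorted(factors, key=lambda f: relative_importance_index(factors[f]),
                reverse=True)
for name in ranked:
    print(name, round(relative_importance_index(factors[name]), 2))
```

Ranking factors by RII in this way yields the ordered list that the second-stage regression then relates to the observed percentage of delay.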

Keywords: delay factor, institutional building, maintenance, relative importance index, regression analysis, repair

Procedia PDF Downloads 241
54 Regenerating Habitats: Housing Based on Modular Wooden Systems

Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez

Abstract:

Despite the ambitions to achieve climate neutrality by 2050 and fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the alternatives most frequently used nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon sink, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, in the last few decades technological advancements have made it possible to innovate systems centered around the use of wood. However, some questions still require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, prices, and waste of raw materials. In addition, this approach will make it possible to develop new architectural solutions to address the rigidity and irreversibility of buildings, two of the most important issues facing housing today.
Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the traditional model of the conventional family and are founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and to themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern items. Migrations on a global scale, different kinds of co-housing, and even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unlocks multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to address the formalization of a reversible proposition that adjusts to the scale of time and its multiple reformulations, many of which are unpredictable. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. It is the ability to anticipate the unexpected, adapt to social factors, and take account of demographic shifts in society to stabilize communities that forms the foundation of truly innovative sustainability.

Keywords: modular, timber, flexibility, housing

Procedia PDF Downloads 60
53 A Simple Chemical Approach to Regenerating Strength of Thermally Recycled Glass Fibre

Authors: Sairah Bashir, Liu Yang, John Liggat, James Thomason

Abstract:

Glass fibre is currently used as reinforcement in over 90% of all fibre-reinforced composites produced. The high rigidity and chemical resistance of these composites are required for optimum performance but unfortunately result in poor recyclability; when such materials are no longer fit for purpose, they are frequently deposited in landfill sites. Recycling technologies, for example, thermal treatment, can be employed to address this issue; temperatures typically between 450 and 600 °C are required to allow degradation of the rigid polymeric matrix and subsequent extraction of fibrous reinforcement. However, due to the severe thermal conditions utilised in the recycling procedure, glass fibres become too weak for reprocessing in second-life composite materials. In addition, more stringent legislation is being put in place regarding disposal of composite waste, and so it is becoming increasingly important to develop long-term recycling solutions for such materials. In particular, the development of a cost-effective method to regenerate the strength of thermally recycled glass fibres will have a positive environmental effect, as a reduced volume of composite material will be destined for landfill. This research study has demonstrated the positive impact of sodium hydroxide (NaOH) and potassium hydroxide (KOH) solution, prepared at relatively mild temperatures and at concentrations of 1.5 M and above, on the strength of heat-treated glass fibres. As a result, alkaline treatments can potentially be applied to glass fibres that are recycled from composite waste to allow their reuse in second-life materials. The optimisation of the strength recovery process is being conducted by varying certain reaction parameters such as molarity of the alkaline solution and treatment time. It is believed that deep V-shaped surface flaws exist commonly on severely damaged fibre surfaces and are effectively removed to form smooth, U-shaped structures following alkaline treatment. 
Although these surface flaws have long been believed to be present on glass fibres, they had not previously been observed directly; in this research investigation they have now been identified through analytical techniques such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Reaction conditions such as the molarity of the alkaline solution affect the degree of etching of the glass fibre surface, and therefore the extent to which fibre strength is recovered. A novel method for determining the etching rate of glass fibres after alkaline treatment has been developed, and the data acquired can be correlated with strength. By varying reaction conditions such as alkaline solution temperature and molarity, the activation energy of the glass etching process and the reaction order can be calculated, respectively. The promising results obtained from NaOH and KOH treatments have opened an exciting route to strength regeneration of thermally recycled glass fibres, and the optimisation of the alkaline treatment process is being continued in order to produce recycled fibres with properties that match original glass fibre products. The reuse of such glass filaments indicates that closed-loop recycling of glass fibre reinforced composite (GFRC) waste can be achieved. In fact, the development of a closed-loop recycling process for GFRC waste is already underway in this research study.
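The activation-energy determination described above is the standard Arrhenius treatment: measure the etch rate at several solution temperatures and fit ln(rate) against 1/T. A minimal sketch of that fit, using synthetic rate data rather than the study's measurements:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_K, etch_rates):
    """Least-squares fit of ln(rate) = ln(A) - Ea/(R*T); returns Ea in kJ/mol."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(r) for r in etch_rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1000.0

# Synthetic etch rates generated from an assumed Ea of 60 kJ/mol
temps = [313.0, 333.0, 353.0]            # treatment temperatures, K
rates = [1e6 * math.exp(-60000.0 / (R * T)) for T in temps]
Ea = activation_energy(temps, rates)     # recovers ~60 kJ/mol
```

The same fitting idea applied at fixed temperature but varying molarity would give the reaction order, from the slope of ln(rate) versus ln(concentration).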

Keywords: glass fibers, glass strengthening, glass structure and properties, surface reactions and corrosion

Procedia PDF Downloads 243
52 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework: to evaluate the mortality of cancer cells under treatment, to minimize side effects causing toxicity-induced tumorigenesis in normal cells, and to minimize metabolic perturbations. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem, using two nutrient media in turn to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM. 
Two additional media, one with a cholesterol uptake reaction added to DMEM and one with it excluded from Ham's medium, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were identifiable if no cholesterol uptake reaction was induced while the cells were in the culture medium; however, they became unidentifiable if such a reaction was induced.
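The fuzzy conversion sketched above turns each objective into a membership grade between 0 and 1 and then maximizes the smallest grade. A generic max-min decision sketch, with invented candidate targets and scores (this is not the authors' trilevel formulation, only the underlying fuzzy-decision idea):

```python
def membership(value, worst, best):
    """Linear fuzzy membership grade: 0 at the worst outcome, 1 at the best."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def maxmin_decision(candidates, objectives):
    """Max-min decision: pick the candidate whose weakest objective is strongest.
    objectives is a list of (score_function, worst, best) tuples."""
    def overall(c):
        return min(membership(g(c), w, b) for g, w, b in objectives)
    best_c = max(candidates, key=overall)
    return best_c, overall(best_c)

# Hypothetical knockout candidates with invented scores
candidates = [
    {"name": "target_A", "cancer_kill": 0.9, "normal_toxicity": 0.3},
    {"name": "target_B", "cancer_kill": 0.6, "normal_toxicity": 0.1},
]
objectives = [
    (lambda c: c["cancer_kill"], 0.0, 1.0),            # maximize cancer-cell mortality
    (lambda c: 1.0 - c["normal_toxicity"], 0.0, 1.0),  # minimize harm to normal cells
]
best, level = maxmin_decision(candidates, objectives)
```

Here target_A wins because its weakest grade (0.7) beats target_B's weakest grade (0.6); the max-min rule trades peak performance for balance across objectives.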

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 65
51 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg

Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma

Abstract:

The recent surge in residential infrastructure development around the metropolitan areas of South Africa has necessitated conditions for thorough geotechnical assessments to be conducted prior to site developments to ensure human and infrastructure safety. This paper appraises the success in the application of multi-method geophysical techniques for the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint at Venterspost town, west of Johannesburg. The combination and integration of results from different geophysical techniques proved useful for delineating the lithologic succession around the sinkhole locality and for determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. Study results have also assisted in determining the possible depth extension of the currently existing sinkhole and the location of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES and MASW surveys have uncovered dolomitic bedrock at varying depths around the sites, which exhibits high resistivity values in the range 2500-8000 ohm.m and corresponding high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered, chert-poor dolomite layer, which has resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, from which the large sinkhole has been found to collapse/cave in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created using 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 
3D magnetic models of the site highlighted the regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole with increasing depth. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the currently existing sinkhole.

Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES

Procedia PDF Downloads 58
50 Developing Pedagogy for Argumentation and Teacher Agency: An Educational Design Study in the UK

Authors: Zeynep Guler

Abstract:

Argumentation and the production of scientific arguments are essential components for helping students become scientifically literate by engaging them in constructing and critiquing ideas. Incorporating argumentation into science classrooms is challenging and can be a long-term process for both students and teachers. Students have difficulty engaging in tasks that require them to craft arguments, evaluate them to seek weaknesses, and revise them. Teachers also struggle with facilitating argumentation when they have underdeveloped science practices, underdeveloped pedagogical knowledge for argumentation science teaching, or underdeveloped teaching practice with argumentation (or a combination of all three). Thus, there is a need to support teachers in developing pedagogy for science teaching as argumentation, in planning and implementing teaching practice for facilitating argumentation, and in becoming more agentic in this regard. Looking specifically at the experience of agency within education, it is arguable that agency is necessary for teachers’ renegotiation of professional purposes and practices in the light of changing educational practices. This study investigated how science teachers develop pedagogy for argumentation, both individually and with their colleagues, and how teachers become more agentic (or not) through active engagement with their contexts-for-action, referred to here as an ecological understanding of agency, in order to positively influence or change their practice and their students' engagement with argumentation over two academic years. 
Through an educational design study, this research was conducted with three secondary science teachers (Key Stage 3, Year 7 students aged 11-12) in the UK to find out whether similar or different patterns of developing pedagogy for argumentation and of becoming more agentic emerge as teachers engage in planning and implementing a cycle of activities during the practice of teaching science with argumentation. Data from video and audio recordings of classroom practice and open-ended interviews with the science teachers were analysed using content analysis. The findings indicated that all the science teachers perceived strong agency in their opportunities to develop and apply pedagogical practices within the classroom. The teachers were proactively shaping their practices and classroom contexts in ways that went over and above the amendments to their pedagogy. They demonstrated some outcomes in developing pedagogy for argumentation and becoming more agentic in their teaching in this regard as a result of the collaboration with their colleagues and the researcher; some appeared more agentic than others. The role of collaboration with colleagues was seen as crucial for the teachers’ practice in the schools: close collaboration and support from other teachers in planning and implementing new educational innovations were seen as crucial for the development of pedagogy and for becoming more agentic in practice. Teachers needed to understand the importance of scientific argumentation but also how it can be planned and integrated into classroom practice. They also perceived constraints emerging from their lack of competence and knowledge in posing appropriate questions to help the students engage in argumentation and in providing support for the students' construction of oral and written arguments.

Keywords: argumentation, teacher professional development, teacher agency, students' construction of argument

Procedia PDF Downloads 122
49 Facies Sedimentology and Astronomic Calibration of the Reineche Member (Lutetian)

Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich

Abstract:

The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited on a warm, shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-meter-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken with regard to its pronounced cyclical sedimentary sequences, in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic signals are preserved in several proxies obtainable through high-resolution sampling and laboratory measurement and analysis, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of the proxies makes it possible to establish the orders of cyclicity present in the studied intervals, which can be linked to orbital cycles. MS records provide high-resolution proxies for relative sea-level change in Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed orbital forcing through the presence of the complete suite of orbital frequencies: the precession of 23 ka, the obliquity of 41 ka, and notably the two modes of eccentricity of 100 and 405 ka. Based on the two periodic sedimentary cycles detected by wavelet analysis of proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to contribute to identifying and correlating units. They are used to constrain the highest-frequency cyclicity, which is modulated by a long-wavelength cycle apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, the marl–limestone couplets are suggested to represent the sedimentary response to orbital forcing. 
The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate content and fossil occurrences provides strong evidence for combined detrital-input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and ‘fingerprinted’ in the basic marl–limestone couplets, modulated by orbital eccentricity.
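The spectral detection described above can be illustrated with a simple periodogram: build a proxy series, remove the mean, and read off the strongest periodicities. The series below is synthetic, composed from the four orbital periods reported in the abstract, with an assumed 1 ka sampling step over the estimated 0.8 Myr duration:

```python
import numpy as np

def dominant_periods(signal, dt, n_peaks=4):
    """Return the n_peaks strongest periodicities (in units of dt) from an FFT
    periodogram. Caveat: strong spectral leakage can make adjacent bins of one
    component outrank a genuinely weaker peak; fine for this clean synthetic case."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=dt)[1:]                       # drop zero frequency
    power = np.abs(np.fft.rfft(signal - signal.mean()))[1:] ** 2
    top = np.argsort(power)[::-1][:n_peaks]
    return sorted((1.0 / freqs[top]).tolist(), reverse=True)

dt = 1.0                              # sampling step, ka (assumed)
t = np.arange(0.0, 800.0, dt)        # 0.8 Myr record, matching the estimated duration
sig = sum(np.sin(2.0 * np.pi * t / p) for p in (23.0, 41.0, 100.0, 405.0))
periods = dominant_periods(sig, dt)  # close to 405, 100, 41 and 23 ka
```

Recovered periods land on the nearest frequency bins (e.g. ~400 ka rather than exactly 405 ka for an 800 ka record), which is why cyclostratigraphic studies typically complement the periodogram with wavelet analysis, as the abstract describes.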

Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian

Procedia PDF Downloads 284
48 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied, which is especially challenging in low-strain-rate regions where such data are scarce. Integrating faults in PSHA requires converting geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. 
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that calculates the earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
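The slip-rate-to-activity-rate conversion mentioned above is, at its simplest, a seismic moment balance: the moment accumulated by slip each year is released by earthquakes of a given magnitude. A minimal characteristic-earthquake sketch (rigidity, fault area, slip rate and magnitude below are illustrative, not taken from the French case study):

```python
def seismic_moment(mw):
    """Seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

def characteristic_rate(slip_rate_mm_yr, fault_area_km2, mw, mu=3.0e10):
    """Annual rate of characteristic Mw events that balances the fault's
    moment budget: rate = mu * A * s / M0(Mw)."""
    moment_rate = mu * (fault_area_km2 * 1e6) * (slip_rate_mm_yr * 1e-3)  # N*m/yr
    return moment_rate / seismic_moment(mw)

# Illustrative slow fault: 0.1 mm/yr slip over 600 km^2, releasing Mw 6.5 events
rate = characteristic_rate(0.1, 600.0, 6.5)
recurrence_yr = 1.0 / rate  # roughly 3.5 kyr between events
```

Tools such as SHERIFS generalize this budget idea by spreading the moment rate over many possible single-fault and fault-to-fault ruptures instead of a single characteristic magnitude.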

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 44
47 Evaluating the Teaching and Learning Value of Tablets

Authors: Willem J. A. Louw

Abstract:

The wave of new advanced computing technology developed during the recent past has significantly changed the way we communicate, collaborate and collect information. It has created a new technology environment and paradigm in which our children and students grow up, and this impacts their learning. Research confirms that Generation Y students have a preference for learning in the new technology environment. The challenge, or question, is: how do we adjust our teaching and learning to make the most of these changes? The complexity of effective and efficient teaching and learning must not be underestimated, and changes must be preceded by proper objective research to prevent any haphazard developments that could do more harm than good. A blended learning approach has been used in the Forestry department for a number of years, including the use of electronic peer-assisted learning (e-pal) in a fixed-computer set-up within a learning management system environment. It was decided to extend the investigation and do some exploratory research by using a range of different tablet devices. For this purpose, learning activities or assignments were designed to cover aspects of communication, collaboration and collection of information. The Moodle learning management system was used to present normal module information, to communicate with students, and for feedback and data collection. Student feedback was collected by using an online questionnaire and informal discussions. The research project was implemented in 2013, 2014 and 2015 amongst first- and third-year students doing a three-year technical tertiary forestry qualification in commercial plantation management. In general, more than 80% of the students indicated that the device was very useful in their learning environment, while the rest indicated that the devices were not very useful. 
More than ninety percent of the students acknowledged that they would like to continue using the devices for all of their modules, whilst the rest indicated they functioned efficiently without the devices. Results indicated that information collection (access to resources) was rated the most advantageous factor, followed by communication and collaboration. The main general advantages of using tablets were listed by the students as mobility (portability); 24/7 access to learning material and information of any kind on a user-friendly device in a Wi-Fi environment; fast computing speeds; saving time, effort and airtime through Skype and e-mail; and the use of various applications. Ownership of the device is a critical factor, while risk was identified as a major potential constraint. Significant differences were reported between the different types and quality of tablets. The preferred types are those with a bigger screen and overall better functionality and quality features. Tablets significantly support the collaboration, communication and information collection needs of the students. They do not, however, replace the need for a computer/laptop because of limited storage and computation capacity, small screen size and inefficient typing.

Keywords: tablets, teaching, blended learning, tablet quality

Procedia PDF Downloads 241
46 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka

Authors: W. A. Nalaka Wijesooriya

Abstract:

The increasing paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, the country's major rice milling hubs. Concentration indices reveal that the rice milling industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. According to the actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as colour sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers, whereas in Hambanthota the major channel is millers purchasing directly from paddy farmers. Millers in both districts have their major rice selling markets in Colombo and its suburbs. Huge variation can be observed in the amount of pledge (paddy storage) loans. There is a strong relationship among storage ability, credit affordability and the scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%. 
Analysis of market margins using a series of secondary data shows that the farmers’ share of the rice consumer price is stable or slightly increasing in both districts, with a greater share going to the farmer in Hambanthota. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium-scale millers in the distribution of stored paddy by the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized. Encouraging investment to establish a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured to ensure the sustainability of the industry.
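Concentration indices of the kind referred to above, typically the k-firm concentration ratio (CRk) and the Herfindahl-Hirschman index (HHI), are simple functions of market shares. A sketch with invented shares, not the survey's data:

```python
def concentration_ratio(shares, k=4):
    """CR_k: combined market share of the k largest firms (shares as fractions)."""
    return sum(sorted(shares, reverse=True)[:k])

def herfindahl(shares):
    """HHI on the conventional 0-10000 scale (shares as fractions summing to 1)."""
    return sum((100.0 * s) ** 2 for s in shares)

# Hypothetical district: 4 mid-size millers plus 9 small ones
shares = [0.20, 0.15, 0.10, 0.10] + [0.05] * 9
cr4 = concentration_ratio(shares)   # 0.55
hhi = herfindahl(shares)            # 1050, unconcentrated by the usual cutoffs
```

A high CR4 or HHI computed on purchases rather than sales is what would signal oligopsony power of millers over paddy farmers, the structure examined in the Polonnaruwa case.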

Keywords: conduct, performance, structure (SCP), rice millers

Procedia PDF Downloads 318
45 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population

Authors: Naava Dihi, Jon Anson

Abstract:

Love, so it appears, is not socially blind. We show that partner selection is socially constrained, and the freedom to choose is limited by at least two major factors or capitals: on the one hand, material resources and education, locating the partners on a scale of personal achievement and economic independence. On the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige, as well as by their culture, history and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of the groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in their choice of marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the census in 1983, and who were married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and the 1995 censuses. In both cases, 5 percent of household were randomly chosen, so that our sample includes about 4 percent of women in Israel in 1983. 
We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); and (3) what were the personal characteristics of the women’s partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single; and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as identifying patterns of proximity and distance between the different groups.

Keywords: Israel, nuptiality, ascription, achievement

Procedia PDF Downloads 106
44 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources, estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to substitute for coal. Nonetheless, the geological complexity of China’s shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate to a significant degree the U.S. shale gas boom, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China’s gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through the application of three scenarios projecting domestic shale gas supply by 2020 (optimistic, medium and conservative), taking references from the International Energy Agency’s (IEA’s) projections and China’s shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths; other parameters were collected from provincial “twelfth five-year” plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model, implemented in GAMS and MATLAB, aims to minimize the demand that cannot be met while simultaneously minimizing total gas supply and transmission costs. 
The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results between the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of some of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
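The model above is a multi-objective LP solved in GAMS/MATLAB; its objective (serve as much demand as possible, then at low cost) can be caricatured with a greedy cheapest-arc allocation on a toy network. All node names, capacities and costs below are hypothetical:

```python
def allocate(supplies, demands, arcs):
    """Greedy sketch of gas allocation: fill demand over the cheapest arcs first,
    respecting arc capacities. arcs: (source, sink, capacity, unit_cost) tuples.
    Returns (flow, total_cost, unmet_demand). A real LP can beat this greedy."""
    supplies, demands = dict(supplies), dict(demands)
    flow, cost = {}, 0.0
    for src, dst, cap, unit_cost in sorted(arcs, key=lambda a: a[3]):
        q = min(supplies.get(src, 0.0), demands.get(dst, 0.0), cap)
        if q > 0.0:
            supplies[src] -= q
            demands[dst] -= q
            flow[(src, dst)] = q
            cost += q * unit_cost
    return flow, cost, sum(demands.values())

# Toy network (quantities and cost units invented for illustration)
supplies = {"Sichuan": 20.0, "Chongqing": 10.0}
demands = {"Shanghai": 15.0, "Henan": 12.0}
arcs = [
    ("Sichuan", "Shanghai", 10.0, 3.0),
    ("Sichuan", "Henan", 12.0, 2.0),
    ("Chongqing", "Shanghai", 8.0, 2.5),
]
flow, cost, unmet = allocate(supplies, demands, arcs)
```

Shrinking an arc capacity in this toy (say the Sichuan-Shanghai link) immediately strands supply and leaves demand unmet, which is exactly the bottleneck behaviour the full model diagnoses for the 2020 network.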

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 270
43 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with myriad forms of external devices. This research and intensive development have evolved into various areas, from the medical field to the gaming and entertainment industry and on to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson’s disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system for real-time observing, recording and altering of neural signals will require a significant amount of effort to overcome the obstacles to improving this system without delay in response. To date, the feature size of interface devices and the density of the electrode population remain limitations to achieving seamless performance in BMI. Currently, BMI devices have electrode diameters ranging from 10 to 100 microns. Henceforth, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical system (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and a bottom anti-reflection coating (BARC) etch. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire’s electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts outline a size scale that is larger than nanometer-scale building blocks, further limiting potential advantages. 
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 × 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 428
42 Effect of Preoxidation on the Effectiveness of Gd₂O₃ Nanoparticles Applied as a Source of Active Element in the Crofer 22 APU Coated with a Protective-conducting Spinel Layer

Authors: Łukasz Mazur, Kamil Domaradzki, Maciej Bik, Tomasz Brylewski, Aleksander Gil

Abstract:

Interconnects used in solid oxide fuel and electrolyzer cells (SOFCs/SOECs) serve several important functions, and interconnect materials must therefore exhibit certain properties. Their thermal expansion coefficient needs to match that of the ceramic components of these devices – the electrolyte, anode, and cathode. Interconnects also provide structural rigidity to the entire device, which is why interconnect materials must exhibit sufficient mechanical strength at high temperatures. Gas-tightness is also a prerequisite, since they separate gas reagents, and they must also provide very good electrical contact between neighboring cells over the entire operating time. High-chromium ferritic steels meet these requirements to a high degree but are affected by the formation of a Cr₂O₃ scale, which leads to increased electrical resistance. The final criterion for interconnect materials is chemical inertness in relation to the remaining cell components. In the case of ferritic steels, this has proved difficult due to the formation of volatile and reactive oxyhydroxides observed when Cr₂O₃ is exposed to oxygen and water vapor. This process is particularly harmful on the cathode side in SOFCs and the anode side in SOECs. To mitigate this, protective-conducting ceramic coatings can be deposited on an interconnect's surface. The area-specific resistance (ASR) of a single interconnect cannot exceed 0.1 Ω·cm² at any point during the device's operation. The rate at which the Cr₂O₃ scale grows on ferritic steels can be reduced significantly via the so-called reactive element effect (REE). Research has shown that the deposition of Gd₂O₃ nanoparticles on the surface of Crofer 22 APU already modified with a protective-conducting spinel layer further improves the oxidation resistance of this steel. However, the deposition of the manganese-cobalt spinel layer is a rather complex process and is performed at high temperatures in reducing and oxidizing atmospheres.
There was thus reason to believe that this process may reduce the effectiveness of the Gd₂O₃ nanoparticles added as an active element source. The objective of the present study was therefore to determine any such impact by introducing a preoxidation stage after the nanoparticle deposition and before the steel is coated with the spinel; this should allow the nanoparticles to become incorporated into the scale forming on the steel. Different samples were oxidized for 7000 h in air at 1073 K under quasi-isothermal conditions. The phase composition, chemical composition, and microstructure of the oxidation products formed on the samples were determined using X-ray diffraction, Raman spectroscopy, and scanning electron microscopy combined with energy-dispersive X-ray spectroscopy. A four-point, two-probe DC method was applied to measure the ASR. It was found that coating deposition does indeed reduce the beneficial effect of the Gd₂O₃ addition, since the smallest mass gain and the lowest ASR value were determined for the sample for which the additional preoxidation stage had been performed. It can be assumed that during this stage, gadolinium incorporates into and segregates at grain boundaries in the thin Cr₂O₃ scale that is forming, which allows the Gd₂O₃ nanoparticles to act as a more effective source of the active element.
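The four-point DC measurement mentioned above reduces to simple arithmetic. The sketch below assumes a symmetric sample with the coating on both faces (hence the factor of ½, a convention common in interconnect studies but not stated in the abstract); the voltage and current readings are purely illustrative.

```python
def area_specific_resistance(voltage_V, current_A, contact_area_cm2):
    """ASR from a DC four-point measurement on a symmetric sample:
    half of the resistance-area product, since the current crosses
    two coated interfaces (common convention, assumed here)."""
    resistance_ohm = voltage_V / current_A
    return 0.5 * resistance_ohm * contact_area_cm2  # ohm * cm^2

# Illustrative reading: 1.2 mV across the sample at 100 mA over 0.5 cm^2
asr = area_specific_resistance(1.2e-3, 0.1, 0.5)
print(f"ASR = {asr * 1000:.1f} mOhm*cm^2")  # commonly compared against 0.1 Ohm*cm^2
```

Mass-gain curves and long-term ASR drift, as measured in the study, would then track how this quantity evolves over the 7000 h exposure.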

Keywords: interconnects, oxide nanoparticles, reactive element effect, SOEC, SOFC

Procedia PDF Downloads 72
41 Developing an Online Application for Mental Skills Training and Development

Authors: Arjun Goutham, Chaitanya Sridhar, Sunita Maheshwari, Robin Uthappa, Prasanna Gopinath

Abstract:

In alignment with the growth of the sporting industry, the number of people playing and competing in sports is growing exponentially across the globe. However, the number of sports psychology experts is not growing at a similar rate, especially in the Asian and, more so, the Indian context. Hence, access to actionable mental training solutions specific to individual athletes is limited. The time constraints athletes face due to their intense training schedules also make one-on-one sessions difficult. One means of bridging that gap is technology. Technology makes individualization possible. It allows easy access to specific, qualitative content and provides a medium for placing individualized assessments, analyses, and solutions directly in an athlete's hands. This makes mental training awareness, education, and real-time actionable solutions possible for athletes in spite of the limited number of sports psychology experts in their region. Furthermore, many athletes are hesitant to seek support due to the stigma of appearing weak; such individuals would prefer a more discreet way. Athletes with strong mental performance tend to produce better results. The mobile application helps equip athletes to assess and develop their mental strategies, directed towards improving performance on an ongoing basis. When athletes understand the strengths and limitations of their mental application, they can focus specifically on applying the strategies that work and improving in zones of limitation. With reports, coaches get to understand the unique inner workings of an athlete and can utilize the data and analysis to coach them with better precision, using coaching styles and communication that suit them better.
Systematically capturing data and supporting athletes (with individual-specific solutions) or teams with assessment, planning, instructional content, actionable tools and strategies, reviews of mental performance, and the achievement of objectives and goals facilitates consistent mental skills development at all sporting stages of an athlete's career. The mobile application helps athletes recognize and align with their stable attributes, such as their personality, learning and execution modalities, and the challenges and requirements of their sport, and helps them develop dynamic attributes such as states, beliefs, motivation levels, and focus through practice and training. It provides measurable analysis on a regular basis and helps them stay aligned with their objectives and goals. The solutions are based on researched areas of influence on sporting performance, individually or in teams.

Keywords: athletes, mental training, mobile application, performance, sports

Procedia PDF Downloads 257
40 Genetic Variability and Heritability Among Indigenous Pearl Millet (Pennisetum glaucum L. R. Br.) in Striga-Infested Fields of Sudan Savanna, Nigeria

Authors: Adamu Usman, Grace Stanley Balami

Abstract:

Pearl millet (Pennisetum glaucum L. R. Br.) is a cereal cultivated in arid and semi-arid areas of the world. It supports more than 100 million people around the world. The parasitic weed Striga hermonthica (Del.) Benth. is a major constraint to its production, with estimated yield losses of 10–95% depending on variety, ecology, and cultural practices. The potential of trait selection in pearl millet for grain yield has been reported, and it depends on genotypic variability and heritability among landraces. Variability and heritability among cultivars could offer opportunities for improvement. The study was conducted to determine the genetic variability among cultivars and to estimate broad-sense heritability of grain yield and related traits. F1 breeding populations were generated from 9 parental cultivars, viz. Ex-Gubio, Ex-Monguno, and Ex-Baga as males and PEO 5984, Super-SOSAT, SOSAT-C88, Ex-Borno, and LCIC9702 as females, through Line × Tester mating during the 2017 dry season at Lushi Irrigation Station, Bauchi Metropolis, Bauchi State, Nigeria. The F1 populations and the parents were evaluated during the 2018 cropping season at Bauchi and Maiduguri. Data collected were subjected to analysis of variance. Results showed significant differences among cultivars and among traits, indicating variability. Number of plants at emergence, days to 50% flowering, days to 100% flowering, plant height, panicle length, number of plants at harvest, Striga count at 90 days after sowing, panicle weight, and grain yield were significantly different. Significant variability offers an opportunity for improvement, as superior individuals can be isolated. Genotypic variance estimates of traits were largely greater than environmental variances, except for plant height and 1000-seed weight. Environmental variances were low and in some cases negligible. The phenotypic variances of all traits were higher than the genotypic variances.
Similarly, the phenotypic coefficient of variation (PCV) was higher than the genotypic coefficient of variation (GCV). High heritability was found for days to 50% flowering (90.27%), Striga count at 90 days after sowing (90.07%), number of plants at harvest (87.97%), days to 100% flowering (83.89%), number of plants at emergence (82.19%), and plant height (73.18%). High heritability estimates could be due to the presence of additive gene effects. The results revealed wide variability among genotypes and traits. Traits with high heritability could easily respond to selection. High GCV, PCV, and heritability estimates indicate that selection for these traits is possible and could be effective.
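For reference, the broad-sense heritability and the GCV/PCV statistics reported above are derived from variance components in a standard way. The short sketch below shows the arithmetic, with purely illustrative variance values that are not the study's data.

```python
import math

def heritability_stats(var_g, var_e, trait_mean):
    """Broad-sense heritability and coefficients of variation from
    genotypic (var_g) and environmental (var_e) variance components,
    as commonly estimated from ANOVA mean squares."""
    var_p = var_g + var_e                       # phenotypic variance
    h2 = var_g / var_p                          # broad-sense heritability
    gcv = math.sqrt(var_g) / trait_mean * 100   # genotypic CV (%)
    pcv = math.sqrt(var_p) / trait_mean * 100   # phenotypic CV (%)
    return h2, gcv, pcv

# Illustrative values only (not the study's data):
h2, gcv, pcv = heritability_stats(var_g=36.0, var_e=4.0, trait_mean=50.0)
print(f"H2 = {h2:.2%}, GCV = {gcv:.1f}%, PCV = {pcv:.1f}%")
```

Because var_p = var_g + var_e, the PCV always exceeds the GCV whenever the environmental variance is nonzero, which matches the pattern reported in the abstract.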

Keywords: variability, heritability, phenotypic, genotypic, striga

Procedia PDF Downloads 39
39 Status and Management of Grape Stem Borer, Celosterna scrabrator, with Soil Application of Chlorantraniliprole 0.4 GR

Authors: D. N. Kambrekar, S. B. Jagginavar, J. Aruna

Abstract:

Grape stem borer, Celosterna scrabrator, is an important production constraint in grapes in India. Hitherto this pest was a severe menace only in aged and unmanaged vineyards, but in the recent past it has also started damaging newly established ones. In India, since Karnataka, Andhra Pradesh, Tamil Nadu, and Maharashtra are the major grape-producing states, the incidence of the stem borer is also concentrated and severe in these states. The grubs of the beetle bore into the main stem and even the branches, which affects the translocation of nutrients to the aerial parts of the plant. Since the grubs bore inside the stem, the chewed material along with excreta is discharged through the holes, and the frass is found on the ground just below the bore holes. The portion of the vine above the damaged part has a sticky appearance. The leaves become pale yellow, resembling a micronutrient deficiency, and ultimately dry and drop. The status of grape stem borer incidence in different grape-growing districts of Northern Karnataka was surveyed over three years, with five locations surveyed in each taluka. Further, the experiment on management of the stem borer was carried out in grape gardens of Vijayapur district, in farmers' fields, over three years. Stem-borer-infested plants showing live holes were selected per treatment, and each treatment was replicated three times. Live and dead holes observed during pre-treatment were closely monitored, and only plants with live holes were selected and tagged. Different doses of chlorantraniliprole 0.4% GR were incorporated into the soil around the vine basins near the root zone surrounding the trunk, by removing soil to a depth of 5–10 cm at a peripheral distance of 1 to 1.5 feet from the main trunk, where the feeder roots are present. Irrigation followed the application of the insecticide for proper incorporation of the test chemical.
The results indicated severe to moderate incidence of the stem borer in all the grape-growing districts of northern Karnataka. The maximum incidence was recorded in Belagavi (11 holes per vine) and the minimum in Gadag district (8.5 holes per vine). The investigations on the efficacy of chlorantraniliprole against grape stem borer over three successive years in farmers' fields indicated that chlorantraniliprole at 15 g/vine, applied just near the active root zone of the plant and followed by irrigation, successfully managed the pest. The insecticide translocated to all parts of the plant and thereby stopped the activity of the pest, which resulted in better plant growth and a higher berry yield compared to the other treatments under investigation. Thus, chlorantraniliprole 0.4 GR at 15 g/vine can be an effective means of managing the stem borer.

Keywords: chlorantraniliprole, grape stem borer, Celosterna scrabrator, management

Procedia PDF Downloads 432
38 A Study of The Factors Predicting Radiation Exposure to Contacts of Saudi Patients Treated With Low-Dose Radioactive Iodine (I-131)

Authors: Khalid A. Salman, Shereen Wagih, Tariq Munshi, Musaed Almalki, Safwan Zatari, Zahid Khan

Abstract:

Aim: To measure exposure levels of family members and caregivers of Saudi patients treated with low-dose I-131 therapy, as well as household radiation exposure rates, in order to identify the factors that can affect radiation exposure. Patients and methods: All adult, self-dependent patients with hyperthyroidism or thyroid cancer referred for low-dose radioactive I-131 therapy on an outpatient basis were included. Radiation protection procedures were explained in detail to each participant and their family members. TLDs were dispensed to each participant in sufficient quantity for his/her family members living in the household. TLDs were collected on the fifth day post-dispensing from patients who agreed to a home visit, during which the household was inspected and the level of radioactive contamination of surfaces was measured. Results: Thirty-two patients were enrolled in the current study, with a mean age of 43.1 ± 17.1 years. Out of them, 25 patients (78%) are females. I-131 therapy was given to twenty patients (63%) for thyroid cancer and to the remaining twelve patients (37%) for toxic goiter, with an overall mean I-131 dose of 24.1 ± 7.5 mCi that is relatively higher in the former group. The overall number of household family members and helpers of the patients is 139; 77 of them are females (55.4%) and 62 are males (44.6%), with a mean age of 29.8 ± 17.6 years. The mean period of contact with the patient is 7.6 ± 5.6 hours. The cumulative radiation exposure shows that the exposure of all family members is below the exposure constraint (1 mSv), with a range of 109 to 503 µSv and a mean value of 220.9 ± 91 µSv. The numerical data show a slightly higher exposure rate for family members of patients who received a higher dose of I-131 (patients with thyroid cancer) and for household members who spent a longer time with the patient; yet the difference is statistically insignificant (P > 0.05).
Besides, no significant correlation was found between the family members' cumulative exposure and their gender, age, socioeconomic standard, educational level, or residential factors. In the 21 home visits, all readings from bedrooms, reception areas, and kitchens were below hazardous limits (0.5 µSv/h), apart from bathrooms, which gave a slightly higher reading of 0.57 ± 0.39 µSv/h in the homes of the thyroid cancer patients, who received the higher radiation dose. A statistically significant difference was found between the radiation exposure rate in bathrooms used by the patient and in those used by family members only, with mean exposure rates of 0.701 ± 0.21 µSv/h and 0.17 ± 0.82 µSv/h, respectively (p = 0.018, < 0.05). Conclusion: Family members of patients treated with low-dose I-131 on an outpatient basis show good compliance with radiation protection instructions if these are given properly, with a cumulative radiation exposure well below the radiation exposure constraint of 1 mSv. The given I-131 dose, hours spent with the patient, age, gender, socioeconomic standard, educational level, and residential factors have no significant correlation with the cumulative radiation exposure. The patient's bathroom exhibits a higher radiation exposure rate, requiring stricter instructions on patient bathroom use and health hygiene.
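As context for the cumulative figures above, the external dose a contact receives is often estimated by integrating an exponentially decaying dose rate over the daily contact time. The sketch below is a generic textbook-style estimate, not the study's method; the initial dose rate and the effective half-life are assumed, illustrative values.

```python
import math

def cumulative_dose_uSv(initial_rate_uSv_h, effective_half_life_h,
                        hours_per_day, days):
    """Integrate an exponentially decaying dose rate over the daily
    contact time (generic estimate; assumes constant distance)."""
    lam = math.log(2) / effective_half_life_h   # decay constant, 1/h
    frac = hours_per_day / 24.0                 # fraction of each day in contact
    total_h = days * 24.0
    # integral of R0 * exp(-lam*t) dt from 0 to total_h, scaled by contact fraction
    return initial_rate_uSv_h * frac * (1.0 - math.exp(-lam * total_h)) / lam

# Illustrative numbers only: 5 uSv/h near the patient, 7.6 h/day contact
# (the study's mean), and an assumed 120 h effective half-life after therapy
dose = cumulative_dose_uSv(5.0, 120.0, 7.6, 30)
print(f"estimated cumulative dose ~ {dose:.0f} uSv")
```

With these assumed inputs the estimate lands in the same few-hundred-µSv range as the measured mean of 220.9 µSv, i.e. comfortably below the 1 mSv constraint.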

Keywords: family members, radiation exposure, radioactive iodine therapy, radiation safety

Procedia PDF Downloads 264
37 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been identified in the literature mainly from an operational standpoint, in addition to monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach that translates these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the Royal Institute of British Architects (RIBA) Plan of Work 2020, widely used in the UK, is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is necessary. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of larger research aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline.
Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research reviews the relevant literature, while acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 103
36 Balanced Scorecard as a Tool to Improve NAAC Accreditation – A Case Study in Indian Higher Education

Authors: CA Kishore S. Peshori

Abstract:

Introduction: India, a country of vast diversity and a huge population, is set to have the largest young population in the world by 2020. Higher education has been, and will always be, a basic requirement for turning a developing nation into a developed one. To improve any system, it needs to be benchmarked, and various tools exist for benchmarking systems. Education in India is delivered by universities, which are mainly funded by the government. These universities, in turn, set up colleges for delivering the education, which are again funded mainly by the government. Recently, however, autonomy has also been given to universities and colleges; moreover, foreign universities are waiting to enter Indian boundaries. With a large number of universities and colleges, it has become more and more necessary to assess these institutions for benchmarking. In India, college assessment has been made compulsory by the UGC, and NAAC accreditation has been officially recognised as the assessment framework. The NAAC assessment is based on seven criteria, namely: 1. Curricular aspects, 2. Teaching, learning, and evaluation, 3. Research, consultancy, and extension, 4. Infrastructure and learning resources, 5. Student support and progression, 6. Governance, leadership, and management, 7. Innovation and best practices. The NAAC tries to benchmark the institution on the identification, sustainability, dissemination, and adaptation of best practices. It grades the institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to achieve the best grades but have not come across a systematic tool for achieving these results.
The Balanced Scorecard (BSC) developed by Kaplan has been a successful tool for corporates to develop best practices so as to improve their financial performance and also to retain and grow their customer base, taking the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper develops a prototype for a college based on secondary data. Once the prototype is developed, the researcher will use a questionnaire to test this tool for successful implementation. The success of this research will depend on the implementation of the BSC at an institute and the improvement of its grade due to that implementation. The limitation of time is a major constraint in this research, as the NAAC cycle takes a minimum of 4 years for accreditation and reaccreditation; the methodology will therefore limit itself to secondary data and a questionnaire to be circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception; the BSC only has to be realigned to suit the NAAC criteria. Once this prototype is developed, its success can be tested only through implementation, but this research paper is the first step towards developing the tool and will also initiate that success by developing a questionnaire and evaluating the responses before moving to the next level of actual implementation.

Keywords: balanced scorecard, benchmarking, NAAC, UGC

Procedia PDF Downloads 262
35 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing

Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang

Abstract:

The flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flying behavior of small birds or insects. Compared to traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the magnitude and direction of lift to control the flight attitude. Therefore, its transmission system should be designed to be very compact. Lightweight design can effectively extend its endurance time, while engineering experience alone can hardly meet the FMAV's requirements for structural strength and mass simultaneously. Current research still lacks guidance on considering the nonlinear factors of 3D-printing materials when carrying out topology optimization, especially for the tiny FMAV transmission system. The coupling of nonlinear material properties and nonlinear contact behaviors in the FMAV transmission system poses a great challenge to the reliability of topology optimization results. In this paper, topology optimization design based on the FEA solver package Altair OptiStruct was carried out for the transmission system of an FMAV manufactured by 3D printing. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the FMAV structure was evaluated and confirmed through tensile tests. Secondly, a numerical model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. Then, a topology optimization modeling method considering nonlinear factors was presented, and the optimization results were verified by dynamic simulation and experiments. Finally, detailed discussions of different load states and constraints were carried out to explore the leading factors affecting the optimization results. The contributions of this article, which may help guide the lightweight design of FMAVs, are summarized as follows. First, a dynamic simulation modeling method used to obtain the load states is presented.
Second, a verification method for optimized results considering nonlinear factors is introduced. Third, an appropriately chosen load state can achieve a better weight-reduction effect and improve computational efficiency compared with taking multiple states into account. Fourth, the chosen formulation improves the ability to resist bending deformation. Fifth, a displacement constraint helps to improve the structural stiffness of the optimized result. The results and engineering guidance in this paper may shed light on structural optimization and lightweight design for future advanced FMAVs.

Keywords: flapping-wing micro aerial vehicle, 3d printing, topology optimization, finite element analysis, experiment

Procedia PDF Downloads 160
34 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods in the scientific literature tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e., “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal.
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among which we mention the well-known Australian and German credit data sets, and the performance indicators are: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and the Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
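The evolutionary loop over expression trees described above can be sketched compactly. The abstract's implementation uses C# LINQ expression trees with crossover on a pre-order-flattened tree; the sketch below is only a simplified Python analogue that uses subtree mutation instead of crossover, and its operator set, variable names, population size, and toy fitness target are illustrative assumptions rather than the authors' settings.

```python
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, 'abs': abs}
VARS = ['age', 'loan_duration']  # stand-ins for applicant properties

def random_tree(depth, rng):
    """Grow a random expression tree: tuples are operator nodes,
    strings are variables, floats are constants."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(VARS) if rng.random() < 0.7 else round(rng.uniform(-2, 2), 2)
    op = rng.choice(sorted(OPS))
    arity = 1 if op == 'abs' else 2
    return (op, *(random_tree(depth - 1, rng) for _ in range(arity)))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    if isinstance(tree, (int, float)):
        return tree
    op, *args = tree
    return OPS[op](*(evaluate(a, env) for a in args))

def mutate(tree, rng, p=0.2):
    """Replace random subtrees with freshly grown ones."""
    if rng.random() < p or not isinstance(tree, tuple):
        return random_tree(2, rng)
    op, *args = tree
    return (op, *(mutate(a, rng, p) for a in args))

# Toy target: pretend the 'true' score formula is 2*age - loan_duration
data = [({'age': a, 'loan_duration': d}, 2 * a - d)
        for a in range(1, 5) for d in range(1, 4)]

def fitness(tree):  # lower is better (sum of squared errors)
    return sum((evaluate(tree, env) - y) ** 2 for env, y in data)

rng = random.Random(0)
pop = [random_tree(3, rng) for _ in range(60)]
for _ in range(30):
    pop.sort(key=fitness)
    parents = pop[:20]  # truncation selection
    pop = parents + [mutate(rng.choice(parents), rng) for _ in range(40)]
best = min(pop, key=fitness)
print('best formula:', best, '  fitness:', fitness(best))
```

Because variables unused by the best tree simply never appear as leaves, reading off the surviving leaf set gives the dimensionality-reduction effect the abstract mentions.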

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 105